question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---
64,233,121 | 2020-10-6 | https://stackoverflow.com/questions/64233121/why-doesnt-pytest-load-conftest-py-when-running-only-a-subset-of-tests | Here is my API test directory layout: api_tests βββ conftest.py βββ query βββ me_test.py Contents of conftest.py: print("CONFTEST LOADED") Contents of me_test.py: """Tests the "me" query""" def test_me(): assert True If I simply run pytest, everything works: ================================================= test session starts ================================================= platform linux -- Python 3.8.5, pytest-6.1.0, py-1.9.0, pluggy-0.13.1 rootdir: /home/hubro/myproject, configfile: pytest.ini collecting ... CONFTEST LOADED collected 3 items api_tests/query/me_test.py . [ 33%] lib/myproject/utils_test.py . [ 66%] lib/myproject/schema/types/scalars_test.py . Notice "CONFTEST LOADED" is printed. Great! However, this test run also picked up all my unit tests, which I don't want. I want to separate my test runs into unit tests and API tests, I don't want to run them all in one go. However, if I simply run pytest api_tests/: ================================================= test session starts ================================================= platform linux -- Python 3.8.5, pytest-6.1.0, py-1.9.0, pluggy-0.13.1 rootdir: /home/hubro/myproject, configfile: pytest.ini collected 1 item api_tests/query/me_test.py . [100%] ================================================== 1 passed in 0.00s ================================================== Now the right tests are run, but the conftest.py file wasn't loaded... How come? I am using Pytest 6.1.0 on Python 3.8. EDIT: Alright, I found an acceptable workaround. I can override INI file options through the command line with the -o option. This works: poetry run pytest -o "testpaths=api_tests" However, I would very much like an answer to the original question so I'm not going to delete it. | The conftest plugin will be registered in both invocations, the only difference being the registration stage. If in doubt, add the --traceconfig argument to list the registered plugins in order of their registration: $ pytest --traceconfig PLUGIN registered: <_pytest.config.PytestPluginManager object at 0x7f23033ff100> PLUGIN registered: <_pytest.config.Config object at 0x7f2302d184c0> ... =================================== test session starts =================================== ... PLUGIN registered: <module 'conftest' from 'path/to/conftest.py'> ... In the first invocation, the conftest.py won't be found immediately since it's down the test root path, so it will be loaded while pytest discovers the tests. In the second invocation, conftest.py is located in test root, so it will be loaded even before the test session starts (after the plugins passed via -p arg and registered via setuptools entrypoint are loaded). Running pytest -s (with output capturing disabled) should reveal the custom print, located above the ==== test session starts ==== line. If you want the print to be identical between the two invocations, put it in a suitable hook. For example, to always print CONFTEST loaded after test collection finished, use: # api_tests/conftest.py def pytest_collectreport(report): print("CONFTEST loaded") There are other options available for custom output placement; best is to check out the list of available hooks under Hooks in pytest reference. | 10 | 8 |
64,234,214 | 2020-10-6 | https://stackoverflow.com/questions/64234214/how-to-generate-a-blob-signed-url-in-google-cloud-run | Under Google Cloud Run, you can select which service account your container is running. Using the default compute service account fails to generate a signed url. The work around listed here works on Google Cloud Compute -- if you allow all the scopes for the service account. There does not seem to be away to do that in Cloud Run (not that I can find). https://github.com/googleapis/google-auth-library-python/issues/50 Things I have tried: Assigned the service account the role: roles/iam.serviceAccountTokenCreator Verified the workaround in the same GCP project in a Virtual Machine (vs Cloud Run) Verified the code works locally in the container with the service account loaded from private key (via json file). from google.cloud import storage client = storage.Client() bucket = client.get_bucket('EXAMPLE_BUCKET') blob = bucket.get_blob('libraries/image_1.png') expires = datetime.now() + timedelta(seconds=86400) blob.generate_signed_url(expiration=expires) Fails with: you need a private key to sign credentials.the credentials you are currently using <class 'google.auth.compute_engine.credentials.Credentials'> just contains a token. see https://googleapis.dev/python/google-api-core/latest/auth.html#setting-up-a-service-account for more details. /usr/local/lib/python3.8/site-packages/google/cloud/storage/_signing.py, line 51, in ensure_signed_credentials Trying to add the workaround, Error calling the IAM signBytes API: { "error": { "code": 400, "message": "Request contains an invalid argument.", "status": "INVALID_ARGUMENT" } } Exception Location: /usr/local/lib/python3.8/site-packages/google/auth/iam.py, line 81, in _make_signing_request Workaround code as mention in Github issue: from google.cloud import storage from google.auth.transport import requests from google.auth import compute_engine from datetime import datetime, timedelta def get_signing_creds(credentials): auth_request = requests.Request() print(credentials.service_account_email) signing_credentials = compute_engine.IDTokenCredentials(auth_request, "", service_account_email=credentials.ser vice_account_email) return signing_credentials client = storage.Client() bucket = client.get_bucket('EXAMPLE_BUCKET') blob = bucket.get_blob('libraries/image_1.png') expires = datetime.now() + timedelta(seconds=86400) signing_creds = get_signing_creds(client._credentials) url = blob.generate_signed_url(expiration=expires, credentials=signing_creds) print(url) How do I generate a signed url under Google Cloud Run? At this point, it seems like I may have to mount the service account key which I wanted to avoid. EDIT: To try and clarify, the service account has the correct permissions - it works in GCE and locally with the JSON private key. | Yes you can, but I had to deep dive to find how (jump to the end if you don't care about the details) If you go in the _signing.py file, line 623, you can see this if access_token and service_account_email: signature = _sign_message(string_to_sign, access_token, service_account_email) ... If you provide the access_token and the service_account_email, you can use the _sign_message method. This method uses the IAM service SignBlob API at this line It's important because you can now sign blob without having locally the private key!! 
So, that solves the problem, and the following code works on Cloud Run (and I'm sure on Cloud Function) def sign_url(): from google.cloud import storage from datetime import datetime, timedelta import google.auth credentials, project_id = google.auth.default() # Perform a refresh request to get the access token of the current credentials (Else, it's None) from google.auth.transport import requests r = requests.Request() credentials.refresh(r) client = storage.Client() bucket = client.get_bucket('EXAMPLE_BUCKET') blob = bucket.get_blob('libraries/image_1.png') expires = datetime.now() + timedelta(seconds=86400) # In case of user credential use, define manually the service account to use (for development purpose only) service_account_email = "YOUR DEV SERVICE ACCOUNT" # If you use a service account credential, you can use the embedded email if hasattr(credentials, "service_account_email"): service_account_email = credentials.service_account_email url = blob.generate_signed_url(expiration=expires,service_account_email=service_account_email, access_token=credentials.token) return url, 200 Let me know if it's not clear | 26 | 27 |
64,227,384 | 2020-10-6 | https://stackoverflow.com/questions/64227384/how-can-i-know-whether-a-tensorflow-tensor-is-in-cuda-or-cpu | How can I know whether tensorflow tensor is in cuda or cpu? Take this very simple example: import tensorflow as tf tf.debugging.set_log_device_placement(True) # Place tensors on the CPU with tf.device('/device:GPU:0'): a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]) b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]) # print tensor a print(a) # Run on the GPU c = tf.matmul(a, b) print(c) The code runs fine. Here, I am physically placing tensor 'a' and 'b' on the GPU. While printing 'a', I get: tf.Tensor( [[1. 2. 3.] [4. 5. 6.]], shape=(2, 3), dtype=float32) It does not give any info whether 'a' in CPU or GPU. Now, suppose that there is an intermediate tensor like tensor 'c' which gets created during some operation. How can I know that tensor 'c' is a CPU or a GPU tensor? Also, suppose the tensor is placed on GPU. How can I move it to CPU? | As of Tensorflow 2.3 you can use .device property of a Tensor: import tensorflow as tf a = tf.constant([1, 2, 3]) print(a.device) # /job:localhost/replica:0/task:0/device:CPU:0 More detailed explanation can be found here | 8 | 7 |
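The answer above covers how to read a tensor's placement but not the follow-up question of moving a tensor back to the CPU. Below is a minimal sketch of one common way to do that in eager TensorFlow 2.x; the device strings in the comments are illustrative and depend on the machine.

```python
import tensorflow as tf

a = tf.constant([1, 2, 3])
print(a.device)  # e.g. /job:localhost/replica:0/task:0/device:GPU:0 when a GPU is used

# Copy the tensor into host memory by recreating it under a CPU device scope
with tf.device('/CPU:0'):
    a_cpu = tf.identity(a)
print(a_cpu.device)  # .../device:CPU:0

# Alternatively, a.numpy() pulls the values into a NumPy array on the host
```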
64,234,757 | 2020-10-6 | https://stackoverflow.com/questions/64234757/plotly-express-heatmap-cell-size | How can i change the size of cells in a plotly express heatmap? I would need bigger cells import plotly.express as px fig1 = px.imshow(df[col],color_continuous_scale='Greens') fig1.layout.yaxis.type = 'category' fig1.layout.xaxis.type = 'category' fig1.layout.yaxis.tickmode = 'linear' fig1.layout.xaxis.tickmode = 'linear' fig1.layout.xaxis.tickangle = 65 fig1.layout.autosize = True fig1.layout.height = 500 fig1.layout.width = 500 fig1.show() Result (very narrow) | 'px' may not make it square due to the color bar, so why not use 'go'? import plotly.graph_objects as go fig = go.Figure(data=go.Heatmap( z=[[1, 20, 30], [20, 1, 60], [30, 60, 1]])) fig.show() Set the graph size. fig.layout.height = 500 fig.layout.width = 500 Examples at px import plotly.express as px data=[[1, 25, 30, 50, 1], [20, 1, 60, 80, 30], [30, 60, 1, 5, 20]] fig = px.imshow(data, labels=dict(x="Day of Week", y="Time of Day", color="Productivity"), x=['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday'], y=['Morning', 'Afternoon', 'Evening'] ) fig.update_xaxes(side="top") fig.layout.height = 500 fig.layout.width = 500 fig.show() | 9 | 7 |
64,229,717 | 2020-10-6 | https://stackoverflow.com/questions/64229717/what-is-the-idea-behind-using-nn-identity-for-residual-learning | So, I've read about half the original ResNet paper, and am trying to figure out how to make my version for tabular data. I've read a few blog posts on how it works in PyTorch, and I see heavy use of nn.Identity(). Now, the paper also frequently uses the term identity mapping. However, it just refers to adding the input for a stack of layers the output of that same stack in an element-wise fashion. If the in and out dimensions are different, then the paper talks about padding the input with zeros or using a matrix W_s to project the input to a different dimension. Here is an abstraction of a residual block I found in a blog post: class ResidualBlock(nn.Module): def __init__(self, in_channels, out_channels, activation='relu'): super().__init__() self.in_channels, self.out_channels, self.activation = in_channels, out_channels, activation self.blocks = nn.Identity() self.shortcut = nn.Identity() def forward(self, x): residual = x if self.should_apply_shortcut: residual = self.shortcut(x) x = self.blocks(x) x += residual return x @property def should_apply_shortcut(self): return self.in_channels != self.out_channels block1 = ResidualBlock(4, 4) And my own application to a dummy tensor: x = tensor([1, 1, 2, 2]) block1 = ResidualBlock(4, 4) block2 = ResidualBlock(4, 6) x = block1(x) print(x) x = block2(x) print(x) >>> tensor([2, 2, 4, 4]) >>> tensor([4, 4, 8, 8]) So at the end of it, x = nn.Identity(x) and I'm not sure the point of its use except to mimic math lingo found in the original paper. I'm sure that's not the case though, and that it has some hidden use that I'm just not seeing yet. What could it be? EDIT Here is another example of implementing residual learning, this time in Keras. It does just what I suggested above and just keeps a copy of the input for adding to the output: def residual_block(x: Tensor, downsample: bool, filters: int, kernel_size: int = 3) -> Tensor: y = Conv2D(kernel_size=kernel_size, strides= (1 if not downsample else 2), filters=filters, padding="same")(x) y = relu_bn(y) y = Conv2D(kernel_size=kernel_size, strides=1, filters=filters, padding="same")(y) if downsample: x = Conv2D(kernel_size=1, strides=2, filters=filters, padding="same")(x) out = Add()([x, y]) out = relu_bn(out) return out | What is the idea behind using nn.Identity for residual learning? There is none (almost, see the end of the post), all nn.Identity does is forwarding the input given to it (basically no-op). As shown in PyTorch repo issue you linked in comment this idea was first rejected, later merged into PyTorch, due to other use (see the rationale in this PR). This rationale is not connected to ResNet block itself, see end of the answer. ResNet implementation Easiest generic version I can think of with projection would be something along those lines: class Residual(torch.nn.Module): def __init__(self, module: torch.nn.Module, projection: torch.nn.Module = None): super().__init__() self.module = module self.projection = projection def forward(self, inputs): output = self.module(inputs) if self.projection is not None: inputs = self.projection(inputs) return output + inputs You can pass as module things like two stacked convolutions and add 1x1 convolution (with padding or with strides or something) as projection module. 
For tabular data you could use this as module (assuming your input has 50 features): torch.nn.Sequential( torch.nn.Linear(50, 50), torch.nn.ReLU(), torch.nn.Linear(50, 50), torch.nn.ReLU(), torch.nn.Linear(50, 50), ) Basically, all you have to do is is add input to some module to it's output and that is it. Rationale behing nn.Identity It might be easier to construct neural networks (and read them afterwards), example for batch norm (taken from aforementioned PR): batch_norm = nn.BatchNorm2d if dont_use_batch_norm: batch_norm = Identity Now you can use it with nn.Sequential easily: nn.Sequential( ... batch_norm(N, momentum=0.05), ... ) And when printing the network it always has the same number of submodules (with either BatchNorm or Identity) which also makes the whole thing a little smoother IMO. Another use case, mentioned here might be removing parts of existing neural networks: net = tv.models.alexnet(pretrained=True) # Assume net has two parts # features and classifier net.classifier = Identity() Now, instead of running net.features(input) you can run net(input) which might be easier for others to read as well. | 12 | 23 |
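As a quick usage check of the wrapper and the tabular module sketched in the answer above, here is a self-contained example; the feature width of 50 and the dummy batch are assumptions made only for illustration.

```python
import torch

class Residual(torch.nn.Module):
    """Same wrapper as in the answer above."""
    def __init__(self, module, projection=None):
        super().__init__()
        self.module = module
        self.projection = projection

    def forward(self, inputs):
        output = self.module(inputs)
        if self.projection is not None:
            inputs = self.projection(inputs)
        return output + inputs

block = Residual(torch.nn.Sequential(
    torch.nn.Linear(50, 50),
    torch.nn.ReLU(),
    torch.nn.Linear(50, 50),
))  # no projection needed: input and output widths match

x = torch.randn(8, 50)   # dummy batch: 8 samples, 50 features
print(block(x).shape)    # torch.Size([8, 50])
```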
64,209,855 | 2020-10-5 | https://stackoverflow.com/questions/64209855/cannot-see-pyc-files-in-pycharm | I am using PyCharm for a project and I need to access pyc files. Unfortunately, IDE seems not to show *.pyc files nor __pycache__, even in searches with double Shift+Shift. I cannot find settings or documentation about it. Do you have any idea on how can I show these files and folders in IDE? | The setting you are looking for is Settings... | Editor | File Types | Ignore Files and Folders. Remove *.pyc and __pycache__ from the list to see the files. | 13 | 15 |
64,222,815 | 2020-10-6 | https://stackoverflow.com/questions/64222815/flask-wtforms-validation-inputrequired-for-at-least-one-field | Is there a way to implement a validation in WTFforms that enforces the fact that at least one of the fields is required? For example, I have two StringFields and I want to make sure the user writes something in at least one of the fields before clicking on "submit". field1 = StringField('Field 1', validators=[???]) field2 = StringField('Field 2', validators=[???]) What should I write in place of the ???? InputRequired() in this case wouldn't do the job as I need to assign it to one of the fields or to both. How can I do that? | Override your form's validate method. For example: class CustomForm(FlaskForm): field1 = StringField('Field 1') field2 = StringField('Field 2') def validate(self, extra_validators=None): if super().validate(extra_validators): # your logic here e.g. if not (self.field1.data or self.field2.data): self.field1.errors.append('At least one field must have a value') return False else: return True return False Note, you can still add individual validators to the fields e.g. input length. | 8 | 7 |
64,215,540 | 2020-10-5 | https://stackoverflow.com/questions/64215540/how-to-do-model-solve-not-show-any-message-in-python-using-pulp | I'm doing a implementation using pulp in python in a code that runtime is very important. #Initialize model model = LpProblem('eUCB_Model', sense=LpMaximize) #Define decision variables y = LpVariable.dicts('tenant', [(i) for i in range(size)], lowBound=None, upBound=None, cat='Binary') #Define model model += lpSum([y[i]*th_hat[t][i] for i in range(size)]) #Define Constraints model += lpSum([y[i]*R[t][i] for i in range(size)]) <= C #solving the model model.solve() my problem is, every time that I call to solve the model using model.solve() the method print a lot of informations in the terminal like this: Welcome to the CBC MILP Solver Version: 2.9.0 Build Date: Feb 12 2015 command line - /Users/henriquelima/opt/anaconda3/lib/python3.7/site-packages/pulp/apis/../solverdir/cbc/osx/64/cbc /var/folders/_6/r2j2fp7n5mxd5_1w2sbs8rvw0000gn/T/514d0624e4d645ae8582e6fa5203bc54-pulp.mps max ratio None allow None threads None presolve on strong None gomory on knapsack on probing on branch printingOptions all solution /var/folders/_6/r2j2fp7n5mxd5_1w2sbs8rvw0000gn/T/514d0624e4d645ae8582e6fa5203bc54-pulp.sol (default strategy 1) At line 2 NAME MODEL At line 3 ROWS At line 6 COLUMNS At line 19 RHS At line 21 BOUNDS At line 25 ENDATA Problem MODEL has 1 rows, 3 columns and 3 elements Coin0008I MODEL read with 0 errors String of None is illegal for double parameter ratioGap value remains 0 String of None is illegal for double parameter allowableGap value remains 0 String of None is illegal for integer parameter threads value remains 0 String of None is illegal for integer parameter strongBranching value remains 5 Option for gomoryCuts changed from ifmove to on Option for knapsackCuts changed from ifmove to on Continuous objective value is 2.03584 - 0.00 seconds Cgl0004I processed model has 0 rows, 0 columns (0 integer (0 of which binary)) and 0 elements Cbc3007W No integer variables - nothing to do Cuts at root node changed objective from -2.03584 to -1.79769e+308 Probing was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds) Gomory was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds) Knapsack was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds) Clique was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds) MixedIntegerRounding2 was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds) FlowCover was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds) TwoMirCuts was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds) I want to run this method with no one type of information printed in my terminal, this can reduce considerable the time of execution of the program. Do you know how to do this? Thank you everyone. | I don't think showing the log is the performance issue. Anyways, as the documentation shows: https://coin-or.github.io/pulp/technical/solvers.html#pulp.apis.PULP_CBC_CMD You can just pass msg=False as argument by doing: model.solve(PULP_CBC_CMD(msg=False)) | 7 | 8 |
64,220,438 | 2020-10-6 | https://stackoverflow.com/questions/64220438/groupby-based-on-a-multiple-logical-conditions-applied-to-a-different-columns-da | I have this dataframe: df = pd.DataFrame({'value':[1,2,3,4,2,42,12,21,21,424,34,12,42], 'type':['big','small','medium','big','big','big','big','medium','small','small','small','medium','small'], 'entity':['R','R','R','P','R','P','P','P','R','R','P','R','R']}) value type entity 0 1 big R 1 2 small R 2 3 medium R 3 4 big P 4 2 big R 5 42 big P 6 12 big P 7 21 medium P 8 21 small R 9 424 small R 10 34 small P 11 12 medium R 12 42 small R The operation consists of grouping by column 'entity' doing a count operation based on a two logical conditions applied to a column 'value' and column 'type'. In my case, I have to count the values greater than 3 in the column 'name' and are not equal to 'medium' in the column 'type'. The result must be R=3 and P=4. After this, I must add the result to the original dataframe creating a new column named βCountβ. I know this operation can be done in R with the next code: df[y!='medium' & value>3 , new_var:=.N,by=entity] df[is.na(new_var),new_var:=0,] df[,new_var:=max(new_var),by=entity] In a previous task, I had to calculate only the values greater than 3 as condition. In that case, the result was R=3 and P=4 and I got it applying the next code: In []: df.groupby(['entity'])['value'].apply(lambda x: (x>3).sum()) Out[]: entity P 5 R 4 Name: value, dtype: int64 In []: DF=pd.DataFrame(DF) In []: DF.reset_index(inplace=True) In []: df.merge(DF,on=['entity'],how='inner') In []: df=df.rename(columns={'value_x':'value','value_y':'count'},inplace=True) Out[]: value type entity count 0 1 big R 4 1 2 small R 4 2 3 medium R 4 3 2 big R 4 4 21 small R 4 5 424 small R 4 6 12 medium R 4 7 42 small R 4 8 4 big P 5 9 42 big P 5 10 12 big P 5 11 21 medium P 5 12 34 small P 5 My questions are: How do I do it for the two conditions case? In fact, How do I do it for a general case with multiples different conditions? | Create mask by your conditions - here for greater by Series.gt with not equal by Series.ne chained by & for bitwise AND and then use GroupBy.transform for count Trues by sum: mask = df['value'].gt(3) & df['type'].ne('medium') df['count'] = mask.groupby(df['entity']).transform('sum') Solution with helper column new: mask = df['value'].gt(3) & df['type'].ne('medium') df['count'] = df.assign(new = mask).groupby('entity')['new'].transform('sum') print (df) value type entity count 0 1 big R 3 1 2 small R 3 2 3 medium R 3 3 4 big P 4 4 2 big R 3 5 42 big P 4 6 12 big P 4 7 21 medium P 4 8 21 small R 3 9 424 small R 3 10 34 small P 4 11 12 medium R 3 12 42 small R 3 | 11 | 6 |
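The question also asks about the general case with several different conditions; one possible sketch, reusing the question's dataframe and the same transform idea, is to collect the conditions in a list and AND-reduce them. The two conditions shown are just the ones from the accepted answer.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'value': [1, 2, 3, 4, 2, 42, 12, 21, 21, 424, 34, 12, 42],
                   'type': ['big', 'small', 'medium', 'big', 'big', 'big', 'big',
                            'medium', 'small', 'small', 'small', 'medium', 'small'],
                   'entity': ['R', 'R', 'R', 'P', 'R', 'P', 'P', 'P', 'R', 'R', 'P', 'R', 'R']})

# any number of row-wise conditions can go in this list
conditions = [df['value'].gt(3), df['type'].ne('medium')]
mask = np.logical_and.reduce(conditions)   # element-wise AND of all conditions

df['count'] = pd.Series(mask, index=df.index).groupby(df['entity']).transform('sum')
```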
64,218,839 | 2020-10-6 | https://stackoverflow.com/questions/64218839/removing-loops-with-numpy-einsum | I have a some nested loops (three total) where I'm trying to use numpy.einsum to speed up the calculations, but I'm struggling to get the notation correct. I managed to get rid of one loop, but I can't figure out the other two. Here's what I've got so far: import numpy as np import time def myfunc(r, q, f): nr = r.shape[0] nq = q.shape[0] y = np.zeros(nq) for ri in range(nr): for qi in range(nq): y[qi] += np.einsum('i,i',f[ri,qi]*f[:,qi],np.sinc(q[qi]*r[ri,:]/np.pi)) return y r = np.random.random(size=(1000,1000)) q = np.linspace(0,1,1001) f = np.random.random(size=(r.shape[0],q.shape[0])) start = time.time() y = myfunc(r, q, f) end = time.time() print(end-start) While this was much faster than the original, this is still too slow, and takes about 30 seconds. Note the original without the einsum call was the following (which looks like it will take ~2.5 hours, didn't wait to find out for sure): def myfunc(r, q, f): nr = r.shape[0] nq = q.shape[0] y = np.zeros(nq) for ri in range(nr): for rj in range(nr): for qi in range(nq): y[qi] += f[ri,qi]*f[rj,qi]*np.sinc(q[qi]*r[ri,rj]/np.pi)) return y Does anyone know how to get rid of these loops with an einsum, or any other tool for that matter? | Your function seems to be equivalent to the following: # this is so called broadcasting s = np.sinc(q * r[...,None]/np.pi) np.einsum('iq,jq,ijq->q',f,f,s) Which took about 20 seconds on my system, with most of the time to allocate s. Let's test it for a small sample: np.random.seed(1) r = np.random.random(size=(10,10)) q = np.linspace(0,1,1001) f = np.random.random(size=(r.shape[0],q.shape[0])) (np.abs(np.einsum('iq,jq,ijq->q',f,f,s) - myfunc(r,q,f)) < 1e-6).all() # True Since np.sinc is not a linear operator, I'm not quite sure how we can further reduce the run time. | 8 | 6 |
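Since the answer notes that most of the runtime goes into allocating the full (nr, nr, nq) sinc array, here is a hedged sketch of one way to trade that peak memory for a short loop over chunks of q; the chunk size of 100 is an arbitrary choice.

```python
import numpy as np

def myfunc_chunked(r, q, f, chunk=100):
    y = np.zeros(q.shape[0])
    for start in range(0, q.shape[0], chunk):
        sl = slice(start, start + chunk)
        # sinc array only for this slice of q: shape (nr, nr, len of slice)
        s = np.sinc(q[sl] * r[..., None] / np.pi)
        y[sl] = np.einsum('iq,jq,ijq->q', f[:, sl], f[:, sl], s)
    return y
```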
64,214,769 | 2020-10-5 | https://stackoverflow.com/questions/64214769/finding-multiple-substrings-in-a-string-without-iterating-over-it-multiple-times | I need to find if items from a list appear in a string, and then add the items to a different list. This code works: data =[] line = 'akhgvfalfhda.dhgfa.lidhfalihflaih**Thing1**aoufgyafkugafkjhafkjhflahfklh**Thing2**dlfkhalfhafli...' _legal = ['thing1', 'thing2', 'thing3', 'thing4',...] for i in _legal: if i in line: data.append(i) However, the code iterates over line (which could be long) multiple times- as many times as there are item in _legal (which could be a lot). That's too slow for me, and I'm searching for a way to do it faster. line doesn't have any specific format, so using .split() couldn't work, as far as I know. Edit: changed line so that it better represents the problems. | One way I could think of to improve is: Get all unique lengths of the words in _legal Build a dictionary of words from line of those particular lengths using a sliding window technique. The complexity should be O( len(line)*num_of_unique_lengths ), this should be better than brute force. Now look for each thing in the dictionary in O(1). Code: line = 'thing1 thing2 456 xxualt542l lthin. dfjladjfj lauthina ' _legal = ['thing1', 'thing2', 'thing3', 'thing4', 't5', '5', 'fj la'] ul = {len(i) for i in _legal} s=set() for l in ul: s = s.union({line[i:i+l] for i in range(len(line)-l)}) print(s.intersection(set(_legal))) Output: {'thing1', 'fj la', 'thing2', 't5', '5'} | 9 | 4 |
64,207,149 | 2020-10-5 | https://stackoverflow.com/questions/64207149/python-setuptools-package-directory-does-not-exist | I have a project with this setup.py file: import setuptools with open("README.md", "r") as fh: long_description = fh.read() setuptools.setup( name="", version="0.0.1", author="", author_email="", description="", long_description=long_description, long_description_content_type="text/markdown", packages=setuptools.find_packages(where="./src", exclude=("./tests",)), classifiers=[ "Programming Language :: Python :: 3", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", ], python_requires='>=3.8', ) This is my project directory structure (first two levels): $ tree -L 2 . βββ README.md βββ setup.py βββ src β βββ my_pkg βββ tests βββ conftest.py βββ data βββ __init__.py βββ integration βββ __pycache__ βββ unit When I run any setuptools command, I get the following error: $ python setup.py build running build running build_py error: package directory 'my_pkg' does not exist The same happens for other commands like python setup.py develop and python setup.py bdist-wheel. I suspect that it has to do with the src directory, as specified in the find_packages(where="./src") call in the setup.py. However, I've been following the documentation, and it does look the the my_pkg module is discovered at some point. Why does build_py fail to find it? | find_packages() automatically generates package names. That is, in your case all it does is generate ['my_pkg']. It doesn't actually tell setup() where to find my_pkg, just that it should expect to find a package called my_pkg somewhere. You have to separately tell setup() where it should look for packages. Is this confusing and counter intuitive? Yes. Anyway, you can tell setup() where to find my_pkg by using the package_dir argument. eg. package_dir={"":"src"} | 15 | 20 |
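A minimal sketch of how package_dir and find_packages fit together for the src layout in the question; the project name and version are placeholders, and the other metadata fields from the question are omitted for brevity.

```python
import setuptools

setuptools.setup(
    name="my_pkg",
    version="0.0.1",
    # tell setuptools that the package root is the src directory...
    package_dir={"": "src"},
    # ...and generate the package names relative to that same directory
    packages=setuptools.find_packages(where="src"),
    python_requires=">=3.8",
)
```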
64,206,070 | 2020-10-5 | https://stackoverflow.com/questions/64206070/pytorch-runtimeerror-enforce-fail-at-inline-container-cc209-file-not-fou | Problem I'm trying to load a file using PyTorch, but the error states archive/data.pkl does not exist. Code import torch cachefile = 'cacheddata.pth' torch.load(cachefile) Output --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-4-8edf1f27a4bd> in <module> 1 import torch 2 cachefile = 'cacheddata.pth' ----> 3 torch.load(cachefile) ~/opt/anaconda3/envs/matching/lib/python3.8/site-packages/torch/serialization.py in load(f, map_location, pickle_module, **pickle_load_args) 582 opened_file.seek(orig_position) 583 return torch.jit.load(opened_file) --> 584 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args) 585 return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args) 586 ~/opt/anaconda3/envs/matching/lib/python3.8/site-packages/torch/serialization.py in _load(zip_file, map_location, pickle_module, **pickle_load_args) 837 838 # Load the data (which may in turn use `persistent_load` to load tensors) --> 839 data_file = io.BytesIO(zip_file.get_record('data.pkl')) 840 unpickler = pickle_module.Unpickler(data_file, **pickle_load_args) 841 unpickler.persistent_load = persistent_load RuntimeError: [enforce fail at inline_container.cc:209] . file not found: archive/data.pkl Hypothesis I'm guessing this has something to do with pickle, from the docs: This save/load process uses the most intuitive syntax and involves the least amount of code. Saving a model in this way will save the entire module using Pythonβs pickle module. The disadvantage of this approach is that the serialized data is bound to the specific classes and the exact directory structure used when the model is saved. The reason for this is because pickle does not save the model class itself. Rather, it saves a path to the file containing the class, which is used during load time. Because of this, your code can break in various ways when used in other projects or after refactors. Versions PyTorch version: 1.6.0 Python version: 3.8.0 | Turned out the file was somehow corrupted. After generating it again it loaded without issue. | 18 | 13 |
64,202,437 | 2020-10-5 | https://stackoverflow.com/questions/64202437/airflow-got-an-unexpected-keyword-argument-conf | I'm learning Apache Airflow to implement it at my workplace, I stumbled on a problem, when trying to pass parameter to function like this (I followed the documentation) from airflow import DAG import pendulum from datetime import datetime, timedelta from airflow.operators.python_operator import PythonOperator args = { "owner": "airflow", "start_date": pendulum.datetime(year=2020, month=10, day=5, tzinfo='Asia/Shanghai'), "retries": 5, "retry_delay": timedelta(minutes=3) } dag = DAG( "example_dag_v2", schedule_interval="@daily", default_args=args ) def my_mult_function(number): return number*number mult_task = PythonOperator( task_id = 'mult_task', provide_context=True, python_callable=my_mult_function, op_kwargs={'number': 5}, dag = dag ) mult_task I keep getting this error TypeError: my_mult_function() got an unexpected keyword argument 'conf' where did I do wrong ? Solution: so i found the solution but still dont understand why the solution is def my_mult_function(number, **kwargs): return number*number i passed **kwargs on the parameters, and it works! But i still dont understand why i need to pass the **kwargs ? | You have set provide_context=True so PythonOperator will send the execute context to your python_callable. So a generic catch all keyword arguments, **kwargs fixes the issue. https://github.com/apache/airflow/blob/v1-10-stable/airflow/operators/python_operator.py#L108. If you are not going to use anything from the context then set provide_context=False. | 17 | 30 |
64,199,121 | 2020-10-4 | https://stackoverflow.com/questions/64199121/missing-scope-error-dropbox-api-authentication | I'm trying to download a large Dropbox folder with a bunch of subfolders to an external harddrive using the scripts at: http://tilburgsciencehub.com/examples/dropbox/ import dropbox from get_dropbox import get_folders, get_files from dropbox.exceptions import ApiError, AuthError # read access token access_token = "sl.superlongstring" print('Authenticating with Dropbox...') try: dbx = dropbox.Dropbox(access_token, scope=['files.content.read', 'files.metadata.read']) print('...authenticated with Dropbox owned by ' + dbx.users_get_current_account().name.display_name) except AuthError as err: print(err) no errors here , correctly displays my name. try: folders=get_folders(dbx, '/Audioboeken') print(folders) download_dir = r'L:\\00 Audioboeken' print('Obtaining list of files in target directory...') get_files(dbx, folder_id, download_dir) except ApiError as err: print(err) except AuthError as autherr: print(autherr) This errors: dropbox.exceptions.AuthError: AuthError('randomstringofnumbers, AuthError('missing_scope', TokenScopeError(required_scope='files.metadata.read'))) I've tried adding scope to the login request, but that doesn't seem to help..(not sure if I did that correctly) App Permissions: The checkbox for files.content.read and files.metadata.read are checked. | The 'missing_scope' error indicates that while the app is permitted to use that scope, the particular access token you're using to make the API call does not have that scope granted. Adding a scope to your app via the App Console does not retroactively grant that scope to existing access tokens. That being the case, to make any API calls that require that scope, you'll need to get a new access token with that scope. | 10 | 34 |
64,199,814 | 2020-10-4 | https://stackoverflow.com/questions/64199814/reverse-key-value-pairing-in-python-dictionary | I need a way to reverse my key values pairing. Let me illustrate my requirements. dict = {1: (a, b), 2: (c, d), 3: (e, f)} I want the above to be converted to the following: dict = {1: (e, f), 2: (c, d), 3: (a, b)} | You just need: new_dict = dict(zip(old_dict, reversed(old_dict.values()))) Note, prior to Python 3.8, where dict_values objects are not reversible, you will need something like: new_dict = dict(zip(old_dict, reversed(list(old_dict.values())))) | 12 | 12 |
64,194,634 | 2020-10-4 | https://stackoverflow.com/questions/64194634/why-pip-freeze-returns-some-gibberish-instead-of-package-version | Here is what I did: β― pip freeze aiohttp @ file:///Users/aiven/Library/Caches/pypoetry/artifacts/50/32/0b/b64b02b6cefa4c089d84ab9edf6f0d960ca26cfbe57fe0e693a00912da/aiohttp-3.6.2-py3-none-any.whl async-timeout @ file:///Users/aiven/Library/Caches/pypoetry/artifacts/0d/5d/3e/630122e534c1b25e36c3142597c4b0b2e9d3f2e0a9cea9f10ac219f9a7/async_timeout-3.0.1-py3-none-any.whl attrs @ file:///Users/aiven/Library/Caches/pypoetry/artifacts/7f/e7/44/32ca3c400bb4d8a2f1a91d1d3f22bbaee2f4727a037aad898fbf5d36ce/attrs-20.2.0-py2.py3-none-any.whl chardet @ file:///Users/aiven/Library/Caches/pypoetry/artifacts/c2/02/35/0d93b80c730b360c5e3d9bdc1b8d1929dbd784ffa8e3db025c14c045e4/chardet-3.0.4-py2.py3-none-any.whl ... Version of pip: β― pip -V pip 20.2.3 from /Users/aiven/projects/foobar/venv/lib/python3.8/site-packages/pip (python 3.8) I expected something like this: > pip freeze foo==1.1.1 bar==0.2.1 pip freeze -h wasn't very helpful... For context: I installed packages into virtualenv using poetry. | This seems to come from the changes to support PEP 610. In particular refer to the Freezing an environment section. The notion of what "freezing" entails has been expanded to include preserving direct url sources for packages that were installed with direct origins. Poetry, with 1.1.0 has introduced a new installer that now handles discovery and download of artifacts for dependencies. This is different to the behaviour in 1.0.10 which simply let pip handle discover and download of required artifacts (wheels). This means that, now packages are installed using direct URL origins. This causes pip freeze to use direct reference format as specified in PEP 508 (eg: package @ file://../package.whl). For those interested, the url in question will be saved in <package>-<version>.dist-info/direct_url.json in the virtual env's site directory. You can get the old format output (not sure if this will change in the future), using the following command. pip --disable-pip-version-check list --format=freeze | 8 | 14 |
64,197,434 | 2020-10-4 | https://stackoverflow.com/questions/64197434/replace-column-value-based-on-value-in-other-column | so far my dataframe looks like this: ID Area Stage 1 P X 2 Q X 3 P X 4 Q Y I would like to replace the area 'Q' with 'P' for every row where the Stage is equal to 'X'. So the result should look like: ID Area Stage 1 P X 2 P X 3 P X 4 Q Y I tried: data.query('Stage in ["X"]')['Area']=data.query('Stage in ["X"]')['Area'].replace('Q','P') It does not work. Help is appreciated! :) | You can use loc to specify where you want to replace, and pass the replaced series to the assignment: df.loc[df['Stage']=='X', 'Area'] = df['Area'].replace('Q','P') Output: ID Area Stage 0 1 P X 1 2 P X 2 3 P X 3 4 Q Y | 8 | 3 |
64,196,443 | 2020-10-4 | https://stackoverflow.com/questions/64196443/get-last-message-s-from-telegram-channel-with-python | I'm using the python-telegram-bot library to write a bot in Python that sends URLs into a channel where the bot is administrator. Now, I would like to have the bot reading, let's say, the last 5 messages (I don't really care about the number as I just need to read the message on the chat) and store them into a list in the code for further elaborations. I already have my bot working with: bot = telegram.Bot(token='mytoken') bot.sendMessage(chat_id='@mychatid', text=entry.link) But I can't find a bot.getLastMessage or bot.getMessage kind of class into the python-telegram-bot library. In case there's already no written class that does that, how can I implement it via the Telegram API as I'm a bit of a beginner when it comes to API implementation? Thanks. | That's not possible in Bots unfortunately. Here you can find all available methods (that python-telegram-bot invokes behind the scenes) and there's no such method available to fetch messages on demand. The closest you can get through the api is getChat (which would return the pinned_message in that chat). What you can do in this case is, store the messages the bot sends as well as the message updates the bot receives (by setting up a handler) in some storage (database) and fetch from there later on. | 10 | 9 |
64,196,315 | 2020-10-4 | https://stackoverflow.com/questions/64196315/json-dump-into-specific-folder | This seems like it should be simple enough, but haven't been able to find a working example of how to approach this. Simply put I am generating a JSON file based on a list that a script generates. What I would like to do, is use some variables to run the dump() function, and produce a json file into specific folders. By default it of course dumps into the same place the .py file is located, but can't seem to find a way to run the .py file separately, and then produce the JSON file in a new folder of my choice: import json name = 'Best' season = '2019-2020' blah = ['steve','martin'] with open(season + '.json', 'w') as json_file: json.dump(blah, json_file) Take for example the above. What I'd want to do is the following: Take the variable 'name', and use that to generate a folder of the same name inside the folder the .py file is itself. This would then place the JSON file, in the folder, that I can then manipulate. Right now my issue is that I can't find a way to produce the file in a specific folder. Any suggestions, as this does seem simple enough, but nothing I've found had a method to do this. Thanks! | Python's pathlib is quite convenient to use for this task: import json from pathlib import Path data = ['steve','martin'] season = '2019-2020' Paths of the new directory and json file: base = Path('Best') jsonpath = base / (season + ".json") Create the directory if it does not exist and write json file: base.mkdir(exist_ok=True) jsonpath.write_text(json.dumps(data)) This will create the directory relative to the directory you started the script in. If you wanted a absolute path, you could use Path('/somewhere/Best'). If you wanted to start the script while beeing in some other directory and still create the new directory into the script's directory, use: Path(__file__).resolve().parent / 'Best'. | 10 | 7 |
64,189,176 | 2020-10-3 | https://stackoverflow.com/questions/64189176/os-sched-getaffinity0-vs-os-cpu-count | So, I know the difference between the two methods in the title, but not the practical implications. From what I understand: If you use more NUM_WORKERS than are cores actually available, you face big performance drops because your OS constantly switches back and forth trying to keep things in parallel. Don't know how true this is, but I read it here on SO somewhere from someone smarter than me. And in the docs for os.cpu_count() it says: Return the number of CPUs in the system. Returns None if undetermined. This number is not equivalent to the number of CPUs the current process can use. The number of usable CPUs can be obtained with len(os.sched_getaffinity(0)) So, I'm trying to work out what the "system" refers to if there can be more CPUs usable by a process than there are in the "system". I just want to safely and efficiently implement multiprocessing.pool functionality. So here is my question summarized: What are the practical implications of: NUM_WORKERS = os.cpu_count() - 1 # vs. NUM_WORKERS = len(os.sched_getaffinity(0)) - 1 The -1 is because I've found that my system is a lot less laggy if I try to work while data is being processed. | If you had a tasks that were pure 100% CPU bound, i.e. did nothing but calculations, then clearly nothing would/could be gained by having a process pool size greater than the number of CPUs available on your computer. But what if there was a mix of I/O thrown in whereby a process would relinquish the CPU waiting for an I/O to complete (or, for example, a URL to be returned from a website, which takes a relatively long time)? To me it's not clear that you couldn't achieve in this scenario improved throughput with a process pool size that exceeds os.cpu_count(). Update Here is code to demonstrate the point. This code, which would probably be best served by using threading, is using processes. I have 8 cores on my desktop. The program simply retrieves 54 URL's concurrently (or in parallel in this case). The program is passed an argument, the size of the pool to use. Unfortunately, there is initial overhead just to create additional processes so the savings begin to fall off if you create too many processes. 
But if the task were long running and had a lot of I/O, then the overhead of creating the processes would be worth it in the end: from concurrent.futures import ProcessPoolExecutor, as_completed import requests from timing import time_it def get_url(url): resp = requests.get(url, headers={'user-agent': 'my-app/0.0.1'}) return resp.text @time_it def main(poolsize): urls = [ 'https://ibm.com', 'https://microsoft.com', 'https://google.com', 'https://ibm.com', 'https://microsoft.com', 'https://google.com', 'https://ibm.com', 'https://microsoft.com', 'https://google.com', 'https://ibm.com', 'https://microsoft.com', 'https://google.com', 'https://ibm.com', 'https://microsoft.com', 'https://google.com', 'https://ibm.com', 'https://microsoft.com', 'https://google.com', 'https://ibm.com', 'https://microsoft.com', 'https://google.com', 'https://ibm.com', 'https://microsoft.com', 'https://google.com', 'https://ibm.com', 'https://microsoft.com', 'https://google.com', 'https://ibm.com', 'https://microsoft.com', 'https://google.com', 'https://ibm.com', 'https://microsoft.com', 'https://google.com', 'https://ibm.com', 'https://microsoft.com', 'https://google.com', 'https://ibm.com', 'https://microsoft.com', 'https://google.com', 'https://ibm.com', 'https://microsoft.com', 'https://google.com', 'https://ibm.com', 'https://microsoft.com', 'https://google.com', 'https://ibm.com', 'https://microsoft.com', 'https://google.com', 'https://ibm.com', 'https://microsoft.com', 'https://google.com', 'https://ibm.com', 'https://microsoft.com', 'https://google.com', ] with ProcessPoolExecutor(poolsize) as executor: futures = {executor.submit(get_url, url): url for url in urls} for future in as_completed(futures): text = future.result() url = futures[future] print(url, text[0:80]) print('-' * 100) if __name__ == '__main__': import sys main(int(sys.argv[1])) 8 processes: (the number of cores I have): func: main args: [(8,), {}] took: 2.316840410232544 sec. 16 processes: func: main args: [(16,), {}] took: 1.7964842319488525 sec. 24 processes: func: main args: [(24,), {}] took: 2.2560818195343018 sec. | 15 | 6 |
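A small sketch of how the two calls are often combined in practice: prefer the affinity-aware count where it exists (it is only available on some platforms, e.g. Linux) and fall back to os.cpu_count() elsewhere.

```python
import os

def usable_cpus():
    try:
        # number of CPUs the *current process* is allowed to run on (Linux)
        return len(os.sched_getaffinity(0))
    except AttributeError:
        # sched_getaffinity is not available everywhere (e.g. Windows, macOS)
        return os.cpu_count() or 1

NUM_WORKERS = max(1, usable_cpus() - 1)  # leave one core free, as in the question
```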
64,195,153 | 2020-10-4 | https://stackoverflow.com/questions/64195153/get-index-value-from-pandas-dataframe | I have a Pandas dataframe (countries) and need to get specific index value. (Say index 2 => I need Japan) I used iloc, but i got the data (7.542) return countries.iloc[2] 7.542 | call the index directly return countries.index[2] but what you post here looks like a pandas dataframe instead of a series - if that's the case do countries['Country_Name'].iloc[2] | 7 | 11 |
64,180,054 | 2020-10-3 | https://stackoverflow.com/questions/64180054/importerror-cannot-import-name-command-from-celery-bin-base | When I run the command flower -A main --port=5555 Flower doesn't work, the error is: > ImportError: cannot import name 'Command' from 'celery.bin.base' Any ideas? Main is a Django Project | Flower is always lagging behind Celery, so if you use the latest Celery (they refactored the CLI) it will probably fail. Stick to 4.4.x until Flower catches up. | 21 | 28 |
64,183,806 | 2020-10-3 | https://stackoverflow.com/questions/64183806/extracting-the-exponent-from-scientific-notation | I have a bunch of numbers in scientific notation and I would like to find their exponent. For example: >>> find_exp(3.7e-13) 13 >>> find_exp(-7.2e-11) 11 Hence I only need their exponent ignoring everything else including the sign. I have looked for such a way in python but similar questions are only for formatting purposes. | Common logarithm is what you need here, you can use log10 + floor: from math import log10, floor def find_exp(number) -> int: base10 = log10(abs(number)) return abs(floor(base10)) find_exp(3.7e-13) # 13 find_exp(-7.2e-11) # 11 | 10 | 14 |
64,182,077 | 2020-10-3 | https://stackoverflow.com/questions/64182077/getting-name-of-a-variable-in-python | If I have local/global variable var of any type how do I get its name, i.e. string "var"? I.e. for some imaginary function or operator nameof() next code should work: var = 123 assert nameof(var) == "var" There's .__name__ property for getting name of a function or a type object that variable holds value of, but is there anything like this for getting name of a variable itself? Can this be achieved without wrapping a variable into some magic object, as some libraries do in order to get variable's name? If not possible to achieve without magic wrappers then what is the most common/popular wrapping library used for this case? | You can do this with the package python-varname: https://github.com/pwwang/python-varname First run pip install varname. Then see the code below: from varname import nameof var = 123 name = nameof(var) #name will be 'var' | 16 | 24 |
64,180,609 | 2020-10-3 | https://stackoverflow.com/questions/64180609/delete-both-row-and-column-in-numpy-array | Say I have an array like this: x = [1, 2, 3] [4, 5, 6] [7, 8, 9] And I want to delete both the ith row and column. So if i=1, I'd create (with 0-indexing): [1, 3] [7, 9] Is there an easy way of doing this with a one-liner? I know I can call np.delete() twice, but that seems a little unclean. It'd be exactly equivalent to np.delete(np.delete(x, idx, 0), idx, 1), where idx is the index of the row/column pair to delete - it'd just look cleaner. | In [196]: x = np.arange(1,10).reshape(3,3) If you look at np.delete code, you'll see that it's python (not compiled) and takes different approaches depending on how the delete values are specified. One is to make a res array of right size, and copy two slices to it. Another is to make a boolean mask. For example: In [197]: mask = np.ones(x.shape[0], bool) In [198]: mask[1] = 0 In [199]: mask Out[199]: array([ True, False, True]) Since you are deleting the same row and column, use this indexing: In [200]: x[mask,:][:,mask] Out[200]: array([[1, 3], [7, 9]]) A 1d boolean mask like this can't be 'broadcasted' in the same ways a integer array can. We can do the 2d advanced indexing with: In [201]: idx = np.nonzero(mask)[0] In [202]: idx Out[202]: array([0, 2]) In [203]: np.ix_(idx,idx) Out[203]: (array([[0], [2]]), array([[0, 2]])) In [204]: x[np.ix_(idx,idx)] Out[204]: array([[1, 3], [7, 9]]) Actually ix_ can work directly from the boolean array(s): In [207]: np.ix_(mask,mask) Out[207]: (array([[0], [2]]), array([[0, 2]])) This isn't a one-liner, but it probably is faster than the double delete, since it strips off all the extra baggage that the more general function requires. | 8 | 5 |
64,176,468 | 2020-10-2 | https://stackoverflow.com/questions/64176468/how-to-create-a-subscriptable-mock-object | Suppose, I have a code snippet as foo = SomeClass() bar = foo[1:999].execute() To test this, I have tried something as foo_mock = Mock() foo_mock[1:999].execute() Unfortunately, this raised an exception, TypeError: 'Mock' object is not subscriptable So, How can I create a subscriptable Mock object? | Just use a MagicMock instead. >>> from unittest.mock import Mock, MagicMock >>> Mock()[1:999] TypeError: 'Mock' object is not subscriptable >>> MagicMock()[1:999] <MagicMock name='mock.__getitem__()' id='140737078563504'> It's so called "magic" because it supports __magic__ methods such as __getitem__. | 25 | 46 |
64,169,867 | 2020-10-2 | https://stackoverflow.com/questions/64169867/numpy-finding-interval-which-has-a-least-k-points | I have some points in the interval [0,20] I have a window of size window_size=3 that I can move inside the above interval. Therefore the beginning of the window - let's call start is constrained to [0,17]. Let's say we have some points below: points = [1.4,1.8, 11.3,11.8,12.3,13.2, 18.2,18.3,18.4,18.5] If we wanted a minimum of min_points=4 points the solution of the start ranges of the windows (which I found manually ) are: suitable_starts = [[10.2,11.3],[15.5,17.0]] i.e. The start of the size 3 window can be from 10.2 to 11.3 and from 15.5 to 17.0. Trivially, the corresponding end of the windows would just be +3 of the start ranges. I am looking for a way to algorithmically cover this quickly with clever numpy or scipy or other functionality. The general function I'm looking for is: get_start_windows(interval = [0,20], window_size = 3.0, points = [1.4,1.8,11.3,11.8,12.3,13.2,18.2,18.3,18.4,18.5], min_points = 4 return suitable_starts # suitable_starts = [[10.2,11.3],[15.5,17.0]] Note: As someone in the comments has pointed out there are special cases sometimes when points are exactly window_size apart. However in reality the points are double floats where it is impossible for them to exactly window_size apart so these can be ignored. These special examples include: points = [1.4,1.8, 11.3,11.8,12.3,13.2,14.2,15.2,16.2,17.2,18.2,18.3,18.4,18.5] but these can be safely ignored. | After a bit of struggle I came up with this solution. First a bit of explanations, and order of thoughts: Ideally we would want to set a window size and slide it from the most left acceptable point until the most right acceptable point, and start counting when min_points are in the window, and finish count when min_points no longer inside it (imagine it as a convultion oprtator or so) the basic pitfall is that we want to discrete the sliding, so the trick here is to check only when amount of points can fall under or up higher than min_points, which means on every occurance of element or window_size below it (as optional_starts reflects) then to iterate over optional_starts and sample the first time condition mets, and the last one that condition mets for each interval so the following code was written as described above: def consist_at_least(start, points, min_points, window_size): a = [point for point in points if start <= point <= start + window_size] return len(a)>=min_points points = [1.4,1.8, 11.3,11.8,12.3,13.2, 18.2,18.3,18.4,18.5] min_points = 4 window_size = 3 total_interval = [0,20] optional_starts = points + [item-window_size for item in points if item-window_size>=total_interval[0]] + [total_interval[0] + window_size] + [total_interval[1] - window_size] + [total_interval[0]] optional_starts = [item for item in optional_starts if item<=total_interval[1]-window_size] intervals = [] potential_ends = [] for start in sorted(optional_starts): is_start_interval = len(intervals)%2 == 0 if consist_at_least(start, points, min_points, window_size): if is_start_interval: intervals.append(start) else: potential_ends.append(start) elif len(potential_ends)>0 : intervals.append(potential_ends[-1]) potential_ends = [] if len(potential_ends)>0: intervals.append(potential_ends[-1]) print(intervals) output: [10.2, 11.3, 15.5, 17] Each 2 consequtive elements reflects start and end of interval | 7 | 2 |
64,173,613 | 2020-10-2 | https://stackoverflow.com/questions/64173613/where-is-python-interpreter-located-in-virtualenv | Where is python intrepreter located in virtual environment ? I am making a GUI project and I stuck while finding the python interpreter in my virtual environment. | Execute next code and it will print location of your python interpreter. import sys print(sys.executable) | 7 | 19 |
64,171,665 | 2020-10-2 | https://stackoverflow.com/questions/64171665/why-is-gunicorn-displaying-so-many-processes | I have a simple web app built with Django & running with Gunicorn with Nginx. When I open HTOP, I see there are so many processes & threads spawn -- for a single tutorial app that just displays a login form. See screenshot of HTOP below: Why are there so many of them open for such a simple app? Here is my configuration """gunicorn WSGI server configuration.""" from multiprocessing import cpu_count from os import environ def max_workers(): return cpu_count() * 2 + 1 max_requests = 1000 worker_class = 'gevent' workers = max_workers() Thanks | That's because of gunicorn's design: Gunicorn is based on the pre-fork worker model. This means that there is a central master process that manages a set of worker processes. The master never knows anything about individual clients. All requests and responses are handled completely by worker processes. That means that gunicorn will spawn as many processes and threads as it needs, depending on the configuration and type of the workers set. | 7 | 5 |
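If the goal is simply fewer processes in htop for a small app, the worker count in the question's gunicorn config can be pinned to a small fixed number instead of being derived from the CPU count; a minimal sketch follows, where the numbers are arbitrary examples rather than recommendations.

```python
"""gunicorn WSGI server configuration (gunicorn.conf.py)."""

# a fixed, small number of pre-forked worker processes
workers = 2

# each gevent worker handles many connections cooperatively inside one process,
# so a small worker count can still serve many concurrent requests
worker_class = "gevent"
worker_connections = 100

max_requests = 1000
```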
64,163,528 | 2020-10-1 | https://stackoverflow.com/questions/64163528/ubuntu-command-pip-not-found-but-there-are-18-similar-ones | I am trying to install a toolkit, I'm on WSL using ubuntu - I downloaded ubuntu yesterday. Here is what the installation process looks like for this toolkit. On windows cmd it says I have python 3.7.9 but on ubuntu its saying I have python 3.8.2 git clone https://github.com... cd program pip install -e . or: pip install program pip install -e . is not working for me, I get this error: user@DESKTOP-REA10BN:~/gym$ pip install -e . Command 'pip' not found, but there are 18 similar ones. however, I checked and I have pip installed, here's what I checked for before running: user@DESKTOP-REA10BN:~$ cd\ > sudo apt-get install python-pip cdsudo: command not found user@DESKTOP-REA10BN:~$ python3 --version Python 3.8.2 user@DESKTOP-REA10BN:~$ python3-pip --version python3-pip: command not found user@DESKTOP-REA10BN:~$ which pip3 /usr/bin/pip3 user@DESKTOP-REA10BN:~$ pip3 -V pip 20.0.2 from /usr/lib/python3/dist-packages/pip (python 3.8) my PATHS: /home/user/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/mnt/c/Program Files/WindowsApps/CanonicalGroupLimited.UbuntuonWindows_2004.2020.812.0_x64__79rhkp1fndgsc:/mnt/c/windows/system32:/mnt/c/windows:/mnt/c/windows/System32/Wbem:/mnt/c/windows/System32/WindowsPowerShell/v1.0/:/mnt/c/windows/System32/OpenSSH/:/mnt/c/Program Files (x86)/NVIDIA Corporation/PhysX/Common:/mnt/c/Program Files/NVIDIA Corporation/NVIDIA NvDLISR:/mnt/c/WINDOWS/system32:/mnt/c/WINDOWS:/mnt/c/WINDOWS/System32/Wbem:/mnt/c/WINDOWS/System32/WindowsPowerShell/v1.0/:/mnt/c/WINDOWS/System32/OpenSSH/:/mnt/c/Users/user/AppData/Local/Programs/Python/Python37-32/Scripts/:/mnt/c/Users/user/AppData/Local/Programs/Python/Python37-32/:/mnt/c/Users/user/AppData/Local/Microsoft/WindowsApps:/mnt/c/Users/user/AppData/Local/Programs/Microsoft VS Code/bin:/snap/bin | Short answer: Try running python3 -m pip install -e . Some explanations: The different versions of Python are not surprising. WSL is, effectively, an ultra-lightweight virtual machine. Your Windows python installation is entirely independent of the WSL python installation. Python has two widely used major versions, Python 2 and Python 3. The command python runs some minor version of Python 2, while the command python3 runs some minor version of Python 3. Below is my console output. lawruble@Balrog:~/scratch$ python --version Python 2.7.18 lawruble@Balrog:~/scratch$ python3 --version Python 3.8.5 Pip is the python installation manager, and has the same major versions as Python. The command pip runs the Python 2 version of pip, while pip3 runs the Python 3 version of pip. It's better practice to use python3 -m pip over pip3, it helps ensure that you're using the version of pip associated with the version of python you expect to run. | 16 | 23 |
64,160,594 | 2020-10-1 | https://stackoverflow.com/questions/64160594/fastapi-enum-type-models-not-populated | Below is my fastAPI code from typing import Optional, Set from fastapi import FastAPI from pydantic import BaseModel, HttpUrl, Field from enum import Enum app = FastAPI() class Status(Enum): RECEIVED = 'RECEIVED' CREATED = 'CREATED' CREATE_ERROR = 'CREATE_ERROR' class Item(BaseModel): name: str description: Optional[str] = None price: float tax: Optional[float] = None tags: Set[str] = [] status: Status = None @app.put("/items/{item_id}") async def update_item(item_id: int, item: Item): results = {"item_id": item_id, "item": item} return results Below is the swagger doc generated. The Status is not shown. I am new to pydantic and i am not sure on how to show status in the docs | create the Status class by inheriting from both str and Enum class Status(str, Enum): RECEIVED = 'RECEIVED' CREATED = 'CREATED' CREATE_ERROR = 'CREATE_ERROR' References Working with Python enumerations--(FastAPI doc) [BUG] docs don't show nested enum attribute for body--(Issue #329) | 20 | 49 |
64,159,770 | 2020-10-1 | https://stackoverflow.com/questions/64159770/whats-the-difference-between-numpy-random-vs-numpy-random-generate | I've been trying to simulate some Monte Carlos simulations lately and came across numpy.random. Checking the documentation of the exponential generator I've noticed that that's a warning in the page, which tells that Generator.exponential should be used for new code. Althought that, numpy.random.exponential still works, but I couldn't run the Generator counterpart. I've been getting the following error: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-14-c4cc7e61aa98> in <module> ----> 1 np.random.Generator.exponential(2, 1000) TypeError: descriptor 'exponential' for 'numpy.random._generator.Generator' objects doesn't apply to a 'int' object My questions are: What's the difference between these 2? How to generate a sample with Generator? | The Generator referred to in the documentation is a class, introduced in NumPy 1.17: it's the core class responsible for adapting values from an underlying bit generator to generate samples from various distributions. numpy.random.exponential is part of the (now) legacy Mersenne-Twister-based random framework. You probably shouldn't worry about the legacy functions being removed any time soon - doing so would break a huge amount of code, but the NumPy developers recommend that for new code, you should use the new system, not the legacy system. Your best source for the rationale for the change to the system is probably NEP 19: https://numpy.org/neps/nep-0019-rng-policy.html To use Generator.exponential as recommended by the documentation, you first need to create an instance of the Generator class. The easiest way to create such an instance is to use the numpy.random.default_rng() function. So you want to start with something like: >>> import numpy >>> my_generator = numpy.random.default_rng() At this point, my_generator is an instance of numpy.random.Generator: >>> type(my_generator) <class 'numpy.random._generator.Generator'> and you can use my_generator.exponential to get variates from an exponential distribution. Here we take 10 samples from an exponential distribution with scale parameter 3.2 (or equivalently, rate 0.3125): >>> my_generator.exponential(3.2, size=10) array([6.26251663, 1.59879107, 1.69010179, 4.17572623, 5.94945358, 1.19466134, 3.93386506, 3.10576934, 1.26095418, 1.18096234]) Your Generator instance can of course also be used to get any other random variates you need: >>> my_generator.integers(0, 100, size=3) array([56, 57, 10]) | 9 | 9 |
64,158,898 | 2020-10-1 | https://stackoverflow.com/questions/64158898/what-does-keras-tokenizer-num-words-specify | Given this piece of code: from tensorflow.keras.preprocessing.text import Tokenizer sentences = [ 'i love my dog', 'I, love my cat', 'You love my dog!' ] tokenizer = Tokenizer(num_words = 1) tokenizer.fit_on_texts(sentences) word_index = tokenizer.word_index print(word_index) whether num_words=1 or num_words=100, I get the same output when I run this cell on my jupyter notebook, and I can't seem to understand what difference it makes in tokenization. {'love': 1, 'my': 2, 'i': 3, 'dog': 4, 'cat': 5, 'you': 6} | word_index it's simply a mapping of words to ids for the entire text corpus passed whatever the num_words is the difference is evident in the usage. for example, if we call texts_to_sequences sentences = [ 'i love my dog', 'I, love my cat', 'You love my dog!' ] tokenizer = Tokenizer(num_words = 1+1) tokenizer.fit_on_texts(sentences) tokenizer.texts_to_sequences(sentences) # [[1], [1], [1]] only the love id is returned because the most frequent word instead sentences = [ 'i love my dog', 'I, love my cat', 'You love my dog!' ] tokenizer = Tokenizer(num_words = 100+1) tokenizer.fit_on_texts(sentences) tokenizer.texts_to_sequences(sentences) # [[3, 1, 2, 4], [3, 1, 2, 5], [6, 1, 2, 4]] the ids of the most 100 frequent words is returned | 10 | 13 |
64,151,774 | 2020-10-1 | https://stackoverflow.com/questions/64151774/does-javascript-use-hashtables-for-map-and-set | I'm a Python developer, making my first steps in JavaScript. I started using Map and Set. They seem to have the same API as dict and set in Python, so I assumed they're a hashtable and I can count on O(1) lookup time. But then, out of curiosity, I tried to see what would happen if I were to do this in Chrome's console: new Set([new Set([1, 2, 3])]) What happens is this: Set(1) {Set(3)} JavaScript happily creates the set. How can this be? In Python you would have gotten an error since you can't put a mutable item in a set or a dict. Why does JavaScript allow it? | Consider the following JS code: > m1 = new Map([['a', 1]]) Map { 'a' => 1 } > m2 = new Map() Map {} > m2.set(m1, 3) Map { Map { 'a' => 1 } => 3 } > m2.get(m1) 3 But note, it is hashing based on identity, i.e. ===, so... > m2.get(new Map([['a',1]])) undefined So really, how useful is this map? Note, this isn't different than Python's default behavior. The default status of user-defined type is being hashable: >>> class Foo: pass ... >>> f0 = Foo() >>> s = {f0} >>> Foo() in s False >>> f0 in s True In Python, by default, object.__eq__ will compare based on identity, so the above is fine. However, if you override __eq__, by default, __hash__ is set to None and trying to use a hashing-based container will fail: >>> class Bar: ... def __init__(self, value): ... self.value = value ... def __eq__(self, other): ... return self.value == other.value ... >>> b0 = Bar(0) >>> b1 = Bar(2) >>> {b0, b1} Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unhashable type: 'Bar' At this point, you must implement __hash__ to be consistent with __eq__, and note, though, that your user-defined object is never really very "immutable" | 12 | 8 |
64,149,878 | 2020-10-1 | https://stackoverflow.com/questions/64149878/importerror-cannot-import-name-types-from-google-cloud-vision-though-i-have | I have installed google-cloud-vision library following the documentation. It is for some reason unable to import types from google.cloud.vision. It worked fine on my pc, now when I shared with my client, he had a problem with imports though he has the library installed via pip. Here's the line that throws error: from google.cloud import vision from google.cloud.vision import types # this line throws error Any idea how to resolve this issue? | It's probably because there's some version mismatch (or less likely there's other library(s) with the same name). Have your client use a virtual environment. This should resolve the issue. P.S. You'll have to provide him with a requirements.txt file (obtained from pip3 freeze) so that he can do a pip3 install -r requirements.txt on his virtual environment to have the exact same packages as yours. | 12 | 5 |
64,148,371 | 2020-10-1 | https://stackoverflow.com/questions/64148371/discord-bot-can-only-see-itself-and-no-other-users-in-guild | I have recently been following this tutorial to get myself started with Discord's API. Unfortunately, when I got the part about printing all the users in the guild I hit a wall. When I try to print all users' names it only prints the name of the bot and nothing else. For reference, there are six total users in the guild. The bot has Administrator privileges. import os import discord TOKEN = os.environ.get('TOKEN') client = discord.Client() @client.event async def on_ready(): for guild in client.guilds: print(guild, [member.name for member in guild.members]) client.run(TOKEN) | As of discord.py v1.5.0, you are required to use Intents for your bot, you can read more about them by clicking here In other words, you need to do the following changes in your code - import discord from dotenv import load_dotenv load_dotenv() TOKEN = os.getenv('DISCORD_TOKEN') GUILD = os.getenv('DISCORD_GUILD') intents = discord.Intents.all() client = discord.Client(intents=intents) @client.event async def on_ready(): for guild in client.guilds: if guild.name == GUILD: break print( f'{client.user} is connected to the following guild: \n' f'{guild.name} (id: {guild.id})' ) # just trying to debug here for guild in client.guilds: for member in guild.members: print(member.name, ' ') members = '\n - '.join([member.name for member in guild.members]) print(f'Guild Members:\n - {members}') client.run(TOKEN) | 12 | 10 |
64,145,131 | 2020-9-30 | https://stackoverflow.com/questions/64145131/socket-bind-vs-socket-listen | I've learned how to write a python server, and figured out that I have a hole in my knowledge. Therefore, I would glad to know more about the differences between the commands bind(), listen() of the module called socket. In addition, when I use bind() with a specific port as a parameter, Is the particular port being in use already, before using the listen() method?! | I found a tutorial which explains in detail: ... bind() is used to associate the socket with the server address. Calling listen() puts the socket into server mode, and accept() waits for an incoming connection. listen() is what differentiates a server socket from a client. Once bind() is called, the port is now reserved and cannot be used again until either the program ends or the close() method is called on the socket. A test program that demonstrates this is as follows: import socket import time HOST = '127.0.0.1' PORT = 65432 s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind((HOST, PORT)) while 1: time.sleep(1) when running two instances of this program at once, you can see that the one started last has the error: Which proves that the port is reserved before listen() is ever called. | 7 | 10 |
64,141,383 | 2020-9-30 | https://stackoverflow.com/questions/64141383/how-to-import-own-module-within-aws-lambda-function | I am trying to import my own module but I am getting error: Unable to import module 'lambda_function': attempted relative import with no known parent package lambda_function.py Own modulename.py | You import it as if you were to import any other Python module. In other words don't do this: from .name import * but do this: from name import show_name For example: The contents of name.py: def my_name(): print("Your name goes here.") Don't forget to Deploy your function after making changes. | 8 | 13 |
64,139,881 | 2020-9-30 | https://stackoverflow.com/questions/64139881/pandas-error-indexerror-iloc-cannot-enlarge-its-target-object | I want to replace the value of a dataframe cell using pandas. I'm using this line: submission.iloc[i, coli] = train2.iloc[i2, coli-1] I get the following error line: IndexError: iloc cannot enlarge its target object What is the reason for this? | I think this happens because either 'i' or 'coli' is out of bounds in submission. According to the documentation, you can Enlarge the dataframe with loc, meaning it would add the required row and column (in either axis) if you assign a value to a row/column that currently does not exist, but apparently iloc will not do the same. | 27 | 34 |
64,138,572 | 2020-9-30 | https://stackoverflow.com/questions/64138572/pyenv-global-interpreter-not-working-on-windows10 | I have just installed pyenv following the installation guide pyenv-win, things goes smoothly, but i could not make the pyenv global python as the global interpreter I have rehashed after installation using pyenv rehash PS D:\> pyenv versions 3.5.1 3.6.2 3.7.7 * 3.8.2 (set by C:\Users\xxx\.pyenv\pyenv-win\version) results > python --version > 3.8.4 # expected > 3.8.2 therefore, I am not able to use virtualenv with the pyenv installed python interpreter virtualenv py382-djangodev --python=3.8.2 The path 3.8.2 (from --python=3.8.2) does not exist | In windows NT, the PATH variable is a combined result of the system and user variables: The Path is constructed from the system path, which can be viewed in the System Environment Variables field in the System dialog box. The User path is appended to the system path Shims PATH are defined in the user variables, so make sure your host python interpreter path is not defined in your system path | 13 | 4 |
64,137,236 | 2020-9-30 | https://stackoverflow.com/questions/64137236/algorithm-that-generates-all-contiguous-subarrays | Using the following input, [1, 2, 3, 4] I'm trying to get the following output [[1], [1, 2], [1, 2, 3], [1, 2, 3, 4], [2], [2, 3], [2, 3, 4], [3], [3, 4], [4]] As far I have made such an algorithm, but time complexity is not good. def find(height): num1 = 0 out = [] for i in range(len(height)): num2 = 1 for j in range(len(height)): temp = [] for x in range(num1, num2): temp.append(height[x]) num2 += 1 if temp: out.append(temp) num1 += 1 return out Is there any way to speed up that algorithm? | Contiguous sub-sequences The OP specified in comments that they were interested in contiguous sub-sequences. All that is needed to select a contiguous sub-sequence is to select a starting index i and an ending index j. Then we can simply return the slice l[i:j]. def contiguous_subsequences(l): return [l[i:j] for i in range(0, len(l)) for j in range(i+1, len(l)+1)] print(contiguous_subsequences([1,2,3,4])) # [[1], [1, 2], [1, 2, 3], [1, 2, 3, 4], [2], [2, 3], [2, 3, 4], [3], [3, 4], [4]] This function is already implemented in package more_itertools, where it is called substrings: import more_itertools print(list(more_itertools.substrings([0, 1, 2]))) # [(0,), (1,), (2,), (0, 1), (1, 2), (0, 1, 2)] Non-contiguous sub-sequences For completeness. Finding the "powerset" of an iterable is an itertool recipe: import itertools def powerset(iterable): "powerset([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)" s = list(iterable) return itertools.chain.from_iterable(itertools.combinations(s, r) for r in range(len(s)+1)) It is also in package more_itertools: import more_itertools print(list(more_itertools.powerset([1,2,3,4]))) # [(), (1,), (2,), (3,), (4,), (1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4), (1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4), (1, 2, 3, 4)] | 7 | 6 |
64,132,044 | 2020-9-30 | https://stackoverflow.com/questions/64132044/how-to-replace-none-in-the-list-with-previous-value | I want to replace the None in the list with the previous variables (for all the consecutive None). I did it with if and for (multiple lines). Is there any way to do this in a single line? i.e., List comprehension, Lambda and or map And my idea was using the list comprehension but I was not able to assign variables in a list comprehension to set a previous value. I have got a similar scenario in my project to handle None in such a way, the thing is I don't want to write 10 lines of code for the small functionality. def none_replace(ls): ret = [] prev_val = None for i in ls: if i: prev_val = i ret.append(i) else: ret.append(prev_val) return ret print('Replaced None List:', none_replace([None, None, 1, 2, None, None, 3, 4, None, 5, None, None])) Output: Replaced None List: [None, None, 1, 2, 2, 2, 3, 4, 4, 5, 5, 5] | In Python 3.8 or higher you can do this using the assignment operator: def none_replace(ls): p = None return [p:=e if e is not None else p for e in ls] | 13 | 12 |
64,109,483 | 2020-9-28 | https://stackoverflow.com/questions/64109483/how-to-recognize-if-string-is-human-name | So I have some text data that's been messily parsed, and due to that I get names mixed in with the actual data. Is there any kind of package/library that helps identify whether a word is a name or not? (In this case, I would be assuming US/western/euro-centric names) Otherwise, what would be a good way to flag this? Maybe train a model on a corpus of names and assign each word in the dataset a classification? Just not sure the best way to approach this problem/what kind of model would be suited, or if a solution already exists | import nltk from nltk.tag.stanford import NERTagger st = NERTagger('stanford-ner/all.3class.distsim.crf.ser.gz', 'stanford-ner/stanford-ner.jar') text = """YOUR TEXT GOES HERE""" for sent in nltk.sent_tokenize(text): tokens = nltk.tokenize.word_tokenize(sent) tags = st.tag(tokens) for tag in tags: if tag[1]=='PERSON': print(tag) via Improving the extraction of human names with nltk | 7 | 6 |
64,090,762 | 2020-9-27 | https://stackoverflow.com/questions/64090762/how-can-i-make-the-short-circuiting-of-pythons-any-and-all-functions-effect | Python's any and all built-in functions are supposed to short-circuit, like the logical operators or and and do. However, suppose we have a function definition like so: def func(s): print(s) return True and use it to build a list of values passed to any or all: >>> any([func('s'), func('t')]) 's' 't' True Since the list must be constructed before any is called, the function is also evaluated ahead of time, effectively defeating the short-circuiting. If the function calls are expensive, evaluating all the functions up front is a big loss and is a waste of this ability of any. Knowing that any accepts any kind of iterable, how can we defer the evaluation of func, so that the short-circuiting of any prevents calling func(t)? | We can use a generator expression, passing the functions and their arguments separately and evaluating only in the generator like so: >>> any(func(arg) for arg in ('s', 't')) 's' True For different functions with different signatures, this could look like the following: any( f(*args) for f, args in [(func1, ('s',)), (func2, (1, 't'))] ) That way, any will stop iterating over the generator as soon as one function call evaluates to True, and that means that the function evaluation is fully lazy. Another neat way to postpone the function evaluation is to use lambda expressions, like so: >>> any( ... f() ... for f in [lambda: func('s'), lambda: func('t')] ... ) 's' True | 20 | 31 |
64,101,194 | 2020-9-28 | https://stackoverflow.com/questions/64101194/partial-fraction-decomposition-using-sympy-python | How do I find the constants A,B,C,D,K,S such that 1/(x**6+1) = (A*x+B)/(x**2+1) + (C*x+D)/(x**2-sqrt(3)*x+1) + (K*x+S)/(x**2+sqrt(3)*x+1) is true for every real x. I need some sympy code maybe, not sure. Or any other Python lib which could help here. I tried by hand but it's not easy at all: after 1 hour of calculating, I found that I have probably made some mistake. I tried partial fraction decomposition in SymPy but it does not go that far. I tried Wolfram Alpha too, but it also does not decompose to that level of detail, it seems. WA attempt See the alternate forms which WA gives below. Edit I did a second try entirely by hand and I got these: A = 0 B = 1/3 C = -1/(2*sqrt(3)) D = 1/3 K = 1/(2*sqrt(3)) S = 1/3 How can I verify if these are correct? Edit 2 The main point of my question is: how to do this with some nice/reusable Python code? | You can do this using apart in sympy but apart will look for a rational factorisation by default so you have to tell it to work in Q(sqrt(3)): In [37]: apart(1/(x**6+1)) Out[37]: -(x**2 - 2)/(3*(x**4 - x**2 + 1)) + 1/(3*(x**2 + 1)) In [36]: apart(1/(x**6+1), extension=sqrt(3)) Out[36]: -(sqrt(3)*x - 2)/(6*(x**2 - sqrt(3)*x + 1)) + (sqrt(3)*x + 2)/(6*(x**2 + sqrt(3)*x + 1)) + 1/(3*(x**2 + 1)) EDIT: The comments are asking for a way to find this in more general cases without needing to know that sqrt(3) generates the extension. We can use .apart(full=True) to compute the full PFE over linear factors: In [29]: e = apart(1/(x**6+1), full=True).doit() In [30]: e Out[30]: (sqrt(3)/2 + I/2)/(6*(x + sqrt(3)/2 + I/2)) + (sqrt(3)/2 - I/2)/(6*(x + sqrt(3)/2 - I/2)) - (sqrt(3)/2 - I/2)/(6*(x - sqrt(3)/2 + I/2)) - (sqrt(3)/2 + I/2)/(6*(x - sqrt(3)/2 - I/2)) + I/(6*(x + I)) - I/(6*(x - I)) This is not what is wanted because you want to have real quadratic denominators rather than introducing complex numbers. The terms here come in complex conjugate pairs though so we can combine the pairs: In [46]: terms1 = [] In [47]: for a in terms: ...: if conjugate(a) not in terms1: ...: terms1.append(a) ...: In [49]: terms_real = [(t+t.conjugate()) for t in terms1] In [51]: Add(*(factor(cancel(t)) for t in terms_real)) Out[51]: -(sqrt(3)*x - 2)/(6*(x**2 - sqrt(3)*x + 1)) + (sqrt(3)*x + 2)/(6*(x**2 + sqrt(3)*x + 1)) + 1/(3*(x**2 + 1)) Note that in general there might not be any simple expressions for the roots (Abel-Ruffini) so this kind of expression for the partial fraction expansion will not succeed for all possible denominator polynomials. This is why .apart by default computes an expansion over irreducible denominators (something that can always succeed). | 9 | 5 |
64,070,050 | 2020-9-25 | https://stackoverflow.com/questions/64070050/how-to-get-a-list-of-installed-windows-fonts-using-python | How do I get a list of all the font names that are on my computer's system? | This is just a matter of listing the files in Windows\fonts: import os print(os.listdir(r'C:\Windows\fonts')) The output is a list that starts with something that looks like this: ['arial.ttf', 'arialbd.ttf', 'arialbi.ttf', 'cambria.ttc', 'cambriab.ttf' | 9 | 6 |
64,062,225 | 2020-9-25 | https://stackoverflow.com/questions/64062225/load-markdown-file-on-a-jupyter-notebook-cell | I know about the existance of the %load markdown_file.md magic command but this will load the content of the file on the first run of the cell. If the file changes, the cell won't be updated. Does anyone know if it is possible to avoid this problem and load the content of the file each time the cell runs? | If you want to load a markdown every time a cell is run, you can do: from IPython.display import Markdown, display display(Markdown("markdown_file.md")) | 12 | 18 |
64,127,075 | 2020-9-29 | https://stackoverflow.com/questions/64127075/how-to-retrieve-partial-matches-from-a-list-of-strings | For approaches to retrieving partial matches in a numeric list, go to: How to return a subset of a list that matches a condition? Python: Find in list But if you're looking for how to retrieve partial matches for a list of strings, you'll find the best approaches concisely explained in the answer below. SO: Python list lookup with partial match shows how to return a bool, if a list contains an element that partially matches (e.g. begins, ends, or contains) a certain string. But how can you return the element itself, instead of True or False Example: l = ['ones', 'twos', 'threes'] wanted = 'three' Here, the approach in the linked question will return True using: any(s.startswith(wanted) for s in l) So how can you return the element 'threes' instead? | startswith and in, return a Boolean. The in operator is a test of membership. This can be performed with a list-comprehension or filter. Using a list-comprehension, with in, is the fastest implementation tested. If case is not an issue, consider mapping all the words to lowercase. l = list(map(str.lower, l)). Tested with python 3.11.0 filter: Using filter creates a filter object, so list() is used to show all the matching values in a list. l = ['ones', 'twos', 'threes'] wanted = 'three' # using startswith result = list(filter(lambda x: x.startswith(wanted), l)) # using in result = list(filter(lambda x: wanted in x, l)) print(result) [out]: ['threes'] list-comprehension l = ['ones', 'twos', 'threes'] wanted = 'three' # using startswith result = [v for v in l if v.startswith(wanted)] # using in result = [v for v in l if wanted in v] print(result) [out]: ['threes'] Which implementation is faster? Tested in Jupyter Lab using the words corpus from nltk v3.7, which has 236736 words Words with 'three' ['three', 'threefold', 'threefolded', 'threefoldedness', 'threefoldly', 'threefoldness', 'threeling', 'threeness', 'threepence', 'threepenny', 'threepennyworth', 'threescore', 'threesome'] from nltk.corpus import words %timeit list(filter(lambda x: x.startswith(wanted), words.words())) %timeit list(filter(lambda x: wanted in x, words.words())) %timeit [v for v in words.words() if v.startswith(wanted)] %timeit [v for v in words.words() if wanted in v] %timeit results 62.8 ms ± 816 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) 53.8 ms ± 982 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) 56.9 ms ± 1.33 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) 47.5 ms ± 1.04 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) | 31 | 50 |
64,063,732 | 2020-9-25 | https://stackoverflow.com/questions/64063732/type-hint-for-return-return-none-and-no-return-at-all | Is there any difference in the return type hint amongst these three functions? def my_func1(): print("Hello World") return None def my_func2(): print("Hello World") return def my_func3(): print("Hello World") Should they all have -> None return type hint, since that's what they in fact return, explicitly or implicitly? Or should my_func2 or my_func3 have literally no return type hint? Motivations for asking are this question, and this nice answer, and the fact that I'm learning Type Hints. | They all should have the -> None return type, since they all clearly return None. Note that there also exists the typing.NoReturn type for functions that actually never return anything, e.g. from typing import NoReturn def raise_err() -> NoReturn: raise AssertionError("oops an error") Other types of functions (pointed out by @chepner) that actually never return and thus should be type hinted with -> NoReturn would be for example an event loop that only ends using sys.exit or any of the os.exec* functions Or should my_func2 or my_func3 have literally no return type hint? In my opinion, they should always have a type hint, since as @yedpodtrzitko said in their answer, functions with no type hints are by default not type checked by Mypy at all, and their return values are basically treated as if they would've been typed to be Any. This greatly reduces the benefits of type checking, and that's one of the reasons I always use the Mypy setting disallow_untyped_defs = True for new projects. | 28 | 37 |
64,043,297 | 2020-9-24 | https://stackoverflow.com/questions/64043297/how-can-i-listen-to-windows-10-notifications-in-python | My Python test script causes our product to raise Windows notifications ("Toasts"). How can my python script verify that the notifications are indeed raised? I see it's possible to make a notification listener in C# using Windows.UI.Notifications.Management.UserNotificationListener (ref), And I see I can make my own notifications in Python using win10toast - but how do I listen to othe apps' notifications? | You can use pywinrt to access the bindings in python. A basic example would look something like this: from winrt.windows.ui.notifications.management import UserNotificationListener, UserNotificationListenerAccessStatus from winrt.windows.ui.notifications import NotificationKinds, KnownNotificationBindings if not ApiInformation.is_type_present("Windows.UI.Notifications.Management.UserNotificationListener"): print("UserNotificationListener is not supported on this device.") exit() listener = UserNotificationListener.get_current() accessStatus = await listener.request_access_async() if accessStatus != UserNotificationListenerAccessStatus.ALLOWED: print("Access to UserNotificationListener is not allowed.") exit() def handler(listener, event): notification = listener.get_notification(event.user_notification_id) # get some app info if available if hasattr(notification, "app_info"): print("App Name: ", notification.app_info.display_info.display_name) listener.add_notification_changed(handler) | 9 | 3 |
64,053,068 | 2020-9-24 | https://stackoverflow.com/questions/64053068/how-to-type-hint-a-functions-optional-return-parameter | How do I type hint an optional output parameter: def myfunc( x: float, return_y: bool = False ) -> float, Optional[float] : # !!! WRONG !!! # ... code here # # y and z are floats if return_y: return z, y return z --- edit This seem to work: -> Tuple[float, Union[None, float]] : but that is so ugly and seems to overwhelm the fact that typically it will only return a simple float. Is that the correct way to do this? See answer below for correct way. --- edit 2 This question was flagged as duplicate of that question. However, that question is about Union return type, while this question is about Optional return type. Note: this design is not a good practice and should be avoided in favour of a consistent return type. Still, in case it has to be done, by optional output parameter it is meant a parameter that may not be returned depending on an input argument's flag. | Since Python 3.10 and PEP 604 you now can use | instead of Union. The return type would be float | Tuple[float, float] The right type hint would be: from typing import Tuple, Union def myfunc(x: float, return_y: bool = False) -> Union[float, Tuple[float, float]]: z = 1.5 if return_y: y = 2.0 return z, y return z However, it is usually not a good practice to have these kinds of return. Either return something like Tuple[float, Optional[float]] or write multiple functions, it will be much easier to handle later on. More about return statement consistency: PEP8 - Programming Recommendations Be consistent in return statements. Either all return statements in a function should return an expression, or none of them should. If any return statement returns an expression, any return statements where no value is returned should explicitly state this as return None, and an explicit return statement should be present at the end of the function (if reachable). Why should functions return values of a consistent type? | 8 | 7 |
64,097,426 | 2020-9-28 | https://stackoverflow.com/questions/64097426/is-there-unstack-in-numpy | There is np.stack in NumPy, but is there an opposite np.unstack same as tf.unstack? | Coming across this late, here is a much simpler answer: def unstack(a, axis=0): return np.moveaxis(a, axis, 0) # return list(np.moveaxis(a, axis, 0)) As a bonus, the result is still a numpy array. The unwrapping happens if you just python-unwrap it: A, B, = unstack([[1, 2], [3, 4]], axis=1) assert list(A) == [1, 3] assert list(B) == [2, 4] Unsurprisingly, it is also the fastest: # np.squeeze ❯ python -m timeit -s "import numpy as np; a=np.array(np.meshgrid(np.arange(1000), np.arange(1000)));" "C = [np.squeeze(e, 1) for e in np.split(a, a.shape[1], axis = 1)]" 100 loops, best of 5: 2.64 msec per loop # np.take ❯ python -m timeit -s "import numpy as np; a=np.array(np.meshgrid(np.arange(1000), np.arange(1000)));" "C = [np.take(a, i, axis = 1) for i in range(a.shape[1])]" 50 loops, best of 5: 5.08 msec per loop # np.moveaxis ❯ python -m timeit -s "import numpy as np; a=np.array(np.meshgrid(np.arange(1000), np.arange(1000)));" "C = np.moveaxis(a, 1, 0)" 100000 loops, best of 5: 3.89 usec per loop # list(np.moveaxis) ❯ python -m timeit -s "import numpy as np; a=np.array(np.meshgrid(np.arange(1000), np.arange(1000)));" "C = list(np.moveaxis(a, 1, 0))" 1000 loops, best of 5: 205 usec per loop | 10 | 12 |
64,084,033 | 2020-9-27 | https://stackoverflow.com/questions/64084033/modern-2020-way-to-call-c-code-from-python | I am trying to call a C++ function from a Python script. I have seen different solutions on Stackoverflow from 2010-2015 but they are all using complicated packages and was hoping for something easier/newer and more sophisticated. The C++ function I am trying to call takes in a double variable and returns a double. double foo(double var1){ double result = ... return result; } | Python has ctypes package which allows calling functions in DLLs or shared libraries. Compile your C++ project into a shared library (.so) on Linux, or DLL on Windows. Export the functions you wish to expose outside. C++ supports function overloading, to avoid ambiguity in the binary code, additional information is added to function names, known as name mangling. To ensure no name is changed, place inside an extern "C" block. More on importance of extern "C" at the end! Demo: In this dummy demo, our library has a single function, taking an int and printing it. lib.cpp #include <iostream> int Function(int num) { std::cout << "Num = " << num << std::endl; return 0; } extern "C" { int My_Function(int a) { return Function(a); } } We will compile this into a shared object first g++ -fPIC -shared -o libTest.so lib.cpp Now we will utilized ctypes, to load the shared object/dll and functions. myLib.py import ctypes import sys import os dir_path = os.path.dirname(os.path.realpath(__file__)) handle = ctypes.CDLL(dir_path + "/libTest.so") handle.My_Function.argtypes = [ctypes.c_int] def My_Function(num): return handle.My_Function(num) For our test, we will call the function with num = 16 test.py from myLib import * My_Function(16) The expected out as well. EDIT: Comment section does not understand the importance of extern "C". As already explained above, C++ supports function overloading and additional information is added to function names known as name mangling. Consider the following library #include <iostream> int My_Function(int num) { std::cout << "Num = " << num << std::endl; return 0; } Compiled the same: g++ -fPIC -shared -o libTest.so lib.cpp. Listing the exported symbols with nm -gD libTest.so results in: Notice how the function name in the exported symbols is changed to _Z11My_Functioni. Running test.py now fails as it can not find the symbol. You'd have to change myLib.py to reflect the change. However, you do not compile your library, take a look at the resulting symbol and build your module extension because there's no guarantee that re-compiling in the future with different version and additional code will result in the same symbol names. This is why one uses extern "C". Notice how in the first code the function name is unchanged. | 21 | 31 |
64,092,280 | 2020-9-27 | https://stackoverflow.com/questions/64092280/aws-lambda-python-multiple-files-application-cant-import-one-from-another | I have the following structure of my AWS lambda project: module app.py b.py app.py is my default aws lambda function with lambda_handler, it works fine. I decided to pull all the heavy calculations out of it to function calc of b.py. Then, I imported it to app.py: from module.b import calc Now, when I run it locally with sam local invoke Function --event events/event.json, it raises an error: {"errorType":"Runtime.ImportModuleError","errorMessage":"Unable to import module 'app': No module named 'module'"} It seems to me that when it prepares the code to run, it moves the files to some other directory, so the imports break. To fix this, I tried to use relative import: from .b import calc But it also raised an error: {"errorType":"Runtime.ImportModuleError","errorMessage":"Unable to import module 'app': attempted relative import with no known parent package"} How do I setup a multi-file python application on aws lambda? | This is how me resolve that problem. First your root folder need to seem like this: lambda_folder lambda_function.py // Or your main.py.... this file have the method lambda_handler Now... When I use multiple files... I always use a lib folder. Like this: lambda_folder lib lib1.py lib2.py lib3.py lambda_function.py IMPORTANT Inside your lib folder you always need an __init__.py or you can't see the files inside. lambda_folder lib lib1.py lib2.py lib3.py __init__.py lambda_function.py NOTE: the __init__.py needs to have the two underscores before and after init. EXAMPLE lib1.py def sum(a,b): return a+b lambda_function.py from lib import lib1 import json def lambda_handler(event, context): result = lib.sum(5,4) return { "statusCode": 200, "body": "hi " + result } And that's all. | 16 | 23 |
64,095,094 | 2020-9-28 | https://stackoverflow.com/questions/64095094/command-python-setup-py-egg-info-failed-with-error-code-1-in-tmp | I got the following error installing a dependency with pip: pip9.exceptions.InstallationError Command "python setup.py egg_info" failed with error code 1 in /tmp/tmpoons7qgkbuild/opencv-python/ Below is the result of running the command pipenv install opencv-python on a recent linux (5.4.0 x64) system. Locking [packages] dependencies⦠self.repository.get_dependencies(ireq): File "/usr/lib/python3/dist-packages/pipenv/patched/piptools/repositories/pypi.py", line 174, in get_dependencies legacy_results = self.get_legacy_dependencies(ireq) File "/usr/lib/python3/dist-packages/pipenv/patched/piptools/repositories/pypi.py", line 222, in get_legacy_dependencies result = reqset._prepare_file(self.finder, ireq, ignore_requires_python=True) File "/usr/lib/python3/dist-packages/pipenv/patched/notpip/req/req_set.py", line 644, in _prepare_file abstract_dist.prep_for_dist() File "/usr/lib/python3/dist-packages/pipenv/patched/notpip/req/req_set.py", line 134, in prep_for_dist self.req_to_install.run_egg_info() File "/usr/lib/python3/dist-packages/pipenv/vendor/pip9/req/req_install.py", line 435, in run_egg_info call_subprocess( File "/usr/lib/python3/dist-packages/pipenv/vendor/pip9/utils/__init__.py", line 705, in call_subprocess raise InstallationError( pip9.exceptions.InstallationError: Command "python setup.py egg_info" failed with error code 1 in /tmp/tmpoons7qgkbuild/opencv-python/ | How to fix the pip9.exceptions.InstallationError Make sure the version of your pip and setuptools is sufficient for manylinux2014 wheels. A) System Install sudo python3 -m pip install -U pip sudo python3 -m pip install -U setuptools B) Virtual Env / Pipenv # Within the venv pip3 install -U pip pip3 install -U setuptools Explanation For me, python setup.py egg_info probably failed because of a recent change in python wheels, as manylinux1 wheels were replaced by manylinux2014 wheels according to open-cv faq. | 48 | 89 |
64,080,277 | 2020-9-26 | https://stackoverflow.com/questions/64080277/how-to-get-the-most-recent-message-of-a-channel-in-discord-py | Is there a way to get the most recent message of a specific channel using discord.py? I looked at the official docs and didn't find a way to. | I've now figured it out by myself: For a discord.Client class you just need these lines of code for the last message: (await self.get_channel(CHANNEL_ID).history(limit=1).flatten())[0] If you use a discord.ext.commands.Bot @thegamecracks' answer is correct. | 9 | 12 |
64,046,773 | 2020-9-24 | https://stackoverflow.com/questions/64046773/return-database-name-memory-or-mode-memory-in-database-name-typeerror | I am practicing Django but when I command python manage.py makemigration and python manage.py migrate then I got an error as show in the title. the full error is mentioned below: C:\Users\Manan\python projects\djangoandmongo\new_Socrai>python manage.py migrate Operations to perform: Apply all migrations: admin, auth, contenttypes, sessions Running migrations: Applying contenttypes.0001_initial... OK Applying auth.0001_initial... OK Applying admin.0001_initial... OK Applying admin.0002_logentry_remove_auto_add... OK Applying admin.0003_logentry_add_action_flag_choices... OK Applying contenttypes.0002_remove_content_type_name... OK Applying auth.0002_alter_permission_name_max_length... OK Applying auth.0003_alter_user_email_max_length... OK Applying auth.0004_alter_user_username_opts... OK Applying auth.0005_alter_user_last_login_null... OK Applying auth.0006_require_contenttypes_0002... OK Applying auth.0007_alter_validators_add_error_messages... OK Applying auth.0008_alter_user_username_max_length... OK Applying auth.0009_alter_user_last_name_max_length... OK Applying auth.0010_alter_group_name_max_length... OK Applying auth.0011_update_proxy_permissions... OK Applying sessions.0001_initial... OK Traceback (most recent call last): File "manage.py", line 22, in <module> main() File "manage.py", line 18, in main execute_from_command_line(sys.argv) File "C:\Users\Manan\python projects\djangoandmongo\dandm_env\lib\site-packages\django\core\management\__init__.py", line 401, in execute_from_command_line utility.execute() File "C:\Users\Manan\python projects\djangoandmongo\dandm_env\lib\site-packages\django\core\management\__init__.py", line 395, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "C:\Users\Manan\python projects\djangoandmongo\dandm_env\lib\site-packages\django\core\management\base.py", line 341, in run _from_argv connections.close_all() File "C:\Users\Manan\python projects\djangoandmongo\dandm_env\lib\site-packages\django\db\utils.py", line 230, in close_all connection.close() File "C:\Users\Manan\python projects\djangoandmongo\dandm_env\lib\site-packages\django\utils\asyncio.py", line 26, in inner return func(*args, **kwargs) File "C:\Users\Manan\python projects\djangoandmongo\dandm_env\lib\site-packages\django\db\backends\sqlite3\base.py", line 261, in close if not self.is_in_memory_db(): File "C:\Users\Manan\python projects\djangoandmongo\dandm_env\lib\site-packages\django\db\backends\sqlite3\base.py", line 380, in is_in_memory_db return self.creation.is_in_memory_db(self.settings_dict['NAME']) File "C:\Users\Manan\python projects\djangoandmongo\dandm_env\lib\site-packages\django\db\backends\sqlite3\creation.py", line 12, in is_in_memory_db return database_name == ':memory:' or 'mode=memory' in database_name TypeError: argument of type 'WindowsPath' is not iterable | It seems like the setting DATABASES - NAME expects a string, not a Path object. In your settings try changing this line 'NAME': BASE_DIR / 'db.sqlite3', to 'NAME': str(BASE_DIR / 'db.sqlite3'), so that NAME is a string instead of a Path. The error comes from this line of code django/db/backends/sqlite3/creation.py#L13 and it seems that this commit solves the issue, so in Django v3.1.1 there is no need to use 'NAME': str(BASE_DIR / 'db.sqlite3'), anymore, just using 'NAME': BASE_DIR / 'db.sqlite3', should sufice. | 12 | 30 |
64,118,680 | 2020-9-29 | https://stackoverflow.com/questions/64118680/reload-flag-with-uvicorn-can-we-exclude-certain-code | Is it somehow possible to exclude certain part of the code when reloading the scrip with --reload flag? uvicorn main:app --reload Use case: I have a model which takes a lot of time loading so I was wondering if there is a way to ignore that line of code when reloading. Or is it just impossible? | Update Uvicorn now supports including/excluding certain directories/files to/from watchlist. --reload-include TEXT Set glob patterns to include while watching for files. Includes '*.py' by default, which can be overridden in reload-excludes. --reload-exclude TEXT Set glob patterns to exclude while watching for files. Includes '.*, .py[cod], .sw.*, ~*' by default, which can be overridden in reload-excludes. No there is no way to exclude something, however you can be explicit in what you want to be looked at with the --reload-dir flag: --reload-dir TEXT Set reload directories explicitly, instead of using the current working directory. in https://www.uvicorn.org/#command-line-options | 19 | 23 |
64,099,259 | 2020-9-28 | https://stackoverflow.com/questions/64099259/ansible-ansible-python-interpreter-error | I want to instal influxdb and configuration with ansible. File copy and influxdb configuration is ok But creating database and user create section is give a "ansible_python_interpreter" error. I searched this error and tried something but I can't solve this problem with myself This is my ansible hosts file [loadbalancer] lb ansible_host=192.168.255.134 [loadbalancer:vars] ansible_python_interpreter="/usr/bin/python3" #ansible_python_interpreter="/usr/bin/env python" #ansible_python_interpreter="/usr/libexec/platform-python" This is my yaml file # influxdb install and configuration --- - hosts: lb become: true tasks: - name: Copy Repo Files copy: src: ./files/influxdb.j2 dest: /etc/yum.repos.d/influxdb.repo remote_src: no - name: Install Influxdb yum: name: influxdb state: latest notify: influxdb_ok - name: Crete Database community.general.influxdb_database: hostname: 192.168.255.134 database_name: deneme - name: Create User community.general.influxdb_user: user_name: deneme_user user_password: deneme123 handlers: - name: Start Influx Service service: name: influxdb state: started enabled: yes listen: influxdb_ok I was tried to install python3 remote vm(lb). I was tried to change interpreter parameters. I was tried to install requests module with pip3. [root@centos8 influx]# ansible-playbook influxdb.yaml -K BECOME password: PLAY [lb] ********************************************************************************************* TASK [Gathering Facts] ******************************************************************************** ok: [lb] TASK [Copy Repo Files] ******************************************************************************** ok: [lb] TASK [Install Influxdb] ******************************************************************************* ok: [lb] TASK [Crete Database] ********************************************************************************* An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ModuleNotFoundError: No module named 'requests' fatal: [lb]: FAILED! => {"changed": false, "msg": "Failed to import the required Python library (requests) on loadbalancer.servicepark.local's Python /usr/bin/python3. Please read the module documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter"} PLAY RECAP ******************************************************************************************** lb                        : ok=3 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 I was tried to install requests module and currently ansible version Right now my ansible machine versions [root@centos8 influx]# python3 --version Python 3.6.8 [root@centos8 influx]# ansible --version ansible 2.10.1 config file = /etc/ansible/ansible.cfg configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.6/site-packages/ansible executable location = /usr/bin/ansible python version = 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)] lb vm's versions [root@loadbalancer ~]# influx --version InfluxDB shell version: 1.8.2 [root@loadbalancer ~]# python3 --version Python 3.6.8 | There are 3 ways to solve this problem if you encounter it on your remote host: Set ansible_python_interpreter: /usr/bin/python3 variable for all hosts that have python3 installed by default Install Python 2 using Ansible's raw module Symlink /usr/bin/python3 to /usr/bin/python using Ansible's raw module. All 3 options can be done in Ansible, without sshing into the host. example - name: misc task on ubuntu 18.04 instance hosts: "*" vars: ansible_python_interpreter: /usr/bin/python3 tasks: - debug: var=ansible_host Option 3 - Symlink /usr/bin/python -> /usr/bin/python3 using Ansible's raw module Another option in a similar vein to option 2 is to use the raw module to "symlink" /usr/bin/python -> /usr/bin/python3. With a bit of shell magic, we can fashion a command to do this conditionally based on whether either of the files exist using conditionals: if [ -f /usr/bin/python3 ] && [ ! -f /usr/bin/python ]; then ln --symbolic /usr/bin/python3 /usr/bin/python; fi | 6 | 16 |
64,098,376 | 2020-9-28 | https://stackoverflow.com/questions/64098376/getting-oserror-202-where-running-urequests-get-from-micropy | hi im having error with this code but it runs in python shell could any body help me from machine import Pin import time import network import urequests p0 = Pin(0,Pin.OUT) wlan = network.WLAN(network.STA_IF) wlan.active(True) wlan.connect('ssid', 'pass') response = urequests.get('http://jsonplaceholder.typicode.com/albums/1') while True: ans = response.json()['userId'] p0.value(1) time.sleep(1) p0.off() time.sleep(1) print('ok') and this is the error: Traceback (most recent call last): File "<stdin>", line 9, in <module> File "urequests.py", line 108, in get File "urequests.py", line 53, in request OSError: -202 | Your issue (my guess) is that you begin to urequest.get() without connected to WiFi. Create function that do wifi connection and call it def do_connect(): import network wlan = network.WLAN(network.STA_IF) wlan.active(True) if not wlan.isconnected(): print('connecting to network...') wlan.connect('essid', 'password') while not wlan.isconnected(): pass print('network config:', wlan.ifconfig()) Explain: wlan.connect() is asynchronous function and you have to wait, while it connects to wifi and only then continue with urequest.get() | 7 | 9 |
64,128,255 | 2020-9-29 | https://stackoverflow.com/questions/64128255/pyjwt-wont-import-jwt-algorithms-modulenotfounderror-no-module-named-jwt-alg | For some reason, PyJTW doesn't seem to work on my virtualenv on Ubuntu 16.04, but it worked fine on my local Windows machine (inside a venv too). I'm clueless, I've tried different versions, copied the exact same versions as I had on my Windows machine, and yet I still couldn't get this to work. Installed packages: Package Version -------------------------- --------- aiohttp 3.6.2 async-timeout 3.0.1 attrs 20.2.0 cachetools 4.1.1 certifi 2020.6.20 cffi 1.14.3 chardet 3.0.4 click 7.1.2 cryptography 2.9.2 DateTime 4.3 discord.py 1.5.0 Flask 1.1.2 Flask-Discord 0.1.61 flask-oidc 1.4.0 flask-oidc2 1.5.0 httplib2 0.18.1 idna 2.10 itsdangerous 1.1.0 Jinja2 2.11.2 jwt 1.0.0 MarkupSafe 1.1.1 multidict 4.7.6 mysql-connector-python 8.0.21 mysql-connector-repackaged 0.3.1 oauth2client 4.1.3 oauthlib 3.1.0 pip 20.2.3 protobuf 3.13.0 pyasn1 0.4.8 pyasn1-modules 0.2.8 pycparser 2.20 PyJWT 1.7.1 pytz 2020.1 requests 2.24.0 requests-oauthlib 1.3.0 rsa 4.6 schedule 0.6.0 setuptools 50.3.0 six 1.15.0 typing-extensions 3.7.4.3 urllib3 1.25.10 Werkzeug 1.0.1 wheel 0.35.1 yarl 1.6.0 zope.interface 5.1.0 The error: [2020-09-29 21:58:44 +0000] [2036] [ERROR] Exception in worker process Traceback (most recent call last): File "/usr/lib/python3.7/site-packages/gunicorn/arbiter.py", line 583, in spaw n_worker worker.init_process() File "/usr/lib/python3.7/site-packages/gunicorn/workers/base.py", line 119, in init_process self.load_wsgi() File "/usr/lib/python3.7/site-packages/gunicorn/workers/base.py", line 144, in load_wsgi self.wsgi = self.app.wsgi() File "/usr/lib/python3.7/site-packages/gunicorn/app/base.py", line 67, in wsgi self.callable = self.load() File "/usr/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 49, in l oad return self.load_wsgiapp() File "/usr/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 39, in l oad_wsgiapp return util.import_app(self.app_uri) File "/usr/lib/python3.7/site-packages/gunicorn/util.py", line 358, in import_ app mod = importlib.import_module(module) File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1006, in _gcd_import File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/soro/soros-dashboard/wsgi.py", line 1, in <module> from app import app File "/home/soro/soros-dashboard/app.py", line 9, in <module> import keycloak File "/home/soro/soros-dashboard/keycloak.py", line 4, in <module> from jwt.algorithms import RSAAlgorithm ModuleNotFoundError: No module named 'jwt.algorithms' I'm running Python 3.7.7. | I had the same issue. The error seems to be a conflict between the pyjwt and jwt modules (as mentioned by @vimalloc above). What worked for me was to run the following command (NOTE: I am using Python 3.6.10). pip3 install -U pyjwt | 10 | 7 |
64,063,850 | 2020-9-25 | https://stackoverflow.com/questions/64063850/azure-python-sdk-serviceprincipalcredentials-object-has-no-attribute-get-tok | So I have the following Python3 script to list all virtual machines. import os, json from azure.mgmt.compute import ComputeManagementClient from azure.mgmt.network import NetworkManagementClient from azure.mgmt.resource import ResourceManagementClient, SubscriptionClient from azure.common.credentials import ServicePrincipalCredentials credentials = ServicePrincipalCredentials( client_id="xxx", secret="xxx", tenant="xxx" ) resource_client = ResourceManagementClient(credentials, "my-subscription") compute_client = ComputeManagementClient(credentials, "my-subscription") network_client = NetworkManagementClient(credentials, "my-subscription") for vm in compute_client.virtual_machines.list_all(): print("\tVM: {}".format(vm.name)) but for some reason, I get the following error: Traceback (most recent call last): File "/Users/me/a/azure-test.py", line 17, in <module> for vm in compute_client.virtual_machines.list_all(): ... File "/usr/local/lib/python3.8/site-packages/azure/core/pipeline/policies/_authentication.py", line 93, in on_request self._token = self._credential.get_token(*self._scopes) AttributeError: 'ServicePrincipalCredentials' object has no attribute 'get_token' Am I doing something wrong? | The Azure libraries for Python are currently being updated to share common cloud patterns such as authentication protocols, logging, tracing, transport protocols, buffered responses, and retries. This would change the Authentication mechanism a bit as well. In the older version, ServicePrincipalCredentials in azure.common was used for authenticating to Azure and creating a service client. In the newer version, the authentication mechanism has been re-designed and replaced by azure-identity library in order to provide unified authentication based on Azure Identity for all Azure SDKs. Run pip install azure-identity to get the package. In terms of code, what then was: from azure.common.credentials import ServicePrincipalCredentials from azure.mgmt.compute import ComputeManagementClient credentials = ServicePrincipalCredentials( client_id='xxxxx', secret='xxxxx', tenant='xxxxx' ) compute_client = ComputeManagementClient( credentials=credentials, subscription_id=SUBSCRIPTION_ID ) is now: from azure.identity import ClientSecretCredential from azure.mgmt.compute import ComputeManagementClient credential = ClientSecretCredential( tenant_id='xxxxx', client_id='xxxxx', client_secret='xxxxx' ) compute_client = ComputeManagementClient( credential=credential, subscription_id=SUBSCRIPTION_ID ) You can then use the list_all method with compute_client to list all VMs as usual: # List all Virtual Machines in the specified subscription def list_virtual_machines(): for vm in compute_client.virtual_machines.list_all(): print(vm.name) list_virtual_machines() References: Azure SDK for Python on GitHub Migration Guide - Resource Management How to authenticate and authorize Python apps on Azure Example: Use the Azure libraries to provision a virtual machine | 26 | 50 |
64,095,396 | 2020-9-28 | https://stackoverflow.com/questions/64095396/detecting-collisions-between-polygons-and-rectangles-in-pygame | So I am trying to make an among us type game with pygame. I just started, so I don't have much of anything and am working on the map right now. However, one thing I'm struggling with is the collision logic. The map has an elongated octagon shape for now, but I think no matter the shape I will use something like a pygame polygon. When I ran the code I have now, which checks for a collision between my player (pygame rectangle) and the walls (pygame polygon) it says: TypeError: Argument must be rect style object I've figured out this is because of the pygame polygon returning a rectangle, but not being classified that way in the collision checker. I have tried a library called collision, and credit to the collision detection for giving a great effort, but the player was still able to glitch through the walls. Sidenote: I saved the code where I used this library if anyone wants to see it and maybe improve upon my faults. Anyway, to boil it all down: I need a way to detect collisions (really, really preferably in pygame) between polygons and rectangles Thank you for any help you can give and if you have a question/request please leave a comment. Heres my code: import pygame pygame.init() W, H=500, 500 screen = pygame.display.set_mode([500, 500]) running = True bcg=(200, 200, 200) red=(255, 0 ,0) purp=(255, 0, 255) wall=(100, 100, 100) class player: def bg(self): screen.fill(bcg) x,y=self.x,self.y self.outer=( (x,y), (x+800, y), (x+1200, y+200), (x+1200, y+600), (x+800, y+800), (x, y+800), (x-400, y+600), (x-400, y+200), (x,y), (x, y+50), (x-350, y+225), (x-350, y+575), (x, y+750), (x+800, y+750), (x+1150, y+575), (x+1150, y+225), (x+800, y+50), (x, y+50) ) pygame.draw.polygon(screen, wall, self.outer) def __init__(self, color, size=20, speed=0.25): self.x=0 self.y=0 self.col=color self.size=size self.speed=speed def draw(self): s=self.size self.rect=pygame.Rect(W/2-s/2, H/2-s/2, self.size, self.size) pygame.draw.rect(screen, self.col, self.rect) def move(self, x, y): x*=self.speed y*=self.speed if not self.rect.colliderect(self.outer): self.x+=x self.y+=y p=player(red) while running: for event in pygame.event.get(): if event.type == pygame.QUIT: running = False p.bg() keys=pygame.key.get_pressed() if keys[pygame.K_a]: p.move(1, 0) if keys[pygame.K_d]: p.move(-1, 0) if keys[pygame.K_w]: p.move(0, 1) if keys[pygame.K_s]: p.move(0, -1) p.draw() pygame.display.update() pygame.quit() | Write a function collideLineLine that test if to line segments are intersecting. The algorithm to this function is explained in detail in the answer to the question pygame, detecting collision of a rotating rectangle: def collideLineLine(l1_p1, l1_p2, l2_p1, l2_p2): # normalized direction of the lines and start of the lines P = pygame.math.Vector2(*l1_p1) line1_vec = pygame.math.Vector2(*l1_p2) - P R = line1_vec.normalize() Q = pygame.math.Vector2(*l2_p1) line2_vec = pygame.math.Vector2(*l2_p2) - Q S = line2_vec.normalize() # normal vectors to the lines RNV = pygame.math.Vector2(R[1], -R[0]) SNV = pygame.math.Vector2(S[1], -S[0]) RdotSVN = R.dot(SNV) if RdotSVN == 0: return False # distance to the intersection point QP = Q - P t = QP.dot(SNV) / RdotSVN u = QP.dot(RNV) / RdotSVN return t > 0 and u > 0 and t*t < line1_vec.magnitude_squared() and u*u < line2_vec.magnitude_squared() Write the function colideRectLine that test if a rectangle and a line segment is intersecting. To test if a line segment intersects a rectangle, you have to test if it intersect any of the 4 sides of the rectangle: def colideRectLine(rect, p1, p2): return (collideLineLine(p1, p2, rect.topleft, rect.bottomleft) or collideLineLine(p1, p2, rect.bottomleft, rect.bottomright) or collideLineLine(p1, p2, rect.bottomright, rect.topright) or collideLineLine(p1, p2, rect.topright, rect.topleft)) The next function collideRectPolygon tests if a polygon and a rectangle are intersecting. This can be achieved by testing each line segment on the polygon against the rectangle in a loop: def collideRectPolygon(rect, polygon): for i in range(len(polygon)-1): if colideRectLine(rect, polygon[i], polygon[i+1]): return True return False Finally you can use collideRectPolygon for the collision test. Note, however, that for the test you need to use the polygon as if the player were moving: class player: def bg(self): screen.fill(bcg) self.outer = self.createPolygon(self.x, self.y) pygame.draw.polygon(screen, wall, self.outer) def createPolygon(self, x, y): return [ (x,y), (x+800, y), (x+1200, y+200), (x+1200, y+600), (x+800, y+800), (x, y+800), (x-400, y+600), (x-400, y+200), (x,y), (x, y+50), (x-350, y+225), (x-350, y+575), (x, y+750), (x+800, y+750), (x+1150, y+575), (x+1150, y+225), (x+800, y+50),(x, y+50)] # [...] def move(self, x, y): x *= self.speed y *= self.speed polygon = self.createPolygon(self.x + x, self.y + y) if not collideRectPolygon(self.rect, polygon): self.x += x self.y += y See also Collision and Intersection - Rectangle and polygon Minimal example: repl.it/@Rabbid76/PyGame-CollisionPolygonRectangle Complete example: import pygame pygame.init() W, H=500, 500 screen = pygame.display.set_mode([500, 500]) running = True bcg=(200, 200, 200) red=(255, 0 ,0) purp=(255, 0, 255) wall=(100, 100, 100) def collideLineLine(l1_p1, l1_p2, l2_p1, l2_p2): # normalized direction of the lines and start of the lines P = pygame.math.Vector2(*l1_p1) line1_vec = pygame.math.Vector2(*l1_p2) - P R = line1_vec.normalize() Q = pygame.math.Vector2(*l2_p1) line2_vec = pygame.math.Vector2(*l2_p2) - Q S = line2_vec.normalize() # normal vectors to the lines RNV = pygame.math.Vector2(R[1], -R[0]) SNV = pygame.math.Vector2(S[1], -S[0]) RdotSVN = R.dot(SNV) if RdotSVN == 0: return False # distance to the intersection point QP = Q - P t = QP.dot(SNV) / RdotSVN u = QP.dot(RNV) / RdotSVN return t > 0 and u > 0 and t*t < line1_vec.magnitude_squared() and u*u < line2_vec.magnitude_squared() def colideRectLine(rect, p1, p2): return (collideLineLine(p1, p2, rect.topleft, rect.bottomleft) or collideLineLine(p1, p2, rect.bottomleft, rect.bottomright) or collideLineLine(p1, p2, rect.bottomright, rect.topright) or collideLineLine(p1, p2, rect.topright, rect.topleft)) def collideRectPolygon(rect, polygon): for i in range(len(polygon)-1): if colideRectLine(rect, polygon[i], polygon[i+1]): return True return False class player: def bg(self): screen.fill(bcg) self.outer = self.createPolygon(self.x, self.y) pygame.draw.polygon(screen, wall, self.outer) def createPolygon(self, x, y): return [ (x,y), (x+800, y), (x+1200, y+200), (x+1200, y+600), (x+800, y+800), (x, y+800), (x-400, y+600), (x-400, y+200), (x,y), (x, y+50), (x-350, y+225), (x-350, y+575), (x, y+750), (x+800, y+750), (x+1150, y+575), (x+1150, y+225), (x+800, y+50),(x, y+50)] def __init__(self, color, size=20, speed=0.25): self.x=0 self.y=0 self.col=color self.size=size self.speed=speed def draw(self): s=self.size self.rect=pygame.Rect(W/2-s/2, H/2-s/2, self.size, self.size) pygame.draw.rect(screen, self.col, self.rect) def move(self, x, y): x *= self.speed y *= self.speed polygon = self.createPolygon(self.x + x, self.y + y) if not collideRectPolygon(self.rect, polygon): self.x += x self.y += y p=player(red) while running: for event in pygame.event.get(): if event.type == pygame.QUIT: running = False p.bg() keys=pygame.key.get_pressed() if keys[pygame.K_a]: p.move(1, 0) if keys[pygame.K_d]: p.move(-1, 0) if keys[pygame.K_w]: p.move(0, 1) if keys[pygame.K_s]: p.move(0, -1) p.draw() pygame.display.update() pygame.quit() | 6 | 8 |
64,118,474 | 2020-9-29 | https://stackoverflow.com/questions/64118474/checking-dict-keys-to-ensure-a-required-key-always-exists-and-that-the-dict-has | I have a dict in python that follows this general format: {'field': ['$.name'], 'group': 'name', 'function': 'some_function'} I want to do some pre-check of the dict to ensure that 'field' always exists, and that no more keys exist beyond 'group' and 'function' which are both optional. I know I can do this by using a long and untidy if statement, but I'm thinking there must be a cleaner way? This is what I currently have: if (('field' in dict_name and len(dict_name.keys()) == 1) or ('group' in dict_name and len(dict_name.keys()) == 2) or ('function' in dict_name and len(dict_name.keys()) == 2) or ('group' in dict_name and 'function' in dict_name and len(dict_name.keys()) == 3)) Essentially I'm first checking if 'field' exists as this is required. I'm then checking to see if it is the only key (which is fine) or if it is a key alongside 'group' and no others, or a key alongside 'function' and no others or a key alongside both 'group' and 'function' and no others. Is there a tidier way of checking the keys supplied are only these 3 keys where two are optional? | As far as I'm concerned you want to check, that The set {'field'} is always contained in the set of your dict keys The set of your dict keys is always contained in the set {'field', 'group', 'function'} So just code it! required_fields = {'field'} allowed_fields = required_fields | {'group', 'function'} d = {'field': 123} # Set any value here if required_fields <= d.keys() <= allowed_fields: print("Yes!") else: print("No!") This solution is scalable for any sets of required and allowed fields unless you have some special conditions (for example, mutually exclusive keys) (thanks to @Duncan for a very elegant code reduction) | 44 | 44 |
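If the check is needed in several places, the same set comparison from the accepted answer can be wrapped in a small helper; this is only a sketch, and the function name validate_keys is chosen here for illustration.

REQUIRED = {'field'}
ALLOWED = REQUIRED | {'group', 'function'}

def validate_keys(d):
    # True only if 'field' is present and no keys outside the allowed set exist.
    return REQUIRED <= d.keys() <= ALLOWED

print(validate_keys({'field': ['$.name']}))                   # True
print(validate_keys({'field': ['$.name'], 'group': 'name'}))  # True
print(validate_keys({'group': 'name'}))                       # False (missing 'field')
print(validate_keys({'field': [], 'extra': 1}))               # False (unexpected key)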
64,126,185 | 2020-9-29 | https://stackoverflow.com/questions/64126185/openpyxl-cant-read-xlsx-file-but-if-i-save-the-file-it-opens | So, I tried to open an excel file with openpyxl with this line wb_bs = openpyxl.load_workbook(filename=filepath) And got this error: C:\Users\T-Gamer\AppData\Local\Programs\Python\Python38-32\lib\site-packages\openpyxl\styles\stylesheet.py:214: UserWarning: Workbook contains no default style, apply openpyxl's default warn("Workbook contains no default style, apply openpyxl's default") Traceback (most recent call last): wb_bs = openpyxl.load_workbook(filename=url_nova, data_only=True) File "C:\Users\T-Gamer\AppData\Local\Programs\Python\Python38-32\lib\site-packages\openpyxl\reader\excel.py", line 315, in load_workbook reader.read() File "C:\Users\T-Gamer\AppData\Local\Programs\Python\Python38-32\lib\site-packages\openpyxl\reader\excel.py", line 280, in read self.read_worksheets() File "C:\Users\T-Gamer\AppData\Local\Programs\Python\Python38-32\lib\site-packages\openpyxl\reader\excel.py", line 228, in read_worksheets ws_parser.bind_all() File "C:\Users\T-Gamer\AppData\Local\Programs\Python\Python38-32\lib\site-packages\openpyxl\worksheet\_reader.py", line 434, in bind_all self.bind_cells() File "C:\Users\T-Gamer\AppData\Local\Programs\Python\Python38-32\lib\site-packages\openpyxl\worksheet\_reader.py", line 337, in bind_cells for idx, row in self.parser.parse(): File "C:\Users\T-Gamer\AppData\Local\Programs\Python\Python38-32\lib\site-packages\openpyxl\worksheet\_reader.py", line 149, in parse obj = prop[1].from_tree(element) File "C:\Users\T-Gamer\AppData\Local\Programs\Python\Python38-32\lib\site-packages\openpyxl\descriptors\serialisable.py", line 87, in from_tree obj = desc.expected_type.from_tree(el) File "C:\Users\T-Gamer\AppData\Local\Programs\Python\Python38-32\lib\site-packages\openpyxl\descriptors\serialisable.py", line 103, in from_tree return cls(**attrib) TypeError: __init__() got an unexpected keyword argument 'address' PS C:\Users\T-Gamer\Python programs\cmtrat\Cmtrat Helper> & C:/Users/T-Gamer/AppData/Local/Programs/Python/Python38-32/python.exe "c:/Users/T-Gamer/Python programs/cmtrat/Cmtrat Helper/excel_scripts/ostest.py" C:\Users\T-Gamer\Python programs\cmtrat\Cmtrat Helper\excel_scripts\copias\diario_padrao.xlsx C:\Users\T-Gamer\AppData\Local\Programs\Python\Python38-32\lib\site-packages\openpyxl\styles\stylesheet.py:214: UserWarning: Workbook contains no default style, apply openpyxl's default warn("Workbook contains no default style, apply openpyxl's default") Traceback (most recent call last): wb_bs = openpyxl.load_workbook(filename=url_nova, data_only=True) File "C:\Users\T-Gamer\AppData\Local\Programs\Python\Python38-32\lib\site-packages\openpyxl\reader\excel.py", line 315, in load_workbook reader.read() File "C:\Users\T-Gamer\AppData\Local\Programs\Python\Python38-32\lib\site-packages\openpyxl\reader\excel.py", line 280, in read self.read_worksheets() File "C:\Users\T-Gamer\AppData\Local\Programs\Python\Python38-32\lib\site-packages\openpyxl\reader\excel.py", line 228, in read_worksheets ws_parser.bind_all() File "C:\Users\T-Gamer\AppData\Local\Programs\Python\Python38-32\lib\site-packages\openpyxl\worksheet\_reader.py", line 434, in bind_all self.bind_cells() File "C:\Users\T-Gamer\AppData\Local\Programs\Python\Python38-32\lib\site-packages\openpyxl\worksheet\_reader.py", line 337, in bind_cells for idx, row in self.parser.parse(): File 
"C:\Users\T-Gamer\AppData\Local\Programs\Python\Python38-32\lib\site-packages\openpyxl\worksheet\_reader.py", line 149, in parse obj = prop[1].from_tree(element) File "C:\Users\T-Gamer\AppData\Local\Programs\Python\Python38-32\lib\site-packages\openpyxl\descriptors\serialisable.py", line 87, in from_tree obj = desc.expected_type.from_tree(el) File "C:\Users\T-Gamer\AppData\Local\Programs\Python\Python38-32\lib\site-packages\openpyxl\descriptors\serialisable.py", line 103, in from_tree return cls(**attrib) TypeError: __init__() got an unexpected keyword argument 'address' The thing is: If I create the .xlsx file, it opens If I download the file from this specific source(the one I need) and try to open it straight away, it generates the error. If I run the code after I open and save the .xlsx file(no changes), it works. I suppose it has something to do with excel version conflict, but I've tried everything and nothing seems to work. openpyxl==3.0.5 python==3.8.5 | The reason may be the security prevention of MS-Windows: Whenever you download an MS-Office file from an outer source (internet), MS-Windows inserts a flag in that file which marks the file to be opened in protected view only. That protection stays still until you enable editing and save the file with the security flag set off. The warning text that appears when you open a newly downloaded MS-Office file: PROTECTED VIEW Be careful - files from the Internet can contain viruses. Unless you need to edit, it's safer to stay in Protected View. | 6 | 1 |
64,118,331 | 2020-9-29 | https://stackoverflow.com/questions/64118331/attributeerror-module-keras-backend-has-no-attribute-common | I tried to execute a project, but I got an attribute error. I checked my TensorFlow and Keras versions. Name: tensorflow Version: 2.3.1 Name: Keras Version: 2.4.3 Summary: Deep Learning for humans python 3.8.2 The code is here. self.dim_ordering = K.common.image_dim_ordering() Error message: self.dim_ordering = K.common.image_dim_ordering() AttributeError: module 'keras.backend' has no attribute 'common' Is it okay to use K.image_data_format() instead of K.common.image_dim_ordering()? | Yes, it is okay to use K.image_data_format(). In Keras v2 the backend method has been renamed to image_data_format(). | 7 | 9 |
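A small sketch of the replacement in context, assuming a TensorFlow 2.x install; note that the return values differ from the old call, which is an extra detail not covered in the accepted answer.

from tensorflow.keras import backend as K

dim_ordering = K.image_data_format()   # 'channels_last' or 'channels_first'
print(dim_ordering)

# The old K.common.image_dim_ordering() returned 'tf' or 'th', so any code that
# later compares against those strings needs updating as well, for example:
if dim_ordering == 'channels_first':
    channel_axis = 1
else:
    channel_axis = -1
print(channel_axis)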
64,039,737 | 2020-9-24 | https://stackoverflow.com/questions/64039737/object-is-not-subscriptable-using-django-and-python | I am having this error TypeError: 'StudentSubjectGrade' object is not subscriptable of course the data filtered is exist in the database, and i am sure that the filter is correct. what should i do to correct this ? note: this is recycle question, please dont mind the comment below, def SummaryPeriod(request): period = request.GET.get('period') subject = request.GET.get('subject') teacher = request.GET.get('teacher') print(period, "period", "subject", subject) cate = gradingCategories.objects.all() students = StudentSubjectGrade.objects.filter( grading_Period=period).filter( Subjects=subject).filter( Teacher = teacher ) print(students) Categories = list(cate.values_list('id', flat=True).order_by('id')) table = [] student_name = None table_row = None columns = len(Categories) + 1 table_header = ['Student Names'] table_header.extend(list(cate.values('CategoryName', 'PercentageWeight'))) table.append(table_header) for student in students: if not student['Students_Enrollment_Records__Students_Enrollment_Records__Student_Users__Lastname'] + ' ' + \ student[ 'Students_Enrollment_Records__Students_Enrollment_Records__Student_Users__Firstname'] == student_name: if not table_row is None: table.append(table_row) table_row = [None for d in range(columns)] student_name = student[ 'Students_Enrollment_Records__Students_Enrollment_Records__Student_Users__Lastname'] + ' ' + \ student['Students_Enrollment_Records__Students_Enrollment_Records__Student_Users__Firstname'] table_row[0] = student_name id = student['id'] table_row.append(id) table_row[Categories.index(student['Grading_Categories']) + 1] = student['Average'] * student[ 'Grading_Categories__PercentageWeight'] / 100 table.append(table_row) return render(request, 'Homepage/summaryPeriod.html', {'table': table, "teacher": teacher, "subject": subject, "period": period}) this is my traceback Internal Server Error: /SummaryPeriod/ Traceback (most recent call last): File "C:\Users\USER\Desktop\venv\lib\site-packages\django\core\handlers\exception.py", line 47, in inner response = get_response(request) File "C:\Users\USER\Desktop\venv\lib\site-packages\django\core\handlers\base.py", line 179, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "C:\Users\Desktop\Homepage\views.py", line 2693, in SummaryPeriod if not student['Students_Enrollment_Records__Students_Enrollment_Records__Student_Users__Lastname'] + ' ' + \ TypeError: 'StudentSubjectGrade' object is not subscriptable [01/Dec/2020 21:21:01] "GET /SummaryPeriod/?period=3&subject=18&teacher=5 HTTP/1.1" 500 70398 | TypeError: 'StudentSubjectGrade' object is not subscriptable this means that student is not a dictionary, you cannot use student['key'] to get what you want. you should use student.sth instead. | 18 | 33 |
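Building on the accepted answer, one way to keep the existing student['...'] lookups working is to ask the ORM for dictionaries with .values(). The field names below are copied from the question's own subscripts and are assumptions about the actual models, so this is only a sketch to drop into the existing view.

students = (StudentSubjectGrade.objects
            .filter(grading_Period=period, Subjects=subject, Teacher=teacher)
            .values(
                'id',
                'Average',
                'Grading_Categories',
                'Grading_Categories__PercentageWeight',
                'Students_Enrollment_Records__Students_Enrollment_Records__Student_Users__Lastname',
                'Students_Enrollment_Records__Students_Enrollment_Records__Student_Users__Firstname',
            ))

# Each item is now a plain dict, so student['Average'] etc. works as before.
for student in students:
    print(student['id'], student['Average'])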
64,127,278 | 2020-9-29 | https://stackoverflow.com/questions/64127278/what-is-the-proper-way-to-specify-a-custom-template-path-for-jupyter-nbconvert-v | What is the proper way to specify a custom template path for nbconvert? Under nbonvert version 6, templates are now a directory with several files. Those templates can live in any number of locations depending on the platform. Raspbian: ['/home/pi/.local/share/jupyter/nbconvert/templates', '/usr/local/share/jupyter/nbconvert/templates', '/usr/share/jupyter/nbconvert/templates'] OS X with Pyenv: ['/Users/ac/Library/Jupyter/nbconvert/templates', '/Users/ac/.pyenv/versions/3.8.5/Python.framework/Versions/3.8/share/jupyter/nbconvert/templates', '/usr/local/share/jupyter/nbconvert/templates', '/usr/share/jupyter/nbconvert/templates'] I'm trying to sync my templates over several different platforms and would like to specify a custom location. This post from 2 years ago seems correct, but appears to apply to V5 of nbconvert -- the method has changed names from template_path to template_paths. I've tried the solution suggested in the link above using a template that I know works when placed in one of the known locations. I end up with this error when trying to specify a custom location as suggested: jinja2.exceptions.TemplateNotFound: null.j2 I suspect that by setting the path to /path/to/.jupyter/templates/my_template/, I completely override all the other template locations and lose the null.j2 template that my template extends. I've included my template at the end on the off chance it has some errors that are causing this. The docs for V6 config files are not much help either: TemplateExporter.template_paths : List Default: ['.'] No description and PythonExporter.template_paths : List Default: ['.'] No description There's a long thread from May 2019 discussing this on the Git Repo, but I can't quite make sense of what the ultimate conclusion was. My custom Python template: {%- extends 'null.j2' -%} ## set to python3 {%- block header -%} #!/usr/bin/env python3 # coding: utf-8 {% endblock header %} ## remove cell counts entirely {% block in_prompt %} {% if resources.global_content_filter.include_input_prompt -%} {% endif %} {% endblock in_prompt %} ## remove markdown cells entirely {% block markdowncell %} {% endblock markdowncell %} {% block input %} {{ cell.source | ipython2python }} {% endblock input %} ## remove magic statement completely {% block codecell %} {{'' if "get_ipython" in super() else super() }} {% endblock codecell%} | Issue #1428 on the Git Repo contains the basis for this solution. From scratch/recent upgrade from v5 to v6 do the following: Generate a current and up-to-date configuration file for V6 in ~/.jupyter $ jupyter nbconvert --generate-config Edit the configuration file ~/.jupyter/jupyter_nbconvert_config.py to add the following lines: from pathlib import Path # set a custom path for templates in c.TemplateExporter.extra_template_basedirs my_templates = Path('~/my/custom/templates').expanduser().absolute() # add the custom path to the extra_template_basedirs c.TemplateExporter.extra_template_basedirs = [my_templates] Add templates to the ~/my/custom/templates directory Each template must be in its own sub directory (/my/custom/templates/foo_template) Each template must contain a conf.json and index.py.j2 file. the index is the actual template. 
See below for an example run nbconvert: $ jupyter nbconvert --to python --template my_custom_template foo.ipynb conf.json Basic Example { "base_template": "base", "mimetypes": { "text/x-python": true } } index.py.j2 Example {%- extends 'null.j2' -%} ## set to python3 {%- block header -%} #!/usr/bin/env python3 # coding: utf-8 {% endblock header %} ## remove cell counts entirely {% block in_prompt %} {% if resources.global_content_filter.include_input_prompt -%} {% endif %} {% endblock in_prompt %} ## remove markdown cells entirely {% block markdowncell %} {% endblock markdowncell %} {% block input %} {{ cell.source | ipython2python }} {% endblock input %} ## remove magic statement completely {% block codecell %} {{'' if "get_ipython" in super() else super() }} | 6 | 8 |
64,116,781 | 2020-9-29 | https://stackoverflow.com/questions/64116781/how-do-i-automerge-dependabot-updates-config-version-2 | Following "Dependabot is moving natively into GitHub!", I had to update my dependabot config files to use version 2 format. My .dependabot/config.yaml did look like: version: 1 update_configs: - package_manager: "python" directory: "/" update_schedule: "live" automerged_updates: - match: dependency_type: "all" update_type: "all" I've got the following working: version: 2 updates: - package-ecosystem: pip directory: "/" schedule: interval: daily but I can't seem to add the automerge option again (when checking with the dependabot validator)? | Here is one solution that doesn't require any additional marketplace installations (originally found here). Simply create a new GitHub workflow (e.g. .github/workflows/dependabotautomerge.yml) containing: name: "Dependabot Automerge - Action" on: pull_request: jobs: worker: runs-on: ubuntu-latest if: github.actor == 'dependabot[bot]' steps: - name: automerge uses: actions/[email protected] with: script: | github.pullRequests.createReview({ owner: context.payload.repository.owner.login, repo: context.payload.repository.name, pull_number: context.payload.pull_request.number, event: 'APPROVE' }) github.pullRequests.merge({ owner: context.payload.repository.owner.login, repo: context.payload.repository.name, pull_number: context.payload.pull_request.number }) github-token: ${{github.token}} There are also various third-party solutions available on GitHub Marketplace. | 26 | 14 |
64,099,248 | 2020-9-28 | https://stackoverflow.com/questions/64099248/pytesseract-improve-ocr-accuracy | I want to extract the text from an image in python. In order to do that, I have chosen pytesseract. When I tried extracting the text from the image, the results weren't satisfactory. I also went through this and implemented all the techniques listed down. Yet, it doesn't seem to perform well. Image: Code: import pytesseract import cv2 import numpy as np img = cv2.imread('D:\\wordsimg.png') img = cv2.resize(img, None, fx=1.2, fy=1.2, interpolation=cv2.INTER_CUBIC) img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) kernel = np.ones((1,1), np.uint8) img = cv2.dilate(img, kernel, iterations=1) img = cv2.erode(img, kernel, iterations=1) img = cv2.threshold(cv2.medianBlur(img, 3), 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1] pytesseract.pytesseract.tesseract_cmd = 'C:\\Program Files\\Tesseract-OCR\\tesseract.exe' txt = pytesseract.image_to_string(img ,lang = 'eng') txt = txt[:-1] txt = txt.replace('\n',' ') print(txt) Output: t hose he large form might light another us should took mountai house n story important went own own thought girl over family look some much ask the under why miss point make mile grow do own school was Even 1 unwanted space could cost me a lot. I want the results to be 100% accurate. Any help would be appreciated. Thanks! | I changed resize from 1.2 to 2 and removed all preprocessing. I got good results with psm 11 and psm 12 import pytesseract import cv2 import numpy as np img = cv2.imread('wavy.png') # img = cv2.resize(img, None, fx=1.2, fy=1.2, interpolation=cv2.INTER_CUBIC) img = cv2.resize(img, None, fx=2, fy=2) img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) kernel = np.ones((1,1), np.uint8) # img = cv2.dilate(img, kernel, iterations=1) # img = cv2.erode(img, kernel, iterations=1) # img = cv2.threshold(cv2.medianBlur(img, 3), 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1] cv2.imwrite('thresh.png', img) pytesseract.pytesseract.tesseract_cmd = 'C:\\Program Files (x86)\\Tesseract-OCR\\tesseract.exe' for psm in range(6,13+1): config = '--oem 3 --psm %d' % psm txt = pytesseract.image_to_string(img, config = config, lang='eng') print('psm ', psm, ':',txt) The config = '--oem 3 --psm %d' % psm line uses the string interpolation (%) operator to replace %d with an integer (psm). I'm not exactly sure what oem does, but I've gotten in the habit of using it. More on psm at the end of this answer. psm 11 : those he large form might light another us should name took mountain story important went own own thought girl over family look some much ask the under why miss point make mile grow do own school was psm 12 : those he large form might light another us should name took mountain story important went own own thought girl over family look some much ask the under why miss point make mile grow do own school was psm is short for page segmentation mode. I'm not exactly sure what the different modes are. You can get a feel for what the codes are from the descriptions. You can get the list from tesseract --help-psm Page segmentation modes: 0 Orientation and script detection (OSD) only. 1 Automatic page segmentation with OSD. 2 Automatic page segmentation, but no OSD, or OCR. (not implemented) 3 Fully automatic page segmentation, but no OSD. (Default) 4 Assume a single column of text of variable sizes. 5 Assume a single uniform block of vertically aligned text. 6 Assume a single uniform block of text. 7 Treat the image as a single text line. 8 Treat the image as a single word. 
9 Treat the image as a single word in a circle. 10 Treat the image as a single character. 11 Sparse text. Find as much text as possible in no particular order. 12 Sparse text with OSD. 13 Raw line. Treat the image as a single text line, bypassing hacks that are Tesseract-specific. | 7 | 9 |
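Since the asker is worried about stray spaces, a small post-processing step on the OCR output may also help; this sketch assumes txt already holds the string returned by image_to_string.

# Collapse runs of whitespace/newlines into single spaces and trim the ends.
words = txt.split()
cleaned = ' '.join(words)
print(cleaned)
print(len(words), 'words found')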
64,111,015 | 2020-9-28 | https://stackoverflow.com/questions/64111015/pip-install-psutil-is-throwing-error-unsupported-architecture-any-workarou | I want to install psutil on my macOS Catalina, for which I am doing pip install psutil, but it doesn't succeed. Instead I get multiple error messages being thrown from Xcode saying that the architecture is not supported. Has anyone faced similar issues? Here's the entire output: Collecting psutil Using cached psutil-5.7.2.tar.gz (460 kB) Using legacy 'setup.py install' for psutil, since package 'wheel' is not installed. Installing collected packages: psutil Running setup.py install for psutil ... error ERROR: Command errored out with exit status 1: command: /Users/sanjibanbairagya/code/.envs/airbase_backend/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/53/072hdjvd63z1p57y512596rc0000gn/T/pip-install-a_22z6dq/psutil/setup.py'"'"'; __file__='"'"'/private/var/folders/53/072hdjvd63z1p57y512596rc0000gn/T/pip-install-a_22z6dq/psutil/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/53/072hdjvd63z1p57y512596rc0000gn/T/pip-record-ca9qt6ec/install-record.txt --single-version-externally-managed --compile --install-headers /Users/sanjibanbairagya/code/.envs/airbase_backend/include/site/python3.8/psutil cwd: /private/var/folders/53/072hdjvd63z1p57y512596rc0000gn/T/pip-install-a_22z6dq/psutil/ Complete output (141 lines): running install running build running build_py creating build creating build/lib.macosx-10.14.6-x86_64-3.8 creating build/lib.macosx-10.14.6-x86_64-3.8/psutil copying psutil/_pswindows.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil copying psutil/_common.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil copying psutil/__init__.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil copying psutil/_psosx.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil copying psutil/_psbsd.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil copying psutil/_psaix.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil copying psutil/_pslinux.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil copying psutil/_compat.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil copying psutil/_psposix.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil copying psutil/_pssunos.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil creating build/lib.macosx-10.14.6-x86_64-3.8/psutil/tests copying psutil/tests/test_contracts.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil/tests copying psutil/tests/test_connections.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil/tests copying psutil/tests/runner.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil/tests copying psutil/tests/test_unicode.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil/tests copying psutil/tests/test_misc.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil/tests copying psutil/tests/test_posix.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil/tests copying psutil/tests/test_linux.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil/tests copying psutil/tests/test_sunos.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil/tests copying psutil/tests/__init__.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil/tests copying psutil/tests/test_aix.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil/tests copying psutil/tests/test_process.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil/tests copying psutil/tests/test_bsd.py -> 
build/lib.macosx-10.14.6-x86_64-3.8/psutil/tests copying psutil/tests/test_system.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil/tests copying psutil/tests/test_osx.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil/tests copying psutil/tests/test_memleaks.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil/tests copying psutil/tests/test_windows.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil/tests copying psutil/tests/__main__.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil/tests copying psutil/tests/test_testutils.py -> build/lib.macosx-10.14.6-x86_64-3.8/psutil/tests running build_ext building 'psutil._psutil_osx' extension creating build/temp.macosx-10.14.6-x86_64-3.8 creating build/temp.macosx-10.14.6-x86_64-3.8/psutil creating build/temp.macosx-10.14.6-x86_64-3.8/psutil/arch creating build/temp.macosx-10.14.6-x86_64-3.8/psutil/arch/osx xcrun -sdk macosx clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -iwithsysroot/System/Library/Frameworks/System.framework/PrivateHeaders -iwithsysroot/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/Headers -arch arm64 -arch x86_64 -DPSUTIL_POSIX=1 -DPSUTIL_SIZEOF_PID_T=4 -DPSUTIL_VERSION=572 -DPSUTIL_OSX=1 -I/Users/sanjibanbairagya/code/.envs/airbase_backend/include -I/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8 -c psutil/_psutil_common.c -o build/temp.macosx-10.14.6-x86_64-3.8/psutil/_psutil_common.o In file included from psutil/_psutil_common.c:9: In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:11: In file included from /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/12.0.0/include/limits.h:21: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/limits.h:63: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/cdefs.h:807:2: error: Unsupported architecture #error Unsupported architecture ^ In file included from psutil/_psutil_common.c:9: In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:11: In file included from /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/12.0.0/include/limits.h:21: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/limits.h:64: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/machine/limits.h:8:2: error: architecture not supported #error architecture not supported ^ In file included from psutil/_psutil_common.c:9: In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/stdio.h:64: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_stdio.h:71: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_types.h:27: In file included from 
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:33: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/machine/_types.h:34:2: error: architecture not supported #error architecture not supported ^ In file included from psutil/_psutil_common.c:9: In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/stdio.h:64: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_stdio.h:71: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_types.h:27: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:55:9: error: unknown type name '__int64_t' typedef __int64_t __darwin_blkcnt_t; /* total blocks */ ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:56:9: error: unknown type name '__int32_t'; did you mean '__int128_t'? typedef __int32_t __darwin_blksize_t; /* preferred block size */ ^ note: '__int128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:57:9: error: unknown type name '__int32_t'; did you mean '__int128_t'? typedef __int32_t __darwin_dev_t; /* dev_t */ ^ note: '__int128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:60:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_gid_t; /* [???] process and group IDs */ ^ note: '__uint128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:61:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_id_t; /* [XSI] pid_t, uid_t, or gid_t*/ ^ note: '__uint128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:62:9: error: unknown type name '__uint64_t' typedef __uint64_t __darwin_ino64_t; /* [???] Used for 64 bit inodes */ ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:68:9: error: unknown type name '__darwin_natural_t' typedef __darwin_natural_t __darwin_mach_port_name_t; /* Used by mach */ ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:70:9: error: unknown type name '__uint16_t'; did you mean '__uint128_t'? typedef __uint16_t __darwin_mode_t; /* [???] Some file attributes */ ^ note: '__uint128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:71:9: error: unknown type name '__int64_t' typedef __int64_t __darwin_off_t; /* [???] 
Used for file sizes */ ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:72:9: error: unknown type name '__int32_t'; did you mean '__int128_t'? typedef __int32_t __darwin_pid_t; /* [???] process and group IDs */ ^ note: '__int128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:73:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_sigset_t; /* [???] signal set */ ^ note: '__uint128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:74:9: error: unknown type name '__int32_t'; did you mean '__int128_t'? typedef __int32_t __darwin_suseconds_t; /* [???] microseconds */ ^ note: '__int128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:75:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_uid_t; /* [???] user IDs */ ^ note: '__uint128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:76:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_useconds_t; /* [???] microseconds */ ^ note: '__uint128_t' declared here In file included from psutil/_psutil_common.c:9: In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/stdio.h:64: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_stdio.h:71: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_types.h:43:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_wctype_t; ^ note: '__uint128_t' declared here In file included from psutil/_psutil_common.c:9: In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/stdio.h:64: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_stdio.h:75: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types/_va_list.h:31: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/machine/types.h:37:2: error: architecture not supported #error architecture not supported ^ fatal error: too many errors emitted, stopping now [-ferror-limit=] 20 errors generated. 
error: command 'xcrun' failed with exit status 1 ---------------------------------------- ERROR: Command errored out with exit status 1: /Users/sanjibanbairagya/code/.envs/airbase_backend/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/53/072hdjvd63z1p57y512596rc0000gn/T/pip-install-a_22z6dq/psutil/setup.py'"'"'; __file__='"'"'/private/var/folders/53/072hdjvd63z1p57y512596rc0000gn/T/pip-install-a_22z6dq/psutil/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/53/072hdjvd63z1p57y512596rc0000gn/T/pip-record-ca9qt6ec/install-record.txt --single-version-externally-managed --compile --install-headers /Users/sanjibanbairagya/code/.envs/airbase_backend/include/site/python3.8/psutil Check the logs for full command output. Also, here's the output of uname -a in case that's useful: Darwin Sanjibans-MacBook-Pro.local 19.6.0 Darwin Kernel Version 19.6.0: Mon Aug 31 22:12:52 PDT 2020; root:xnu-6153.141.2~1/RELEASE_X86_64 x86_64 Is there a fix / workaround for the above issue? Any kind of help whatsoever would be highly appreciated. Thanks in advance. | At the time of the error, I was using Python 3.8.2. This was fixed after upgrading to Python 3.8.5 | 9 | 5 |
64,115,084 | 2020-9-29 | https://stackoverflow.com/questions/64115084/are-predictions-on-scikit-learn-models-thread-safe | Given some classifier (SVC/Forest/NN/whatever) is it safe to call .predict on the same instance concurrently from different threads? From a distant point of view, my guess is they do not mutate any internal state. But I did not find anything in the docs about it. Here is a minimal example showing what I mean: #!/usr/bin/env python3 import threading from sklearn import datasets from sklearn import svm from sklearn.ensemble import RandomForestClassifier from sklearn.neural_network import MLPClassifier X, y = datasets.load_iris(return_X_y=True) # Some model. Might be any type, e.g.: clf = svm.SVC() clf = RandomForestClassifier(), clf = MLPClassifier(solver='lbfgs') clf.fit(X, y) def use_model_for_predictions(): for _ in range(10000): clf.predict(X[0:1]) # Is this safe? thread_1 = threading.Thread(target=use_model_for_predictions) thread_2 = threading.Thread(target=use_model_for_predictions) thread_1.start() thread_2.start() | Check out this Q&A, the predict and predict_proba methods should be thread safe as they only call NumPy, they do not affect model itself in any case so answer to your question is yes. You can find some info as well in replies here. For example in naive bayes the code is following: def predict(self, X): """ Perform classification on an array of test vectors X. Parameters ---------- X : array-like of shape (n_samples, n_features) Returns ------- C : ndarray of shape (n_samples,) Predicted target values for X """ check_is_fitted(self) X = self._check_X(X) jll = self._joint_log_likelihood(X) return self.classes_[np.argmax(jll, axis=1)] You can see that the first two lines are only checks for input. Abstract method _joint_log_likelihood is the one that interests us, described as: @abstractmethod def _joint_log_likelihood(self, X): """Compute the unnormalized posterior log probability of X I.e. ``log P(c) + log P(x|c)`` for all rows x of X, as an array-like of shape (n_classes, n_samples). Input is passed to _joint_log_likelihood as-is by predict, predict_proba and predict_log_proba. """ And finally for example for multinominal NB the function looks like (source): def _joint_log_likelihood(self, X): """ Compute the unnormalized posterior log probability of X, which is the features' joint log probability (feature log probability times the number of times that word appeared in that document) times the class prior (since we're working in log space, it becomes an addition) """ joint_prob = X * self.feature_log_prob_.T + self.class_log_prior_ return joint_prob You can see that there is nothing thread unsafe in predict. Of course you can go through codes and check that for any of those classifiers :) | 10 | 1 |
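A minimal sketch of the pattern from the question using ThreadPoolExecutor instead of raw threads, assuming scikit-learn is installed. It only demonstrates that concurrent predict() calls run without modifying the fitted estimator; it is not a proof of thread safety.

from concurrent.futures import ThreadPoolExecutor
from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier

X, y = datasets.load_iris(return_X_y=True)
clf = RandomForestClassifier().fit(X, y)

def predict_many(n):
    # Only calls predict(); the fitted estimator is never modified here.
    return [clf.predict(X[0:1]) for _ in range(n)]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(predict_many, [1000] * 4))

print(sum(len(batch) for batch in results), "predictions completed")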
64,104,341 | 2020-9-28 | https://stackoverflow.com/questions/64104341/how-to-get-the-most-frequent-row-in-table | How to get the most frequent row in a DataFrame? For example, if I have the following table: col_1 col_2 col_3 0 1 1 A 1 1 0 A 2 0 1 A 3 1 1 A 4 1 0 B 5 1 0 C Expected result: col_1 col_2 col_3 0 1 1 A EDIT: I need the most frequent row (as one unit) and not the most frequent column value that can be calculated with the mode() method. | In Pandas 1.1.0. is possible to use the method value_counts() to count unique rows in DataFrame: df.value_counts() Output: col_1 col_2 col_3 1 1 A 2 0 C 1 B 1 A 1 0 1 A 1 This method can be used to find the most frequent row: df.value_counts().head(1).index.to_frame(index=False) Output: col_1 col_2 col_3 0 1 1 A | 15 | 2 |
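For pandas versions older than 1.1, where DataFrame.value_counts() is not available, a groupby-based sketch gives the same result; the example data below is taken from the question.

import pandas as pd

df = pd.DataFrame({'col_1': [1, 1, 0, 1, 1, 1],
                   'col_2': [1, 0, 1, 1, 0, 0],
                   'col_3': ['A', 'A', 'A', 'A', 'B', 'C']})

# Count identical rows, then take the combination with the highest count.
counts = df.groupby(list(df.columns)).size()
most_frequent = pd.DataFrame([counts.idxmax()], columns=df.columns)
print(most_frequent)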
64,035,952 | 2020-9-23 | https://stackoverflow.com/questions/64035952/how-to-key-press-detection-on-a-linux-terminal-low-level-style-in-python | I just implemented a Linux command shell in python using only the os library's low level system calls, like fork() and so on. I was wondering how I can implement a key listener that will listen for key (UP|DOWN) to scroll through the history of my shell. I want do do this without using any fancy libraries, but I am also wishing that this is not something super complicated. My code is just about 100 lines of code, so far, and I don't want to create a monster just to get a simple feature :D My thoughts about the problem is, that it should be possible to create a child process with some kind of loop, that will listen for up ^[[A and down ^[[B, key press, and then somehow put the text into my input field, like a normal terminal. So far the thing I am most interested in is the possibility of the key-listener. But next I will probably have to figure out how I will get that text into the input field. About that I am thinking that I probably have to use some of the stdin features that sys provides. I'm only interested in making it work on Linux, and want to continue using low-level system calls, preferably not Python libraries that handle everything for me. This is a learning exercise. | By default the standard input is buffered and uses canonical mode. This allows you to edit your input. When you press the enter key, the input can be read by Python. If you want a lower level access to the input you can use tty.setraw() on the standard input file descriptor. This allows you to read one character at a time using sys.stdin.read(1). Note that in this case the Python script will be responsible for handling special characters, and you will lose some of the functionality like character echoing and deleting. For more information take a look at termios(3). You can read about escape sequences which are used for up and down keys on Wikipedia. You should be able to replicate the standard shell behavior if you handle everything in one process. You may also want to try using a subprocess (not referring to the module - you can use fork() or popen()). You would parse the unbuffered input in the main process and send it to stdin (which can be buffered) of the subprocess. You will probably need to have some inter-process communication to share history with the main process. Here is an example of the code needed to capture the input this way. Note that it is only doing some basic processing and needs more work in order to fit your use-case. import sys import tty import termios def getchar(): fd = sys.stdin.fileno() attr = termios.tcgetattr(fd) try: tty.setraw(fd) return sys.stdin.read(1) finally: termios.tcsetattr(fd, termios.TCSANOW, attr) EOT = '\x04' # CTRL+D ESC = '\x1b' CSI = '[' line = '' while True: c = getchar() if c == EOT: print('exit') break elif c == ESC: if getchar() == CSI: x = getchar() if x == 'A': print('UP') elif x == 'B': print('DOWN') elif c == '\r': print([line]) line = '' else: line += c | 7 | 9 |
64,127,158 | 2020-9-29 | https://stackoverflow.com/questions/64127158/how-to-update-a-pandas-dataframe-from-multiple-api-calls | I need to do a python script to Read a csv file with the columns (person_id, name, flag). The file has 3000 rows. Based on the person_id from the csv file, I need to call a URL passing the person_id to do a GET http://api.myendpoint.intranet/get-data/1234 The URL will return some information of the person_id, like example below. I need to get all rents objects and save on my csv. My output needs to be like this import pandas as pd import requests ids = pd.read_csv(f"{path}/data.csv", delimiter=';') person_rents = df = pd.DataFrame([], columns=list('person_id','carId','price','rentStatus')) for id in ids: response = request.get(f'endpoint/{id["person_id"]}') json = response.json() person_rents.append( [person_id, rent['carId'], rent['price'], rent['rentStatus'] ] ) pd.read_csv(f"{path}/data.csv", delimiter=';' ) person_id;name;flag;cardId;price;rentStatus 1000;Joseph;1;6638;1000;active 1000;Joseph;1;5566;2000;active Response example { "active": false, "ctodx": false, "rents": [{ "carId": 6638, "price": 1000, "rentStatus": "active" }, { "carId": 5566, "price": 2000, "rentStatus": "active" } ], "responseCode": "OK", "status": [{ "request": 345, "requestStatus": "F" }, { "requestId": 678, "requestStatus": "P" } ], "transaction": false } After save the additional data from response on csv, i need to get data from another endpoint using the carId on the URL. The mileage result must be save in the same csv. http://api.myendpoint.intranet/get-mileage/6638 http://api.myendpoint.intranet/get-mileage/5566 The return for each call will be like this {"mileage":1000.0000} {"mileage":550.0000} The final output must be person_id;name;flag;cardId;price;rentStatus;mileage 1000;Joseph;1;6638;1000;active;1000.0000 1000;Joseph;1;5566;2000;active;550.0000 SOmeone can help me with this script? Could be with pandas or any python 3 lib. | Code Explanation Create dataframe, df, with pd.read_csv. It is expected that all of the values in 'person_id', are unique. Use .apply on 'person_id', to call prepare_data. prepare_data expects 'person_id' to be a str or int, as indicated by the type annotation, Union[int, str] Call the API, which will return a dict, to the prepare_data function. Convert the 'rents' key, of the dict, into a dataframe, with pd.json_normalize. Use .apply on 'carId', to call the API, and extract the 'mileage', which is added to dataframe data, as a column. Add 'person_id' to data, which can be used to merge df with s. Convert pd.Series, s to a dataframe, with pd.concat, and then merge df and s, on person_id. Save to a csv with pd.to_csv in the desired form. Potential Issues If there's an issue, it's most likely to occur in the call_api function. As long as call_api returns a dict, like the response shown in the question, the remainder of the code will work correctly to produce the desired output. 
import pandas as pd import requests import json from typing import Union def call_api(url: str) -> dict: r = requests.get(url) return r.json() def prepare_data(uid: Union[int, str]) -> pd.DataFrame: d_url = f'http://api.myendpoint.intranet/get-data/{uid}' m_url = 'http://api.myendpoint.intranet/get-mileage/' # get the rent data from the api call rents = call_api(d_url)['rents'] # normalize rents into a dataframe data = pd.json_normalize(rents) # get the mileage data from the api call and add it to data as a column data['mileage'] = data.carId.apply(lambda cid: call_api(f'{m_url}{cid}')['mileage']) # add person_id as a column to data, which will be used to merge data to df data['person_id'] = uid return data # read data from file df = pd.read_csv('file.csv', sep=';') # call prepare_data s = df.person_id.apply(prepare_data) # s is a Series of DataFrames, which can be combined with pd.concat s = pd.concat([v for v in s]) # join df with s, on person_id df = df.merge(s, on='person_id') # save to csv df.to_csv('output.csv', sep=';', index=False) If there are any errors when running this code: Leave a comment, to let me know. edit your question, and paste the entire TraceBack, as text, into a code block. Example # given the following start dataframe person_id name flag 0 1000 Joseph 1 1 400 Sam 1 # resulting dataframe using the same data for both id 1000 and 400 person_id name flag carId price rentStatus mileage 0 1000 Joseph 1 6638 1000 active 1000.0 1 1000 Joseph 1 5566 2000 active 1000.0 2 400 Sam 1 6638 1000 active 1000.0 3 400 Sam 1 5566 2000 active 1000.0 | 6 | 4 |
64,123,551 | 2020-9-29 | https://stackoverflow.com/questions/64123551/what-is-the-safest-way-to-queue-multiple-threads-originating-in-a-loop | My script loops through each line of an input file and performs some actions using the string in each line. Since the tasks performed on each line are independent of each other, I decided to separate the task into threads so that the script doesn't have to wait for the task to complete to continue with the loop. The code is given below. def myFunction(line, param): # Doing something with line and param # Sends multiple HTTP requests and parse the response and produce outputs # Returns nothing param = arg[1] with open(targets, "r") as listfile: for line in listfile: print("Starting a thread for: ",line) t=threading.Thread(target=myFunction, args=(line, param,)) threads.append(t) t.start() I realized that this is a bad idea as the number of lines in the input file grew large. With this code, there would be as many threads as the number of lines. Researched a bit and figured that queues would be the way. I want to understand the optimal way of using queues for this scenario and if there are any alternatives which I can use. | Queues are one way to do it. The way to use them is to put function parameters on a queue, and use threads to get them and do the processing. The queue size doesn't matter too much in this case because reading the next line is fast. In another case, a more optimized solution would be to set the queue size to at least twice the number of threads. That way if all threads finish processing an item from the queue at the same time, they will all have the next item in the queue ready to be processed. To avoid complicating the code threads can be set as daemonic so that they don't stop the program from finishing after the processing is done. They will be terminated when the main process finishes. The alternative is to put a special item on the queue (like None) for each thread and make the threads exit after getting it from the queue and then join the threads. For the examples bellow the number of worker threads is set using the workers variable. Here is an example of a solution using a queue. from queue import Queue from threading import Thread queue = Queue(workers * 2) def work(): while True: myFunction(*queue.get()) queue.task_done() for _ in range(workers): Thread(target=work, daemon=True).start() with open(targets, 'r') as listfile: for line in listfile: queue.put((line, param)) queue.join() A simpler solution might be using ThreadPoolExecutor. It is especially simple in this case because the function being called doesn't return anything that needs to be used in the main thread. from concurrent.futures import ThreadPoolExecutor with ThreadPoolExecutor(max_workers=workers) as executor: with open(targets, 'r') as listfile: for line in listfile: executor.submit(myFunction, line, param) Also, if it's not a problem to have all lines stored in memory, there is a solution which doesn't use anything other than threads. The work is split in such a way that the threads read some lines from a list and ignore other lines. A simple example with two threads is where one thread reads odd lines and the other reads even lines. 
from threading import Thread with open(targets, 'r') as listfile: lines = listfile.readlines() def work_split(n): for line in lines[n::workers]: myFunction(line, param) threads = [] for n in range(workers): t = Thread(target=work_split, args=(n,)) t.start() threads.append(t) for t in threads: t.join() I have done a quick benchmark and the Queue is slightly faster than the ThreadPoolExecutor, but the solution with the split work is faster than both. | 8 | 5 |
64,089,691 | 2020-9-27 | https://stackoverflow.com/questions/64089691/feather-format-for-long-term-storage-since-the-release-of-apache-arrow-1-0-1 | From searching the issues in the Feather GitHub repo, as well as Stack Overflow questions such as What are the differences between feather and parquet?, I understand that the Feather format was not recommended for long-term storage while Apache Arrow versions were still 0.x.x, and was considered volatile due to the continuous new releases. My question is: has this situation changed as of the current Apache Arrow version, 1.0.1? Is Feather considered stable to use as long-term storage? | Feather files (using the v2 -- default -- format version, not the v1 "legacy" version) are stable starting with Apache Arrow 1.0.0. | 26 | 30 |
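A minimal round-trip sketch, assuming pandas with pyarrow >= 1.0 installed, where to_feather should write the stable V2 format by default; the file name is a placeholder.

import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': ['x', 'y', 'z']})

# Write and read back a Feather file; requires pyarrow as the engine.
df.to_feather('data.feather')
restored = pd.read_feather('data.feather')
print(restored.equals(df))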
64,089,854 | 2020-9-27 | https://stackoverflow.com/questions/64089854/pytorch-detection-of-cuda | Which is the command to see the "correct" CUDA version that PyTorch in a conda env is seeing? This is a similar question, but doesn't get me far. nvidia-smi says I have CUDA version 10.1. conda list tells me the cudatoolkit version is 10.2.89. torch.cuda.is_available() shows FALSE, so it sees no CUDA? With print(torch.cuda.current_device()) I get 10.0.10 (10010??) (it looks like): AssertionError: The NVIDIA driver on your system is too old (found version 10010). print(torch._C._cuda_getCompiledVersion(), 'cuda compiled version') tells me my version is 10.0.20 (10020??)? 10020 cuda compiled version. Why are there so many different versions? What am I missing? P.S. I have Nvidia driver 430 on Ubuntu 16.04 with a GeForce 1050. It comes with libcuda1-430 when I installed the driver from the Additional Drivers tab in Ubuntu (Software and Updates). I installed PyTorch with conda, which also installed the cudatoolkit, using conda install -c fastai -c pytorch -c anaconda fastai | In the conda env (myenv) where PyTorch is installed, activate the env (conda activate myenv), start Python, and check torch.version.cuda. nvidia-smi only shows the highest CUDA version the installed driver supports; it does not tell you which CUDA version PyTorch's own binaries were built against. | 38 | 53 |
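To make the accepted answer concrete, a short check run inside Python, assuming PyTorch is importable in the active environment:

import torch

print(torch.__version__)          # PyTorch build
print(torch.version.cuda)         # CUDA version this PyTorch build was compiled against
print(torch.cuda.is_available())  # whether the installed driver can actually be used
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))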
64,124,931 | 2020-9-29 | https://stackoverflow.com/questions/64124931/how-to-fix-versionconflict-locking-failure-in-pipenv | I'm using pipenv inside a docker container. I tried installing a package and found that the installation succeeds (gets added to the Pipfile), but the locking keeps failing. Everything was fine until yesterday. Here's the error: (app) root@7284b7892266:/usr/src/app# pipenv install scrapy-djangoitem Installing scrapy-djangoitemβ¦ Adding scrapy-djangoitem to Pipfile's [packages]β¦ β Installation Succeeded Pipfile.lock (6d808e) out of date, updating to (27ac89)β¦ Locking [dev-packages] dependenciesβ¦ Building requirements... Resolving dependencies... β Locking Failed! Traceback (most recent call last): File "/usr/local/lib/python3.7/site-packages/pipenv/resolver.py", line 807, in <module> main() File "/usr/local/lib/python3.7/site-packages/pipenv/resolver.py", line 803, in main parsed.requirements_dir, parsed.packages, parse_only=parsed.parse_only) File "/usr/local/lib/python3.7/site-packages/pipenv/resolver.py", line 785, in _main resolve_packages(pre, clear, verbose, system, write, requirements_dir, packages) File "/usr/local/lib/python3.7/site-packages/pipenv/resolver.py", line 758, in resolve_packages results = clean_results(results, resolver, project) File "/usr/local/lib/python3.7/site-packages/pipenv/resolver.py", line 634, in clean_results reverse_deps = project.environment.reverse_dependencies() File "/usr/local/lib/python3.7/site-packages/pipenv/project.py", line 376, in environment self._environment = self.get_environment(allow_global=allow_global) File "/usr/local/lib/python3.7/site-packages/pipenv/project.py", line 366, in get_environment environment.extend_dists(pipenv_dist) File "/usr/local/lib/python3.7/site-packages/pipenv/environment.py", line 127, in extend_dists extras = self.resolve_dist(dist, self.base_working_set) File "/usr/local/lib/python3.7/site-packages/pipenv/environment.py", line 122, in resolve_dist deps |= cls.resolve_dist(dist, working_set) File "/usr/local/lib/python3.7/site-packages/pipenv/environment.py", line 121, in resolve_dist dist = working_set.find(req) File "/root/.local/share/virtualenvs/app-lp47FrbD/lib/python3.7/site-packages/pkg_resources/__init__.py", line 642, in find raise VersionConflict(dist, req) pkg_resources.VersionConflict: (importlib-metadata 2.0.0 (/root/.local/share/virtualenvs/app-lp47FrbD/lib/python3.7/site-packages), Requirement.parse('importlib-metadata<2,>=0.12; python_version < "3.8"')) (app) root@7284b7892266:/usr/src/app# What could be wrong? EDIT After removing Pipfile.lock and trying to install a package, I got: (app) root@ef80787b5c42:/usr/src/app# pipenv install httpx Installing httpxβ¦ Adding httpx to Pipfile's [packages]β¦ β Installation Succeeded Pipfile.lock not found, creatingβ¦ Locking [dev-packages] dependenciesβ¦ Building requirements... Resolving dependencies... β Success! Locking [packages] dependenciesβ¦ Building requirements... β Locking...Resolving dependencies... 
Traceback (most recent call last): File "/usr/local/bin/pipenv", line 8, in <module> sys.exit(cli()) File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/click/core.py", line 829, in __call__ return self.main(*args, **kwargs) File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/click/core.py", line 782, in main rv = self.invoke(ctx) File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/click/core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/click/core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/click/decorators.py", line 73, in new_func return ctx.invoke(f, obj, *args, **kwargs) File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/click/decorators.py", line 21, in new_func return f(get_current_context(), *args, **kwargs) File "/usr/local/lib/python3.7/site-packages/pipenv/cli/command.py", line 252, in install site_packages=state.site_packages File "/usr/local/lib/python3.7/site-packages/pipenv/core.py", line 2202, in do_install skip_lock=skip_lock, File "/usr/local/lib/python3.7/site-packages/pipenv/core.py", line 1303, in do_init pypi_mirror=pypi_mirror, File "/usr/local/lib/python3.7/site-packages/pipenv/core.py", line 1113, in do_lock keep_outdated=keep_outdated File "/usr/local/lib/python3.7/site-packages/pipenv/utils.py", line 1323, in venv_resolve_deps c = resolve(cmd, sp) File "/usr/local/lib/python3.7/site-packages/pipenv/utils.py", line 1136, in resolve result = c.expect(u"\n", timeout=environments.PIPENV_INSTALL_TIMEOUT) File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/delegator.py", line 215, in expect self.subprocess.expect(pattern=pattern, timeout=timeout) File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/pexpect/spawnbase.py", line 344, in expect timeout, searchwindowsize, async_) File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/pexpect/spawnbase.py", line 372, in expect_list return exp.expect_loop(timeout) File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/pexpect/expect.py", line 181, in expect_loop return self.timeout(e) File "/usr/local/lib/python3.7/site-packages/pipenv/vendor/pexpect/expect.py", line 144, in timeout raise exc pexpect.exceptions.TIMEOUT: <pexpect.popen_spawn.PopenSpawn object at 0x7f81e99bec90> searcher: searcher_re: 0: re.compile('\n') <pexpect.popen_spawn.PopenSpawn object at 0x7f81e99bec90> searcher: searcher_re: 0: re.compile('\n') (app) root@ef80787b5c42:/usr/src/app# | Here are my debugging notes. Still not sure which package is causing the problem, but this does seem to fix it. The error you get when you first run pipenv install with pipenv version 2020.8.13. 
Traceback (most recent call last): File "/usr/local/bin/pipenv", line 8, in <module> sys.exit(cli()) File "/usr/local/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 829, in __call__ return self.main(*args, **kwargs) File "/usr/local/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 782, in main rv = self.invoke(ctx) File "/usr/local/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/usr/local/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "/usr/local/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/usr/local/lib/python3.6/site-packages/pipenv/vendor/click/decorators.py", line 73, in new_func return ctx.invoke(f, obj, *args, **kwargs) File "/usr/local/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/usr/local/lib/python3.6/site-packages/pipenv/vendor/click/decorators.py", line 21, in new_func return f(get_current_context(), *args, **kwargs) File "/usr/local/lib/python3.6/site-packages/pipenv/cli/command.py", line 252, in install site_packages=state.site_packages File "/usr/local/lib/python3.6/site-packages/pipenv/core.py", line 1928, in do_install site_packages=site_packages, File "/usr/local/lib/python3.6/site-packages/pipenv/core.py", line 580, in ensure_project pypi_mirror=pypi_mirror, File "/usr/local/lib/python3.6/site-packages/pipenv/core.py", line 512, in ensure_virtualenv python=python, site_packages=site_packages, pypi_mirror=pypi_mirror File "/usr/local/lib/python3.6/site-packages/pipenv/core.py", line 999, in do_create_virtualenv project._environment.add_dist("pipenv") File "/usr/local/lib/python3.6/site-packages/pipenv/environment.py", line 135, in add_dist self.extend_dists(dist) File "/usr/local/lib/python3.6/site-packages/pipenv/environment.py", line 127, in extend_dists extras = self.resolve_dist(dist, self.base_working_set) File "/usr/local/lib/python3.6/site-packages/pipenv/environment.py", line 122, in resolve_dist deps |= cls.resolve_dist(dist, working_set) File "/usr/local/lib/python3.6/site-packages/pipenv/environment.py", line 121, in resolve_dist dist = working_set.find(req) File "/usr/local/lib/python3.6/site-packages/pkg_resources/__init__.py", line 642, in find raise VersionConflict(dist, req) pkg_resources.VersionConflict: (importlib-metadata 2.0.0 (/usr/local/lib/python3.6/site-packages), Requirement.parse('importlib-metadata<2,>=0.12; python_version < "3.8"')) If you run pip install -U pipenv it seems to change the importlib-metadata version: Installing collected packages: importlib-metadata Attempting uninstall: importlib-metadata Found existing installation: importlib-metadata 2.0.0 Uninstalling importlib-metadata-2.0.0: Successfully uninstalled importlib-metadata-2.0.0 Successfully installed importlib-metadata-1.7.0 Now if you run pipenv install -d --skip-lock it will finish. It seems like a library is requiring a version >= importlib-metadata 2.0. When I pinned the following dependencies it didn't work at first when running pipenv lock, however, if I removed the lock file (rm Pipenv.lock) then it worked when I ran pipenv lock again. virtualenv = "==20.0.31" importlib-metadata = "==1.7.0" | 34 | 20 |
64,126,653 | 2020-9-29 | https://stackoverflow.com/questions/64126653/decrypting-aes-cbc-in-python-from-openssl-aes | I need to decrypt a file encrypted on OpenSSL with python but I am not understanding the options of pycrypto. Here what I do in OpenSSL openssl enc -aes-256-cbc -a -salt -pbkdf2 -iter 100000 -in "clear.txt" -out "crypt.txt" -pass pass:"mypassword" openssl enc -d -aes-256-cbc -a -pbkdf2 -iter 100000 -in "crypt.txt" -out "out.txt" -pass pass:"mypassword" I tried (which obviously won't work) obj2 = AES.new("mypassword", AES.MODE_CBC) output = obj2.decrypt(text) I just want to do the second step in python, but when looking at the sample: https://pypi.org/project/pycrypto/ obj2 = AES.new('This is a key123', AES.MODE_CBC, 'This is an IV456') obj2.decrypt(ciphertext) I don't need IV, How do I specify the salt? the pbkdf2 hash? I am also looked at this thread How to decrypt OpenSSL AES-encrypted files in Python? but did not help. Can someone show me how to do this using python? thank you. | The OpenSSL statement uses PBKDF2 to create a 32 bytes key and a 16 bytes IV. For this, a random 8 bytes salt is implicitly generated and the specified password, iteration count and digest (default: SHA-256) are applied. The key/IV pair is used to encrypt the plaintext with AES-256 in CBC mode and PKCS7 padding, s. here. The result is returned in OpenSSL format, which starts with the 8 bytes ASCII encoding of Salted__, followed by the 8 bytes salt and the actual ciphertext, all Base64 encoded. The salt is needed for decryption, so that key and IV can be reconstructed. Note that the password in the OpenSSL statement is actually passed without quotation marks, i.e. in the posted OpenSSL statement, the quotation marks are part of the password. For the decryption in Python the salt and the actual ciphertext must first be determined from the encrypted data. With the salt the key/IV pair can be reconstructed. Finally, the key/IV pair can be used for decryption. Example: With the posted OpenSSL statement, the plaintext The quick brown fox jumps over the lazy dog was encrypted into the ciphertext U2FsdGVkX18A+AhjLZpfOq2HilY+8MyrXcz3lHMdUII2cud0DnnIcAtomToclwWOtUUnoyTY2qCQQXQfwDYotw== Decryption with Python is possible as follows (using PyCryptodome): from Crypto.Protocol.KDF import PBKDF2 from Crypto.Hash import SHA256 from Crypto.Util.Padding import unpad from Crypto.Cipher import AES import base64 # Determine salt and ciphertext encryptedDataB64 = 'U2FsdGVkX18A+AhjLZpfOq2HilY+8MyrXcz3lHMdUII2cud0DnnIcAtomToclwWOtUUnoyTY2qCQQXQfwDYotw==' encryptedData = base64.b64decode(encryptedDataB64) salt = encryptedData[8:16] ciphertext = encryptedData[16:] # Reconstruct Key/IV-pair pbkdf2Hash = PBKDF2(b'"mypassword"', salt, 32 + 16, count=100000, hmac_hash_module=SHA256) key = pbkdf2Hash[0:32] iv = pbkdf2Hash[32:32 + 16] # Decrypt with AES-256 / CBC / PKCS7 Padding cipher = AES.new(key, AES.MODE_CBC, iv) decrypted = unpad(cipher.decrypt(ciphertext), 16) print(decrypted) Edit - Regarding your comment: 16 MB should be possible, but for larger data the ciphertext would generally be read from a file and the decrypted data would be written to a file, in contrast to the example posted above. Whether the data can be decrypted in one step ultimately depends on the available memory. If the memory is not sufficient, the data must be processed in chunks. When using chunks it would make more sense not to Base64 encode the encrypted data but to store them directly in binary format. 
This is possible by omitting the -a option in the OpenSSL statement. Otherwise it must be ensured that always integer multiples of the block size (relative to the undecoded ciphertext) are loaded, where 3 bytes of the undecoded ciphertext correspond to 4 bytes of the Base64 encoded ciphertext. In the case of the binary stored ciphertext: During decryption only the first block (16 bytes) should be (binary) read in the first step. From this, the salt can be determined (the bytes 8 to 16), then the key and IV (analogous to the posted code above). The rest of the ciphertext can be (binary) read in chunks of suitable size ( = a multiple of the block size, e.g. 1024 bytes). Each chunk is encrypted/decrypted separately, see multiple encrypt/decrypt-calls. For reading/writing files in chunks with Python see e.g. here.Further details are best answered within the scope of a separate question. | 11 | 9 |
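A rough sketch of the chunked decryption described in the edit above, assuming the ciphertext was written in binary form (OpenSSL run without -a) and reusing the password and PBKDF2 parameters from the earlier example; the file names are placeholders:

```python
from Crypto.Protocol.KDF import PBKDF2
from Crypto.Hash import SHA256
from Crypto.Util.Padding import unpad
from Crypto.Cipher import AES

CHUNK = 16 * 1024  # must be a multiple of the 16-byte AES block size

with open('crypt.bin', 'rb') as fin, open('out.txt', 'wb') as fout:
    header = fin.read(16)            # b'Salted__' + 8-byte salt
    salt = header[8:16]
    keyiv = PBKDF2(b'"mypassword"', salt, 32 + 16, count=100000,
                   hmac_hash_module=SHA256)
    cipher = AES.new(keyiv[:32], AES.MODE_CBC, keyiv[32:48])

    prev = b''
    while True:
        chunk = fin.read(CHUNK)
        if not chunk:
            break
        fout.write(prev)             # hold back the latest chunk so the final one can be unpadded
        prev = cipher.decrypt(chunk)
    fout.write(unpad(prev, 16))      # strip PKCS7 padding from the last chunk only
```

The single cipher object keeps the CBC chaining state across the repeated decrypt() calls, which is what makes chunk-by-chunk processing equivalent to decrypting in one pass.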
64,115,628 | 2020-9-29 | https://stackoverflow.com/questions/64115628/get-starlette-request-body-in-the-middleware-context | I have such middleware class RequestContext(BaseHTTPMiddleware): async def dispatch(self, request: Request, call_next: RequestResponseEndpoint): request_id = request_ctx.set(str(uuid4())) # generate uuid to request body = await request.body() if body: logger.info(...) # log request with body else: logger.info(...) # log request without body response = await call_next(request) response.headers['X-Request-ID'] = request_ctx.get() logger.info("%s" % (response.status_code)) request_ctx.reset(request_id) return response So the line body = await request.body() freezes all requests that have body and I have 504 from all of them. How can I safely read the request body in this context? I just want to log request parameters. | I would not create a Middleware that inherits from BaseHTTPMiddleware since it has some issues, FastAPI gives you a opportunity to create your own routers, in my experience this approach is way better. from fastapi import APIRouter, FastAPI, Request, Response, Body from fastapi.routing import APIRoute from typing import Callable, List from uuid import uuid4 class ContextIncludedRoute(APIRoute): def get_route_handler(self) -> Callable: original_route_handler = super().get_route_handler() async def custom_route_handler(request: Request) -> Response: request_id = str(uuid4()) response: Response = await original_route_handler(request) if await request.body(): print(await request.body()) response.headers["Request-ID"] = request_id return response return custom_route_handler app = FastAPI() router = APIRouter(route_class=ContextIncludedRoute) @router.post("/context") async def non_default_router(bod: List[str] = Body(...)): return bod app.include_router(router) Works as expected. b'["string"]' INFO: 127.0.0.1:49784 - "POST /context HTTP/1.1" 200 OK | 12 | 5 |
64,125,560 | 2020-9-29 | https://stackoverflow.com/questions/64125560/how-do-you-broadcast-np-random-choice-across-each-row-of-a-numpy-array | Suppose I have this numpy array: [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]] My goal is to select two random elements from each row and create a new numpy array that might look something like: [[2, 4], [5, 8], [9, 10], [15, 16]] I can easily do this using a for loop. However, is there a way that I can use broadcasting, say, with np.random.choice, to avoid having to loop through each row? | Approach #1 Based on this trick, here's a vectorized way - n = 2 # number of elements to select per row idx = np.random.rand(*a.shape).argsort(1)[:,:n] out = np.take_along_axis(a, idx, axis=1) Sample run - In [251]: a Out[251]: array([[ 1, 2, 3, 4], [ 5, 6, 7, 8], [ 9, 10, 11, 12], [13, 14, 15, 16]]) In [252]: idx = np.random.rand(*a.shape).argsort(1)[:,:2] In [253]: np.take_along_axis(a, idx, axis=1) Out[253]: array([[ 2, 1], [ 6, 7], [ 9, 11], [16, 15]]) Approach #2 Another based on masks to select exactly two per row - def select_two_per_row(a): m,n = a.shape mask = np.zeros((m,n), dtype=bool) R = np.arange(m) idx1 = np.random.randint(0,n,m) mask[R,idx1] = 1 mask2 = np.zeros(m*(n-1), dtype=bool) idx2 = np.random.randint(0,n-1,m) + np.arange(m)*(n-1) mask2[idx2] = 1 mask[~mask] = mask2 out = a[mask].reshape(-1,2) return out Approach #3 Another based on integer based indexing again to select exactly two per row - def select_two_per_row_v2(a): m,n = a.shape idx1 = np.random.randint(0,n,m) idx2 = np.random.randint(1,n,m) out = np.take_along_axis(a, np.c_[idx1, idx1 - idx2], axis=1) return out Timings - In [209]: a = np.random.rand(100000,10) # App1 with argsort In [210]: %%timeit ...: idx = np.random.rand(*a.shape).argsort(1)[:,:2] ...: out = np.take_along_axis(a, idx, axis=1) 23.2 ms ± 137 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) # App1 with argpartition In [221]: %%timeit ...: idx = np.random.rand(*a.shape).argpartition(axis=1,kth=1)[:,:2] ...: out = np.take_along_axis(a, idx, axis=1) 18.3 ms ± 115 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [214]: %timeit select_two_per_row(a) 9.89 ms ± 37.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [215]: %timeit select_two_per_row_v2(a) 5.78 ms ± 9.19 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) | 8 | 10
64,122,700 | 2020-9-29 | https://stackoverflow.com/questions/64122700/efficiently-remove-partial-duplicates-in-a-list-of-tuples | I have a list of tuples, the list can vary in length between ~8 - 1000 depending on the length of the tuples. Each tuple in the list is unique. A tuple is of length N where each entry is a generic word. An example tuple can be of length N (Word 1, Word 2, Word 3, ..., Word N) For any tuple in the list, element j in said tuple will either be '' or Word j A very simplified example with alphabetic letters would be l = [('A', 'B', '', ''), ('A', 'B', 'C', ''), ('', '', '', 'D'), ('A', '', '', 'D'), ('', 'B', '', '')] Every position at each tuple will either have the same value or be empty. I want to remove all the tuples which have all their non '' values in another tuple at the same position. As an example, (A,B,'','') has all its non '' values in (A,B,C,'') and should therefore be removed. filtered_l = [(A,B,C,''),(A,'','',D)] The length of the tuples is always of the same length (not necessarily 4). The length of the tuples would be between 2-10. What is the fastest way to do this? | Let's conceptualize each tuple as a binary array, where 1 is "contains something" and 2 is "contains an empty string". Since the item at each position will be the same, we don't need to care what is at each position, only that something is. l = [('A','B','',''),('A','B','C',''),('','','','D'),('A','','','D'),('','B','','')] l_bin = [sum(2**i if k else 0 for i,k in enumerate(tup)) for tup in l] # [3, 7, 8, 9, 2] # [0b0011, 0b0111, 0b1000, 0b1001, 0b0010] # that it's backwards doesn't really matter, since it's consistent Now, we can walk through that list and build a new datastructure without 'duplicates'. Since we have our tuples encoded as binary, we can determine a duplicate, 'encompassed' by another, by doing bitwise operations - given a and b, if a | b == a, then a must contain b. codes = {} for tup, b in zip(l, l_bin): # check if any existing code contains the potential new one # in this case, skip adding the new one if any(a | b == a for a in codes): continue # check if the new code contains a potential existing one or more # in which case, replace the existing code(s) with the new code for a in list(codes): if b | a == b: codes.pop(a) # and finally, add this code to our datastructure codes[b] = tup Now we can withdraw our 'filtered' list of tuples: output = list(codes.values()) # [('A', 'B', 'C', ''), ('A', '', '', 'D')] Note that (A, B, C, '') contains both (A, B, '', '') and ('', B, '', ''), and that (A, '', '', D') contains ('', '', '', D), so this should be correct. As of python 3.8, dict preserves insertion order, so the output should be in the same order that the tuples originally appeared in the list. This solution wouldn't be perfectly efficient, since the number of codes might stack up, but it should be between O(n) and O(n^2), depending on the number of unique codes left at the end (and since the length of each tuple is significantly less than the length of l, it should be closer to O(n) than to O(n^2). | 9 | 6 |
64,122,311 | 2020-9-29 | https://stackoverflow.com/questions/64122311/group-pandas-dataframe-in-unusual-way | Problem I have the following Pandas dataframe: data = { 'ID': [100, 100, 100, 100, 200, 200, 200, 200, 200, 300, 300, 300, 300, 300], 'value': [False, False, True, False, False, True, True, True, False, False, False, True, True, False], } df = pandas.DataFrame (data, columns = ['ID','value']) I want to get the following groups: Group 1: for each ID, all False rows until the first True row of that ID Group 2: for each ID, all False rows after the last True row of that ID Group 3: all true rows Can this be done with pandas? What I've tried I've tried group = df.groupby((df['value'].shift() != df['value']).cumsum()) but this returns an incorrect result. | Let us try shift + cumsum create the groupby key: BTW I really like the way you display your expected output s = df.groupby('ID')['value'].apply(lambda x : x.ne(x.shift()).cumsum()) d = {x : y for x ,y in df.groupby(s)} d[2] ID value 2 100 True 5 200 True 6 200 True 7 200 True 11 300 True 12 300 True d[1] ID value 0 100 False 1 100 False 4 200 False 9 300 False 10 300 False d[3] ID value 3 100 False 8 200 False 13 300 False | 15 | 9 |
64,111,320 | 2020-9-29 | https://stackoverflow.com/questions/64111320/sqlalchemy-hybrid-property-v-property-hybrid-method-v-classmethod | I have a Model (which I'm using as an abstract base class), that has some common methods and properties. SQLAlchemy allows creating properties and methods with @hybrid_property and @hybrid_method, but also the standard @property, @classmethod, @staticmethod decorators give me the results I'm after. Are there any advantages and disadvantages using the SQLA decorators over the standard python decorators? When should I use, or shouldn't use, the SQLA decorators? | The hybrid provides for an expression that works at both the Python level as well as at the SQL expression level Let's look at an example: class User(Base): __tablename__ = 'user' id = Column(Integer, primary_key=True) firstname = Column(String(50)) lastname = Column(String(50)) @hybrid_property def fullname(self): return self.firstname + ' ' + self.lastname @property def long_name(self): return self.firstname + ' ' + self.lastname session.add(User(firstname='Brendan', lastname='Simon')) session.commit() # error # print(session.query(User).filter(User.long_name == 'Brendan Simon').first().id) # works fine because @hybrid_property print(session.query(User).filter(User.fullname == 'Brendan Simon').first().id) Also you can customize SQL expression using @fullname.expression. When should I use, or shouldn't use, the SQLA decorators? I think you will know when you need it. For example, you can use it for fast aliases: class MetaData(Base): __tablename__ = 'meta_data' system_field = Column(String(20)) # a lot of calculations and processing in different places # a lot of fields ... One day in a few parts system_field was(or will be) renamed to new_field(doesn't matter why, who and when - just fact). You can do something like this as a quick solution: @hybrid_property def new_field(self): return self.system_field @new_field.setter def new_field(self, value: str): self.system_field = value # data from somewhere... # data = {'new_field': 'default', other fields...} # will works fine + in other places will work as `system_field` with old sql queries process_meta(MetaData(**{data})) So this is really good feature, but if you are thinking about whether you need it or not, then you definitely don't need it. | 15 | 12 |
64,113,002 | 2020-9-29 | https://stackoverflow.com/questions/64113002/how-i-can-aggregate-employee-based-on-their-department-and-show-average-salary-i | This is my code which has data in which I want to perform the task using pandas.DataFrame.groupby import pandas as pd data = {'employees_no': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13], 'employees_name': ['Jugal Sompura', 'Maya Rajput', 'Chaitya Panchal', 'Sweta Rampariya', 'Prakshal Patel', 'Dhruv Panchal', 'Prachi Desai', 'Krunal Gosai', 'Hemil Soni', 'Gopal Pithadia', 'Jatin Shah', 'Raj Patel', 'Shreya Desai'], 'department_name': ['HR', 'Administrative Assistant', 'Production', 'Accountant', 'Production', 'Engineer', 'Finance', 'Engineer', 'Quality Assurance', 'Engineer', 'Engineer', 'Customer Service', 'CEO'], 'salary': [130000.0, 65000.0, 45000.0, 65000.0, 47000.0, 40000.0, 90000.0, 45000.0, 35000.0, 45000.0, 30000.0, 40000.0, 250000.0] } df = pd.DataFrame (data, columns = ['employees_no', 'employees_name', 'department_name', 'salary']) print(df) --------------------------------------------------------------------- employees_no employees_name department_name salary 0 1 Jugal Sompura HR 130000.0 1 2 Maya Rajput Administrative Assistant 65000.0 2 3 Chaitya Panchal Production 45000.0 3 4 Sweta Rampariya Accountant 65000.0 4 5 Prakshal Patel Production 47000.0 5 6 Dhruv Panchal Engineer 40000.0 6 7 Prachi Desai Finance 90000.0 7 8 Krunal Gosai Engineer 45000.0 8 9 Hemil Soni Quality Assurance 35000.0 9 10 Gopal Pithadia Engineer 45000.0 10 11 Jatin Shah Engineer 30000.0 11 12 Raj Patel Customer Service 40000.0 12 13 Shreya Desai CEO 250000.0 --------------------------------------------------------------------- I tried this and could only get this output. print(df.groupby('department_name').agg({'salary':'mean'})) --------------------------------------------------------------------- department_name salary Accountant 65000.0 Administrative Assistant 65000.0 CEO 250000.0 Customer Service 40000.0 Engineer 40000.0 Finance 90000.0 HR 130000.0 Production 46000.0 Quality Assurance 35000.0 --------------------------------------------------------------------- I'm not able to get output like this... department_name employees_name avg_salary Accountant Sweta Rampariya 65000.0 Administrative Assistant Maya Rajput 65000.0 CEO Shreya Desai 250000.0 Customer Service Raj Patel 40000.0 Engineer Dhruv Panchal 40000.0 Gopal Pithadia Krunal Gosai Jatin Shah Finance Prachi Desai 90000.0 HR Jugal Sompura 130000.0 Production Chaitya Panchal 46000.0 Prakshal Patel Quality Assurance Hemil Soni 35000.0 Can you help me with this? | Extending what @Chris did and adding the part of remove average salary values if department_name is same. 
Here's the full code: import pandas as pd data = {'employees_no': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13], 'employees_name': ['Jugal Sompura', 'Maya Rajput', 'Chaitya Panchal', 'Sweta Rampariya', 'Prakshal Patel', 'Dhruv Panchal', 'Prachi Desai', 'Krunal Gosai', 'Hemil Soni', 'Gopal Pithadia', 'Jatin Shah', 'Raj Patel', 'Shreya Desai'], 'department_name': ['HR', 'Administrative Assistant', 'Production', 'Accountant', 'Production', 'Engineer', 'Finance', 'Engineer', 'Quality Assurance', 'Engineer', 'Engineer', 'Customer Service', 'CEO'], 'salary': [130000.0, 65000.0, 45000.0, 65000.0, 47000.0, 40000.0, 90000.0, 45000.0, 35000.0, 45000.0, 30000.0, 40000.0, 250000.0] } df = pd.DataFrame (data) df['avg_sal'] = df.groupby('department_name')['salary'].transform('mean') new_df = df.set_index(["department_name", "employees_name"]).sort_index() new_df.loc[new_df.index.get_level_values(0).duplicated()==True,'avg_sal']='' print (new_df['avg_sal']) This will print as follows: department_name employees_name Accountant Sweta Rampariya 65000 Administrative Assistant Maya Rajput 65000 CEO Shreya Desai 250000 Customer Service Raj Patel 40000 Engineer Dhruv Panchal 40000 Gopal Pithadia Jatin Shah Krunal Gosai Finance Prachi Desai 90000 HR Jugal Sompura 130000 Production Chaitya Panchal 46000 Prakshal Patel Quality Assurance Hemil Soni 35000 | 7 | 3 |
64,105,616 | 2020-9-28 | https://stackoverflow.com/questions/64105616/greenlet-runtime-error-and-deployed-app-in-docker-keeps-booting-all-the-workers | RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 144 from C header, got 152 from PyObject And all the workers are being booted. 2020-09-28T14:09:41.864089908Z [2020-09-28 14:09:41 +0000] [31] [INFO] Booting worker with pid: 31 2020-09-28T14:09:43.933141974Z [2020-09-28 14:09:43 +0000] [32] [INFO] Booting worker with pid: 32 2020-09-28T14:09:44.317436676Z [2020-09-28 14:09:44 +0000] [33] [INFO] Booting worker with pid: 33 2020-09-28T14:09:44.795236476Z [2020-09-28 14:09:44 +0000] [34] [INFO] Booting worker with pid: 34 It was working fine a week back or so and now I'm starting to have the problem. | As https://discuss.redash.io/t/binary-compatibility-issue-with-greenlet/7237 indicates, a workaround is to pin greenlet==0.4.16 or to upgrade gevent to 20.9.0. The following fix is suggested on the greenlet GitHub page: https://github.com/python-greenlet/greenlet/issues/178#issuecomment-697342964 Also see the following issues: https://github.com/python-greenlet/greenlet/issues/180 https://github.com/python-greenlet/greenlet/issues/182 https://github.com/python-greenlet/greenlet/issues/178 | 16 | 23
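For illustration, the two workarounds above expressed as pins in a requirements.txt (an assumption about how the app's dependencies are declared — use one of the two options, not both):

```text
# Option 1: stay on the last greenlet release with the old struct layout
greenlet==0.4.16

# Option 2: upgrade gevent as suggested in the linked issue
gevent==20.9.0
```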
64,082,541 | 2020-9-26 | https://stackoverflow.com/questions/64082541/difference-between-python-type-hints-of-type-and-type | Today, I came across a function type hinted with type. I have done some research as to when one should type hint with type or Type, and I can't find a satisfactory answer. From my research it seems there's some overlap between the two. My question: What is the difference between type and Type? What is an example use case that shows when to use type vs Type? Research Looking at the source for Type (from typing tag 3.7.4.3), I can see this: # Internal type variable used for Type[]. CT_co = TypeVar('CT_co', covariant=True, bound=type) # This is not a real generic class. Don't use outside annotations. class Type(Generic[CT_co], extra=type): """A special construct usable to annotate class objects. ``` It looks like Type may just be an alias for type, except it supports Generic parameterization. Is this correct? Example Here is some sample code made using Python==3.8.5 and mypy==0.782: from typing import Type def foo(val: type) -> None: reveal_type(val) # mypy output: Revealed type is 'builtins.type' def bar(val: Type) -> None: reveal_type(val) # mypy output: Revealed type is 'Type[Any]' class Baz: pass foo(type(bool)) foo(Baz) foo(Baz()) # error: Argument 1 to "foo" has incompatible type "Baz"; expected "type" bar(type(bool)) bar(Baz) bar(Baz()) # error: Argument 1 to "bar" has incompatible type "Baz"; expected "Type[Any]" Clearly mypy recognizes a difference. | type is a metaclass. Just like object instances are instances of classes, classes are instances of metaclasses. Type is an annotation used to tell a type checker that a class object itself is to be handled at wherever the annotation is used, instead of an instance of that class object. There's a couple ways they are related. The annotated return type when type is applied to an argument is Type. This is in the same way that list applied to an argument (like list((1, 2))) has an annotated returned type of List. Using reveal_type in: reveal_type(type(1)) we are asking what is the inferred type annotation for the return value of type when it is given 1. The answer is Type, more specifically Type[Literal[1]]. Type a type-check-time construct, type is a runtime construct. This has various implications I'll explain later. Moving onto your examples, in: class Type(Generic[CT_co], extra=type): ... We are not annotating extra as type, we are instead passing the keyword-argument extra with value type to the metaclass of Type. See Class-level Keyword Arguments for more examples of this construct. Note that extra=type is very different from extra: type: one is assigning a value at runtime, and one is annotating with a type hint at type-check time. Now for the interesting part: if mypy is able to do successful type checking with both, why use one over the other? The answer lies in that Type, being a type-check time construct, is much more well integrated with the typing ecosystem. Given this example: from typing import Type, TypeVar T = TypeVar("T") def smart(t: Type[T], v: T) -> T: return v def naive(t: type, v: T) -> T: return v v1: int = smart(int, 1) # Success. v2: int = smart(str, 1) # Error. v3: int = naive(int, 1) # Success. v4: int = naive(str, 1) # Success. v1, v3 and v4 type-check successfully. You can see that v4 from naive was a false positive, given that the type of 1 is int, not str. But because you cannot parametrized the type metaclass (it is not Generic), we're unable to get the safety that we have with smart. 
I consider this to be more of a language limitation. You can see PEP 585 which is attempting to bridge the same kind of gap, but for list / List. At the end of the day though, the idea is still the same: the lowercase version is the runtime class, the uppercase version is the type annotation. Both can overlap, but there are features exclusive to both. | 6 | 8 |
64,105,927 | 2020-9-28 | https://stackoverflow.com/questions/64105927/using-q-object-with-variable | I'd like to use the django.db.models.Q object in a way that the query term is coming from a variable. What i'd like to achieve is identical to this: q = Q(some_field__icontains='sth') Obj.objects.filter(q) , but the some_field value should come from a variable: field_name='some_field' q = Q('%s__icontains=sth' % field_name) Obj.objects.filter(q) , but this solution does not give me the correct result of course. I also tried to use dictionary this way: dt = {'%s__icontains' % field_name: 'sth'} q = Q(**dt) Obj.objects.filter(q) , but this also fails on the result. How could I use the Q object using variables as query term? Thanks. | You can pass a 2-tuple to a Q object with the name of the fieldname(s) and lookups as first item, and the value as second item: Obj.objects.filter(Q(('%s__icontains' % field_name, 'sth'))) this is probably the most convenient way. That being said the dictionary unpacking, although less elegant, should also work. | 7 | 9 |
64,103,507 | 2020-9-28 | https://stackoverflow.com/questions/64103507/how-to-rename-a-single-node-of-a-networkx-graph | I wanted to know how I can change a single node name of a node of a digraph. I am new to networkx and could only find answers on how to change all node names. In my case I am iterating over a graph A to create graph B. p and c are nodes of graph A. The edge (p,c) of graph A contains data I want to add to the node p of B. However, when I am adding the edge data from graph A to the already existing node p of graph B, I would like to update the name of p to be equal to the name of c so I am able to reference it again for the next edge of graph A because it then is the edge (c,x) and I can use the c to reference it again... The relevant part of my code looks like this new_stages = A.in_edge(c, data='stages') stages = B.node[p]['stages'] stages.append(new_stages) <<Update node p to have name of c??>> B.add_node(p, stages=new_stage_set) Any help is appreciated, thanks! | You have nx.relabel_nodes for this. Here's a simple use case: G = nx.from_edgelist([('a','b'), ('f','g')]) mapping = {'b':'c'} G = nx.relabel_nodes(G, mapping) G.edges() # EdgeView([('a', 'c'), ('f', 'g')]) | 11 | 18 |
64,100,160 | 2020-9-28 | https://stackoverflow.com/questions/64100160/numpy-split-array-into-chunks-of-equal-size-with-remainder | Is there a numpy function that splits an array into equal chunks of size m (excluding any remainder which would have a size less than m). I have looked at the function np.array_split but that doesn't let you split by specifying the sizes of the chunks. An example of what I'm looking for is below: X = np.arange(10) split (X, size = 3) -> [ [0,1,2],[3,4,5],[6,7,8], [9] ] split (X, size = 4) -> [ [0,1,2,3],[4,5,6,7],[8,9]] split (X, size = 5) -> [ [0,1,2,3,4],[5,6,7,8,9]] | Here's one way with np.split + np.arange/range - def split_given_size(a, size): return np.split(a, np.arange(size,len(a),size)) Sample runs - In [140]: X Out[140]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) In [141]: split_given_size(X,3) Out[141]: [array([0, 1, 2]), array([3, 4, 5]), array([6, 7, 8]), array([9])] In [143]: split_given_size(X,4) Out[143]: [array([0, 1, 2, 3]), array([4, 5, 6, 7]), array([8, 9])] In [144]: split_given_size(X,5) Out[144]: [array([0, 1, 2, 3, 4]), array([5, 6, 7, 8, 9])] | 11 | 19 |
64,099,107 | 2020-9-28 | https://stackoverflow.com/questions/64099107/convert-multipolygon-geometry-into-list | How can I please convert a multipolygon geometry into a list? I tried this: mycoords=geom.exterior.coords mycoordslist = list(mycoords) But I get the error: AttributeError: 'MultiPolygon' object has no attribute 'exterior' | You will have to loop over geometries within your MultiPolygon. mycoordslist = [list(x.exterior.coords) for x in geom.geoms] Note that the result is a list of coords lists. | 7 | 14 |
64,096,953 | 2020-9-28 | https://stackoverflow.com/questions/64096953/how-to-convert-yolo-format-bounding-box-coordinates-into-opencv-format | I have Yolo format bounding box annotations of objects saved in a .txt files. Now I want to load those coordinates and draw it on the image using OpenCV, but I donβt know how to convert those float values into OpenCV format coordinates values I tried this post but it didnβt help, below is a sample example of what I am trying to do Code and output import matplotlib.pyplot as plt import cv2 img = cv2.imread(<image_path>) dh, dw, _ = img.shape fl = open(<label_path>, 'r') data = fl.readlines() fl.close() for dt in data: _, x, y, w, h = dt.split(' ') nx = int(float(x)*dw) ny = int(float(y)*dh) nw = int(float(w)*dw) nh = int(float(h)*dh) cv2.rectangle(img, (nx,ny), (nx+nw,ny+nh), (0,0,255), 1) plt.imshow(img) Actual Annotations and Image 0 0.286972 0.647157 0.404930 0.371237 0 0.681338 0.366221 0.454225 0.418060 | There's another Q&A on this topic, and there's this1 interesting comment below the accepted answer. The bottom line is, that the YOLO coordinates have a different centering w.r.t. to the image. Unfortunately, the commentator didn't provide the Python port, so I did that here: import cv2 import matplotlib.pyplot as plt img = cv2.imread(<image_path>) dh, dw, _ = img.shape fl = open(<label_path>, 'r') data = fl.readlines() fl.close() for dt in data: # Split string to float _, x, y, w, h = map(float, dt.split(' ')) # Taken from https://github.com/pjreddie/darknet/blob/810d7f797bdb2f021dbe65d2524c2ff6b8ab5c8b/src/image.c#L283-L291 # via https://stackoverflow.com/questions/44544471/how-to-get-the-coordinates-of-the-bounding-box-in-yolo-object-detection#comment102178409_44592380 l = int((x - w / 2) * dw) r = int((x + w / 2) * dw) t = int((y - h / 2) * dh) b = int((y + h / 2) * dh) if l < 0: l = 0 if r > dw - 1: r = dw - 1 if t < 0: t = 0 if b > dh - 1: b = dh - 1 cv2.rectangle(img, (l, t), (r, b), (0, 0, 255), 1) plt.imshow(img) plt.show() So, for some Lenna image, that'd be the output, which I think shows the correct coordinates w.r.t. your image: ---------------------------------------- System information ---------------------------------------- Platform: Windows-10-10.0.16299-SP0 Python: 3.8.5 Matplotlib: 3.3.2 OpenCV: 4.4.0 ---------------------------------------- 1Please upvote the linked answers and comments. | 19 | 48 |
64,096,624 | 2020-9-28 | https://stackoverflow.com/questions/64096624/what-is-the-difference-between-using-softmax-as-a-sequential-layer-in-tf-keras-a | what is the difference between using softmax as a sequential layer in tf.keras and softmax as an activation function for a dense layer? tf.keras.layers.Dense(10, activation=tf.nn.softmax) and tf.keras.layers.Softmax(10) | they are the same, you can test it on your own # generate data x = np.random.uniform(0,1, (5,20)).astype('float32') # 1st option X = Dense(10, activation=tf.nn.softmax) A = X(x) # 2nd option w,b = X.get_weights() B = Softmax()(tf.matmul(x,w) + b) tf.reduce_all(A == B) # <tf.Tensor: shape=(), dtype=bool, numpy=True> Pay attention also when using tf.keras.layers.Softmax, it doesn't require to specify the units, it's a simple activation by default, the softmax is computed on the -1 axis, you can change this if you have tensor outputs > 2D and want to operate softmax on other dimensionalities. You can change this easily in the second option | 8 | 6 |
64,095,346 | 2020-9-28 | https://stackoverflow.com/questions/64095346/pickle-how-does-it-pickle-a-function | In a post I posted yesterday, I accidentally found changing the __qualname__ of a function has an unexpected effect on pickle. By running more tests, I found that when pickling a function, pickle does not work in the way I thought, and changing the __qualname__ of the function has a real effect on how pickle behaves. The snippets below are tests I ran, import pickle from sys import modules # a simple function to pickle def hahaha(): return 1 print('hahaha',hahaha,'\n') # change the __qualname__ of function hahaha hahaha.__qualname__ = 'sdfsdf' print('set hahaha __qualname__ to sdfsdf',hahaha,'\n') # make a copy of hahaha setattr(modules['__main__'],'abcabc',hahaha) print('create abcabc which is just hahaha',abcabc,'\n') try: pickle.dumps(hahaha) except Exception as e: print('pickle hahaha') print(e,'\n') try: pickle.dumps(abcabc) except Exception as e: print('pickle abcabc, a copy of hahaha') print(e,'\n') try: pickle.dumps(sdfsdf) except Exception as e: print('pickle sdfsdf') print(e) As you can see by running the snippets, both hahaha and abcabc cannot be pickled because of the exception: Can't pickle <function sdfsdf at 0x7fda36dc5f28>: attribute lookup sdfsdf on __main__ failed. I'm really confused by this exception, What does pickle look for when it pickles a function? Although the __qualname__ of hahaha was changed to 'sdfsdf', the function hahaha as well as its copy abcabc is still callable in the session (as they are in dir(sys.modules['__main__'])), then why pickle cannot pickle them? What is the real effect of changing the __qualname__ of a function? I understand by changing the __qualname__ of hahaha to 'sdfsdf' won't make sdfsdf callable, as it won't show up in dir(sys.modules['__main__']). However, as you can see by running the snippets, after changing the __qualname__ of hahaha to 'sdfsdf', the object hahaha as well as its copy abcabc has changed to something like <function sdfsdf at 'some_address'>. What is the difference between the objects in sys.modules['__main__'] and <function sdfsdf at 'some_address'>? | Pickling of function objects is defined in the save_global method in pickle.py: First, the name of the function is retrieved via __qualname__: name = getattr(obj, '__qualname__', None) Afterwards, after retrieving the module, it is reimported: __import__(module_name, level=0) module = sys.modules[module_name] This freshly imported module is then used to look up the function as an attribute: obj2, parent = _getattribute(module, name) obj2 would now be a new copy of the function, but since sdfsdf doesn't exist in this module, pickling fails here. You can make this work, but you have to be consistent: >>> import sys >>> import pickle >>> def hahaha(): return 1 >>> hahaha.__qualname__ = "sdfsdf" >>> setattr(sys.modules["__main__"], "sdfsdf", hahaha) >>> pickle.dumps(hahaha) b'\x80\x04\x95\x17\x00\x00\x00\x00\x00\x00\x00\x8c\x08__main__\x94\x8c\x06sdfsdf\x94\x93\x94.' | 8 | 5 |
64,095,876 | 2020-9-28 | https://stackoverflow.com/questions/64095876/multiprocessing-fork-vs-spawn | I was reading the description of the two from the python doc: spawn The parent process starts a fresh python interpreter process. The child process will only inherit those resources necessary to run the process objects run() method. In particular, unnecessary file descriptors and handles from the parent process will not be inherited. Starting a process using this method is rather slow compared to using fork or forkserver. [Available on Unix and Windows. The default on Windows and macOS.] fork The parent process uses os.fork() to fork the Python interpreter. The child process, when it begins, is effectively identical to the parent process. All resources of the parent are inherited by the child process. Note that safely forking a multithreaded process is problematic. [Available on Unix only. The default on Unix.] And my question is: is it that the fork is much quicker 'cuz it does not try to identify which resources to copy? is it that, since fork duplicates everything, it would "waste" much more resources comparing to spawn()? | is it that the fork is much quicker 'cuz it does not try to identify which resources to copy? Yes, it's much quicker. The kernel can clone the whole process and only copies modified memory-pages as a whole. Piping resources to a new process and booting the interpreter from scratch is not necessary. is it that, since fork duplicates everything, it would "waste" much more resources comparing to spawn()? Fork on modern kernels does only "copy-on-write" and it only affects memory-pages which actually change. The caveat is that "write" already encompasses merely iterating over an object in CPython. That's because the reference-count for the object gets incremented. If you have long running processes with lots of small objects in use, this can mean you waste more memory than with spawn. Anecdotally I recall Facebook claiming to have memory-usage reduced considerably with switching from "fork" to "spawn" for their Python-processes. | 71 | 21 |
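A small sketch showing how to pick the start method explicitly rather than relying on the platform default ("fork" is Unix-only; the worker function is just a stand-in):

```python
import multiprocessing as mp

def square(x):
    return x * x

if __name__ == "__main__":
    # "fork" clones the parent via copy-on-write; "spawn" boots a fresh interpreter
    # and re-imports this module, which is why the __main__ guard is required.
    ctx = mp.get_context("spawn")      # or "fork" on Unix
    with ctx.Pool(processes=2) as pool:
        print(pool.map(square, range(4)))
```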
64,094,162 | 2020-9-27 | https://stackoverflow.com/questions/64094162/i-have-accidently-delete-my-sceret-key-form-settings-py-in-django | While pulling from GitHub I lost my secret key, which I had updated. Is there any way to obtain the secret key for the same project? | Run: python manage.py shell and enter the following lines sequentially: from django.core.management.utils import get_random_secret_key print(get_random_secret_key()) exit() Copy this secret key to SECRET_KEY in your settings.py and reload the server. If it does not work, refresh the page with Ctrl+Shift+R to clear the cache. If it still does not work, try removing all rows from the django_session table in your database. My English skills are not good, sorry about that. | 9 | 13
64,090,818 | 2020-9-27 | https://stackoverflow.com/questions/64090818/unconsumed-column-names-sqlalchemy-python | I am facing the following error using SQLAlchemy: Unconsumed column names: company I want to insert data for 1 specific column, and not all columns in the table: INSERT INTO customers (company) VALUES ('sample name'); My code: engine.execute(table('customers').insert().values({'company': 'sample name'})) Create Table: 'CREATE TABLE `customers` ( `id` int unsigned NOT NULL AUTO_INCREMENT, `company` varchar(255) DEFAULT NULL, `first_name` varchar(255) DEFAULT NULL, `last_name` varchar(255) DEFAULT NULL, `phone` varchar(255) DEFAULT NULL, PRIMARY KEY (`id`), UNIQUE KEY `id_UNIQUE` (`id`), UNIQUE KEY `company_UNIQUE` (`company`) ) ENGINE=InnoDB AUTO_INCREMENT=63 DEFAULT CHARSET=utf8' | After hours of frustration, I was able to test a way that I think works for my use case. As we know, you can insert to specific columns, or all columns in a table. In my use case, I dynamically need to insert to the customers table, depending on what columns a user has permissions to insert to. I found that I needed to define all columns in the table() method of sqlalchemy, but I can pass in whatever columns and values that I need dynamically to the values() method. Final code: engine.execute(table('customers', column('company'), column('first_name'), column('last_name'), column('email'), column('phone')).insert().values({'company': 'sample name'})) | 11 | 10 |
64,083,104 | 2020-9-26 | https://stackoverflow.com/questions/64083104/making-python-generator-via-c20-coroutines | Let's say I have this python code: def double_inputs(): while True: x = yield yield x * 2 gen = double_inputs() next(gen) print(gen.send(1)) It prints "2", just as expected. I can make a generator in c++20 like that: #include <coroutine> template <class T> struct generator { struct promise_type; using coro_handle = std::coroutine_handle<promise_type>; struct promise_type { T current_value; auto get_return_object() { return generator{coro_handle::from_promise(*this)}; } auto initial_suspend() { return std::suspend_always{}; } auto final_suspend() { return std::suspend_always{}; } void unhandled_exception() { std::terminate(); } auto yield_value(T value) { current_value = value; return std::suspend_always{}; } }; bool next() { return coro ? (coro.resume(), !coro.done()) : false; } T value() { return coro.promise().current_value; } generator(generator const & rhs) = delete; generator(generator &&rhs) :coro(rhs.coro) { rhs.coro = nullptr; } ~generator() { if (coro) coro.destroy(); } private: generator(coro_handle h) : coro(h) {} coro_handle coro; }; generator<char> hello(){ //TODO:send string here via co_await, but HOW??? std::string word = "hello world"; for(auto &ch:word){ co_yield ch; } } int main(int, char**) { for (auto i = hello(); i.next(); ) { std::cout << i.value() << ' '; } } This generator just produces a string letter by letter, but the string is hardcoded in it. In python, it is possible not only to yield something FROM the generator but to yield something TO it too. I believe it could be done via co_await in C++. I need it to work like this: generator<char> hello(){ std::string word = co_await producer; // Wait string from producer somehow for(auto &ch:word){ co_yield ch; } } int main(int, char**) { auto gen = hello(); //make consumer producer("hello world"); //produce string for (; gen.next(); ) { std::cout << gen.value() << ' '; //consume string letter by letter } } How can I achieve that? How to make this "producer" using c++20 coroutines? | You have essentially two problems to overcome if you want to do this. The first is that C++ is a statically typed language. This means that the types of everything involved need to be known at compile time. This is why your generator type needs to be a template, so that the user can specify what type it shepherds from the coroutine to the caller. So if you want to have this bi-directional interface, then something on your hello function must specify both the output type and the input type. The simplest way to go about this is to just create an object and pass a non-const reference to that object to the generator. Each time it does a co_yield, the caller can modify the referenced object and then ask for a new value. The coroutine can read from the reference and see the given data. However, if you insist on using the future type for the coroutine as both output and input, then you need to both solve the first problem (by making your generator template take OutputType and InputType) as well as this second problem. See, your goal is to get a value to the coroutine. The problem is that the source of that value (the function calling your coroutine) has a future object. But the coroutine cannot access the future object. Nor can it access the promise object that the future references. Or at least, it can't do so easily. There are two ways to go about this, with different use cases. 
The first manipulates the coroutine machinery to backdoor a way into the promise. The second manipulates a property of co_yield to do basically the same thing. Transform The promise object for a coroutine is usually hidden and inaccessible from the coroutine. It is accessible to the future object, which the promise creates and which acts as an interface to the promised data. But it is also accessible during certain parts of the co_await machinery. Specifically, when you perform a co_await on any expression in a coroutine, the machinery looks at your promise type to see if it has a function called await_transform. If so, it will call that promise object's await_transform on every expression you co_await on (at least, in a co_await that you directly write, not implicit awaits, such as the one created by co_yield). As such, we need to do two things: create an overload of await_transform on the promise type, and create a type whose sole purpose is to allow us to call that await_transform function. So that would look something like this: struct generator_input {}; ... //Within the promise type: auto await_transform(generator_input); One quick note. The downside of using await_transform like this is that, by specifying even one overload of this function for our promise, we impact every co_await in any coroutine that uses this type. For a generator coroutine, that's not very important, since there's not much reason to co_await unless you're doing a hack like this. But if you were creating a more general mechanism that could distinctly await on arbitrary awaitables as part of its generation, you'd have a problem. OK, so we have this await_transform function; what does this function need to do? It needs to return an awaitable object, since co_await is going to await on it. But the purpose of this awaitable object is to deliver a reference to the input type. Fortunately, the mechanism co_await uses to convert the awaitable into a value is provided by the awaitable's await_resume method. So ours can just return an InputType&: //Within the `generator<OutputType, InputType>`: struct passthru_value { InputType &ret_; bool await_ready() {return true;} void await_suspend(coro_handle) {} InputType &await_resume() { return ret_; } }; //Within the promise type: auto await_transform(generator_input) { return passthru_value{input_value}; //Where `input_value` is the `InputType` object stored by the promise. } This gives the coroutine access to the value, by invoking co_await generator_input{};. Note that this returns a reference to the object. The generator type can easily be modified to allow the ability to modify an InputType object stored in the promise. Simply add a pair of send functions for overwriting the input value: void send(const InputType &input) { coro.promise().input_value = input; } void send(InputType &&input) { coro.promise().input_value = std::move(input); } This represents an asymmetric transport mechanism. The coroutine retrieves a value at a place and time of its own choosing. As such, it is under no real obligation to respond instantly to any changes. This is good in some respects, as it allows a coroutine to insulate itself from deleterious changes. If you're using a range-based for loop over a container, that container cannot be directly modified (in most ways) by the outside world or else your program will exhibit UB. So if the coroutine is fragile in that way, it can copy the data from the user and thus prevent the user from modifying it. All in all, the needed code isn't that large. 
Here's a run-able example of your code with these modifications: #include <coroutine> #include <exception> #include <string> #include <iostream> struct generator_input {}; template <typename OutputType, typename InputType> struct generator { struct promise_type; using coro_handle = std::coroutine_handle<promise_type>; struct passthru_value { InputType &ret_; bool await_ready() {return true;} void await_suspend(coro_handle) {} InputType &await_resume() { return ret_; } }; struct promise_type { OutputType current_value; InputType input_value; auto get_return_object() { return generator{coro_handle::from_promise(*this)}; } auto initial_suspend() { return std::suspend_always{}; } auto final_suspend() { return std::suspend_always{}; } void unhandled_exception() { std::terminate(); } auto yield_value(OutputType value) { current_value = value; return std::suspend_always{}; } void return_void() {} auto await_transform(generator_input) { return passthru_value{input_value}; } }; bool next() { return coro ? (coro.resume(), !coro.done()) : false; } OutputType value() { return coro.promise().current_value; } void send(const InputType &input) { coro.promise().input_value = input; } void send(InputType &&input) { coro.promise().input_value = std::move(input); } generator(generator const & rhs) = delete; generator(generator &&rhs) :coro(rhs.coro) { rhs.coro = nullptr; } ~generator() { if (coro) coro.destroy(); } private: generator(coro_handle h) : coro(h) {} coro_handle coro; }; generator<char, std::string> hello(){ auto word = co_await generator_input{}; for(auto &ch: word){ co_yield ch; } } int main(int, char**) { auto test = hello(); test.send("hello world"); while(test.next()) { std::cout << test.value() << ' '; } } Be more yielding An alternative to using an explicit co_await is to exploit a property of co_yield. Namely, co_yield is an expression and therefore it has a value. Specifically, it is (mostly) equivalent to co_await p.yield_value(e), where p is the promise object (ohh!) and e is what we're yielding. Fortunately, we already have a yield_value function; it returns std::suspend_always. But it could also return an object that always suspends, but also which co_await can unpack into an InputType&: struct yield_thru { InputType &ret_; bool await_ready() {return false;} void await_suspend(coro_handle) {} InputType &await_resume() { return ret_; } }; ... //in the promise auto yield_value(OutputType value) { current_value = value; return yield_thru{input_value}; } This is a symmetric transport mechanism; for every value you yield, you receive a value (which may be the same one as before). Unlike the explicit co_await method, you can't receive a value before you start to generate them. This could be useful for certain interfaces. And of course, you could combine them as you see fit. | 20 | 17 |