question_id (int64, 59.5M to 79.4M) | creation_date (string, 8 to 10 chars) | link (string, 60 to 163 chars) | question (string, 53 to 28.9k chars) | accepted_answer (string, 26 to 29.3k chars) | question_vote (int64, 1 to 410) | answer_vote (int64, -9 to 482) |
---|---|---|---|---|---|---|
62,106,028 | 2020-5-30 | https://stackoverflow.com/questions/62106028/what-is-the-difference-between-np-linspace-and-np-arange | I have always used np.arange. I recently came across np.linspace. I am wondering what exactly is the difference between them... Looking at their documentation: np.arange: Return evenly spaced values within a given interval. np.linspace: Return evenly spaced numbers over a specified interval. The only difference I can see is linspace having more options... Like choosing to include the last element. Which one of these two would you recommend and why? And in which cases is np.linspace superior? | np.linspace allows you to define how many values you get including the specified min and max value. It infers the stepsize: >>> np.linspace(0,1,11) array([0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. ]) np.arange allows you to define the stepsize and infers the number of steps(the number of values you get). >>> np.arange(0,1,.1) array([0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]) contributions from user2357112: np.arange excludes the maximum value unless rounding error makes it do otherwise. For example, the following results occur due to rounding error: >>> numpy.arange(1, 1.3, 0.1) array([1. , 1.1, 1.2, 1.3]) You can exclude the stop value (in our case 1.3) using endpoint=False: >>> numpy.linspace(1, 1.3, 3, endpoint=False) array([1. , 1.1, 1.2]) | 79 | 109 |
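A minimal cross-check of the two calls from the answer above; the `np.allclose` comparison is an added illustration, not part of the original answer:

```python
import numpy as np

a = np.arange(0, 1, 0.1)                   # step fixed, count inferred
b = np.linspace(0, 1, 10, endpoint=False)  # count fixed, step inferred

print(len(a), len(b))      # 10 10
print(np.allclose(a, b))   # True -- same grid, up to floating-point rounding
```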
62,100,869 | 2020-5-30 | https://stackoverflow.com/questions/62100869/ansible-error-the-python-2-bindings-for-rpm-are-needed-for-this-module | Im trying to pip install a requirements file in my python3 environment using the following task pip: python3: yes requirements: ./requirements/my_requirements.txt extra_args: -i http://mypypi/windows/simple I checked which version ansible is running on the controller node (RH7) and it's 3.6.8 ansible-playbook 2.9.9 config file = None configured module search path = ['/home/{hidden}/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.6/site-packages/ansible executable location = /usr/local/bin/ansible-playbook python version = 3.6.8 (default, Jun 11 2019, 15:15:01) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] No config file found; using defaults I am however getting the following error: fatal: [default]: FAILED! => {"changed": false, "msg": "The Python 2 bindings for rpm are needed for this module. If you require Python 3 support use the `dnf` Ansible module instead.. The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."} My controller node is running RH7. The targets are centos7 (provisioned by vagrantfiles) Does anyonek now how to solve this? | I had a similar problem with the "Amazon Linux 2" distribution that uses yum, but does not support dnf as of this writing. As mentioned in the comments above, my problem was in the ansible-managed nodes (AWS EC2 instances running Amazon Linux 2) and not in the controller. Solved it by imposing the use of python2, adding ansible_python_interpreter=/usr/bin/python2 for this group of hosts in the ansible inventory file, as in the following snippet: [amz_linux] server2 ansible_host=ec2-xx-yy-zz-pp.eu-west-1.compute.amazonaws.com [amz_linux:vars] ansible_user=ec2-user ansible_python_interpreter=/usr/bin/python2 Tried it with this playbook, adapted from a Redhat quick guide. --- - hosts: amz_linux become: yes tasks: - name: install Apache server yum: name: httpd state: latest - name: enable and start Apache server service: name: httpd enabled: yes state: started - name: create web admin group group: name: web state: present - name: create web admin user user: name: webadm comment: "Web Admin" groups: web append: yes - name: set content directory group/permissions file: path: /var/www/html owner: root group: web state: directory mode: u=rwx,g=rwx,o=rx,g+s - name: create default page content copy: content: "Welcome to {{ ansible_fqdn}} on {{ ansible_default_ipv4.address }}" dest: /var/www/html/index.html owner: webadm group: web mode: u=rw,g=rw,o=r Actual ansible-playbook run (after using ssh-add to add the instance private key to the ssh agent.) $ ansible-playbook -i ansible/hosts ansible/apache_amz_linux.yaml PLAY [amz_linux] ********************************************************************************************************** TASK [Gathering Facts] **************************************************************************************************** The authenticity of host 'ec2-xxxxxxxxxxx.eu-west-1.compute.amazonaws.com (xxxxxxxxxxx)' can't be established. ECDSA key fingerprint is SHA256:klksjdflskdjflskdfjsldkfjslkdjflskdjf/sdkfj. Are you sure you want to continue connecting (yes/no/[fingerprint])? 
yes ok: [server2] TASK [install Apache server] ********************************************************************************************** changed: [server2] TASK [enable and start Apache server] ************************************************************************************* changed: [server2] TASK [create web admin group] ********************************************************************************************* changed: [server2] TASK [create web admin user] ********************************************************************************************** changed: [server2] TASK [set content directory group/permissions] **************************************************************************** changed: [server2] TASK [create default page content] **************************************************************************************** changed: [server2] PLAY RECAP **************************************************************************************************************** server2 : ok=7 changed=6 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 | 15 | 15 |
62,066,474 | 2020-5-28 | https://stackoverflow.com/questions/62066474/python-flask-automatically-generated-swagger-openapi-3-0 | Im trying to generate swagger document for my existing Flask app, I tried with Flask-RESTPlus initially and found out the project is abundant now and checked at the forked project flask-restx https://github.com/python-restx/flask-restx but still i dont think they support openapi 3.0 Im a bit confused to choose the package for my need. Im looking to solve a problem where we dont want to manually create swagger doc for our API, instead we would like to generate automatically using a packages. import os import requests import json, yaml from flask import Flask, after_this_request, send_file, safe_join, abort from flask_restx import Resource, Api, fields from flask_restx.api import Swagger app = Flask(__name__) api = Api(app=app, doc='/docs', version='1.0.0-oas3', title='TEST APP API', description='TEST APP API') response_fields = api.model('Resource', { 'value': fields.String(required=True, min_length=1, max_length=200, description='Book title') }) @api.route('/compiler/', endpoint='compiler') # @api.doc(params={'id': 'An ID'}) @api.doc(responses={403: 'Not Authorized'}) @api.doc(responses={402: 'Not Authorized'}) # @api.doc(responses={200: 'Not Authorized'}) class DemoList(Resource): @api.expect(response_fields, validate=True) @api.marshal_with(response_fields, code=200) def post(self): """ returns a list of conferences """ api.payload["value"] = 'Im the response ur waiting for' return api.payload @api.route('/swagger') class HelloWorld(Resource): def get(self): data = json.loads(json.dumps(api.__schema__)) with open('yamldoc.yml', 'w') as yamlf: yaml.dump(data, yamlf, allow_unicode=True, default_flow_style=False) file = os.path.abspath(os.getcwd()) try: @after_this_request def remove_file(resp): try: os.remove(safe_join(file, 'yamldoc.yml')) except Exception as error: log.error("Error removing or closing downloaded file handle", error) return resp return send_file(safe_join(file, 'yamldoc.yml'), as_attachment=True, attachment_filename='yamldoc.yml', mimetype='application/x-yaml') except FileExistsError: abort(404) # main driver function if __name__ == '__main__': app.run(port=5003, debug=True) The above code is a combination of my try on different packages, but it can generate swagger 2.0 doc but im trying to generate doc for openapi 3.0 Can some one suggest a good package which is supporting openapi 3.0 way of generating swagger yaml or json. | I found a package to generate openapi 3.0 document https://apispec.readthedocs.io/en/latest/install.html This package serves the purpose neatly. Find the below code for detailed usage. 
from apispec import APISpec from apispec.ext.marshmallow import MarshmallowPlugin from apispec_webframeworks.flask import FlaskPlugin from marshmallow import Schema, fields from flask import Flask, abort, request, make_response, jsonify from pprint import pprint import json class DemoParameter(Schema): gist_id = fields.Int() class DemoSchema(Schema): id = fields.Int() content = fields.Str() spec = APISpec( title="Demo API", version="1.0.0", openapi_version="3.0.2", info=dict( description="Demo API", version="1.0.0-oas3", contact=dict( email="[email protected]" ), license=dict( name="Apache 2.0", url='http://www.apache.org/licenses/LICENSE-2.0.html' ) ), servers=[ dict( description="Test server", url="https://resources.donofden.com" ) ], tags=[ dict( name="Demo", description="Endpoints related to Demo" ) ], plugins=[FlaskPlugin(), MarshmallowPlugin()], ) spec.components.schema("Demo", schema=DemoSchema) # spec.components.schema( # "Gist", # { # "properties": { # "id": {"type": "integer", "format": "int64"}, # "name": {"type": "string"}, # } # }, # ) # # spec.path( # path="/gist/{gist_id}", # operations=dict( # get=dict( # responses={"200": {"content": {"application/json": {"schema": "Gist"}}}} # ) # ), # ) # Extensions initialization # ========================= app = Flask(__name__) @app.route("/demo/<gist_id>", methods=["GET"]) def my_route(gist_id): """Gist detail view. --- get: parameters: - in: path schema: DemoParameter responses: 200: content: application/json: schema: DemoSchema 201: content: application/json: schema: DemoSchema """ # (...) return jsonify('foo') # Since path inspects the view and its route, # we need to be in a Flask request context with app.test_request_context(): spec.path(view=my_route) # We're good to go! Save this to a file for now. with open('swagger.json', 'w') as f: json.dump(spec.to_dict(), f) pprint(spec.to_dict()) print(spec.to_yaml()) Hope this helps someone!! :) Update: More Detailed Documents Python Flask automatically generated Swagger 3.0/Openapi Document - http://donofden.com/blog/2020/06/14/Python-Flask-automatically-generated-Swagger-3-0-openapi-Document Python Flask automatically generated Swagger 2.0 Document - http://donofden.com/blog/2020/05/30/Python-Flask-automatically-generated-Swagger-2-0-Document | 19 | 25 |
62,119,073 | 2020-5-31 | https://stackoverflow.com/questions/62119073/why-are-migrations-files-often-excluded-from-code-formatting | We're applying Black code style to a django project. In all the tutorials / examples I find (such as in django cookiecutter and this blog), I keep seeing django's migrations files excluded from the linter. But to my mind, these are still python files. Sure, django may not autogenerate them to meet the Black spec. But it's not like developers always write their code to meet Black spec... that's what linting is for! Why would migration files be considered different to any other python files?! NB I'm aware of the possibility of changing an already-applied migration if you've got pre-existing migrations - this requires care on first application (as does first application to the rest of the codebase, frankly) but surely isn't a reason not to do it? EDIT - @torxed asked for an example of a django migration file I'm not sure how helpful this'll be tbh, but a typical django migration file might look like this (in this case adding a char field to a table): # Generated by Django 2.2.3 on 2019-10-28 09:45 from django.db import migrations, models class Migration(migrations.Migration): dependencies = [ ('api', '0009_figures_notes_tables'), ] operations = [ migrations.AlterField( model_name='project', name='processed_kmz_sha', field=models.CharField(max_length=255), ), ] | I bit the bullet and applied Black to my migrations files, progressively across half a dozen django projects. No problems at all, everything deployed in production for months now. So the answer is: No reason at all why not to do this, and I think migrations files should be included, so that reading them is a consistent experience with the rest of the project. | 14 | 8 |
62,158,734 | 2020-6-2 | https://stackoverflow.com/questions/62158734/deprecationwarning-the-default-dtype-for-empty-series-will-be-object-instead | I appending a new row to an existing pandas dataframe as follows: df= df.append(pd.Series(), ignore_index=True) This is resulting in the subject DeprecationWarning. The existing df has a mix of string, float and dateime.date datatypes (8 columns totals). Is there a way to explicitly specify the columns types in the df.append? I have looked here and here but I still have no solution. Please advise if there is a better way to append a row to the end of an existing dataframe without triggering this warning. | You can try this Type_new = pd.Series([],dtype=pd.StringDtype()) This will create a blank data frame for us. | 18 | 17 |
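A short sketch of that fix applied to the appending pattern from the question; `df.append` was still current in the pandas releases of that time (it has since been removed in favour of `pd.concat`):

```python
import pandas as pd

df = pd.DataFrame({"name": ["a"], "score": [1.0]})

# an explicit dtype on the empty Series silences the DeprecationWarning;
# the appended blank row still comes out as NaN in every column
df = df.append(pd.Series(dtype="object"), ignore_index=True)
print(df)
```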
62,082,873 | 2020-5-29 | https://stackoverflow.com/questions/62082873/conda-not-activated-in-power-shell | I have already install anaconda on my Windows 10 laptop. I'm trying to activate the Python environment named pyenv. First, I check the conda env list in my laptop, this is the output on the power shell: PS C:\Users\User> conda env list # conda environments: # base * C:\Users\User\Anaconda3 pyenv C:\Users\User\Anaconda3\envs\pyenv Then I activate pyenv: PS C:\Users\User> conda activate pyenv But I check again, it still activates base environment: PS C:\Users\User> conda env list # conda environments: # base * C:\Users\User\Anaconda3 pyenv C:\Users\User\Anaconda3\envs\pyenv When I use the Anaconda prompt, it works normally: (base) C:\Users\User>conda activate pyenv (pyenv) C:\Users\User> Does anyone know why it causes this problem and how to fix this? Update: Running conda init powershell: PS C:\Users\User> conda init powershell no change C:\Users\User\Anaconda3\Scripts\conda.exe no change C:\Users\User\Anaconda3\Scripts\conda-script.py no change C:\Users\User\Anaconda3\Scripts\conda-env-script.py no change C:\Users\User\Anaconda3\condabin\conda.bat no change C:\Users\User\Anaconda3\Library\bin\conda.bat no change C:\Users\User\Anaconda3\condabin\_conda_activate.bat no change C:\Users\User\Anaconda3\condabin\rename_tmp.bat no change C:\Users\User\Anaconda3\condabin\conda_hook.bat no change C:\Users\User\Anaconda3\Scripts\activate.bat no change C:\Users\User\Anaconda3\condabin\activate.bat no change C:\Users\User\Anaconda3\condabin\deactivate.bat modified C:\Users\User\Anaconda3\etc\profile.d\conda.sh modified C:\Users\User\Anaconda3\etc\fish\conf.d\conda.fish no change C:\Users\User\Anaconda3\shell\condabin\Conda.psm1 modified C:\Users\User\Anaconda3\shell\condabin\conda-hook.ps1 no change C:\Users\User\Anaconda3\Lib\site-packages\xontrib\conda.xsh modified C:\Users\User\Anaconda3\etc\profile.d\conda.csh modified C:\Users\User\Documents\WindowsPowerShell\profile.ps1 Update 2: It works when using CMD: C:\Users\User>conda activate pyenv (pyenv) C:\Users\User> | After a while, my Powershell appear this error when I open it. . : File C:\Users\User\Documents\WindowsPowerShell\profile.ps1 cannot be loaded because running scripts is disabled on this system. For more information, see about_Execution_Policies at https:/go.microsoft.com/fwlink/?LinkID=135170. At line:1 char:3 + . 'C:\Users\BinoyGhosh\Documents\WindowsPowerShell\profile.ps1' + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : SecurityError: (:) [], PSSecurityException + FullyQualifiedErrorId : UnauthorizedAccess Then I found this solution. Run Powershell as Administrator Run this line set-executionpolicy remotesigned Close the terminal Then it works. | 20 | 21 |
62,141,051 | 2020-6-1 | https://stackoverflow.com/questions/62141051/flask-how-to-register-a-wrapper-to-all-methods | I've been moving from bottle to flask. I'm the type of person that prefers writing my own code instead of downloading packages from the internet if I the code needed is 20 lines or less. Take for example support for Basic authentication protocol. In bottle I could write: def allow_anonymous(): """assign a _allow_anonymous flag to functions not requiring authentication""" def wrapper(fn): fn._allow_anonymous = True return fn return wrapper def auth_middleware(fn): """perform authentication (pre-req)""" def wrapper(*a, **ka): # if the allow_anonymous annotation is set then bypass this auth if hasattr(fn, '_allow_anonymous') and fn._allow_anonymous: return fn(*a, **ka) user, password = request.auth or (None, None) if user is None or not check(user, password): err = HTTPError(401, text) err.add_header('WWW-Authenticate', 'Basic realm="%s"' % realm) return err return fn(*a, **ka) return wrapper ... app = Bottle() app.install(middleware.auth_middleware) The above code gave me full support for basic auth protocol for all methods unless explicitly decorated with the @allow_anonymous wrapper. I'm just a beginner with flask. I'm having a hard time accomplishing the bottle-compatible code above in flask without adding dependencies on more python packages or excessive boiler-plate. How is this handled directly and clearly in flask? | You can definitely some of the functionality of flask-httpauth yourself, if you wish :-P I would think you will need to play some before_request games (not very beautiful), or alternatively call flask's add_url_rule with a decorated method for each api endpoint (or have a route decorator of your own that will do this). The add_url_rule gets a view function that is usually your api endpoint handler, but in your case, will be a wrapped method in a manner very much like the one you gave in the post (auth_middleware). The gist of it: from flask import Flask, make_response, request app = Flask(__name__) def view_wrapper(fn): """ Create a wrapped view function that checks user authorization """ def protected_view(*a, **ka): # if the allow_anonymous annotation is set then bypass this auth if hasattr(fn, '_allow_anonymous') and fn._allow_anonymous: return fn(*a, **ka) # consult werkzeug's authorization mixin user, password = (request.authorization.username, request.authorization.password) if request.authorization else (None, None) if user is None or not check(user, password): err_response = make_response(text, 401) err_response.headers['WWW-Authenticate'] = 'Basic realm="%s"' % realm return err_response return fn(*a, **ka) return protected_view # An endpoint def hello(): return 'hello there' app.add_url_rule('/', 'hello', view_wrapper(hello)) Of course, this can (and should) be further enhanced with Blueprints, etc. | 13 | 5 |
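For completeness, a rough sketch of the `before_request` alternative mentioned above; `check()` and the `_allow_anonymous` flag are assumed to be the ones defined in the question's own middleware:

```python
from flask import Flask, request, make_response

app = Flask(__name__)

@app.before_request
def enforce_basic_auth():
    # look up the view that will handle this request and honour the
    # @allow_anonymous flag from the question
    view = app.view_functions.get(request.endpoint)
    if view is not None and getattr(view, "_allow_anonymous", False):
        return None  # whitelisted view, no credentials required

    auth = request.authorization  # werkzeug parses the Basic auth header
    if auth is None or not check(auth.username, auth.password):  # check() as in the question
        resp = make_response("Unauthorized", 401)
        resp.headers["WWW-Authenticate"] = 'Basic realm="private"'
        return resp
    return None  # returning None lets Flask continue on to the view
```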
62,057,838 | 2020-5-28 | https://stackoverflow.com/questions/62057838/how-to-retrieve-the-labels-used-in-a-segmentation-mask-in-aws-sagemaker | From a segmentation mask, I am trying to retrieve what labels are being represented in the mask. This is the image I am running through a semantic segmentation model in AWS Sagemaker. Code for making prediction and displaying mask. from sagemaker.predictor import json_serializer, json_deserializer, RealTimePredictor from sagemaker.content_types import CONTENT_TYPE_CSV, CONTENT_TYPE_JSON %%time ss_predict = sagemaker.RealTimePredictor(endpoint=ss_model.endpoint_name, sagemaker_session=sess, content_type = 'image/jpeg', accept = 'image/png') return_img = ss_predict.predict(img) from PIL import Image import numpy as np import io num_labels = 21 mask = np.array(Image.open(io.BytesIO(return_img))) plt.imshow(mask, vmin=0, vmax=num_labels-1, cmap='jet') plt.show() This image is the segmentation mask that was created and it represents the motorbike and everything else is the background. [ As you can see from the code there are 21 possible labels and 2 were used in the mask, one for the motorbike and another for the background. What I would like to figure out now is how to print which labels were actually used in this mask out of the 21 possible options? Please let me know if you need any further information and any help is much appreciated. | Somewhere you should have a mapping from label integers to label classes, e.g. label_map = {0: 'background', 1: 'motorbike', 2: 'train', ...} If you are using the Pascal VOC dataset, that would be (1=aeroplane, 2=bicycle, 3=bird, 4=boat, 5=bottle, 6=bus, 7=car , 8=cat, 9=chair, 10=cow, 11=diningtable, 12=dog, 13=horse, 14=motorbike, 15=person, 16=potted plant, 17=sheep, 18=sofa, 19=train, 20=tv/monitor) - see here: http://host.robots.ox.ac.uk/pascal/VOC/voc2012/segexamples/index.html Then you can simply use that map: used_classes = np.unique(mask) for cls in used_classes: print("Found class: {}".format(label_map[cls])) | 8 | 1 |
62,148,564 | 2020-6-2 | https://stackoverflow.com/questions/62148564/read-files-with-only-specific-names-from-amazon-s3 | I have connected to Amazon S3 and am trying to retrieve data from the JSON content from multiple buckets using the below code. But I have to read only specific JSON files, but not all. How do I do it? Code: for i in bucket: try: result = client.list_objects(Bucket=i,Prefix = 'PROCESSED_BY/FILE_JSON', Delimiter='/') content_object = s3.Object(i, "PROCESSED_BY/FILE_JSON/?Account.json") file_content = content_object.get()['Body'].read().decode('utf-8') json_content = json.loads(file_content) except KeyError: pass Bucket structure example. test-eob/PROCESSED_BY/FILE_JSON/222-Account.json test-eob/PROCESSED_BY/FILE_JSON/1212121-Account.json test-eob/PROCESSED_BY/FILE_JSON/122-multi.json test-eob/PROCESSED_BY/FILE_JSON/qwqwq-Account.json test-eob/PROCESSED_BY/FILE_JSON/wqwqw-multi.json From the above list, I want to only read *-Account.json files. How can I achieve this? | There are several ways to do this in Python. For example, checking if 'stringA' is in 'stringB': list1=['test-eob/PROCESSED_BY/FILE_JSON/222-Account.json', 'test-eob/PROCESSED_BY/FILE_JSON/1212121-Account.json', 'test-eob/PROCESSED_BY/FILE_JSON/122-multi.json', 'test-eob/PROCESSED_BY/FILE_JSON/qwqwq-Account.json', 'test-eob/PROCESSED_BY/FILE_JSON/wqwqw-multi.json',] for i in list1: if 'Account' in i: print (i) else: pass | 9 | 3 |
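A hedged sketch of how that filter might be combined with an actual S3 listing; the bucket name and prefix are taken from the question, and `read_account_jsons` is just an illustrative helper name:

```python
import json
import boto3

s3_client = boto3.client("s3")

def read_account_jsons(bucket, prefix="PROCESSED_BY/FILE_JSON/"):
    """Yield (key, parsed JSON) for keys ending in '-Account.json' under the prefix."""
    paginator = s3_client.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            if key.endswith("-Account.json"):
                body = s3_client.get_object(Bucket=bucket, Key=key)["Body"].read()
                yield key, json.loads(body.decode("utf-8"))

for key, data in read_account_jsons("test-eob"):
    print(key)
```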
62,123,125 | 2020-5-31 | https://stackoverflow.com/questions/62123125/how-to-join-strings-between-parentheses-in-a-list-of-strings | poke_list = [... 'Charizard', '(Mega', 'Charizard', 'X)', '78', '130', ...] #1000+ values Is it possible to merge strings that start with '(' and end with ')' and then reinsert it into the same list or a new list? My desired output poke_list = [... 'Charizard (Mega Charizard X)', '78', '130', ...] | Another way to do it, slightly shorter than other solution poke_list = ['Bulbasaur', 'Charizard', '(Mega', 'Charizard', 'X)', '78', 'Pikachu', '(Raichu)', '130'] fixed = [] acc = fixed for x in poke_list: if x[0] == '(': acc = [fixed.pop()] acc.append(x) if x[-1] == ')': fixed.append(' '.join(acc)) acc = fixed if not acc is fixed: fixed.append(' '.join(acc)) print(fixed) Also notice that this solution assumes that the broken list doesn't start with a parenthesis to fix, and also manage the case where an item has both opening and closing parenthesis (case excluded in other solution) The idea is to either append values to main list (fixed) or to some inner list which will be joined later if we have detected opening parenthesis. If the inner list was never closed when exiting the loop (likely illegal) we append it anyway to the fixed list when exiting the loop. This way of doing things if very similar to the transformation of a flat expression containing parenthesis to a hierarchy of lists. The code would of course be slightly different and should manage more than one level of inner list. | 7 | 2 |
62,092,147 | 2020-5-29 | https://stackoverflow.com/questions/62092147/how-to-efficiently-assign-to-a-slice-of-a-tensor-in-tensorflow | I want to assign some values to slices of an input tensor in one of my model in TensorFlow 2.x (I am using 2.2 but ready to accept a solution for 2.1). A non-working template of what I am trying to do is: import tensorflow as tf from tensorflow.keras.models import Model class AddToEven(Model): def call(self, inputs): outputs = inputs outputs[:, ::2] += inputs[:, ::2] return outputs of course when building this (AddToEven().build(tf.TensorShape([None, None]))) I get the following error: TypeError: 'Tensor' object does not support item assignment I can achieve this simple example via the following: class AddToEvenScatter(Model): def call(self, inputs): batch_size = tf.shape(inputs)[0] n = tf.shape(inputs)[-1] update_indices = tf.range(0, n, delta=2)[:, None] scatter_nd_perm = [1, 0] inputs_reshaped = tf.transpose(inputs, scatter_nd_perm) outputs = tf.tensor_scatter_nd_add( inputs_reshaped, indices=update_indices, updates=inputs_reshaped[::2], ) outputs = tf.transpose(outputs, scatter_nd_perm) return outputs (you can sanity-check with: model = AddToEvenScatter() model.build(tf.TensorShape([None, None])) model(tf.ones([1, 10])) ) But as you can see it's very complicated to write. And this is only for a static number of updates (here 1) on a 1D (+ batch size) tensor. What I want to do is a bit more involved and I think writing it with tensor_scatter_nd_add is going to be a nightmare. A lot of the current QAs on the topic cover the case for variables but not tensors (see e.g. this or this). It is mentionned here that indeed pytorch supports this, so I am surprised to see no response from any tf members on that topic recently. This answer doesn't really help me, because I will need some kind of mask generation which is going to be awful as well. The question is thus: how can I do slice assignment efficiently (computation-wise, memory-wise and code-wise) w/o tensor_scatter_nd_add? The trick is that I want this to be as dynamical as possible, meaning that the shape of the inputs could be variable. (For anyone curious I am trying to translate this code in tf). This question was originally posted in a GitHub issue. | Here is another solution based on binary mask. """Solution based on binary mask. - We just add this mask to inputs, instead of multiplying.""" class AddToEven(tf.keras.Model): def __init__(self): super(AddToEven, self).__init__() def build(self, inputshape): self.built = True # Actually nothing to build with, becuase we don't have any variables or weights here. @tf.function def call(self, inputs): w = inputs.get_shape()[-1] # 1-d mask generation for w-axis (activate even indices only) m_w = tf.range(w) # [0, 1, 2,... w-1] m_w = ((m_w%2)==0) # [True, False, True ,...] with dtype=tf.bool # Apply 1-d mask to 2-d input m_w = tf.expand_dims(m_w, axis=0) # just extend dimension as to be (1, W) m_w = tf.cast(m_w, dtype=inputs.dtype) # in advance, we need to convert dtype # Here, we just add this (1, W) mask to (H,W) input magically. outputs = inputs + m_w # This add operation is allowed in both TF and numpy! return tf.reshape(outputs, inputs.get_shape()) Sanity-check here. # sanity-check as model model = AddToEven() model.build(tf.TensorShape([None, None])) z = model(tf.zeros([2,4])) print(z) Result (with TF 2.1) is like this. tf.Tensor( [[1. 0. 1. 0.] [1. 0. 1. 
0.]], shape=(2, 4), dtype=float32) -------- Below is the previous answer -------- You need to create tf.Variable in build() method. It also allows dynamic size by shape=(None,). In the code below, I specified the input shape as (None, None). class AddToEven(tf.keras.Model): def __init__(self): super(AddToEven, self).__init__() def build(self, inputshape): self.v = tf.Variable(initial_value=tf.zeros((0,0)), shape=(None, None), trainable=False, dtype=tf.float32) @tf.function def call(self, inputs): self.v.assign(inputs) self.v[:, ::2].assign(self.v[:, ::2] + 1) return self.v.value() I tested this code with TF 2.1.0 and TF1.15 # test add_to_even = AddToEven() z = add_to_even(tf.zeros((2,4))) print(z) Result: tf.Tensor( [[1. 0. 1. 0.] [1. 0. 1. 0.]], shape=(2, 4), dtype=float32) P.S. There are some other ways, such as using tf.numpy_function(), or generating mask function. | 8 | 3 |
62,097,219 | 2020-5-30 | https://stackoverflow.com/questions/62097219/getting-a-error-400-redirect-uri-mismatch-when-trying-to-use-oauth2-with-google | I am trying to connect to Google Sheets' API from a Django view. The bulk of the code I have taken from this link: https://developers.google.com/sheets/api/quickstart/python Anyway, here are the codes: sheets.py (Copy pasted from the link above, function renamed) from __future__ import print_function import pickle import os.path from googleapiclient.discovery import build from google_auth_oauthlib.flow import InstalledAppFlow from google.auth.transport.requests import Request # If modifying these scopes, delete the file token.pickle. SCOPES = ['https://www.googleapis.com/auth/spreadsheets.readonly'] # The ID and range of a sample spreadsheet. SAMPLE_SPREADSHEET_ID = '1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms' SAMPLE_RANGE_NAME = 'Class Data!A2:E' def test(): """Shows basic usage of the Sheets API. Prints values from a sample spreadsheet. """ creds = None # The file token.pickle stores the user's access and refresh tokens, and is # created automatically when the authorization flow completes for the first # time. if os.path.exists('token.pickle'): with open('token.pickle', 'rb') as token: creds = pickle.load(token) # If there are no (valid) credentials available, let the user log in. if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( 'credentials.json', SCOPES) creds = flow.run_local_server(port=0) # Save the credentials for the next run with open('token.pickle', 'wb') as token: pickle.dump(creds, token) service = build('sheets', 'v4', credentials=creds) # Call the Sheets API sheet = service.spreadsheets() result = sheet.values().get(spreadsheetId=SAMPLE_SPREADSHEET_ID, range=SAMPLE_RANGE_NAME).execute() values = result.get('values', []) if not values: print('No data found.') else: print('Name, Major:') for row in values: # Print columns A and E, which correspond to indices 0 and 4. print('%s, %s' % (row[0], row[4])) urls.py urlpatterns = [ path('', views.index, name='index') ] views.py from django.http import HttpResponse from django.shortcuts import render from .sheets import test # Views def index(request): test() return HttpResponse('Hello world') All the view function does is just call the test() method from the sheets.py module. Anyway, when I run my server and go the URL, another tab opens up for the Google oAuth2, which means that the credentials file is detected and everything. However, in this tab, the following error message is displayed from Google: Error 400: redirect_uri_mismatch The redirect URI in the request, http://localhost:65262/, does not match the ones authorized for the OAuth client. In my API console, I have the callback URL set exactly to 127.0.0.1:8000 to match my Django's view URL. I don't even know where the http://localhost:65262/ URL comes from. Any help in fixing this? And can someone explain to me why this is happening? Thanks in advance. EDIT I tried to remove the port=0 in the flow method as mentioned in the comment, then the URL mismatch occurs with http://localhost:8080/, which is again pretty weird because my Django app is running in the 8000 port. | You shouldn't be using Flow.run_local_server() unless you don't have the intention of deploying the code. This is because run_local_server launches a browser on the server to complete the flow. 
This works just fine if you're developing the project locally for yourself. If you're intent on using the local server to negotiate the OAuth flow. The Redirect URI configured in your secrets must match that, the local server default for the host is localhost and port is 8080. If you're looking to deploy the code, you must perform the flow via an exchange between the user's browser, your server and Google. Since you have a Django server already running, you can use that to negotiate the flow. For example, Say there is a tweets app in a Django project with urls.py module as follows. from django.urls import path, include from . import views urlpatterns = [ path('google_oauth', views.google_oath, name='google_oauth'), path('hello', views.say_hello, name='hello'), ] urls = include(urlpatterns) You could implement a guard for views that require credentials as follow. import functools import json import urllib from google.oauth2.credentials import Credentials from google_auth_oauthlib.flow import InstalledAppFlow from google.auth.transport.requests import Request from django.shortcuts import redirect from django.http import HttpResponse SCOPES = ['https://www.googleapis.com/auth/userinfo.email', 'https://www.googleapis.com/auth/userinfo.profile', 'openid'] def provides_credentials(func): @functools.wraps(func) def wraps(request): # If OAuth redirect response, get credentials flow = InstalledAppFlow.from_client_secrets_file( 'credentials.json', SCOPES, redirect_uri="http://localhost:8000/tweet/hello") existing_state = request.GET.get('state', None) current_path = request.path if existing_state: secure_uri = request.build_absolute_uri( ).replace('http', 'https') location_path = urllib.parse.urlparse(existing_state).path flow.fetch_token( authorization_response=secure_uri, state=existing_state ) request.session['credentials'] = flow.credentials.to_json() if location_path == current_path: return func(request, flow.credentials) # Head back to location stored in state when # it is different from the configured redirect uri return redirect(existing_state) # Otherwise, retrieve credential from request session. stored_credentials = request.session.get('credentials', None) if not stored_credentials: # It's strongly recommended to encrypt state. # location is needed in state to remember it. location = request.build_absolute_uri() # Commence OAuth dance. auth_url, _ = flow.authorization_url(state=location) return redirect(auth_url) # Hydrate stored credentials. credentials = Credentials(**json.loads(stored_credentials)) # If credential is expired, refresh it. if credentials.expired and creds.refresh_token: creds.refresh(Request()) # Store JSON representation of credentials in session. request.session['credentials'] = credentials.to_json() return func(request, credentials=credentials) return wraps @provides_credentials def google_oauth(request, credentials): return HttpResponse('Google OAUTH <a href="/tweet/hello">Say Hello</a>') @provides_credentials def say_hello(request, credentials): # Use credentials for whatever return HttpResponse('Hello') Note that this is only an example. If you decide to go this route, I recommend looking into extracting the OAuth flow to its very own Django App. | 7 | 5 |
62,095,767 | 2020-5-29 | https://stackoverflow.com/questions/62095767/how-to-create-a-custom-preprocessinglayer-in-tf-2-2 | I would like to create a custom preprocessing layer using the tf.keras.layers.experimental.preprocessing.PreprocessingLayer layer. In this custom layer, placed after the input layer, I would like to normalize my image using tf.cast(img, tf.float32) / 255. I tried to find some code or example showing how to create this preprocessing layer, but I couldn't find. Please, can someone provide a full example creating and using the PreprocessingLayer layer ? | If you want to have a custom preprocessing layer, actually you don't need to use PreprocessingLayer. You can simply subclass Layer Take the simplest preprocessing layer Rescaling as an example, it is under the tf.keras.layers.experimental.preprocessing.Rescaling namespace. However, if you check the actual implementation, it is just subclass Layer class Source Code Link Here but has @keras_export('keras.layers.experimental.preprocessing.Rescaling') @keras_export('keras.layers.experimental.preprocessing.Rescaling') class Rescaling(Layer): """Multiply inputs by `scale` and adds `offset`. For instance: 1. To rescale an input in the `[0, 255]` range to be in the `[0, 1]` range, you would pass `scale=1./255`. 2. To rescale an input in the `[0, 255]` range to be in the `[-1, 1]` range, you would pass `scale=1./127.5, offset=-1`. The rescaling is applied both during training and inference. Input shape: Arbitrary. Output shape: Same as input. Arguments: scale: Float, the scale to apply to the inputs. offset: Float, the offset to apply to the inputs. name: A string, the name of the layer. """ def __init__(self, scale, offset=0., name=None, **kwargs): self.scale = scale self.offset = offset super(Rescaling, self).__init__(name=name, **kwargs) def call(self, inputs): dtype = self._compute_dtype scale = math_ops.cast(self.scale, dtype) offset = math_ops.cast(self.offset, dtype) return math_ops.cast(inputs, dtype) * scale + offset def compute_output_shape(self, input_shape): return input_shape def get_config(self): config = { 'scale': self.scale, 'offset': self.offset, } base_config = super(Rescaling, self).get_config() return dict(list(base_config.items()) + list(config.items())) So it proves that Rescaling preprocessing is just another normal layer. The main part is the def call(self, inputs) function. You can create whatever complicated logic to preprocess your inputs and then return. A easier documentation about custom layer can be find here In a nutshell, you can do the preprocessing by layer, either by Lambda which for simple operation or by subclassing Layer to achieve your goal. | 7 | 9 |
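A minimal sketch of the `/255` normalization layer the question asked for, written by subclassing `Layer` as the answer suggests; `ImageScaling` and the surrounding toy model are illustrative names, not an official API:

```python
import tensorflow as tf

class ImageScaling(tf.keras.layers.Layer):
    """Cast images to float32 and scale them into [0, 1]."""

    def call(self, inputs):
        return tf.cast(inputs, tf.float32) / 255.0

# used right after the input layer, as described in the question
inputs = tf.keras.Input(shape=(224, 224, 3))
x = ImageScaling()(inputs)
x = tf.keras.layers.Conv2D(8, 3, activation="relu")(x)
outputs = tf.keras.layers.GlobalAveragePooling2D()(x)
model = tf.keras.Model(inputs, outputs)
```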
62,079,732 | 2020-5-29 | https://stackoverflow.com/questions/62079732/did-i-o-become-slower-since-python-2-7 | I'm currently having a small side project in which I want to sort a 20GB file on my machine as fast as possible. The idea is to chunk the file, sort the chunks, merge the chunks. I just used pyenv to time the radixsort code with different Python versions and saw that 2.7.18 is way faster than 3.6.10, 3.7.7, 3.8.3 and 3.9.0a. Can anybody explain why Python 3.x is slower than 2.7.18 in this simple example? Were there new features added? import os def chunk_data(filepath, prefixes): """ Pre-sort and chunk the content of filepath according to the prefixes. Parameters ---------- filepath : str Path to a text file which should get sorted. Each line contains a string which has at least 2 characters and the first two characters are guaranteed to be in prefixes prefixes : List[str] """ prefix2file = {} for prefix in prefixes: chunk = os.path.abspath("radixsort_tmp/{:}.txt".format(prefix)) prefix2file[prefix] = open(chunk, "w") # This is where most of the execution time is spent: with open(filepath) as fp: for line in fp: prefix2file[line[:2]].write(line) Execution times (multiple runs): 2.7.18: 192.2s, 220.3s, 225.8s 3.6.10: 302.5s 3.7.7: 308.5s 3.8.3: 279.8s, 279.7s (binary mode), 295.3s (binary mode), 307.7s, 380.6s (wtf?) 3.9.0a: 292.6s The complete code is on Github, along with a minimal complete version Unicode Yes, I know that Python 3 and Python 2 deal different with strings. I tried opening the files in binary mode (rb / wb), see the "binary mode" comments. They are a tiny bit faster on a couple of runs. Still, Python 2.7 is WAY faster on all runs. Try 1: Dictionary access When I phrased this question, I thought that dictionary access might be a reason for this difference. However, I think the total execution time is way less for dictionary access than for I/O. Also, timeit did not show anything important: import timeit import numpy as np durations = timeit.repeat( 'a["b"]', repeat=10 ** 6, number=1, setup="a = {'b': 3, 'c': 4, 'd': 5}" ) mul = 10 ** -7 print( "mean = {:0.1f} * 10^-7, std={:0.1f} * 10^-7".format( np.mean(durations) / mul, np.std(durations) / mul ) ) print("min = {:0.1f} * 10^-7".format(np.min(durations) / mul)) print("max = {:0.1f} * 10^-7".format(np.max(durations) / mul)) Try 2: Copy time As a simplified experiment, I tried to copy the 20GB file: cp via shell: 230s Python 2.7.18: 237s, 249s Python 3.8.3: 233s, 267s, 272s The Python stuff is generated by the following code. My first thought was that the variance is quite high. So this could be the reason. But then, the variance of chunk_data execution time is also high, but the mean is noticeably lower for Python 2.7 than for Python 3.x. So it seems not to be an I/O scenario as simple as I tried here. import time import sys import os version = sys.version_info version = "{}.{}.{}".format(version.major, version.minor, version.micro) if os.path.isfile("numbers-tmp.txt"): os.remove("numers-tmp.txt") t0 = time.time() with open("numbers-large.txt") as fin, open("numers-tmp.txt", "w") as fout: for line in fin: fout.write(line) t1 = time.time() print("Python {}: {:0.0f}s".format(version, t1 - t0)) My System Ubuntu 20.04 Thinkpad T460p Python through pyenv | This is a combination of multiple effects, mostly the fact that Python 3 needs to perform unicode decoding/encoding when working in text mode and if working in binary mode it will send the data through dedicated buffered IO implementations. 
First of all, using time.time to measure execution time uses the wall time and hence includes all sorts of Python unrelated things such as OS-level caching and buffering, as well as buffering of the storage medium. It also reflects any interference with other processes that require the storage medium. That's why you are seeing these wild variations in timing results. Here are the results for my system, from seven consecutive runs for each version: py3 = [660.9, 659.9, 644.5, 639.5, 752.4, 648.7, 626.6] # 661.79 +/- 38.58 py2 = [635.3, 623.4, 612.4, 589.6, 633.1, 613.7, 603.4] # 615.84 +/- 15.09 Despite the large variation it seems that these results indeed indicate different timings as can be confirmed for example by a statistical test: >>> from scipy.stats import ttest_ind >>> ttest_ind(p2, p3)[1] 0.018729004515179636 i.e. there's only a 2% chance that the timings emerged from the same distribution. We can get a more precise picture by measuring the process time rather than the wall time. In Python 2 this can be done via time.clock while Python 3.3+ offers time.process_time. These two functions report the following timings: py3_process_time = [224.4, 226.2, 224.0, 226.0, 226.2, 223.7, 223.8] # 224.90 +/- 1.09 py2_process_time = [171.0, 171.1, 171.2, 171.3, 170.9, 171.2, 171.4] # 171.16 +/- 0.16 Now there's much less spread in the data since the timings reflect the Python process only. This data suggests that Python 3 takes about 53.7 seconds longer to execute. Given the large amount of lines in the input file (550_000_000) this amounts to about 97.7 nanoseconds per iteration. The first effect causing increased execution time are unicode strings in Python 3. The binary data is read from the file, decoded and then encoded again when it is written back. In Python 2 all strings are stored as binary strings right away, so this doesn't introduce any encoding/decoding overhead. You don't see this effect clearly in your tests because it disappears in the large variation introduced by various external resources which are reflected in the wall time difference. For example we can measure the time it takes for a roundtrip from binary to unicode to binary: In [1]: %timeit b'000000000000000000000000000000000000'.decode().encode() 162 ns ± 2 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each) This does include two attribute lookups as well as two function calls, so the actual time needed is smaller than the value reported above. To see the effect on execution time, we can change the test script to use binary modes "rb" and "wb" instead of text modes "r" and "w". This reduces the timing results for Python 3 as follows: py3_binary_mode = [200.6, 203.0, 207.2] # 203.60 +/- 2.73 That reduces the process time by about 21.3 seconds or 38.7 nanoseconds per iteration. This is in agreement with timing results for the roundtrip benchmark minus timing results for name lookups and function calls: In [2]: class C: ...: def f(self): pass ...: In [3]: x = C() In [4]: %timeit x.f() 82.2 ns ± 0.882 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each) In [5]: %timeit x 17.8 ns ± 0.0564 ns per loop (mean ± std. dev. of 7 runs, 100000000 loops each) Here %timeit x measures the additional overhead of resolving the global name x and hence the attribute lookup and function call make 82.2 - 17.8 == 64.4 seconds. Subtracting this overhead twice from the above roundtrip data gives 162 - 2*64.4 == 33.2 seconds. Now there's still a difference of 32.4 seconds between Python 3 using binary mode and Python 2. 
This comes from the fact that all the IO in Python 3 goes through the (quite complex) implementation of io.BufferedWriter .write while in Python 2 the file.write method proceeds fairly straightforward to fwrite. We can check the types of the file objects in both implementations: $ python3.8 >>> type(open('/tmp/test', 'wb')) <class '_io.BufferedWriter'> $ python2.7 >>> type(open('/tmp/test', 'wb')) <type 'file'> Here we also need to note that the above timing results for Python 2 have been obtained by using text mode, not binary mode. Binary mode aims to support all objects implementing the buffer protocol which results in additional work being performed also for strings (see also this question). If we switch to binary mode also for Python 2 then we obtain: py2_binary_mode = [212.9, 213.9, 214.3] # 213.70 +/- 0.59 which is actually a bit larger than the Python 3 results (18.4 ns / iteration). The two implementations also differ in other details such as the dict implementation. To measure this effect we can create a corresponding setup: from __future__ import print_function import timeit N = 10**6 R = 7 results = timeit.repeat( "d[b'10'].write", setup="d = dict.fromkeys((str(i).encode() for i in range(10, 100)), open('test', 'rb'))", # requires file 'test' to exist repeat=R, number=N ) results = [x/N for x in results] print(['{:.3e}'.format(x) for x in results]) print(sum(results) / R) This gives the following results for Python 2 and Python 3: Python 2: ~ 56.9 nanoseconds Python 3: ~ 78.1 nanoseconds This additional difference of about 21.2 nanoseconds amounts to about 12 seconds for the full 550M iterations. The above timing code checks the dict lookup for only one key, so we also need to verify that there are no hash collisions: $ python3.8 -c "print(len({str(i).encode() for i in range(10, 100)}))" 90 $ python2.7 -c "print len({str(i).encode() for i in range(10, 100)})" 90 | 8 | 14 |
62,144,904 | 2020-6-2 | https://stackoverflow.com/questions/62144904/python-how-to-retrieve-the-best-model-from-optuna-lightgbm-study | I would like to get the best model to use later in the notebook to predict using a different test batch. reproducible example (taken from Optuna Github) : import lightgbm as lgb import numpy as np import sklearn.datasets import sklearn.metrics from sklearn.model_selection import train_test_split import optuna # FYI: Objective functions can take additional arguments # (https://optuna.readthedocs.io/en/stable/faq.html#objective-func-additional-args). def objective(trial): data, target = sklearn.datasets.load_breast_cancer(return_X_y=True) train_x, valid_x, train_y, valid_y = train_test_split(data, target, test_size=0.25) dtrain = lgb.Dataset(train_x, label=train_y) dvalid = lgb.Dataset(valid_x, label=valid_y) param = { "objective": "binary", "metric": "auc", "verbosity": -1, "boosting_type": "gbdt", "lambda_l1": trial.suggest_loguniform("lambda_l1", 1e-8, 10.0), "lambda_l2": trial.suggest_loguniform("lambda_l2", 1e-8, 10.0), "num_leaves": trial.suggest_int("num_leaves", 2, 256), "feature_fraction": trial.suggest_uniform("feature_fraction", 0.4, 1.0), "bagging_fraction": trial.suggest_uniform("bagging_fraction", 0.4, 1.0), "bagging_freq": trial.suggest_int("bagging_freq", 1, 7), "min_child_samples": trial.suggest_int("min_child_samples", 5, 100), } # Add a callback for pruning. pruning_callback = optuna.integration.LightGBMPruningCallback(trial, "auc") gbm = lgb.train( param, dtrain, valid_sets=[dvalid], verbose_eval=False, callbacks=[pruning_callback] ) preds = gbm.predict(valid_x) pred_labels = np.rint(preds) accuracy = sklearn.metrics.accuracy_score(valid_y, pred_labels) return accuracy my understanding is that the study below will tune for accuracy. I would like to somehow retrieve the best model from the study (not just the parameters) without saving it as a pickle, I just want to use the model somewhere else in my notebook. if __name__ == "__main__": study = optuna.create_study( pruner=optuna.pruners.MedianPruner(n_warmup_steps=10), direction="maximize" ) study.optimize(objective, n_trials=100) print("Best trial:") trial = study.best_trial print(" Params: ") for key, value in trial.params.items(): print(" {}: {}".format(key, value)) desired output would be best_model = ~model from above~ new_target_pred = best_model.predict(new_data_test) metrics.accuracy_score(new_target_test, new__target_pred) | I think you can use the callback argument of Study.optimize to save the best model. In the following code example, the callback checks if a given trial is corresponding to the best trial and saves the model as a global variable best_booster. best_booster = None gbm = None def objective(trial): global gbm # ... def callback(study, trial): global best_booster if study.best_trial == trial: best_booster = gbm if __name__ == "__main__": study = optuna.create_study( pruner=optuna.pruners.MedianPruner(n_warmup_steps=10), direction="maximize" ) study.optimize(objective, n_trials=100, callbacks=[callback]) If you define your objective function as a class, you can remove the global variables. I created a notebook as a code example. Please take a look at it: https://colab.research.google.com/drive/1ssjXp74bJ8bCAbvXFOC4EIycBto_ONp_?usp=sharing I would like to somehow retrieve the best model from the study (not just the parameters) without saving it as a pickle FYI, if you can pickle the boosters, I think you can make the code simple by following this FAQ. | 23 | 13 |
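A rough sketch of the class-based objective mentioned at the end of the answer; the training body is elided and assumed to be the same LightGBM code as in the original `objective()`:

```python
import optuna

class Objective:
    """Callable objective that keeps the booster of the best trial, without globals."""

    def __init__(self):
        self.best_booster = None
        self._current_booster = None

    def __call__(self, trial):
        # ... build `param`, train `gbm` and compute `accuracy` exactly as in
        # the original objective() shown above ...
        self._current_booster = gbm
        return accuracy

    def callback(self, study, trial):
        # keep a reference whenever the just-finished trial is the new best one
        if study.best_trial.number == trial.number:
            self.best_booster = self._current_booster

objective = Objective()
study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=100, callbacks=[objective.callback])
best_model = objective.best_booster
```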
62,115,817 | 2020-5-31 | https://stackoverflow.com/questions/62115817/tensorflow-keras-rmse-metric-returns-different-results-than-my-own-built-rmse-lo | This is a regression problem My custom RMSE loss: def root_mean_squared_error_loss(y_true, y_pred): return tf.keras.backend.sqrt(tf.keras.losses.MSE(y_true, y_pred)) Training code sample, where create_model returns a dense fully connected sequential model from tensorflow.keras.metrics import RootMeanSquaredError model = create_model() model.compile(loss=root_mean_squared_error_loss, optimizer='adam', metrics=[RootMeanSquaredError()]) model.fit(train_.values, targets, validation_split=0.1, verbose=1, batch_size=32) Train on 3478 samples, validate on 387 samples Epoch 1/100 3478/3478 [==============================] - 2s 544us/sample - loss: 1.1983 - root_mean_squared_error: 0.7294 - val_loss: 0.7372 - val_root_mean_squared_error: 0.1274 Epoch 2/100 3478/3478 [==============================] - 1s 199us/sample - loss: 0.8371 - root_mean_squared_error: 0.3337 - val_loss: 0.7090 - val_root_mean_squared_error: 0.1288 Epoch 3/100 3478/3478 [==============================] - 1s 187us/sample - loss: 0.7336 - root_mean_squared_error: 0.2468 - val_loss: 0.6366 - val_root_mean_squared_error: 0.1062 Epoch 4/100 3478/3478 [==============================] - 1s 187us/sample - loss: 0.6668 - root_mean_squared_error: 0.2177 - val_loss: 0.5823 - val_root_mean_squared_error: 0.0818 I expected both loss and root_mean_squared_error to have same values, why is there a difference? | Two key differences, from source code: RMSE is a stateful metric (it keeps memory) - yours is stateless Square root is applied after taking a global mean, not before an axis=-1 mean like MSE does As a result of 1, 2 is more involved: mean of a running quantity, total, is taken, with respect to another running quantity, count; both quantities are reset via RMSE.reset_states(). The raw formula fix is easy - but integrating statefulness will require work, as is beyond the scope of this question; refer to source code to see how it's done. A fix for 2 with a comparison, below. import numpy as np import tensorflow as tf from tensorflow.keras.metrics import RootMeanSquaredError as RMSE def root_mean_squared_error_loss(y_true, y_pred): return tf.sqrt(tf.reduce_mean(tf.math.squared_difference(y_true, y_pred))) np.random.seed(0) #%%########################################################################### rmse = RMSE(dtype='float64') rmsel = root_mean_squared_error_loss x1 = np.random.randn(32, 10) y1 = np.random.randn(32, 10) x2 = np.random.randn(32, 10) y2 = np.random.randn(32, 10) #%%########################################################################### print("TensorFlow RMSE:") print(rmse(x1, y1)) print(rmse(x2, y2)) print("=" * 46) print(rmse(x1, y1)) print(rmse(x2, y2)) print("\nMy RMSE:") print(rmsel(x1, y1)) print(rmsel(x2, y2)) TensorFlow RMSE: tf.Tensor(1.4132492562096124, shape=(), dtype=float64) tf.Tensor(1.3875944990740972, shape=(), dtype=float64) ============================================== tf.Tensor(1.3961984634354354, shape=(), dtype=float64) # same inputs, different result tf.Tensor(1.3875944990740972, shape=(), dtype=float64) # same inputs, different result My RMSE: tf.Tensor(1.4132492562096124, shape=(), dtype=float64) # first result agrees tf.Tensor(1.3614563994283353, shape=(), dtype=float64) # second differs since stateless | 8 | 8 |
62,152,885 | 2020-6-2 | https://stackoverflow.com/questions/62152885/pydantic-basemodel-not-found-in-fastapi | I have python3 3.6.9 on Kubuntu 18.04. I have installed fastapi using pip3 install fastapi. I'm trying to test drive the framework through its official documentation and I'm in the relational database section of its guide. In schemas.py: from typing import List from pydantic import BaseModel class VerseBase(BaseModel): AyahText: str NormalText: str class Verse(VerseBase): id: int class Config: orm_mode = True VS code highlights an error in from pydantic import BaseModel and it tells that: No name 'BaseModel' in module 'pydantic'. Additionally, when I try to run uvicorn main:app reload I have gotten the following error: File "./main.py", line 6, in <module> from . import crud, models, schemas ImportError: attempted relative import with no known parent package I have tried to renstall pydantic using pip3 but it tells me that: Requirement already satisfied: dataclasses>=0.6; python_version < "3.7" in ./.local/lib/python3.6/site-packages (from pydantic) (0.7) | The problem of highlighting in VS code may be a problem due to the fact that you did not open the folder. It's quite annoying as it happens often to me as well (and I have basically your same config). Regarding the second problem you mention, it is probably due to the fact that the folder in which the script lays, does not have a __init__.py file. If you add it, it should work since python will interpret the folder as a module. As an alternative, you could try to import with the full path from the top folder (e.g. from app.module.main import app). For more information about modules see the following links: Python 3.8 Modules Real Python | 15 | 2 |
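A sketch of the layout this implies, assuming the project follows the FastAPI SQL-databases tutorial structure the question is based on (the `sql_app` name comes from that tutorial):

```
sql_app/
    __init__.py      # makes the folder a package, so `from . import crud, models, schemas` resolves
    main.py
    database.py
    models.py
    schemas.py
    crud.py
```

With that in place, uvicorn is started from the folder above the package, e.g. `uvicorn sql_app.main:app --reload`, rather than from inside it.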
62,158,664 | 2020-6-2 | https://stackoverflow.com/questions/62158664/search-in-each-of-the-s3-bucket-and-see-if-the-given-folder-exists | I'm trying to get the files from specific folders in s3 Buckets: I have 4 buckets in s3 with the following names: 1 - 'PDF' 2 - 'TXT' 3 - 'PNG' 4 - 'JPG' The folder structure for all s3 buckets looks like this: 1- PDF/analysis/pdf-to-img/processed/files 2- TXT/report/processed/files 3- PNG/analysis/reports/png-to-txt/processed/files 4- JPG/jpg-to-txt/empty I have to check if this folder prefix processed/files is present in the bucket, and if it is present, I'll read the files present in those directories, else I'll ignore them. Code: buckets = ['PDF','TXT','PNG','JPG'] client = boto3.client('s3') for i in bucket: result = client.list_objects(Bucket=i,Prefix = 'processed/files', Delimiter='/') print(result) I can enter into each directory if the folder structure is same, but how can I handle this when the folder structure varies for each bucket? | This is maybe a lengthy process. buckets = ['PDF','TXT','PNG','JPG'] s3_client = getclient('s3') for i in buckets: result = s3_client.list_objects(Bucket= i, Prefix='', Delimiter ='') contents = result.get('Contents') for content in contents: if 'processed/files/' in content.get('Key'): print("Do the process") You can get the list of directories from the s3 bucket. If it contains the required folder do the required process. | 7 | 5 |
62,155,465 | 2020-6-2 | https://stackoverflow.com/questions/62155465/sessionnotcreatedexception-this-version-of-chromedriver-only-supports-chrome-ve | I am using python 3 on windows 7, selenium, chromedriver version 84 (latest) to automate my chrome browser. I am using this script: from selenium import webdriver #import chromedriver_binary # Adds chromedriver binary to path driver = webdriver.Chrome() driver.get("http://www.python.org") and I always get this error upon running it. Traceback (most recent call last): File "D:\Huzefa\Desktop\zzzzzz.py", line 4, in <module> driver = webdriver.Chrome() File "C:\Users\Huzefa\AppData\Local\Programs\Python\Python36\lib\site-packages\selenium\webdriver\chrome\webdriver.py", line 81, in __init__ desired_capabilities=desired_capabilities) File "C:\Users\Huzefa\AppData\Local\Programs\Python\Python36\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 157, in __init__ self.start_session(capabilities, browser_profile) File "C:\Users\Huzefa\AppData\Local\Programs\Python\Python36\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 252, in start_session response = self.execute(Command.NEW_SESSION, parameters) File "C:\Users\Huzefa\AppData\Local\Programs\Python\Python36\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute self.error_handler.check_response(response) File "C:\Users\Huzefa\AppData\Local\Programs\Python\Python36\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.SessionNotCreatedException: Message: session not created: This version of ChromeDriver only supports Chrome version 84 My ChromeDriver is in path. Also i have used other versions of chromedriver but i am not able to navigate to a website! | Your ChromeDriver version and your installed version of Chrome need to match up. You are using ChromeDriver for Chrome version 84, which at the time of this answer, is a beta (non-stable) build of Chrome; you're probably not using it. Likely you're on version 83. Check your Chrome version (Help -> About) and then find the correct ChromeDriver release. You could instead use webdriver-manager which can handle this for you. | 13 | 10 |
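To illustrate the webdriver-manager option mentioned in the answer above, here is a minimal sketch; it assumes the package is installed with `pip install webdriver-manager` and uses the Selenium 3.x-era call where the driver path is passed directly to `webdriver.Chrome`:

```python
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager

# webdriver-manager downloads a ChromeDriver build that matches the locally
# installed Chrome and returns the path to that binary.
driver = webdriver.Chrome(ChromeDriverManager().install())
driver.get("http://www.python.org")
driver.quit()
```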
62,152,591 | 2020-6-2 | https://stackoverflow.com/questions/62152591/bug-in-numpy-ndarray-min-max-method | I'm assuming I'm doing something wrong here, but I'm working on a project in Pycharm, which notified me when using the ndarray.max() function that initial was undefined (parameter 'initial' unfilled). Looking at the documentation, it does show that there is no default value for initial argument. When ctrl-clicking the ndarray.max() function in Pycharm, opens the following function: def max(self, axis=None, out=None, keepdims=False, initial, *args, **kwargs): # real signature unknown; NOTE: unreliably restored from __doc__ """ a.max(axis=None, out=None, keepdims=False, initial=<no value>, where=True) Return the maximum along a given axis. Refer to `numpy.amax` for full documentation. See Also -------- numpy.amax : equivalent function """ pass Which appears to not even do anything. Either way, the code works, only an IDE error is given. Am I using the wrong function? I know there's amax and max, as well as the package level numpy.max, but the above seems to be unwanted behaviour. If this is a bug, I wouldn't know how to report it / start an issue or whatever, haha. | it appears empty because it's not implemented in python, probably C/C++, as you can figure out from # real signature unknown; NOTE: unreliably restored from __doc__ - it's just a hint for you what parameter this function has. It's not even valid python ;) Basing on documentation of amax: initial scalar, optional The minimum value of an output element. Must be present to allow computation on empty slice. See reduce for details. You'd better pass something to initial | 12 | 2 |
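A small, self-contained illustration of what the `initial` parameter quoted from the `numpy.amax` documentation actually does (the values are arbitrary):

```python
import numpy as np

a = np.array([1.5, 3.2, 2.0])
print(a.max())              # 3.2 -- the usual behaviour, `initial` not needed

# `initial` acts as an extra candidate value included in the comparison...
print(a.max(initial=10.0))  # 10.0

# ...and it is what makes the maximum of an empty array well defined:
print(np.array([]).max(initial=0.0))  # 0.0
# np.array([]).max() without `initial` raises a ValueError instead
```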
62,102,912 | 2020-5-30 | https://stackoverflow.com/questions/62102912/shape-mismatch-problem-in-tensorflow-2-2-training-using-yolo4-cfg | I recently added a new feature to my yolov3 implementation which is models are currently loaded directly from DarkNet cfg files for convenience, I tested the code with yolov3 configuration as well as yolov4 configuration they both work just fine except for v4 training. Shortly after I start training I get a shapes mismatch error and I'll be very grateful if someone can help me get rid of the error and get to finally complete my project. Please let me know in the comments and I will provide you with any resources you need to help me with fixing the problem and thank you in advance... This is what I run in order to reproduce: if __name__ == '__main__': tr = Trainer((608, 608, 3), '../Config/yolo4.cfg', '../Config/beverly_hills.txt', 1344, 756, score_threshold=0.1, train_tf_record='../Data/TFRecords/beverly_hills_train.tfrecord', valid_tf_record='../Data/TFRecords/beverly_hills_test.tfrecord') tr.train( 100, 8, 1e-3, dataset_name='beverly_hills', merge_evaluation=False, n_epoch_eval=10, clear_outputs=True ) L links to files you need: bh_labels.csv (794 Kb) beverly_hills.txt (162 B) beverly_hills_train.tfrecord (509 Mb) beverly_hills_test.tfrecord (89 Mb) Here is the error message: Traceback (most recent call last): File "trainer.py", line 629, in <module> clear_outputs=True File "../Helpers/utils.py", line 62, in wrapper result = func(*args, **kwargs) File "trainer.py", line 490, in train validation_data=valid_dataset, File "/root/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 108, in _method_wrapper return method(self, *args, **kwargs) File "/root/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 1090, in fit tmp_logs = train_function(iterator) File "/root/.local/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 766, in __call__ result = self._call(*args, **kwds) File "/root/.local/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 826, in _call return self._stateless_fn(*args, **kwds) File "/root/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 2811, in __call__ return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access File "/root/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1838, in _filtered_call cancellation_manager=cancellation_manager) File "/root/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1914, in _call_flat ctx, args, cancellation_manager=cancellation_manager)) File "/root/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 549, in call ctx=ctx) File "/root/.local/lib/python3.6/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute inputs, attrs, num_outputs) tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [4,76,76,3,1] vs. [4,19,19,3,1] [[node yolo_loss/logistic_loss/mul (defined at ../Helpers/utils.py:260) ]] [Op:__inference_train_function_38735] Errors may have originated from an input operation. 
Input Source operations connected to node yolo_loss/logistic_loss/mul: yolo_loss/split_1 (defined at ../Helpers/utils.py:222) yolo_loss/split (defined at ../Helpers/utils.py:196) Function call stack: train_function And when I change the batch_size to 8 instead of 4, the error mutates into the following(the error source changes): Traceback (most recent call last): File "/Users/emadboctor/Desktop/Code/yolov3-keras-tf2/Main/trainer.py", line 693, in <module> clear_outputs=True, File "/Users/emadboctor/Desktop/Code/yolov3-keras-tf2/Helpers/utils.py", line 62, in wrapper result = func(*args, **kwargs) File "/Users/emadboctor/Desktop/Code/yolov3-keras-tf2/Main/trainer.py", line 526, in train validation_data=valid_dataset, File "/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 66, in _method_wrapper return method(self, *args, **kwargs) File "/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 848, in fit tmp_logs = train_function(iterator) File "/usr/local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 580, in __call__ result = self._call(*args, **kwds) File "/usr/local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 644, in _call return self._stateless_fn(*args, **kwds) File "/usr/local/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2420, in __call__ return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access File "/usr/local/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 1665, in _filtered_call self.captured_inputs) File "/usr/local/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 1746, in _call_flat ctx, args, cancellation_manager=cancellation_manager)) File "/usr/local/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 598, in call ctx=ctx) File "/usr/local/lib/python3.7/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute inputs, attrs, num_outputs) tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [8,13,13,3,2] vs. [8,52,52,3,2] [[node gradient_tape/yolo_loss/sub_5/BroadcastGradientArgs (defined at Users/emadboctor/Desktop/Code/yolov3-keras-tf2/Main/trainer.py:526) ]] [Op:__inference_train_function_42744] Function call stack: train_function | Adding this line in models.py solved the shapes problem and the training started as expected: if '4' in self.model_configuration: self.output_layers.reverse() | 7 | 3 |
62,150,925 | 2020-6-2 | https://stackoverflow.com/questions/62150925/how-do-i-update-values-without-refreshing-the-page-on-my-flask-project | I have a website that shows the prices of items in a video game. Currently, I have an "auto-refresh" script that refreshes the page every 5 seconds, but it is a bit annoying as every time you search for a product, it removes your search because the page refreshes. I would like to update the numbers in my table without refreshing the page for the user. I read something about 'updating the DOM', in javascript but didn't get it. Here is the link to my website: http://xeltool.com/ And here is my python code: @app.route('/bprices', methods=['GET']) def bPrices(): f = requests.get( 'https://api.hypixel.net/skyblock/bazaar?key=[cannot show]').json() products = [ { "id": product["product_id"], "sell_price": product["sell_summary"][:1], "buy_price": product["buy_summary"][:1], "sell_volume": product["quick_status"]["sellVolume"], "buy_volume": product["quick_status"]["buyVolume"], } for product in f["products"].values() ] return render_template("bprices.html", products=products) And here is my HTML code: <div class="container"> <div class="search_div"> <input type="text" onkeyup="myFunction()" id="myInput" title="Type in a product" class="search-box" placeholder="Search for a product..." /> <button class="search-btn"><i class="fas fa-search"></i></button> </div> <table id="myTable" class="table table-striped table-bordered table-sm table-dark sortable" cellspacing="0" > <thead> <tr> <th aria-label="Product Name" data-balloon-pos="up">Product</th> <th aria-label="Product's buy price" data-balloon-pos="up"> Buy Price </th> <th aria-label="Product's sell price" data-balloon-pos="up"> Sell Price </th> <th aria-label="Product's buy volume" data-balloon-pos="up"> Buy Volume </th> <th aria-label="Product's sell volume" data-balloon-pos="up"> Sell Volume </th> <th> Margin </th> </tr> </thead> <tbody> {% for product in products %} <tr> <td>{{ product.id|replace("_", ' ')|lower()|title() }}</td> {% for buy in product.buy_price %} <td>{{ buy.pricePerUnit }}</td> {% for sell in product.sell_price %} <td>{{ sell.pricePerUnit }}</td> <td>{{ product.buy_volume| numberFormat }}</td> <td>{{ product.sell_volume| numberFormat }}</td> {% set margin = buy.pricePerUnit - sell.pricePerUnit%} {% set marginPer = margin/buy.pricePerUnit * 100%} <td aria-label="{{ marginPer|round(1, 'floor') }} % " data-balloon-pos="right" > {{ margin|round(1, 'floor')}} </td> {% endfor %}{% endfor %} </tr> {% endfor %} </tbody> </table> </div> If you NEED the API to test this out, I can provide a link to it :) | You have 3 options: AJAX - https://www.w3schools.com/js/js_ajax_intro.asp SSE - https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events Websocket - https://developer.mozilla.org/en-US/docs/Glossary/WebSockets I think the best option in your case is SSE since the server knows that the price was changed so it can push it to the clients. | 9 | 13 |
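Since the answer recommends SSE without showing code, here is a minimal server-side sketch for Flask; the route name, the 5-second interval and `get_latest_prices()` are placeholder assumptions, and the browser would subscribe with `new EventSource('/price-stream')` and update the table from JavaScript instead of reloading the page:

```python
import json
import time

from flask import Flask, Response

app = Flask(__name__)

def get_latest_prices():
    # Placeholder: in the real app this would query the bazaar API.
    return {"product": "example", "buy_price": 10.0, "sell_price": 8.0}

@app.route("/price-stream")
def price_stream():
    def event_stream():
        while True:
            # An SSE message is a "data: ..." line followed by a blank line.
            yield f"data: {json.dumps(get_latest_prices())}\n\n"
            time.sleep(5)
    return Response(event_stream(), mimetype="text/event-stream")
```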
62,061,703 | 2020-5-28 | https://stackoverflow.com/questions/62061703/runtimeerror-one-of-the-variables-needed-for-gradient-computation-has-been-modi | I am using pytorch-1.5 to do some gan test. My code is very simple gan code which just fit the sin(x) function: import torch import torch.nn as nn import numpy as np import matplotlib.pyplot as plt # Hyper Parameters BATCH_SIZE = 64 LR_G = 0.0001 LR_D = 0.0001 N_IDEAS = 5 ART_COMPONENTS = 15 PAINT_POINTS = np.vstack([np.linspace(-1, 1, ART_COMPONENTS) for _ in range(BATCH_SIZE)]) def artist_works(): # painting from the famous artist (real target) r = 0.02 * np.random.randn(1, ART_COMPONENTS) paintings = np.sin(PAINT_POINTS * np.pi) + r paintings = torch.from_numpy(paintings).float() return paintings G = nn.Sequential( # Generator nn.Linear(N_IDEAS, 128), # random ideas (could from normal distribution) nn.ReLU(), nn.Linear(128, ART_COMPONENTS), # making a painting from these random ideas ) D = nn.Sequential( # Discriminator nn.Linear(ART_COMPONENTS, 128), # receive art work either from the famous artist or a newbie like G nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid(), # tell the probability that the art work is made by artist ) opt_D = torch.optim.Adam(D.parameters(), lr=LR_D) opt_G = torch.optim.Adam(G.parameters(), lr=LR_G) for step in range(10000): artist_paintings = artist_works() # real painting from artist G_ideas = torch.randn(BATCH_SIZE, N_IDEAS) # random ideas G_paintings = G(G_ideas) # fake painting from G (random ideas) prob_artist0 = D(artist_paintings) # D try to increase this prob prob_artist1 = D(G_paintings) # D try to reduce this prob D_loss = - torch.mean(torch.log(prob_artist0) + torch.log(1. - prob_artist1)) G_loss = torch.mean(torch.log(1. - prob_artist1)) opt_D.zero_grad() D_loss.backward(retain_graph=True) # reusing computational graph opt_D.step() opt_G.zero_grad() G_loss.backward() opt_G.step() But when i runing it got this error: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [128, 1]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck! Is there something wrong with my code? | This happens because the opt_D.step() modifies the parameters of your discriminator inplace. But these parameters are required to compute the gradient for the generator. You can fix this by changing your code to: for step in range(10000): artist_paintings = artist_works() # real painting from artist G_ideas = torch.randn(BATCH_SIZE, N_IDEAS) # random ideas G_paintings = G(G_ideas) # fake painting from G (random ideas) prob_artist1 = D(G_paintings) # G tries to fool D G_loss = torch.mean(torch.log(1. - prob_artist1)) opt_G.zero_grad() G_loss.backward() opt_G.step() prob_artist0 = D(artist_paintings) # D try to increase this prob # detach here to make sure we don't backprop in G that was already changed. prob_artist1 = D(G_paintings.detach()) # D try to reduce this prob D_loss = - torch.mean(torch.log(prob_artist0) + torch.log(1. - prob_artist1)) opt_D.zero_grad() D_loss.backward(retain_graph=True) # reusing computational graph opt_D.step() You can find more about this issue here https://github.com/pytorch/pytorch/issues/39141 | 11 | 11 |
62,139,040 | 2020-6-1 | https://stackoverflow.com/questions/62139040/pythons-csv-module-vs-pandas | I am using Pandas to read CSV file data, but the CSV module is also there to manage CSV files. What is the difference between the two? What are the cons of using Pandas over the CSV module? | Based upon benchmarks: the CSV module is faster at loading data for smaller datasets (< 1K rows), while Pandas is several times faster for larger datasets. See the linked "Code to Generate Benchmarks" and "Benchmarks". | 19 | 12 |
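The benchmark code itself is only referenced in the answer; a rough, self-contained way to time such a comparison yourself could look like this (the file name and repeat count are arbitrary):

```python
import csv
import timeit

import pandas as pd

PATH = "data.csv"  # any existing CSV file

def load_with_csv():
    with open(PATH, newline="") as f:
        return list(csv.reader(f))

def load_with_pandas():
    return pd.read_csv(PATH)

# Average wall-clock load time over 10 runs of each approach.
print("csv module:", timeit.timeit(load_with_csv, number=10) / 10)
print("pandas:    ", timeit.timeit(load_with_pandas, number=10) / 10)
```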
62,135,100 | 2020-6-1 | https://stackoverflow.com/questions/62135100/how-to-define-a-pytest-fixture-to-be-used-by-all-tests-within-a-given-tests-subd | Given a directory tests with a few subdirectories each containing test modules, how can one create a pytest fixture to be run before each test found in a particular subdirectory only? tests ├── __init__.py ├── subdirXX │ ├── test_module1.py │ ├── test_module2.py │ ├── __init__.py ├── subdirYY │ ├── test_module3.py │ ├── test_module4.py │ ├── __init__.py I'd like to have a fixture that will run before each test found in modules within the subdirYY only (in this case, in modules test_module3.py and test_module4.py). I currently have a fixture defined twice, once inside each module within the subdirYY which works but is redundant: @pytest.fixture(autouse=True) def clean_directory(): ... If this is not possible to achieve, each of the tests within the subdirYY is decorated with a custom mark (@pytest.mark.mycustommark) so making sure that a certain fixture will run before each test marked with a particular custom mark is a viable option, too. | Put your autouse fixture in a conftest.py file inside subdirYY. For more information, see the pytest docs about sharing fixtures and the docs on autouse fixtures which specifically mention conftest.py: if an autouse fixture is defined in a conftest.py file then all tests in all test modules belows its directory will invoke the fixture. | 9 | 7 |
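Concretely, the suggestion amounts to a file like the sketch below (fixture body left as in the question):

```python
# tests/subdirYY/conftest.py
import pytest

# Because this conftest.py lives in subdirYY, the autouse fixture runs before
# every test collected from test_module3.py and test_module4.py, and nowhere else.
@pytest.fixture(autouse=True)
def clean_directory():
    ...
```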
62,102,897 | 2020-5-30 | https://stackoverflow.com/questions/62102897/certifacte-verify-failed-certificate-has-expired-ssl-c1108 | When trying to run my Discord bot I get this error: raise ClientConnectorCertificateError( aiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host discordapp.com:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1108)')] It just started happening out of nowhere and happens every time. I'm using Python 3.8 on Windows 10. What does this mean and how do I fix it? | To fix this: go to discord.com with Internet Explorer (run as Administrator), click the lock at the top right, click "View certificates", and install one of them. PS: If your antivirus is active for the web browser and this solution doesn't work, try disabling it and try again. | 19 | 21 |
62,100,772 | 2020-5-30 | https://stackoverflow.com/questions/62100772/can-you-make-python3-give-an-error-when-comparing-strings-to-bytes | When converting code from Python 2 to Python 3 one issue is that the behaviour when testing strings and bytes for equality has changed. For example: foo = b'foo' if foo == 'foo': print("They match!") prints nothing on Python 3 and "They match!" on Python 2. In this case it is easy to spot but in many cases the check is performed on variables which may have been defined elsewhere so there is no obvious type information. I would like to make the Python 3 interpreter give an error whenever there is an equality test between string and bytes rather than silently conclude that they are different. Is there any way to accomplish this? | There is an option, -b, you can pass to the Python interpreter to cause it to emit a warning or error when comparing byte / str. > python --help usage: /bin/python [option] ... [-c cmd | -m mod | file | -] [arg] ... Options and arguments (and corresponding environment variables): -b : issue warnings about str(bytes_instance), str(bytearray_instance) and comparing bytes/bytearray with str. (-bb: issue errors) This produces a BytesWarning as seen here: > python -bb -i Python 3.8.0 Type "help", "copyright", "credits" or "license" for more information. >>> v1 = b'foo' >>> v2 = 'foo' >>> v1 == v2 Traceback (most recent call last): File "<stdin>", line 1, in <module> BytesWarning: Comparison between bytes and string | 7 | 6 |
62,121,832 | 2020-5-31 | https://stackoverflow.com/questions/62121832/is-there-a-way-to-add-a-column-of-type-dictionary-to-a-spark-dataframe-in-pyspar | This is how I create a dataframe with primitive data types in pyspark: from pyspark.sql.types import StructType, StructField, DoubleType, StringType, IntegerType fields = [StructField('column1', IntegerType(), True), StructField('column2', IntegerType(), True)] schema = StructType(fields) df = spark.createDataFrame([], schema) values = [tuple([i]) + tuple([i]) for i in range(3)] df = spark.createDataFrame(values, schema) Now, if I want to have a third column with dictionary data, eg: {"1": 1.0, "2": 2.0, "3": 3.0}, what should I do? I want to create this data frame: +--------------------+-----------------+------------------------------+ |column1 |column2 |column3 | +--------------------+-----------------+------------------------------+ |1 |1 |{"1": 1.0, "2": 1.0, "3": 1.0}| +--------------------+-----------------+------------------------------+ |2 |2 |{"1": 2.0, "2": 2.0, "3": 2.0}| +--------------------+-----------------+------------------------------+ |3 |3 |{"1": 3.0, "2": 3.0, "3": 3.0}| +--------------------+-----------------+------------------------------+ There is a MapType that seems to be helpful, but I can't figure out how to use it? And assuming the data frame is created, how to filter it based on the third column, given a dict to select the rows of the data frame that have that dict value? | Example how to create: from pyspark.sql.types import MapType, IntegerType, DoubleType, StringType, StructType, StructField import pyspark.sql.functions as f schema = StructType([ StructField('column1', IntegerType()), StructField('column2', IntegerType()), StructField('column3', MapType(StringType(), DoubleType()))]) data = [(1, 2, {'a':3.5, 'b':4.2}), (4, 8, {'b':3.7, 'e':4.9})] df = spark.createDataFrame(data, schema=schema) df.show() Output: +-------+-------+--------------------+ |column1|column2| column3| +-------+-------+--------------------+ | 1| 2|[a -> 3.5, b -> 4.2]| | 4| 8|[e -> 4.9, b -> 3.7]| +-------+-------+--------------------+ Example on how to filter DataFrame only leaving elements which have a certain key (assuming you don't have null values in the map and your Spark version is 2.4+ cause early versions don't have element_at): filtered_df = df.where(f.element_at(df.column3, 'a').isNotNull()) Output: +-------+-------+--------------------+ |column1|column2| column3| +-------+-------+--------------------+ | 1| 2|[a -> 3.5, b -> 4.2]| +-------+-------+--------------------+ I might have misunderstood your question - if your intention is to only leave rows where map column equal to a specific dictionary you have it is a little bit more tricky. As far as I know Spark doesn't have comparison operation on dictionary types (it is somewhat unusual operation). There is a way to implement it using udf, which will be not very efficient. 
The code for that might look like this: from pyspark.sql.types import MapType, IntegerType, DoubleType, StringType, StructType, StructField, BooleanType my_dict = {'b':2.7, 'e':4.9} from pyspark.sql.functions import udf def map_equality_comparer(my_dict): @udf(BooleanType()) def comparer(m): if len(m) != len(my_dict): return False for k, v in m.items(): if my_dict.get(k) != v: return False return True return comparer filtered_df = df.where(map_equality_comparer(my_dict)(df.column3)) filtered_df.show() If this is too slow for you you might consider creating a canonical representation of your Dictionaries and comparing those (e.g. converting dictionaries to sorted arrays of key value pairs and filtering based on equality of these arrays). | 8 | 5 |
62,110,746 | 2020-5-31 | https://stackoverflow.com/questions/62110746/is-there-a-better-way-to-check-if-a-number-is-range-of-two-numbers | I am trying to check if a number is in range of integers and returns a number based on which range it lies. I was wondering if is there a better and more efficient way of doing this: def checkRange(number): if number in range(0, 5499): return 5000 elif number in range(5500, 9499): return 10000 elif number in range(9500, 14499): return 15000 elif number in range(14500, 19499): return 20000 elif number in range(19500, 24499): return 25000 elif number in range(24500, 29499): return 30000 elif number in range(29500, 34499): return 35000 elif number in range(34500, 39499): return 40000 elif number in range(39500, 44499): return 45000 This felt like a waste of resources and would greatly appreciate if there is a better way to do this. | Since you have continuous, sorted ranges, a quicker and less verbose way to do this, is to use the bisect module to find the index in a list of breakpoints and then use it to get the corresponding value from a list of values: import bisect break_points = [5499, 9499, 14499, 19499, 24499, 29499, 34499, 39499, 44499] values = [5000, 10000, 15000, 20000, 25000, 30000, 35000, 40000, 45000] n = 10000 index = bisect.bisect_left(break_points, n) values[index] # 15000 You'll need to test for n values that exceed the last breakpoint if that's a possibility. Alternatively you can add a default value to the end of the values list. | 18 | 27 |
62,113,587 | 2020-5-31 | https://stackoverflow.com/questions/62113587/adding-claims-to-drf-simple-jwt-payload | Using djangorestframework_simplejwt library, when POST to a custom view #urls.py path('api/token/', MyTokenObtainPairView.as_view(), name='token_obtain'), #views.py class MyTokenObtainPairView(TokenObtainPairView): serializer_class = MyTokenObtainPairSerializer I'm able to get a the following access token eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTkwOTEwNjg0LCJqdGkiOiI3M2MxYmZkOWNmMGY0ZjI3OTY4MGY0ZjhlYjA1NDQ5NyIsInVzZXJfaWQiOjExfQ.5vs0LmNGseU6rtq3vuQyApupxhQM3FBAoKAq8MUukIBOOYfDAV9guuCVEYDoGgK6rdPSIq2mvcSxkILG8OH5LQ By going to https://jwt.io/ I can see the payload is currently { "token_type": "access", "exp": 1590910684, "jti": "73c1bfd9cf0f4f279680f4f8eb054497", "user_id": 11 } So, we can see that the second part of the token is the payload - containing the claims. I've explored how to add more information to the Response body and now would like to know how to customize the Payload data by adding iat claim, username and today's date. | As you already created a subclass for the desired view (MyTokenObtainPairView) and a subclass for its corresponding serializer (MyTokenObtainPairSerializer), add the following to the serializer class MyTokenObtainPairSerializer(TokenObtainPairSerializer): ... @classmethod def get_token(cls, user): token = super().get_token(user) # Add custom claims token['iat'] = datetime.datetime.now() token['user'] = user.username token['date'] = str(datetime.date.today()) return token Then, when you POST to that same location, you'll get an access token like this eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTkwOTE0MTk4LCJqdGkiOiJhZDZmNzZhZjFmOGU0ZWJlOGI2Y2Y5YjQ4MGQzZjY2MiIsInVzZXJfaWQiOjExLCJpYXQiOjE1OTA5MTc0OTgsInVzZXIiOiJ0aWFnbyIsImRhdGUiOiIyMDIwLTA1LTMxIn0.-5U9P-WWmhlOenzCvc6b7_71Tz17LyNxe_DOMwwqH4RqrNsilVukEcZWFRGupLHRZjIvPya2QJGpiju9ujzQuw Using JWT you can see the Payload changing accordingly { "token_type": "access", "exp": 1590914198, "jti": "ad6f76af1f8e4ebe8b6cf9b480d3f662", "user_id": 11, "iat": 1590917498, "user": "tiago", "date": "2020-05-31" } | 10 | 15 |
62,102,453 | 2020-5-30 | https://stackoverflow.com/questions/62102453/how-to-define-callbacks-in-separate-files-plotly-dash | Background Dash web applications have a dash application instance, usually named app, and initiated like this: app = dash.Dash(__name__) Then, callbacks are added to the application using a callback decorator: @app.callback(...) def my_function(...): # do stuff. In most of the tutorials you find, the callbacks are defined with all of the application layout in the app.py. This of course is just the MWE way of doing things. In a real application, separating code to modules and packages would greatly improve readability and maintainability, but naively separating the callbacks to and layouts just results into circular imports. Question What would be the correct way to separate callbacks and layouts from the app.py in a single page app? MWE Here is a minimal (non-)working example with the problem File structure . ├── my_dash_app │ ├── app.py │ └── views │ ├── first_view.py │ └── __init__.py └── setup.py setup.py import setuptools setuptools.setup( name='dash-minimal-realworld', version='1.0.0', install_requires=['dash>=1.12.0'], packages=setuptools.find_packages(), ) app.py import dash from my_dash_app.views.first_view import make_layout app = dash.Dash(__name__) app.layout = make_layout() if __name__ == '__main__': app.run_server(debug=True) first_view.py from dash.dependencies import Input, Output import dash_core_components as dcc import dash_html_components as html from my_dash_app.app import app def make_layout(): return html.Div([ dcc.Input(id='my-id', value='initial value', type='text'), html.Div(id='my-div') ]) @app.callback(Output(component_id='my-div', component_property='children'), [Input(component_id='my-id', component_property='value')]) def update_output_div(input_value): return 'You\'ve entered "{}"'.format(input_value) Running python ./my_dash_app/app.py results into circular dependency: ImportError: cannot import name 'make_layout' from 'my_dash_app.views.first_view' (c:\tmp\dash_minimal_realworld\my_dash_app\views\first_view.py) | I don't think (but I might be wrong) that there's a correct way of doing it per se, but what you could do it have a central module (maindash.py) around your startup code app = dash.Dash(__name__), and have different callbacks simply import app from my_dash_app.maindash. This would set up the callbacks in their own separate modules but re-use that one central module for the app instance. It's easiest to show an overview of it like this: app.py being the main script called to start everything up. maindash.py is in charge of creating the main app instance. first_view.py is where the decorators are defined to set up all the callbacks. Here's the result: . ├── my_dash_app │ ├── app.py │ ├── maindash.py │ └── views │ ├── first_view.py │ └── __init__.py └── setup.py Since imports are re-used in Python, there's no real harm in doing from my_dash_app.maindash import app several times from different other modules, such as event handlers and the main script. They'll share the same import instance - thus re-using the dash.Dash() instance as well. Just make sure you import the central module before setting up the handlers, and you should be good to go. 
Here's the code snippets separated for testing: app.py from my_dash_app.maindash import app from my_dash_app.views.first_view import make_layout if __name__ == '__main__': app.layout = make_layout() app.run_server(debug=True) maindash.py import dash app = dash.Dash(__name__) first_view.py from my_dash_app.maindash import app from dash.dependencies import Input, Output import dash_core_components as dcc import dash_html_components as html def make_layout(): return html.Div([ dcc.Input(id='my-id', value='initial value', type='text'), html.Div(id='my-div') ]) @app.callback(Output(component_id='my-div', component_property='children'), [Input(component_id='my-id', component_property='value')]) def update_output_div(input_value): return 'You\'ve entered "{}"'.format(input_value) | 38 | 21 |
62,102,618 | 2020-5-30 | https://stackoverflow.com/questions/62102618/sum-values-in-a-list-of-lists-of-dictionaries-using-common-key-value-pairs | How do I sum duplicate elements in a list of lists of dictionaries? Sample list: data = [ [ {'user': 1, 'rating': 0}, {'user': 2, 'rating': 10}, {'user': 1, 'rating': 20}, {'user': 3, 'rating': 10} ], [ {'user': 4, 'rating': 4}, {'user': 2, 'rating': 80}, {'user': 1, 'rating': 20}, {'user': 1, 'rating': 10} ], ] Expected output: op = [ [ {'user': 1, 'rating': 20}, {'user': 2, 'rating': 10}, {'user': 3, 'rating': 10} ], [ {'user': 4, 'rating': 4}, {'user': 2, 'rating': 80}, {'user': 1, 'rating': 30}, ], ] | You can try: from itertools import groupby result = [] for lst in data: sublist = sorted(lst, key=lambda d: d['user']) grouped = groupby(sublist, key=lambda d: d['user']) result.append([ {'user': name, 'rating': sum([d['rating'] for d in group])} for name, group in grouped]) # Sort the `result` `rating` wise: result = [sorted(sub, key=lambda d: d['rating']) for sub in result] # %%timeit # 7.54 µs ± 220 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) UPDATE (A more efficient solution): result = [] for lst in data: visited = {} for d in lst: if d['user'] in visited: visited[d['user']]['rating'] += d['rating'] else: visited[d['user']] = d result.append(sorted(visited.values(), key=lambda d: d['rating'])) # %% timeit # 2.5 µs ± 54 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) Result: # print(result) [ [ {'user': 2, 'rating': 10}, {'user': 3, 'rating': 10}, {'user': 1, 'rating': 20} ], [ {'user': 4, 'rating': 4}, {'user': 1, 'rating': 30}, {'user': 2, 'rating': 80} ] ] | 10 | 4 |
62,100,550 | 2020-5-30 | https://stackoverflow.com/questions/62100550/django-importerror-cannot-import-name-reporterprofile-from-partially-initiali | I have two apps: collection and accounts, with both having models defined. I'm importing a model ReporterProfile from accounts to collection. Similarly, I'm importing a model Report from collection to accounts. The Report model from collection is called in a model class method in accounts like this: from collection.models import Report class ReporterProfile(models.Model): .... def published_articles_number(self): num = Report.objects.filter(reporterprofile=self.id).count() return num Similarly, I am importing ReporterProfile and User models from accounts to collection model like this: from accounts.models import ReporterProfile, User from <project_name> import settings class Report(models.Model): reporterprofile = models.ForeignKey(ReporterProfile, on_delete=models.CASCADE, verbose_name="Report Author") ... class Comment(models.Model): report = models.ForeignKey(Report, on_delete=models.CASCADE, related_name='comments') user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, verbose_name="Comment by") ... When running the server or makemigrations, I get the error: File "F:\project_name\accounts\models.py", line 8, in <module> from collection.models import Report File "F:\project_name\collection\models.py", line 2, in <module> from accounts.models import ReporterProfile, User ImportError: cannot import name 'ReporterProfile' from partially initialized module 'accounts.models' (most likely due to a circular import) (F:\project_name\accounts\models.py) I think the error is coming because of a wrong importing pattern. What should I do? | For ForeignKey: Instead of using reporterprofile = models.ForeignKey(ReporterProfile, ...), you can use reporterprofile = models.ForeignKey("accounts.ReporterProfile", ...), so you don't have to import the model. For preventing circulor import error : Instead of using : from accounts.models import ReporterProfile [...] foo = ReporterProfile() You can use: import accounts.models [...] foo = accounts.models.ReporterProfile() | 17 | 43 |
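A short sketch of how the string-reference advice applies to the models in the question (fields trimmed for brevity); the "accounts.ReporterProfile" string is resolved lazily by Django, so collection/models.py no longer needs to import accounts.models:

```python
# collection/models.py -- no "from accounts.models import ..." needed
from django.conf import settings
from django.db import models

class Report(models.Model):
    # An "app_label.ModelName" string reference breaks the circular import.
    reporterprofile = models.ForeignKey(
        "accounts.ReporterProfile",
        on_delete=models.CASCADE,
        verbose_name="Report Author",
    )

class Comment(models.Model):
    report = models.ForeignKey(Report, on_delete=models.CASCADE, related_name="comments")
    user = models.ForeignKey(
        settings.AUTH_USER_MODEL,
        on_delete=models.CASCADE,
        verbose_name="Comment by",
    )
```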
62,095,847 | 2020-5-29 | https://stackoverflow.com/questions/62095847/pandas-groupby-concat-ungrouped-column-into-comma-separated-string | I have the following example df: col1 col2 col3 doc_no 0 a x f 0 1 a x f 1 2 b x g 2 3 b y g 3 4 c x t 3 5 c y t 4 6 a x f 5 7 d x t 5 8 d x t 6 I want to group by the first 3 columns (col1, col2, col3), concatenate the fourth column (doc_no) into a line of strings based on the groupings of the first 3 columns, as well as also generate a sorted count column of the 3 column grouping (count). Example desired output below (column order doesn't matter): col1 col2 col3 count doc_no 0 a x f 3 0, 1, 5 1 d x t 2 5, 6 2 b x g 1 2 3 b y g 1 3 4 c x t 1 3 5 c y t 1 4 How would I go about doing this? I used the below line to get just the grouping and the count: grouped_df = df.groupby(['col1','col2','col3']).size().reset_index(name='count')\ .sort_values(['count'], ascending=False).reset_index() But I'm not sure how to also get the concatenated doc_no column in the same code line. | Try groupby and agg like so: (df.groupby(['col1', 'col2', 'col3'])['doc_no'] .agg(['count', ('doc_no', lambda x: ','.join(map(str, x)))]) .sort_values('count', ascending=False) .reset_index()) col1 col2 col3 count doc_no 0 a x f 3 0,1,5 1 d x t 2 5,6 2 b x g 1 2 3 b y g 1 3 4 c x t 1 3 5 c y t 1 4 agg is simple to use because you can specify a list of reducers to run on a single column. | 7 | 10 |
62,090,541 | 2020-5-29 | https://stackoverflow.com/questions/62090541/how-to-iterate-over-all-values-of-an-enum-including-any-nested-enums | Imagine one has two classes derived from Enum, e.g. class Color(Enum): blue = 'blue' red = 'red' class Properties(Enum): height = 'h' weight = 'w' colors = Color What is the best way to (probably recursively) iterate over all Enum-labels of a nested Enum like Properties, including the ones of Enum-members like Properties.colors in the example above (i.e. including Color.blue and Color.red)? Checking for the type of the value? | Here's a quick example that just prints them out. I'll leave it as an exercise to the reader to make this a generic generator or whatever applies to the actual use case. :) >>> from typing import Type >>> def print_enum(e: Type[Enum]) -> None: ... for p in e: ... try: ... assert(issubclass(p.value, Enum)) ... print_enum(p.value) ... except (AssertionError, TypeError): ... print(p) ... >>> print_enum(Properties) Properties.height Properties.weight Color.blue Color.red | 10 | 5 |
62,087,499 | 2020-5-29 | https://stackoverflow.com/questions/62087499/failing-to-install-mysql-python | I am tying to install MySQL-python in a python 2.7 virtual environment but I am getting the following error: Installing collected packages: MySQL-python Running setup.py install for MySQL-python ... error ERROR: Command errored out with exit status 1: command: /home/jhylands/py2/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-scNGlE/MySQL-python/setup.py'"'"'; __file__='"'"'/tmp/pip-install-scNGlE/MySQL-python/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-FUMQGL/install-record.txt --single-version-externally-managed --compile --install-headers /home/jhylands/py2/include/site/python2.7/MySQL-python cwd: /tmp/pip-install-scNGlE/MySQL-python/ Complete output (30 lines): running install running build running build_py creating build creating build/lib.linux-x86_64-2.7 copying _mysql_exceptions.py -> build/lib.linux-x86_64-2.7 creating build/lib.linux-x86_64-2.7/MySQLdb copying MySQLdb/__init__.py -> build/lib.linux-x86_64-2.7/MySQLdb copying MySQLdb/converters.py -> build/lib.linux-x86_64-2.7/MySQLdb copying MySQLdb/connections.py -> build/lib.linux-x86_64-2.7/MySQLdb copying MySQLdb/cursors.py -> build/lib.linux-x86_64-2.7/MySQLdb copying MySQLdb/release.py -> build/lib.linux-x86_64-2.7/MySQLdb copying MySQLdb/times.py -> build/lib.linux-x86_64-2.7/MySQLdb creating build/lib.linux-x86_64-2.7/MySQLdb/constants copying MySQLdb/constants/__init__.py -> build/lib.linux-x86_64-2.7/MySQLdb/constants copying MySQLdb/constants/CR.py -> build/lib.linux-x86_64-2.7/MySQLdb/constants copying MySQLdb/constants/FIELD_TYPE.py -> build/lib.linux-x86_64-2.7/MySQLdb/constants copying MySQLdb/constants/ER.py -> build/lib.linux-x86_64-2.7/MySQLdb/constants copying MySQLdb/constants/FLAG.py -> build/lib.linux-x86_64-2.7/MySQLdb/constants copying MySQLdb/constants/REFRESH.py -> build/lib.linux-x86_64-2.7/MySQLdb/constants copying MySQLdb/constants/CLIENT.py -> build/lib.linux-x86_64-2.7/MySQLdb/constants running build_ext building '_mysql' extension creating build/temp.linux-x86_64-2.7 x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fdebug-prefix-map=/build/python2.7-1x6jhf/python2.7-2.7.18~rc1=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -Dversion_info=(1,2,5,'final',1) -D__version__=1.2.5 -I/usr/include/mysql -I/usr/include/python2.7 -c _mysql.c -o build/temp.linux-x86_64-2.7/_mysql.o _mysql.c:44:10: fatal error: my_config.h: No such file or directory 44 | #include "my_config.h" | ^~~~~~~~~~~~~ compilation terminated. error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 ---------------------------------------- ERROR: Command errored out with exit status 1: /home/jhylands/py2/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-scNGlE/MySQL-python/setup.py'"'"'; __file__='"'"'/tmp/pip-install-scNGlE/MySQL-python/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-FUMQGL/install-record.txt --single-version-externally-managed --compile --install-headers /home/jhylands/py2/include/site/python2.7/MySQL-python Check the logs for full command output. 
I have already tried installing the solutions suggested in this post and this post | So I managed to solve the issue with the following command: sudo wget https://raw.githubusercontent.com/paulfitz/mysql-connector-c/master/include/my_config.h -P /usr/include/mysql/ Which I found from a comment on this answer. | 9 | 16 |
62,086,686 | 2020-5-29 | https://stackoverflow.com/questions/62086686/how-to-extract-values-from-pandas-series-without-index | I have the following when I print my data structure: print(speed_tf) 44.0 -24.4 45.0 -12.2 46.0 -12.2 47.0 -12.2 48.0 -12.2 Name: Speed, dtype: float64 I believe this is a pandas Series, but I am not sure. I do not want the first column (the index) at all; I just want -24.4 -12.2 -12.2 -12.2 -12.2 I tried speed_tf.reset_index() index Speed 0 44.0 -24.4 1 45.0 -12.2 2 46.0 -12.2 3 47.0 -12.2 4 48.0 -12.2 How can I just get the Speed values with the index starting at 0? | speed_tf.values should do what you want. | 23 | 33 |
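A tiny self-contained sketch of the options (the data mirrors the question; `.to_numpy()` is the newer spelling of `.values`, and `reset_index(drop=True)` is the fix if a Series rather than a NumPy array is wanted):

```python
import pandas as pd

speed_tf = pd.Series([-24.4, -12.2, -12.2, -12.2, -12.2],
                     index=[44.0, 45.0, 46.0, 47.0, 48.0], name="Speed")

print(speed_tf.values)                  # numpy array: [-24.4 -12.2 -12.2 -12.2 -12.2]
print(speed_tf.to_numpy())              # same values, preferred in newer pandas
print(speed_tf.reset_index(drop=True))  # Series re-indexed from 0, old index dropped
```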
62,078,016 | 2020-5-29 | https://stackoverflow.com/questions/62078016/smooth-the-edges-of-binary-images-face-using-python-and-open-cv | I am looking for a perfect way to smooth edges of binary images. The problem is the binary image appears to be a staircase like borders which is very unpleasing for my further masking process. I am attaching a raw binary image that is to be converted into smooth edges and I am also providing the expected outcome. I am also looking for a solution that would work even if we increase the dimensions of the image. Problem Image Expected Outcome | You can do that in Python/OpenCV with the help of Skimage by blurring the binary image. Then apply a one-sided clip. Input: import cv2 import numpy as np import skimage.exposure # load image img = cv2.imread('bw_image.png') # blur threshold image blur = cv2.GaussianBlur(img, (0,0), sigmaX=3, sigmaY=3, borderType = cv2.BORDER_DEFAULT) # stretch so that 255 -> 255 and 127.5 -> 0 # C = A*X+B # 255 = A*255+B # 0 = A*127.5+B # Thus A=2 and B=-127.5 #aa = a*2.0-255.0 does not work correctly, so use skimage result = skimage.exposure.rescale_intensity(blur, in_range=(127.5,255), out_range=(0,255)) # save output cv2.imwrite('bw_image_antialiased.png', result) # Display various images to see the steps cv2.imshow('result', result) cv2.waitKey(0) cv2.destroyAllWindows() You will have to adjust the amount of blur for the degree of aliasing in the image. | 9 | 8 |
62,074,633 | 2020-5-28 | https://stackoverflow.com/questions/62074633/how-to-increase-the-memory-limits-in-google-cloud-run | I'm building a simple Flask based app using Cloud Run + Cloud Firestore. There is one method that brings a lot of data, and the logs are showing this error: `Memory limit of 244M exceeded with 248M used. Consider increasing the memory limit, see https://cloud.google.com/run/docs/configuring/memory-limits` How I can increase the memory limit in the cloudbuild.yaml? Our YAML file contains the following: # cloudbuild.yaml steps: # build & push the container image - name: "gcr.io/kaniko-project/executor:latest" args: ["--cache=true", "--cache-ttl=48h", "--destination=gcr.io/$PROJECT_ID/todo:latest"] # Deploy container image to Cloud Run - name: "gcr.io/cloud-builders/gcloud" args: ['beta', 'run', 'deploy', 'todo', '--image', 'gcr.io/$PROJECT_ID/todo:latest', '--region', 'us-central1', '--allow-unauthenticated', '--platform', 'managed'] Thank you | In the args of the last step, add '--memory', '512Mi' The format for size is a fixed or floating point number followed by a unit: G, M, or K corresponding to gigabyte, megabyte, or kilobyte, respectively, or use the power-of-two equivalents: Gi, Mi, Ki corresponding to gibibyte, mebibyte or kibibyte respectively. | 7 | 15 |
62,068,323 | 2020-5-28 | https://stackoverflow.com/questions/62068323/iterating-over-tf-tensor-is-not-allowed-autograph-is-disabled-in-this-function | I am using tensorflow 2.1 along with python 3.7 The following snippet of code is being used to build a tensorflow graph. The code runs without errors when executed as a standalone python script. (Probably tensorflow is running in eager mode? I am not sure.) import tensorflow as tf patches = tf.random.uniform(shape=(1, 10, 50, 300), dtype=tf.dtypes.float32) s = tf.shape(patches) patches = [patches[0][x][y] - tf.reduce_mean(patches[0][x][y]) for y in tf.range(s[2]) for x in tf.range(s[1])] However, the code fails when this is part of a tensorflow graph. I receive the following error: tensorflow. python.framework.errors_impl.OperatorNotAllowedInGraphError: iterating over tf.Tensor is not allowed: AutoGraph is disabled in this function. Try decorating it directly with @tf.function. I also added the decorator @tf.function to the method which wraps the above lines of code. It didn't help. I am not sure if I fully understand the meaning of decorating with @tf.function. I also checked that this could be a problem with using python list comprehension inside the tensorflow graph. I am not sure how to use tf.map_fn or tf.while_loop for my case, since I have nested loops. Thanks in advance! | List comprehensions are not yet supported in autograph. The error that's raised needs to be improved, too. Piling up on https://github.com/tensorflow/tensorflow/issues/32546 should help resolve it sooner. Until comprehensions are supported, you have to use map_fn, which in this case would look something like this: def outer_comp(x): def inner_comp(y): return patches[0][x][y] - tf.reduce_mean(patches[0][x][y]) return tf.map_fn(inner_comp, tf.range(s[2]), dtype=tf.float32) patches = tf.map_fn(outer_comp, tf.range(s[1]), dtype=tf.float32) That said, I believe you can just use reduce_mean directly: patches = patches - tf.expand_dims(tf.reduce_mean(patches, axis=3), -1) | 16 | 21 |
62,066,599 | 2020-5-28 | https://stackoverflow.com/questions/62066599/how-to-get-the-pid-of-the-process-started-by-subprocess-run-and-kill-it | I'm using Windows 10 and Python 3.7. I ran the following command. import subprocess exeFilePath = "C:/Users/test/test.exe" subprocess.run(exeFilePath) I want to force-quit the .exe file launched with this command when a button is clicked or when a function is executed. Looking at a past question, it has been indicated that the way to force-quit is to get a PID and call os.kill as follows. import signal os.kill(self.p.pid, signal.CTRL_C_EVENT) However, I don't know how to get the PID of the process started by subprocess.run. What should I do? | Assign your subprocess to a variable (using subprocess.Popen, which returns a process handle immediately, rather than subprocess.run, which blocks until the process exits): import os import signal import subprocess exeFilePath = "C:/Users/test/test.exe" p = subprocess.Popen(exeFilePath) print(p.pid) # the pid os.kill(p.pid, signal.SIGTERM) # or signal.SIGKILL In some cases the process has child processes. You need to kill all of them to terminate it. In that case you can use psutil: # python -m pip install --user psutil import psutil # remember to assign the subprocess to a variable def kills(pid): '''Kills the process and all of its children''' parent = psutil.Process(pid) for child in parent.children(recursive=True): child.kill() parent.kill() # assumes variable p kills(p.pid) This will kill the process with that PID along with all of its child processes. | 7 | 7 |
62,041,999 | 2020-5-27 | https://stackoverflow.com/questions/62041999/where-to-set-n-job-estimator-or-gridsearchcv | I often use GridSearchCV for hyperparameter tuning. For example, for tuning the regularization parameter C in Logistic Regression. Whenever an estimator I am using has its own n_jobs parameter, I am confused where to set it: in the estimator, in GridSearchCV, or in both? The same thing applies to cross_validate. | This is a very interesting question. I don't have a definitive answer, but here are some elements that are worth mentioning to understand the issue and don't fit in a comment. Let's start with why you should or should not use multiprocessing: Multiprocessing is useful for independent tasks. This is the case in a GridSearch, where all the different variations of your models are independent. Multiprocessing is not useful / makes things slower when: Tasks are too small: creating a new process takes time, and if your tasks are really small, this overhead will slow the execution of the whole code. Too many processes are spawned: your computer has a limited number of cores. If you have more processes than cores, a load-balancing mechanism will force the computer to regularly switch the processes that are running. These switches take some time, resulting in a slower execution. The first takeaway is that you should not use n_jobs in both GridSearch and the model you're optimizing, because you will spawn a lot of processes and end up slowing the execution. Now, a lot of sklearn models and functions are based on NumPy/SciPy, which in turn are usually implemented in C/Fortran and thus already use multiprocessing. That means that these should not be used with n_jobs>1 set in the GridSearch. If you assume your model is not already parallelized, you can choose to set n_jobs at the model level or at the GridSearch level. A few models are able to be fully parallelized (RandomForest for instance), but most may have at least some part that is sequential (Boosting for instance). On the other hand, GridSearch has no sequential component by design, so it would make sense to set n_jobs in GridSearch rather than in the model. That being said, it depends on the implementation of the model, and you can't have a definitive answer without testing it yourself for your case. For example, if your pipeline consumes a lot of memory for some reason, setting n_jobs in the GridSearch may cause memory issues. As a complement, here is a very interesting note on parallelism in sklearn | 9 | 7 |
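A small sketch of the placement being recommended, using the logistic-regression example from the question (the data and parameter grid are made up for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, random_state=0)

# Ask for parallelism once, at the GridSearchCV level, and leave the estimator
# sequential so the two levels do not compete for the same cores.
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1, 10]},
    n_jobs=-1,
    cv=5,
)
search.fit(X, y)
print(search.best_params_)
```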
62,059,196 | 2020-5-28 | https://stackoverflow.com/questions/62059196/gensim-fasttext-why-load-facebook-vectors-doesnt-work | I've tried to load pre-trained FastText vectors from fastext - wiki word vectors. My code is below, and it works well. from gensim.models import FastText model = FastText.load_fasttext_format('./wiki.en/wiki.en.bin') but, the warning message is a little annoying. gensim_fasttext_pretrained_vector.py:13: DeprecationWarning: Call to deprecated `load_fasttext_format` (use load_facebook_vectors (to use pretrained embeddings) The message said, load_fasttext_format will be deprecated so, it will be better to use load_facebook_vectors. So I decided to changed the code. and My changed code is like below. from gensim.models import FastText model = FastText.load_facebook_vectors('./wiki.en/wiki.en.bin') But, the error occurred, the error message is like this. Traceback (most recent call last): File "gensim_fasttext_pretrained_vector.py", line 13, in <module> model = FastText.load_facebook_vectors('./wiki.en/wiki.en.bin') AttributeError: type object 'FastText' has no attribute 'load_facebook_vectors' I couldn't understand why these thing happen. I just change what the messages said, but it doesn't work. If you know anything about this, please let me know. Always, thanks for you guys help. | You're almost there, you need to change two things: First of all, it's fasttext all lowercase letters, not Fasttext. Second of all, to use load_facebook_vectors, you need first to create a datapath object before using it. So, you should do like so: from gensim.models import fasttext from gensim.test.utils import datapath wv = fasttext.load_facebook_vectors(datapath("./wiki.en/wiki.en.bin")) | 8 | 7 |
62,060,079 | 2020-5-28 | https://stackoverflow.com/questions/62060079/how-to-solve-package-conflict-on-conda | I want to use Conda to create a virtual environment from a YAML file. However, many packages end up with a Conflict error. The best way to solve this is to install each package individually instead of creating a virtual environment from a YAML file, right? If anyone knows of a better way to do it, please let me know. | Use conda-forge which has a strong dependency resolution implementation. Newer conda versions (>=4.6) introduced a strict channel priority feature. Type conda config --describe channel_priority for more information. The solution is to add the conda-forge channel on top of defaults in your .condarc file when using conda-forge packages and activate the strict channel priority with: $ conda config --set channel_priority strict This will ensure that all the dependencies will come from the conda-forge channel unless they exist only on defaults. You could also use Pipenv, and the Pipfile feature it comes with. Pipenv will attempt to install sub-dependencies that satisfy all the requirements from your core dependencies. see more: https://realpython.com/pipenv-guide/ | 8 | 3 |
62,042,172 | 2020-5-27 | https://stackoverflow.com/questions/62042172/how-to-remove-noise-in-image-opencv-python | I have some cropped images and I need images that have black texts on white background. Firstly I apply adaptive thresholding and then I try to remove noise. Although I tried a lot of noise removal techniques but when the image changed, the techniques I used failed. The best method for converting image color to binary for my images is Adaptive Gaussian Thresholding. Here is my code: im_gray = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE) image = cv2.GaussianBlur(im_gray, (5,5), 1) th = cv2.adaptiveThreshold(image,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C,cv2.THRESH_BINARY,3,2) I need smooth values, Decimal separator(dot) and postfix letters. How can I do this? | Before binarization, it is necessary to correct the nonuniform illumination of the background. For example, like this: import cv2 image = cv2.imread('9qBsB.jpg') image=cv2.cvtColor(image,cv2.COLOR_BGR2GRAY) se=cv2.getStructuringElement(cv2.MORPH_RECT , (8,8)) bg=cv2.morphologyEx(image, cv2.MORPH_DILATE, se) out_gray=cv2.divide(image, bg, scale=255) out_binary=cv2.threshold(out_gray, 0, 255, cv2.THRESH_OTSU )[1] cv2.imshow('binary', out_binary) cv2.imwrite('binary.png',out_binary) cv2.imshow('gray', out_gray) cv2.imwrite('gray.png',out_gray) Result: | 11 | 22 |
62,048,408 | 2020-5-27 | https://stackoverflow.com/questions/62048408/how-to-remove-progressbar-in-tqdm-once-the-iteration-is-complete | How can I archive this? from tqdm import tqdm for link in tqdm(links): try: #Do Some Stff except: pass print("Done:") Result: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 111.50it/s] Done: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 111.50it/s] Done: Expected Result (Showing the status bar but don't print it after into the console) Done: Done: | tqdm actually takes several arguments, one of them is leave, which according to the docs: If [default: True], keeps all traces of the progressbar upon termination of iteration. If None, will leave only if position is 0 So: >>> for _ in tqdm(range(2)): ... time.sleep(1) ... 100%|██████████████████████████████████████████████████████| 2/2 [00:02<00:00, 1.01s/it] Whereas setting leave=False yields: >>> for _ in tqdm(range(2), leave=False): ... time.sleep(1) ... >>> | 33 | 53 |
62,039,535 | 2020-5-27 | https://stackoverflow.com/questions/62039535/extract-images-from-excel-file-with-python | I have an Excel sheet with 100 rows. Each one has various pieces of information, including an id, and a cell containing a photo. I use pandas to load the data into dictionaries: import pandas as pd df = pd.read_excel('myfile.xlsx') data = [] for index,row in df.iterrows(): data.append({ 'id':row['id'], 'field2':row['field2'], 'field3':row['field3'] }) For the image column, I want to extract each image, name it with the id of the row (image_row['id'].jpg) and put it into a folder. Then, I want to store the path to the image as below: for index,row in df.iterrows(): data.append({ 'id':row['id'], 'field2':row['field2'], 'field3':row['field3'], 'image':'path/image_'+row['id']+'.jpg' }) I'm looking for a way to do that, or another way if there is a better one. Do you have any idea? I'm on Linux, so I can't use this method with pywin32. Thanks a lot -- EDIT You can find here an example of the sheet I use | I found a solution using the openpyxl and openpyxl-image-loader modules # installing the modules pip3 install openpyxl pip3 install openpyxl-image-loader Then, in the script: # Importing the modules import openpyxl from openpyxl_image_loader import SheetImageLoader # loading the Excel file and the sheet pxl_doc = openpyxl.load_workbook('myfile.xlsx') sheet = pxl_doc['Sheet_name'] # calling the image_loader image_loader = SheetImageLoader(sheet) # get the image (put the cell you need instead of 'A1') image = image_loader.get('A1') # showing the image image.show() # saving the image image.save('my_path/image_name.jpg') In the end, I can store the path and the image name in my dictionaries in a loop for each row | 8 | 31
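Building on the accepted answer above, a sketch of the per-row loop the asker describes might look like the following; the sheet name, the image column (D), the id column (A), the header row and the output folder are all assumptions for illustration, not details from the question:

```python
import openpyxl
from openpyxl_image_loader import SheetImageLoader

wb = openpyxl.load_workbook('myfile.xlsx')
sheet = wb['Sheet_name']                      # assumed sheet name
loader = SheetImageLoader(sheet)

paths = {}
for row in range(2, sheet.max_row + 1):       # assumes row 1 is a header
    row_id = sheet[f'A{row}'].value           # assumes ids live in column A
    cell = f'D{row}'                          # assumes photos live in column D
    if loader.image_in(cell):
        path = f'path/image_{row_id}.jpg'
        loader.get(cell).save(path)
        paths[row_id] = path
```

The resulting paths dictionary can then be merged into the data records built with pandas in the question.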
61,974,312 | 2020-5-23 | https://stackoverflow.com/questions/61974312/is-python-memory-safe | With Deno being the new Node.js rival and all, the memory-safe nature of Rust has been mentioned in a lot of news articles, one particular piece stated Rust and Go are good for their memory-safe nature, as are Swift and Kotlin but the latter two are not used for systems programming that widely. Safe Rust is the true Rust programming language. If all you do is write Safe Rust, you will never have to worry about type-safety or memory-safety. You will never endure a dangling pointer, a use-after-free, or any other kind of Undefined Behavior. This piqued my interest into understanding if Python can be regarded as memory-safe and if yes or no, how safe or unsafe? From the outset, the article on memory safety on Wikipedia does not even mention Python and the article on Python only mentions memory management it seems. The closest I've come to finding an answer was this one by Daniel: The wikipedia article associates type-safe to memory-safe, meaning, that the same memory area cannot be accessed as e.g. integer and string. In this way Python is type-safe. You cannot change the type of a object implicitly. But even this only seems to imply a connection between two aspects (using an association from Wikipedia, which again is debatable) and no definitive answer on whether Python can be regarded as memory-safe. | Wikipedia lists the following examples of memory safety issues: Access errors: invalid read/write of a pointer Buffer overflow - out-of-bound writes can corrupt the content of adjacent objects, or internal data (like bookkeeping information for the heap) or return addresses. Buffer over-read - out-of-bound reads can reveal sensitive data or help attackers bypass address space layout randomization. Python at least tries to protect against these. Race condition - concurrent reads/writes to shared memory That's actually not that hard to do in languages with mutable data structures. (Advocates of functional programming and immutable data structures often use this fact as an argument in their favor). Invalid page fault - accessing a pointer outside the virtual memory space. A null pointer dereference will often cause an exception or program termination in most environments, but can cause corruption in operating system kernels or systems without memory protection, or when use of the null pointer involves a large or negative offset. Use after free - dereferencing a dangling pointer storing the address of an object that has been deleted. Uninitialized variables - a variable that has not been assigned a value is used. It may contain an undesired or, in some languages, a corrupt value. Null pointer dereference - dereferencing an invalid pointer or a pointer to memory that has not been allocated Wild pointers arise when a pointer is used prior to initialization to some known state. They show the same erratic behaviour as dangling pointers, though they are less likely to stay undetected. There's no real way to prevent someone from trying to access a null pointer. In C# and Java, this results in an exception. In C++, this results in undefined behavior. Memory leak - when memory usage is not tracked or is tracked incorrectly Stack exhaustion - occurs when a program runs out of stack space, typically because of too deep recursion. A guard page typically halts the program, preventing memory corruption, but functions with large stack frames may bypass the page. 
Memory leaks in languages like C#, Java, and Python have different meanings than they do in languages like C and C++ where you manage memory manually. In C or C++, you get a memory leak by failing to deallocate allocated memory. In a language with managed memory, you don't have to explicitly de-allocate memory, but it's still possible to do something quite similar by accidentally maintaining a reference to an object somewhere even after the object is no longer needed. This is actually quite easy to do with things like event handlers in C# and long-lived collection classes; I've actually worked on projects where there were memory leaks in spite of the fact that we were using managed memory. In one sense, working with an environment that has managed memory can actually make these issues more dangerous because programmers can have a false sense of security. In my experience, even experienced engineers often fail to do memory profiling or write test cases to check for this (again, likely due to the environment giving them a false sense of security). Stack exhaustion is quite easy to do in Python too (e.g. with infinite recursion). Heap exhaustion - the program tries to allocate more memory than the amount available. In some languages, this condition must be checked for manually after each allocation. Still quite possible - I'm rather embarrassed to admit that I've personally done that in C# by loading an enormous file into memory (although not in Python yet). Double free - repeated calls to free may prematurely free a new object at the same address. If the exact address has not been reused, other corruption may occur, especially in allocators that use free lists. Invalid free - passing an invalid address to free can corrupt the heap. Mismatched free - when multiple allocators are in use, attempting to free memory with a deallocation function of a different allocator[20] Unwanted aliasing - when the same memory location is allocated and modified twice for unrelated purposes. Unwanted aliasing is actually quite easy to do in Python. Here's an example in Java (full disclosure: I wrote the accepted answer); you could just as easily do something quite similar in Python. The others are managed by the Python interpreter itself. So, it would seem that memory-safety is relative. Depending on exactly what you consider a "memory-safety issue," it can actually be quite difficult to entirely prevent. High-level languages like Java, C#, and Python can prevent many of the worst of these errors, but there are other issues that are difficult or impossible to completely prevent. | 15 | 11 |
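As a concrete illustration of the "unwanted aliasing" point above, here is a small, self-contained Python sketch (not from the original answer) in which two names end up bound to the same mutable object:

```python
def add_row(table=[]):      # the default list is created once and shared across calls
    table.append("row")
    return table

first = add_row()
second = add_row()          # silently mutates the same list that `first` refers to
print(first is second)      # True  -- both names alias a single object
print(first)                # ['row', 'row'] rather than the ['row'] one might expect
```

The same pattern underlies the managed-memory "leaks" described above: an object stays alive for as long as any forgotten reference to it exists.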
62,025,723 | 2020-5-26 | https://stackoverflow.com/questions/62025723/how-to-validate-a-pydantic-object-after-editing-it | Is there any obvious way to validate a pydantic model after changing some attribute? Say I create a simple Model and object: from pydantic import BaseModel class A(BaseModel): b: int = 0 a=A() Then edit it, so that it is actually invalid: a.b = "foobar" Can I force a re-validation and expect a ValidationError to be raised? I tried A.validate(a) # no error a.copy(update=dict(b='foobar')) # no error What did work was A(**dict(a._iter())) ValidationError: 1 validation error for A b value is not a valid integer (type=type_error.integer) But that is not really straightforward and I need to use the supposedly private method _iter. Is there a clean alternative? | pydantic can do this for you, you just need validate_assignment: from pydantic import BaseModel class A(BaseModel): b: int = 0 class Config: validate_assignment = True | 24 | 29 |
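A short usage sketch of the validate_assignment answer above, written against pydantic v1 (which the question appears to use; v2 spells this option differently via model_config):

```python
from pydantic import BaseModel, ValidationError

class A(BaseModel):
    b: int = 0

    class Config:
        validate_assignment = True

a = A()
try:
    a.b = "foobar"          # re-validated at assignment time
except ValidationError as e:
    print(e)                # value is not a valid integer (type=type_error.integer)
```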
61,981,156 | 2020-5-24 | https://stackoverflow.com/questions/61981156/unable-to-locate-package-python-pip-ubuntu-20-04 | I am trying to install mininet-wifi. After downloading it, I have been using the following command to install it: sudo util/install.sh -Wlnfv However, I keep getting the error: E: Unable to locate package python-pip I have tried multiple times to download python-pip. I know mininet-wifi utilizes python 2 instead of python 3. I have tried to download python-pip using the command: sudo apt-get install python-pip But that leads to the same error: E: Unable to locate package python-pip | Pip for Python 2 is not included in the Ubuntu 20.04 repositories. You need to install pip for Python 2 using the get-pip.py script. 1. Start by enabling the universe repository: sudo add-apt-repository universe 2. Update the packages index and install Python 2: sudo apt update sudo apt install python2 3. Use curl to download the get-pip.py script specific to python 2.7: curl https://bootstrap.pypa.io/pip/2.7/get-pip.py --output get-pip.py 4. Once the repository is enabled, run the script as sudo user with python2 to install pip : sudo python2 get-pip.py Pip will be installed globally. If you want to install it only for your user, run the command without sudo. The script will also install setuptools and wheel, which allow you to install source distributions Verify the installation by printing the pip version number: pip2 --version The output will look something like this: pip 20.0.2 from /usr/local/lib/python2.7/dist-packages/pip (python 2.7) | 33 | 69 |
61,948,723 | 2020-5-22 | https://stackoverflow.com/questions/61948723/how-to-extend-a-pydantic-object-and-change-some-fields-type | There are two similar pydantic objects like the ones below. The only difference is that some fields are optional. How can I define the fields in just one object and extend it into the other? class ProjectCreateObject(BaseModel): project_id: str project_name: str project_type: ProjectTypeEnum depot: str system: str ... class ProjectPatchObject(ProjectCreateObject): project_id: str project_name: Optional[str] project_type: Optional[ProjectTypeEnum] depot: Optional[str] system: Optional[str] ... | I found a good and easy way using __init_subclass__. The docs can also be generated successfully. class ProjectCreateObject(BaseModel): project_id: str project_name: str project_type: ProjectTypeEnum depot: str system: str ... def __init_subclass__(cls, optional_fields=(), **kwargs): """ allow some fields of a subclass to become optional """ super().__init_subclass__(**kwargs) for field in optional_fields: cls.__fields__[field].outer_type_ = Optional cls.__fields__[field].required = False _patch_fields = ProjectCreateObject.__fields__.keys() - {'project_id'} class ProjectPatchObject(ProjectCreateObject, optional_fields=_patch_fields): pass | 11 | 6
61,937,520 | 2020-5-21 | https://stackoverflow.com/questions/61937520/proper-way-to-create-class-variable-in-data-class | I've just begun playing around with Python's Data Classes, and I would like confirm that I am declaring Class Variables in the proper way. Using regular python classes class Employee: raise_amount = .05 def __init__(self, fname, lname, pay): self.fname = fname self.lname = lname self.pay = pay Using python Data Class @dataclass class Employee: fname: str lname: str pay: int raise_amount = .05 The class variable I am referring to is raise_amount. Is this a properly declared class variable using Data Classes? Or is there a better way of doing so? I have tested the data class implementation already and it provides the expected functionality, but I am mainly wondering if my implementation is following best practices. | To create a class variable, annotate the field as a typing.ClassVar or not at all. from typing import ClassVar from dataclasses import dataclass @dataclass class Foo: ivar: float = 0.5 cvar: ClassVar[float] = 0.5 nvar = 0.5 foo = Foo() Foo.ivar, Foo.cvar, Foo.nvar = 1, 1, 1 print(Foo().ivar, Foo().cvar, Foo().nvar) # 0.5 1 1 print(foo.ivar, foo.cvar, foo.nvar) # 0.5 1 1 print(Foo(), Foo(12)) # Foo(ivar=0.5) Foo(ivar=12) There is a subtle difference in that the unannotated field is completely ignored by @dataclass, whereas the ClassVar field is stored but not converted to an attribute. dataclasses — Data Classes The member variables [...] are defined using PEP 526 type annotations. Class variables One of two places where dataclass() actually inspects the type of a field is to determine if a field is a class variable as defined in PEP 526. It does this by checking if the type of the field is typing.ClassVar. If a field is a ClassVar, it is excluded from consideration as a field and is ignored by the dataclass mechanisms. Such ClassVar pseudo-fields are not returned by the module-level fields() function. | 90 | 131 |
61,927,877 | 2020-5-21 | https://stackoverflow.com/questions/61927877/how-to-crop-opencv-image-from-center | How can I crop an image using OpenCV from the center? I think it has something to do with this line, but if there is a better way please inform me. crop_img = img[y:y+h, x:x+w] | Just an additional comment to Lenik's answer (it is the first time I've wanted to contribute on Stack Overflow and I don't have enough reputation to comment on the answer): you need to be sure x and y are integers. Probably in this case x and y would always be integers, as most resolutions are even, but it is good practice to keep the values inside an int(). Here w and h are the desired crop width and height: center = img.shape x = center[1]/2 - w/2 y = center[0]/2 - h/2 crop_img = img[int(y):int(y+h), int(x):int(x+w)] | 10 | 14
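Putting the question's slicing and the answer's integer handling together, a self-contained sketch (file names are placeholders) could look like this:

```python
import cv2

def center_crop(img, w, h):
    """Return a w x h crop taken from the center of img (no bounds checking)."""
    y = int(img.shape[0] / 2 - h / 2)
    x = int(img.shape[1] / 2 - w / 2)
    return img[y:y + h, x:x + w]

img = cv2.imread('input.jpg')          # placeholder input file
crop = center_crop(img, 200, 150)
cv2.imwrite('crop.jpg', crop)          # placeholder output file
```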
61,913,632 | 2020-5-20 | https://stackoverflow.com/questions/61913632/python-convert-string-type-to-datetime-type | I have a two variables that i want to compare. When printed, this is what they look like: 2020-05-20 13:01:30 2020-05-20 14:49:03 However, one is a string type, and the other a datetime type. If I want to convert the string one into date type so I can compare them, is the only way to use strptime? Because this seems a little redundant to me, since the string already has the exact format I want it to have. Basically, is there a function that does the same as strptime, but without re-formating it? As you can imagine, googling this problem is impossible, as all I'm getting is people trying to format any kind of string into datetime, so all the answers are just pointing at strptime. | If you work with Python 3.7+, for ISO 8601 compatible strings, use datetime.fromisoformat() as this is considerably more efficient than strptime or dateutil's parser. Ex: from datetime import datetime dtobj = datetime.fromisoformat('2020-05-20 13:01:30') print(repr(dtobj)) # datetime.datetime(2020, 5, 20, 13, 1, 30) You can find a benchmark vs. strptime etc. here or here. | 12 | 11 |
61,978,049 | 2020-5-23 | https://stackoverflow.com/questions/61978049/reverse-search-an-image-in-yandex-images-using-python | I'm interested in automatizing reverse image search. Yandex in particular is great for busting catfishes, even better than Google Images. So, consider this Python code: import requests import webbrowser try: filePath = "C:\\path\\whateverThisIs.png" searchUrl = 'https://yandex.ru/images/' multipart = {'encoded_image': (filePath, open(filePath, 'rb')), 'image_content': ''} response = requests.post(searchUrl, files=multipart, allow_redirects=False) #fetchUrl = response.headers['Location'] print(response) print(dir(response)) print(response.content) input() except Exception as e: print(e) print(e.with_traceback) input()``` The script fails with KeyError, 'location' is not found. I know the code works cause if you substitute searchUrl with http://www.google.hr/searchbyimage/upload then the script returns the correct url. So, in short the expected outcome would be a url with an image search. In actuality we get a KeyError where that url was supposed to be stored. Evidently, Yandex doesn't work in exactly the same way, maybe the url is off (although I tried a heap ton of variations) or the reason may be completely different. Regardless of that, help in solving this problem is much appreciated! | You can get url with an image search by using this code. Tested on ubuntu 18.04, with python 3.7 and requests 2.23.0 import json import requests file_path = "C:\\path\\whateverThisIs.png" search_url = 'https://yandex.ru/images/search' files = {'upfile': ('blob', open(file_path, 'rb'), 'image/jpeg')} params = {'rpt': 'imageview', 'format': 'json', 'request': '{"blocks":[{"block":"b-page_type_search-by-image__link"}]}'} response = requests.post(search_url, params=params, files=files) query_string = json.loads(response.content)['blocks'][0]['params']['url'] img_search_url = search_url + '?' + query_string print(img_search_url) | 11 | 17 |
61,976,560 | 2020-5-23 | https://stackoverflow.com/questions/61976560/how-to-delete-queue-updates-in-telegram-api | I'm trying to delete messages from /getUpdates in the Telegram API but I don't know how. I tried to use /deleteMessage https://api.telegram.org/bot<TOKEN>/deleteMessage?chat_id=blahblah&message_id=BlahBlah But it didn't delete the message from the API's update queue. | TL;DR: Call getUpdates() with the offset parameter set to the last message's id, incremented by 1 We'll need to let Telegram know which messages we've processed. To do this, set the offset parameter to the update_id + 1 of the last message your script has processed. Call getUpdates() to get the update_id of the latest message https://api.telegram.org/<MY-TOKEN>/getUpdates { "ok": true, "result": [ { "update_id": 343126593, # <-- Remember / Save this id "message": { ... Increment the update_id by 1 On the next getUpdates() call, set the offset parameter to the id: https://api.telegram.org/<MY-TOKEN>/getUpdates?offset=343126594 | 10 | 13
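A minimal Python polling sketch of the offset mechanism described above; <TOKEN> is a placeholder and the actual handling of each update is left empty:

```python
import requests

API = 'https://api.telegram.org/bot<TOKEN>'   # placeholder bot token
offset = None

while True:
    params = {'timeout': 30}
    if offset is not None:
        params['offset'] = offset
    updates = requests.get(f'{API}/getUpdates', params=params).json()['result']
    for update in updates:
        # ... handle the update here ...
        offset = update['update_id'] + 1      # confirms everything up to this update
```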
61,947,044 | 2020-5-22 | https://stackoverflow.com/questions/61947044/keyring-warning-when-running-pip-list-o | I've been trying to run pip list -o and pip list --outdated to see if any packages need to be updated but it enters a loop of printing: WARNING: Keyring is skipped due to an exception: Failed to create the collection: Prompt dismissed.. I've upgraded keyring and the version was already up-to-date. I've seen this keyring warning whilst using pip install {package} --upgrade to upgrade other packages as well. | I searched the web about that topic and find that GitHub issue. If your pip version is any version before "21.1", you can try to upgrade pip to the latest version with pip install --upgrade pip command. Also, as a workaround, you can consider the following answer of jrd from the above link: Exporting PYTHON_KEYRING_BACKEND=keyring.backends.null.Keyring prevent python from using any keyring. PYTHON_KEYRING_BACKEND=keyring.backends.null.Keyring pipenv update does not ask me anything anymore. So, as a temporary solution, one might want to put this in a .env file. | 18 | 11 |
61,924,233 | 2020-5-20 | https://stackoverflow.com/questions/61924233/the-from-address-does-not-match-a-verified-sender-identity-mail-cannot-be-sent | I followed this link: https://sendgrid.com/docs/ui/sending-email/sender-verification In my SendGrid account the from_email is set as a verified Single Sender, but when I send email verifications on my localhost, I still receive the same message: The from address does not match a verified Sender Identity My config: EMAIL_HOST = 'smtp.sendgrid.net' EMAIL_HOST_USER = 'apikey' EMAIL_HOST_PASSWORD = 'your api generate password' EMAIL_PORT = 587 EMAIL_USE_TLS = True | You need to add another line to your config containing the verified sender's address: DEFAULT_FROM_EMAIL = '[email protected]' | 9 | 7
61,943,545 | 2020-5-21 | https://stackoverflow.com/questions/61943545/python-get-keys-from-unbound-typeddict | I would like to get the keys from an unbound TypedDict subclass. What is the correct way to do so? Below I have a hacky method, and I'm wondering if there's a more standard way. Current Method I used inspect.getmembers on the TypedDict subclass, and saw the __annotations__ attribute houses a mapping of the keys + type annotations. From there, I use .keys() to get access to all of the keys. from typing_extensions import TypedDict class SomeTypedDict(TypedDict): key1: str key2: int print(SomeTypedDict.__annotations__.keys()) Prints: dict_keys(['key1', 'key2']) This does work, but I am wondering, is there a better/more standard way? Versions python==3.6.5 typing-extensions==3.7.4.2 | The code documentation explicitly states (referring to a sample derived class Point2D): The type info can be accessed via the Point2D.__annotations__ dict, and the Point2D.__required_keys__ and Point2D.__optional_keys__ frozensets. So if the modules code says this, there is no reason to look for another method. Note that your method only printed the names of the dictionary keys. You can get the names and the type simply by accessing the full dictionary: print(SomeTypedDict.__annotations__) Which will get you back all the info: {'key1': <class 'str'>, 'key2': <class 'int'>} | 16 | 21 |
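If you would rather not touch __annotations__ directly, typing.get_type_hints should give the same information through a public API; a small sketch (assuming your typing/typing_extensions version resolves TypedDict classes, which recent ones do):

```python
from typing import get_type_hints
from typing_extensions import TypedDict

class SomeTypedDict(TypedDict):
    key1: str
    key2: int

hints = get_type_hints(SomeTypedDict)
print(list(hints))   # ['key1', 'key2']
print(hints)         # {'key1': <class 'str'>, 'key2': <class 'int'>}
```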
61,951,026 | 2020-5-22 | https://stackoverflow.com/questions/61951026/pygame-drawing-a-border-of-a-rectangle | I am creating a 3d pong game using pygame. I wanted to add a thick black border layer to the rectangle to make it more stylish. Here's what I tried: pygame.draw.rect(screen, (0,0,255), (x,y,150,150), 0) pygame.draw.rect(screen, (0,0,0), (x-1,y-1,155,155), 1) pygame.draw.rect(screen, (0,0,0), (x-2,y-2,155,155), 1) pygame.draw.rect(screen, (0,0,0), (x-3,y-3,155,155), 1) pygame.draw.rect(screen, (0,0,0), (x-4,y-4,155,155), 1) It worked but as the game I am trying to create is a 3d game this method was time consuming. Please tell me if there is any inbuilt method in pygame to draw borders around a rectangle. Sorry for my English. | You could also put it in a function, and make it more concise with for loops. First, you'll note that the four rectangles you drew were in a nice, easy pattern, so you could compact the drawing of the four rectangles like this: pygame.draw.rect(surface, (0,0,255), (x,y,150,150), 0) for i in range(4): pygame.draw.rect(surface, (0,0,0), (x-i,y-i,155,155), 1) Then, because pygame draw functions do not need to be run in the global scope, you can put all this into a function: def drawStyleRect(surface): pygame.draw.rect(surface, (0,0,255), (x,y,150,150), 0) for i in range(4): pygame.draw.rect(surface, (0,0,0), (x-i,y-i,155,155), 1) Then, in your mainloop, all you have to do is run: while not done: ... drawStyleRect(screen) # Or whatever you named the returned surface of 'pygame.display.set_mode()' ... You could even put the drawing function in a separate module, if you really wanted to. | 7 | 5 |
61,922,334 | 2020-5-20 | https://stackoverflow.com/questions/61922334/how-to-solve-attributeerror-module-google-protobuf-descriptor-has-no-attribu | I encountered it while executing from object_detection.utils import label_map_util in jupyter notebook. It is actually the tensorflow object detection tutorial notebook(it comes with the tensorflow object detection api) The complete error log: AttributeError Traceback (most recent call last) <ipython-input-7-7035655b948a> in <module> 1 from object_detection.utils import ops as utils_ops ----> 2 from object_detection.utils import label_map_util 3 from object_detection.utils import visualization_utils as vis_util ~\AppData\Roaming\Python\Python37\site-packages\object_detection\utils\label_map_util.py in <module> 25 import tensorflow as tf 26 from google.protobuf import text_format ---> 27 from object_detection.protos import string_int_label_map_pb2 28 29 ~\AppData\Roaming\Python\Python37\site-packages\object_detection\protos\string_int_label_map_pb2.py in <module> 19 syntax='proto2', 20 serialized_options=None, ---> 21 create_key=_descriptor._internal_create_key, 22 serialized_pb=b'\n2object_detection/protos/string_int_label_map.proto\x12\x17object_detection.protos\"\xc0\x01\n\x15StringIntLabelMapItem\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\n\n\x02id\x18\x02 \x01(\x05\x12\x14\n\x0c\x64isplay_name\x18\x03 \x01(\t\x12M\n\tkeypoints\x18\x04 \x03(\x0b\x32:.object_detection.protos.StringIntLabelMapItem.KeypointMap\x1a(\n\x0bKeypointMap\x12\n\n\x02id\x18\x01 \x01(\x05\x12\r\n\x05label\x18\x02 \x01(\t\"Q\n\x11StringIntLabelMap\x12<\n\x04item\x18\x01 \x03(\x0b\x32..object_detection.protos.StringIntLabelMapItem' 23 ) AttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key' | The protoc version I got through pip show protobuf and protoc --version were different. The version in pip was a bit outdated. After I upgraded the pip version with pip install --upgrade protobuf the problem was solved. | 104 | 199 |
61,908,834 | 2020-5-20 | https://stackoverflow.com/questions/61908834/creating-virtual-environment-using-python-3-8-when-python-2-7-is-present | I am trying to create a virtual environment using mkvirtualenv with Python 3 on Windows, but the environment is created with Python 2.7. My pip version is also from Python 2.7, which I avoided by using py -m pip install virtualenvwrapper-win. When I do mkvirtualenv test, the environment is created with Python 2.7. Please help me with a solution. Thanks in advance :) | If you would like to create a virtualenv with Python 3.X while version 2.X is also installed, you just have to pass an extra argument to virtualenv: $ virtualenv venv -p $(which python3) This command will point to your current python3 install folder, and create a virtualenv copied from your current python3 binaries. If you would like to see what this command does, just fire the command: $ which python3 # should print your current python3 binary folder. | 17 | 28
62,012,194 | 2020-5-25 | https://stackoverflow.com/questions/62012194/how-to-make-a-line-plot-from-a-pandas-dataframe-with-a-long-or-wide-format | (This is a self-answered post to help others shorten their answers to plotly questions by not having to explain how plotly best handles data of long and wide format) I'd like to build a plotly figure based on a pandas dataframe in as few lines as possible. I know you can do that using plotly.express, but this fails for what I would call a standard pandas dataframe; an index describing row order, and column names describing the names of a value in a dataframe: Sample dataframe: a b c 0 100.000000 100.000000 100.000000 1 98.493705 99.421400 101.651437 2 96.067026 98.992487 102.917373 3 95.200286 98.313601 102.822664 4 96.691675 97.674699 102.378682 An attempt: fig=px.line(x=df.index, y = df.columns) This raises an error: ValueError: All arguments should have the same length. The length of argument y is 3, whereas the length of previous arguments ['x'] is 100` | Here you've tried to use a pandas dataframe of a wide format as a source for px.line. And plotly.express is designed to be used with dataframes of a long format, often referred to as tidy data (and please take a look at that. No one explains it better that Wickham). Many, particularly those injured by years of battling with Excel, often find it easier to organize data in a wide format. So what's the difference? Wide format: data is presented with each different data variable in a separate column each column has only one data type missing values are often represented by np.nan works best with plotly.graphobjects (go) lines are often added to a figure using fid.add_traces() colors are normally assigned to each trace Example: a b c 0 -1.085631 0.997345 0.282978 1 -2.591925 0.418745 1.934415 2 -5.018605 -0.010167 3.200351 3 -5.885345 -0.689054 3.105642 4 -4.393955 -1.327956 2.661660 5 -4.828307 0.877975 4.848446 6 -3.824253 1.264161 5.585815 7 -2.333521 0.328327 6.761644 8 -3.587401 -0.309424 7.668749 9 -5.016082 -0.449493 6.806994 Long format: data is presented with one column containing all the values and another column listing the context of the value missing values are simply not included in the dataset. works best with plotly.express (px) colors are set by a default color cycle and are assigned to each unique variable Example: id variable value 0 0 a -1.085631 1 1 a -2.591925 2 2 a -5.018605 3 3 a -5.885345 4 4 a -4.393955 ... ... ... ... 295 95 c -4.259035 296 96 c -5.333802 297 97 c -6.211415 298 98 c -4.335615 299 99 c -3.515854 How to go from wide to long? df = pd.melt(df, id_vars='id', value_vars=df.columns[:-1]) The two snippets below will produce the very same plot: How to use px to plot long data? fig = px.line(df, x='id', y='value', color='variable') How to use go to plot wide data? colors = px.colors.qualitative.Plotly fig = go.Figure() fig.add_traces(go.Scatter(x=df['id'], y = df['a'], mode = 'lines', line=dict(color=colors[0]))) fig.add_traces(go.Scatter(x=df['id'], y = df['b'], mode = 'lines', line=dict(color=colors[1]))) fig.add_traces(go.Scatter(x=df['id'], y = df['c'], mode = 'lines', line=dict(color=colors[2]))) fig.show() By the looks of it, go is more complicated and offers perhaps more flexibility? Well, yes. And no. You can easily build a figure using px and add any go object you'd like! 
Complete go snippet: import numpy as np import pandas as pd import plotly.express as px import plotly.graph_objects as go # dataframe of a wide format np.random.seed(123) X = np.random.randn(100,3) df=pd.DataFrame(X, columns=['a','b','c']) df=df.cumsum() df['id']=df.index # plotly.graph_objects colors = px.colors.qualitative.Plotly fig = go.Figure() fig.add_traces(go.Scatter(x=df['id'], y = df['a'], mode = 'lines', line=dict(color=colors[0]))) fig.add_traces(go.Scatter(x=df['id'], y = df['b'], mode = 'lines', line=dict(color=colors[1]))) fig.add_traces(go.Scatter(x=df['id'], y = df['c'], mode = 'lines', line=dict(color=colors[2]))) fig.show() Complete px snippet: import numpy as np import pandas as pd import plotly.express as px from plotly.offline import iplot # dataframe of a wide format np.random.seed(123) X = np.random.randn(100,3) df=pd.DataFrame(X, columns=['a','b','c']) df=df.cumsum() df['id']=df.index # dataframe of a long format df = pd.melt(df, id_vars='id', value_vars=df.columns[:-1]) # plotly express fig = px.line(df, x='id', y='value', color='variable') fig.show() | 12 | 33 |
61,986,052 | 2020-5-24 | https://stackoverflow.com/questions/61986052/visual-studio-code-terminal-doesnt-activate-conda-environment | I read this Stack Overflow post on a similar issue, but the suggestions there don't seem to be working. I installed Visual Studio Code on my Windows machine and added the Python extension. Then I changed the Python path for my project to C:\Users\username\.conda\envs\tom\python.exe. The .vscode/settings.json has this in it: { "python.pythonPath": "C:\\Users\\username\\.conda\\envs\\tom\\python.exe" } The status bar in Visual Studio Code also shows: But when I do conda env list even after doing conda activate tom in the terminal I get the output: # conda environments: # base * C:\ProgramData\Anaconda3 tom C:\Users\username\.conda\envs\tom Instead of: # conda environments: # base C:\ProgramData\Anaconda3 tom * C:\Users\username\.conda\envs\tom Also the packages not installed in base don't get imported when I try python app.py. What should I do? where python runs, but it doesn't give any output. Also, import os import sys os.path.dirname(sys.executable) gives 'C:\\Python38' | First, open the Anaconda prompt (How to access Anaconda command prompt in Windows 10 (64-bit)), and type: conda activate tom To activate your virtual environment. Then to open Visual Studio Code in this active environment, type code And it should work. | 57 | 52 |
61,913,882 | 2020-5-20 | https://stackoverflow.com/questions/61913882/importerror-cannot-import-name-tablelist-from-camelot-core | i tried to extract the tables from a pdf using camelot but it is showing this error message! import camelot tables = camelot.read_pdf("C:/Users/shres/Desktop/PY/Arun District Council_ASR-2019.pdf", pages='all') tables tables.export("test.csv", f='csv') tables[0] tables[0].parsing_report { 'accuracy' : 99.02, 'whitespace':12.24, 'order': 1, 'page' : 1 } tables[0].to_csv('test.csv') tables[0].df ******error: the code shows this error****** ImportError: cannot import name 'TableList' from 'camelot.core' (C:\Users\shres\AppData\Local\Programs\Python\Python38\lib\site-packages\camelot\core\__init__.py) | you might want to reinstall it. Camelot and camelot-py are two different packages but they have the same import name. pip uninstall camelot pip uninstall camelot-py pip install camelot-py[cv] | 15 | 29 |
62,008,457 | 2020-5-25 | https://stackoverflow.com/questions/62008457/overlap-between-mask-and-fired-beams-in-pygame-ai-car-model-vision | I try to implement beam collision detection with a predefined track mask in Pygame. My final goal is to give an AI car model vision to see a track it's riding on: This is my current code where I fire beams to mask and try to find an overlap: import math import sys import pygame as pg RED = (255, 0, 0) GREEN = (0, 255, 0) BLUE = (0, 0, 255) pg.init() beam_surface = pg.Surface((500, 500), pg.SRCALPHA) def draw_beam(surface, angle, pos): # compute beam final point x_dest = 250 + 500 * math.cos(math.radians(angle)) y_dest = 250 + 500 * math.sin(math.radians(angle)) beam_surface.fill((0, 0, 0, 0)) # draw a single beam to the beam surface based on computed final point pg.draw.line(beam_surface, BLUE, (250, 250), (x_dest, y_dest)) beam_mask = pg.mask.from_surface(beam_surface) # find overlap between "global mask" and current beam mask hit = mask.overlap(beam_mask, (pos[0] - 250, pos[1] - 250)) if hit is not None: pg.draw.line(surface, BLUE, mouse_pos, hit) pg.draw.circle(surface, GREEN, hit, 3) surface = pg.display.set_mode((500, 500)) mask_surface = pg.image.load("../assets/mask.png") mask = pg.mask.from_surface(mask_surface) clock = pg.time.Clock() while True: for e in pg.event.get(): if e.type == pg.QUIT: pg.quit() sys.exit() mouse_pos = pg.mouse.get_pos() surface.fill((0, 0, 0)) surface.blit(mask_surface, mask_surface.get_rect()) for angle in range(0, 120, 30): draw_beam(surface, angle, mouse_pos) pg.display.update() clock.tick(30) Let's describe what happens in the code snippet. One by one, I draw beams to beam_surface, make masks from them, and find overlap with background mask defined by one rectangle and a circle (black color in gifs). If there is a "hit point" (overlap point between both masks), I draw it with a line connecting hit point and mouse position. It works fine for angles <0,90>: But it's not working for angles in range <90,360>: Pygame's overlap() documentation tells this: Starting at the top left corner it checks bits 0 to W - 1 of the first row ((0, 0) to (W - 1, 0)) then continues to the next row ((0, 1) to (W - 1, 1)). Once this entire column block is checked, it continues to the next one (W to 2 * W - 1). This means that this approach will work only if the beam hits the mask approximately from the top left corner. Do you have any advice on how to make it work for all of the situations? Is this generally a good approach to solve this problem? | Your approach works fine, if the x and y component of the ray axis points in the positive direction, but it fails if it points in the negative direction. As you pointed out, that is caused by the way pygame.mask.Mask.overlap works: Starting at the top left corner it checks bits 0 to W - 1 of the first row ((0, 0) to (W - 1, 0)) then continues to the next row ((0, 1) to (W - 1, 1)). Once this entire column block is checked, it continues to the next one (W to 2 * W - 1). To make the algorithm work, you have to ensure that the rays point always in the positive direction. Hence if the ray points in the negative x direction, then flip the mask and the ray vertical and if the ray points in the negative y direction than flip the ray horizontal. Use pygame.transform.flip() top create 4 masks. 
Not flipped, flipped horizontal, flipped vertical and flipped vertical and horizontal: mask = pg.mask.from_surface(mask_surface) mask_fx = pg.mask.from_surface(pg.transform.flip(mask_surface, True, False)) mask_fy = pg.mask.from_surface(pg.transform.flip(mask_surface, False, True)) mask_fx_fy = pg.mask.from_surface(pg.transform.flip(mask_surface, True, True)) flipped_masks = [[mask, mask_fy], [mask_fx, mask_fx_fy]] Determine if the direction of the ray: c = math.cos(math.radians(angle)) s = math.sin(math.radians(angle)) Get the flipped mask dependent on the direction of the ray: flip_x = c < 0 flip_y = s < 0 filpped_mask = flipped_masks[flip_x][flip_y] Compute the flipped target point: x_dest = 250 + 500 * abs(c) y_dest = 250 + 500 * abs(s) Compute the flipped offset: offset_x = 250 - pos[0] if flip_x else pos[0] - 250 offset_y = 250 - pos[1] if flip_y else pos[1] - 250 Get the nearest intersection point of the flipped ray and mask and unflip the intersection point: hit = filpped_mask.overlap(beam_mask, (offset_x, offset_y)) if hit is not None and (hit[0] != pos[0] or hit[1] != pos[1]): hx = 500 - hit[0] if flip_x else hit[0] hy = 500 - hit[1] if flip_y else hit[1] hit_pos = (hx, hy) pg.draw.line(surface, BLUE, mouse_pos, hit_pos) pg.draw.circle(surface, GREEN, hit_pos, 3) See the example: repl.it/@Rabbid76/PyGame-PyGame-SurfaceLineMaskIntersect-2 import math import sys import pygame as pg RED = (255, 0, 0) GREEN = (0, 255, 0) BLUE = (0, 0, 255) pg.init() beam_surface = pg.Surface((500, 500), pg.SRCALPHA) def draw_beam(surface, angle, pos): c = math.cos(math.radians(angle)) s = math.sin(math.radians(angle)) flip_x = c < 0 flip_y = s < 0 filpped_mask = flipped_masks[flip_x][flip_y] # compute beam final point x_dest = 250 + 500 * abs(c) y_dest = 250 + 500 * abs(s) beam_surface.fill((0, 0, 0, 0)) # draw a single beam to the beam surface based on computed final point pg.draw.line(beam_surface, BLUE, (250, 250), (x_dest, y_dest)) beam_mask = pg.mask.from_surface(beam_surface) # find overlap between "global mask" and current beam mask offset_x = 250 - pos[0] if flip_x else pos[0] - 250 offset_y = 250 - pos[1] if flip_y else pos[1] - 250 hit = filpped_mask.overlap(beam_mask, (offset_x, offset_y)) if hit is not None and (hit[0] != pos[0] or hit[1] != pos[1]): hx = 499 - hit[0] if flip_x else hit[0] hy = 499 - hit[1] if flip_y else hit[1] hit_pos = (hx, hy) pg.draw.line(surface, BLUE, pos, hit_pos) pg.draw.circle(surface, GREEN, hit_pos, 3) #pg.draw.circle(surface, (255, 255, 0), mouse_pos, 3) surface = pg.display.set_mode((500, 500)) #mask_surface = pg.image.load("../assets/mask.png") mask_surface = pg.Surface((500, 500), pg.SRCALPHA) mask_surface.fill((255, 0, 0)) pg.draw.circle(mask_surface, (0, 0, 0, 0), (250, 250), 100) pg.draw.rect(mask_surface, (0, 0, 0, 0), (170, 170, 160, 160)) mask = pg.mask.from_surface(mask_surface) mask_fx = pg.mask.from_surface(pg.transform.flip(mask_surface, True, False)) mask_fy = pg.mask.from_surface(pg.transform.flip(mask_surface, False, True)) mask_fx_fy = pg.mask.from_surface(pg.transform.flip(mask_surface, True, True)) flipped_masks = [[mask, mask_fy], [mask_fx, mask_fx_fy]] clock = pg.time.Clock() while True: for e in pg.event.get(): if e.type == pg.QUIT: pg.quit() sys.exit() mouse_pos = pg.mouse.get_pos() surface.fill((0, 0, 0)) surface.blit(mask_surface, mask_surface.get_rect()) for angle in range(0, 359, 30): draw_beam(surface, angle, mouse_pos) pg.display.update() clock.tick(30) Not,the algorithm can be further improved. 
The ray is always drawn on the bottom right quadrant of the beam_surface. Hence the other 3 quadrants are no longer needed and the size of beam_surface can be reduced to 250x250. The start of the ray is at (0, 0) rather than (250, 250) and the computation of the offsets hast to be slightly adapted: beam_surface = pg.Surface((250, 250), pg.SRCALPHA) def draw_beam(surface, angle, pos): c = math.cos(math.radians(angle)) s = math.sin(math.radians(angle)) flip_x = c < 0 flip_y = s < 0 filpped_mask = flipped_masks[flip_x][flip_y] # compute beam final point x_dest = 500 * abs(c) y_dest = 500 * abs(s) beam_surface.fill((0, 0, 0, 0)) # draw a single beam to the beam surface based on computed final point pg.draw.line(beam_surface, BLUE, (0, 0), (x_dest, y_dest)) beam_mask = pg.mask.from_surface(beam_surface) # find overlap between "global mask" and current beam mask offset_x = 499-pos[0] if flip_x else pos[0] offset_y = 499-pos[1] if flip_y else pos[1] hit = filpped_mask.overlap(beam_mask, (offset_x, offset_y)) if hit is not None and (hit[0] != pos[0] or hit[1] != pos[1]): hx = 499 - hit[0] if flip_x else hit[0] hy = 499 - hit[1] if flip_y else hit[1] hit_pos = (hx, hy) pg.draw.line(surface, BLUE, pos, hit_pos) pg.draw.circle(surface, GREEN, hit_pos, 3) | 7 | 9 |
61,968,794 | 2020-5-23 | https://stackoverflow.com/questions/61968794/what-is-the-best-practice-for-keeping-kafka-consumer-alive-in-python | Something is puzzling me when it comes to keeping consumers alive. Let's say I have a topic to which data is constantly being written, but for an hour each day there are no new messages. If I set a timeout for my consumers, the consumer will be closed while there are no new messages. Then new messages arrive, but there are no consumers alive to consume them. How should I handle such scenarios? My consumers may consume all messages and get closed. What is the best way to keep them alive? Is there any way to invoke them automatically upon the arrival of new messages? What are the best practices for such scenarios? | Why not just:

import time
from confluent_kafka import Consumer

consumer = Consumer({
    'bootstrap.servers': 'localhost:9092',
    'group.id': 'my-consumer-1',
    'auto.offset.reset': 'earliest'
})
consumer.subscribe(['topicName'])

try:
    while True:
        message = consumer.poll(10.0)
        if not message:
            time.sleep(120)  # no new messages: wait 2 minutes, then poll again
            continue
        if message.error():
            print(f"Consumer error: {message.error()}")
            continue
        print(f"Received message: {message.value().decode('utf-8')}")
except Exception as e:
    # Handle any exception here
    print(f"Consumer stopped: {e}")
finally:
    consumer.close()
    print("Goodbye")

I cannot comment on the requirement of "setting a timeout for consumers", but in most cases consumers are supposed to run "forever" and should also be added to consumer groups in a way that they are highly available. | 11 | 3
62,010,434 | 2020-5-25 | https://stackoverflow.com/questions/62010434/how-do-i-get-the-snake-to-grow-and-chain-the-movement-of-the-snakes-body | I want to implement a snake game. The snake meanders through the playground. Every time when the snake eats some food, the length of the snake increase by one element. The elements of the snakes body follow its head like a chain. snake_x, snake_y = WIDTH//2, HEIGHT//2 body = [] move_x, move_y = (1, 0) food_x, food_y = new_food(body) run = True while run: for event in pygame.event.get(): if event.type == pygame.QUIT: run = False if event.type == pygame.KEYDOWN: if event.key == pygame.K_LEFT: move_x, move_y = (-1, 0) elif event.key == pygame.K_RIGHT: move_x, move_y = (1, 0) elif event.key == pygame.K_UP: move_x, move_y = (0, -1) elif event.key == pygame.K_DOWN: move_x, move_y = (0, 1) snake_x = (snake_x + move_x) % WIDTH snake_y = (snake_y + move_y) % HEIGHT if snake_x == food_x and snake_y == food_y: food_x, food_y = new_food(body) body.append((snake_x, snake_x)) # [...] How do I accomplish, that the body parts follow the snake's head on its path, when the snake's head moves ahead? | In general you have to distinguish between 2 different types of snake. In the first case, the snake moves in a grid and every time when the snake moves, it strides ahead one field in the grid. In the other type, the snakes position is not in a raster and not snapped on the fields of the grid, the position is free and the snake slides smoothly through the fields. In former each element of the body is snapped to the fields of the grid, as the head is. The other is more trick, because the position of a body element depends on the size of the element and the dynamic, previous positions of the snakes head. First the snake, which is snapped to a grid. The elements of the snake can be stored in a list of tuples. Each tuple contains the column and row of the snakes element in the grid. The changes to the items in the list directly follow the movement of the snake. If the snake moves, a the new position is add to the head of the list and the tail of the list is removed. 
For instance we have a snake with the following elements: body = [(3, 3), (3, 4), (4, 4), (5, 4), (6, 4)] When the snakes head moves form (3, 3) to (3, 2), then the new head position is add to the head of the list (body.insert(0, (3, 2)): body = [(3, 2), (3, 3), (3, 4), (4, 4), (5, 4), (6, 4)] Finally the tail of the ist is removed (del body[-1]): body = [(3, 2), (3, 3), (3, 4), (4, 4), (5, 4)] Minimal example: repl.it/@Rabbid76/PyGame-SnakeMoveInGrid import pygame import random pygame.init() COLUMNS, ROWS, SIZE = 10, 10, 20 screen = pygame.display.set_mode((COLUMNS*SIZE, ROWS*SIZE)) clock = pygame.time.Clock() background = pygame.Surface((COLUMNS*SIZE, ROWS*SIZE)) background.fill((255, 255, 255)) for i in range(1, COLUMNS): pygame.draw.line(background, (128, 128, 128), (i*SIZE-1, 0), (i*SIZE-1, ROWS*SIZE), 2) for i in range(1, ROWS): pygame.draw.line(background, (128, 128, 128), (0, i*SIZE-1), (COLUMNS*SIZE, i*SIZE-1), 2) def random_pos(body): while True: pos = random.randrange(COLUMNS), random.randrange(ROWS) if pos not in body: break return pos length = 1 body = [(COLUMNS//2, ROWS//2)] dir = (1, 0) food = random_pos(body) run = True while run: clock.tick(5) for event in pygame.event.get(): if event.type == pygame.QUIT: run = False if event.type == pygame.KEYDOWN: if event.key == pygame.K_LEFT: dir = (-1, 0) elif event.key == pygame.K_RIGHT: dir = (1, 0) elif event.key == pygame.K_UP: dir = (0, -1) elif event.key == pygame.K_DOWN: dir = (0, 1) body.insert(0, body[0][:]) body[0] = (body[0][0] + dir[0]) % COLUMNS, (body[0][1] + dir[1]) % ROWS if body[0] == food: food = random_pos(body) length += 1 while len(body) > length: del body[-1] screen.blit(background, (0, 0)) pygame.draw.rect(screen, (255, 0, 255), (food[0]*SIZE, food[1]*SIZE, SIZE, SIZE)) for i, pos in enumerate(body): color = (255, 0, 0) if i==0 else (0, 192, 0) if (i%2)==0 else (255, 128, 0) pygame.draw.rect(screen, color, (pos[0]*SIZE, pos[1]*SIZE, SIZE, SIZE)) pygame.display.flip() Now the snake with completely free positioning. We have to track all the positions which the snake's head has visited in a list. We have to place the elements of the snakes body on the positions in the list like the pearls of a chain. The key is, to compute the Euclidean distance between the last element of the body in the chain and the following positions on the track. When an new point with a distance that is large enough is found, then an new pearl (element) is add to the chain (body). dx, dy = body[-1][0]-pos[0], body[-1][1]-pos[1] if math.sqrt(dx*dx + dy*dy) >= distance: body.append(pos) The following function has 3 arguments. track is the list of the head positions. no_pearls is then number of elements of the shakes body and distance is the Euclidean distance between the elements. The function creates and returns a list of the snakes body positions. 
def create_body(track, no_pearls, distance): body = [(track[0])] track_i = 1 for i in range(1, no_pearls): while track_i < len(track): pos = track[track_i] track_i += 1 dx, dy = body[-1][0]-pos[0], body[-1][1]-pos[1] if math.sqrt(dx*dx + dy*dy) >= distance: body.append(pos) break while len(body) < no_pearls: body.append(track[-1]) del track[track_i:] return body Minimal example: repl.it/@Rabbid76/PyGame-SnakeMoveFree import pygame import random import math pygame.init() COLUMNS, ROWS, SIZE = 10, 10, 20 WIDTH, HEIGHT = COLUMNS*SIZE, ROWS*SIZE screen = pygame.display.set_mode((WIDTH, HEIGHT)) clock = pygame.time.Clock() background = pygame.Surface((WIDTH, HEIGHT)) background.fill((255, 255, 255)) for i in range(1, COLUMNS): pygame.draw.line(background, (128, 128, 128), (i*SIZE-1, 0), (i*SIZE-1, ROWS*SIZE), 2) for i in range(1, ROWS): pygame.draw.line(background, (128, 128, 128), (0, i*SIZE-1), (COLUMNS*SIZE, i*SIZE-1), 2) def hit(pos_a, pos_b, distance): dx, dy = pos_a[0]-pos_b[0], pos_a[1]-pos_b[1] return math.sqrt(dx*dx + dy*dy) < distance def random_pos(body): pos = None while True: pos = random.randint(SIZE//2, WIDTH-SIZE//2), random.randint(SIZE//2, HEIGHT-SIZE//2) if not any([hit(pos, bpos, 20) for bpos in body]): break return pos def create_body(track, no_pearls, distance): body = [(track[0])] track_i = 1 for i in range(1, no_pearls): while track_i < len(track): pos = track[track_i] track_i += 1 dx, dy = body[-1][0]-pos[0], body[-1][1]-pos[1] if math.sqrt(dx*dx + dy*dy) >= distance: body.append(pos) break while len(body) < no_pearls: body.append(track[-1]) del track[track_i:] return body length = 1 track = [(WIDTH//2, HEIGHT//2)] dir = (1, 0) food = random_pos(track) run = True while run: clock.tick(60) for event in pygame.event.get(): if event.type == pygame.QUIT: run = False if event.type == pygame.KEYDOWN: if event.key == pygame.K_LEFT: dir = (-1, 0) elif event.key == pygame.K_RIGHT: dir = (1, 0) elif event.key == pygame.K_UP: dir = (0, -1) elif event.key == pygame.K_DOWN: dir = (0, 1) track.insert(0, track[0][:]) track[0] = (track[0][0] + dir[0]) % WIDTH, (track[0][1] + dir[1]) % HEIGHT body = create_body(track, length, 20) if hit(body[0], food, 20): food = random_pos(body) length += 1 screen.blit(background, (0, 0)) pygame.draw.circle(screen, (255, 0, 255), food, SIZE//2) for i, pos in enumerate(body): color = (255, 0, 0) if i==0 else (0, 192, 0) if (i%2)==0 else (255, 128, 0) pygame.draw.circle(screen, color, pos, SIZE//2) pygame.display.flip() | 7 | 19 |
61,921,935 | 2020-5-20 | https://stackoverflow.com/questions/61921935/aws-lambda-failed-to-find-libmagic | I'm using in my lambda function the magic library to determine the file`s type. I first deployed it to a local container to check that everything works. My DockerFile : FROM lambci/lambda:build-python3.8 WORKDIR /app RUN mkdir -p .aws COPY requirements.txt ./ COPY credentials /app/.aws/ RUN mv /app/.aws/ ~/.aws/ RUN pip install --no-cache-dir -r requirements.txt RUN pip install --no-cache-dir -r requirements.txt -t "/app/dependencies/" WORKDIR /app/dependencies RUN zip -r lambda.zip * requirements.txt : python-magic libmagic In my local container when I run tests on the lambda logic everything went ok and passed (including the part that uses the magic code..). I created a zip that contains the lambda.py code and with the python dependencies (last 3 lines in the docker file). When I upload the zip to aws and test the lambda I'm getting the following error : { "errorMessage": "Unable to import module 'lambda': failed to find libmagic. Check your installation", "errorType": "Runtime.ImportModuleError" } As you can see, on my local container I'm using baseline image lambci/lambda:build-python3.8 that should be the same aws uses when the lambda is launching. I tried also to add python-magic-bin==0.4.14 to the requirements.txt instead of the magic and libmagic but it didnt help either because it seems that this module is for windows. Into the lambda.zip I put also the lambda.py which is the file that includes my lambda function : import boto3 import urllib.parse from io import BytesIO import magic def lambda_handler(event, context): s3 = boto3.client("s3") if event: print("Event : ", event) event_data = event["Records"][0] file_name = urllib.parse.unquote_plus(event_data['s3']['object']['key']) print("getting file: {}".format(file_name)) bucket_name = event_data['s3']['bucket']['name'] file_from_s3 = s3.get_object(Bucket=bucket_name, Key=file_name) file_obj = BytesIO(file_from_s3['Body'].read()) print(magic.from_buffer(file_obj.read(2048))) What am I doing wrong ? | While using filetype as suggested by other answers is much simpler, that library does not detect as many file types as magic does. You can make python-magic work on aws lambda with python3.8 by doing the following: Add libmagic.so.1 to a lib folder at the root of the lambda package. This lib folder will be automatically added to LD_LIBRARY_PATH on aws lambda. This library can be found in /usr/lib64/libmagic.so.1 on an amazon linux ec2 instance for example. Create a magic file or take the one available on an amazon linux ec2 instance in /usr/share/misc/magic and add it to your lambda package. The Magic constructor from python-magic takes a magic_file argument. Make this point to your magic file. You can then create the magic object with magic.Magic(magic_file='path_to_your_magic_file') and then call any function from python-magic you like on that object. These steps are not necessary on the python3.7 runtime as those libraries are already present in aws lambda. | 8 | 11 |
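A sketch of how the pieces above fit together inside the function; the bundled file names, paths and layout are assumptions for illustration, not part of the original answer:

```python
import os
import magic

# Assumed package layout inside lambda.zip:
#   lambda.py
#   magic.mgc            <- magic database bundled with the package (name assumed)
#   lib/libmagic.so.1    <- shared library; lib/ is on LD_LIBRARY_PATH in Lambda

MAGIC_FILE = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'magic.mgc')
mime = magic.Magic(magic_file=MAGIC_FILE, mime=True)

def detect(first_bytes: bytes) -> str:
    # first_bytes would be e.g. file_obj.read(2048) from the handler in the question
    return mime.from_buffer(first_bytes)
```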
61,986,490 | 2020-5-24 | https://stackoverflow.com/questions/61986490/what-does-librosa-load-return | I'm working with the librosa library, and I would like to know what information is returned by the librosa.load function when I read a audio (.wav) file. Is it the instantaneous sound pressure in pa, or the just the instantaneous amplitude of the sound signal with no units? | To confirm the previous answer, librosa.load returns a time series that in librosa glossary is defined as: "time series: Typically an audio signal, denoted by y, and represented as a one-dimensional numpy.ndarray of floating-point values. y[t] corresponds to the amplitude of the waveform at sample t." The amplitude is usually measured as a function of the change in pressure around the microphone or receiver device that originally picked up the audio. (See more here). | 8 | 10 |
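A quick sketch of what the call actually hands back (the file name is a placeholder):

```python
import librosa

# y:  1-D numpy array of floating-point amplitude values (roughly in [-1.0, 1.0]), unitless
# sr: the sampling rate; sr=None keeps the file's native rate instead of resampling to 22050 Hz
y, sr = librosa.load('example.wav', sr=None)
print(y.dtype, y.shape, sr)
```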
61,919,670 | 2020-5-20 | https://stackoverflow.com/questions/61919670/how-nltk-tweettokenizer-different-from-nltk-word-tokenize | I am unable to understand the difference between the two. Though, I come to know that word_tokenize uses Penn-Treebank for tokenization purposes. But nothing on TweetTokenizer is available. For which sort of data should I be using TweetTokenizer over word_tokenize? | Well, both tokenizers almost work the same way, to split a given sentence into words. But you can think of TweetTokenizer as a subset of word_tokenize. TweetTokenizer keeps hashtags intact while word_tokenize doesn't. I hope the below example will clear all your doubts... from nltk.tokenize import TweetTokenizer from nltk.tokenize import word_tokenize tt = TweetTokenizer() tweet = "This is a cooool #dummysmiley: :-) :-P <3 and some arrows < > -> <-- @remy: This is waaaaayyyy too much for you!!!!!!" print(tt.tokenize(tweet)) print(word_tokenize(tweet)) # output # ['This', 'is', 'a', 'cooool', '#dummysmiley', ':', ':-)', ':-P', '<3', 'and', 'some', 'arrows', '<', '>', '->', '<--', '@remy', ':', 'This', 'is', 'waaaaayyyy', 'too', 'much', 'for', 'you', '!', '!', '!'] # ['This', 'is', 'a', 'cooool', '#', 'dummysmiley', ':', ':', '-', ')', ':', '-P', '<', '3', 'and', 'some', 'arrows', '<', '>', '-', '>', '<', '--', '@', 'remy', ':', 'This', 'is', 'waaaaayyyy', 'too', 'much', 'for', 'you', '!', '!', '!', '!', '!', '!'] You can see that word_tokenize has split #dummysmiley as '#' and 'dummysmiley', while TweetTokenizer didn't, as '#dummysmiley'. TweetTokenizer is built mainly for analyzing tweets. You can learn more about tokenizer from this link | 11 | 23 |
62,010,704 | 2020-5-25 | https://stackoverflow.com/questions/62010704/how-can-i-make-my-bullets-look-like-they-are-comming-out-of-my-guns-tip | I am having an issue where my bullets dont look like they are coming out of my gun they look like they are coming out of the players body VIDEO as you can see in the video it shoots somewhere else or its the gun its the same thing for the left side it shoots good going up but it shoots bad going down VIDEO I tried angeling my gun to 120 but what happens is everything good works for the right side not for the left side VIDEO as you can see it just glitches my projectile class class projectile(object): def __init__(self, x, y, dirx, diry, color): self.x = x self.y = y self.dirx = dirx self.diry = diry self.slash = pygame.image.load("round.png") self.slash = pygame.transform.scale(self.slash,(self.slash.get_width()//2,self.slash.get_height()//2)) self.rect = self.slash.get_rect() self.rect.topleft = ( self.x, self.y ) self.speed = 18 self.color = color self.hitbox = (self.x + -18, self.y, 46,60) how my projectiles append if event.type == pygame.MOUSEBUTTONDOWN: # this is for the bullets if len(bullets) < 3: if box1.health > 25: mousex, mousey = pygame.mouse.get_pos() playerman.isJump = True start_x, start_y = playerman.x - 30, playerman.y - 65 mouse_x, mouse_y = event.pos dir_x, dir_y = mouse_x - start_x, mouse_y - start_y distance = math.sqrt(dir_x**2 + dir_y**2) if distance > 0: new_bullet = projectile(start_x, start_y, dir_x/distance, dir_y/distance, (0,0,0)) bullets.append(new_bullet) # this is displaying the bullets for the player for bullet in bullets[:]: bullet.move() if bullet.x < 0 or bullet.x > 900 or bullet.y < 0 or bullet.y > 900: bullets.pop(bullets.index(bullet)) def draw(self,drawX,drawY): self.rect.topleft = (drawX,drawY) # the guns hitbox # rotatiing the gun dx = self.look_at_pos[0] - self.rect.centerx dy = self.look_at_pos[1] - self.rect.centery angle = (190/math.pi) * math.atan2(-dy, dx) gun_size = self.image.get_size() pivot = (8, gun_size[1]//2) blitRotate(window, self.image, self.rect.center, pivot, angle) if((angle > 90 or angle < -90) and self.gunDirection != "left"): self.gunDirection = "left" self.image = pygame.transform.flip(self.image, False, True) if((angle < 90 and angle > -90) and self.gunDirection != "right"): self.gunDirection = "right" self.image = pygame.transform.flip(self.image, False, True) my full gun class class handgun(): def __init__(self,x,y,height,width,color): self.x = x self.y = y self.height = height self.width = width self.color = color self.rect = pygame.Rect(x,y,height,width) # LOL THESE IS THE HAND self.shootsright = pygame.image.load("hands.png") self.image = self.shootsright self.rect = self.image.get_rect(center = (self.x, self.y)) self.look_at_pos = (self.x, self.y) self.isLookingAtPlayer = False self.look_at_pos = (x,y) self.hitbox = (self.x + -18, self.y, 46,60) self.gunDirection = "right" def draw(self,drawX,drawY): self.rect.topleft = (drawX,drawY) # the guns hitbox # rotatiing the gun dx = self.look_at_pos[0] - self.rect.centerx dy = self.look_at_pos[1] - self.rect.centery angle = (120/math.pi) * math.atan2(-dy, dx) gun_size = self.image.get_size() pivot = (8, gun_size[1]//2) blitRotate(window, self.image, self.rect.center, pivot, angle) if((angle > 90 or angle < -90) and self.gunDirection != "left"): self.gunDirection = "left" self.image = pygame.transform.flip(self.image, False, True) if((angle < 90 and angle > -90) and self.gunDirection != "right"): self.gunDirection = "right" self.image 
= pygame.transform.flip(self.image, False, True) def lookAt( self, coordinate ): self.look_at_pos = coordinate white = (255,255,255) handgun1 = handgun(300,300,10,10,white) how my images are blitted ```def blitRotate(surf, image, pos, originPos, angle): # calcaulate the axis aligned bounding box of the rotated image w, h = image.get_size() sin_a, cos_a = math.sin(math.radians(angle)), math.cos(math.radians(angle)) min_x, min_y = min([0, sin_a*h, cos_a*w, sin_a*h + cos_a*w]), max([0, sin_a*w, -cos_a*h, sin_a*w - cos_a*h]) # calculate the translation of the pivot pivot = pygame.math.Vector2(originPos[0], -originPos[1]) pivot_rotate = pivot.rotate(angle) pivot_move = pivot_rotate - pivot # calculate the upper left origin of the rotated image origin = (pos[0] - originPos[0] + min_x - pivot_move[0], pos[1] - originPos[1] - min_y + pivot_move[1]) # get a rotated image rotated_image = pygame.transform.rotate(image, angle) # rotate and blit the image surf.blit(rotated_image, origin) I think what I am trying to say is how could I make my gun rotate at exactly at my mouse poisition without any problems my full code script | It looks to me as if your bullets are originating at the players coordinates and not at the edge of the gun. You probably need to apply the same offset you used for the gun, to the projectile origin. Or just extract the top right and bottom right coordinates of the gun after rotation and set the projectiles origin to equal the average position. As for why the gun does not seem to be pointing in the direction of the pointer, the coordinate origin is at the top left of the object and not the center. It would probably be easier to offset the position of the cursor object by half width and height. Otherwise you would need to do some additional trigonometry to adjust the offset based on the distance to the cursor. Example This is a pretty good example of finding the 2 points, but note that the other 2 points are mirrored. (this could be fixed but i didn't bother) Adapted from - How can you rotate an image around an off center pivot in Pygame And From - Rotate point about another point in degrees python Online IDE - https://repl.it/@ikoursh/tracking import pygame import math import time def rotate(origin, point, angle): """ Rotate a point counterclockwise by a given angle around a given origin. 
""" angle = math.radians(angle) ox, oy = origin px, py = point qx = ox + math.cos(angle) * (px - ox) - math.sin(angle) * (py - oy) qy = oy + math.sin(angle) * (px - ox) + math.cos(angle) * (py - oy) return [qx, qy] def blitRotate(surf, image, pos, originPos, angle): # calculate the axis aligned bounding box of the rotated image w, h = image.get_size() sin_a, cos_a = math.sin(math.radians(angle)), math.cos(math.radians(angle)) min_x, min_y = min([0, sin_a * h, cos_a * w, sin_a * h + cos_a * w]), max( [0, sin_a * w, -cos_a * h, sin_a * w - cos_a * h]) # calculate the translation of the pivot pivot = pygame.math.Vector2(originPos[0], -originPos[1]) pivot_rotate = pivot.rotate(angle) pivot_move = pivot_rotate - pivot # calculate the upper left origin of the rotated image origin = ( round(pos[0] - originPos[0] + min_x - pivot_move[0]), round(pos[1] - originPos[1] - min_y + pivot_move[1])) box_rel = [[0, 0], [w, 0], [w, -h], [0, -h]] box_n_rotate = [rotate(originPos, p, -angle) for p in box_rel] # crete a box with negative rotation for i in range(len(box_n_rotate)): box_n_rotate[i][0] += pos[0] - originPos[0] box_n_rotate[i][1] += pos[1] - originPos[1] for c in box_n_rotate[:2]: pygame.draw.circle(screen, (0, 255, 0), [round(c[i]) for i in range(len(c))], 5) # get a rotated image rotated_image = pygame.transform.rotate(image, angle) # rotate and blit the image surf.blit(rotated_image, origin) pygame.init() size = (400, 400) screen = pygame.display.set_mode(size) clock = pygame.time.Clock() image = pygame.image.load('boomerang64.png') pivot = (48, 21) angle, frame = 0, 0 done = False while not done: clock.tick(60) for event in pygame.event.get(): if event.type == pygame.QUIT: done = True screen.fill(0) pos = (200 + math.cos(frame * 0.05) * 100, 200 + math.sin(frame * 0.05) * 50) blitRotate(screen, image, pos, pivot, angle) pygame.draw.line(screen, (0, 255, 0), (pos[0] - 20, pos[1]), (pos[0] + 20, pos[1]), 3) pygame.draw.line(screen, (0, 255, 0), (pos[0], pos[1] - 20), (pos[0], pos[1] + 20), 3) pygame.display.flip() frame += 1 angle += 1 # time.sleep(0.2) pygame.quit() | 8 | 10 |
61,980,300 | 2020-5-24 | https://stackoverflow.com/questions/61980300/changing-in-the-quantity-of-variants-reflecting-in-the-wrong-item-in-order-summa | I have a problem with the variations and the quantity related to it in the order summary page. It was working perfectly and all of a sudden (this is an example to simplify): when I add to the cart 2 items: Item X with a size small Item X with a size medium When I change the quantity of item X size medium, this change is reflecting in item X size small which was chosen first. In the order summary, there are a plus and minus in the template to change the quantity. I have identified the problem but I can't figure out why it is occurring Here is the template: {% block content %} <main> <div class="container"> <div class="table-responsive text-nowrap" style="margin-top:90px"> <h2> Order Summary</h2> <table class="table"> <thead> <tr> <th scope="col">#</th> <th scope="col">Item Title</th> <th scope="col">Price</th> <th scope="col">Quantity</th> <th scope="col">Size</th> <th scope="col">Total Item Price</th> </tr> </thead> <tbody> {% for order_item in object.items.all %} <tr> <th scope="row">{{ forloop.counter }}</th> <td>{{ order_item.item.title }}</td> <td>{{ order_item.item.price }}</td> <td> <a href="{% url 'core:remove-single-item-from-cart' order_item.item.slug %}"><i class="fas fa-minus mr-2"></a></i> {{ order_item.quantity }} <a href="{% url 'core:add-to-cart' order_item.item.slug %}"><i class="fas fa-plus ml-2"></a></i> </td> <td> {% if order_item.variation.all %} {% for variation in order_item.variation.all %} {{ variation.title|capfirst }} {% endfor %} {% endif %} </td> <td> {% if order_item.item.discount_price %} $ {{ order_item.get_total_discount_item_price }} <span class="badge badge-primary" style="margin-left:10px">Saving ${{ order_item.get_amount_saved }}</span> {% else %} $ {{ order_item.get_total_item_price }} {% endif %} <a style="color:red" href="{% url 'core:remove-from-cart' order_item.item.slug %}"> <i class="fas fa-trash float-right"></i> </a> </td> </tr> {% empty %} <tr> <td colspan='5'>Your Cart is Empty</td> </tr> <tr> <td colspan="5"> <a class='btn btn-primary float-right ml-2'href='/'>Continue Shopping</a> </tr> {% endfor %} {% if object.coupon %} <tr> <td colspan="4"><b>Coupon</b></td> <td><b>-${{ object.coupon.amount }}</b></td> </tr> {% endif %} <tr> <td colspan="5"><b>Sub total</b></td> <td><b>${{ object.get_total }}</b></td> </tr> <tr> <td colspan="5">Taxes</td> <td>${{ object.get_taxes|floatformat:2 }}</td> </tr> {% if object.grand_total %} <tr> <td colspan="5"><b>Grand Total</b></td> <td><b>${{ object.grand_total|floatformat:2 }}</b></td> </tr> <tr> <td colspan="6"> <a class='btn btn-primary float-right ml-2'href='/'>Continue Shopping</a> <a class='btn btn-warning float-right'href='/checkout/'>Proceed to Checkout</a></td> </tr> {% endif %} </tbody> </table> </div> </div> </main> <!--Main layout--> {% endblock content %} Here is the views.py class OrderSummaryView(LoginRequiredMixin, View): def get(self, *args, **kwargs): try: order = Order.objects.get(user=self.request.user, ordered=False) context = { 'object': order } return render(self.request, 'order_summary.html', context) except ObjectDoesNotExist: messages.warning(self.request, "You do not have an active order") return redirect("/") @login_required def add_to_cart(request, slug): item = get_object_or_404(Item, slug=slug) order_item_qs = OrderItem.objects.filter( item=item, user=request.user, ordered=False ) item_var = [] # item variation if 
request.method == 'POST': for items in request.POST: key = items val = request.POST[key] try: v = Variation.objects.get( item=item, category__iexact=key, title__iexact=val ) item_var.append(v) except: pass if len(item_var) > 0: for items in item_var: order_item_qs = order_item_qs.filter( variation__exact=items, ) if order_item_qs.exists(): order_item = order_item_qs.first() order_item.quantity += 1 order_item.save() else: order_item = OrderItem.objects.create( item=item, user=request.user, ordered=False ) order_item.variation.add(*item_var) order_item.save() order_qs = Order.objects.filter(user=request.user, ordered=False) if order_qs.exists(): order = order_qs[0] # check if the order item is in the order if not order.items.filter(item__id=order_item.id).exists(): order.items.add(order_item) messages.info(request, "This item quantity was updated.") return redirect("core:order-summary") else: ordered_date = timezone.now() order = Order.objects.create( user=request.user, ordered_date=ordered_date) order.items.add(order_item) messages.info(request, "This item was added to cart.") return redirect("core:order-summary") @login_required def remove_from_cart(request, slug): item = get_object_or_404(Item, slug=slug) order_qs = Order.objects.filter( user=request.user, ordered=False ) if order_qs.exists(): order = order_qs[0] # check if the order item is in the order if order.items.filter(item__slug=item.slug).exists(): order_item = OrderItem.objects.filter( item=item, user=request.user, ordered=False )[0] order.items.remove(order_item) order_item.delete() messages.info(request, "This item was removed from your cart") return redirect("core:order-summary") else: messages.info(request, "This item was not in your cart") return redirect("core:product", slug=slug) else: messages.info(request, "You don't have an active order") return redirect("core:product", slug=slug) @login_required def remove_single_item_from_cart(request, slug): item = get_object_or_404(Item, slug=slug) order_qs = Order.objects.filter( user=request.user, ordered=False ) if order_qs.exists(): order = order_qs[0] # check if the order item is in the order if order.items.filter(item__slug=item.slug).exists(): order_item = OrderItem.objects.filter( item=item, user=request.user, ordered=False )[0] if order_item.quantity > 1: order_item.quantity -= 1 order_item.save() else: order.items.remove(order_item) messages.info(request, "This item quantity was updated") return redirect("core:order-summary") else: messages.info(request, "This item was not in your cart") return redirect("core:product", slug=slug) else: messages.info(request, "You do not have an active order") return redirect("core:product", slug=slug) # End Remove Items (Products removed from Cart) here is the models.py class Item(models.Model): title = models.CharField(max_length=100) ------------------------------------------------------------------------- updated = models.DateTimeField(auto_now_add=False, auto_now=True) active = models.BooleanField(default=True) def __str__(self): return self.title def get_absolute_url(self): return reverse("core:product", kwargs={ 'slug': self.slug }) def get_add_to_cart_url(self): return reverse("core:add-to-cart", kwargs={ 'slug': self.slug }) def get_remove_from_cart_url(self): return reverse("core:remove-from-cart", kwargs={ 'slug': self.slug }) class VariationManager(models.Manager): def all(self): return super(VariationManager, self).filter(active=True) def sizes(self): return self.all().filter(category='size') def colors(self): return 
self.all().filter(category='color') VAR_CATEGORIES = ( ('size', 'size',), ('color', 'color',), ('package', 'package'), ) class Variation(models.Model): item = models.ForeignKey(Item, on_delete=models.CASCADE) category = models.CharField( max_length=120, choices=VAR_CATEGORIES, default='size') title = models.CharField(max_length=120) image = models.ImageField(null=True, blank=True) price = models.DecimalField( decimal_places=2, max_digits=100, null=True, blank=True) objects = VariationManager() active = models.BooleanField(default=True) def __str__(self): return self.title class OrderItem(models.Model): user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) ordered = models.BooleanField(default=False) item = models.ForeignKey(Item, on_delete=models.CASCADE) quantity = models.IntegerField(default=1) variation = models.ManyToManyField(Variation) def __str__(self): return f"{self.quantity} of {self.item.title}" | I checked your code. You are fetching items and then changing the quantity. Item X with the larger size and Item X with the smaller size both represent the same Item, so a change to one is reflected in the same item across its different sizes. Do you have any way to identify an item based on item_id as well as size? Change OrderItem.objects.filter( item=item, user=request.user, ordered=False )[0] to something like OrderItem.objects.filter( item=item, user=request.user, size=item.size, ordered=False )[0]. Adding something like size = S or L will make a difference. Additionally, you are taking the first element (using [0]); if there are two items with the same data you might be performing the operation on the wrong item. Instead of filter you can use get, if items are unique. | 7 | 5 |
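Building on the answer above, one possible way to make the plus/minus links act on the exact size that was clicked is to key those views on the OrderItem primary key rather than the Item slug, since the slug alone cannot distinguish the small and medium rows. This is only a sketch under the assumption that the URL pattern and template are updated to pass `order_item.pk`; it is not the only possible fix.

```python
from django.contrib.auth.decorators import login_required
from django.shortcuts import get_object_or_404, redirect

@login_required
def remove_single_item_from_cart(request, pk):
    # Look the row up by its own primary key, so "Item X / small" and
    # "Item X / medium" can no longer shadow each other.
    order_item = get_object_or_404(
        OrderItem, pk=pk, user=request.user, ordered=False
    )
    if order_item.quantity > 1:
        order_item.quantity -= 1
        order_item.save()
    else:
        order_item.delete()
    return redirect("core:order-summary")

# Assumed (hypothetical) URL pattern and template change:
# path('remove-single-item-from-cart/<int:pk>/', remove_single_item_from_cart, ...)
# <a href="{% url 'core:remove-single-item-from-cart' order_item.pk %}">...</a>
```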
62,026,559 | 2020-5-26 | https://stackoverflow.com/questions/62026559/use-dictionary-data-to-append-data-to-pandas-dataframe | I have a dataframe and 2 separate dictionaries. Both dictionaries have the same keys but have different values. dict_1 has key-value pairs where the values are unique ids that correspond with the dataframe df. I want to be able to use the 2 dictionaries and the unique ids from the dict_1 to append the values of dict_2 into the dataframe df. Example of dataframe df: col_1 col_2 id col_3 100 500 a1 478 785 400 a1 490 ... ... a1 ... ... ... a2 ... ... ... a2 ... ... ... a2 ... ... ... a3 ... ... ... a3 ... ... ... a3 ... ... ... a4 ... ... ... a4 ... ... ... a4 ... Example of dict_1: 1:['a1', 'a3'], 2:['a2', 'a4'], 3:[...], 4:[...], 5:[...], . Example of dict_2: 1:[0, 1], 2:[1, 1], 3:[...], 4:[...], 5:[...], . I'm trying to append the data from dict_2 using id's from dict_1 into the main df. In a sense add the 2 values (or n values) from the lists of dict_2 as 2 columns (or n columns) into the df. Resultant df: col_1 col_2 id col_3 new_col_1 new_col_2 100 500 a1 478 0 1 785 400 a1 490 0 1 ... ... a1 ... 0 1 ... ... a2 ... 1 1 ... ... a2 ... 1 1 ... ... a2 ... 1 1 ... ... a3 ... 0 1 ... ... a3 ... 0 1 ... ... a3 ... 0 1 ... ... a4 ... 1 1 ... ... a4 ... 1 1 ... ... a4 ... 1 1 | IIUC, the keys in your two dictionaries are aligned. One way is to create a dataframe with a column id containing the values in dict_1 and 2 (in this case but can be more) columns from the values in dict_2 aligned on the same key. Then use merge on id to get the result back in df # the two dictionaries. note in dict_2 I added an element for the list in key 2 # to show it works for any number of columns dict_1 = {1:['a1', 'a3'],2:['a2', 'a4'],} dict_2 = {1:[0,1],2:[1,1,2]} #create a dataframe from dict_2, here it might be something easier but can't find it df_2 = pd.concat([pd.Series(vals, name=key) for key, vals in dict_2.items()], axis=1).T print(df_2) #index are the keys, and columns are the future new_col_x 0 1 2 1 0.0 1.0 NaN 2 1.0 1.0 2.0 #concat with the dict_1 once explode the values in the list, # here just a print to see what it's doing print (pd.concat([pd.Series(dict_1, name='id').explode(),df_2], axis=1)) id 0 1 2 1 a1 0.0 1.0 NaN 1 a3 0.0 1.0 NaN 2 a2 1.0 1.0 2.0 2 a4 1.0 1.0 2.0 # use previous concat, with a rename to change column names and merge to df df = df.merge(pd.concat([pd.Series(dict_1, name='id').explode(),df_2], axis=1) .rename(columns=lambda x: f'new_col_{x+1}' if isinstance(x, int) else x), on='id', how='left') and you get print (df) col_1 col_2 id col_3 new_col_1 new_col_2 new_col_3 0 100 500 a1 478 0.0 1.0 NaN 1 785 400 a1 490 0.0 1.0 NaN 2 ... ... a1 ... 0.0 1.0 NaN 3 ... ... a2 ... 1.0 1.0 2.0 4 ... ... a2 ... 1.0 1.0 2.0 5 ... ... a2 ... 1.0 1.0 2.0 6 ... ... a3 ... 0.0 1.0 NaN 7 ... ... a3 ... 0.0 1.0 NaN 8 ... ... a3 ... 0.0 1.0 NaN 9 ... ... a4 ... 1.0 1.0 2.0 10 ... ... a4 ... 1.0 1.0 2.0 11 ... ... a4 ... 1.0 1.0 2.0 | 7 | 6 |
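A shorter alternative sketch to the explode/merge approach shown in the accepted answer above: flatten the two dictionaries into a single id-to-values mapping first, then map each new column from the `id` column. The small `df` below only stands in for the question's dataframe, and the `None` fill for shorter lists mirrors the NaN the merge produces.

```python
import pandas as pd

# df stands in for the question's dataframe; only the 'id' column matters here.
df = pd.DataFrame({'col_1': [100, 785, 1, 2],
                   'col_2': [500, 400, 3, 4],
                   'id':    ['a1', 'a1', 'a2', 'a3'],
                   'col_3': [478, 490, 5, 6]})

dict_1 = {1: ['a1', 'a3'], 2: ['a2', 'a4']}
dict_2 = {1: [0, 1], 2: [1, 1]}

# flatten to one lookup per id: {'a1': [0, 1], 'a3': [0, 1], 'a2': [1, 1], 'a4': [1, 1]}
id_to_vals = {item_id: dict_2[key] for key, ids in dict_1.items() for item_id in ids}

n_cols = max(len(v) for v in id_to_vals.values())
for k in range(n_cols):
    df[f'new_col_{k + 1}'] = df['id'].map(
        lambda i: id_to_vals[i][k] if k < len(id_to_vals[i]) else None)

print(df)
```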
62,019,960 | 2020-5-26 | https://stackoverflow.com/questions/62019960/difference-between-pass-statement-and-3-dots-in-python | What's the difference between the pass statement: def function(): pass and 3 dots: def function(): ... Which way is better and faster to execute(CPython)? | pass has been in the language for a very long time and is just a no-op. It is designed to explicitly do nothing. ... is a token having the singleton value Ellipsis, similar to how None is a singleton value. Putting ... as your method body has the same effect as for example: def foo(): 1 The ... can be interpreted as a sentinel value where it makes sense from an API-design standpoint, e.g. if you overwrite __getitem__ to do something special if Ellipsis are passed, and then giving foo[...] special meaning. It is not specifically meant as a replacement for no-op stubs, though I have seen it being used that way and it doesn't hurt either | 71 | 64 |
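A tiny illustration of the distinction drawn in the answer above: both forms are valid as an empty function body, but `...` is an ordinary expression whose value is the Ellipsis singleton, which can also serve as a sentinel value.

```python
def stub_with_pass():
    pass          # statement, does nothing

def stub_with_ellipsis():
    ...           # expression, evaluated and discarded

print(stub_with_pass(), stub_with_ellipsis())  # None None

# Unlike pass, ... is a value you can pass around or test for:
print(... is Ellipsis)   # True
print(bool(...))         # True, unlike None

def lookup(key=...):
    # Using Ellipsis as a sentinel distinguishes "not given" from None
    return "no key given" if key is ... else f"key={key!r}"

print(lookup(), lookup(None))  # no key given key=None
```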
61,977,830 | 2020-5-23 | https://stackoverflow.com/questions/61977830/unsatisfiableerror-conda | I'm trying to create my own anaconda package and after many attempts I've finally managed to create a conda usable package out of my code. (It depends on a package from haasad channel, so it should be installed like this: conda install -c monomonedula sten -c haasad). The problem appear when I'm trying to install a package called stellargraph in the same environment: Collecting package metadata (current_repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source. Collecting package metadata (repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: \ Found conflicts! Looking for incompatible packages. This can take several minutes. Press CTRL-C to abort. failed UnsatisfiableError: The following specifications were found to be incompatible with each other: Output in format: Requested package -> Available versions The most frustrating part here is missing output for what packages are actually conflicting. Why is it empty & how do I fix it? UPD. On another machine it suddenly showed which dependencies are actually conflicting, but it still hard to make sense of it. So once again, how do I fix this? Collecting package metadata (current_repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source. Collecting package metadata (repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: \ Found conflicts! Looking for incompatible packages. This can take several minutes. Press CTRL-C to abort. 
failed UnsatisfiableError: The following specifications were found to be incompatible with each other: Output in format: Requested package -> Available versions Package numpy conflicts for: stellargraph -> gensim[version='>=3.4.0'] -> numpy[version='>=1.11.3,<2.0a0|>=1.16.5,<2.0a0|>=1.14.6,<2.0a0|>=1.13.3,<2.0a0|>=1.12.1,<2.0a0|>=1.15.1,<2.0a0|>=1.9.3,<2.0a0'] stellargraph -> numpy[version='>=1.14'] Package scipy conflicts for: stellargraph -> scipy[version='>=1.1.0'] stellargraph -> gensim[version='>=3.4.0'] -> scipy[version='>=0.18.1'] Package numpy-base conflicts for: sten -> numpy -> numpy-base[version='1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.14.3|1.14.3|1.14.3|1.14.3|1.14.3|1.14.3|1.14.4|1.14.4|1.14.4|1.14.4|1.14.4|1.14.4|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.15.0|1.15.0|1.15.0|1.15.0|1.15.0|1.15.0|1.15.0|1.15.0|1.15.1|1.15.1|1.15.1|1.15.1|1.15.1|1.15.1|1.15.1|1.15.1|1.15.1|1.15.1|1.15.1|1.15.1|1.15.2|1.15.2|1.15.2|1.15.2|1.15.2|1.15.2|1.15.2|1.15.2|1.15.2|1.15.2|1.15.2|1.15.2|1.15.2|1.15.2|1.15.3|1.15.3|1.15.3|1.15.3|1.15.3|1.15.3|1.15.4|1.15.4|1.15.4|1.15.4|1.15.4|1.15.4|1.15.4|1.15.4|1.15.4|1.16.0|1.16.0|1.16.0|1.16.0|1.16.0|1.16.0|1.16.0|1.16.0|1.16.0|1.16.0|1.16.0|1.16.0|1.16.1|1.16.1|1.16.1|1.16.1|1.16.1|1.16.1|1.16.1|1.16.1|1.16.1|1.16.1|1.16.1|1.16.1|1.16.2|1.16.2|1.16.2|1.16.2|1.16.2|1.16.2|1.16.3|1.16.3|1.16.3|1.16.3|1.16.3|1.16.3|1.16.4|1.16.4|1.16.4|1.16.4|1.16.4|1.16.4|1.16.5|1.16.5|1.16.5|1.16.5|1.16.5|1.16.5|1.16.6|1.16.6|1.16.6|1.16.6|1.16.6|1.16.6|1.16.6|1.16.6|1.17.2.*|1.17.3.*|1.17.4.*|1.18.1.*|1.9.3|1.9.3|1.9.3|1.9.3|1.9.3|1.9.3|1.9.3|1.9.3|1.9.3|1.9.3|1.9.3|1.9.3|1.9.3|1.9.3|>=1.9.3,<2.0a0|1.17.0|1.17.0|1.17.0|1.17.0',build='py36h2f8d375_0|py37h2f8d375_0|py37hde5b4d6_0|py36hdbf6ddf_6|py27hdbf6ddf_7|py27h2b20989_7|py37h2b20989_7|py27h2b20989_7|py37hdbf6ddf_7|py37hdbf6ddf_8|py27hdbf6ddf_8|py35hdbf6ddf_8|py37h7cdd4dd_9|py37h3dfced4_9|py36h3dfced4_9|py35h3dfced4_9|py37h81de0dd_9|py27h74e8950_9|py35h74e8950_9|py37h74e8950_9|py27h81de0dd_9|py27h74e8950_10|py36h74e8950_10|py35h81de0dd_10|py37h2f8d375_10|py27h2f8d375_11|py36hde5b4d6_11|py37hde5b4d6_11|py37h2f8d375_12|py27h2f8d375_12|py37hde5b4d6_12|py36hde5b4d6_12|py38hde5b4d6_12|py38h2f8d375_12|py36h9be14a7_1|py27h2b20989_0|py36h2b20989_0|py27hdbf6ddf_0|py36hdbf6ddf_0|py36h2b20989_0|py27h2b20989_0|py36hdbf6ddf_0|py35hdbf6ddf_0|py36h2b20989_1|py37hdbf6ddf_1|py36h2b20989_2|py36hdbf6ddf_2|py27h2b20989_3|py27h2b20989_4|py27hdbf6ddf_4|py36h2b20989_4|py36hdbf6ddf_4|py35h2b20989_4|py36h2f8d375_4|py27h2f8d375_4|py37h81de0dd_4|py36h81de0dd_4|py37h2f8d375_5|py37hde5b4d6_5|py37h7cdd4dd_0|py35h7cdd4dd_0|py27h3dfced4_0|py37h3dfced4_0|py36h74e8950_0|py36h81de0dd_0|py27h81de0dd_0|py36h2f8d375_0|py37h2f8d375_0|py37h2f8d375_0|py36h2f8d375_0|py36h81de0dd_0|py27h2f8d375_1|py37h81de0dd_1|py37h2f8d375_1|py36h2f8d375_0|py37h2f8d375_0|py27h81de0dd_0|py36h81de0dd_0|py37h2f8d375_0|py36h2f8d375_0|py36h81de0dd_0|p
y27h81de0dd_0|py27hde5b4d6_0|py37hde5b4d6_0|py37hde5b4d6_0|py36h2f8d375_0|py27hde5b4d6_0|py37h2f8d375_1|py27h2f8d375_1|py37hde5b4d6_1|py27hde5b4d6_1|py36hde5b4d6_1|py36h2f8d375_0|py37hde5b4d6_0|py27hde5b4d6_0|py27h2f8d375_1|py37h2f8d375_1|py37hde5b4d6_1|py36hde5b4d6_1|py27hde5b4d6_1|py37h2f8d375_0|py36h2f8d375_0|py37hde5b4d6_0|py27hde5b4d6_0|py37h2f8d375_0|py36h2f8d375_0|py37hde5b4d6_0|py27hde5b4d6_0|py36h2f8d375_0|py37h2f8d375_0|py37hde5b4d6_0|py27hde5b4d6_0|py36h2f8d375_0|py37h2f8d375_0|py27hde5b4d6_0|py37hde5b4d6_0|py37h2f8d375_0|py36h2f8d375_0|py37hde5b4d6_0|py38h2f8d375_0|py27hde5b4d6_0|py27h2f8d375_0|py38hde5b4d6_0|py36hde5b4d6_0|py36hde5b4d6_0|py27h2f8d375_0|py36hde5b4d6_0|py27h2f8d375_0|py36hde5b4d6_0|py27h2f8d375_0|py36hde5b4d6_0|py27h2f8d375_0|py36h2f8d375_1|py36hde5b4d6_0|py27h2f8d375_0|py37h2f8d375_0|py36h2f8d375_1|py36hde5b4d6_0|py37h2f8d375_0|py27h2f8d375_0|py36hde5b4d6_0|py37h81de0dd_0|py27h2f8d375_0|py37h81de0dd_0|py27h2f8d375_0|py36h81de0dd_1|py27h81de0dd_1|py36h2f8d375_1|py35h2f8d375_0|py35h81de0dd_0|py27h81de0dd_0|py37h81de0dd_0|py27h2f8d375_0|py27h2f8d375_0|py35h2f8d375_0|py35h81de0dd_0|py37h81de0dd_0|py37h74e8950_0|py27h74e8950_0|py35h74e8950_0|py35h3dfced4_0|py36h3dfced4_0|py36h7cdd4dd_0|py27h7cdd4dd_0|py36hde5b4d6_5|py27hde5b4d6_5|py27h2f8d375_5|py36h2f8d375_5|py38hde5b4d6_4|py38h2f8d375_4|py35h81de0dd_4|py27h81de0dd_4|py35h2f8d375_4|py37h2f8d375_4|py35hdbf6ddf_4|py37hdbf6ddf_4|py37h2b20989_4|py27hdbf6ddf_3|py36hdbf6ddf_3|py37hdbf6ddf_3|py37h2b20989_3|py36h2b20989_3|py37hdbf6ddf_2|py27hdbf6ddf_2|py37h2b20989_2|py27h2b20989_2|py27h2b20989_1|py36hdbf6ddf_1|py27hdbf6ddf_1|py37h2b20989_1|py27hdbf6ddf_0|py35hdbf6ddf_0|py35h2b20989_0|py35h9be14a7_1|py27h9be14a7_1|py35h0ea5e3f_1|py27h0ea5e3f_1|py36h0ea5e3f_1|py27hde5b4d6_12|py36h2f8d375_12|py27hde5b4d6_11|py36h2f8d375_11|py37h2f8d375_11|py35h2f8d375_10|py27h2f8d375_10|py36h2f8d375_10|py36h81de0dd_10|py37h81de0dd_10|py27h81de0dd_10|py35h74e8950_10|py37h74e8950_10|py35h81de0dd_9|py36h74e8950_9|py36h81de0dd_9|py27h3dfced4_9|py27h7cdd4dd_9|py35h7cdd4dd_9|py36h7cdd4dd_9|py35h2b20989_8|py27h2b20989_8|py37h2b20989_8|py36h2b20989_8|py36hdbf6ddf_8|py36hdbf6ddf_7|py27hdbf6ddf_7|py36h2b20989_7|py37h2b20989_7|py37hdbf6ddf_7|py35h2b20989_7|py35hdbf6ddf_7|py36h2b20989_7|py36hdbf6ddf_7|py27hdbf6ddf_6|py37hdbf6ddf_6|py37h2b20989_6|py36h2b20989_6|py27h2b20989_6|py36hde5b4d6_0'] stellargraph -> numpy[version='>=1.14'] -> 
numpy-base[version='1.14.3|1.14.3|1.14.3|1.14.3|1.14.3|1.14.3|1.14.4|1.14.4|1.14.4|1.14.4|1.14.4|1.14.4|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.15.0|1.15.0|1.15.0|1.15.0|1.15.0|1.15.0|1.15.0|1.15.0|1.15.1|1.15.1|1.15.1|1.15.1|1.15.1|1.15.1|1.15.1|1.15.1|1.15.1|1.15.1|1.15.1|1.15.1|1.15.2|1.15.2|1.15.2|1.15.2|1.15.2|1.15.2|1.15.2|1.15.2|1.15.2|1.15.2|1.15.2|1.15.2|1.15.2|1.15.2|1.15.3|1.15.3|1.15.3|1.15.3|1.15.3|1.15.3|1.15.4|1.15.4|1.15.4|1.15.4|1.15.4|1.15.4|1.15.4|1.15.4|1.15.4|1.16.0|1.16.0|1.16.0|1.16.0|1.16.0|1.16.0|1.16.0|1.16.0|1.16.0|1.16.0|1.16.0|1.16.0|1.16.1|1.16.1|1.16.1|1.16.1|1.16.1|1.16.1|1.16.1|1.16.1|1.16.1|1.16.1|1.16.1|1.16.1|1.16.2|1.16.2|1.16.2|1.16.2|1.16.2|1.16.2|1.16.3|1.16.3|1.16.3|1.16.3|1.16.3|1.16.3|1.16.4|1.16.4|1.16.4|1.16.4|1.16.4|1.16.4|1.16.5|1.16.5|1.16.5|1.16.5|1.16.5|1.16.5|1.16.6|1.16.6|1.16.6|1.16.6|1.16.6|1.16.6|1.16.6|1.16.6|1.17.2.*|1.17.3.*|1.17.4.*|1.18.1.*|1.17.0|1.17.0|1.17.0|1.17.0',build='py36h2f8d375_0|py37h2f8d375_0|py37hde5b4d6_0|py36h9be14a7_1|py27h2b20989_0|py36h2b20989_0|py27hdbf6ddf_0|py36hdbf6ddf_0|py36h2b20989_0|py27h2b20989_0|py36hdbf6ddf_0|py35hdbf6ddf_0|py36h2b20989_1|py37hdbf6ddf_1|py36h2b20989_2|py36hdbf6ddf_2|py27h2b20989_3|py27h2b20989_4|py27hdbf6ddf_4|py36h2b20989_4|py36hdbf6ddf_4|py35h2b20989_4|py36h2f8d375_4|py27h2f8d375_4|py37h81de0dd_4|py36h81de0dd_4|py37h2f8d375_5|py37hde5b4d6_5|py37h7cdd4dd_0|py35h7cdd4dd_0|py27h3dfced4_0|py37h3dfced4_0|py36h74e8950_0|py36h81de0dd_0|py27h81de0dd_0|py36h2f8d375_0|py37h2f8d375_0|py37h2f8d375_0|py36h2f8d375_0|py36h81de0dd_0|py27h2f8d375_1|py37h81de0dd_1|py37h2f8d375_1|py36h2f8d375_0|py37h2f8d375_0|py27h81de0dd_0|py36h81de0dd_0|py37h2f8d375_0|py36h2f8d375_0|py36h81de0dd_0|py27h81de0dd_0|py27hde5b4d6_0|py37hde5b4d6_0|py37hde5b4d6_0|py36h2f8d375_0|py27hde5b4d6_0|py37h2f8d375_1|py27h2f8d375_1|py37hde5b4d6_1|py27hde5b4d6_1|py36hde5b4d6_1|py36h2f8d375_0|py37hde5b4d6_0|py27hde5b4d6_0|py27h2f8d375_1|py37h2f8d375_1|py37hde5b4d6_1|py36hde5b4d6_1|py27hde5b4d6_1|py37h2f8d375_0|py36h2f8d375_0|py37hde5b4d6_0|py27hde5b4d6_0|py37h2f8d375_0|py36h2f8d375_0|py37hde5b4d6_0|py27hde5b4d6_0|py36h2f8d375_0|py37h2f8d375_0|py37hde5b4d6_0|py27hde5b4d6_0|py36h2f8d375_0|py37h2f8d375_0|py27hde5b4d6_0|py37hde5b4d6_0|py37h2f8d375_0|py36h2f8d375_0|py37hde5b4d6_0|py38h2f8d375_0|py27hde5b4d6_0|py27h2f8d375_0|py38hde5b4d6_0|py36hde5b4d6_0|py36hde5b4d6_0|py27h2f8d375_0|py36hde5b4d6_0|py27h2f8d375_0|py36hde5b4d6_0|py27h2f8d375_0|py36hde5b4d6_0|py27h2f8d375_0|py36h2f8d375_1|py36hde5b4d6_0|py27h2f8d375_0|py37h2f8d375_0|py36h2f8d375_1|py36hde5b4d6_0|py37h2f8d375_0|py27h2f8d375_0|py36hde5b4d6_0|py37h81de0dd_0|py27h2f8d375_0|py37h81de0dd_0|py27h2f8d375_0|py36h81de0dd_1|py27h81de0dd_1|py36h2f8d375_1|py35h2f8d375_0|py35h81de0dd_0|py27h81de0dd_0|py37h81de0dd_0|py27h2f8d375_0|py27h2f8d375_0|py35h2f8d375_0|py35h81de0dd_0|py37h81de0dd_0|py37h74e8950_0|py27h74e8950_0|py35h74e8950_0|py35h3dfced4_0|py36h3dfced4_0|py36h7cdd4dd_0|py27h7cdd4dd_0|py36hde5b4d6_5|py27hde5b4d6_5|py27h2f8d375_5|py36h2f8d375_5|py38hde5b4d6_4|py38h2f8d375_4|py35h81de0dd_4|py27h81de0dd_4|py35h2f8d375_4|py37h2f8d375_4|py35hdbf6ddf_4|py37hdbf6ddf_4|py37h2b20989_4|py27hdbf6ddf_3|py36hdbf6ddf_3|py37hdbf6ddf_3|py37h2b20989_3|py36h2b20989_3|py37hdbf6ddf_2|py27hdbf6ddf_2|py37h2b20989_2|py27h
2b20989_2|py27h2b20989_1|py36hdbf6ddf_1|py27hdbf6ddf_1|py37h2b20989_1|py27hdbf6ddf_0|py35hdbf6ddf_0|py35h2b20989_0|py35h9be14a7_1|py27h9be14a7_1|py35h0ea5e3f_1|py27h0ea5e3f_1|py36h0ea5e3f_1|py36hde5b4d6_0'] There are not so many dependencies in my package so it is a mystery to me why it's not working | Offhand i am not sure what the conflict you are seeing, or how to fix your environment, however I am able to install the stellargraph, sten and mono... packages from a fresh, base cloned environment. It may be more useful to build an environment from scratch, for others to use. Here are the commands I used: conda create --name monomo --clone base conda activate monomo conda install -c monomonedula sten -c haasad conda install -c stellargraph stellargraph (monomo) C:\Users\me>conda list # packages in environment at C:\Users\me\miniconda3\envs\monomo: # # Name Version Build Channel ... gensim 3.8.0 py37hf9181ef_0 ... numpy 1.18.1 py37h93ca92e_0 numpy-base 1.18.1 py37hc3f5095_1 ... python 3.7.6 h60c2a47_2 ... scipy 1.4.1 py37h9439919_0 ... stellargraph 1.1.0 py_0 stellargraph sten 0.1.0 py_0 monomonedula ... Maybe you can install all of these packages on one command with the versions listed. I.E. conda install scipy=1.4.1=py37h9439919_0 numpy-base=1.18.1=py37hc3f5095_1 and so on... | 10 | 2 |
61,987,350 | 2020-5-24 | https://stackoverflow.com/questions/61987350/is-finished-with-status-crash-normal-for-cloud-functions | I tried Google Cloud Functions with Python and there was a problem with running it. It said: Error: could not handle the request I checked the logs, but there was no error, just a log message: Function execution took 16 ms, finished with status: 'crash' When I simplified the function to a printout it worked properly. Then I added raise Exception('test') before the printout to see whether the exception would reach Stackdriver Errors, but it didn't; I only got the finished with status: 'crash' message in the log again. Is this normal behavior? Or is it a bug, and instead of crash I should see the exception as an error in the log? | As alluded to in the comments, the crash appears to be a bug in Google Cloud Functions' Python runtime rather than expected behavior. The issue was reported to the internal Google Cloud Functions engineers and evaluation is still ongoing. You can monitor this link for fixes. | 13 | 5 |
62,019,358 | 2020-5-26 | https://stackoverflow.com/questions/62019358/django-management-command-doesnt-flush-stdout | I'm trying to print to console before and after processing that takes a while in a Django management command, like this: import requests import xmltodict from django.core.management.base import BaseCommand def get_all_routes(): url = 'http://busopen.jeju.go.kr/OpenAPI/service/bis/Bus' r = requests.get(url) data = xmltodict.parse(r.content) return data['response']['body']['items']['item'] class Command(BaseCommand): help = 'Updates the database via Bus Info API' def handle(self, *args, **options): self.stdout.write('Saving routes ... ', ending='') for route in get_all_routes(): route_obj = Route( route_type=route['routeTp'], route_id=route['routeId'], route_number=route['routeNum']) route_obj.save() self.stdout.write('done.') In the above code, Saving routes ... is expected to print before the loop begins, and done. right next to it when the loop completes so that it looks like Saving routes ... done. in the end. However, the former doesn't print until the loop completes, when both strings finally print at the same time, which is not what I expected. I found this question, where the answer suggests flushing the output i.e. self.stdout.flush(), so I added that to my code: def handle(self, *args, **options): self.stdout.write('Saving routes ... ', ending='') self.stdout.flush() for route in get_all_routes(): route_obj = Route( route_type=route['routeTp'], route_id=route['routeId'], route_number=route['routeNum']) route_obj.save() self.stdout.write('done.') Still, the result remains unchanged. What could have I done wrong? | The thing to keep in mind is you're using self.stdout (as suggested in the Django docs), which is BaseCommand's override of Python's standard sys.stdout. There are two main differences between the 2 relevant to your problem: The default "ending" in BaseCommand's version of self.stdout.write() is a new-line, forcing you to use the ending='' parameter, unlike sys.stdout.write() that has an empty ending as the default. This in itself is not causing your problem. The BaseCommand version of flush() does not really do anything (who would have thought?). This is a known bug: https://code.djangoproject.com/ticket/29533 So you really have 2 options: Not use BaseCommand's self.stdout but instead use sys.stdout, in which case the flush does work Force the stdout to be totally unbuffered while running the management command by passing the "-u" parameter to python. So instead of running python manage.py <subcommand>, run python -u manage.py <subcommand> Hope this helps. | 8 | 8 |
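A minimal sketch of the first option from the answer above, writing through `sys.stdout` so the flush actually happens before the long loop starts; the `do_slow_work()` function is only a placeholder for the question's API fetch and database saves.

```python
import sys
import time

from django.core.management.base import BaseCommand


def do_slow_work():
    # placeholder for fetching routes from the API and saving them
    time.sleep(5)


class Command(BaseCommand):
    help = 'Shows the progress message before the slow part starts'

    def handle(self, *args, **options):
        # sys.stdout.flush() really flushes, unlike BaseCommand's self.stdout.flush()
        sys.stdout.write('Saving routes ... ')
        sys.stdout.flush()
        do_slow_work()
        sys.stdout.write('done.\n')
```

The same effect can be had without code changes by running the command as `python -u manage.py <subcommand>`, as the answer's second option notes.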
61,997,937 | 2020-5-25 | https://stackoverflow.com/questions/61997937/how-to-solve-type-is-partially-unknown-warning-from-pyright | I'm using strict type checks via pyright. When I have a method that returns a pytorch DataLoader, then pyright complains about my type definition: Declared return type, "DataLoader[Unknown]", is partially unknown Pyright (reportUnknownVariableType) Taking a look at the type stub from pytorch's DataLoader (reduced to the important parts): class DataLoader(Generic[T_co]): dataset: Dataset[T_co] @overload def __init__(self, dataset: Dataset[T_co], ... As far as I can see, the generic type T_co of the DataLoader should be defined by the __init__ dataset parameter. Pyright also complains about my Dataset type definition: Type of parameter "dataset" is partially unknown Parameter type is "Dataset[Unknown]" Pyright (reportUnknownParameterType) Taking a look at the Dataset type stub: class Dataset(Generic[T_co]): def __getitem__(self, index: int) -> T_co: ... shows to me that the type should be inferred by the return type of __getitem__. My dataset's type signature of __getitem__ looks like this: def __getitem__(self, index: int) -> Tuple[Tensor, Tensor]: Based on this I would expect Dataset and DataLoader to be inferred as Dataset[Tuple[Tensor, Tensor]] and DataLoader[Tuple[Tensor, Tensor]] but that is not the case. My guess is that pyright fails to statically infer the types here. I thought I could define the type signature my self like this: Dataset[Tuple[Tensor, Tensor]] but that actually results in my python script crashing with: TypeError: 'type' object is not subscriptable How can I properly define the type for Dataset and DataLoader? | Since there was no reply on this question I was not sure if it is actually a bug in pyright. I therefore opened this issue on the github repository: https://github.com/microsoft/pyright/issues/698 Eric Traut explained in detail what the issue is and that pyright is working as designed. I try to give the gist of the main points here. Problem explanation Pyright attempts to infer return types if they are not provided but if they are provided as in this case, they need to be fully typed. Pyright does not fill in missing parts of a given type annotation. For example, pyright will try to infer the return type for the following function definition: def get_dataset(): But if the return type is given as Dataset then that is the return type pyright expects. def get_dataset() -> Dataset: In this case Dataset is a generic class that does not handle subscripting like Dataset[int]. In Python 3.7 (what we are using) the Python interpreter will evaluate these type annotations what leads to the mentioned exception. Solution As of Python 3.10 the Python interpreter will no longer evaluate type annotations and the following type annotation will just work: def get_dataset() -> Dataset[int]: As of Python 3.7 it is possible to enable this behavior via the following import: from __future__ import annotations This is documented in PEP 563. You will also need to disable the rule E1136 for pylint to not warn about "unsubscriptable-object". Another workaround is to quote the type definition like this: def get_dataset() -> "Dataset[int]": | 18 | 20 |
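A sketch of the two workarounds summarized above, assuming Python 3.7+ and the torch stubs from the question; the `make_loader` name and the batch size are illustrative only.

```python
# With PEP 563 the interpreter no longer evaluates annotations, so
# subscripting Dataset/DataLoader in them cannot raise
# "'type' object is not subscriptable" at runtime.
from __future__ import annotations

from typing import Tuple

from torch import Tensor
from torch.utils.data import DataLoader, Dataset


def make_loader(dataset: Dataset[Tuple[Tensor, Tensor]]) -> DataLoader[Tuple[Tensor, Tensor]]:
    return DataLoader(dataset, batch_size=32)


# Equivalent without the __future__ import: quote the annotations so they
# stay as strings until the type checker reads them.
def make_loader_quoted(
    dataset: "Dataset[Tuple[Tensor, Tensor]]",
) -> "DataLoader[Tuple[Tensor, Tensor]]":
    return DataLoader(dataset, batch_size=32)
```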
61,982,672 | 2020-5-24 | https://stackoverflow.com/questions/61982672/cuda-gpu-processing-typeerror-compile-kernel-got-an-unexpected-keyword-argum | Today I started working with CUDA and GPU processing. I found this tutorial: https://www.geeksforgeeks.org/running-python-script-on-gpu/ Unfortunately my first attempt to run gpu code failed: from numba import jit, cuda import numpy as np # to measure exec time from timeit import default_timer as timer # normal function to run on cpu def func(a): for i in range(10000000): a[i]+= 1 # function optimized to run on gpu @jit(target ="cuda") def func2(a): for i in range(10000000): a[i]+= 1 if __name__=="__main__": n = 10000000 a = np.ones(n, dtype = np.float64) b = np.ones(n, dtype = np.float32) start = timer() func(a) print("without GPU:", timer()-start) start = timer() func2(a) print("with GPU:", timer()-start) Output: /home/amu/anaconda3/bin/python /home/amu/PycharmProjects/gpu_processing_base/gpu_base_1.py without GPU: 4.89985659904778 Traceback (most recent call last): File "/home/amu/PycharmProjects/gpu_processing_base/gpu_base_1.py", line 30, in <module> func2(a) File "/home/amu/anaconda3/lib/python3.7/site-packages/numba/cuda/dispatcher.py", line 40, in __call__ return self.compiled(*args, **kws) File "/home/amu/anaconda3/lib/python3.7/site-packages/numba/cuda/compiler.py", line 758, in __call__ kernel = self.specialize(*args) File "/home/amu/anaconda3/lib/python3.7/site-packages/numba/cuda/compiler.py", line 769, in specialize kernel = self.compile(argtypes) File "/home/amu/anaconda3/lib/python3.7/site-packages/numba/cuda/compiler.py", line 785, in compile **self.targetoptions) File "/home/amu/anaconda3/lib/python3.7/site-packages/numba/core/compiler_lock.py", line 32, in _acquire_compile_lock return func(*args, **kwargs) TypeError: compile_kernel() got an unexpected keyword argument 'boundscheck' Process finished with exit code 1 I have installed numba and cudatoolkit mentioned in the tutorial in an anaconda environment in pycharm. | Adding an answer to get this off the unanswered queue. The code in that example is broken. It isn't anything wrong with your numba or CUDA installations. There is no way that the code in your question (or the blog you copied it from) can emit the result the blog post claims. There are many ways this could potentially be modified to work. One would be like this: from numba import vectorize, jit, cuda import numpy as np # to measure exec time from timeit import default_timer as timer # normal function to run on cpu def func(a): for i in range(10000000): a[i]+= 1 # function optimized to run on gpu @vectorize(['float64(float64)'], target ="cuda") def func2(x): return x+1 if __name__=="__main__": n = 10000000 a = np.ones(n, dtype = np.float64) start = timer() func(a) print("without GPU:", timer()-start) start = timer() func2(a) print("with GPU:", timer()-start) Here func2 becomes a ufunc which is compiled for the device. It will then be run over the whole input array on the GPU. Doing so does this: $ python bogoexample.py without GPU: 4.314514834433794 with GPU: 0.21419800259172916 So it is faster, but keep in mind that the GPU time includes the time taken for compilation of the GPU ufunc Another alternative would be to actually write a GPU kernel. 
Like this: from numba import vectorize, jit, cuda import numpy as np # to measure exec time from timeit import default_timer as timer # normal function to run on cpu def func(a): for i in range(10000000): a[i]+= 1 # function optimized to run on gpu @vectorize(['float64(float64)'], target ="cuda") def func2(x): return x+1 # kernel to run on gpu @cuda.jit def func3(a, N): tid = cuda.grid(1) if tid < N: a[tid] += 1 if __name__=="__main__": n = 10000000 a = np.ones(n, dtype = np.float64) for i in range(0,5): start = timer() func(a) print(i, " without GPU:", timer()-start) for i in range(0,5): start = timer() func2(a) print(i, " with GPU ufunc:", timer()-start) threadsperblock = 1024 blockspergrid = (a.size + (threadsperblock - 1)) // threadsperblock for i in range(0,5): start = timer() func3[blockspergrid, threadsperblock](a, n) print(i, " with GPU kernel:", timer()-start) which runs like this: $ python bogoexample.py 0 without GPU: 4.885275377891958 1 without GPU: 4.748716968111694 2 without GPU: 4.902181145735085 3 without GPU: 4.889955999329686 4 without GPU: 4.881594380363822 0 with GPU ufunc: 0.16726416163146496 1 with GPU ufunc: 0.03758022002875805 2 with GPU ufunc: 0.03580896370112896 3 with GPU ufunc: 0.03530424740165472 4 with GPU ufunc: 0.03579768259078264 0 with GPU kernel: 0.1421878095716238 1 with GPU kernel: 0.04386183246970177 2 with GPU kernel: 0.029975440353155136 3 with GPU kernel: 0.029602501541376114 4 with GPU kernel: 0.029780613258481026 Here you can see that the kernel runs slightly faster than the ufunc, and that caching (and this is caching of the JIT compiled functions, not memoization of the calls) significantly speeds up the call on the GPU. | 12 | 21 |
62,029,371 | 2020-5-26 | https://stackoverflow.com/questions/62029371/python-poetry-error-setting-settings-virtualenvs-in-project-does-not-exist | I am configuring poetry to create virtual environments in the project directory. I entered: poetry config settings.virtualenvs.in-project true and received the error [ValueError] Setting settings.virtualenvs.in-project does not exist It is also accompanied by the text home/alex/.poetry/lib/poetry/_vendor/py2.7/subprocess32.py:149: RuntimeWarning: The _posixsubprocess module is not being used. Child process reliability may suffer if your program uses threads. "program uses threads.", RuntimeWarning) How can I solve the error? It seems the error has to do with the Python version. I am using Ubuntu 16.04 and Poetry version 1.0.5 | The config has changed with the release of poetry 1.0. The settings prefix is no longer needed, so just type poetry config virtualenvs.in-project true. Concerning the subprocess warning: this seems to be just a warning and has no influence on the correct working of poetry. Also have a look at my comment in poetry's issue tracker. @ptd: poetry can work with both python2 and python3. | 27 | 50 |
62,012,775 | 2020-5-26 | https://stackoverflow.com/questions/62012775/how-to-run-different-pytest-arguments-or-marks-from-vs-code-test-runner-interfac | I'm having trouble getting the VS Code PyTest code runner to work the way I'd like. It seems pytest options may be an all-or-nothing situation. Is there any way to run different sets of PyTest options easily in the VS Code interface? For example: By default, run all tests not marked with @pytest.mark.slow. This can be done with the argument -m "not slow" But, if I put that in a pytest.ini file, then it will never run any tests marked slow, even if I pick that particular test in the interface and try to run it. The resulting output is collected 1 item... 1 item deselected. Run sometimes with coverage enabled, and sometimes without. The only way I can see to do this is to run PyTest from the command line, which then loses the benefit of auto-discovery, running/debugging individual tests from the in-line interface, etc. What am I missing? Note: Currently using VS Code 1.45.1, Python 3.7.6, and PyTest 5.3.5 | You're not missing anything. There currently isn't a way to provide per-execution arguments to get the integration you want with the Test Explorer. | 8 | 7 |
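While the answer above confirms there is no per-run argument switch in the Test Explorer itself, a commonly used workaround (the pattern from pytest's own documentation for skipping tests behind a flag) is a `conftest.py` that skips slow tests unless `--runslow` is passed; you can then add or remove `--runslow` in `python.testing.pytestArgs` when you want the slow set, while discovery still sees every test. The flag name is just a convention, not something VS Code requires.

```python
# conftest.py
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--runslow", action="store_true", default=False,
        help="also run tests marked with @pytest.mark.slow",
    )


def pytest_configure(config):
    config.addinivalue_line("markers", "slow: marks tests as slow")


def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        return  # --runslow given: do not skip anything
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
```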
62,032,115 | 2020-5-26 | https://stackoverflow.com/questions/62032115/removing-python-3-8-entry-in-mac-os-path | PROBLEM DESCRIPTION I'm setting up a new MacBook and decided to jump too fast into downloading Python 3.8. I downloaded it from the website https://www.python.org/ before realizing it's better practice to do so with homebrew. GOAL - Remove Python 3.8 from my PATH to later install with Homebrew I cleared Python 3.8 from my filesystem thanks to this page https://nektony.com/how-to/uninstall-python-on-mac, but the path to version 3.8 is still in my PATH variable. Typing echo $PATH in my terminal (zsh) returns /Library/Frameworks/Python.framework/Versions/3.8/bin along with other paths. Does anyone know how I can remove this path? It no longer exists in my filesystem so it's pointing to nothing. WHAT I HAVE TRIED I have checked all the following files using nano and none of them contain the export command that would place it in the path in the first place. Files checked: /etc/profile /etc/bashrc ~/.bash_profile ~/.bash_login ~/.profile ~/.bashrc MY ENV I am running a MacBook Pro with Catalina (10.15.4) and using zsh as my terminal. Any help is appreciated, thanks a lot!! | Found the solution! By running grep {subset of the path you're trying to remove} . (don't forget the period at the end), I found all the places where that path appeared on my computer. That showed me that the ./.zprofile file was exporting the Python 3.8 path. I removed it from that file, saved it, and restarted my Terminal. Now the path is gone and I am happy | 7 | 12 |
62,000,970 | 2020-5-25 | https://stackoverflow.com/questions/62000970/celery-beat-keyerror-scheduler | I am trying to run a periodic celery task using celery beat and docker for my Flask application. However when I run the container I get the below error: Removing corrupted schedule file 'celerybeat-schedule': error(22, 'Invalid argument') Traceback (most recent call last): File "/usr/local/lib/python3.7/site-packages/kombu/utils/objects.py", line 42, in __get__ return obj.__dict__[self.__name__] KeyError: 'scheduler' I define my beat scheduler inside my settings.py like so: CELERYBEAT_SCHEDULE = { 'fetch-expensify-reports': { 'task': 'canopact.blueprints.carbon.tasks.fetch_reports', 'schedule': 10.0 } } This config gets passed into my create_celery_app function in my app.py file: def create_celery_app(app=None): """ Create a new Celery object and tie together the Celery config to the app's config. Wrap all tasks in the context of the application. :param app: Flask app :return: Celery app """ app = app or create_app() celery = Celery(app.import_name, broker=app.config['CELERY_BROKER_URL'], include=CELERY_TASK_LIST) celery.conf.update(app.config) TaskBase = celery.Task class ContextTask(TaskBase): abstract = True def __call__(self, *args, **kwargs): with app.app_context(): return TaskBase.__call__(self, *args, **kwargs) celery.Task = ContextTask return celery I have tried to split out the celery worker and celery beat schedule in my docker-compose.yml file like so: celery: build: . command: celery worker -l info -A canopact.blueprints.contact.tasks env_file: - '.env' volumes: - '.:/canopact' celery_beat: build: . command: celery beat -l info -A canopact.blueprints.contact.tasks env_file: - '.env' volumes: - '.:/canopact' However I get the same issue. I have also tried to delete my celerybeat-schedule file which seems to be corrupted as per recommendations from other posts. However upon running docker-compose up the file gets created again and the same error is thrown. I am using celery 4.3.0. Below is the full trace back when trying to start the container. celery_beat_1 | celery beat v4.3.0 (rhubarb) is starting. celery_beat_1 | __ - ... __ - _ celery_beat_1 | LocalTime -> 2020-05-24 19:44:38 celery_beat_1 | Configuration -> celery_beat_1 | . broker -> redis://:**@redis:6379/0 celery_beat_1 | . loader -> celery.loaders.app.AppLoader celery_beat_1 | . scheduler -> celery.beat.PersistentScheduler celery_beat_1 | . db -> celerybeat-schedule celery_beat_1 | . logfile -> [stderr]@%INFO celery_beat_1 | . maxinterval -> 5.00 minutes (300s) celery_beat_1 | [2020-05-24 19:44:38,622: INFO/MainProcess] beat: Starting... 
celery_beat_1 | [2020-05-24 19:44:38,696: ERROR/MainProcess] Removing corrupted schedule file 'celerybeat-schedule': error(22, 'Invalid argument') celery_beat_1 | Traceback (most recent call last): celery_beat_1 | File "/usr/local/lib/python3.7/site-packages/kombu/utils/objects.py", line 42, in __get__ celery_beat_1 | return obj.__dict__[self.__name__] celery_beat_1 | KeyError: 'scheduler' celery_beat_1 | celery_beat_1 | During handling of the above exception, another exception occurred: celery_beat_1 | celery_beat_1 | Traceback (most recent call last): celery_beat_1 | File "/usr/local/lib/python3.7/site-packages/celery/beat.py", line 485, in setup_schedule celery_beat_1 | self._store = self._open_schedule() celery_beat_1 | File "/usr/local/lib/python3.7/site-packages/celery/beat.py", line 475, in _open_schedule celery_beat_1 | return self.persistence.open(self.schedule_filename, writeback=True) celery_beat_1 | File "/usr/local/lib/python3.7/shelve.py", line 243, in open celery_beat_1 | return DbfilenameShelf(filename, flag, protocol, writeback) celery_beat_1 | File "/usr/local/lib/python3.7/shelve.py", line 227, in __init__ celery_beat_1 | Shelf.__init__(self, dbm.open(filename, flag), protocol, writeback) celery_beat_1 | File "/usr/local/lib/python3.7/dbm/__init__.py", line 94, in open celery_beat_1 | return mod.open(file, flag, mode) celery_beat_1 | _gdbm.error: [Errno 22] Invalid argument celery_beat_1 | [2020-05-24 19:44:38,730: CRITICAL/MainProcess] beat raised exception <class '_gdbm.error'>: error(22, 'Invalid argument') celery_beat_1 | Traceback (most recent call last): celery_beat_1 | File "/usr/local/lib/python3.7/site-packages/kombu/utils/objects.py", line 42, in __get__ celery_beat_1 | return obj.__dict__[self.__name__] celery_beat_1 | KeyError: 'scheduler' celery_beat_1 | celery_beat_1 | During handling of the above exception, another exception occurred: celery_beat_1 | celery_beat_1 | Traceback (most recent call last): celery_beat_1 | File "/usr/local/lib/python3.7/site-packages/celery/beat.py", line 485, in setup_schedule celery_beat_1 | self._store = self._open_schedule() celery_beat_1 | File "/usr/local/lib/python3.7/site-packages/celery/beat.py", line 475, in _open_schedule celery_beat_1 | return self.persistence.open(self.schedule_filename, writeback=True) celery_beat_1 | File "/usr/local/lib/python3.7/shelve.py", line 243, in open celery_beat_1 | return DbfilenameShelf(filename, flag, protocol, writeback) celery_beat_1 | File "/usr/local/lib/python3.7/shelve.py", line 227, in __init__ celery_beat_1 | Shelf.__init__(self, dbm.open(filename, flag), protocol, writeback) celery_beat_1 | File "/usr/local/lib/python3.7/dbm/__init__.py", line 94, in open celery_beat_1 | return mod.open(file, flag, mode) celery_beat_1 | _gdbm.error: [Errno 22] Invalid argument celery_beat_1 | celery_beat_1 | During handling of the above exception, another exception occurred: celery_beat_1 | celery_beat_1 | Traceback (most recent call last): celery_beat_1 | File "/usr/local/lib/python3.7/site-packages/celery/apps/beat.py", line 109, in start_scheduler celery_beat_1 | service.start() celery_beat_1 | File "/usr/local/lib/python3.7/site-packages/celery/beat.py", line 588, in start celery_beat_1 | humanize_seconds(self.scheduler.max_interval)) celery_beat_1 | File "/usr/local/lib/python3.7/site-packages/kombu/utils/objects.py", line 44, in __get__ celery_beat_1 | value = obj.__dict__[self.__name__] = self.__get(obj) celery_beat_1 | File "/usr/local/lib/python3.7/site-packages/celery/beat.py", line 
632, in scheduler celery_beat_1 | return self.get_scheduler() celery_beat_1 | File "/usr/local/lib/python3.7/site-packages/celery/beat.py", line 627, in get_scheduler celery_beat_1 | lazy=lazy, celery_beat_1 | File "/usr/local/lib/python3.7/site-packages/celery/beat.py", line 467, in __init__ celery_beat_1 | Scheduler.__init__(self, *args, **kwargs) celery_beat_1 | File "/usr/local/lib/python3.7/site-packages/celery/beat.py", line 226, in __init__ celery_beat_1 | self.setup_schedule() celery_beat_1 | File "/usr/local/lib/python3.7/site-packages/celery/beat.py", line 493, in setup_schedule celery_beat_1 | self._store = self._destroy_open_corrupted_schedule(exc) celery_beat_1 | File "/usr/local/lib/python3.7/site-packages/celery/beat.py", line 481, in _destroy_open_corrupted_schedule celery_beat_1 | return self._open_schedule() celery_beat_1 | File "/usr/local/lib/python3.7/site-packages/celery/beat.py", line 475, in _open_schedule celery_beat_1 | return self.persistence.open(self.schedule_filename, writeback=True) celery_beat_1 | File "/usr/local/lib/python3.7/shelve.py", line 243, in open celery_beat_1 | return DbfilenameShelf(filename, flag, protocol, writeback) celery_beat_1 | File "/usr/local/lib/python3.7/shelve.py", line 227, in __init__ celery_beat_1 | Shelf.__init__(self, dbm.open(filename, flag), protocol, writeback) celery_beat_1 | File "/usr/local/lib/python3.7/dbm/__init__.py", line 94, in open celery_beat_1 | return mod.open(file, flag, mode) celery_beat_1 | _gdbm.error: [Errno 22] Invalid argument celery_beat_1 | [2020-05-24 19:44:38,736: WARNING/MainProcess] Traceback (most recent call last): celery_beat_1 | [2020-05-24 19:44:38,737: WARNING/MainProcess] File "/usr/local/lib/python3.7/site-packages/kombu/utils/objects.py", line 42, in __get__ celery_beat_1 | [2020-05-24 19:44:38,738: WARNING/MainProcess] return obj.__dict__[self.__name__] celery_beat_1 | [2020-05-24 19:44:38,739: WARNING/MainProcess] KeyError celery_beat_1 | [2020-05-24 19:44:38,743: WARNING/MainProcess] : celery_beat_1 | [2020-05-24 19:44:38,744: WARNING/MainProcess] 'scheduler' celery_beat_1 | [2020-05-24 19:44:38,745: WARNING/MainProcess] During handling of the above exception, another exception occurred: celery_beat_1 | [2020-05-24 19:44:38,746: WARNING/MainProcess] Traceback (most recent call last): celery_beat_1 | [2020-05-24 19:44:38,747: WARNING/MainProcess] File "/usr/local/lib/python3.7/site-packages/celery/beat.py", line 485, in setup_schedule celery_beat_1 | [2020-05-24 19:44:38,749: WARNING/MainProcess] self._store = self._open_schedule() celery_beat_1 | [2020-05-24 19:44:38,751: WARNING/MainProcess] File "/usr/local/lib/python3.7/site-packages/celery/beat.py", line 475, in _open_schedule celery_beat_1 | [2020-05-24 19:44:38,756: WARNING/MainProcess] return self.persistence.open(self.schedule_filename, writeback=True) celery_beat_1 | [2020-05-24 19:44:38,757: WARNING/MainProcess] File "/usr/local/lib/python3.7/shelve.py", line 243, in open celery_beat_1 | [2020-05-24 19:44:38,759: WARNING/MainProcess] return DbfilenameShelf(filename, flag, protocol, writeback) celery_beat_1 | [2020-05-24 19:44:38,760: WARNING/MainProcess] File "/usr/local/lib/python3.7/shelve.py", line 227, in __init__ celery_beat_1 | [2020-05-24 19:44:38,761: WARNING/MainProcess] Shelf.__init__(self, dbm.open(filename, flag), protocol, writeback) celery_beat_1 | [2020-05-24 19:44:38,762: WARNING/MainProcess] File "/usr/local/lib/python3.7/dbm/__init__.py", line 94, in open celery_beat_1 | [2020-05-24 19:44:38,764: 
WARNING/MainProcess] return mod.open(file, flag, mode) celery_beat_1 | [2020-05-24 19:44:38,770: WARNING/MainProcess] _gdbm celery_beat_1 | [2020-05-24 19:44:38,772: WARNING/MainProcess] . celery_beat_1 | [2020-05-24 19:44:38,774: WARNING/MainProcess] error celery_beat_1 | [2020-05-24 19:44:38,776: WARNING/MainProcess] : celery_beat_1 | [2020-05-24 19:44:38,777: WARNING/MainProcess] [Errno 22] Invalid argument celery_beat_1 | [2020-05-24 19:44:38,778: WARNING/MainProcess] During handling of the above exception, another exception occurred: celery_beat_1 | [2020-05-24 19:44:38,779: WARNING/MainProcess] Traceback (most recent call last): celery_beat_1 | [2020-05-24 19:44:38,779: WARNING/MainProcess] File "/usr/local/bin/celery", line 8, in <module> celery_beat_1 | [2020-05-24 19:44:38,780: WARNING/MainProcess] sys.exit(main()) celery_beat_1 | [2020-05-24 19:44:38,782: WARNING/MainProcess] File "/usr/local/lib/python3.7/site-packages/celery/__main__.py", line 16, in main celery_beat_1 | [2020-05-24 19:44:38,783: WARNING/MainProcess] _main() celery_beat_1 | [2020-05-24 19:44:38,785: WARNING/MainProcess] File "/usr/local/lib/python3.7/site-packages/celery/bin/celery.py", line 322, in main celery_beat_1 | [2020-05-24 19:44:38,787: WARNING/MainProcess] cmd.execute_from_commandline(argv) celery_beat_1 | [2020-05-24 19:44:38,788: WARNING/MainProcess] File "/usr/local/lib/python3.7/site-packages/celery/bin/celery.py", line 496, in execute_from_commandline celery_beat_1 | [2020-05-24 19:44:38,795: WARNING/MainProcess] super(CeleryCommand, self).execute_from_commandline(argv))) celery_beat_1 | [2020-05-24 19:44:38,796: WARNING/MainProcess] File "/usr/local/lib/python3.7/site-packages/celery/bin/base.py", line 298, in execute_from_commandline celery_beat_1 | [2020-05-24 19:44:38,797: WARNING/MainProcess] return self.handle_argv(self.prog_name, argv[1:]) celery_beat_1 | [2020-05-24 19:44:38,798: WARNING/MainProcess] File "/usr/local/lib/python3.7/site-packages/celery/bin/celery.py", line 488, in handle_argv celery_beat_1 | [2020-05-24 19:44:38,801: WARNING/MainProcess] return self.execute(command, argv) celery_beat_1 | [2020-05-24 19:44:38,803: WARNING/MainProcess] File "/usr/local/lib/python3.7/site-packages/celery/bin/celery.py", line 420, in execute celery_beat_1 | [2020-05-24 19:44:38,809: WARNING/MainProcess] ).run_from_argv(self.prog_name, argv[1:], command=argv[0]) celery_beat_1 | [2020-05-24 19:44:38,810: WARNING/MainProcess] File "/usr/local/lib/python3.7/site-packages/celery/bin/base.py", line 302, in run_from_argv celery_beat_1 | [2020-05-24 19:44:38,812: WARNING/MainProcess] sys.argv if argv is None else argv, command) celery_beat_1 | [2020-05-24 19:44:38,813: WARNING/MainProcess] File "/usr/local/lib/python3.7/site-packages/celery/bin/base.py", line 386, in handle_argv celery_beat_1 | [2020-05-24 19:44:38,818: WARNING/MainProcess] return self(*args, **options) celery_beat_1 | [2020-05-24 19:44:38,821: WARNING/MainProcess] File "/usr/local/lib/python3.7/site-packages/celery/bin/base.py", line 252, in __call__ celery_beat_1 | [2020-05-24 19:44:38,827: WARNING/MainProcess] ret = self.run(*args, **kwargs) celery_beat_1 | [2020-05-24 19:44:38,827: WARNING/MainProcess] File "/usr/local/lib/python3.7/site-packages/celery/bin/beat.py", line 109, in run celery_beat_1 | [2020-05-24 19:44:38,830: WARNING/MainProcess] return beat().run() celery_beat_1 | [2020-05-24 19:44:38,830: WARNING/MainProcess] File "/usr/local/lib/python3.7/site-packages/celery/apps/beat.py", line 81, in run celery_beat_1 | 
[2020-05-24 19:44:38,832: WARNING/MainProcess] self.start_scheduler() celery_beat_1 | [2020-05-24 19:44:38,833: WARNING/MainProcess] File "/usr/local/lib/python3.7/site-packages/celery/apps/beat.py", line 109, in start_scheduler celery_beat_1 | [2020-05-24 19:44:38,834: WARNING/MainProcess] service.start() celery_beat_1 | [2020-05-24 19:44:38,836: WARNING/MainProcess] File "/usr/local/lib/python3.7/site-packages/celery/beat.py", line 588, in start celery_beat_1 | [2020-05-24 19:44:38,838: WARNING/MainProcess] humanize_seconds(self.scheduler.max_interval)) celery_beat_1 | [2020-05-24 19:44:38,841: WARNING/MainProcess] File "/usr/local/lib/python3.7/site-packages/kombu/utils/objects.py", line 44, in __get__ celery_beat_1 | [2020-05-24 19:44:38,843: WARNING/MainProcess] value = obj.__dict__[self.__name__] = self.__get(obj) celery_beat_1 | [2020-05-24 19:44:38,844: WARNING/MainProcess] File "/usr/local/lib/python3.7/site-packages/celery/beat.py", line 632, in scheduler celery_beat_1 | [2020-05-24 19:44:38,849: WARNING/MainProcess] return self.get_scheduler() celery_beat_1 | [2020-05-24 19:44:38,850: WARNING/MainProcess] File "/usr/local/lib/python3.7/site-packages/celery/beat.py", line 627, in get_scheduler celery_beat_1 | [2020-05-24 19:44:38,852: WARNING/MainProcess] lazy=lazy, celery_beat_1 | [2020-05-24 19:44:38,852: WARNING/MainProcess] File "/usr/local/lib/python3.7/site-packages/celery/beat.py", line 467, in __init__ celery_beat_1 | [2020-05-24 19:44:38,855: WARNING/MainProcess] Scheduler.__init__(self, *args, **kwargs) celery_beat_1 | [2020-05-24 19:44:38,859: WARNING/MainProcess] File "/usr/local/lib/python3.7/site-packages/celery/beat.py", line 226, in __init__ celery_beat_1 | [2020-05-24 19:44:38,864: WARNING/MainProcess] self.setup_schedule() celery_beat_1 | [2020-05-24 19:44:38,865: WARNING/MainProcess] File "/usr/local/lib/python3.7/site-packages/celery/beat.py", line 493, in setup_schedule celery_beat_1 | [2020-05-24 19:44:38,867: WARNING/MainProcess] self._store = self._destroy_open_corrupted_schedule(exc) celery_beat_1 | [2020-05-24 19:44:38,868: WARNING/MainProcess] File "/usr/local/lib/python3.7/site-packages/celery/beat.py", line 481, in _destroy_open_corrupted_schedule celery_beat_1 | [2020-05-24 19:44:38,873: WARNING/MainProcess] return self._open_schedule() celery_beat_1 | [2020-05-24 19:44:38,874: WARNING/MainProcess] File "/usr/local/lib/python3.7/site-packages/celery/beat.py", line 475, in _open_schedule celery_beat_1 | [2020-05-24 19:44:38,876: WARNING/MainProcess] return self.persistence.open(self.schedule_filename, writeback=True) celery_beat_1 | [2020-05-24 19:44:38,879: WARNING/MainProcess] File "/usr/local/lib/python3.7/shelve.py", line 243, in open celery_beat_1 | [2020-05-24 19:44:38,884: WARNING/MainProcess] return DbfilenameShelf(filename, flag, protocol, writeback) celery_beat_1 | [2020-05-24 19:44:38,885: WARNING/MainProcess] File "/usr/local/lib/python3.7/shelve.py", line 227, in __init__ celery_beat_1 | [2020-05-24 19:44:38,886: WARNING/MainProcess] Shelf.__init__(self, dbm.open(filename, flag), protocol, writeback) celery_beat_1 | [2020-05-24 19:44:38,887: WARNING/MainProcess] File "/usr/local/lib/python3.7/dbm/__init__.py", line 94, in open celery_beat_1 | [2020-05-24 19:44:38,889: WARNING/MainProcess] return mod.open(file, flag, mode) celery_beat_1 | [2020-05-24 19:44:38,890: WARNING/MainProcess] _gdbm celery_beat_1 | [2020-05-24 19:44:38,892: WARNING/MainProcess] . 
celery_beat_1 | [2020-05-24 19:44:38,896: WARNING/MainProcess] error celery_beat_1 | [2020-05-24 19:44:38,898: WARNING/MainProcess] : | This is weird, I haven't got the solution right now, but I found a way to circumvent this. Why we are getting the issue: here are some notes from the Celery docs which explain what is happening here: Beat needs to store the last run times of the tasks in a local database file (named celerybeat-schedule by default), so it needs access to write in the current directory, or alternatively you can specify a custom location for this file. Basically celery is trying to read the file named celerybeat-schedule but for some reason it's failing. Why is it failing to read it on Docker? I have no clue for now... However, this comment gives some light: it's something related to file storage. Here is my workaround. I decided to use Redis to store the scheduler run times of my tasks instead of file storage, and luckily I found this package (redbeat) which helped me to achieve that. What you can do is this: Update your celery app config using: app.conf.redbeat_redis_url = your redis url Then in your docker-compose file you need to tell celery beat which scheduler it should use. celery: build: . command: celery worker -l info -A canopact.blueprints.contact.tasks env_file: - '.env' volumes: - '.:/canopact' celery_beat: build: . command: celery beat -l info -A canopact.blueprints.contact.tasks -S redbeat.RedBeatScheduler env_file: - '.env' volumes: - '.:/canopact' | 7 | 5 |
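For readers who want to see the workaround end to end: a minimal sketch of the Celery app configuration described in the answer above, assuming the celery-redbeat package is installed and a Redis service reachable at redis:6379 (the app name and URLs are illustrative, not taken from the original answer).

from celery import Celery

app = Celery('canopact', broker='redis://redis:6379/0')

# Keep beat's schedule and last-run times in Redis instead of the local
# celerybeat-schedule shelve file that fails inside the container.
app.conf.redbeat_redis_url = 'redis://redis:6379/1'

# Equivalent to passing -S redbeat.RedBeatScheduler on the command line,
# as done in the docker-compose snippet above.
app.conf.beat_scheduler = 'redbeat.RedBeatScheduler'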
61,974,206 | 2020-5-23 | https://stackoverflow.com/questions/61974206/timeout-within-session-while-sending-requests | I'm trying to learn how I can use timeout within a session while sending requests. The way I've tried below can fetch the content of a webpage but I'm not sure this is the right way as I could not find the usage of timeout in this documentation. import requests link = "https://stackoverflow.com/questions/tagged/web-scraping" with requests.Session() as s: r = s.get(link,timeout=5) print(r.text) How can I use timeout within session? | I'm not sure this is the right way as I could not find the usage of timeout in this documentation. Scroll to the bottom. It's definitely there. You can search for it in the page by pressing Ctrl+F and entering timeout. You're using timeout correctly in your code example. You can actually specify the timeout in a few different ways, as explained in the documentation: If you specify a single value for the timeout, like this: r = requests.get('https://github.com', timeout=5) The timeout value will be applied to both the connect and the read timeouts. Specify a tuple if you would like to set the values separately: r = requests.get('https://github.com', timeout=(3.05, 27)) If the remote server is very slow, you can tell Requests to wait forever for a response, by passing None as a timeout value and then retrieving a cup of coffee. r = requests.get('https://github.com', timeout=None) Try using https://httpstat.us/200?sleep=5000 to test your code. For example, this raises an exception because 0.2 seconds is not long enough to establish a connection with the server: import requests link = "https://httpstat.us/200?sleep=5000" with requests.Session() as s: try: r = s.get(link, timeout=(0.2, 10)) print(r.text) except requests.exceptions.Timeout as e: print(e) Output: HTTPSConnectionPool(host='httpstat.us', port=443): Read timed out. (read timeout=0.2) This raises an exception because the server waits for 5 seconds before sending the response, which is longer than the 2 second read timeout set: import requests link = "https://httpstat.us/200?sleep=5000" with requests.Session() as s: try: r = s.get(link, timeout=(3.05, 2)) print(r.text) except requests.exceptions.Timeout as e: print(e) Output: HTTPSConnectionPool(host='httpstat.us', port=443): Read timed out. (read timeout=2) You specifically mention using a timeout within a session. So maybe you want a session object which has a default timeout. Something like this: import requests link = "https://httpstat.us/200?sleep=5000" class EnhancedSession(requests.Session): def __init__(self, timeout=(3.05, 4)): self.timeout = timeout return super().__init__() def request(self, method, url, **kwargs): print("EnhancedSession request") if "timeout" not in kwargs: kwargs["timeout"] = self.timeout return super().request(method, url, **kwargs) session = EnhancedSession() try: response = session.get(link) print(response) except requests.exceptions.Timeout as e: print(e) try: response = session.get(link, timeout=1) print(response) except requests.exceptions.Timeout as e: print(e) try: response = session.get(link, timeout=10) print(response) except requests.exceptions.Timeout as e: print(e) Output: EnhancedSession request HTTPSConnectionPool(host='httpstat.us', port=443): Read timed out. (read timeout=4) EnhancedSession request HTTPSConnectionPool(host='httpstat.us', port=443): Read timed out. (read timeout=1) EnhancedSession request <Response [200]> | 16 | 5 |
61,989,485 | 2020-5-24 | https://stackoverflow.com/questions/61989485/pre-populate-current-value-of-wtforms-field-in-order-to-edit-it | I have a form inside a modal that I use to edit a review on an item (a perfume). A perfume can have multiple reviews, and the reviews live in an array of nested documents, each one with its own _id. I'm editing each particular review (in case an user wants to edit their review on the perfume once it's been submitted) by submitting the EditReviewForm to this edit_review route: @reviews.route("/review", methods=["GET", "POST"]) @login_required def edit_review(): form = EditReviewForm() review_id = request.form.get("review_id") perfume_id = request.form.get("perfume_id") if form.validate_on_submit(): mongo.db.perfumes.update( {"_id": ObjectId(perfume_id), <I edit my review here> }) return redirect(url_for("perfumes.perfume", perfume_id=perfume_id)) return redirect(url_for("perfumes.perfume", perfume_id=perfume_id)) And this route redirects to my perfume route, which shows the perfume and all the reviews it contains. This is the perfume route: @perfumes.route("/perfume/<perfume_id>", methods=["GET"]) def perfume(perfume_id): current_perfume = mongo.db.perfumes.find_one({"_id": ObjectId(perfume_id)}) add_review_form = AddReviewForm() edit_review_form = EditReviewForm() cur = mongo.db.perfumes.aggregate(etc) edit_review_form.review.data = current_perfume['reviews'][0]['review_content'] return render_template( "pages/perfume.html", title="Perfumes", cursor=cur, perfume=current_perfume, add_review_form=add_review_form, edit_review_form=edit_review_form ) My issue To find a way to get the review _id in that process and have it in my perfume route, so I can pre-populate my EditReviewForm with the current value. Otherwise the form looks empty to the user editing their review. By hardcoding an index (index [0] in this case): edit_review_form.review.data = current_perfume['reviews'][0]['review_content'] I am indeed displaying current values, but of course the same value for all reviews, as the reviews are in a loop in the template, and I need to get the value each review_id has. Is there a way to do this, before I give up with the idea of allowing users to edit their reviews? :D Please do let me know if my question is clear or if there's more information needed. Thanks so much in advance!! UPDATE 2: Trying to reduce further my current template situation to make it clearer: The modal with the review is fired from perfume-reviews.html, from this button: <div class="card-header"> <button type="button" class="btn edit-review" data-perfume_id="{{perfume['_id']}}" data-review_id="{{review['_id']}}" data-toggle="modal" data-target="#editReviewPerfumeModal" id="editFormButton">Edit</button> </div> And that opens the modal where my form with the review is (the field in question is a textarea currently displaying a WYSIWYG from CKEditor: <div class="modal-body"> <form method=POST action="{{ url_for('reviews.edit_review') }}" id="form-edit-review"> <div class="form-group" id="reviewContent"> {{ edit_review_form.review(class="form-control ckeditor", placeholder="Review")}} </div> </form> </div> Currently this isn't working: $(document).on("click", "#editFormButton", function (e) { var reviewText = $(this) .parents(div.card.container) .siblings("div#reviewContent") .children() .text(); $("input#editReviewContent").val(reviewText); }); and throws a ReferenceError: div is not defined. Where am I failing here? (Perhaps in more than one place?) 
UPDATE 3: this is where the button opens the modal, and underneath it's where the review content displays: <div class="card container"> <div class="row"> <div class="card-header col-9"> <h5>{{review['reviewer'] }} said on {{ review.date_reviewed.strftime('%d-%m-%Y') }}</h5> </div> <div class="card-header col-3"> <button type="button" class="btn btn-success btn-sm mt-2 edit-review float-right ml-2" data-perfume_id="{{perfume['_id']}}" data-review_id="{{review['_id']}}" data-toggle="modal" data-target="#editReviewPerfumeModal" id="editFormButton">Edit</button> </div> </div> <div class="p-3 row"> <div class=" col-10" id="reviewContent"> <li>{{ review['review_content'] | safe }}</li> </div> </div> </div> | You can do this with jQuery as when you open the form, the form will automatically show the review content in there. It will be done by manipulating the dom. Also, add an id to your edit button, in this example, I have given it an id "editFormButton". Similarly, add an id to the div in which review content lies so that it is easier to select, I have given it an id "reviewContent" Similarly, add an id to edit_review_form.review like this edit_review_form.review(id='editReviewContent') <script> $(document).on("click", "#editFormButton", function (e) { var reviewText = $(this) .parents("div.row") .siblings("div.p-3.row") .children("div#reviewContent") .children() .text(); $("input#editReviewContent").val(reviewText); }); </script> Don't forget to include jQuery. Also, you can do it with pure javascript. You can easily search the above equivalents on google. This article is a good start! | 8 | 1 |
62,017,043 | 2020-5-26 | https://stackoverflow.com/questions/62017043/automatic-download-of-appropriate-chromedriver-for-selenium-in-python | Unfortunately, Chromedriver always is version-specific to the Chrome version you have installed. So when you pack your python code AND a chromedriver via PyInstaller in a deployable .exe-file for Windows, it will not work in most cases as you won't be able to have all chromedriver versions in the .exe-file. Anyone knows a way on how to download the correct chromedriver from the website automatically? If not, I'll come up with a code to download the zip-file and unpack it to temp. Thanks! | Here is the other solution, where webdriver_manager does not support. This script will get the latest chrome driver version downloaded. import requests import wget import zipfile import os # get the latest chrome driver version number url = 'https://chromedriver.storage.googleapis.com/LATEST_RELEASE' response = requests.get(url) version_number = response.text # build the donwload url download_url = "https://chromedriver.storage.googleapis.com/" + version_number +"/chromedriver_win32.zip" # download the zip file using the url built above latest_driver_zip = wget.download(download_url,'chromedriver.zip') # extract the zip file with zipfile.ZipFile(latest_driver_zip, 'r') as zip_ref: zip_ref.extractall() # you can specify the destination folder path here # delete the zip file downloaded above os.remove(latest_driver_zip) | 21 | 28 |
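The webdriver_manager package mentioned in passing in the answer above solves the same version-matching problem with less code; a minimal sketch, assuming pip install webdriver-manager selenium and the selenium 3-style constructor (newer selenium versions wrap the driver path in a Service object instead):

from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager

# Downloads (and caches) the chromedriver build that matches the locally
# installed Chrome, then returns the path to the executable.
driver = webdriver.Chrome(ChromeDriverManager().install())
driver.get('https://example.com')
driver.quit()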
62,019,062 | 2020-5-26 | https://stackoverflow.com/questions/62019062/pandas-dataframe-split-multiple-key-values-to-different-columns | I have a dataframe column with the following format: col1 col2 A [{'Id':42,'prices':['30',’78’]},{'Id': 44,'prices':['20','47',‘89’]}] B [{'Id':47,'prices':['30',’78’]},{'Id':94,'prices':['20']},{'Id':84,'prices':['20','98']}] How can I transform it to the following ? col1 Id price A 42 ['30',’78’] A 44 ['20','47',‘89’] B 47 ['30',’78’] B 94 ['20'] B 84 ['20','98'] I was thinking of using apply and lambda as a solution but I am not sure how. Edit : In order to recreate this dataframe I use the following code : data = [['A', "[{'Id':42,'prices':['30','78']},{'Id': 44,'prices':['20','47','89']}]"], ['B', "[{'Id':47,'prices':['30','78']},{'Id':94,'prices':['20']},{'Id':84,'prices':['20','98']}]"]] df = pd.DataFrame(data, columns = ['col1', 'col2']) | Solution if there are lists in column col2: print (type(df['col2'].iat[0])) <class 'list'> L = [{**{'col1': a}, **x} for a, b in df[['col1','col2']].to_numpy() for x in b] df = pd.DataFrame(L) print (df) col1 Id prices 0 A 42 [30, 78] 1 A 44 [20, 47, 89] 2 B 47 [30, 78] 3 B 94 [20] 4 B 84 [20, 98] If there are strings: print (type(df['col2'].iat[0])) <class 'str'> import ast L = [{**{'col1': a}, **x} for a, b in df[['col1','col2']].to_numpy() for x in ast.literal_eval(b)] df = pd.DataFrame(L) print (df) col1 Id prices 0 A 42 [30, 78] 1 A 44 [20, 47, 89] 2 B 47 [30, 78] 3 B 94 [20] 4 B 84 [20, 98] For better understanding is possible use: import ast L = [] for a, b in df[['col1','col2']].to_numpy(): for x in ast.literal_eval(b): d = {'col1': a} out = {**d, **x} L.append(out) df = pd.DataFrame(L) print (df) col1 Id prices 0 A 42 [30, 78] 1 A 44 [20, 47, 89] 2 B 47 [30, 78] 3 B 94 [20] 4 B 84 [20, 98] | 10 | 5 |
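An alternative sketch, not from the accepted answer, that leans on explode and json_normalize instead of a list comprehension (assumes pandas >= 1.0 and the string column from the question's edit):

import ast
import pandas as pd

data = [['A', "[{'Id':42,'prices':['30','78']},{'Id': 44,'prices':['20','47','89']}]"],
        ['B', "[{'Id':47,'prices':['30','78']},{'Id':94,'prices':['20']},{'Id':84,'prices':['20','98']}]"]]
df = pd.DataFrame(data, columns=['col1', 'col2'])

# Parse each string into a list of dicts, give every dict its own row,
# then expand the dicts into Id/prices columns next to col1.
tmp = df.assign(col2=df['col2'].map(ast.literal_eval)).explode('col2').reset_index(drop=True)
out = pd.concat([tmp['col1'], pd.json_normalize(tmp['col2'].tolist())], axis=1)
print(out)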
62,011,741 | 2020-5-25 | https://stackoverflow.com/questions/62011741/pydantic-dataclass-vs-basemodel | What are the advantages and disadvantages of using Pydantic's dataclass vs BaseModel? Are there any performance issues or is it easier to Pydantic's dataclass in the other python module? | Your question is answered in Pydantic's documentation, specifically: Keep in mind that pydantic.dataclasses.dataclass is a drop-in replacement for dataclasses.dataclass with validation, not a replacement for pydantic.BaseModel (with a small difference in how initialization hooks work). There are cases where subclassing pydantic.BaseModel is the better choice. For more information and discussion see samuelcolvin/pydantic#710. The discussion link will give you some of the context you are looking for. In general, Pydantic's BaseModel implementation is not bound to behave the same as Python's dataclass implementation. The example cited in the issue above is one good example: from pydantic import BaseModel from pydantic.dataclasses import dataclass from typing import List @dataclass class A: x: List[int] = [] # Above definition with a default of `[]` will result in: # ValueError: mutable default <class 'list'> for field x is not allowed: use default_factory # If you resolve this, the output will read as in the comments below. class B(BaseModel): x: List[int] = [] print(A(x=[1, 2]), A(x=[3, 4])) # Output: A(x=[1, 2]) A(x=[3, 4]) print(B(x=[1, 2]), B(x=[3, 4])) # Output: x=[1, 2] x=[3, 4] If what you want first and foremost is dataclass behavior and then to simply augment it with some Pydantic validation features, the pydantic.dataclasses.dataclass approach may be what you want. Otherwise, BaseModel is probably what you want. | 87 | 62 |
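To make the trade-off concrete, a small sketch (field names are illustrative, pydantic v1 behaviour) showing that the drop-in dataclass still performs coercion and validation much like a BaseModel:

from pydantic import ValidationError
from pydantic.dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str = 'John Doe'

print(User(id='42'))          # '42' is coerced to the int 42
try:
    User(id='not a number')   # fails validation just like a BaseModel would
except ValidationError as exc:
    print(exc)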
61,971,090 | 2020-5-23 | https://stackoverflow.com/questions/61971090/how-can-i-add-images-to-bars-in-axes-matplotlib | I want to add flag images such as below to my bar chart: I have tried AnnotationBbox but that shows with a square outline. Can anyone tell how to achieve this exactly as above image? Edit: Below is my code ax.barh(y = y, width = values, color = r, height = 0.8) height = 0.8 for i, (value, url) in enumerate(zip(values, image_urls)): response = requests.get(url) img = Image.open(BytesIO(response.content)) width, height = img.size left = 10 top = 10 right = width-10 bottom = height-10 im1 = img.crop((left, top, right, bottom)) print(im1.size) im1 ax.imshow(im1, extent = [value - 6, value, i - height / 2, i + height / 2], aspect = 'auto', zorder = 2) Edit 2: height = 0.8 for j, (value, url) in enumerate(zip(ww, image_urls)): response = requests.get(url) img = Image.open(BytesIO(response.content)) ax.imshow(img, extent = [value - 6, value - 2, j - height / 2, j + height / 2], aspect = 'auto', zorder = 2) ax.set_xlim(0, max(ww)*1.05) ax.set_ylim(-0.5, len(yy) - 0.5) plt.tight_layout() | You need the images in a .png format with a transparent background. (Software such as Gimp or ImageMagick could help in case the images don't already have the desired background.) With such an image, plt.imshow() can place it in the plot. The location is given via extent=[x0, x1, y0, y1]. To prevent imshow to force an equal aspect ratio, add aspect='auto'. zorder=2 helps to get the image on top of the bars. Afterwards, the plt.xlim and plt.ylim need to be set explicitly (also because imshow messes with them.) The example code below used 'ada.png' as that comes standard with matplotlib, so the code can be tested standalone. Now it is loading flags from countryflags.io, following this post. Note that the image gets placed into a box in data coordinates (6 wide and 0.9 high in this case). This box will get stretched, for example when the plot gets resized. You might want to change the 6 to another value, depending on the x-scale and on the figure size. import numpy as np import matplotlib.pyplot as plt # import matplotlib.cbook as cbook import requests from io import BytesIO labels = ['CW', 'CV', 'GW', 'SX', 'DO'] colors = ['crimson', 'dodgerblue', 'teal', 'limegreen', 'gold'] values = 30 + np.random.randint(5, 20, len(labels)).cumsum() height = 0.9 plt.barh(y=labels, width=values, height=height, color=colors, align='center') for i, (label, value) in enumerate(zip(labels, values)): # load the image corresponding to label into img # with cbook.get_sample_data('ada.png') as image_file: # img = plt.imread(image_file) response = requests.get(f'https://www.countryflags.io/{label}/flat/64.png') img = plt.imread(BytesIO(response.content)) plt.imshow(img, extent=[value - 8, value - 2, i - height / 2, i + height / 2], aspect='auto', zorder=2) plt.xlim(0, max(values) * 1.05) plt.ylim(-0.5, len(labels) - 0.5) plt.tight_layout() plt.show() PS: As explained by Ernest in the comments and in this post, using OffsetImage the aspect ratio of the image stays intact. (Also, the xlim and ylim stay intact.) The image will not shrink when there are more bars, so you might need to experiment with the factor in OffsetImage(img, zoom=0.65) and the x-offset in AnnotationBbox(..., xybox=(-25, 0)). An extra option could place the flags outside the bar for bars that are too short. Or at the left of the y-axis. 
The code adapted for horizontal bars could look like: import numpy as np import requests from io import BytesIO import matplotlib.pyplot as plt from matplotlib.offsetbox import OffsetImage, AnnotationBbox def offset_image(x, y, label, bar_is_too_short, ax): response = requests.get(f'https://www.countryflags.io/{label}/flat/64.png') img = plt.imread(BytesIO(response.content)) im = OffsetImage(img, zoom=0.65) im.image.axes = ax x_offset = -25 if bar_is_too_short: x = 0 ab = AnnotationBbox(im, (x, y), xybox=(x_offset, 0), frameon=False, xycoords='data', boxcoords="offset points", pad=0) ax.add_artist(ab) labels = ['CW', 'CV', 'GW', 'SX', 'DO'] colors = ['crimson', 'dodgerblue', 'teal', 'limegreen', 'gold'] values = 2 ** np.random.randint(2, 10, len(labels)) height = 0.9 plt.barh(y=labels, width=values, height=height, color=colors, align='center', alpha=0.8) max_value = values.max() for i, (label, value) in enumerate(zip(labels, values)): offset_image(value, i, label, bar_is_too_short=value < max_value / 10, ax=plt.gca()) plt.subplots_adjust(left=0.15) plt.show() | 7 | 6 |
61,917,910 | 2020-5-20 | https://stackoverflow.com/questions/61917910/how-to-interpret-py-files-as-jupyter-notebooks | I am using an online jupyter notebook that is somehow configured to read all .py files as jupyter notebook files. I am a big fan of this setup and would like to use it everywhere. On my own jupyter installation however, .py files are just interpreted as text files and are not by default loaded into jupyter cells. How can I achieve the same configuration for my jupyter notebook? | What you're looking for is jupytext. You just need to install it into the python env from which you're running your jupyter notebooks: pip install jupytext --upgrade After that, .py files will open as notebooks in Jupyter. | 7 | 5 |
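For a picture of what jupytext reads, a plain .py script in its percent format looks like the sketch below (contents are illustrative, not from the original answer); after installing jupytext and restarting the notebook server, each # %% marker shows up as a separate cell:

# %% [markdown]
# # My analysis
# This markdown cell renders as rich text when the .py file is opened as a notebook.

# %%
import numpy as np

data = np.arange(10)
data.mean()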
61,997,378 | 2020-5-25 | https://stackoverflow.com/questions/61997378/assertionerror-could-not-compute-output-tensor | I am trying to build a model that takes multiple inputs and multiple outputs using a functional API. I followed this to create the code. def create_model_multiple(): input1 = tf.keras.Input(shape=(13,), name = 'I1') input2 = tf.keras.Input(shape=(6,), name = 'I2') hidden1 = tf.keras.layers.Dense(units = 4, activation='relu')(input1) hidden2 = tf.keras.layers.Dense(units = 4, activation='relu')(input2) merge = tf.keras.layers.concatenate([hidden1, hidden2]) hidden3 = tf.keras.layers.Dense(units = 3, activation='relu')(merge) output1 = tf.keras.layers.Dense(units = 2, activation='softmax', name ='O1')(hidden3) output2 = tf.keras.layers.Dense(units = 2, activation='softmax', name = 'O2')(hidden3) model = tf.keras.models.Model(inputs = [input1,input2], outputs = [output1,output2]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) return model My model.fit command looks like this: history = model.fit({'I1':train_data, 'I2':new_train_data}, {'O1':train_labels, 'O2': new_target_label}, validation_data=(val_data,val_labels), epochs=100, verbose = 1) The shapes of input data are as follows: train_data is (192,13) new_train_data is (192,6) train-labels,new_target_labels is (192,) The code runs for a few steps then raises this error: Epoch 1/100 1/6 [====>.........................] - ETA: 0s - loss: 360.3317 - O1_loss: 127.8019 - O2_loss: 232.5298 - O1_accuracy: 0.3438 - O2_accuracy: 0.4062 --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) <ipython-input-29-db61ad0a9d8b> in <module> 3 validation_data=(val_data,val_labels), 4 epochs=100, ----> 5 verbose = 1) c:\users\aniket\documents\aniket\learning-ml\ml_env\lib\site-packages\tensorflow\python\keras\engine\training.py in _method_wrapper(self, *args, **kwargs) 64 def _method_wrapper(self, *args, **kwargs): 65 if not self._in_multi_worker_mode(): # pylint: disable=protected-access ---> 66 return method(self, *args, **kwargs) 67 68 # Running inside `run_distribute_coordinator` already. c:\users\aniket\documents\aniket\learning-ml\ml_env\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing) 870 workers=workers, 871 use_multiprocessing=use_multiprocessing, --> 872 return_dict=True) 873 val_logs = {'val_' + name: val for name, val in val_logs.items()} 874 epoch_logs.update(val_logs) c:\users\aniket\documents\aniket\learning-ml\ml_env\lib\site-packages\tensorflow\python\keras\engine\training.py in _method_wrapper(self, *args, **kwargs) 64 def _method_wrapper(self, *args, **kwargs): 65 if not self._in_multi_worker_mode(): # pylint: disable=protected-access ---> 66 return method(self, *args, **kwargs) 67 68 # Running inside `run_distribute_coordinator` already. 
c:\users\aniket\documents\aniket\learning-ml\ml_env\lib\site-packages\tensorflow\python\keras\engine\training.py in evaluate(self, x, y, batch_size, verbose, sample_weight, steps, callbacks, max_queue_size, workers, use_multiprocessing, return_dict) 1079 step_num=step): 1080 callbacks.on_test_batch_begin(step) -> 1081 tmp_logs = test_function(iterator) 1082 # Catch OutOfRangeError for Datasets of unknown size. 1083 # This blocks until the batch has finished executing. c:\users\aniket\documents\aniket\learning-ml\ml_env\lib\site-packages\tensorflow\python\eager\def_function.py in __call__(self, *args, **kwds) 578 xla_context.Exit() 579 else: --> 580 result = self._call(*args, **kwds) 581 582 if tracing_count == self._get_tracing_count(): c:\users\aniket\documents\aniket\learning-ml\ml_env\lib\site-packages\tensorflow\python\eager\def_function.py in _call(self, *args, **kwds) 616 # In this case we have not created variables on the first call. So we can 617 # run the first trace but we should fail if variables are created. --> 618 results = self._stateful_fn(*args, **kwds) 619 if self._created_variables: 620 raise ValueError("Creating variables on a non-first call to a function" c:\users\aniket\documents\aniket\learning-ml\ml_env\lib\site-packages\tensorflow\python\eager\function.py in __call__(self, *args, **kwargs) 2417 """Calls a graph function specialized to the inputs.""" 2418 with self._lock: -> 2419 graph_function, args, kwargs = self._maybe_define_function(args, kwargs) 2420 return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access 2421 c:\users\aniket\documents\aniket\learning-ml\ml_env\lib\site-packages\tensorflow\python\eager\function.py in _maybe_define_function(self, args, kwargs) 2772 and self.input_signature is None 2773 and call_context_key in self._function_cache.missed): -> 2774 return self._define_function_with_shape_relaxation(args, kwargs) 2775 2776 self._function_cache.missed.add(call_context_key) c:\users\aniket\documents\aniket\learning-ml\ml_env\lib\site-packages\tensorflow\python\eager\function.py in _define_function_with_shape_relaxation(self, args, kwargs) 2704 relaxed_arg_shapes) 2705 graph_function = self._create_graph_function( -> 2706 args, kwargs, override_flat_arg_shapes=relaxed_arg_shapes) 2707 self._function_cache.arg_relaxed[rank_only_cache_key] = graph_function 2708 c:\users\aniket\documents\aniket\learning-ml\ml_env\lib\site-packages\tensorflow\python\eager\function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes) 2665 arg_names=arg_names, 2666 override_flat_arg_shapes=override_flat_arg_shapes, -> 2667 capture_by_value=self._capture_by_value), 2668 self._function_attributes, 2669 # Tell the ConcreteFunction to clean up its graph once it goes out of c:\users\aniket\documents\aniket\learning-ml\ml_env\lib\site-packages\tensorflow\python\framework\func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes) 979 _, original_func = tf_decorator.unwrap(python_func) 980 --> 981 func_outputs = python_func(*func_args, **func_kwargs) 982 983 # invariant: `func_outputs` contains only Tensors, CompositeTensors, c:\users\aniket\documents\aniket\learning-ml\ml_env\lib\site-packages\tensorflow\python\eager\def_function.py in wrapped_fn(*args, **kwds) 439 # __wrapped__ allows AutoGraph to swap in a converted function. 
We give 440 # the function a weak reference to itself to avoid a reference cycle. --> 441 return weak_wrapped_fn().__wrapped__(*args, **kwds) 442 weak_wrapped_fn = weakref.ref(wrapped_fn) 443 c:\users\aniket\documents\aniket\learning-ml\ml_env\lib\site-packages\tensorflow\python\framework\func_graph.py in wrapper(*args, **kwargs) 966 except Exception as e: # pylint:disable=broad-except 967 if hasattr(e, "ag_error_metadata"): --> 968 raise e.ag_error_metadata.to_exception(e) 969 else: 970 raise AssertionError: in user code: c:\users\aniket\documents\aniket\learning-ml\ml_env\lib\site-packages\tensorflow\python\keras\engine\training.py:941 test_function * outputs = self.distribute_strategy.run( c:\users\aniket\documents\aniket\learning-ml\ml_env\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:951 run ** return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) c:\users\aniket\documents\aniket\learning-ml\ml_env\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:2290 call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) c:\users\aniket\documents\aniket\learning-ml\ml_env\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:2649 _call_for_each_replica return fn(*args, **kwargs) c:\users\aniket\documents\aniket\learning-ml\ml_env\lib\site-packages\tensorflow\python\keras\engine\training.py:909 test_step ** y_pred = self(x, training=False) c:\users\aniket\documents\aniket\learning-ml\ml_env\lib\site-packages\tensorflow\python\keras\engine\base_layer.py:927 __call__ outputs = call_fn(cast_inputs, *args, **kwargs) c:\users\aniket\documents\aniket\learning-ml\ml_env\lib\site-packages\tensorflow\python\keras\engine\network.py:719 call convert_kwargs_to_constants=base_layer_utils.call_context().saving) c:\users\aniket\documents\aniket\learning-ml\ml_env\lib\site-packages\tensorflow\python\keras\engine\network.py:899 _run_internal_graph assert str(id(x)) in tensor_dict, 'Could not compute output ' + str(x) AssertionError: Could not compute output Tensor("O1_6/Identity:0", shape=(None, 2), dtype=float32) The jupyter-notebook with complete code is here | you have to provide validation_data in the correct format (like your train). you have to pass 2 input data and 2 targets... 
you are passing only one this is a dummy example def create_model_multiple(): input1 = tf.keras.Input(shape=(13,), name = 'I1') input2 = tf.keras.Input(shape=(6,), name = 'I2') hidden1 = tf.keras.layers.Dense(units = 4, activation='relu')(input1) hidden2 = tf.keras.layers.Dense(units = 4, activation='relu')(input2) merge = tf.keras.layers.concatenate([hidden1, hidden2]) hidden3 = tf.keras.layers.Dense(units = 3, activation='relu')(merge) output1 = tf.keras.layers.Dense(units = 2, activation='softmax', name ='O1')(hidden3) output2 = tf.keras.layers.Dense(units = 2, activation='softmax', name = 'O2')(hidden3) model = tf.keras.models.Model(inputs = [input1,input2], outputs = [output1,output2]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) return model x1 = np.random.uniform(0,1, (190,13)) x2 = np.random.uniform(0,1, (190,6)) val_x1 = np.random.uniform(0,1, (50,13)) val_x2 = np.random.uniform(0,1, (50,6)) y1 = np.random.randint(0,2, 190) y2 = np.random.randint(0,2, 190) val_y1 = np.random.randint(0,2, 50) val_y2 = np.random.randint(0,2, 50) model = create_model_multiple() history = model.fit({'I1':x1, 'I2':x2}, {'O1':y1, 'O2': y2}, validation_data=([val_x1,val_x2], [val_y1,val_y2]), # <========= epochs=100, verbose = 1) | 15 | 15 |
61,990,363 | 2020-5-24 | https://stackoverflow.com/questions/61990363/rmse-loss-for-multi-output-regression-problem-in-pytorch | I'm training a CNN architecture to solve a regression problem using PyTorch where my output is a tensor of 20 values. I planned to use RMSE as my loss function for the model and tried to use PyTorch's nn.MSELoss() and took the square root for it using torch.sqrt() for that but got confused after obtaining the results.I'll try my best to explain why. It's obvious that for a batch-size bs my output tensor's dimensions would be [bs , 20].I tried to implement and RMSE function of my own : def loss_function (predicted_x , target ): loss = torch.sum(torch.square(predicted_x - target) , axis= 1)/(predicted_x.size()[1]) #Taking the mean of all the squares by dividing it with the number of outputs i.e 20 in my case loss = torch.sqrt(loss) loss = torch.sum(loss)/predicted_x.size()[0] #averaging out by batch-size return loss But the output of my loss_function() and how PyTorch implements it with nn.MSELoss() differed . I'm not sure whether my implementation is wrong or am I using nn.MSELoss() in the wrong way. | The MSE loss is the mean of the squares of the errors. You're taking the square-root after computing the MSE, so there is no way to compare your loss function's output to that of the PyTorch nn.MSELoss() function — they're computing different values. However, you could just use the nn.MSELoss() to create your own RMSE loss function as: loss_fn = nn.MSELoss() RMSE_loss = torch.sqrt(loss_fn(prediction, target)) RMSE_loss.backward() Hope that helps. | 9 | 6 |
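If you want the same idea packaged as a reusable criterion, a short sketch (the small epsilon is my addition, guarding the sqrt gradient at exactly zero error; it is not part of the answer):

import torch
import torch.nn as nn

class RMSELoss(nn.Module):
    def __init__(self, eps=1e-8):
        super().__init__()
        self.mse = nn.MSELoss()
        self.eps = eps

    def forward(self, prediction, target):
        # square root of the mean squared error over the batch and all 20 outputs
        return torch.sqrt(self.mse(prediction, target) + self.eps)

criterion = RMSELoss()
pred = torch.randn(4, 20, requires_grad=True)
target = torch.randn(4, 20)
loss = criterion(pred, target)
loss.backward()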
61,988,327 | 2020-5-24 | https://stackoverflow.com/questions/61988327/create-a-list-including-row-name-column-name-and-the-value-from-dataframe | I have the following dataframe: A B C A 1 3 0 B 3 2 5 C 0 5 4 All I want is shown below: my_list = [('A','A',1),('A','B',3),('A','C',0),('B','B',2),('B','C',5),('C','C',4)] Thanks in advance! | IIUC, you can do: df.stack().reset_index().agg(tuple,1).tolist() [('A', 'A', 1), ('A', 'B', 3), ('A', 'C', 0), ('B', 'A', 3), ('B', 'B', 2), ('B', 'C', 5), ('C', 'A', 0), ('C', 'B', 5), ('C', 'C', 4)] | 14 | 5 |
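An equivalent sketch that skips the reset_index/agg step; note that, like the accepted answer, it yields all nine row/column pairs rather than only the six upper-triangle entries shown in the question:

# df.stack() gives a Series indexed by (row label, column label)
my_list = [(row, col, val) for (row, col), val in df.stack().items()]
# [('A', 'A', 1), ('A', 'B', 3), ('A', 'C', 0), ..., ('C', 'C', 4)]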
61,952,845 | 2020-5-22 | https://stackoverflow.com/questions/61952845/fastapi-single-parameter-body-cause-pydantic-validation-error | I have a POST FastAPI method. I do not want to construct a class nor query string. So, I decide to apply Body() method. @app.post("/test-single-int") async def test_single_int( t: int = Body(...) ): pass This is the request POST http://localhost:8000/test-single-int/ { "t": 10 } And this is the response HTTP/1.1 422 Unprocessable Entity date: Fri, 22 May 2020 10:00:16 GMT server: uvicorn content-length: 83 content-type: application/json connection: close { "detail": [ { "loc": [ "body", "s" ], "msg": "str type expected", "type": "type_error.str" } ] } However, after trying with many samples, I found that they will not error if I have more than one Body(). For example, @app.post("/test-multi-mix") async def test_multi_param( s: str = Body(...), t: int = Body(...), ): pass Request POST http://localhost:8000/test-multi-mix/ { "s": "test", "t": 10 } Response HTTP/1.1 200 OK date: Fri, 22 May 2020 10:16:12 GMT server: uvicorn content-length: 4 content-type: application/json connection: close null Does anyone have any idea about my implementation? Are there wrong? Is it not best practice? Or it is a bug? | It is not a bug, it is how Body behaves, it exists for "extending" request params how documentation outlines: class Item(BaseModel): name: str class User(BaseModel): username: str full_name: str = None @app.put("/items/{item_id}") async def update_item( *, item_id: int, item: Item, user: User, importance: int = Body(..., gt=0), q: str = None ): pass Valid request body for this view would be: { "item": { "name": "Foo", "tax": 3.2 }, "user": { "username": "dave", "full_name": "Dave Grohl" }, "importance": 5 } If you really want to use Body alone you must specify embed=True, this one works as expected: @app.put("/items/{item_id}") async def update_item( *, item_id:int, importance: int = Body(..., gt=0, embed=True), q: str = None ): pass | 10 | 15 |
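To see what embed=True changes on the wire, a quick sketch with FastAPI's test client, assuming the app and the last route definition from the answer above are in scope (TestClient needs the requests package installed):

from fastapi.testclient import TestClient

client = TestClient(app)

# With embed=True the single value must arrive wrapped in an object keyed
# by the parameter name, which is exactly the {"t": 10} shape the question sent.
resp = client.put("/items/5", json={"importance": 3})
print(resp.status_code)  # 200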
61,983,158 | 2020-5-24 | https://stackoverflow.com/questions/61983158/how-to-concat-multiple-pandas-dataframe-columns-with-different-token-separator | I am trying to concat multiple Pandas DataFrame columns with different tokens. For example, my dataset looks like this : dataframe = pd.DataFrame({'col_1' : ['aaa','bbb','ccc','ddd'], 'col_2' : ['name_aaa','name_bbb','name_ccc','name_ddd'], 'col_3' : ['job_aaa','job_bbb','job_ccc','job_ddd']}) I want to output something like this: features 0 aaa <0> name_aaa <1> job_aaa 1 bbb <0> name_bbb <1> job_bbb 2 ccc <0> name_ccc <1> job_ccc 3 ddd <0> name_ddd <1> job_ddd Explanation : concat each column with "<{}>" where {} will be increasing numbers. What I've tried so far: I don't want to modify original DataFrame so I created two new dataframe: features_df = pd.DataFrame() final_df = pd.DataFrame() for iters in range(len(dataframe.columns)): features_df[dataframe.columns[iters]] = dataframe[dataframe.columns[iters]] + ' ' + "<{}>".format(iters) final_df['features'] = features_df[features_df.columns].agg(' '.join, axis=1) There is an issue I am facing, It's adding <2> at last but I want output like above, also this is not panda's way to do this task, How I can make it more efficient? | from itertools import chain dataframe['features'] = dataframe.apply(lambda x: ''.join([*chain.from_iterable((v, f' <{i}> ') for i, v in enumerate(x))][:-1]), axis=1) print(dataframe) Prints: col_1 col_2 col_3 features 0 aaa name_aaa job_aaa aaa <0> name_aaa <1> job_aaa 1 bbb name_bbb job_bbb bbb <0> name_bbb <1> job_bbb 2 ccc name_ccc job_ccc ccc <0> name_ccc <1> job_ccc 3 ddd name_ddd job_ddd ddd <0> name_ddd <1> job_ddd | 19 | 8 |
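A more explicit alternative sketch that builds the same 'value <i> value' string without itertools, in case the chain/enumerate one-liner reads as too dense:

def join_with_tokens(row):
    parts = [row.iloc[0]]
    for i, value in enumerate(row.iloc[1:]):
        parts += [f'<{i}>', value]   # <0> before the 2nd column, <1> before the 3rd, ...
    return ' '.join(parts)

dataframe['features'] = dataframe.apply(join_with_tokens, axis=1)
# 0    aaa <0> name_aaa <1> job_aaa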
61,980,349 | 2020-5-24 | https://stackoverflow.com/questions/61980349/tensorflow-typeerror-cannot-unpack-non-iterable-float-object | I am using tensorflow V2.2 and run into a TypeError ("cannot unpack non-iterable float object") when I do model.evaluate. Can someone advise what the issue may be? (A screenshot of the execution and error message was attached.) | You need to define a metric when you compile the model: model.compile('adam', 'binary_crossentropy', metrics='accuracy') This way, during evaluation, both loss and accuracy are returned. | 7 | 12 |
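A self-contained sketch (toy model and random data, purely illustrative) of why the unpacking then works: with a metric defined, evaluate() returns [loss, accuracy] instead of a single float:

import numpy as np
import tensorflow as tf

x = np.random.rand(32, 4)
y = np.random.randint(0, 2, size=(32,))

model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(4,))])
model.compile('adam', 'binary_crossentropy', metrics=['accuracy'])
model.fit(x, y, epochs=1, verbose=0)

# Without metrics=..., evaluate() returns a bare float and the line below
# raises "TypeError: cannot unpack non-iterable float object".
loss, accuracy = model.evaluate(x, y, verbose=0)
print(loss, accuracy)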
61,972,717 | 2020-5-23 | https://stackoverflow.com/questions/61972717/how-to-run-jupyter-notebook-with-a-different-version-of-python | I want to be able to run both Python 3.8 (current version) and Python 3.7 in my Jupyter Notebook. I understand creating different IPython kernels from virtual environments is the way. So I downloaded Python 3.7 and locally installed it in my home directory. Used this python binary file to create a virtual environment by > virtualenv -p ~/Python3.7/bin/python3 py37 > source py37/bin/activate This works perfectly and gives 'Python 3.7' correctly on checking with python --version and sys.version. Then for creating IPython kernel, (py37) > ipython kernel install --user --name py37 --display-name "Python 3.7" (py37) > jupyter notebook This also runs without error and the kernel can be confirmed to be added in the Notebook. However it does not run Python 3.7 like the virtual environment, but Python 3.8 like the default kernel. (confirmed with sys.version) I checked ~/.local/share/jupyter/kernels/py37/kernel.json and saw its contents as { "argv": [ "/usr/bin/python3", "-m", "ipykernel_launcher", "-f", "{connection_file}" ], "display_name": "Python 3.7", "language": "python" } So naturally I tried editing the /usr/bin/python3 to point to my Python 3.7 binary file path that is ~/Python3.7/bin/python3, but then even the kernel doesn't work properly in the notebook. What can I possibly do? NB: I use Arch Linux, so I installed jupyter, virtualenv, ... through pacman not pip as is recommended in Arch. | Found it myself, the hard way. Let me share anyway, in case this helps anyone. I guess the problem was that jupyter notebook installed through pacman searches for python binary files in the PATH variable and not in the path specified by the virtual environment. Since I installed Python 3.7 locally in my home directory, Jupyter can't find it and it might have defaulted to the default python version. So the possible solutions are: Install Jupyter Notebook through pip (instead of pacman) within the virtual environment set on Python 3.7 (This is not at all recommended for Arch Linux users, as installing packages through pip can probably cause issues in future) > wget https://www.python.org/ftp/python/3.7.4/Python-3.7.4.tgz > tar -xvf Python-3.7.4.tgz > cd Python-3.7.4/ > ./configure --prefix=$HOME/Python37 > make > make install > virtualenv -p ~/Python3.7/bin/python3 py37 > source py37/bin/activate (py37) > pip install notebook (py37) > python -m notebook Install Python 3.7 within default directory (instead of specifying somewhere else). Create a new IPython kernel using the suitable virtual environment and use jupyter-notebook installed through pacman. (Recommended for Arch Linux users) Note 1: > python points to the updated global Python 3.8 version and > python3 or > python3.7 points to newly installed Python 3.7 Note 2: Once the required kernel is created, you might even be able to use that python version outside the virtual environment. > wget https://www.python.org/ftp/python/3.7.4/Python-3.7.4.tgz > tar -xvf Python-3.7.4.tgz > cd Python-3.7.4/ > ./configure > make > sudo make install > virtualenv -p $(which python3.7) py37 > source py37/bin/activate (py37) > ipython kernel install --user --name py37 --display-name "Python 3.7" (py37) > jupyter notebook Add the path of the directory where you have locally installed the new Python version to the $PATH variable, create an IPython kernel and run Jupyter Notebook within a suitable virtual environment.
(Haven't yet tried this one personally. Just felt that this should work. So no guarantee. Also I don't think this is a good solution) > wget https://www.python.org/ftp/python/3.7.4/Python-3.7.4.tgz > tar -xvf Python-3.7.4.tgz > cd Python-3.7.4/ > ./configure --prefix=$HOME/Python37 > make > make install > export PATH="$HOME/Python37/bin:$PATH" > virtualenv -p python3.7 py37 > source py37/bin/activate (py37) > ipython kernel install --user --name py37 --display-name "Python 3.7" (py37) > jupyter notebook | 7 | 8 |
61,979,855 | 2020-5-23 | https://stackoverflow.com/questions/61979855/changing-colours-of-an-area-in-an-image-using-opencv-in-python | I have a picture were I want to change all white-ish pixels to grey, but only for a certain area of the image. Example picture, I just want to change the picture outside of the red rectangle, without changing the image within the red rectangle: I already have the general code, which was part of someone elses Stackoverflow question, that changes the colour of every white pixel instead of only just the one outside of an area. image = cv.imread("meme 2.jpg") hsv = cv.cvtColor(image, cv.COLOR_BGR2HSV) # Define lower and uppper limits of what we call "white-ish" sensitivity = 19 lower_white = np.array([0, 0, 255 - sensitivity]) upper_white = np.array([255, sensitivity, 255]) # Mask image to only select white mask = cv.inRange(hsv, lower_white, upper_white) # Change image to grey where we found brown image[mask > 0] = (170, 170, 170) cv.imwrite(file, image) | Here is one way to do that in Python/OpenCV. Read the input Convert to HSV color space Threshold on desired color to make a mask Use the mask to change the color of all corresponding pixels in the image Draw a new rectangular mask for the region where you do not want to change Invert the new mask for the region where you do want to change Apply the new mask to the original image Apply the inverted new mask to the color changed image Add the two results together to form the final image Save the results Input: import cv2 import numpy as np # Read image image = cv2.imread('4animals.jpg') # Convert to HSV hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV) # Define lower and uppper limits of what we call "white-ish" sensitivity = 19 lower_white = np.array([0, 0, 255 - sensitivity]) upper_white = np.array([255, sensitivity, 255]) # Create mask to only select white mask = cv2.inRange(hsv, lower_white, upper_white) # Change image to grey where we found white image2 = image.copy() image2[mask > 0] = (170, 170, 170) # Create new rectangular mask that is white on black background x,y,w,h = 33,100,430,550 mask2 = np.zeros_like(image) cv2.rectangle(mask2, (x,y), (x+w,y+h), (255, 255, 255), -1) # invert mask mask2_inv = 255 - mask2 # apply mask to image image_masked = cv2.bitwise_and(image, mask2) # apply inverted mask to image2 image2_masked = cv2.bitwise_and(image2, mask2_inv) # add together result = cv2.add(image_masked, image2_masked) # save results cv2.imwrite('4animals_mask.jpg', mask) cv2.imwrite('4animals_modified.png', image2) cv2.imwrite('4animals_mask2.jpg', mask2) cv2.imwrite('4animals_mask2_inv.jpg', mask2_inv) cv2.imwrite('4animals_masked.jpg', image_masked) cv2.imwrite('4animals_modified_masked.jpg', image2_masked) cv2.imwrite('4animals_result.jpg', result) cv2.imshow('mask', mask) cv2.imshow('image2', image2) cv2.imshow('mask2', mask2 ) cv2.imshow('mask2_inv', mask2_inv) cv2.imshow('image_masked', image_masked) cv2.imshow('image2_masked', image2_masked) cv2.imshow('result', result) cv2.waitKey(0) cv2.destroyAllWindows() Color mask: Rectangle mask: Inverted rectangle mask: Color changed image: Masked input: Masked color changed image: Result: | 11 | 4 |
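The same 'leave the rectangle untouched' effect can also be had by zeroing the color mask inside the protected region, instead of building a second rectangular mask and combining two images. A sketch of the idea, reusing the coordinates from the answer above:

import cv2
import numpy as np

image = cv2.imread('4animals.jpg')
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

sensitivity = 19
mask = cv2.inRange(hsv, np.array([0, 0, 255 - sensitivity]), np.array([255, sensitivity, 255]))

# Clear the mask inside the rectangle so those pixels are never recolored.
x, y, w, h = 33, 100, 430, 550
mask[y:y+h, x:x+w] = 0

image[mask > 0] = (170, 170, 170)
cv2.imwrite('4animals_result.jpg', image)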
61,975,353 | 2020-5-23 | https://stackoverflow.com/questions/61975353/what-is-the-difference-between-string-literals-and-string-values | See this answer. I think your confusion is that you're mixing up the concept of string literals in source code with actual string values. What is the difference between string literals and string values? I did not understand this. | A string literal is a piece of text you can write in your program's source code, beginning and ending with quotation marks, that tells Python to create a string with certain contents. It looks like 'asdf' or ''' multiline content ''' or 'the thing at the end of this one is a line break\n' In a string literal (except for raw string literals), special sequences of characters known as escape sequences in the string literal are replaced with different characters in the actual string. For example, the escape sequence \n in a string literal is replaced with a line feed character in the actual string. Escape sequences begin with a backslash. A string is a Python object representing a text value. It can be built from a string literal, or it could be read from a file, or it could originate from many other sources. Backslashes in a string have no special meaning, and backslashes in most possible sources of strings have no special meaning either. For example, if you have a file with backslashes in it, looking like this: asdf\n and you do with open('that_file.txt') as f: text = f.read() the \n in the file will not be replaced by a line break. Backslashes are special in string literals, but not in most other contexts. When you ask for the repr representation of a string, either by calling repr or by displaying the string interactively: >>> some_string = "asdf" >>> some_string 'asdf' Python will build a new string whose contents are a string literal that would evaluate to the original string. In this example, some_string does not have ' or " characters in it. The contents of the string are the four characters asdf, the characters displayed if you print the string: >>> print(some_string) asdf However, the repr representation has ' characters in it, because 'asdf' is a string literal that would evaluate to the string. Note that 'asdf' is not the same string literal as the "asdf" we originally used - many different string literals can evaluate to equal strings. | 15 | 18 |
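One more tiny check that often makes the distinction click: the escape sequence is two characters of source code but a single character in the resulting string, and a raw string literal switches that substitution off:

s = 'a\nb'       # six characters of source code
print(len(s))    # 3 -- the string holds 'a', a line feed, 'b'

raw = r'a\nb'    # raw literal: the backslash and 'n' stay separate characters
print(len(raw))  # 4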