Dataset schema (column name: dtype, range of values or string lengths):
question_id: int64 (59.5M to 79.4M)
creation_date: string (8 to 10 characters)
link: string (60 to 163 characters)
question: string (53 to 28.9k characters)
accepted_answer: string (26 to 29.3k characters)
question_vote: int64 (1 to 410)
answer_vote: int64 (-9 to 482)
68,903,084
2021-8-24
https://stackoverflow.com/questions/68903084/pydantic-validation-does-not-happen
I am quite new to using Pydantic. The Issue I am facing right now is that the Model Below is not raising the Expected Exception when the value is out of range. For example, if you pass -1 into this model it should ideally raise an HTTPException. but nothing happens I am not sure where I might be going wrong. Any Advice would be great. class GetInput: """ for the fields endpoint """ def __init__(self, rank: Optional[int] = None, interval: Optional[int] = None): self.rank = rank self.interval = interval @validator('rank') def check_if_rank_in_range(cls, v): """ check if input rank is within range """ if not 0 < v < 1000001: raise HTTPException( status_code=400, detail="Rank Value Must be within range (0,1000000)") return v @validator('interval') def check_if_interval_in_range(cls, v): """ check if input rank is within range """ if not 0 < v < 1000001: raise HTTPException( status_code=400, detail="Interval Value Must be within range (0,1000000)") return v The FastAPI Endpoint @router.get('fields/',status_code=200) def get_data(params: GetInput = Depends()): if params.rank: result = get_info_by_rank(params.rank) elif params.interval: result = get_info_by_interval(params.interval) return result
class GetInput(BaseModel): rank: Optional[int] = None interval: Optional[int] = None @validator("*") def check_range(cls, v): if v: if not 0 < v < 1000001: raise HTTPException(status_code=400, detail="Value Must be within range (0,1000000)") return v The validator was not working because the class did not inherit from BaseModel. Once BaseModel is inherited, the validator would throw an error if either of the values is empty, hence the additional if statement.
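A minimal runnable sketch of the fix above, combined with the FastAPI endpoint from the question (the response body here is only a placeholder for the question's get_info_by_rank / get_info_by_interval lookups):

from typing import Optional

from fastapi import Depends, FastAPI, HTTPException
from pydantic import BaseModel, validator

app = FastAPI()


class GetInput(BaseModel):
    rank: Optional[int] = None
    interval: Optional[int] = None

    @validator("*")
    def check_range(cls, v):
        # Skip the check when the field was not supplied
        if v:
            if not 0 < v < 1000001:
                raise HTTPException(
                    status_code=400,
                    detail="Value Must be within range (0,1000000)")
        return v


@app.get("/fields/", status_code=200)
def get_data(params: GetInput = Depends()):
    # Placeholder: the real code would call the lookup helpers from the question
    return {"rank": params.rank, "interval": params.interval}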
6
5
68,902,555
2021-8-24
https://stackoverflow.com/questions/68902555/visual-studio-code-pylance-search-folders
I have a project with some python in it - the python is part of a larger thing - there are several programs in several directories inside a root git - and there is some common code in yet another directory. Running works fine - but pylance in visual studio code sees all of the dependencies as errors, even though most are in the current directory of the script I'm editing. If I open that subdirectory they work fine - but I really want to have visual code open at the larger project level. Is there any way to bung a file somewhere in the directory that tells pylance "here is the python search path" or something?
You can configure additional import search paths for Pylance in your settings: "python.analysis.extraPaths": [ "path1", "path2", ], Give that a try. For more information, see Pylance Settings and Customization.
5
8
68,902,836
2021-8-24
https://stackoverflow.com/questions/68902836/what-is-the-difference-between-client-side-based-sessions-and-server-side-sessio
I'm learning about sessions in Flask and in the documentation it says: "Besides the default client-side based sessions, if you want to handle sessions on the server-side instead, there are several Flask extensions that support this." https://flask.palletsprojects.com/en/2.0.x/quickstart/#sessions What is the difference between client-side based sessions and server-side sessions?
In addition to the request object there is also a second object called session which allows you to store information specific to a user from one request to the next. This is implemented on top of cookies for you and signs the cookies cryptographically. What this means is that the user could look at the contents of your cookie but not modify it, unless they know the secret key used for signing. So, the information is physically stored in the cookies, e.g. username=john is the value that's stored in the cookie. That's a "client-side session". The problem with that, as explained above, is that the user can see that data, which is bad if you want to store secret data in the session. If you need that, you need server-side sessions where the data is actually stored server-side, and all the client sees is some random meaningless session id. The session id gets stored in the cookie, and the server looks up the actual session data based on that id somewhere in some database. The advantage of client-side sessions is that the server is entirely stateless, i.e. it doesn't need to store any data itself. That means it doesn't need to do any database lookup to get the data, and you can—for example—run several independent servers in parallel without needing to worry about having a shared session store, which is great for scalability. The advantage of server-side sessions is that you can store more data, as it doesn't need to be sent back and forth with every request, and that the data is not visible to the user.
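To make the distinction concrete, here is a minimal sketch; the server-side variant assumes the third-party Flask-Session extension with its filesystem backend:

from flask import Flask, session

app = Flask(__name__)
app.secret_key = "change-me"  # used to sign the client-side session cookie


@app.route("/visit")
def visit():
    # Client-side session: this value is stored, signed, inside the cookie itself
    session["visits"] = session.get("visits", 0) + 1
    return f"visits: {session['visits']}"


# Server-side alternative (assumes: pip install Flask-Session)
# from flask_session import Session
# app.config["SESSION_TYPE"] = "filesystem"  # data is kept on the server
# Session(app)                               # the cookie now only carries a session id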
7
15
68,901,119
2021-8-24
https://stackoverflow.com/questions/68901119/module-aioredis-has-no-attribute-create-redis
Using python 3.6.12 and aioredis 2.0.0, asyncio 3.4.3 Tried to use the snippet from the aioredis for testing pub/sub: import asyncio import aioredis async def reader(ch): while (await ch.wait_message()): msg = await ch.get_json() print("Got Message:", msg) async def main(): pub = await aioredis.create_redis( 'redis://:password@localhost:6379') sub = await aioredis.create_redis( 'redis://:password@localhost:6379') res = await sub.subscribe('chan:1') ch1 = res[0] tsk = asyncio.ensure_future(reader(ch1)) res = await pub.publish_json('chan:1', ["Hello", "world"]) assert res == 1 await sub.unsubscribe('chan:1') await tsk sub.close() pub.close() if __name__ == '__main__': loop = asyncio.get_event_loop() result = loop.run_until_complete(main()) but the following error keeps popping up. Traceback (most recent call last): File "tests/test_async_redis.py", line 32, in <module> result = loop.run_until_complete(main()) File "/Users/dustinlee/.pyenv/versions/3.6.12/lib/python3.6/asyncio/base_events.py", line 488, in run_until_complete return future.result() File "tests/test_async_redis.py", line 12, in main pub = await aioredis.create_redis( AttributeError: module 'aioredis' has no attribute 'create_redis' Can anyone tell me what I am doing wrong? Probably something obvious but I'm just not seeing it. Thanks!
aioredis as of version 2.0 now follows the public API implementation of the library redis-py. From the aioredis doc page aioredis v2.0 is now a completely compliant asyncio-native implementation of redis-py. The entire core and public API has been re-written to follow redis-py‘s implementation as closely as possible. So the method aioredis.create_redis is no longer a public API you can use to establish a connection in version 2.0. Use version less than 2 if you want the create_redis method to work. You can refer the new pub sub example. Code copied here in case link does not work in future. import asyncio import async_timeout import aioredis STOPWORD = "STOP" async def pubsub(): redis = aioredis.Redis.from_url( "redis://localhost", max_connections=10, decode_responses=True ) psub = redis.pubsub() async def reader(channel: aioredis.client.PubSub): while True: try: async with async_timeout.timeout(1): message = await channel.get_message(ignore_subscribe_messages=True) if message is not None: print(f"(Reader) Message Received: {message}") if message["data"] == STOPWORD: print("(Reader) STOP") break await asyncio.sleep(0.01) except asyncio.TimeoutError: pass async with psub as p: await p.subscribe("channel:1") await reader(p) # wait for reader to complete await p.unsubscribe("channel:1") # closing all open connections await psub.close() async def main(): tsk = asyncio.create_task(pubsub()) async def publish(): pub = aioredis.Redis.from_url("redis://localhost", decode_responses=True) while not tsk.done(): # wait for clients to subscribe while True: subs = dict(await pub.pubsub_numsub("channel:1")) if subs["channel:1"] == 1: break await asyncio.sleep(0) # publish some messages for msg in ["one", "two", "three"]: print(f"(Publisher) Publishing Message: {msg}") await pub.publish("channel:1", msg) # send stop word await pub.publish("channel:1", STOPWORD) await pub.close() await publish() if __name__ == "__main__": import os if "redis_version:2.6" not in os.environ.get("REDIS_VERSION", ""): asyncio.run(main()) You can maybe also refer the redis-py docs as it is supposed to be what aioredis 2.0 now follows closely.
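As a quick reference, a minimal sketch of opening a connection and publishing with the 2.0-style API that replaces create_redis (the connection URL is an assumption; the question's Python 3.6 event-loop style is kept):

import asyncio

import aioredis  # aioredis >= 2.0


async def main():
    # Replaces the removed aioredis.create_redis(...) call
    redis = aioredis.Redis.from_url("redis://localhost:6379", decode_responses=True)
    await redis.publish("chan:1", "hello")
    await redis.close()


if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())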
8
13
68,899,057
2021-8-23
https://stackoverflow.com/questions/68899057/can-one-add-a-custom-error-in-enum-to-show-valid-values
Let's say I have from enum import Enum class SomeType(Enum): TYPEA = 'type_a' TYPEB = 'type_b' TYPEC = 'type_c' If I now do SomeType('type_a') I will get <SomeType.TYPEA: 'type_a'> as expected. When I do SomeType('type_o') I will receive ValueError: 'type_o' is not a valid SomeType which is also expected. My question is: Can one somehow easily customize the error so that it shows all valid types? So, in my case I would like to have ValueError: 'type_o' is not a valid SomeType. Valid types are 'type_a', 'type_b', 'type_c'.
Use the _missing_ method: from enum import Enum class SomeType(Enum): TYPEA = 'type_a' TYPEB = 'type_b' TYPEC = 'type_c' @classmethod def _missing_(cls, value): raise ValueError( '%r is not a valid %s. Valid types: %s' % ( value, cls.__name__, ', '.join([repr(m.value) for m in cls]), )) and in use: >>> SomeType('type_a') <SomeType.TYPEA: 'type_a'> >>> SomeType('type_o') ValueError: 'type_o' is not a valid SomeType During handling of the above exception, another exception occurred: Traceback (most recent call last): ... ValueError: 'type_o' is not a valid SomeType. Valid types: 'type_a', 'type_b', 'type_c' As you see, it's a little clunky with the exception chaining as Enum itself will raise the "primary" ValueError, with your missing error being in the chain. Depending on your needs, you can narrow the ValueError that you are raising in _missing_ to just include the valid types: from enum import Enum class SomeType(Enum): TYPEA = 'type_a' TYPEB = 'type_b' TYPEC = 'type_c' @classmethod def _missing_(cls, value): raise ValueError( 'Valid types: %s' % ( ', '.join([repr(m.value) for m in cls]), ) and in use: >>> SomeType('type_o') ValueError: 'type_o' is not a valid SomeType During handling of the above exception, another exception occurred: Traceback (most recent call last): ... ValueError: Valid types: 'type_a', 'type_b', 'type_c' Disclosure: I am the author of the Python stdlib Enum, the enum34 backport, and the Advanced Enumeration (aenum) library.
5
10
68,891,213
2021-8-23
https://stackoverflow.com/questions/68891213/how-to-decode-jwt-token-with-jwk-in-python
I am developing an application where all the API's are protected by OAuth. I have received the access token from the client, but could not decode and validate the token. I have JWK in the below format { "keys": [ { "kty": "RSA", "x5t#S256": "Some value", "e": "Some Value", "x5t": "Some Value", "kid": "SIGNING_KEY", "x5c": [ "Some Value" ], "key_ops": [ "verify", "encrypt" ], "alg": "RS256", "n": "Some Value" } ] } How to decode the JWT token using above JWK in Python?
For a quick check of your JWT token you can use https://jwt.io/. Otherwise you can try this, but you need to know the algorithm used to generate the token (e.g. HS256) and the key used for signing it (e.g. super_secretkey): import jwt # pip install pyjwt[crypto] to install the package jwt.decode(token, key='super_secretkey', algorithms=['HS256', ]) Update: decode the JWT using a JWK import json import jwt # for a JWKS that contains multiple JWKs public_keys = {} for jwk in jwks['keys']: kid = jwk['kid'] public_keys[kid] = jwt.algorithms.RSAAlgorithm.from_jwk(json.dumps(jwk)) kid = jwt.get_unverified_header(token)['kid'] key = public_keys[kid] payload = jwt.decode(token, key=key, algorithms=['RS256'])
10
24
68,877,761
2021-8-22
https://stackoverflow.com/questions/68877761/recaptcha-wasnt-solving-by-anticaptcha-plugin-in-selenium-python
I've recently started using selenium for a project I've been working on for a while that involves automation. One of the roadblocks in the plan was the ReCaptcha system, so I decided to use anti-captcha as the service that would solve the captchas when my bot encountered it. I properly installed the plugin and found some test code with selenium on their site. from python_anticaptcha import AnticaptchaClient, NoCaptchaTaskProxylessTask def captcha_solver(): api_key = 'xxxxxxxxxxxxxxxxxxxxxxx' site_key = '6LdZPw8aAAAAAA_1XrIfloCojPwo4TdJ_A_7ioRy' # grab from site url = 'https://www.rp.gob.pa/' client = AnticaptchaClient(api_key) task = NoCaptchaTaskProxylessTask(url, site_key) job = client.createTask(task) job.join() return job.get_solution_response() captcha = captcha_solver() driver.execute_script('document.getElementById("g-recaptcha-response").innerHTML = "{}";'.format(captcha)) time.sleep(1) wait.until(EC.element_to_be_clickable((By.XPATH,'//button[@type="submit"]'))).click() the anticaptcha says the recaptcha is solved, the solved code comes like this 03AGdBq24C2SwOzrdAoGpltxX-8vMuBEjwSmHVIRkVcthtqHEsmm7sEyac1vUgTZQHs7bUtK0YwW6NiduvAmXQt6xVxGRSvO1XhsiRPTfa8spSxRG6scwInLccriAV408I4plNzEykQVQya9v2u4PMyCyrVQ6NADI_A_56DuQvuzhLKuiNL-eN4MvtwEt1ueDefa3nwHUZoW-hgMiEcg1jQ4UhZJ0Ncz1favKF8aMB--Ru1-ewClN41MjyVwREHn1xuCNtnMt5rxaFLt0f5SehaFkdccem1rbCTqsb7lOomTEWpX0TiWKl2kOP9efgOJDlwV84ISncydrQseda7pTlf6nL0m_oUY8U-tnWFQi2i8g_ZWwOgrXb6o9lBapoy0-z0SWZARHKecBbfwHa906mG_b2jh9-IPOI-6rduxTnDw4HDlizXGKOU7Z8Cb8pQAhiaYEejiaBU0X2Dc44dq7CL4Q_365277zoKG4YDwgRXjUstT39e-3C_-lpjdNHMkkz9RJTNe0kOie2i3U-BruAh3trh-vM8F7JU4f8m52F335q3GdUb8FQXL7Fd9hLJpb9KfDMV0pfmRuxl5NoECKRbP2gtTTXUJ0ZwQ I execute this solved code to g-recaptcha-response textarea and says the selenium to click the button, but the result is this I cannot solve the recaptcha using anticaptcha, I don't whether my code has a problem, but I followed the official documentation to use the recaptcha. Guys please help me to solve this issue.
I've finally managed to resolve this myself. In case anyone else is struggling with a similar issue, here was my solution: Open the console and execute the following cmd: ___grecaptcha_cfg.clients Find the path which has the callback function; in my case it's ___grecaptcha_cfg.clients[0].R.R Use the following code: driver.execute_script(f"___grecaptcha_cfg.clients[0].R.R.callback('{new_token}')") (Remember to change the path accordingly.) You can get the path using the browser console: right-click the callback -> Copy property path, paste it into driver.execute_script, prefix it with ___grecaptcha_cfg., and pass the solved token value. This article will help you find the ___grecaptcha_cfg.clients of your recaptcha site. driver.execute_script('document.getElementById("g-recaptcha-response").innerHTML = "{}";'.format(g_response)) time.sleep(1) driver.execute_script(f"___grecaptcha_cfg.clients[0].R.R.callback('{g_response}')")
7
7
68,884,610
2021-8-22
https://stackoverflow.com/questions/68884610/vs-code-jupyter-notebook-doesnt-automatically-select-the-default-kernel
I have created a simple jupyter notebook in VS Code and selected it to use my default python3 kernel (/usr/local/bin/python3). Everything works great. Then, I close VS Code and re-open the notebook, it asks me to select the kernel every time. Is there a way to default the kernel of this notebook to my python3 interpreter? In case it helps, when I view the notebook json, it has the following in it: "kernelspec": { "name": "python3", "display_name": "Python 3.9.6 64-bit"
It's not available for now, but they think it is a reasonable request and are considering it. You can refer to this page.
8
4
68,843,444
2021-8-19
https://stackoverflow.com/questions/68843444/handle-permission-cache-in-django-user-model
I stumbled upon a weird behaviour: I add a permission to a user object but the permission check fails. permission = Permission.objects.get_by_natural_key(app_label='myapp', codename='my_codename', model='mymodel') user.user_permissions.add(permission) user.has_permission('myapp.my_codename') # this is False! I found some posts about user permission caching here and here and the solution seems to be to completely reload the object from the database. # Request new instance of User user = get_object_or_404(pk=user_id) # Now note how the permissions have been updated user.has_perms('myapp.my_codename') # now it's True This seems like overkill to me and very un-django-like. Is there really no way to either clear the permission cache or reload the foreign keys like you can do for an object with refresh_from_db()? Thanks in advance! Ronny
You can force the recalculation by deleting the user object's _perm_cache and _user_perm_cache. permission = Permission.objects.get_by_natural_key(app_label='myapp', codename='my_codename', model='mymodel') user.user_permissions.add(permission) user.has_permission('myapp.my_codename') # returns False del user._perm_cache del user._user_perm_cache user.has_permission('myapp.my_codename') # should be True But this will essentially hit the database again to fetch the updated permissions. Since these are based on internal workings in django and not part of the public API, these cache keys might change in the future, so I would still suggest to just fetch the user again. But that's totally up to you.
8
7
68,883,042
2021-8-22
https://stackoverflow.com/questions/68883042/how-can-i-document-methods-inherited-from-a-metaclass
Consider the following metaclass/class definitions: class Meta(type): """A python metaclass.""" def greet_user(cls): """Print a friendly greeting identifying the class's name.""" print(f"Hello, I'm the class '{cls.__name__}'!") class UsesMeta(metaclass=Meta): """A class that uses `Meta` as its metaclass.""" As we know, defining a method in a metaclass means that it is inherited by the class, and can be used by the class. This means that the following code in the interactive console works fine: >>> UsesMeta.greet_user() Hello, I'm the class 'UsesMeta'! However, one major downside of this approach is that any documentation that we might have included in the definition of the method is lost. If we type help(UsesMeta) into the interactive console, we see that there is no reference to the method greet_user, let alone the docstring that we put in the method definition: Help on class UsesMeta in module __main__: class UsesMeta(builtins.object) | A class that uses `Meta` as its metaclass. | | Data descriptors defined here: | | __dict__ | dictionary for instance variables (if defined) | | __weakref__ | list of weak references to the object (if defined) Now of course, the __doc__ attribute for a class is writable, so one solution would be to rewrite the metaclass/class definitions like so: from pydoc import render_doc from functools import cache def get_documentation(func_or_cls): """Get the output printed by the `help` function as a string""" return '\n'.join(render_doc(func_or_cls).splitlines()[2:]) class Meta(type): """A python metaclass.""" @classmethod @cache def _docs(metacls) -> str: """Get the documentation for all public methods and properties defined in the metaclass.""" divider = '\n\n----------------------------------------------\n\n' metacls_name = metacls.__name__ metacls_dict = metacls.__dict__ methods_header = ( f'Classmethods inherited from metaclass `{metacls_name}`' f'\n\n' ) method_docstrings = '\n\n'.join( get_documentation(method) for method_name, method in metacls_dict.items() if not (method_name.startswith('_') or isinstance(method, property)) ) properties_header = ( f'Classmethod properties inherited from metaclass `{metacls_name}`' f'\n\n' ) properties_docstrings = '\n\n'.join( f'{property_name}\n{get_documentation(prop)}' for property_name, prop in metacls_dict.items() if isinstance(prop, property) and not property_name.startswith('_') ) return ''.join(( divider, methods_header, method_docstrings, divider, properties_header, properties_docstrings, divider )) def __new__(metacls, cls_name, cls_bases, cls_dict): """Make a new class, but tweak `.__doc__` so it includes information about the metaclass's methods.""" new = super().__new__(metacls, cls_name, cls_bases, cls_dict) metacls_docs = metacls._docs() if new.__doc__ is None: new.__doc__ = metacls_docs else: new.__doc__ += metacls_docs return new def greet_user(cls): """Print a friendly greeting identifying the class's name.""" print(f"Hello, I'm the class '{cls.__name__}'!") class UsesMeta(metaclass=Meta): """A class that uses `Meta` as its metaclass.""" This "solves" the problem; if we now type help(UsesMeta) into the interactive console, the methods inherited from Meta are now fully documented: Help on class UsesMeta in module __main__: class UsesMeta(builtins.object) | A class that uses `Meta` as its metaclass. | | ---------------------------------------------- | | Classmethods inherited from metaclass `Meta` | | greet_user(cls) | Print a friendly greeting identifying the class's name. 
| | ---------------------------------------------- | | Classmethod properties inherited from metaclass `Meta` | | | | ---------------------------------------------- | | Data descriptors defined here: | | __dict__ | dictionary for instance variables (if defined) | | __weakref__ | list of weak references to the object (if defined) That's an awful lot of code to achieve this goal, however. Is there a better way? How does the standard library do it? I'm also curious about the way certain classes in the standard library manage this. If we have an Enum definition like so: from enum import Enum class FooEnum(Enum): BAR = 1 Then, typing help(FooEnum) into the interactive console includes this snippet: | ---------------------------------------------------------------------- | Readonly properties inherited from enum.EnumMeta: | | __members__ | Returns a mapping of member name->value. | | This mapping lists all enum members, including aliases. Note that this | is a read-only view of the internal mapping. How exactly does the enum module achieve this? The reason why I'm using metaclasses here, rather than just defining classmethods in the body of a class definition Some methods that you might write in a metaclass, such as __iter__, __getitem__ or __len__, can't be written as classmethods, but can lead to extremely expressive code if you define them in a metaclass. The enum module is an excellent example of this.
The help() function relies on dir(), which currently does not always give consistent results. This is why your method gets lost in the generated interactive documentation. There's a open python issue on this topic which explains the problem in more detail: see bugs 40098 (esp. the first bullet-point). In the meantime, a work-around is to define a custom __dir__ in the meta-class: class Meta(type): """A python metaclass.""" def greet_user(cls): """Print a friendly greeting identifying the class's name.""" print(f"Hello, I'm the class '{cls.__name__}'!") def __dir__(cls): return super().__dir__() + [k for k in type(cls).__dict__ if not k.startswith('_')] class UsesMeta(metaclass=Meta): """A class that uses `Meta` as its metaclass.""" which produces: Help on class UsesMeta in module __main__: class UsesMeta(builtins.object) | A class that uses `Meta` as its metaclass. | | Methods inherited from Meta: | | greet_user() from __main__.Meta | Print a friendly greeting identifying the class's name. This is essentially what enum does - although its implementation is obviously a little more sophisticated than mine! (The module is written in python, so for more details, just search for "__dir__" in the source code).
7
6
68,851,505
2021-8-19
https://stackoverflow.com/questions/68851505/installing-sqlalchemy-with-poetry-causes-an-attributeerrorr
When installing with pip, pip install sqlalchemy all is ok. When installing with poetry I am getting the error ➜ backend poetry add sqlalchemy Using version ^1.4.23 for SQLAlchemy Updating dependencies Resolving dependencies... (0.1s) AttributeError 'EmptyConstraint' object has no attribute 'allows' at ~/.poetry/lib/poetry/_vendor/py3.8/poetry/core/version/markers.py:291 in validate 287│ 288│ if self._name not in environment: 289│ return True 290│ → 291│ return self._constraint.allows(self._parser(environment[self._name])) 292│ 293│ def without_extras(self): # type: () -> MarkerTypes 294│ return self.exclude("extra") 295│ ➜ backend
Try poetry self update, then poetry update.
15
13
68,878,031
2021-8-22
https://stackoverflow.com/questions/68878031/is-multiprocessing-pool-not-allowed-in-airflow-task-assertionerror-daemonic
Our airflow project has a task that queries from BigQuery and uses Pool to dump in parallel to local JSON files: def dump_in_parallel(table_name): base_query = f"select * from models.{table_name}" all_conf_ids = range(1,10) n_jobs = 4 with Pool(n_jobs) as p: p.map(partial(dump_conf_id, base_query = base_query), all_conf_ids) with open("/tmp/final_output.json", "wb") as f: filenames = [f'/tmp/output_file_{i}.json' for i in all_conf_ids] This task was working fine for us in airflow v1.10, but is no longer working in v2.1+. Section 2.1 here - https://blog.mbedded.ninja/programming/languages/python/python-multiprocessing/ - mentions "If you try and create a Pool from within a child worker that was already created with a Pool, you will run into the error: daemonic processes are not allowed to have children" Here is the full Airflow error: [2021-08-22 02:11:53,064] {taskinstance.py:1462} ERROR - Task failed with exception Traceback (most recent call last): File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1164, in _run_raw_task self._prepare_and_execute_task_with_callbacks(context, task) File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1282, in _prepare_and_execute_task_with_callbacks result = self._execute_task(context, task_copy) File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1312, in _execute_task result = task_copy.execute(context=context) File "/usr/local/lib/python3.7/site-packages/airflow/operators/python.py", line 150, in execute return_value = self.execute_callable() File "/usr/local/lib/python3.7/site-packages/airflow/operators/python.py", line 161, in execute_callable return self.python_callable(*self.op_args, **self.op_kwargs) File "/usr/local/airflow/plugins/tasks/bigquery.py", line 249, in dump_in_parallel with Pool(n_jobs) as p: File "/usr/local/lib/python3.7/multiprocessing/context.py", line 119, in Pool context=self.get_context()) File "/usr/local/lib/python3.7/multiprocessing/pool.py", line 176, in __init__ self._repopulate_pool() File "/usr/local/lib/python3.7/multiprocessing/pool.py", line 241, in _repopulate_pool w.start() File "/usr/local/lib/python3.7/multiprocessing/process.py", line 110, in start 'daemonic processes are not allowed to have children' AssertionError: daemonic processes are not allowed to have children If it matters, we run airflow using the LocalExecutor. Any idea why this task that uses Pool would have been working in airflow v1.10 but no longer in airflow 2.1?
Airflow 2 uses a different processing model under the hood to speed up processing, yet maintain process-based isolation between running tasks. That's why it uses forking and multiprocessing under the hood to run tasks, but this also means that if you are using multiprocessing you will hit the limits of Python multiprocessing, which does not allow chaining multiprocessing (daemonic processes cannot have children). I am not 100% sure if it will work, but you might try to set the execute_tasks_new_python_interpreter configuration option to True. https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#execute-tasks-new-python-interpreter . This setting will cause airflow to start a new Python interpreter when running a task instead of forking/using multiprocessing (though I am not 100% sure of the latter). Running your task will be quite a bit slower (up to a few seconds of overhead), though, as the new Python interpreter will have to reinitialize and import all the airflow code before running your task. If that does not work, then you can launch your multiprocessing job using PythonVirtualenvOperator - that one will launch a new Python interpreter to run your python code and you should be able to use multiprocessing.
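A rough sketch of the PythonVirtualenvOperator route mentioned above (the task body is only a placeholder for the question's BigQuery dump, and the dag object is assumed to exist in the DAG file):

from airflow.operators.python import PythonVirtualenvOperator


def dump_in_parallel_venv():
    # Everything used here must be imported inside the callable,
    # because it runs in a separate, freshly started Python interpreter.
    from multiprocessing import Pool

    with Pool(4) as p:
        # Stand-in for the real per-conf-id dump work from the question
        print(p.map(abs, range(-4, 5)))


dump_task = PythonVirtualenvOperator(
    task_id="dump_in_parallel",
    python_callable=dump_in_parallel_venv,
    system_site_packages=True,  # reuse packages already installed for Airflow
    dag=dag,                    # assumes an existing DAG object in scope
)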
10
9
68,877,915
2021-8-22
https://stackoverflow.com/questions/68877915/airflow-alembic-util-exc-commanderror-cant-locate-revision-identified-by-a1
webserver_1 | The above exception was the direct cause of the following exception: webserver_1 | webserver_1 | Traceback (most recent call last): webserver_1 | File "/usr/local/bin/airflow", line 8, in <module> webserver_1 | sys.exit(main()) webserver_1 | File "/usr/local/lib/python3.7/site-packages/airflow/__main__.py", line 40, in main webserver_1 | args.func(args) webserver_1 | File "/usr/local/lib/python3.7/site-packages/airflow/cli/cli_parser.py", line 48, in command webserver_1 | return func(*args, **kwargs) webserver_1 | File "/usr/local/lib/python3.7/site-packages/airflow/cli/commands/db_command.py", line 31, in initdb webserver_1 | db.initdb() webserver_1 | File "/usr/local/lib/python3.7/site-packages/airflow/utils/db.py", line 549, in initdb webserver_1 | upgradedb() webserver_1 | File "/usr/local/lib/python3.7/site-packages/airflow/utils/db.py", line 684, in upgradedb webserver_1 | command.upgrade(config, 'heads') webserver_1 | File "/usr/local/lib/python3.7/site-packages/alembic/command.py", line 294, in upgrade webserver_1 | script.run_env() webserver_1 | File "/usr/local/lib/python3.7/site-packages/alembic/script/base.py", line 490, in run_env webserver_1 | util.load_python_file(self.dir, "env.py") webserver_1 | File "/usr/local/lib/python3.7/site-packages/alembic/util/pyfiles.py", line 97, in load_python_file webserver_1 | module = load_module_py(module_id, path) webserver_1 | File "/usr/local/lib/python3.7/site-packages/alembic/util/compat.py", line 184, in load_module_py webserver_1 | spec.loader.exec_module(module) webserver_1 | File "<frozen importlib._bootstrap_external>", line 728, in exec_module webserver_1 | File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed webserver_1 | File "/usr/local/lib/python3.7/site-packages/airflow/migrations/env.py", line 108, in <module> webserver_1 | run_migrations_online() webserver_1 | File "/usr/local/lib/python3.7/site-packages/airflow/migrations/env.py", line 102, in run_migrations_online webserver_1 | context.run_migrations() webserver_1 | File "<string>", line 8, in run_migrations webserver_1 | File "/usr/local/lib/python3.7/site-packages/alembic/runtime/environment.py", line 813, in run_migrations webserver_1 | self.get_context().run_migrations(**kw) webserver_1 | File "/usr/local/lib/python3.7/site-packages/alembic/runtime/migration.py", line 549, in run_migrations webserver_1 | for step in self._migrations_fn(heads, self): webserver_1 | File "/usr/local/lib/python3.7/site-packages/alembic/command.py", line 283, in upgrade webserver_1 | return script._upgrade_revs(revision, rev) webserver_1 | File "/usr/local/lib/python3.7/site-packages/alembic/script/base.py", line 370, in _upgrade_revs webserver_1 | for script in reversed(list(revs)) webserver_1 | File "/usr/local/lib/python3.7/contextlib.py", line 130, in __exit__ webserver_1 | self.gen.throw(type, value, traceback) webserver_1 | File "/usr/local/lib/python3.7/site-packages/alembic/script/base.py", line 203, in _catch_revision_errors webserver_1 | compat.raise_(util.CommandError(resolution), from_=re) webserver_1 | File "/usr/local/lib/python3.7/site-packages/alembic/util/compat.py", line 296, in raise_ webserver_1 | raise exception webserver_1 | alembic.util.exc.CommandError: Can't locate revision identified by 'a13f7613ad25' This error is currently causing us all sorts of frustrations with our Airflow deployment. We run airflow in docker and, oddly enough, this issue is happening for myself but not my coworkers, making it quite challenging to debug. 
We found "alembic util command error can't find identifier", but it is not super clear what we can do to resolve the error.
You should wipe your database and recreate it from scratch (airflow db reset). Apparently the database you have has been corrupted - this could have happened if you used some development version of Airflow, or ran an older Airflow 1.10 against an Airflow 2 database or the other way round. I presume (since you are talking about your co-worker's database and yours) that those are all development databases, so you should be able to reset them from scratch. If this is a development database which uses sqlite, it can also help to delete the sqlite3 file (you will find it in the ${AIRFLOW_HOME} directory). This will drop the database and airflow will create a new one from scratch automatically when starting.
7
16
68,876,560
2021-8-21
https://stackoverflow.com/questions/68876560/issue-installing-python-3-8-using-pyenv
I tried installing python using the command pyenv install 3.8.11 Please let me know if you need more info. Thank you for looking. output: BUILD FAILED (Ubuntu 20.04 using python-build 20180424) Inspect or clean up the working tree at /tmp/python-build.20210821132713.23441 Results logged to /tmp/python-build.20210821132713.23441.log Last 10 log lines: File "/tmp/python-build.20210821132713.23441/Python-3.8.11/Lib/ensurepip/__init__.py", line 206, in _main return _bootstrap( File "/tmp/python-build.20210821132713.23441/Python-3.8.11/Lib/ensurepip/__init__.py", line 125, in _bootstrap return _run_pip(args + [p[0] for p in _PROJECTS], additional_paths) File "/tmp/python-build.20210821132713.23441/Python-3.8.11/Lib/ensurepip/__init__.py", line 34, in _run_pip return subprocess.run([sys.executable, "-c", code], check=True).returncode File "/tmp/python-build.20210821132713.23441/Python-3.8.11/Lib/subprocess.py", line 516, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['/tmp/python-build.20210821132713.23441/Python-3.8.11/python', '-c', '\nimport runpy\nimport sys\nsys.path = [\'/tmp/tmp7l0fqi9l/setuptools-56.0.0-py3-none-any.whl\', \'/tmp/tmp7l0fqi9l/pip-21.1.1-py3-none-any.whl\'] + sys.path\nsys.argv[1:] = [\'install\', \'--no-cache-dir\', \'--no-index\', \'--find-links\', \'/tmp/tmp7l0fqi9l\', \'--root\', \'/\', \'--upgrade\', \'setuptools\', \'pip\']\nrunpy.run_module("pip", run_name="__main__", alter_sys=True)\n']' returned non-zero exit status 1. make: *** [Makefile:1198: install] Error 1 log: Traceback (most recent call last): File "<string>", line 6, in <module> File "/tmp/python-build.20210821140419.301/Python-3.8.11/Lib/runpy.py", line 203, in run_module mod_name, mod_spec, code = _get_module_details(mod_name) File "/tmp/python-build.20210821140419.301/Python-3.8.11/Lib/runpy.py", line 144, in _get_module_details return _get_module_details(pkg_main_name, error) File "/tmp/python-build.20210821140419.301/Python-3.8.11/Lib/runpy.py", line 111, in _get_module_details __import__(pkg_name) File "<frozen zipimport>", line 241, in load_module File "<frozen zipimport>", line 709, in _get_module_code File "<frozen zipimport>", line 570, in _get_data zipimport.ZipImportError: can't decompress data; zlib not available Traceback (most recent call last): File "/tmp/python-build.20210821140419.301/Python-3.8.11/Lib/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/tmp/python-build.20210821140419.301/Python-3.8.11/Lib/runpy.py", line 87, in _run_code exec(code, run_globals) File "/tmp/python-build.20210821140419.301/Python-3.8.11/Lib/ensurepip/__main__.py", line 5, in <module> sys.exit(ensurepip._main()) File "/tmp/python-build.20210821140419.301/Python-3.8.11/Lib/ensurepip/__init__.py", line 206, in _main return _bootstrap( File "/tmp/python-build.20210821140419.301/Python-3.8.11/Lib/ensurepip/__init__.py", line 125, in _bootstrap return _run_pip(args + [p[0] for p in _PROJECTS], additional_paths) File "/tmp/python-build.20210821140419.301/Python-3.8.11/Lib/ensurepip/__init__.py", line 34, in _run_pip return subprocess.run([sys.executable, "-c", code], check=True).returncode File "/tmp/python-build.20210821140419.301/Python-3.8.11/Lib/subprocess.py", line 516, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['/tmp/python-build.20210821140419.301/Python-3.8.11/python', '-c', '\nimport runpy\nimport sys\nsys.path = 
[\'/tmp/tmp1imhbulh/setuptools-56.0.0-py3-none-any.whl\', \'/tmp/tmp1imhbulh/pip-21.1.1-py3-none-any.whl\'] + sys.path\nsys.argv[1:] = [\'install\', \'--no-cache-dir\', \'--no-index\', \'--find-links\', \'/tmp/tmp1imhbulh\', \'--root\', \'/\', \'--upgrade\', \'setuptools\', \'pip\']\nrunpy.run_module("pip", run_name="__main__", alter_sys=True)\n']' returned non-zero exit status 1. make: *** [Makefile:1198: install] Error 1
pyenv requires some packages to build Python from source. For Ubuntu, from pyenv wiki: suggested build environment sudo apt-get install make build-essential libssl-dev zlib1g-dev \ libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm \ libncursesw5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev
5
15
68,869,020
2021-8-20
https://stackoverflow.com/questions/68869020/valueerror-y-must-be-a-structured-array-with-the-first-field-being-a-binary-cla
It appears that I have my code in the same form as the scikit-survival documentation. data_y = df[['sensored', 'sensored_2']].to_numpy() data_x = df.drop(['sensored', 'sensored_2'], axis = 1) data_y array([[True, 481], [True, 424], [True, 519], ..., [True, 13], [True, 96], [True, 6]], dtype=object) From the scikit-survial documentation the array was created from the dataset upon loading. I am trying to create my array from a dataframe, but continue to get the error in the title when I try to fit the array to the model. sksurv.linear_model import CoxPHSurvivalAnalysis estimator = CoxPHSurvivalAnalysis() estimator.fit(df_dummy_3, data_y) ValueError: y must be a structured array with the first field being a binary class event indicator and the second field the time of the event/censoring Documentation: from sksurv.datasets import load_veterans_lung_cancer data_x, data_y = load_veterans_lung_cancer() data_y array([( True, 72.), ( True, 411.), ( True, 228.), ( True, 126.), ( True, 118.), ( True, 10.), ( True, 82.), ( True, 110.), ( True, 314.), (False, 100.), ( True, 42.), ( True, 8.), ( True, 144.), (False, 25.), ( True, 11.), ( True, 30.), ( True, 384.), ( True, 4.), ( True, 54.), ( True, 13.), (False, 123.), (False, 97.), ( True, 153.), ( True, 59.), ( True, 117.), ( True, 16.), ( True, 151.), ( True, 22.), ( True, 56.), ( True, 21.), ( True, 18.), ( True, 139.), ( True, 20.), ( True, 31.), ( True, 52.), ( True, 287.), ( True, 18.), ( True, 51.), ( True, 122.), ( True, 27.), ( True, 54.), ( True, 7.), ( True, 63.), ( True, 392.), ( True, 10.), ( True, 8.), ( True, 92.), ( True, 35.), ( True, 117.), ( True, 132.), ( True, 12.), ( True, 162.), ( True, 3.), ( True, 95.), ( True, 177.), ( True, 162.), ( True, 216.), ( True, 553.), ( True, 278.), ( True, 12.), ( True, 260.), ( True, 200.), ( True, 156.), (False, 182.), ( True, 143.), ( True, 105.), ( True, 103.), ( True, 250.), ( True, 100.), ( True, 999.), ( True, 112.), (False, 87.), (False, 231.), ( True, 242.), ( True, 991.), ( True, 111.), ( True, 1.), ( True, 587.), ( True, 389.), ( True, 33.), ( True, 25.), ( True, 357.), ( True, 467.), ( True, 201.), ( True, 1.), ( True, 30.), ( True, 44.), ( True, 283.), ( True, 15.), ( True, 25.), (False, 103.), ( True, 21.), ( True, 13.), ( True, 87.), ( True, 2.), ( True, 20.), ( True, 7.), ( True, 24.), ( True, 99.), ( True, 8.), ( True, 99.), ( True, 61.), ( True, 25.), ( True, 95.), ( True, 80.), ( True, 51.), ( True, 29.), ( True, 24.), ( True, 18.), (False, 83.), ( True, 31.), ( True, 51.), ( True, 90.), ( True, 52.), ( True, 73.), ( True, 8.), ( True, 36.), ( True, 48.), ( True, 7.), ( True, 140.), ( True, 186.), ( True, 84.), ( True, 19.), ( True, 45.), ( True, 80.), ( True, 52.), ( True, 164.), ( True, 19.), ( True, 53.), ( True, 15.), ( True, 43.), ( True, 340.), ( True, 133.), ( True, 111.), ( True, 231.), ( True, 378.), ( True, 49.)], dtype=[('Status', '?'), ('Survival_in_days', '<f8')]) I have been trying to change the datatype of my array above without luck. Any help would be greatly appreciated.
The fit method expects the y data to be a structured array. In our case, this is an array of Tuples, where the first element is the status and second one is the survival in days. To put our data in the format the fit method expects, we need first transform the elements of the array from lists (e.g. [True, 424]) to tuples (e.g. (True, 424)). After that, we can group our tuples in the structured array. Below I will show an example: Suppose we have the following dataframe: df = pd.DataFrame([[True, 123.],[True, 481.], [True, 424.], [True, 519.]], columns=['sensored', 'sensored_2']) And we get y as you did: data_y = df[['sensored', 'sensored_2']].to_numpy() data_y array([[True, 123.0], [True, 481.0], [True, 424.0], [True, 519.0]], dtype=object) One way of putting the data_y in the format we expect is to create a list of tuples using its elements and, after that, create a structured array: #List of tuples aux = [(e1,e2) for e1,e2 in data_y] #Structured array new_data_y = np.array(aux, dtype=[('Status', '?'), ('Survival_in_days', '<f8')]) new_data_y array([( True, 123.), ( True, 481.), ( True, 424.), ( True, 519.)], dtype=[('Status', '?'), ('Survival_in_days', '<f8')]) Then, you can use that data to fit your model: from sksurv.linear_model import CoxPHSurvivalAnalysis estimator = CoxPHSurvivalAnalysis() estimator.fit(df_dummy, new_data_y)
7
4
68,873,111
2021-8-21
https://stackoverflow.com/questions/68873111/how-do-i-pull-the-last-modified-time-of-each-file-within-a-directory-in-python
I have been tasked with creating a small application that allows the user to: browse and choose a specific folder that will contain the files to be checked daily, browse and choose a specific folder that will receive the copied files, and manually initiate the 'file check' process that is performed by the script. In order to do that, I need to get the last modified time and add an if statement that checks if the file is over 24hrs old, then it will move the file over to the correct folder. I think I know how to do all that. Where I'm running into issues is with the os.path.getmtime method. As far as i understand it, I cannot get the modified time of multiple files. I have to name the file specifically. So the '../File Transfer/Old Files/' doesnt work. I keep getting the same error: FileNotFoundError: [WinError 2] The system cannot find the file specified: 'LICENSE-MIT.txt' (that's the first txt file in the Old Files folder) if I change the code to: x = os.path.getmtime(../File Transfer/Old Files/LICENSE-MIT.txt) It works, but with one complication. It gives me the time in seconds which is in the 449 billion range. I'm able to convert that number to a usable number by using this code: y = int(x) z = y//60//60 print(z) This keeps it as an int and not a float, and converts it from seconds to hours. Then I would just do a >=24 if statement to see if it's older than a day. HOWEVER, this is still just one the single file basis, which really doesnt work for what i want the program to do. I want it to do this with all files in a folder, regardless of what the names of those files are, or how many of them there are. ANY help would be super appreciated! Here's the script I'm using to iterate the folder and get each file mod time individually: for file in os.listdir('../File Transfer/Old Files/'): if file.endswith ('.txt'): time_mod = os.path.getmtime(file) print(time_mod) Ignore the last line. That was just for me so see the results for troubleshooting.
The os.listdir() method lists the files of the given path excluding the path, hence you will need to concatenate the path yourself: for file in os.listdir('../File Transfer/Old Files/'): if file.endswith('.txt'): time_mod = os.path.getmtime('../File Transfer/Old Files/' + file) print(time_mod) The glob.glob() method works great in cases like this, and it already returns the path together with the file name: import os import glob for file in glob.glob('../File Transfer/Old Files/*.txt'): time_mod = os.path.getmtime(file) print(time_mod) You can get the amount of hours passed since the last modification of each file like so: import os from time import time PATH = '../File Transfer/Old Files/' for file in os.listdir(PATH): if file.endswith('.txt'): time_mod = time() - os.path.getmtime(PATH + file) print(time_mod // 3600)
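Putting the pieces together for the original goal (moving .txt files that have not been modified in the last 24 hours), a sketch with an assumed destination folder:

import os
import shutil
from time import time

SRC = '../File Transfer/Old Files/'
DST = '../File Transfer/New Files/'  # assumed destination folder

for name in os.listdir(SRC):
    if not name.endswith('.txt'):
        continue
    path = os.path.join(SRC, name)
    age_hours = (time() - os.path.getmtime(path)) / 3600
    if age_hours >= 24:
        shutil.move(path, os.path.join(DST, name))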
5
4
68,872,321
2021-8-21
https://stackoverflow.com/questions/68872321/how-to-run-shell-script-on-specific-git-commit
I want to run run.sh to check the results of my codes, which takes about 30 hours. But I can't wait to add other features to my codes. However, I found there are some potential dangers: Edit shell script while it's running Edit shell script and python script while it's running In my case, I want run.sh running with all files are fixed while I keep editing more features on python files, which may have bugs. For example, here is my git log * commit 6d1931c0f5f6f7030f1a93d4f7496527c3711c1f (HEAD -> master) | Author: deeperlearner | Date: Fri Aug 13 10:37:21 2021 +0800 | | Version to be run by shell script | How can I force run.sh run on commit 6d1931 while I keep doing commits? * commit 059706b752f4702bc5d0c830c87e812a7bbaae27 (HEAD -> master) | Author: deeperlearner | Date: Fri Aug 13 10:38:21 2021 +0800 | | Buggy commit that may destroy `run.sh` | * commit 6d1931c0f5f6f7030f1a93d4f7496527c3711c1f | Author: deeperlearner | Date: Fri Aug 13 10:37:21 2021 +0800 | | Version to be run by shell script | I currently have solutions: use docker or just copy the whole codes to other directory. But I wonder if there is any method that can force shell script to execute on specific commit hash ID?
This is one use case for which git worktree is well suited. You basically create a new branch at the desired commit and make a copy of the working directory at that branch. For example, if you're at the top-level of your working tree and want to run the script on the current HEAD, just do: $ git worktree add ../bar Preparing worktree (new branch 'bar') HEAD is now at b2ceba9 Add d $ cd ../bar $ # Run long running script here $ cd ../foo $ # Edit files, create new commits, profit (In this example, foo is the name of the top level directory of the repo.) There are options to worktree to work with a particular commit rather than HEAD. Read the fine manual for details.
5
7
68,869,535
2021-8-21
https://stackoverflow.com/questions/68869535/numpy-accumulate-greater-operation
I'm trying to write a function that would detect all rising edges - indexes in a vector where value exceeds certain threshold. Something similar is described here: Python rising/falling edge oscilloscope-like trigger, but I want to add hysteresis, so that trigger won't fire unless the value goes below another limit. I came up the following code: import numpy as np arr = np.linspace(-10, 10, 60) sample_values = np.sin(arr) + 0.6 * np.sin(arr*3) above_trigger = sample_values > 0.6 below_deadband = sample_values < 0.0 combined = 1 * above_trigger - 1 * below_deadband Now in the combined array there is 1 where the original value was above upper limit, -1 where the value was below lower limit and 0 where the value was in between: >>> combined array([ 1, 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, -1, -1, -1, -1, -1, -1, -1, -1, -1, 0, 1, 1, 1, 0, 0, 1, 1, 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 1, 1, 0, 0, 1, 1, 1, 0, -1, -1]) My idea was to use some clever function that would process this vector sequentially and replace all sequences of zeros with whatever non-zero value was preceding them. Then the problem would boil down to simply finding where the value changes from -1 to 1. I though that greater operation would fulfill this purpose if used correctly: -1 encoded as True and 1 as False: (True ("-1") > -1) -> True ("-1") (True ("-1") > 1) -> False ("1") (True ("-1") > 0) -> True ("-1") (False ("1") > -1) -> True ("-1") (False ("1") > 1) -> False ("1") (False ("1") > 0) -> False ("1") But the results are not what I expect: >>> 1 - 2 * np.greater.accumulate(combined) array([-1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) It seems that the greater function doesn't correctly compare booleans with numeric values in this scenario, even though it works fine when used on scalars or pair-wise: >>> np.greater(False, -1) True >>> np.greater.outer(False, combined) array([False, False, True, True, True, True, True, True, True, True, True, False, False, False, False, False, False, False, False, False, False, True, True, True, True, True, True, True, True, True, False, False, False, False, False, False, False, False, False, True, True, True, True, True, True, True, True, True, True, False, False, False, False, False, False, False, False, False, True, True]) Is this expected behavior? Am I doing something wrong here, is there any way around this? Alternatively, maybe you could suggest another approach to this problem? Thank you.
I am not sure what the issue with np.greater.accumulate is (it does not seem to behave as advertised indeed), but the following should work: import numpy as np import numpy as np arr = np.linspace(-10, 10, 60) sample_values = np.sin(arr) + 0.6 * np.sin(arr*3) above_trigger = sample_values > 0.6 below_deadband = sample_values < 0.0 combined = 1 * above_trigger - 1 * below_deadband mask = combined != 0 idx = np.where(mask,np.arange(len(mask)),0) idx = np.maximum.accumulate(idx) result = combined[idx] print(f"combined:\n {combined}\n") print(f"result:\n {result}") It gives: combined: [ 1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 0 1 1 1 0 0 1 1 1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 0 1 1 1 0 0 1 1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 1 1 1 0 0 1 1 1 0 -1 -1] result: [ 1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 1 1 1 1 1 1 1 1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 1 1 1 1 1 1 1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 1 1 1 1 1 1 1 1 1 -1 -1] Then the indices where values jumps from -1 to 1 can be obtained as follows: np.nonzero(result[1:] > result[:-1])[0] + 1 It gives: array([12, 31, 49])
5
2
68,869,633
2021-8-21
https://stackoverflow.com/questions/68869633/error-message-received-when-trying-to-import-turtle-python-3-9-m1-mac
When I try to import the python 3 graphical library, turtle, on my m1 MacBook, I get an error message: david@Davids-MacBook-Air Python Coding Files % /opt/homebrew/bin/python3 "/Users/david/Desktop/Python Coding Files/hello.py" Traceback (most recent call last): File "/Users/david/Desktop/Python Coding Files/hello.py", line 1, in <module> import turtle File "/opt/homebrew/Cellar/[email protected]/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/turtle.py", line 107, in <module> import tkinter as TK File "/opt/homebrew/Cellar/[email protected]/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/tkinter/__init__.py", line 37, in <module> import _tkinter # If this fails your Python may not be configured for Tk ModuleNotFoundError: No module named '_tkinter' I see it has the library tkinter written in it for some reason. This is weird because on my other Mac, an intel based Mac, with the same python 3 version, and no library such as tkinter install, I didnt get an error message when trying to import turtle Does anybody know what the problem is?
Python 3's turtle module has always used tkinter. Your other Mac must have tkinter installed. It used to be shipped with Python, but now you need to install it. Assuming you use HomeBrew: brew install python-tk
5
6
68,864,939
2021-8-20
https://stackoverflow.com/questions/68864939/s3fs-suddenly-stopped-working-in-google-colab-with-error-attributeerror-module
Yesterday the following cell sequence in Google Colab would work. (I am using colab-env to import environment variables from Google Drive.) This morning, when I run the same code, I get the following error. It appears to be a new issue with s3fs and aiobotocore. I have some experience with Google Colab and library version dependency issues that I have previously solved by upgrading libraries in a particular order: !pip install --upgrade library_name But I am a bit stuck this morning with this one. It is affecting all of my Google Colab notebooks so I thought that perhaps it is affecting others who are using data stored in Amazon AWS S3 with Google Colab. The version of s3fs that gets installed is 2021.07.0, which appears to be the latest.
Indeed, the breakage was with the release of aiobotocore 1.4.0 (today, 20 Aug 2021), which is fixed in release 2021.08.0 of s3fs, also today.
14
16
68,860,457
2021-8-20
https://stackoverflow.com/questions/68860457/fitfailedwarning-estimator-fit-failed-the-score-on-this-train-test-partition-f
I'm trying to optimize the parameters learning rate and max_depth of a XGB regression model: from sklearn.model_selection import GridSearchCV from sklearn.model_selection import cross_val_score from xgboost import XGBRegressor param_grid = [ # trying learning rates from 0.01 to 0.2 {'eta ':[0.01, 0.05, 0.1, 0.2]}, # and max depth from 4 to 10 {'max_depth': [4, 6, 8, 10]} ] xgb_model = XGBRegressor(random_state = 0) grid_search = GridSearchCV(xgb_model, param_grid, cv=5, scoring='neg_root_mean_squared_error', return_train_score=True) grid_search.fit(final_OH_X_train_scaled, y_train) final_OH_X_train_scaled is the training dataset that contains only numerical features. y_train is the training label - also numerical. This is returning the error: FitFailedWarning: Estimator fit failed. The score on this train-test partition for these parameters will be set to nan. I've seen other similar questions, but couldn't find an answer yet. Also tried with: param_grid = [ # trying learning rates from 0.01 to 0.2 # and max depth from 4 to 10 {'eta ': [0.01, 0.05, 0.1, 0.2], 'max_depth': [4, 6, 8, 10]} ] But it generates the same error. EDIT: Here's a sample of the data: final_OH_X_train_scaled.head() y_train.head() EDIT2: The data sample may be retrieved with: final_OH_X_train_scaled = pd.DataFrame([[0.540617 ,1.204666 ,1.670791 ,-0.445424 ,-0.890944 ,-0.491098 ,0.094999 ,1.522411 ,-0.247443 ,-0.559572 ,0.0 ,0.0 ,0.0 ,0.0 ,0.0 ,0.0 ,0.0 ,1.0 ,0.0 ,0.0], [0.117467 ,-2.351903 ,0.718969 ,-0.119721 ,-0.874705 ,-0.530832 ,-1.385230 ,2.126612 ,-0.947731 ,-0.156967 ,0.0 ,0.0 ,0.0 ,0.0 ,0.0 ,1.0 ,0.0 ,0.0 ,0.0 ,0.0], [0.901138 ,-0.208256 ,-0.019134 ,0.265250 ,-0.889128 ,-0.467753 ,0.169306 ,-0.973256 ,0.056164 ,-0.671978 , 0.0 ,0.0 ,0.0 ,0.0 ,0.0 ,0.0 ,0.0 ,1.0 ,0.0 ,0.0], [2.074639 ,0.100602 ,-1.645121 ,0.929598 ,0.811911 ,1.364560 ,0.337242 ,0.435187 ,-0.388075 ,1.279959 , 0.0 ,0.0 ,0.0 ,0.0 ,0.0 ,0.0 ,0.0 ,0.0 ,0.0 ,1.0], [2.198099 ,-0.496254 ,-0.917933 ,-1.418407 ,-0.975889 ,1.044495 ,0.254181 ,1.335285 ,2.079415 ,2.071974 , 0.0 ,0.0 ,0.0 ,0.0 ,0.0 ,1.0 ,0.0 ,0.0 ,0.0 ,0.0]], columns=['cont0' ,'cont1' ,'cont2' ,'cont3' ,'cont4' ,'cont5' ,'cont6' ,'cont7' ,'cont8' ,'cont9' ,'31' ,'32' ,'33' ,'34' ,'35' ,'36' ,'37' ,'38' ,'39' ,'40'])
I was able to reproduce the problem and the code fails to fit because there is an extra space in your eta parameter! Instead of this: {'eta ':[0.01, 0.05, 0.1, 0.2]},... Change it to this: {'eta':[0.01, 0.05, 0.1, 0.2]},... The error message was unfortunately not very helpful.
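For reference, a sketch of the corrected search from the question with the stray space removed (the training data variables are the question's own):

from sklearn.model_selection import GridSearchCV
from xgboost import XGBRegressor

param_grid = [
    {'eta': [0.01, 0.05, 0.1, 0.2],   # note: no trailing space in the key
     'max_depth': [4, 6, 8, 10]}
]

xgb_model = XGBRegressor(random_state=0)
grid_search = GridSearchCV(xgb_model, param_grid, cv=5,
                           scoring='neg_root_mean_squared_error',
                           return_train_score=True)
# grid_search.fit(final_OH_X_train_scaled, y_train)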
5
4
68,864,520
2021-8-20
https://stackoverflow.com/questions/68864520/pandas-cell-frequency-count-by-index
My dataframe is a long list of 4 letters, 'A', 'T', 'G','C', I need to count the frequency of each letter by index df = pd.DataFrame({'cases': ['ACCTTGTAGTGTATTTTATGACCAAATGACTTTTTCCCCCCAGTGGCTAATTTGTCTCAGGCCTGCGTCTTAAAGAGACACGGTAATGAGTAGGAAGTCCAGCGTGGTCTGGA','ACCTTGTACTGTATCTTATGACCAGATGACTTTTTCCACCCAGTGGCTAATTTGTCTCAGGCCTCCGTCTTAAAGAGACACGGTAATGAGTAGGAAGTCCAACGTGGTCTAGA','GCCTTGTACTGTATATTATGACCAAATGACTTTTTCCACCCATTGGCTAATTTGTCTCAGGCCTCCGTCTTAAAGAGACACGGAAATGAGTAGGAAGTCCAGCGTGGTCTAGA','ACCTTGTACTGTATATTATGACCAGATGACTTTTTCCACCCAGTGGCTAATTTGTCTCAGGCCTCCGTCTTAAAGAGACACGGTAATGAGTAGGAAGTCCAGCGTGGTCTAGA']}) cases 0 ACCTTGTAGTGTATTTTATGACCAAATGACTTTTTCCCCCCAGTGG... 1 ACCTTGTACTGTATCTTATGACCAGATGACTTTTTCCACCCAGTGG... 2 GCCTTGTACTGTATATTATGACCAAATGACTTTTTCCACCCATTGG... 3 ACCTTGTACTGTATATTATGACCAGATGACTTTTTCCACCCAGTGG... 4 ACCTTGTACTGTATATTATGACCAGATGACTTTTTCCACCCAGTGG... 5 ACCTTGTAGTGTATTTTATGACCAAATGACTTTTTCCCCCCAGTGG... 6 ACCTTGTACTGTATCTTATGACCAGATGACTTTTTCCACCCAGTGG... 7 GCCTTGTACTGTATATTATGACCAAATGACTTTTTCCACCCATTGG... 8 ACCTTGTACTGTATATTATGACCAGATGACTTTTTCCACCCAGTGG... 9 ACCTTGTACTGTATATTATGACCAGATGACTTTTTCCACCCAGTGG... The result would be a new df of shape 4x113, i cannot figure out a pandas way to do this. Below is my non-pandas solution def freq_lists(dna_list): n = len(dna_list[0]) A = [0]*n T = [0]*n G = [0]*n C = [0]*n for dna in dna_list: for index, base in enumerate(dna): if base == 'A': A[index] += 1 elif base == 'C': C[index] += 1 elif base == 'G': G[index] += 1 elif base == 'T': T[index] += 1 return {'A': A, 'C': C, 'G': G, 'T': T} fdf = pd.DataFrame(freq_lists(df['cases'].to_list())) A C G T 0 3 0 1 0 1 0 4 0 0 2 0 4 0 0 3 0 0 0 4 4 0 0 0 4 .. .. .. .. .. 108 0 4 0 0 109 0 0 0 4 110 3 0 1 0 111 0 0 4 0 112 4 0 0 0 To clarify the first row is obtained by summing up the counts of the first str in the case column which is AAGA -> A: 3, C:0, G:1 T:0
Use collections.Counter: from collections import Counter df['cases'].apply(lambda x: pd.Series(Counter(x))) output: A C T G 0 27 24 34 28 1 29 26 33 25 2 30 25 33 25 3 29 25 33 26 The other way around is not as sexy: pd.DataFrame([Counter(i) for i in list(zip(*df['cases'].apply(list).values))] ).fillna(0).astype(int) or (df['cases'].apply(lambda x: pd.Series(list(x))) .apply(pd.value_counts) .T.fillna(0).astype(int) ) output: A G C T 0 3 1 0 0 1 0 0 4 0 2 0 0 4 0 ... 111 0 4 0 0 112 4 0 0 0
9
2
68,863,050
2021-8-20
https://stackoverflow.com/questions/68863050/pyinstaller-loading-splash-screen
Pyinstaller recently added a splash screen option (yay!) but the splash stays open the entire time the exe is running. I need it because my file opens very slowly and I want to warn the user not to close the window. Is there a way I can get the splash screen to close when the gui opens?
from pyinstaller docs: import pyi_splash # Update the text on the splash screen pyi_splash.update_text("PyInstaller is a great software!") pyi_splash.update_text("Second time's a charm!") # Close the splash screen. It does not matter when the call # to this function is made, the splash screen remains open until # this function is called or the Python program is terminated. pyi_splash.close()
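A minimal sketch of the timing the question is really about, assuming a tkinter GUI (any toolkit works the same way): import pyi_splash only succeeds inside the PyInstaller-built exe, so guard the import and call close() as soon as the main window exists.
import tkinter as tk

try:
    import pyi_splash            # only importable in the frozen (PyInstaller) build
except ImportError:
    pyi_splash = None

root = tk.Tk()                   # slow start-up work happens before/around this point
if pyi_splash:
    pyi_splash.close()           # splash disappears the moment the GUI is up
root.mainloop()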
12
5
68,860,386
2021-8-20
https://stackoverflow.com/questions/68860386/how-can-we-get-token-holders-from-token
I have created my own ERC-20 token (AJR) and deployed it on an Ethereum private node. Now I want to list all the transactions by token name. Also, I need to list all the token holders by using the contract address or token name. I tried to fetch them using web3, but I only get the symbol, name, total supply, etc., but not the token holders or transactions. Below is my sample code: from web3 import Web3 Web3 = Web3(Web3.HTTPProvider('http://127.0.0.1:8545')) contract_instance = Web3.eth.contract(contract_address, abi=abi) print(contract_instance.functions.name().call())
Token holders are not directly available through the RPC protocol and RPC wrappers such as Web3. Information about token holders is stored on the blockchain in the token contract (or some of its dependencies), usually in the form of a mapping. Which means that you can't just loop through all of the holders, but you need to know the address and ask for their balance. // the key is the holder address, the value is their token amount mapping (address => uint256) public balanceOf; But - the ERC-20 standard defines the Transfer() event that the token contract should emit when a transfer occurs. mapping (address => uint256) public balanceOf; event Transfer(address indexed _from, address indexed _to, uint256 _amount); function transfer(address _to, uint256 _amount) external returns (bool) { balanceOf[msg.sender] -= _amount; balanceOf[_to] += _amount; emit Transfer(msg.sender, _to, _amount); return true; } So you'll need to build and maintain a database of holders from all Transfer() event logs emitted by this token contract. Collect past event logs to build the historic data, and subscribe to newly emitted logs to keep it up-to-date. Then you can aggregate all of this raw transfer data to the form of "address => current balance" and filter only addresses that have non-zero balance in your searchable DB. Docs: Get past event logs in Web3 - link Subscribe to new event logs in Web3 - link The same way is actually used by blockchain explorers. They scan each transaction for Transfer() events and if the emitter is a token contract, they update the token balances in their separate DB. The list of all holders (from this separate DB) is then displayed on the token detail page.
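A hedged web3.py sketch of the approach described above. It reuses contract_address and abi from the question, the event argument names (_from/_to/_amount here) must match whatever your token's ABI actually declares, and on web3.py v6 the filter method is create_filter rather than createFilter.
from collections import defaultdict
from web3 import Web3

w3 = Web3(Web3.HTTPProvider('http://127.0.0.1:8545'))
token = w3.eth.contract(address=contract_address, abi=abi)   # same objects as in the question

# scan all past Transfer() logs (fine on a private node; use block ranges on a busy chain)
transfers = token.events.Transfer.createFilter(fromBlock=0, toBlock='latest').get_all_entries()

balances = defaultdict(int)
for ev in transfers:
    args = ev['args']
    balances[args['_from']] -= args['_amount']
    balances[args['_to']] += args['_amount']

holders = {addr: bal for addr, bal in balances.items() if bal > 0}
print(len(holders), 'holders')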
8
19
68,861,402
2021-8-20
https://stackoverflow.com/questions/68861402/how-to-change-size-of-confusion-matrix-in-fastai
I am drawing a Confusion Matrix in fastai with the following code: interp = ClassificationInterpretation.from_learner(learn) interp.plot_confusion_matrix() But I end up with a super small matrix because I have around 20 categories. I have found the related question for sklearn but don't know how to apply it to fastai (because we don't use pyplot directly).
If you check the code of the function ClassificationInterpretation.plot_confusion_matrix (in file fastai / interpret.py), this is what you see: def plot_confusion_matrix(self, normalize=False, title='Confusion matrix', cmap="Blues", norm_dec=2, plot_txt=True, **kwargs): "Plot the confusion matrix, with `title` and using `cmap`." # This function is mainly copied from the sklearn docs cm = self.confusion_matrix() if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] fig = plt.figure(**kwargs) plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) tick_marks = np.arange(len(self.vocab)) plt.xticks(tick_marks, self.vocab, rotation=90) plt.yticks(tick_marks, self.vocab, rotation=0) The key here is the line fig = plt.figure(**kwargs), so basically, the function plot_confusion_matrix will propagate its parameters to the plot function. So you could use either one of these: dpi=xxx (e.g. dpi=200) figsize=(xxx, yyy) See this post on StackOverflow about the relations they have with each other: https://stackoverflow.com/a/47639545/1603480 So in your case, you could just do: interp.plot_confusion_matrix(figsize=(12, 12)) And the Confusion Matrix would look like: FYI: this also applies to other plot functions, like interp.plot_top_losses(20, nrows=5, figsize=(25, 25))
5
6
68,850,172
2021-8-19
https://stackoverflow.com/questions/68850172/token-indices-sequence-length-issue
I am running a sentence transformer model and trying to truncate my tokens, but it doesn't appear to be working. My code is from transformers import AutoModel, AutoTokenizer model_name = "sentence-transformers/paraphrase-MiniLM-L6-v2" model = AutoModel.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) text_tokens = tokenizer(text, padding=True, truncation=True, return_tensors="pt") text_embedding = model(**text_tokens)["pooler_output"] I keep getting the following warning: Token indices sequence length is longer than the specified maximum sequence length for this model (909 > 512). Running this sequence through the model will result in indexing errors I am wondering why setting truncation=True is not truncating my text to the desired length?
You need to add the max_length parameter while creating the tokenizer like below: text_tokens = tokenizer(text, padding=True, max_length=512, truncation=True, return_tensors="pt") Reason: truncation=True without max_length parameter takes sequence length equal to maximum acceptable input length by the model. It is 1e30 or 1000000000000000019884624838656 for this model. You can check by printing out tokenizer.model_max_length. According to the Huggingface documentation about truncation, True or 'only_first' truncate to a maximum length specified by the max_length argument or the maximum length accepted by the model if no max_length is provided (max_length=None).
5
6
68,854,033
2021-8-19
https://stackoverflow.com/questions/68854033/repeat-list-elements-based-on-values-in-another-list
I have two lists: nums = [2, 3, 5] mylist = ["aaa", "bbb", "ccc"] What I am trying to achieve is to repeat each element in mylist as many times as the corresponding value in nums. The expected output is: ["aaa", "aaa", "bbb", "bbb", "bbb", "ccc", "ccc", "ccc", "ccc", "ccc"]
Let us do repeat import pandas as pd pd.Series(mylist).repeat(nums).tolist() Out[554]: ['aaa', 'aaa', 'bbb', 'bbb', 'bbb', 'ccc', 'ccc', 'ccc', 'ccc', 'ccc'] Or try numpy import numpy as np np.repeat(mylist,nums) Out[556]: array(['aaa', 'aaa', 'bbb', 'bbb', 'bbb', 'ccc', 'ccc', 'ccc', 'ccc', 'ccc'], dtype='<U3')
5
4
68,847,659
2021-8-19
https://stackoverflow.com/questions/68847659/pyspark-sorting-array-of-struct
This is a dummy sample of my dataframe data = [ [3273, "city y", [["ids", 27], ["smf", 13], ["tlk", 35], ["thr", 24]]], [3213, "city x", [["smf", 23], ["tlk", 15], ["ids", 17], ["thr", 34]]], ] df = spark.createDataFrame( data, "city_id:long, city_name:string, cel:array<struct<carr:string, subs:int>>" ) df.show(2, False) +-------+---------+--------------------------------------------+ |city_id|city_name|cel | +-------+---------+--------------------------------------------+ |3273 |city y |[[ids, 27], [smf, 13], [tlk, 35], [thr, 24]]| |3213 |city x |[[smf, 23], [tlk, 15], [ids, 17], [thr, 34]]| +-------+---------+--------------------------------------------+ I need to descendingly sort the array of column cel based on its subs value. It would be like this +-------+---------+--------------------------------------------+ |city_id|city_name|cel | +-------+---------+--------------------------------------------+ |3273 |city y |[[tlk, 35], [ids, 27], [thr, 24], [smf, 13]]| |3213 |city x |[[thr, 34], [smf, 23], [ids, 17], [tlk, 15]]| +-------+---------+--------------------------------------------+ Is there a way of doing it without using an UDF if possible? thanks I am using spark version 2.4.0
You can do it using some SQL lambda functions : df = df.withColumn( "cel", F.expr( "reverse(array_sort(transform(cel,x->struct(x['subs'] as subs,x['carr'] as carr))))" ), ) df.show() +-------+---------+--------------------------------------------+ |city_id|city_name|cel | +-------+---------+--------------------------------------------+ |3273 |city y |[[35, tlk], [27, ids], [24, thr], [13, smf]]| |3213 |city x |[[34, thr], [23, smf], [17, ids], [15, tlk]]| +-------+---------+--------------------------------------------+ df.printSchema() root |-- city_id: long (nullable = true) |-- city_name: string (nullable = true) |-- cel: array (nullable = true) | |-- element: struct (containsNull = false) | | |-- subs: integer (nullable = true) | | |-- carr: string (nullable = true)
7
7
68,832,550
2021-8-18
https://stackoverflow.com/questions/68832550/python-apscheduler-fails-only-timezones-from-the-pytz-library-are-supported-e
I am trying to run a python async app with an asyncioscheduler scheduled job but the APScheduler fails during build because of this error: 'Only timezones from the pytz library are supported' error I do include pytz in my app and i am passing the timezone. What is causing the error? I am calling the asyncioscheduler in a class where i create job manager: from apscheduler.schedulers.asyncio import AsyncIOScheduler class ScheduleManager: def __init__(self) -> None: self.scheduler = AsyncIOScheduler() def start(self): self.scheduler.start() def stop(self): self.scheduler.shutdown() def add_seconds_interval_job(self, callback, interval : int): self.scheduler.add_job(callback, 'interval', seconds = interval) def add_minutes_interval_job(self, callback, interval : int): self.scheduler.add_job(callback, 'interval', minutes = interval) def add_hours_interval_job(self, callback, interval : int): self.scheduler.add_job(callback, 'interval', hours = interval) def add_days_interval_job(self, callback, interval : int): self.scheduler.add_job(callback, 'interval', days = interval) then i call this manager from my application like : from jobs import ScheduleManager, ConfigJob class AppInitializer: def __init__(self) -> None: self.schedule_manager = ScheduleManager() self.config__job = ConfigJob() async def initialize(self, app, loop): self.schedule_manager.add_seconds_interval_job(self.config_job.run, 5) self.schedule_manager.start()
The tzlocal library switched from pytz to zoneinfo timezones in 3.0 and APScheduler 3.x is not compatible with those. Due to this, APScheduler 3.7.0 has tzlocal pinned to v2.x. If you're getting tzlocal 3.0 installed through APScheduler, you're using an old version. Please upgrade.
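If upgrading is not possible right away, one workaround (my own assumption, not part of the answer above) is to hand the scheduler an explicit pytz timezone so it never falls back to tzlocal's autodetection:
import pytz
from apscheduler.schedulers.asyncio import AsyncIOScheduler

# explicit pytz timezone instead of the tzlocal-detected one
scheduler = AsyncIOScheduler(timezone=pytz.timezone('UTC'))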
6
7
68,842,475
2021-8-19
https://stackoverflow.com/questions/68842475/single-column-fetch-returning-in-list-in-python-postgresql
Database: id trade token 1 abc 5523 2 fdfd 5145 3 sdfd 2899 Code: def db_fetchquery(sql): conn = psycopg2.connect(database="trade", user='postgres', password='jps', host='127.0.0.1', port= '5432') cursor = conn.cursor() conn.autocommit = True cursor.execute(sql) row = cursor.rowcount if row >= 1: data = cursor.fetchall() conn.close() return data conn.close() return False print(db_fetchquery("SELECT token FROM script")) Result: [(5523,),(5145,),(2899,)] But I need results as: [5523,5145,2899] I also tried print(db_fetchquery("SELECT zerodha FROM script")[0]) but this gave result as:- [(5523,)] Also, why is there ',' / list inside list when I am fetching only one column?
Not sure if you are able to do that without further processing, but I would do it like this: data = [x[0] for x in data] which converts the list of tuples to a flat 1D list
6
8
68,837,536
2021-8-18
https://stackoverflow.com/questions/68837536/how-to-use-a-different-colormap-for-different-rows-of-a-heatmap
I am trying to change 1 row in my heatmap to a different color here is the dataset: m = np.array([[ 0.7, 1.4, 0.2, 1.5, 1.7, 1.2, 1.5, 2.5], [ 1.1, 2.5, 0.4, 1.7, 2. , 2.4, 2. , 3.2], [ 0.9, 4.4, 0.7, 2.3, 1.6, 2.3, 2.6, 3.3], [ 0.8, 2.1, 0.2, 1.8, 2.3, 1.9, 2. , 2.9], [ 0.9, 1.3, 0.8, 2.2, 1.8, 2.2, 1.7, 2.8], [ 0.7, 0.9, 0.4, 1.8, 1.4, 2.1, 1.7, 2.9], [ 1.2, 0.9, 0.4, 2.1, 1.3, 1.2, 1.9, 2.4], [ 6.3, 13.5, 3.1, 13.4, 12.1, 13.3, 13.4, 20. ]]) data = pd.DataFrame(data = m) Right now I am using seaborn heatmap, I can only create something like this: cmap = sns.diverging_palette(240, 10, as_cmap = True) sns.heatmap(data, annot = True, cmap = "Reds") plt.show I hope to change the color scheme of the last row, here is what I want to achieve (I did this in Excel): Is it possible I achieve this in Python with seaborn heatmap? Thank you!
You can split in two, mask the unwanted parts, and plot separately: # Reds data1 = data.copy() data1.loc[7] = float('nan') ax = sns.heatmap(data1, annot=True, cmap="Reds") # Greens data2 = data.copy() data2.loc[:6] = float('nan') sns.heatmap(data2, annot=True, cmap="Greens") output: NB. you need to adapt the loc[…] parameter to your actual index names
5
5
68,835,895
2021-8-18
https://stackoverflow.com/questions/68835895/how-to-supress-all-warnings-when-running-pytest
Some warnings can cause problems when running tests with pytest, so it might be desirable to ignore all of them. I could not find a clear way to suppress all warnings.
This is the solution I found: pytest -W ignore test_script.py
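The same thing can be configured once per project instead of on every invocation, via pytest's filterwarnings ini option (shown here for a pytest.ini file; setup.cfg and pyproject.toml have equivalent sections):
[pytest]
filterwarnings =
    ignore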
6
3
68,820,085
2021-8-17
https://stackoverflow.com/questions/68820085/how-to-convert-geojson-to-shapely-polygon
i have a geoJSON geo = {'type': 'Polygon', 'coordinates': [[[23.08437310100004, 53.15448536100007], [23.08459767900007, 53.15448536100007], [23.08594514600003, 53.153587050000056], (...) [23.08437310100004, 53.15448536100007]]]} and i want to use these coordinates as an input to shapely.geometry.Polygon. The problem is that Polygon only accepts tuple values, meaning i have to convert this geojson to a polygon. When i try to input this type of data into a Polygon there's an error ValueError: A LinearRing must have at least 3 coordinate tuples I tried this: [tuple(l) for l in geo['coordinates']] but this dosen't quite work since it only returns this [([23.08437310100004, 53.15448536100007], [23.08459767900007, 53.15448536100007], (...) [23.08437310100004, 53.15448536100007])] and what i need is this (i think it's a tuple) ([(23.08437310100004, 53.15448536100007), (23.08459767900007, 53.15448536100007), (...) (23.08437310100004, 53.15448536100007)]) is there a function for this?
A generic solution is to use the shape function: Returns a new, independent geometry with coordinates copied from the context. This works for all geometries not just polygons. from shapely.geometry import shape from shapely.geometry.polygon import Polygon geo: dict = {'type': 'Polygon', 'coordinates': [[[23.08437310100004, 53.15448536100007], [23.08459767900007, 53.15448536100007], [23.08594514600003, 53.153587050000056], [23.08437310100004, 53.15448536100007]]]} polygon: Polygon = shape(geo)
16
39
68,807,958
2021-8-16
https://stackoverflow.com/questions/68807958/pytorch-nn-crossentropyloss-indexerror-target-2-is-out-of-bounds
I'm creating a simple 2 class sentiment classifier using bert, but i'm getting an error related to output and label size. I cannot figure out what I'm doing wrong. Below are the required code snippets. My custom dataset class: class AmazonReviewsDataset(torch.utils.data.Dataset): def __init__(self, df): self.df = df self.maxlen = 256 self.tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased") def __len__(self): return len(self.df) def __getitem__(self, index): review = self.df['reviews'].iloc[index].split() review = ' '.join(review) sentiment = int(self.df['sentiment'].iloc[index]) encodings = self.tokenizer.encode_plus( review, add_special_tokens=True, max_length=self.maxlen, padding='max_length', truncation=True, return_attention_mask=True, return_tensors='pt' ) return { 'input_ids': encodings.input_ids.flatten(), 'attention_mask': encodings.attention_mask.flatten(), 'labels': torch.tensor(sentiment, dtype=torch.long) } output of dataloader: for batch in train_loader: print(batch['input_ids'].shape) print(batch['attention_mask'].shape) print(batch['labels']) print(batch['labels'].shape) break torch.Size([32, 256]) torch.Size([32, 256]) tensor([2, 2, 2, 2, 1, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 2, 1, 1, 2, 2, 1, 2, 1, 2, 2, 2, 2, 2, 2, 1, 1, 2]) torch.Size([32]) My nn: criterion = nn.CrossEntropyLoss().to(device) class SentimentClassifier(nn.Module): def __init__(self): super(SentimentClassifier, self).__init__() self.distilbert = DistilBertModel.from_pretrained("distilbert-base-uncased") self.drop0 = nn.Dropout(0.25) self.linear1 = nn.Linear(3072, 512) self.relu1 = nn.ReLU() self.drop1 = nn.Dropout(0.25) self.linear2 = nn.Linear(512, 2) self.relu2 = nn.ReLU() def forward(self, input_ids, attention_mask): outputs = self.distilbert(input_ids, attention_mask) last_hidden_state = outputs[0] pooled_output = torch.cat(tuple([last_hidden_state[:, i] for i in [-4, -3, -2, -1]]), dim=-1) x = self.drop0(pooled_output) x = self.relu1(self.linear1(x)) x = self.drop1(x) x = self.relu2(self.linear2(x)) return x Train loop: for batch in loop: optimizer.zero_grad() input_ids = batch['input_ids'].to(device) attention_mask = batch['attention_mask'].to(device) labels = batch['labels'].to(device) output = model(input_ids, attention_mask) print(output.size(), labels.size()) loss = criterion(output, labels) # ERROR loss.backward() optimizer.step() Error: torch.Size([32, 2]) torch.Size([32]) --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-19-6268781f396e> in <module>() 12 print(output.size(), labels.size()) 13 # output_class = torch.argmax(results, dim=1) ---> 14 loss = criterion(output, labels) 15 train_loss += loss 16 loss.backward() 2 frames /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1050 or _global_forward_hooks or _global_forward_pre_hooks): -> 1051 return forward_call(*input, **kwargs) 1052 # Do not call functions when jit is used 1053 full_backward_hooks, non_full_backward_hooks = [], [] /usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py in forward(self, input, target) 1119 def forward(self, input: Tensor, target: Tensor) -> Tensor: 1120 return F.cross_entropy(input, target, weight=self.weight, -> 1121 ignore_index=self.ignore_index, reduction=self.reduction) 1122 1123 
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction) 2822 if size_average is not None or reduce is not None: 2823 reduction = _Reduction.legacy_get_string(size_average, reduce) -> 2824 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) 2825 2826 IndexError: Target 2 is out of bounds. I read a tutorial which says that, do not use softmax when applying a nn.CrossEntropyLoss, as I have 2 classes. What is wrong can anyone guide me! Thank you!
You have two classes, which means the maximum target label is 1 not 2 because the classes are indexed from 0 (see official documentation). You essentially have to subtract 1 to your labels tensor, such that class n°1 is assigned the value 0, and class n°2 value 1. In turn the labels of the batch you printed would look like: tensor([1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1])
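Concretely, a minimal fix in the question's own Dataset is to shift the label when it is created, assuming the sentiment column holds the values 1 and 2:
# inside AmazonReviewsDataset.__getitem__ (sentiment column assumed to hold 1 and 2)
sentiment = int(self.df['sentiment'].iloc[index]) - 1      # map {1, 2} -> {0, 1}
item = {
    'input_ids': encodings.input_ids.flatten(),
    'attention_mask': encodings.attention_mask.flatten(),
    'labels': torch.tensor(sentiment, dtype=torch.long),   # now a valid target for 2 classes
}
return item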
5
5
68,737,130
2021-8-11
https://stackoverflow.com/questions/68737130/error-while-import-keras-attributeerror-module-tensorflow-compat-v2-interna
I want to import keras after I did pip install keras, but it shows the message below. I can't even call any function from the keras library. Does anyone know about this? import keras Error: AttributeError: module 'tensorflow.compat.v2.__internal__' has no attribute 'register_clear_session_function'
you should use import tensorflow.keras instead of import keras. More info here.
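For example, a trivial sketch assuming TensorFlow 2.x is installed:
# instead of:  import keras
from tensorflow import keras          # use the keras that ships with TensorFlow 2.x
print(keras.__version__)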
7
11
68,822,660
2021-8-17
https://stackoverflow.com/questions/68822660/how-do-you-ignore-specific-pyright-type-checks-by-project-file-line
I cannot quite find clear documentation on how to ignore one or more specific Pyright checks: Using a config file at the root of your project. At the top of a file, function, or method. On each line as a trailing comment. Thanks in advance for sharing this information.
The usual mypy in-line comments like # type: ignore should work (see details), and for pyright specific config, you can put a pyrightconfig.json in your project root. You can find available config options here. It's just a JSON file, so it looks something like this: { "venvPath": "/home/username/.virtualenvs/", "venv": "myenv", "reportOptionalSubscript": false, "reportOptionalMemberAccess": false } EDIT: In-source configuration can be as type-ignore statements as supported by mypy. # type: ignore is not a place holder for something else, it is literal. To narrow it down and ignore a specific error (it can be only one of the mypy error codes), like this: # type: ignore[error-code] To use the specific example of import mentioned in the comments, here are the two variants: from os import non_existent # type: ignore[attr-defined] from missing_module import name # type: ignore This is all discussed in the link to the mypy docs I provided, and the list of error codes linked from there. pyright specific configuration can only be project wide (see EDIT3), either by specifying them in a [tool.pyright] section in your pyproject.toml file, or by creating a pyrightconfig.json like above in your top-level project directory. EDIT2: In the comments the OP raised the question how to find the mypy error-codes that correspond to a pyright config option. Unfortunately there's no easy way besides reading the docs thoroughly along with some understanding of the language; e.g. in the case of from os import name, Python is actually importing the attribute os.name of the module object os into the current namespace. The following interactive session should make this clear: In [1]: import os In [2]: type(os) Out[2]: module In [3]: locals()["curdir"] ------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-3-a31c5782bef1> in <module> ----> 1 locals()["curdir"] KeyError: 'curdir' In [4]: from os import curdir In [5]: locals()["curdir"] Out[5]: '.' In [6]: os.curdir == curdir Out[6]: True EDIT3: Pyright also seems to support file-level, and line-level directives, the documentation is hidden under "comments". In short, you can ask pyright to ignore a line or ignore specific pyright errors like this: import missing_module import name # pyright: ignore import missing_module import name # pyright: ignore[reportMissingImports] You can find the list of errors in the configuration docs.
21
17
68,768,384
2021-8-13
https://stackoverflow.com/questions/68768384/how-to-send-message-to-telegram-as-code
Sending the output of Prettytable to Telegram This question is a followup to an earlier question. The code which i have is this: import telegram from prettytable import PrettyTable def send_msg(text): token = "*******:**************" chat_id = "***********" bot = telegram.Bot(token=token) bot.sendMessage(chat_id=chat_id, text=text) myTable = PrettyTable(["Student Name", "Class", "Section", "Percentage"]) myTable.add_row(["Leanord", "X", "B", "91.2 %"]) myTable.add_row(["Penny", "X", "C", "63.5 %"]) myTable.add_row(["Howard", "X", "A", "90.23 %"]) myTable.add_row(["Bernadette", "X", "D", "92.7 %"]) myTable.add_row(["Sheldon", "X", "A", "98.2 %"]) myTable.add_row(["Raj", "X", "B", "88.1 %"]) myTable.add_row(["Amy", "X", "B", "95.0 %"]) table_txt = myTable.get_string() with open('output.txt','w') as file: file.write(table_txt) new_list = [] with open("output.txt", 'r', encoding="utf-8") as file: send_msg(file.read()) The problem is that the message which is sent looks like this: +--------------+-------+---------+------------+ | Student Name | Class | Section | Percentage | +--------------+-------+---------+------------+ | Leanord | X | B | 91.2 % | | Penny | X | C | 63.5 % | | Howard | X | A | 90.23 % | | Bernadette | X | D | 92.7 % | | Sheldon | X | A | 98.2 % | | Raj | X | B | 88.1 % | | Amy | X | B | 95.0 % | +--------------+-------+---------+------------+ But when the message received in telegram looks like this: How can i fix this? Telegram lets you send messages in code whereby i guess it would preserve the format. How can i send this message in format?
You have already solved it yourself: you used three backticks in the title of your question. In Markdown (including here on SO), you can put three backticks around a block of code, and that makes it use the code block formatting. ``` this is inside a code block ``` You can just add a line containing the three backticks in front and back of your text: backticked_text = "```\n" + text + "\n```"
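One caveat worth adding (an assumption on my part, not stated in the answer): Telegram only renders the backticks as a monospace block when a Markdown parse mode is requested, so with python-telegram-bot you would also pass parse_mode, e.g.:
backticked_text = "```\n" + text + "\n```"
bot.sendMessage(chat_id=chat_id, text=backticked_text, parse_mode="Markdown")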
29
44
68,792,897
2021-8-15
https://stackoverflow.com/questions/68792897/how-can-repetitive-rows-of-data-be-collected-in-a-single-row-in-pandas
I have a dataset that contains NBA players' average statistics per game. Some players' statistics are repeated because they have been on different teams during the season. For example: Player Pos Age Tm G GS MP FG 8 Jarrett Allen C 22 TOT 28 10 26.2 4.4 9 Jarrett Allen C 22 BRK 12 5 26.7 3.7 10 Jarrett Allen C 22 CLE 16 5 25.9 4.9 I want to average Jarrett Allen's stats and put them into a single row. How can I do this?
You can groupby and use agg to get the mean. For the non numeric columns, let's take the first value: df.groupby('Player').agg({k: 'mean' if v in ('int64', 'float64') else 'first' for k,v in df.dtypes[1:].items()}) output: Pos Age Tm G GS MP FG Player Jarrett Allen C 22 TOT 18.666667 6.666667 26.266667 4.333333 NB. content of the dictionary comprehension: {'Pos': 'first', 'Age': 'mean', 'Tm': 'first', 'G': 'mean', 'GS': 'mean', 'MP': 'mean', 'FG': 'mean'}
16
49
68,769,247
2021-8-13
https://stackoverflow.com/questions/68769247/how-do-i-write-an-asgi-compliant-middleware-while-staying-framework-agnostic
We're currently maintaining code written in several HTTP frameworks (Flask, aiohttp and FastAPI). Rewriting them so they all use the same framework is currently not feasible. There's some code that I'd like to share across those applications and that would be very well suited for middleware (logging-config, monitoring, authentication, ...). The initial implementation was done by subclassing Flask and worked really well across all Flask-based apps. But it's unusable in aiohttp or FastAPI. Creating a framework-agnostic implementation is doable (theoretically) and this morning I've taken one of the simpler cases and successfully converted it to WSGI middleware and am able to integrate it in the Flask applications. But ASGI is giving me some trouble because there's not much documentation out there for pure ASGI middleware. All the examples show how middleware is written for their framework. The official ASGI docs is also really really "skinny" on that topic. From what I can tell, it should look something like this (side-questions: "What's the second argument passed into the constructor?"): class MyMiddleware: def __init__(self, app, something) -> None: self.app = app self.something = something # <- what is this second argument? async def __call__(self, scope, receive, send): print("Hello from the Middleware") await self.app(scope, receive, send) I took Starlette TimingMiddleware as inspiration, but I am unable to integrate it with aiohttp. Probably because their implementation is slightly different. Considering that there is a section of middleware in the ASGI specification, and that both aiohttp and Starlette implement that spec, shouldn't there be a way to write a middleware that works in both? If yes, what am I missing?
Flask is a WSGI framework. Starlette is an ASGI framework. aiohttp supports neither WSGI nor ASGI. https://github.com/aio-libs/aiohttp/issues/2902 So there is no way to have the same middleware support all three frameworks.
6
2
68,804,209
2021-8-16
https://stackoverflow.com/questions/68804209/how-to-do-an-else-default-in-match-case
Python recently released match-case in version 3.10. The question is: how can we do a default case in Python? I can do if/elif but don't know how to do else. Below is the code: x = "hello" match x: case "hi": print(x) case "hey": print(x) default: print("not matched") I added this default myself. I want to know how to do this in Python.
You can define a default case in Python. For this you use a wild card (_). The following code demonstrates it: x = "hello" match x: case "hi": print(x) case "hey": print(x) case _: print("not matched")
72
115
68,819,790
2021-8-17
https://stackoverflow.com/questions/68819790/read-write-parquet-files-without-reading-into-memory-using-python
I looked at the standard documentation that I would expect to capture my need (Apache Arrow and Pandas), and I could not seem to figure it out. I know Python best, so I would like to use Python, but it is not a strict requirement. Problem I need to move Parquet files from one location (a URL) to another (an Azure storage account, in this case using the Azure machine learning platform, but this is irrelevant to my problem). These files are too large to simply perform pd.read_parquet("https://my-file-location.parquet"), since this reads the whole thing into an object. Expectation I thought that there must be a simple way to create a file object and stream that object line by line -- or maybe column chunk by column chunk. Something like import pyarrow.parquet as pq with pq.open("https://my-file-location.parquet") as read_file_handle: with pq.open("https://my-azure-storage-account/my-file.parquet", "write") as write_filehandle: for next_line in read_file_handle{ write_file_handle.append(next_line) I understand it will be a little different because Parquet is primarily meant to be accessed in a columnar fashion. Maybe there is some sort of config object that I would pass which specifies which columns of interest, or maybe how many lines can be grabbed in a chunk or something similar. But the key expectation is that there is a means to access a parquet file without loading it all into memory. How can I do this? FWIW, I did try to just use Python's standard open function, but I was not sure how to use open with a URL location and a byte stream. If it is possible to do this via just open and skip anything Parquet-specific, that is also fine. Update Some of the comments have suggested using bash-like scripts, such as here. I can use this if there is nothing else, but it is not ideal because: I would rather keep this all in a full language SDK, whether Python, Go, or whatever. If the solution moves into a bash script with pipes, it requires an external call since the final solution will not be written entirely bash, Powershell, or any scripting language. I really want to leverage some of the benefits of Parquet itself. As I mentioned in the comment below, Parquet is columnar storage. So if I have a "data frame" that is 1.1 billion rows and 100 columns, but I only care about 3 columns, I would love to be able to only download those 3 columns, saving a bunch of time and some money, too.
Great post, based on @Micah's answer, I put my 2 cents in it, in case you don't want to read the docs. A small snippet is the following: import pandas as pd import numpy as np from pyarrow.parquet import ParquetFile # create a random df then save to parquet df = pd.DataFrame({ 'A': np.arange(10000), 'B': np.arange(10000), 'C': np.arange(10000), 'D': np.arange(10000), }) df.to_parquet('./test/test') # ****** below is @Micah Kornfield's answer ****** # 1. open parquet file batch = ParquetFile('./test/test') # 2. define generator of batches record = batch.iter_batches( batch_size=10, columns=['B', 'C'], ) # 3. yield pandas/numpy data print(next(record).to_pandas()) # pandas print(next(record).to_pydict()) # native python dict
9
9
68,745,309
2021-8-11
https://stackoverflow.com/questions/68745309/how-to-make-mediapipe-pose-estimation-faster-python
I'm making a pose estimation script for my game. However, it's working at 20-30 fps and not using the whole CPU even if there is no fps limit. It's not using whole GPU too. Can someone help me? Here is resource usage while playing a dance video: https://i.sstatic.net/6L8Rj.jpg Here is my code: import cv2 import mediapipe as mp import time inFile = '/dev/video0' capture = cv2.VideoCapture(inFile) FramesVideo = int(capture.get(cv2.CAP_PROP_FRAME_COUNT)) # Number of frames inside video FrameCount = 0 # Currently playing frame prevTime = 0 # some objects for mediapipe mpPose = mp.solutions.pose mpDraw = mp.solutions.drawing_utils pose = mpPose.Pose() while True: FrameCount += 1 #read image and convert to rgb success, img = capture.read() imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) #process image results = pose.process(imgRGB) if results.pose_landmarks: mpDraw.draw_landmarks(img, results.pose_landmarks, mpPose.POSE_CONNECTIONS) #get landmark positions landmarks = [] for id, lm in enumerate(results.pose_landmarks.landmark): h, w, c = img.shape cx, cy = int(lm.x * w), int(lm.y * h) cv2.putText(img, str(id), (cx,cy), cv2.FONT_HERSHEY_PLAIN, 1, (255,0,0), 1) landmarks.append((cx,cy)) # calculate and print fps frameTime = time.time() fps = 1/(frameTime-prevTime) prevTime = frameTime cv2.putText(img, str(int(fps)), (30,50), cv2.FONT_HERSHEY_PLAIN, 3, (255,0,0), 3) #show image cv2.imshow('Video', img) cv2.waitKey(1) if FrameCount == FramesVideo-1: capture.release() cv2.destroyAllWindows() break
Set the model_complexity of mp.Pose to 0. As the documentation states: MODEL_COMPLEXITY Complexity of the pose landmark model: 0, 1 or 2. Landmark accuracy as well as inference latency generally go up with the model complexity. Default to 1. This is the best solution I've found, also use this.
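In the code from the question this is a one-line change:
# lighter landmark model -> noticeably higher FPS, somewhat lower accuracy
pose = mpPose.Pose(model_complexity=0)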
7
3
68,783,157
2021-8-14
https://stackoverflow.com/questions/68783157/python-3-requests-how-to-force-use-a-new-connection-for-each-request
I have written a pausable multi-thread downloader using requests and threading, however the downloads just can't complete after resuming, long story short, due to special network conditions the connections can often die during downloads requiring refreshing the connections. You can view the code here in my previous question: Python multi connection downloader resuming after pausing makes download run endlessly I observed the downloads can go beyond 100% after resuming and won't stop (at least I haven't see them stop), mmap indexes will go out of bounds and lots of error messages... I have finally figured out this is because the ghost of previous request, that makes the server mistakenly sent extra data from last connection that was not downloaded. This is my solution: create a new connection s = requests.session() r = s.get( url, headers={'connection': 'close', 'range': 'bytes={0}-{1}'.format(start, end)}, stream=True) interrupt the connection r.close() s.close() del r del s In my testing, I have found that requests have two attributes named session, one Titlecase, one lowercase, the lowercase one is a function, and the other is a class constructor, they both create a requests.sessions.Session object, is there any difference between them? And how can I set keep-alive to False? The method found here won't work anymore: In [39]: s = requests.session() ...: s.config['keep_alive'] = False --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-39-497f569a91ba> in <module> 1 s = requests.session() ----> 2 s.config['keep_alive'] = False AttributeError: 'Session' object has no attribute 'config' This method from here doesn't throw errors: s = requests.session() s.keep_alive = False But I seriously doubt that it has any effect at all, it just adds a new boolean to the object and I don't think it is handled by the object. I have seen a close method in requests.models.Response, does it have any effect in this case or I can just close the session? And finally, using this method, is it guaranteed that the server will never send extra bytes from previous dead connections?
I guess that your problem isn't server related. Probably servers are behaving correctly and the problem are the threads. Considering the code from the related question, if it is up to date, when PAUSE is set to true, which happens during 50% of the time when first argv argument is set to 1, dozens of threads are created every second (actually num_connections threads, the (pressed - lastpressed).total_seconds() > 0.5 and self.paused = not self.paused logic makes a new batch start every second). In linux you would check this with tp -H -p $pid or watch ps -T -p $pid or watch ls /proc/$pid/task/ - you are probably using windows and there are the windows ways to check this. Each batch of connections are correct when considered in isolation, the connection range is being correctly set on the headers. By sniffing yourself you'll see that they are just fine. The problem arrises when new batches of threads arrive doing the same work. You get a lot of threads downloading similar ranges in different batches giving you the same data. Since your writing logic is relative not absolute, if two threads gives you the same 123th chunk your self.position += len(chunk) will increase for both similar chunks, which can be a reason you get your over-100%. To test whether what I said happens, just try to download an ever increasing file and check if your file being saved does not suffer from this double increases: 0000000000 00 00 00 00 00 00 00 01 00 00 00 02 00 00 00 03 ................ 0000000010 00 00 00 04 00 00 00 05 00 00 00 06 00 00 00 07 ................ 0000000020 00 00 00 08 00 00 00 09 00 00 00 0a 00 00 00 0b ................ 0000000030 00 00 00 0c 00 00 00 0d 00 00 00 0e 00 00 00 0f ................ Or simulate one file range server yourself by doing something similar to this: #!/usr/bin/env python3 from http.server import BaseHTTPRequestHandler, HTTPServer import time hostname = "localhost" serverport = 8081 filesizemegabytes = 8#.25 filesizebytes = int(filesizemegabytes*1024*1024) filesizebytestr = str(filesizebytes) class Server(BaseHTTPRequestHandler): def do_GET(self): self.do(True) def do_HEAD(self): self.do(False) def do(self,writebody=True): rangestr = self.headers.get('range') if type(rangestr) is str and rangestr.startswith('bytes='): self.send_response(206) rangestr = rangestr[6:] rangeint = tuple(int(i) for i in rangestr.split('-')) self.send_header('Content-Range', 'bytes '+rangestr+'/'+filesizebytestr) else: self.send_response(200) rangeint = (0,filesizebytes) self.send_header('Content-type', 'application/octet-stream') self.send_header('Accept-Ranges', 'bytes') self.send_header('Content-Length', rangeint[1]-rangeint[0]) self.end_headers() if writebody: for i in range(rangeint[0],rangeint[1]): self.wfile.write(i.to_bytes(4, byteorder='big')) if __name__ == '__main__': serverinstance = HTTPServer((hostname, serverport), server) print("Server started http://%s:%s" % (hostname, serverport)) try: serverinstance.serve_forever() except KeyboardInterrupt: pass serverinstance.server_close() Considerations about resource usage You don't need multithreading for multidownloads. "Green" threads are enough since you don't need more than one CPU, you just need to wait for IO. Instead of multithread+requests, a more suitable solution would be asyncio+aiohttp (aiohttp once requests is not very well designed for async, altough you will find some adaptations in the wild). Lastly, keep-alives are useful when you are planning to reconnect again, which seems to be your case. 
Are your source and origin IPs:ports the same? You are trying to force connections to close, but once you realize the problem is not the servers, reanalyze your situation and see whether it is not better to keep the connections alive.
10
1
68,768,017
2021-8-13
https://stackoverflow.com/questions/68768017/how-to-ignore-field-repr-in-pydantic
When I want to ignore some fields using the attr library, I can use the repr=False option. But I couldn't find a similar option in pydantic. Please see the example code: import typing import attr from pydantic import BaseModel @attr.s(auto_attribs=True) class AttrTemp: foo: typing.Any boo: typing.Any = attr.ib(repr=False) class Temp(BaseModel): foo: typing.Any boo: typing.Any # I don't want to print class Config: frozen = True a = Temp( foo="test", boo="test", ) b = AttrTemp(foo="test", boo="test") print(a) # foo='test' boo='test' print(b) # AttrTemp(foo='test') However, this does not mean that there are no options at all; I can use the syntax print(a.dict(exclude={"boo"})) Doesn't pydantic have an option like repr=False?
It looks like this feature has been requested and also implemented not long ago. However, it seems like it hasn't made it into the latest release yet. I see two options how to enable the feature anyway: 1. Use the workaround provided in the feature request Define a helper class: import typing from pydantic import BaseModel, Field class ReducedRepresentation: def __repr_args__(self: BaseModel) -> "ReprArgs": return [ (key, value) for key, value in self.__dict__.items() if self.__fields__[key].field_info.extra.get("repr", True) ] and use it in your Model definition: class Temp(ReducedRepresentation, BaseModel): foo: typing.Any boo: typing.Any = Field(..., repr=False) class Config: frozen = True a = Temp( foo="test", boo="test", ) print(a) # foo='test' 2. pip install the latest master I would suggest doing this in a virtual environment. This is what worked for me: Uninstall the existing version: $ pip uninstall pydantic ... Install latest master: $ pip install git+https://github.com/samuelcolvin/pydantic.git@master ... After that the repr argument should work out of the box: import typing from pydantic import BaseModel, Field class Temp(BaseModel): foo: typing.Any boo: typing.Any = Field(..., repr=False) class Config: frozen = True a = Temp( foo="test", boo="test", ) print(a) # foo='test'
6
7
68,789,213
2021-8-15
https://stackoverflow.com/questions/68789213/i-am-getting-an-error-code-is-unreachable-pylance-what-that-mean-or-am-i-doing
def percent(marks): return (marks[0]+marks[1]+marks[2]+marks[3]/400)*100 marks1=[54,65,85,54] percent1=percent(marks1) marks2=[54,52,65,85] percent2 = percent(marks2) print(percent1,percent2)
The lines after return will never be executed, so you can delete them and nothing will change. The message tells you about it because it is very unusual to have such code. I think you wanted this: def percent(marks): return (marks[0]+marks[1]+marks[2]+marks[3]/400)*100 marks1 = [54, 65, 85, 54] percent1 = percent(marks1) marks2 = [54, 52, 65, 85] percent2 = percent(marks2) print(percent1, percent2) Spaces in Python code matter. In your code all the lines are part of the function; in the fixed code they are not.
9
7
68,772,211
2021-8-13
https://stackoverflow.com/questions/68772211/fake-useragent-module-not-connecting-properly-indexerror-list-index-out-of-ra
I tried to use fake_useragent module with this block from fake_useragent import UserAgent ua = UserAgent() print(ua.random) But when the execution reached this line ua = UserAgent(), it throws this error Traceback (most recent call last): File "/home/hadi/Desktop/excel/gatewayform.py", line 191, in <module> gate = GateWay() File "/home/hadi/Desktop/excel/gatewayform.py", line 23, in __init__ ua = UserAgent() File "/usr/local/lib/python3.9/dist-packages/fake_useragent/fake.py", line 69, in __init__ self.load() File "/usr/local/lib/python3.9/dist-packages/fake_useragent/fake.py", line 75, in load self.data = load_cached( File "/usr/local/lib/python3.9/dist-packages/fake_useragent/utils.py", line 250, in load_cached update(path, use_cache_server=use_cache_server, verify_ssl=verify_ssl) File "/usr/local/lib/python3.9/dist-packages/fake_useragent/utils.py", line 245, in update write(path, load(use_cache_server=use_cache_server, verify_ssl=verify_ssl)) File "/usr/local/lib/python3.9/dist-packages/fake_useragent/utils.py", line 178, in load raise exc File "/usr/local/lib/python3.9/dist-packages/fake_useragent/utils.py", line 154, in load for item in get_browsers(verify_ssl=verify_ssl): File "/usr/local/lib/python3.9/dist-packages/fake_useragent/utils.py", line 99, in get_browsers html = html.split('<table class="w3-table-all notranslate">')[1] IndexError: list index out of range I use linux and I have installed the module using this command pip3 install fake_useragent --upgrade. Is there any solution for this issue? if not, is there a better module to use?
There is a solution for this, from Github pull request #110. Basically, all you need to do is change one character in one line of the fake_useragent/utils.py source code. To do this on your system, open /usr/local/lib/python3.9/dist-packages/fake_useragent/utils.py† in your favorite text editor using admin privileges. Go to line 99, and change the w3 html = html.split('<table class="w3-table-all notranslate">')[1] # ^^ change this to ws: html = html.split('<table class="ws-table-all notranslate">')[1] # ^^ to this Save the file (with admin permissions), restart your Python session, and your code should work just fine. † To find the fake_useragent directory in which utils.py resides, run the following code: import fake_useragent print(fake_useragent.__file__) For example, on my Windows laptop, this printed 'C:\\Users\\mattdmo\\AppData\\Roaming\\Python\\Python310\\site-packages\\fake_useragent\\__init__.py' so the folder to open is C:\Users\mattdmo\AppData\Roaming\Python\Python310\site-packages\fake_useragent.
6
15
68,744,612
2021-8-11
https://stackoverflow.com/questions/68744612/how-do-i-annotate-a-function-whose-return-type-depends-on-its-argument
In Python, I often write functions that filter a collection to find instances of specific subtypes. For example I might look for a specific kind of nodes in a DOM or a specific kind of events in a log: def find_pre(soup: TagSoup) -> List[tags.pre]: """Find all <pre> nodes in `tag_soup`.""" … def filter_errors(log: List[LogEvent]) -> List[LogError]: """Keep only errors from `log`.""" … Writing types for these functions is easy. But what about generic versions of these functions that take an argument to specify which types to return? def find_tags(tag_soup: TagSoup, T: type) -> List[T]: """Find all nodes of type `T` in `tag_soup`.""" … def filter_errors(log: List[LogEvent], T: type) -> List[T]: """Keep only events of type `T` from `log`.""" … (The signatures above are wrong: I can't refer to T in the return type.) This is a fairly common design: docutils has node.traverse(T: type), BeautifulSoup has soup.find_all(), etc. Of course it can get arbitrarily complex, but can Python type annotations handle simple cases like the above? Here is a MWE to make it very concrete: from dataclasses import dataclass from typing import * @dataclass class Packet: pass @dataclass class Done(Packet): pass @dataclass class Exn(Packet): exn: str loc: Tuple[int, int] @dataclass class Message(Packet): ref: int msg: str Stream = Callable[[], Union[Packet, None]] def stream_response(stream: Stream, types) -> Iterator[??]: while response := stream(): if isinstance(response, Done): return if isinstance(response, types): yield response def print_messages(stream: Stream): for m in stream_response(stream, Message): print(m.msg) # Error: Cannot access member "msg" for "Packet" msgs = iter((Message(0, "hello"), Exn("Oops", (1, 42)), Done())) print_messages(lambda: next(msgs)) Pyright says: 29:17 - error: Cannot access member "msg" for type "Packet" Member "msg" is unknown (reportGeneralTypeIssues) In the example above, is there a way to annotate stream_response so that Python type checkers will accept the definition of print_messages?
Okay, here we go. It passes MyPy --strict, but it isn't pretty. What's going on here For a given class A, we know that the type of an instance of A will be A (obviously). But what is the type of A itself? Technically, the type of A is type, as all python classes that don't use metaclassses are instances of type. However, annotating an argument with type doesn't tell the type-checker much. The syntax used for python type-checking to go "one step up" in the type hierarchy is, instead, Type[A]. So if we have a function myfunc that returns an instance of a class inputted as a parameter, we can fairly simply annotate that as follows: from typing import TypeVar, Type T = TypeVar('T') def myfunc(some_class: Type[T]) -> T: # do some stuff return some_class() Your case, however, is rather more complex. You could be inputting one class as a parameter, or you could be inputting two classes, or three classes... etc. We can solve this problem using typing.overload, which allows us to register multiple signatures for a given function. These signatures are ignored entirely at runtime; they are purely for the type-checker; as such, the bodies of these functions can be left empty. Generally, you only put a docstring or a literal ellipsis ... in the body of functions decorated with @overload. I don't think there's a way of generalising these overloaded functions, which is why the maximum number of elements that could be passed into the types parameter is important. You have to tediously enumerate every possible signature of your function. You may want to think about moving the @overload signatures to a separate .pyi stub file if you go down this route. from dataclasses import dataclass from typing import ( Callable, Tuple, Union, Iterator, overload, TypeVar, Type, Sequence ) @dataclass class Packet: pass P1 = TypeVar('P1', bound=Packet) P2 = TypeVar('P2', bound=Packet) P3 = TypeVar('P3', bound=Packet) P4 = TypeVar('P4', bound=Packet) P5 = TypeVar('P5', bound=Packet) P6 = TypeVar('P6', bound=Packet) P7 = TypeVar('P7', bound=Packet) P8 = TypeVar('P8', bound=Packet) P9 = TypeVar('P9', bound=Packet) P10 = TypeVar('P10', bound=Packet) @dataclass class Done(Packet): pass @dataclass class Exn(Packet): exn: str loc: Tuple[int, int] @dataclass class Message(Packet): ref: int msg: str Stream = Callable[[], Union[Packet, None]] @overload def stream_response(stream: Stream, types: Type[P1]) -> Iterator[P1]: """Signature if exactly one type is passed in for the `types` parameter""" @overload def stream_response( stream: Stream, types: Tuple[Type[P1], Type[P2]] ) -> Iterator[Union[P1, P2]]: """Signature if exactly two types are passed in for the `types` parameter""" @overload def stream_response( stream: Stream, types: Tuple[Type[P1], Type[P2], Type[P3]] ) -> Iterator[Union[P1, P2, P3]]: """Signature if exactly three types are passed in for the `types` parameter""" @overload def stream_response( stream: Stream, types: Tuple[Type[P1], Type[P2], Type[P3], Type[P4]] ) -> Iterator[Union[P1, P2, P3, P4]]: """Signature if exactly four types are passed in for the `types` parameter""" @overload def stream_response( stream: Stream, types: Tuple[Type[P1], Type[P2], Type[P3], Type[P4], Type[P5]] ) -> Iterator[Union[P1, P2, P3, P4, P5]]: """Signature if exactly five types are passed in for the `types` parameter""" @overload def stream_response( stream: Stream, types: Tuple[Type[P1], Type[P2], Type[P3], Type[P4], Type[P5], Type[P6]] ) -> Iterator[Union[P1, P2, P3, P4, P5, P6]]: """Signature if exactly six types are passed in for the 
`types` parameter""" @overload def stream_response( stream: Stream, types: Tuple[ Type[P1], Type[P2], Type[P3], Type[P4], Type[P5], Type[P6], Type[P7] ] ) -> Iterator[Union[P1, P2, P3, P4, P5, P6, P7]]: """Signature if exactly seven types are passed in for the `types` parameter""" @overload def stream_response( stream: Stream, types: Tuple[ Type[P1], Type[P2], Type[P3], Type[P4], Type[P5], Type[P6], Type[P7], Type[P8] ] ) -> Iterator[Union[P1, P2, P3, P4, P5, P6, P7, P8]]: """Signature if exactly eight types are passed in for the `types` parameter""" @overload def stream_response( stream: Stream, types: Tuple[ Type[P1], Type[P2], Type[P3], Type[P4], Type[P5], Type[P6], Type[P7], Type[P8], Type[P9] ] ) -> Iterator[Union[P1, P2, P3, P4, P5, P6, P7, P8, P9]]: """Signature if exactly nine types are passed in for the `types` parameter""" @overload def stream_response( stream: Stream, types: Tuple[ Type[P1], Type[P2], Type[P3], Type[P4], Type[P5], Type[P6], Type[P7], Type[P8], Type[P9], Type[P10] ] ) -> Iterator[Union[P1, P2, P3, P4, P5, P6, P7, P8, P9, P10]]: """Signature if exactly ten types are passed in for the `types` parameter""" # We have to be more generic in our type-hinting for the concrete implementation # Otherwise, MyPy struggles to figure out that it's a valid argument to `isinstance` def stream_response( stream: Stream, types: Union[type, Tuple[type, ...]] ) -> Iterator[Packet]: while response := stream(): if isinstance(response, Done): return if isinstance(response, types): yield response def print_messages(stream: Stream) -> None: for m in stream_response(stream, Message): print(m.msg) msgs = iter((Message(0, "hello"), Exn("Oops", (1, 42)), Done())) print_messages(lambda: next(msgs)) Strategies for making this less verbose If you wanted to make this more concise, one way of achieving that is to introduce an alias for certain typing constructs. 
The danger here is that the intent and meaning of the type hint gets quite difficult to read, but it does make overloads 7-10 look a lot less horrific: from dataclasses import dataclass from typing import ( Callable, Tuple, Union, Iterator, overload, TypeVar, Type, Sequence ) @dataclass class Packet: pass P1 = TypeVar('P1', bound=Packet) P2 = TypeVar('P2', bound=Packet) P3 = TypeVar('P3', bound=Packet) P4 = TypeVar('P4', bound=Packet) P5 = TypeVar('P5', bound=Packet) P6 = TypeVar('P6', bound=Packet) P7 = TypeVar('P7', bound=Packet) P8 = TypeVar('P8', bound=Packet) P9 = TypeVar('P9', bound=Packet) P10 = TypeVar('P10', bound=Packet) _P = TypeVar('_P', bound=Packet) S = Type[_P] T7 = Tuple[S[P1], S[P2], S[P3], S[P4], S[P5], S[P6], S[P7]] T8 = Tuple[S[P1], S[P2], S[P3], S[P4], S[P5], S[P6], S[P7], S[P8]] T9 = Tuple[S[P1], S[P2], S[P3], S[P4], S[P5], S[P6], S[P7], S[P8], S[P9]] T10 = Tuple[S[P1], S[P2], S[P3], S[P4], S[P5], S[P6], S[P7], S[P8], S[P9], S[P10]] @dataclass class Done(Packet): pass @dataclass class Exn(Packet): exn: str loc: Tuple[int, int] @dataclass class Message(Packet): ref: int msg: str Stream = Callable[[], Union[Packet, None]] @overload def stream_response(stream: Stream, types: Type[P1]) -> Iterator[P1]: """Signature if exactly one type is passed in for the `types` parameter""" @overload def stream_response( stream: Stream, types: Tuple[Type[P1], Type[P2]] ) -> Iterator[Union[P1, P2]]: """Signature if exactly two types are passed in for the `types` parameter""" @overload def stream_response( stream: Stream, types: Tuple[Type[P1], Type[P2], Type[P3]] ) -> Iterator[Union[P1, P2, P3]]: """Signature if exactly three types are passed in for the `types` parameter""" @overload def stream_response( stream: Stream, types: Tuple[Type[P1], Type[P2], Type[P3], Type[P4]] ) -> Iterator[Union[P1, P2, P3, P4]]: """Signature if exactly four types are passed in for the `types` parameter""" @overload def stream_response( stream: Stream, types: Tuple[Type[P1], Type[P2], Type[P3], Type[P4], Type[P5]] ) -> Iterator[Union[P1, P2, P3, P4, P5]]: """Signature if exactly five types are passed in for the `types` parameter""" @overload def stream_response( stream: Stream, types: Tuple[Type[P1], Type[P2], Type[P3], Type[P4], Type[P5], Type[P6]] ) -> Iterator[Union[P1, P2, P3, P4, P5, P6]]: """Signature if exactly six types are passed in for the `types` parameter""" @overload def stream_response( stream: Stream, types: T7[P1, P2, P3, P4, P5, P6, P7] ) -> Iterator[Union[P1, P2, P3, P4, P5, P6, P7]]: """Signature if exactly seven types are passed in for the `types` parameter""" @overload def stream_response( stream: Stream, types: T8[P1, P2, P3, P4, P5, P6, P7, P8] ) -> Iterator[Union[P1, P2, P3, P4, P5, P6, P7, P8]]: """Signature if exactly eight types are passed in for the `types` parameter""" @overload def stream_response( stream: Stream, types: T9[P1, P2, P3, P4, P5, P6, P7, P8, P9] ) -> Iterator[Union[P1, P2, P3, P4, P5, P6, P7, P8, P9]]: """Signature if exactly nine types are passed in for the `types` parameter""" @overload def stream_response( stream: Stream, types: T10[P1, P2, P3, P4, P5, P6, P7, P8, P9, P10] ) -> Iterator[Union[P1, P2, P3, P4, P5, P6, P7, P8, P9, P10]]: """Signature if exactly ten types are passed in for the `types` parameter""" # We have to be more generic in our type-hinting for the concrete implementation # Otherwise, MyPy struggles to figure out that it's a valid argument to `isinstance` def stream_response( stream: Stream, types: Union[type, Tuple[type, ...]] ) -> 
Iterator[Packet]: while response := stream(): if isinstance(response, Done): return if isinstance(response, types): yield response def print_messages(stream: Stream) -> None: for m in stream_response(stream, Message): print(m.msg) msgs = iter((Message(0, "hello"), Exn("Oops", (1, 42)), Done())) print_messages(lambda: next(msgs))
5
3
68,803,511
2021-8-16
https://stackoverflow.com/questions/68803511/mypy-error-incompatible-types-in-assignment-expression-has-type-dictnothing
I tried to instantiate an empty dictionary on the second level of an existing dict, then assign a key-value pair to it, but MyPy throws an error. Here is a minimal example, which will reproduce it when MyPy checking is activated: result = {"Test": "something"} result['key'] = {} result['key']['sub_key'] = ["some string", "another string"] The error here will be something like: mypy(error): Incompatible types in assignment (expression has type "Dict[<nothing>, <nothing>]", target has type "List[str]") How do I prevent this error? According to a similar problem, it was suggested to do result['key'] = {} # type: ignore as a workaround, but this does not seem very elegant, which is why I'm wondering if there is more one can do.
The problem Right, so let's look at the first two lines here. First, you define your dictionary result. You define it like so: result = {"Test": "something"} You don't declare what types you expect the keys and values of result to have, so MyPy is left to work it out for itself. Alright, it says, I can do this — you've got only strings as dictionary keys, and only strings as dictionary values, therefore result must be of type dict[str, str]. Then we get to line 2: result['key'] = {} And MyPy, quite reasonably, raises an error. "Hey, this looks like a mistake!" it says. You've only got strings as dictionary values in this so far, and you haven't explicitly told MyPy that it's okay to have non-str values in this dictionary, so MyPy reckons you've probably made an error here, and didn't mean to add that value to the dictionary. No real need to look at the third line, because it's basically the same thing going on How to fix this? There are a couple of ways of fixing this. In order of most-preferred (use if possible) to least-preferred (use only as a last resort): You can tell MyPy that this specific dictionary has certain types associated with certain string-keys by annotating it as a TypedDict. (You'll have to change your "sub-key" key to "sub_key", however, as "sub-key" isn't a valid variable name.) from typing import TypedDict class KeyDict(TypedDict, total=False): sub_key: list[str] class ResultDict(TypedDict, total=False): Test: str key: KeyDict result: ResultDict = {"Test": "something"} result['key'] = {} result['key']['sub_key'] = ["some string", "another string"] You can tell MyPy that any value in this dictionary could be of type str, or it could be of type dict[str, list[str]]: from typing import Union result: dict[str, Union[str, dict[str, list[str]]]] = {"Test": "something"} d: dict[str, list[str]] = {} d['sub_key'] = ["some string", "another string"] result['key'] = d You can tell MyPy that the value in this dictionary could be anything (not that different from switching off the type-checker): from typing import Any result: dict[str, Any] = {"Test": "something"} result['key'] = {} result['key']['sub_key'] = ["some string", "another string"] You can switch off the type-checker: result = {"Test": "something"} result['key'] = {} # type: ignore[assignment] result['key']['sub_key'] = ["some string", "another string"] # type: ignore[index] Anyway, it's a bit difficult to know what the "best" solution is here without knowing more about your use case.
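If you want to double-check what MyPy actually inferred for result before picking one of the fixes above, a quick (hypothetical) sketch is to run the snippet below through mypy itself; reveal_type is evaluated by the type checker rather than at runtime, and the exact wording of the note can vary between mypy versions:
result = {"Test": "something"}
reveal_type(result)  # mypy reports something like: Revealed type is "builtins.dict[builtins.str, builtins.str]"
Seeing dict[str, str] here confirms why the later assignments are rejected.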
8
25
68,779,189
2021-8-13
https://stackoverflow.com/questions/68779189/why-do-i-get-this-dbeaver-error-when-importing-data-from-a-csv-file
I'm a student, and I'm working on a project. The premise is that this program I'm working on takes as input the days (M-F) on which a student is enrolled, the number of hours per day they are enrolled, and which course they are enrolled in. Then, it queries a PostgreSQL database for the amount of progress hours (the "par value" of how long an assignment should take) for each module, and runs an algorithm, outputting the approximate due dates that students should shoot for in order to be done with their course by the deadline (which is set for the entire course rather than for individual assignments). Now, I've had the database working in the past, but for the next deliverable I am overhauling the program to add support for students selecting their program, which will then populate an option menu with the appropriate courses. To do this, I need to add more test data. In the course of importing additional CSV files into a schema, an error arose: Can't load entity attributes Reason: Can't find target attribute [Sequence] In the CSV file, "Sequence” is the name of one of the columns. What does this error mean? And, how might I go about resolving it? I have looked over the CSV and I can’t find anything obviously wrong; the heading is there, along with all the associated data. I could in theory create a connection in DBeaver directly with the CSV files rather than importing their data into a database, and I’ve tested doing it that way and it does work as expected, but the Python module I’m using to query the database (psycopg2) doesn’t seem to be able to find it if I do it like that as opposed to using a traditional database (which is how I did it prior to altering my database). Edit: In response to Adrian's comment, a sample of the CSV data is as follows: PK,Active,Sequence,Name,Number,Hours,Start Date,Stop Date,Modified When,Modified By 183328,TRUE,1,ASP.NET Core Fundamentals,1,2.00,07-01-2020,06-30-2299,06-12-2020 09:09:27 AM,Laraine.Moellendorf 183329,TRUE,2,F-Q1,2,3.00,07-01-2020,06-30-2299,06-12-2020 09:09:27 AM,Laraine.Moellendorf 183330,TRUE,3,Assignment - Clock-in Station,3,5.00,07-01-2020,06-30-2299,06-12-2020 09:09:27 AM,Laraine.Moellendorf 183331,TRUE,4,ASP.NET MVC Fundamentals,4,2.00,07-01-2020,06-30-2299,06-12-2020 09:09:27 AM,Laraine.Moellendorf 183332,TRUE,5,MVC-Q1,5,3.00,07-01-2020,06-30-2299,06-12-2020 09:09:27 AM,Laraine.Moellendorf 183333,TRUE,6,Wishlist Application,6,5.00,07-01-2020,06-30-2299,06-12-2020 09:09:27 AM,Laraine.Moellendorf 183334,TRUE,7,ASP.NET Web APIs,7,2.00,07-01-2020,06-30-2299,06-12-2020 09:09:27 AM,Laraine.Moellendorf 183335,TRUE,8,API-Q1,8,3.00,07-01-2020,06-30-2299,06-12-2020 09:09:27 AM,Laraine.Moellendorf 183336,TRUE,9,Starchart API,9,5.00,07-01-2020,06-30-2299,06-12-2020 09:09:27 AM,Laraine.Moellendorf 183337,TRUE,10,Dependency Injection,10,2.00,07-01-2020,06-30-2299,06-12-2020 09:09:27 AM,Laraine.Moellendorf I'm not importing this data into an existing table; rather, DBeaver has a feature where one can import a CSV without having an existing table, and a table with matching field headings and data types will be generated to place the data into. So, I do not have a schema to provide. 
The DDL that is producing the error is below: CREATE TABLE "Software Development".newtable ( pk integer NULL, active boolean NULL, "Sequence" integer NULL, "name" varchar(32) NULL, "Number" integer NULL, hours real NULL, "Start Date" varchar(10) NULL, "Stop Date" varchar(10) NULL, "Modified When" varchar(22) NULL, "Modified By" varchar(19) NULL ); In answer to Adrian's last question, DBeaver can link directly to a CSV file rather than importing its data into a table within a database. That was what I meant by it not being a "traditional database".
I've upgraded to DBeaver version 21.2.0 and everything is working now. I was getting the error while trying to export a query to a SQLite table. So the solution is to upgrade DBeaver.
6
3
68,779,350
2021-8-13
https://stackoverflow.com/questions/68779350/pylance-reportmissingmodulesource-with-docker
I'm getting missing-import errors when importing packages in my Django project; I think it's because the packages are installed inside a Docker container. But how can I make VS Code aware that the packages are installed? If I select the interpreter of a venv in which I have installed Django or the other packages, it doesn't give me that warning, but I don't think creating a venv and installing all the packages there is the right approach. Or maybe it is?
It's recommended to install the packages individually, but if you want to reuse them, you can add their paths to the PYTHONPATH. You can modify the PYTHONPATH in two ways: Add this to the settings.json file to modify the PYTHONPATH in the terminal: "terminal.integrated.env.windows": { "PYTHONPATH": "xxx/site-packages" } Create a .env file under your workspace and add this setting in it to modify the PYTHONPATH for the extension and debugger: PYTHONPATH=xxx/site-packages You can refer to the docs to understand the effects of these two configurations.
7
0
68,806,714
2021-8-16
https://stackoverflow.com/questions/68806714/determining-exactly-what-is-pickled-during-python-multiprocessing
As explained in the thread What is being pickled when I call multiprocessing.Process? there are circumstances where multiprocessing requires little to no data to be transferred via pickling. For example, on Unix systems, the interpreter uses fork() to create the processes, and objects which already exist when multiprocessing starts can be accessed by each process without pickling. However, I'm trying to consider scenarios beyond "here's how it's supposed to work". For example, the code may have a bug and an object which is supposed to be read-only is inadvertently modified, leading to it being pickled and transferred to other processes. Is there some way to determine what, or at least how much, is pickled during multiprocessing? The information doesn't necessarily have to be in real-time, but it would be helpful if there were a way to get some statistics (e.g., number of objects pickled) which might give a hint as to why something is taking longer to run than intended because of unexpected pickling overhead. I'm looking for a solution internal to the Python environment. Process tracing (e.g. Linux strace), network snooping, generalized IPC statistics, and similar solutions that might be used to count the number of bytes moving between processes aren't going to be specific enough to identify object pickling versus other types of communication. Updated: Disappointingly, there appears to be no way to gather pickling statistics short of hacking up the module and/or interpreter sources. However, @aaron does explain this and clarified a few minor points, so I have accepted the answer.
Multiprocessing isn't exactly a simple library, but once you're familiar with how it works, it's pretty easy to poke around and figure it out. You usually want to start with context.py. This is where all the useful classes get bound depending on OS, and... well... the "context" you have active. There are 4 basic contexts: Fork, ForkServer, and Spawn for posix; and a separate Spawn for windows. These in turn each have their own "Popen" (called at start()) to launch a new process to handle the separate implementations. popen_fork.py creating a process literally calls os.fork(), and then in the child organizes to run BaseProcess._bootstrap() which sets up some cleanup stuff then calls self.run() to execute the code you give it. No pickling occurs to start a process this way because the entire memory space gets copied (with some exceptions. see: fork(2)). popen_spawn_xxxxx.py I am most familiar with windows, but I assume both the win32 and posix versions operate in a very similar manner. A new python process is created with a simple crafted command line string including a pair of pipe handles to read/write from/to. The new process will import the __main__ module (generally equal to sys.argv[0]) in order to have access to all the needed references. Then it will execute a simple bootstrap function (from the command string) that attempts to read and un-pickle a Process object from its pipe it was created with. Once it has the Process instance (a new object which is a copy; not just a reference to the original), it will again arrange to call _bootstrap(). popen_forkserver.py The first time a new process is created with the "forkserver" context, a new process will be "spawn"ed running a simple server (listening on a pipe) which handles new process requests. Subsequent process requests all go to the same server (based on import mechanics and a module-level global for the server instance). New processes are then "fork"ed from that server in order to save the time of spinning up a new python instance. These new processes however can't have any of the same (as in same object and not a copy) Process objects because the python process they were forked from was itself "spawn"ed. Therefore the Process instance is pickled and sent much like with "spawn". The benefits of this method include: The process doing the forking is single threaded to avoid deadlocks. The cost of spinning up a new python interpreter is only paid once. The memory consumption of the interpreter, and any modules imported by __main__ can largely be shared due to "fork" generally using copy-on-write memory pages. In all cases, once the split has occurred, you should consider the memory spaces totally separate, and the only communication between them is via pipes or shared memory. Locks and Semaphores are handled by an extension library (written in c), but are basically named semaphores managed by the OS. Queue's, Pipe's and multiprocessing.Manager's use pickling to synchronize changes to the proxy objects they return. The new-ish multiprocessing.shared_memory uses a memory-mapped file or buffer to share data (managed by the OS like semaphores). To address your concern: the code may have a bug and an object which is supposed to read-only is inadvertently modified, leading to its pickling to be transferred to other processes. This only really applies to multiprocessing.Manager proxy objects. As everything else requires you to be very intentional about sending and receiveing data, or instead uses some other transfer mechanism than pickling.
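As a rough way to get the kind of statistics the question asks about, one option is to wrap the dumps classmethod of multiprocessing.reduction.ForkingPickler, which the queue/pipe machinery uses for serialization. This is only a diagnostic sketch under the assumption that the internal module layout stays as it currently is (it is not a public API), and the counters live in the process that does the pickling, so run it in the parent to see what the parent sends:
import multiprocessing as mp
from multiprocessing import reduction

_orig_dumps = reduction.ForkingPickler.dumps
pickle_stats = {"calls": 0, "bytes": 0}

def counting_dumps(obj, protocol=None):
    # Delegate to the real implementation, then record how much was pickled
    buf = _orig_dumps(obj, protocol)
    pickle_stats["calls"] += 1
    pickle_stats["bytes"] += len(buf)
    return buf

reduction.ForkingPickler.dumps = counting_dumps  # monkeypatch, diagnostics only

def work(x):
    return x * x

if __name__ == "__main__":
    with mp.Pool(2) as pool:
        pool.map(work, range(1000))
    print(pickle_stats)  # number of pickling calls and total bytes sent by the parent
This won't attribute bytes to individual objects, but a sudden jump in the totals is a useful hint that something unexpected is being shipped between processes.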
6
6
68,780,808
2021-8-14
https://stackoverflow.com/questions/68780808/xml-to-srt-conversion-not-working-after-installing-pytube
I have installed pytube to extract captions from some youtube videos. Both the following code give me the xml captions. from pytube import YouTube yt = YouTube('https://www.youtube.com/watch?v=4ZQQofkz9eE') caption = yt.captions['a.en'] print(caption.xml_captions) and also as mentioned in the docs yt = YouTube('http://youtube.com/watch?v=2lAe1cqCOXo') caption = yt.captions.get_by_language_code('en') caption.xml_captions But in both cases, I get the xml output and when use print(caption.generate_srt_captions()) I get an error like the following. Can you help on how to extract the srt format? KeyError ~/anaconda3/envs/myenv/lib/python3.6/site-packages/pytube/captions.py in generate_srt_captions(self) 49 recompiles them into the "SubRip Subtitle" format. 50 """ 51 return self.xml_caption_to_srt(self.xml_captions) 52 53 @staticmethod ~/anaconda3/envs/myenv/lib/python3.6/site-packages/pytube/captions.py in xml_caption_to_srt(self, xml_captions) 81 except KeyError: 82 duration = 0.0 83 start = float(child.attrib["start"]) 84 end = start + duration 85 sequence_number = i + 1 # convert from 0-indexed to 1. KeyError: 'start'
This is a bug in the library itself. Everything below is done in pytube 11.01. In the captions.py file on line 76 replace: for i, child in enumerate(list(root)): to: for i, child in enumerate(list(root.findall('body/p'))): Then on line 83, replace: duration = float(child.attrib["dur"]) to: duration = float(child.attrib["d"]) Then on line 86, replace: start = float(child.attrib["start"]) to: start = float(child.attrib["t"]) If only the number of lines and time will be displayed but no subtitle text, replace line 77: text = child.text or "" to: text = ''.join(child.itertext()).strip() if not text: continue It worked for me, python 3.9, pytube 11.01. Good luck!
5
7
68,817,652
2021-8-17
https://stackoverflow.com/questions/68817652/why-my-python-code-is-extracting-the-same-data-for-all-the-elements-in-my-list
My project consists of making a competitive watch table for hotel rates for an agency. It is a painful action that I wanted to automate, the code extract correctly the name of hotels and the prices I want to extract but it's working correctly only for the first hotel and I don't know where is the problem. I provide you with the code and the output, if any of you can help me and thank you in advance. NB : the code 2 works correctly but when i've added more operations the problem appeared code 1 #!/usr/bin/env python # coding: utf-8 import time from time import sleep import ast import pandas as pd from selenium import webdriver from selenium.webdriver.support.ui import WebDriverWait, Select from selenium.common.exceptions import StaleElementReferenceException, NoSuchElementException from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By driver = webdriver.Chrome("C:\\Users\\marketing2\\Documents\\chromedriver.exe") driver.get('https://tn.tunisiebooking.com/') # params to select params = { 'destination': 'Tozeur', 'date_from': '11/09/2021', 'date_to': '12/09/2021', 'bedroom': '1' } # select destination destination_select = Select(WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.ID, 'ville_des')))) destination_select.select_by_value(params['destination']) # select bedroom bedroom_select = Select(WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.ID, 'select_ch')))) bedroom_select.select_by_value(params['bedroom']) # select dates script = f"document.getElementById('checkin').value ='{params['date_from']}';" script += f"document.getElementById('checkout').value ='{params['date_to']}';" script += f"document.getElementById('depart').value ='{params['date_from']}';" script += f"document.getElementById('arrivee').value ='{params['date_to']}';" driver.execute_script(script) # submit form btn_rechercher = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="boutonr"]'))) btn_rechercher.click() urls = [] hotels = WebDriverWait(driver, 10).until(EC.presence_of_all_elements_located((By.XPATH, "//div[starts-with(@id,'produit_affair')]"))) for hotel in hotels: link = hotel.find_element_by_xpath(".//span[@class='tittre_hotel']/a").get_attribute("href") urls.append(link) for url in urls: driver.get(url) def existsElement(xpath): try: driver.find_element_by_id(xpath); except NoSuchElementException: return "false" else: return "true" if (existsElement('result_par_arrangement')=="false"): btn_t = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="moteur_rech"]/form/div/div[3]/div'))) btn_t.click() sleep(10) else : pass try: name = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, "//div[@class='bloc_titre_hotels']/h2"))).text arropt = driver.find_element_by_xpath("//div[contains(@class,'line_result')][1]") opt = arropt.find_element_by_tag_name("b").text num = len(arropt.find_elements_by_tag_name("option")) optiondata = {} achats = {} marges= {} selection = Select(driver.find_element_by_id("arrangement")) for i in range(num): try: selection = Select(driver.find_element_by_id("arrangement")) selection.select_by_index(i) time.sleep(2) arr = driver.find_element_by_xpath("//select[@id='arrangement']/option[@selected='selected']").text prize = driver.find_element_by_id("prix_total").text optiondata[arr] = (int(prize)) btn_passe = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, 
'//*[@id="resultat"]/div/form/div/div[2]/div[1]/div[2]/div[2]/div'))) btn_passe.click() # params to select params = { 'civilite_acheteur': 'Mlle', 'prenom_acheteur': 'test', 'nom_acheteur': 'test', 'e_mail_acheteur': '[email protected]', 'portable_acheteur': '22222222', 'ville_acheteur': 'Test', } # select civilite civilite_acheteur = Select(WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.NAME, 'civilite_acheteur')))) civilite_acheteur.select_by_value(params['civilite_acheteur']) # saisir prenom script = f"document.getElementsByName('prenom_acheteur')[0].value ='{params['prenom_acheteur']}';" script += f"document.getElementsByName('nom_acheteur')[0].value ='{params['nom_acheteur']}';" script += f"document.getElementsByName('e_mail_acheteur')[0].value ='{params['e_mail_acheteur']}';" script += f"document.getElementsByName('portable_acheteur')[0].value ='{params['portable_acheteur']}';" script += f"document.getElementsByName('ville_acheteur')[0].value ='{params['ville_acheteur']}';" driver.execute_script(script) # submit form btn_agence = driver.find_element_by_id('titre_Nabeul') btn_agence.click() btn_continuez = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.ID, 'boutonr'))) btn_continuez.click() achat = int(driver.find_element_by_xpath('/html/body/header/div[2]/div[1]/div[1]/div[4]/div[2]/div[2]').text.replace(' TND', '')) achats[arr]=achat marge =int(((float(prize) - float(achat)) / float(achat)) * 100); marges[arr]=marge optiondata[arr]=prize,achat,marge driver.get(url) btn_display = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="moteur_rech"]/form/div/div[3]/div'))) btn_display.click() sleep(10) except StaleElementReferenceException: pass except NoSuchElementException: pass s="- {} | {} : {}".format(name, opt, optiondata) print(s) ds = [] for l in s.splitlines(): d = l.split('-') if len(d) > 1: df = pd.DataFrame(ast.literal_eval(d[1].strip())) ds.append(df) for df in ds: df.reset_index(drop=True, inplace=True) df = pd.concat(ds, axis= 1) cols = df.columns cols = [((col.split('.')[0], col)) for col in df.columns] df.columns=pd.MultiIndex.from_tuples(cols) print(df.T) #print("{} : {} - {}".format(name, opt, optiondata)) code 2 from selenium.webdriver.support.ui import Select from selenium.common.exceptions import StaleElementReferenceException,NoSuchElementException urls = [] hotels = driver.find_elements_by_xpath("//div[starts-with(@id,'produit_affair')]") for hotel in hotels: link = hotel.find_element_by_xpath(".//span[@class='tittre_hotel']/a").get_attribute("href") urls.append(link) for url in urls: driver.get(url) try: name = driver.find_element_by_xpath("//div[@class='bloc_titre_hotels']/h2").text arropt = driver.find_element_by_xpath("//div[contains(@class,'line_result')][1]") opt = arropt.find_element_by_tag_name("b").text num = len(arropt.find_elements_by_tag_name("option")) optiondata = {} selection = Select(driver.find_element_by_id("arrangement")) for i in range(num): try: selection = Select(driver.find_element_by_id("arrangement")) selection.select_by_index(i) time.sleep(2) arr = driver.find_element_by_xpath("//select[@id='arrangement']/option[@selected='selected']").text prize = driver.find_element_by_id("prix_total").text optiondata[arr]=prize except StaleElementReferenceException: pass except NoSuchElementException: pass print("{} : {} - {} - {}".format(name,opt,num,optiondata))
The problem was that the script couldn't access the element listing the arrangements for the rest of the hotels in the list. I've added a function that tests for the presence of the data, and it worked: for url in urls: driver.get(url) def existsElement(xpath): try: driver.find_element_by_id(xpath); except NoSuchElementException: return "false" else: return "true" if (existsElement('result_par_arrangement')=="false"): btn_t = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="moteur_rech"]/form/div/div[3]/div'))) btn_t.click() else : pass
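An equivalent way to express that presence check, sketched here with the explicit-wait helpers already imported in the question's code (the element id and button XPath are taken from the snippet above), would be:
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

def arrangement_list_present(driver, timeout=5):
    # True if the arrangement listing is already on the page, False otherwise
    try:
        WebDriverWait(driver, timeout).until(
            EC.presence_of_element_located((By.ID, "result_par_arrangement"))
        )
        return True
    except TimeoutException:
        return False
If it returns False, click the search button as in the code above before scraping the arrangements.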
5
2
68,823,021
2021-8-17
https://stackoverflow.com/questions/68823021/groupby-roll-up-or-roll-down-for-any-kind-of-aggregates
TL;DR: How can we achieve something similar to Group By Roll Up with any kind of aggregates in pandas? (Credit to @Scott Boston for this term) I have following dataframe: P Q R S T 0 PLAC NR F HOL F 1 PLAC NR F NHOL F 2 TRTB NR M NHOL M 3 PLAC NR M NHOL M 4 PLAC NR F NHOL F 5 PLAC R M NHOL M 6 TRTA R F HOL F 7 TRTA NR F HOL F 8 TRTB NR F NHOL F 9 PLAC NR F NHOL F 10 TRTB NR F NHOL F 11 TRTB NR M NHOL M 12 TRTA NR F HOL F 13 PLAC NR F HOL F 14 PLAC R F NHOL F For a list of columns ['Q', 'R', 'S', 'T'], I want to calculate some aggregates on P column on following 4 list of grouping columns: ['Q'] ['Q', 'R'] ['Q', 'R', 'S'] ['Q', 'R', 'S', 'T'] I've already written the code to group above dataframes in an increasing number of columns, and calculate the aggregate (using count for the shake of simplicity) on each of the groupby object, and finally concatenate them: cols = list('QRST') aggCol = 'P' groupCols = [] result = [] for col in cols: groupCols.append(col) result.append(df.groupby(groupCols)[aggCol].agg(count='count').reset_index()) result = pd.concat(result)[groupCols+['count']] However, I've strong feeling that above method is not so efficient in terms of CPU time. Is there a more efficient way to apply aggregate on such continuously increasing number of columns for grouping? Why I think it is not so efficient is because: For above values, in first iteration, it groups the dataframe on Q column then calculates aggregate. Then in next iteration it groups the dataframe on Q and R, that means it again needs to group it by Q then R, but it was already grouped by Q in the first iteration, so the same operation is repeating. If there is some way to utilize the previously created groups, I think it'll be efficient. OUTPUT: Q R S T count 0 NR NaN NaN NaN 12 1 R NaN NaN NaN 3 0 NR F NaN NaN 9 1 NR M NaN NaN 3 2 R F NaN NaN 2 3 R M NaN NaN 1 0 NR F HOL NaN 4 1 NR F NHOL NaN 5 2 NR M NHOL NaN 3 3 R F HOL NaN 1 4 R F NHOL NaN 1 5 R M NHOL NaN 1 0 NR F HOL F 4 1 NR F NHOL F 5 2 NR M NHOL M 3 3 R F HOL F 1 4 R F NHOL F 1 5 R M NHOL M 1 I already looked into Is there an equivalent of SQL GROUP BY ROLLUP in Python pandas? and Pandas Pivot tables row subtotals, they don't work in my case, I already tried them i.e. 
These method can be used to get the count only, and immediately fail even for unique counts when the same identifier appears for more than one values: pd.pivot_table(df, aggCol, columns=cols, aggfunc='count', margins=True).T.reset_index() Q R S T P 0 NR F HOL F 4 1 NR F NHOL F 5 2 NR M NHOL M 3 3 NR All 3 4 R F HOL F 1 5 R F NHOL F 1 6 R M NHOL M 1 7 R All 3 UPDATE In order to avoid any unnecessary confusion with just getting the count as per suggestion in the comment, I have added it for the mean as aggregate, changing P column to a numeric type: P Q R S T 0 9 NR F HOL F 1 7 NR F NHOL F 2 3 NR M NHOL M 3 9 NR M NHOL M 4 1 NR F NHOL F 5 0 R M NHOL M 6 1 R F HOL F 7 7 NR F HOL F 8 2 NR F NHOL F 9 2 NR F NHOL F 10 1 NR F NHOL F 11 2 NR M NHOL M 12 3 NR F HOL F 13 6 NR F HOL F 14 0 R F NHOL F cols = list('QRST') cols = list('QRST') aggCol = 'P' groupCols = [] result = [] for col in cols: groupCols.append(col) result.append(df.groupby(groupCols)[aggCol] .agg(agg=np.mean) .round(2).reset_index()) result = pd.concat(result)[groupCols+['agg']] >>> result Q R S T agg 0 NR NaN NaN NaN 4.33 1 R NaN NaN NaN 0.33 0 NR F NaN NaN 4.22 1 NR M NaN NaN 4.67 2 R F NaN NaN 0.50 3 R M NaN NaN 0.00 0 NR F HOL NaN 6.25 1 NR F NHOL NaN 2.60 2 NR M NHOL NaN 4.67 3 R F HOL NaN 1.00 4 R F NHOL NaN 0.00 5 R M NHOL NaN 0.00 0 NR F HOL F 6.25 1 NR F NHOL F 2.60 2 NR M NHOL M 4.67 3 R F HOL F 1.00 4 R F NHOL F 0.00 5 R M NHOL M 0.00
Building on the idea of @ScottBoston (progressive aggregation, i.e., repeatedly aggregating on the previous aggregate result), we can do something that is relatively generic with regard to the aggregation function, if that function can be expressed as a composition of functions ((f3 ∘ f2 ∘ f2 ∘ ... ∘ f1)(x), or in other words: f3(f2(f2(...(f1(x)))))). For example, sum works fine as is, because sum is associative, so the sum of group sums is the sum of the whole. For count, the initial function (f1) is indeed count, but f2 has to be sum, and the final f3 has to be identity. For mean, the initial function (f1) has to produce two quantities: sum and count. The intermediary function f2 can be sum, and the final function (f3) has then to be the ratio of the two quantities. Here is a rough template, with a few functions defined. As an added bonus, the function also optionally produces a grand total: # think map-reduce: first map, then reduce (arbitrary number of times), then map to result myfuncs = { 'sum': [sum, sum], 'prod': ['prod', 'prod'], 'count': ['count', sum], 'set': [set, lambda g: set.union(*g)], 'list': [list, sum], 'mean': [[sum, 'count'], sum, lambda r: r[0]/r[1]], 'var': [ [lambda x: (x**2).sum(), sum, 'count'], sum, lambda r: (r[0].sum() - r[1].sum()**2 / r[2]) / (r[2] - 1)], 'std': [ [lambda x: (x**2).sum(), sum, 'count'], sum, lambda r: np.sqrt((r[0].sum() - r[1].sum()**2 / r[2]) / (r[2] - 1))], } totalCol = '__total__' def agg(df, cols, aggCol, fun, total=True): if total: cols = [totalCol] + cols df = df.assign(__total__=0) funs = myfuncs[fun] b = df.groupby(cols).agg({aggCol: funs[0]}) frames = [b.reset_index()] for k in range(1, len(cols)): b = b.groupby(cols[:-k]).agg(funs[1]) frames.append(b.reset_index()) result = pd.concat(frames).reset_index(drop=True) result = result[frames[0].columns] if len(funs) > 2: s = result[aggCol].apply(funs[2], axis=1) result = result.drop(aggCol, axis=1, level=0) result[aggCol] = s result.columns = result.columns.droplevel(-1) if total: result = result.drop(columns=[totalCol]) return result Examples cols = list('QRST') aggCol = 'P' >>> agg(df, cols, aggCol, 'count') Q R S T P 0 NR F HOL F 4 1 NR F NHOL F 5 2 NR M NHOL M 3 3 R F HOL F 1 .. ... ... ... ... .. 15 R M NaN NaN 1 16 NR NaN NaN NaN 12 17 R NaN NaN NaN 3 18 NaN NaN NaN NaN 15 >>> agg(df, cols, aggCol, 'mean') Q R S T P 0 NR F HOL F 6.250000 1 NR F NHOL F 2.600000 2 NR M NHOL M 4.666667 3 R F HOL F 1.000000 .. ... ... ... ... ... 15 R M NaN NaN 0.000000 16 NR NaN NaN NaN 4.333333 17 R NaN NaN NaN 0.333333 18 NaN NaN NaN NaN 3.533333 >>> agg(df, cols, aggCol, 'sum') Q R S T P 0 NR F HOL F 25 1 NR F NHOL F 13 2 NR M NHOL M 14 3 R F HOL F 1 .. ... ... ... ... .. 15 R M NaN NaN 0 16 NR NaN NaN NaN 52 17 R NaN NaN NaN 1 18 NaN NaN NaN NaN 53 >>> agg(df, cols, aggCol, 'set') Q R S T P 0 NR F HOL F {9, 3, 6, 7} 1 NR F NHOL F {1, 2, 7} 2 NR M NHOL M {9, 2, 3} 3 R F HOL F {1} .. ... ... ... ... ... 15 R M NaN NaN {0} 16 NR NaN NaN NaN {1, 2, 3, 6, 7, 9} 17 R NaN NaN NaN {0, 1} 18 NaN NaN NaN NaN {0, 1, 2, 3, 6, 7, 9} >>> agg(df, cols, aggCol, 'std') Q R S T P 0 NR F HOL F 2.500000 1 NR F NHOL F 2.509980 2 NR M NHOL M 3.785939 3 R F HOL F NaN .. ... ... ... ... ... 15 R M NaN NaN NaN 16 NR NaN NaN NaN 3.055050 17 R NaN NaN NaN 0.577350 18 NaN NaN NaN NaN 3.181793 Notes The code is not as 'pure' as I would like it to be. There are two reasons for that: groupby likes to do some magic to the shape of the result. 
For example, in some cases (but not always, strangely enough), if there is only one resulting group, the output is sometimes squeezed to a Series. the pandas arithmetic on set seems sometimes bogus, or finicky at best. My initial definition had: 'set': [set, sum] and this was working reasonably well (pandas seems to sometimes understand that .agg(sum) on a Series of set objects, it is desirable to apply set.union), except that, weirdly enough, in some conditions we'd get a NaN result instead. This only works for a single aggCol. The expressions for std and var are relatively naive. For improved numerical stability, see Standard Deviation: Rapid calculation methods. Speed Since the original posting of this answer, another solution has been proposed by @U12-Forward. After a bit of cleaning (e.g. not using recursion, and changing the agg dtype to whatever it needs to be, instead of object, this solution becomes: def v_u12(df, cols, aggCol, fun): newdf = pd.DataFrame(columns=cols) for count in range(1, len(cols)+1): groupcols = cols[:count] newdf = newdf.append( df.groupby(groupcols)[aggCol].agg(fun).reset_index().reindex(columns=groupcols + [aggCol]), ignore_index=True, ) return newdf To compare speed, let's generate DataFrames of arbitrary sizes: def gen_example(n, m=4, seed=-1): if seed >= 0: np.random.seed(seed) aggCol = 'v' cols = list(ascii_uppercase)[:m] choices = [['R', 'NR'], ['F', 'M'], ['HOL', 'NHOL']] df = pd.DataFrame({ aggCol: np.random.uniform(size=n), **{ k: np.random.choice(choices[np.random.randint(0, len(choices))], n) for k in cols }}) return df # example >>> gen_example(8, 5, 0) v A B C D E 0 0.548814 R M F M NR 1 0.715189 R M M M R 2 0.602763 NR M M F R 3 0.544883 R F F M NR 4 0.423655 NR M F F NR 5 0.645894 NR F M M R 6 0.437587 R M F M NR 7 0.891773 R F M M R We can now compare speed over a range of sizes, using the excellent perfplot package, plus a few definitions: m = 4 aggCol, *cols = gen_example(2, m).columns fun = 'mean' def ours(df): funname = fun if isinstance(fun, str) else fun.__name__ return agg(df, cols, aggCol, funname, total=False) def u12(df): return v_u12(df, cols, aggCol, fun) def equality_check(a, b): a = a.sort_values(cols).reset_index(drop=True) b = b.sort_values(cols).reset_index(drop=True) non_numeric = a[aggCol].dtype == 'object' if non_numeric: return a[cols+[aggCol]].equals(b[cols+[aggCol]]) return a[cols].equals(b[cols]) and np.allclose(a[aggCol], b[aggCol]) perfplot.show( time_unit='auto', setup=lambda n: gen_example(n, m), kernels=[ours, u12], n_range=[2 ** k for k in range(4, 21)], equality_check=equality_check, xlabel=f'n rows\n(m={m} columns, fun={fun})' ) The comparisons for a few aggregation functions and m values are below (y-axis is average time: lower is better): m fun perfplot 4 'mean' 10 'mean' 10 'sum' 4 'set' For functions that are not associative (e.g. 'mean'), our "progressive re-aggregation" needs to keep track of multiple values (e.g., for mean: sum and count), so for relatively small DataFrames, the speed is about twice as slow as u12. But as the size grows, the gain of the re-aggregation overcomes that and ours becomes faster.
12
4
68,742,863
2021-8-11
https://stackoverflow.com/questions/68742863/error-while-trying-to-fine-tune-the-reformermodelwithlmhead-google-reformer-enw
I'm trying to fine-tune the ReformerModelWithLMHead (google/reformer-enwik8) for NER. I used the padding sequence length same as in the encode method (max_length = max([len(string) for string in list_of_strings])) along with attention_masks. And I got this error: ValueError: If training, make sure that config.axial_pos_shape factors: (128, 512) multiply to sequence length. Got prod((128, 512)) != sequence_length: 2248. You might want to consider padding your sequence length to 65536 or changing config.axial_pos_shape. When I changed the sequence length to 65536, my colab session crashed by getting all the inputs of 65536 lengths. According to the second option(changing config.axial_pos_shape), I cannot change it. I would like to know, Is there any chance to change config.axial_pos_shape while fine-tuning the model? Or I'm missing something in encoding the input strings for reformer-enwik8? Thanks! Question Update: I have tried the following methods: By giving paramteres at the time of model instantiation: model = transformers.ReformerModelWithLMHead.from_pretrained("google/reformer-enwik8", num_labels=9, max_position_embeddings=1024, axial_pos_shape=[16,64], axial_pos_embds_dim=[32,96],hidden_size=128) It gives me the following error: RuntimeError: Error(s) in loading state_dict for ReformerModelWithLMHead: size mismatch for reformer.embeddings.word_embeddings.weight: copying a param with shape torch.Size([258, 1024]) from checkpoint, the shape in current model is torch.Size([258, 128]). size mismatch for reformer.embeddings.position_embeddings.weights.0: copying a param with shape torch.Size([128, 1, 256]) from checkpoint, the shape in current model is torch.Size([16, 1, 32]). This is quite a long error. Then I tried this code to update the config: model1 = transformers.ReformerModelWithLMHead.from_pretrained('google/reformer-enwik8', num_labels = 9) Reshape Axial Position Embeddings layer to match desired max seq length model1.reformer.embeddings.position_embeddings.weights[1] = torch.nn.Parameter(model1.reformer.embeddings.position_embeddings.weights[1][0][:128]) Update the config file to match custom max seq length model1.config.axial_pos_shape = 16,128 model1.config.max_position_embeddings = 16*128 #2048 model1.config.axial_pos_embds_dim= 32,96 model1.config.hidden_size = 128 output_model_path = "model" model1.save_pretrained(output_model_path) By this implementation, I am getting this error: RuntimeError: The expanded size of the tensor (512) must match the existing size (128) at non-singleton dimension 2. Target sizes: [1, 128, 512, 768]. Tensor sizes: [128, 768] Because updated size/shape doesn't match with the original config parameters of pretrained model. The original parameters are: axial_pos_shape = 128,512 max_position_embeddings = 128*512 #65536 axial_pos_embds_dim= 256,768 hidden_size = 1024 Is it the right way I'm changing the config parameters or do I have to do something else? Is there any example where ReformerModelWithLMHead('google/reformer-enwik8') model fine-tuned. 
My main code implementation is as follow: class REFORMER(torch.nn.Module): def __init__(self): super(REFORMER, self).__init__() self.l1 = transformers.ReformerModelWithLMHead.from_pretrained("google/reformer-enwik8", num_labels=9) def forward(self, input_ids, attention_masks, labels): output_1= self.l1(input_ids, attention_masks, labels = labels) return output_1 model = REFORMER() def train(epoch): model.train() for _, data in enumerate(training_loader,0): ids = data['input_ids'][0] # input_ids from encode method of the model https://huggingface.co/google/reformer-enwik8#:~:text=import%20torch%0A%0A%23%20Encoding-,def%20encode,-(list_of_strings%2C%20pad_token_id%3D0 input_shape = ids.size() targets = data['tags'] print("tags: ", targets, targets.size()) least_common_mult_chunk_length = 65536 padding_length = least_common_mult_chunk_length - input_shape[-1] % least_common_mult_chunk_length #pad input input_ids, inputs_embeds, attention_mask, position_ids, input_shape = _pad_to_mult_of_chunk_length(self=model.l1, input_ids=ids, inputs_embeds=None, attention_mask=None, position_ids=None, input_shape=input_shape, padding_length=padding_length, padded_seq_length=None, device=None, ) outputs = model(input_ids, attention_mask, labels=targets) # sending inputs to the forward method print(outputs) loss = outputs.loss logits = outputs.logits if _%500==0: print(f'Epoch: {epoch}, Loss: {loss}') for epoch in range(1): train(epoch)
First of all, you should note that google/reformer-enwik8 is not a properly trained language model and that you will probably not get decent results from fine-tuning it. enwik8 is a compression challenge and the reformer authors used this dataset for exactly that purpose: To verify that the Reformer can indeed fit large models on a single core and train fast on long sequences, we train up to 20-layer big Reformers on enwik8 and imagenet64... This is also the reason why they haven't trained a sub-word tokenizer and operate on the character level. You should also note that the LMHead is usually used for predicting the next token of a sequence (CLM). You probably want to use a token classification head (i.e. use an encoder ReformerModel and add a linear layer with 9 classes on top, plus maybe a dropout layer). Anyway, in case you want to try it still, you can do the following to reduce the memory footprint of the google/reformer-enwik8 reformer: Reduce the number of hashes during training: from transformers import ReformerConfig, ReformerModel conf = ReformerConfig.from_pretrained('google/reformer-enwik8') conf.num_hashes = 2 # or maybe even 1 model = ReformerModel.from_pretrained("google/reformer-enwik8", config=conf) After you have finetuned your model, you can increase the number of hashes again to increase the performance (compare Table 2 of the reformer paper). Replace axial-position embeddings: from transformers import ReformerConfig, ReformerModel conf = ReformerConfig.from_pretrained('google/reformer-enwik8') conf.axial_pos_embds = False model = ReformerModel.from_pretrained("google/reformer-enwik8", config=conf) This will replace the learned axial positional embeddings with learnable position embeddings like Bert's and does not require the full sequence length of 65536. They are untrained and randomly initialized (i.e. consider a longer training).
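To make the token-classification suggestion above concrete, here is a minimal sketch of the "encoder plus linear layer" idea, reusing the two config tweaks from this answer; the class and variable names are placeholders, and the 2 * hidden_size input width reflects how the reversible Reformer concatenates its two hidden streams (worth verifying against your config before relying on it):
import torch
from transformers import ReformerConfig, ReformerModel

class ReformerTokenClassifier(torch.nn.Module):
    def __init__(self, num_labels=9, dropout=0.1):
        super().__init__()
        conf = ReformerConfig.from_pretrained("google/reformer-enwik8")
        conf.num_hashes = 2           # cheaper training, as suggested above
        conf.axial_pos_embds = False  # avoid the fixed 65536 sequence length
        self.encoder = ReformerModel.from_pretrained("google/reformer-enwik8", config=conf)
        self.dropout = torch.nn.Dropout(dropout)
        # Reformer hidden states come out with width 2 * hidden_size
        self.classifier = torch.nn.Linear(2 * conf.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask=None):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        return self.classifier(self.dropout(hidden))  # (batch, seq_len, num_labels)
You would then train this with a per-character CrossEntropyLoss over the 9 labels, keeping in mind the caveat above that the enwik8 checkpoint was never meant as a general-purpose language model.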
6
3
68,762,104
2021-8-12
https://stackoverflow.com/questions/68762104/plotly-adding-scatter-geo-points-and-traces-on-top-of-density-mapbox
I am trying to add a Scattergeo trace or overlay on top of a white-bg density mapbox to get a heat map over a generic USA states outline. The reason for my use of scattergeo is I'd like to plot a star symbol on top of the density mapbox, and the only symbol accepted via add_scattermapbox is a dot. If you choose the star symbol, there is no symbol added. I'm also aware that star symbols are acceptable for the p mapbox_styles of add_scattermapbox or density_scattermapbox but at the present time I am not in the position to pay per web load after the trial amount runs out. Is there a clever way to add a star symbol on top of a density_mapbox plot? Working ScatterGeo fig = go.Figure(go.Scattergeo()) fig.add_scattergeo(lat = [30, 40] ,lon = [-90, -80] ,hoverinfo = 'none' ,marker_size = 10 ,marker_color = 'rgb(65, 105, 225)' # blue ,marker_symbol = 'star' ,showlegend = False ) fig.update_geos( visible=False, resolution=110, scope="usa", showcountries=True, countrycolor="Black", showsubunits=True, subunitcolor="Black" ) fig.show() Working Density Mapbox d = {'Location': ['Point A', 'Point B'], 'lat': [30, 40], 'long': [-90, -80], 'z': [100,200]} df = pd.DataFrame(data=d) fig = px.density_mapbox(df ,lat='lat' ,lon='long' ,z='z' ,hover_name='Location' ,center=dict(lat=38.5, lon=-96) ,range_color = [0, 200] ,zoom=2 ,radius=50 ,opacity=.5 ,mapbox_style='open-street-map') fig.add_scattermapbox(lat = [30, 40] ,lon = [-90, -80] ,hoverinfo = 'none' ,marker_size = 6 ,marker_color = 'rgb(0, 0, 0)' # ,marker_symbol = 'star' ,showlegend = False ) fig.show() Attempt #1 - Just set marker_symbol = 'star' Un-commenting the marker_symbol = 'star', which would work for the premium styles of mapbox, completely removes the scatter point. d = {'Location': ['Point A', 'Point B'], 'lat': [30, 40], 'long': [-90, -80], 'z': [100,200]} df = pd.DataFrame(data=d) fig = px.density_mapbox(df ,lat='lat' ,lon='long' ,z='z' ,hover_name='Location' ,center=dict(lat=38.5, lon=-96) ,range_color = [0, 200] ,zoom=2 ,radius=50 ,opacity=.5 ,mapbox_style='open-street-map') fig.add_scattermapbox(lat = [30, 40] ,lon = [-90, -80] ,hoverinfo = 'none' ,marker_size = 6 ,marker_color = 'rgb(0, 0, 0)' ,marker_symbol = 'star' ,showlegend = False ) fig.show() Attempt #2 - Adding a density mapbox on top of the scatter geo Adding a density_mapbox on top of the scattergeo produces the same geo plot, but nothing more. The density mapbox legend is there, but no heat map. d = {'Location': ['Point A', 'Point B'], 'lat': [30, 40], 'long': [-90, -80], 'z': [100,200]} df = pd.DataFrame(data=d) fig = go.Figure(go.Scattergeo()) fig.add_scattergeo(lat = [30, 40] ,lon = [-90, -80] ,hoverinfo = 'none' ,marker_size = 10 ,marker_color = 'rgb(65, 105, 225)' # blue ,marker_symbol = 'star' ,showlegend = False ) fig.add_densitymapbox(lat=df['lat'], lon=df['long'], z=df['z'], radius=50, opacity=.5 ) fig.update_geos( visible=False, resolution=110, scope="usa", showcountries=True, countrycolor="Black", showsubunits=True, subunitcolor="Black" ) fig.show()
tile maps and layer maps do not work together. Hence you cannot use markers from geo on mapbox thinking laterally, you can add your own geojson layers onto mapbox plots generate geometry. Have provided two options for this a simple triangle get_geom(df["long"], df["lat"], marker=None, size=k) https://labs.mapbox.com/maki-icons/ get_geom(df["long"], df["lat"], marker="star", size=k) where marker is the MAKI icon name. NB icons with holes can be filled in - for example caution adding layers to mapbox figure layout. This is parameterised to generate multiple layers to support different zoom levels. More layers, more overhead. import geopandas as gpd import pandas as pd import shapely.geometry import math import json import plotly.express as px import svgpath2mpl import requests import numpy as np d = { "Location": ["Point A", "Point B"], "lat": [30, 40], "long": [-90, -80], "z": [100, 200], } df = pd.DataFrame(data=d) fig = px.density_mapbox( df, lat="lat", lon="long", z="z", hover_name="Location", center=dict(lat=38.5, lon=-96), range_color=[0, 200], zoom=2, radius=50, opacity=0.5, mapbox_style="open-street-map", ) # https://stackoverflow.com/questions/23411688/drawing-polygon-with-n-number-of-sides-in-python-3-2 def polygon(sides, radius=1, rotation=0, translation=None): one_segment = math.pi * 2 / sides points = [(math.sin(one_segment * i + rotation) * radius, math.cos(one_segment * i + rotation) * radius,) for i in range(sides)] if translation: points = [[sum(pair) for pair in zip(point, translation)] for point in points] return shapely.geometry.Polygon(points) def makimarker(makiname="star", geo=(0, 0), size=0.1): url = f"https://raw.githubusercontent.com/mapbox/maki/main/icons/{makiname}.svg" svgpath = pd.read_xml(requests.get(url).text).loc[0, "d"] p = svgpath2mpl.parse_path(svgpath).to_polygons() # need centroid to adjust marked to be centred on geo location c = shapely.affinity.scale( shapely.geometry.Polygon(p[0]), xfact=size, yfact=size ).centroid # centre and place marker marker = shapely.geometry.Polygon( [[sum(triple) for triple in zip(point, geo, (-c.x, -c.y))] for point in p[0]] ) # finally size geometry return shapely.affinity.scale(marker, xfact=size, yfact=size) def get_geom(long_a: list, lat_a: list, marker=None, size=0.15) -> list: if marker: geo = [ makimarker(marker, geo=(long, lat), size=size) for long, lat in zip(long_a, lat_a) ] else: geo = [ polygon(3, translation=(long, lat), radius=size*10) for long, lat in zip(long_a, lat_a) ] return json.loads(gpd.GeoDataFrame(geometry=geo).to_json()) # basing math on this https://wiki.openstreetmap.org/wiki/Zoom_levels # dict is keyed by size with min/max zoom levels covered by this size MINZOOM=.1 MAXZOOM=18 LAYERS=7 zoom = 512**np.linspace(math.log(MINZOOM,512), math.log(MAXZOOM, 512), LAYERS) zoom = { (200/(2**(np.percentile(zoom[i:i+2],25)+9))): {"minzoom":zoom[i], "maxzoom":zoom[i+1], "name":i} for i in range(LAYERS-1) } # add a layers to density plot that are the markers fig.update_layout( mapbox={ "layers": [ { "source": get_geom(df["long"], df["lat"], marker="star", size=k), "type": "fill", "color": "blue", **zoom[k], } for k in zoom.keys() ] }, margin={"t": 0, "b": 0, "l": 0, "r": 0}, ) fig
6
2
68,749,370
2021-8-11
https://stackoverflow.com/questions/68749370/what-is-the-correct-way-to-update-an-slqalchemy-orm-column-from-a-pandas-datafra
I've loaded some data and modified one column in the dataframe and would like to update the DB to reflect the changes. I tried: db.session.query(sqlTableName).update({sqlTableName.sql_col_name: pdDataframe.pd_col_name}) But that just wiped out the column in the database (set every value to '0', the default). I tried a few other dataformats with no luck. I'm guessing that there is something funky going on with datatypes that I've mixed up, or you just aren't allowed to update a column with a variable like this directly. I could do this with a loop but... that would be genuinely awful. Sorry for the basic question, after a long break from a project, my grasp of sqlalchemy has certainly waned.
For uploading the DataFrame to a temporary table and then performing an UPDATE you don't need to write the SQL yourself, you can have SQLAlchemy Core do it for you: import pandas as pd import sqlalchemy as sa def update_table_columns_from_df(engine, df, table_name, cols_to_update): metadata = sa.MetaData() main_table = sa.Table(table_name, metadata, autoload_with=engine) pk_columns = [x.name for x in main_table.primary_key.columns] df.to_sql("temp_table", engine, index=False, if_exists="replace") temp_table = sa.Table("temp_table", metadata, autoload_with=engine) with engine.begin() as conn: values_clause = {x: temp_table.columns[x] for x in cols_to_update} where_clause = sa.and_( main_table.columns[x] == temp_table.columns[x] for x in pk_columns ) conn.execute( main_table.update().values(values_clause).where(where_clause) ) temp_table.drop(engine) if __name__ == "__main__": test_engine = sa.create_engine( "postgresql+psycopg2://scott:[email protected]/test", echo=True, # (for demonstration purposes) ) with test_engine.begin() as test_conn: test_conn.exec_driver_sql("DROP TABLE IF EXISTS main_table") test_conn.exec_driver_sql( """\ CREATE TABLE main_table ( id1 integer NOT NULL, id2 integer NOT NULL, txt1 varchar(50), txt2 varchar(50), CONSTRAINT main_table_pkey PRIMARY KEY (id1, id2) ) """ ) test_conn.exec_driver_sql( """\ INSERT INTO main_table (id1, id2, txt1, txt2) VALUES (1, 1, 'foo', 'x'), (1, 2, 'bar', 'y'), (1, 3, 'baz', 'z') """ ) df_updates = pd.DataFrame( [ (1, 1, "new_foo", "new_x"), (1, 3, "new_baz", "new_z"), ], columns=["id1", "id2", "txt1", "txt2"], ) update_table_columns_from_df( test_engine, df_updates, "main_table", ["txt1", "txt2"] ) """SQL emitted: UPDATE main_table SET txt1=temp_table.txt1, txt2=temp_table.txt2 FROM temp_table WHERE main_table.id1 = temp_table.id1 AND main_table.id2 = temp_table.id2 """ df_result = pd.read_sql_query( "SELECT * FROM main_table ORDER BY id1, id2", test_engine ) print(df_result) """ id1 id2 txt1 txt2 0 1 1 new_foo new_x 1 1 2 bar y 2 1 3 new_baz new_z """
7
1
68,770,788
2021-8-13
https://stackoverflow.com/questions/68770788/how-to-check-the-convergence-when-fitting-a-distribution-in-scipy
Is there a way to check the convergence when fitting a distribution in SciPy? My goal is to fit a SciPy distribution (namely Johnson S_U distr.) to dozens of datasets as a part of an automated data-monitoring system. Mostly it works fine, but a few datasets are anomalous and clearly do not follow the Johnson S_U distribution. Fits on these datasets diverge silently, i.e. without any warning/error/whatever! On the contrary, if I switch to R and try to fit there I never ever get a convergence, which is correct - regardless of the fit settings, the R algorithm denies to declare a convergence. data: Two datasets are available in Dropbox: data-converging-fit.csv ... a standard data where fit converges nicely (you may think this is an ugly, skewed and heavy-central-mass blob but the Johnson S_U is flexible enough to fit such a beast!): data-diverging-fit.csv ... an anomalous data where fit diverges: code to fit the distribution: import pandas as pd from scipy import stats distribution_name = 'johnsonsu' dist = getattr(stats, distribution_name) convdata = pd.read_csv('data-converging-fit.csv', index_col= 'timestamp') divdata = pd.read_csv('data-diverging-fit.csv', index_col= 'timestamp') On the good data, the fitted parameters have common order of magnitude: a, b, loc, scale = dist.fit(convdata['target']) a, b, loc, scale [out]: (0.3154946859186918, 2.9938226613743932, 0.002176043693009398, 0.045430055488776266) On the anomalous data, the fitted parameters are unreasonable: a, b, loc, scale = dist.fit(divdata['target']) a, b, loc, scale [out]: (-3424954.6481554992, 7272004.43156841, -71078.33596490842, 145478.1300979394) Still I get no single line of warning that the fit failed to converge. From researching similar questions on StackOverflow, I know the suggestion to bin my data and then use curve_fit. Despite its practicality, that solution is not right in my opinion, since that is not the way we fit distributions: the binning is arbitrary (the nr. of bins) and it affects the final fit. A more realistic option might be scipy.optimize.minimize and callbacks to learn the progrss of convergence; still I am not sure that it will eventually tell me whether the algorithm converged.
The johnsonu.fit method comes from scipy.stats.rv_continuous.fit. Unfortunately from the documentation it does not appear that it is possible to get any more information about the fit from this method. However, looking at the source code, it appears the actual optimization is done with fmin, which does return more descriptive parameters. You could borrow from the source code and write your own implementation of fit that checks the fmin output parameters for convergence: import numpy as np import pandas as pd from scipy import optimize, stats distribution_name = 'johnsonsu' dist = getattr(stats, distribution_name) convdata = pd.read_csv('data-converging-fit.csv', index_col= 'timestamp') divdata = pd.read_csv('data-diverging-fit.csv', index_col= 'timestamp') def custom_fit(dist, data, method="mle"): data = np.asarray(data) start = dist._fitstart(data) args = [start[0:-2], (start[-2], start[-1])] x0, func, restore, args = dist._reduce_func(args, {}, data=data) vals = optimize.fmin(func, x0, args=(np.ravel(data),)) return vals custom_fit(dist, convdata['target']) [out]: Optimization terminated successfully. Current function value: -23423.995945 Iterations: 162 Function evaluations: 274 array([3.15494686e-01, 2.99382266e+00, 2.17604369e-03, 4.54300555e-02]) custom_fit(dist, divdata['target']) [out]: Warning: Maximum number of function evaluations has been exceeded. array([-12835849.95223926, 27253596.647191 , -266388.68675908, 545225.46661612])
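If you want the convergence status programmatically instead of just the printed message, fmin can hand it back: with full_output=True it returns (xopt, fopt, iterations, funcalls, warnflag), where a non-zero warnflag means the maximum number of function evaluations or iterations was hit. A hedged variant of the custom fit above (it relies on the same private SciPy helpers, so it can break between SciPy versions):
def custom_fit_checked(dist, data, maxfun=5000):
    data = np.asarray(data)
    start = dist._fitstart(data)
    args = [start[0:-2], (start[-2], start[-1])]
    x0, func, restore, args = dist._reduce_func(args, {}, data=data)
    vals, fopt, niter, ncalls, warnflag = optimize.fmin(
        func, x0, args=(np.ravel(data),),
        maxfun=maxfun, full_output=True, disp=False,
    )
    if warnflag != 0:
        raise RuntimeError(f"Fit did not converge (warnflag={warnflag})")
    return vals
Based on the fmin messages shown above, this would raise on the anomalous dataset and return the usual parameter array for the well-behaved one.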
7
5
68,769,968
2021-8-13
https://stackoverflow.com/questions/68769968/google-document-ai-giving-different-outputs-for-the-same-file
I was using Document OCR API to extract text from a pdf file, but part of it is not accurate. I found that the reason may be due to the existence of some Chinese characters. The following is a made-up example in which I cropped part of the region that the extracted text is wrong and add some Chinese characters to reproduce the problem. When I use the website version, I cannot get the Chinese characters but the remaining characters are correct. When I use Python to extract the text, I can get the Chinese characters correctly but part of the remaining characters are wrong. The actual string that I got. Are the versions of Document AI in the website and API different? How can I get all the characters correctly? Update: When I print the detected_languages (don't know why for lines = page.lines, the detected_languages for both lines are empty list, need to change to page.blocks or page.paragraphs first) after printing the text, I get the following output. Code: from google.cloud import documentai_v1beta3 as documentai project_id= 'secret-medium-xxxxxx' location = 'us' # Format is 'us' or 'eu' processor_id = 'abcdefg123456' # Create processor in Cloud Console opts = {} if location == "eu": opts = {"api_endpoint": "eu-documentai.googleapis.com"} client = documentai.DocumentProcessorServiceClient(client_options=opts) def get_text(doc_element: dict, document: dict): """ Document AI identifies form fields by their offsets in document text. This function converts offsets to text snippets. """ response = "" # If a text segment spans several lines, it will # be stored in different text segments. for segment in doc_element.text_anchor.text_segments: start_index = ( int(segment.start_index) if segment in doc_element.text_anchor.text_segments else 0 ) end_index = int(segment.end_index) response += document.text[start_index:end_index] return response def get_lines_of_text(file_path: str, location: str = location, processor_id: str = processor_id, project_id: str = project_id): # You must set the api_endpoint if you use a location other than 'us', e.g.: # opts = {} # if location == "eu": # opts = {"api_endpoint": "eu-documentai.googleapis.com"} # The full resource name of the processor, e.g.: # projects/project-id/locations/location/processor/processor-id # You must create new processors in the Cloud Console first name = f"projects/{project_id}/locations/{location}/processors/{processor_id}" # Read the file into memory with open(file_path, "rb") as image: image_content = image.read() document = {"content": image_content, "mime_type": "application/pdf"} # Configure the process request request = {"name": name, "raw_document": document} result = client.process_document(request=request) document = result.document document_pages = document.pages response_text = [] # For a full list of Document object attributes, please reference this page: https://googleapis.dev/python/documentai/latest/_modules/google/cloud/documentai_v1beta3/types/document.html#Document # Read the text recognition output from the processor print("The document contains the following paragraphs:") for page in document_pages: lines = page.blocks for line in lines: block_text = get_text(line.layout, document) confidence = line.layout.confidence response_text.append((block_text[:-1] if block_text[-1:] == '\n' else block_text, confidence)) print(f"Text: {block_text}") print("Detected Language", line.detected_languages) return response_text if __name__ == '__main__': print(get_lines_of_text('/pdf path')) It seems the language code is wrong, will this 
affect the result?
Posting this as a Community Wiki for better visibility. One of the features of Document AI is OCR - Optical Character Recognition - which recognizes text in various kinds of files. In this scenario the OP received different outputs from the Try it function and from the Client Libraries - Python. Why are there discrepancies between Try it and the Python library? It's hard to say, as both methods use the same API (documentai_v1beta3). It might be related to modifications applied to the PDF while it is uploaded to the Try it Demo, different endpoints, language/alphabet recognition, or something else entirely. When you use the Python client you also get a confidence score for the identified text. Below are examples from my tests. However, the OP's identification confidence is about 0.73, so it may produce wrong results, and in this situation that is a visible issue. I don't think it can be improved in code; a better-quality PDF might help, since in the OP's example there are some dots which might affect identification.
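As a sketch only, the per-block confidence mentioned above can be surfaced from the loop already present in the question's code, to flag blocks that are likely to be misread (the 0.8 threshold is an arbitrary assumption, not something from the original post):

    # inside get_lines_of_text, after result.document has been obtained
    for page in document_pages:
        for block in page.blocks:
            block_text = get_text(block.layout, document)
            confidence = block.layout.confidence
            if confidence < 0.8:  # flag low-confidence OCR output for manual review
                print(f"Low confidence ({confidence:.2f}): {block_text!r}")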
5
1
68,807,896
2021-8-16
https://stackoverflow.com/questions/68807896/how-to-disable-logging-from-pytorch-lightning-logger
Logger in PyTorch-Lightning prints information about the model to be trained (or evaluated) and the progress during the training, However, in my case I would like to hide all messages from the logger in order not to flood the output in Jupyter Notebook. I've looked into the API of the Trainer class on the official docs page https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html#trainer-flags and it seems like there is no option to turn off the messages from the logger. There is a parameter log_every_n_steps which can be set to big value, but nevertheless, the logging result after each epoch is displayed. How can one disable the logging?
I am assuming that two things are particularly bothering you in terms of flooding the output stream: One, the "weight summary": | Name | Type | Params -------------------------------- 0 | l1 | Linear | 100 K 1 | l2 | Linear | 1.3 K -------------------------------- ... Second, the progress bar: Epoch 0: 74%|███████████ | 642/1874 [00:02<00:05, 233.59it/s, loss=0.85, v_num=wxln] PyTorch Lightning provides very clear and elegant solutions for turning them off: Trainer(progress_bar_refresh_rate=0) for turning off the progress bar and Trainer(weights_summary=None) for turning off the weight summary.
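A minimal sketch combining both flags in one Trainer; the argument names match the Lightning API current at the time of this question (newer releases have since renamed these options, so treat the exact names as version-dependent):

    import pytorch_lightning as pl

    trainer = pl.Trainer(
        progress_bar_refresh_rate=0,  # turn off the progress bar
        weights_summary=None,         # turn off the weight summary printout
    )
    trainer.fit(model)  # 'model' is whatever LightningModule you are training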
10
7
68,815,761
2021-8-17
https://stackoverflow.com/questions/68815761/how-to-customize-fastapi-request-body-documentation
I'm using FastAPI to serve ML models. My endpoint receives and sends JSON data of the form: [ {"id": 1, "data": [{"code": "foo", "value": 0.1}, {"code": "bar", "value": 0.2}, ...]}, {"id": 2, "data": [{"code": "baz", "value": 0.3}, {"code": "foo", "value": 0.4}, ...]}, ... ] My models and app look as follows: from typing import Dict, List from fastapi import Body from fastapi.responses import JSONResponse from pydantic import BaseModel import pandas as pd class Item(BaseModel): code: str value: float class Sample(BaseModel): id: int data: List[Item] app = FastAPI() @app.post("/score", response_model=List[Sample]) # correct response documentation def score(input_data: List[Sample] = Body(...)): # 1. conversion dict -> Pydantic models, slow input_df: pd.DataFrame = models_to_df(input_data) # 2. conversion Pydantic models -> df output_df: pd.DataFrame = predict(input_df) output_data: Dict = df_to_dict(output_df) # direct conversion df -> dict, fast return JSONResponse(output_data) Everything works fine and the automated documentation looks good, but the performance is bad. Since the data can be quite large, Pydantic conversion and validation can take a lot of time. This can easily be solved by writing direct conversion functions between JSON data and data frames, skipping the intermediary representation of Pydantic models. This is what I did for the response, achieving a 10x speedup, at the same time preserving the automated API documentation with the response_model=List[Sample] argument. I would like to achieve the same with the request: being able to use custom JSON input parsing, while at the same time preserving API documentation using Pydantic models. Sadly I can't find a way to do it in the FastAPI docs. How can I accomplish this?
You can always accept the raw request, load the request.body() data as bytes and do your own decoding. The schema of the request body should then be documented as a (partial) raw OpenAPI Operation structure using the openapi_extra argument to the @app.post() decorator: @app.post( "/score", response_model=List[Sample], openapi_extra={ "requestBody": { "content": { "application/json": { "schema": { "type": "array", "items": Sample.schema(ref_template="#/components/schemas/{model}"), } } } } }, ) async def score(request: Request): raw_body = await request.body() # parse the `raw_body` request data (bytes) into your DF directly. The openapi_extra structure is merged into the operation structure generated from other components (such as the response_model). I used your existing Sample model here to provide the schema for the array items, but you can also map out the whole schema manually. Instead of using the raw bytes of the body, you could also delegate parsing as JSON to the request object: data = await request.json() If there is a way to parse the data as a stream (pushing chunks to a parser), you could avoid the memory overhead of loading the whole body at once by treating the request as a stream in an async loop: parser = ... # something that can be fed chunks of data async for chunk in request.stream(): parser.feed(chunk) This is documented in the Custom OpenAPI path operation schema section in the Advanced User Guide. The same section also covers Us[ing] the Request object directly, and the various options for handling the Request body can be found in the Starlette Request class documentation.
6
5
68,811,220
2021-8-17
https://stackoverflow.com/questions/68811220/handling-the-token-expiration-in-fastapi
I'm new with fastapi security and I'm trying to implement the authentication thing and then use scopes. The problem is that I'm setting an expiration time for the token but after the expiration time the user still authenticated and can access services import json from jose import jwt,JWTError from typing import Optional from datetime import datetime,timedelta from fastapi.security import OAuth2PasswordBearer,OAuth2PasswordRequestForm,SecurityScopes from fastapi import APIRouter, UploadFile, File, Depends, HTTPException,status from tinydb import TinyDB,where from tinydb import Query from passlib.hash import bcrypt from pydantic import BaseModel from passlib.context import CryptContext ## class TokenData(BaseModel): username: Optional[str] = None class Token(BaseModel): access_token: str token_type: str router = APIRouter() SECRET_KEY="e79b2a1eaa2b801bc81c49127ca4607749cc2629f73518194f528fc5c8491713" ALGORITHM="HS256" ACCESS_TOKEN_EXPIRE_MINUTES=1 oauth2_scheme = OAuth2PasswordBearer(tokenUrl="/dev-service/api/v1/openvpn/token") db=TinyDB('app/Users.json') Users = db.table('User') User = Query pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto") class User(BaseModel): username: str password:str def get_user(username: str):#still user= Users.search((where('name') ==name)) if user: return user[0] @router.post('/verif') async def verify_user(name,password): user = Users.search((where('name') ==name)) print(user) if not user: return False print(user) passw=user[0]['password'] if not bcrypt.verify(password,passw): return False return user def create_access_token(data: dict, expires_delta: Optional[timedelta] = None): to_encode = data.copy() if expires_delta: expire = datetime.utcnow() + expires_delta else: expire = datetime.utcnow() + timedelta(minutes=1) to_encode.update({"exp": expire}) encoded_jwt = jwt.encode(to_encode, SECRET_KEY, algorithm=ALGORITHM) return encoded_jwt @router.post("/token", response_model=Token) async def token_generate(form_data:OAuth2PasswordRequestForm=Depends()): user=await verify_user(form_data.username,form_data.password) if not user: raise HTTPException( status_code=status.HTTP_401_UNAUTHORIZED, detail="Incorrect username or password", headers={"WWW-Authenticate": "Bearer"}, ) access_token_expires = timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES) access_token = create_access_token(data={"sub": form_data.username}, expires_delta=access_token_expires) return {"access_token": access_token, "token_type": "bearer"} @router.get('/user/me') async def get_current_user(token: str = Depends(oauth2_scheme)): credentials_exception = HTTPException( status_code=status.HTTP_401_UNAUTHORIZED, detail="Could not validate credentials", headers={"WWW-Authenticate": "Bearer"}, ) try: payload = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM]) username: str = payload.get("sub") if username is None: raise credentials_exception token_data = TokenData(username=username) except JWTError: raise credentials_exception user =Users.search(where('name') ==token_data.username) if user is None: raise credentials_exception return user @router.post('/user') async def create_user(name,password): Users.insert({'name':name,'password':bcrypt.hash(password)}) return True How can I really see the expiration of the token and how can I add the scopes?
I had pretty much the same confusion when I started out with FastAPI. The access token you created will not expire on its own, so you will need to check if it is expired while validating the token at get_current_user. You could modify your TokenData schema to the code below: class TokenData(BaseModel): username: Optional[str] = None expires: Optional[datetime] And your get_current_user to: @router.get('/user/me') async def get_current_user(token: str = Depends(oauth2_scheme)): # get the current user from auth token # define credential exception credentials_exception = HTTPException( status_code=status.HTTP_401_UNAUTHORIZED, detail="Could not validate credentials", headers={"WWW-Authenticate": "Bearer"}, ) try: # decode token and extract username and expires data payload = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM]) username: str = payload.get("sub") expires = payload.get("exp") except JWTError: raise credentials_exception # validate username if username is None: raise credentials_exception token_data = TokenData(username=username, expires=expires) user = Users.search(where('name') == token_data.username) if user is None: raise credentials_exception # check token expiration if expires is None: raise credentials_exception if datetime.utcnow() > token_data.expires: raise credentials_exception return user Scopes is a huge topic on its own, so I can't cover it here. However, you can read more on it at the fastAPI docs here
5
5
68,814,074
2021-8-17
https://stackoverflow.com/questions/68814074/how-to-save-parameters-just-related-to-classifier-layer-of-pretrained-bert-model
I fine tuned the pretrained model here by freezing all layers except the classifier layers. And I saved weight file with using pytorch as .bin format. Now instead of loading the 400mb pre-trained model, is there a way to load the parameters of the just Classifier layer I retrained it? By the way, I know that I have to load the original pretrained model, I just don't want to load the entire fine tuned model. due to memory concerns. I can access the last layer's parameters from state_dict as below, but how can I save them in a separate file to use them later for less memory usage? model = PosTaggingModel(num_pos_tag=num_pos_tag) state_dict = torch.load("model.bin") print("state dictionary:",state_dict) with torch.no_grad(): model.out_pos_tag.weight.copy_(state_dict['out_pos_tag.weight']) model.out_pos_tag.bias.copy_(state_dict['out_pos_tag.bias']) Here is the model class: class PosTaggingModel(nn.Module): def __init__(self, num_pos_tag): super(PosTaggingModel, self).__init__() self.num_pos_tag = num_pos_tag self.model = AutoModel.from_pretrained("dbmdz/bert-base-turkish-cased") for name, param in self.model.named_parameters(): if 'classifier' not in name: # classifier layer param.requires_grad = False self.bert_drop = nn.Dropout(0.3) self.out_pos_tag = nn.Linear(768, self.num_pos_tag) def forward(self, ids, mask, token_type_ids, target_pos_tag): o1, _ = self.model(ids, attention_mask = mask, token_type_ids = token_type_ids) bo_pos_tag = self.bert_drop(o1) pos_tag = self.out_pos_tag(bo_pos_tag) loss = loss_fn(pos_tag, target_pos_tag, mask, self.num_pos_tag) return pos_tag, loss I don't know if this is possible but I'm just looking for a way to save and reuse the last layer's parameters, without the need for parameters of frozen layers. I couldn't find it in the documentation. Thanks in advance to those who will help.
You can do it like this import torch # creating a dummy model class Classifier(torch.nn.Module): def __init__(self): super(Classifier, self).__init__() self.first = torch.nn.Linear(10, 10) self.second = torch.nn.Linear(10, 20) self.last = torch.nn.Linear(20, 1) def forward(self, x): pass # Creating its object model = Classifier() # Extracting the layer to save to_save = model.last # Saving the state dict of that layer torch.save(to_save.state_dict(), './classifier.bin') # Recreating the object of that model model = Classifier() # Updating the saved layer of model model.last.load_state_dict(torch.load('./classifier.bin'))
5
6
68,812,647
2021-8-17
https://stackoverflow.com/questions/68812647/how-to-add-vertical-line-to-legends-created-via-add-vline-method-in-plotly-pyth
Using Python 3.8, Plotly 4.13. Within my scatterplot, I've added multiple vertical lines using add_vline() method in plotly. However I cannot add it to legend allowing me to turn on/off vertical lines. How can add vertical lines to the legend? Here is example of how I've created my plot: fig = go.Figure() fig.add_trace( go.Scatter(name="name added to legend", datas...) ) for dt in dates: fig.add_vline(x=dt, line_width=1, etc...) outputting something like this: all plots created using go.Scatter are added to legend however not vertical lines created by fig.add_vline.
A vertical line added with add_vline is just a decoration (a shape) on the graph, not a trace object, so it is not included in the legend. If you want to be able to toggle it on and off from the legend, add the line as a go.Scatter trace instead, so it gets its own legend entry. I modified the example from the official reference to create the code. import plotly.express as px import plotly.graph_objects as go df = px.data.stocks() dmax = df[['GOOG', 'AAPL', 'AMZN', 'FB', 'NFLX', 'MSFT']].values.max() dmin = df[['GOOG', 'AAPL', 'AMZN', 'FB', 'NFLX', 'MSFT']].values.min() fig = go.Figure() fig.add_trace(go.Scatter(x=df.date, y=df['AAPL'], mode='lines', name='AAPL')) fig.add_trace(go.Scatter(x=df.date, y=df['GOOG'], mode='lines', name='GOOG')) fig.add_trace(go.Scatter(x=[df.date[10],df.date[10]], y=[dmin,dmax], mode='lines', line=dict(color='green', width=2, dash='dash'), name='2018-03-12')) fig.show()
7
11
68,813,246
2021-8-17
https://stackoverflow.com/questions/68813246/can-anyone-please-explain-why-set-is-behaving-like-this-with-boolean-in-it
Please explain the behavior of the set in the image. I know that set is unordered but where are the other elements from the set a & b ?
True and 1 are the same: >>> True == 1 True >>> Since sets can't have duplicate values, it only takes the one that appears first. You can see that if you convert True to int: >>> int(True) 1 >>> The output is 1.
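A quick interactive sketch of the same behaviour: because True == 1 (and hash(True) == hash(1)), a set treats them as duplicates and keeps whichever value was inserted first:

    >>> {True, 1}
    {True}
    >>> {1, True}
    {1}
    >>> {1, True, 1.0}
    {1}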
5
6
68,808,298
2021-8-16
https://stackoverflow.com/questions/68808298/typeerror-scatter-got-an-unexpected-keyword-argument-trendline-options-plo
I'm getting the error: TypeError: scatter() got an unexpected keyword argument 'trendline_options' When trying to adjust the smoothing of the lowess tendline using plotly express. Here is my code for the graph: fig = px.scatter(dfg, x="Yr_Mnth", y="Episode_Count", color = "Target", labels={"Episode_Count": tally + " per Shift", "Target":"Target", "Yr_Mnth": "Date" }, trendline='lowess',trendline_options= dict(frac=0.1), title="Aggregate Behavior Data: " + patient + " - " + today) fig.update_xaxes(tickangle=45,) fig.update_layout(template = 'plotly_white',hovermode="x unified") Dataset (dfg): Yr_Mnth Target Episode_Count 2020-08-01 Aggression 0.09 2020-08-01 Elopement 0.00 2020-08-01 Self-injury 0.97 2020-09-01 Aggression 0.65 2020-09-01 Elopement 0.00 2020-09-01 Self-injury 1.58 2020-10-01 Aggression 0.24 2020-10-01 Elopement 0.00 2020-10-01 Self-injury 0.75 2020-11-01 Aggression 0.03 2020-11-01 Elopement 0.01 2020-11-01 Self-injury 0.89 2020-12-01 Aggression 0.14 2020-12-01 Elopement 0.00 2020-12-01 Self-injury 0.94 2021-01-01 Aggression 0.05 2021-01-01 Elopement 0.00 2021-01-01 Self-injury 0.30 2021-02-01 Self-injury 0.42 2021-02-01 Elopement 0.03 2021-02-01 Aggression 0.16 2021-03-01 Elopement 0.00 2021-03-01 Self-injury 0.68 2021-03-01 Aggression 0.20 2021-04-01 Aggression 0.10 2021-04-01 Elopement 0.03 2021-04-01 Self-injury 0.33 2021-05-01 Elopement 0.20 2021-05-01 Aggression 0.21 2021-05-01 Self-injury 1.63 2021-06-01 Self-injury 0.90 2021-06-01 Aggression 0.29 2021-06-01 Elopement 0.14 I find this strange as I'm directly following the documentation - https://plotly.com/python/linear-fits/ Is this a known issue? I can't find any examples with a google search...
As Henry helpfully pointed out this was just a version problem, easily addressed by updating plotly using: pip install plotly==5.2.1
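If in doubt, a quick sketch for checking the installed version before relying on trendline_options (which, as far as I can tell, is only available from plotly 5.x onward):

    import plotly
    print(plotly.__version__)  # should print 5.x for trendline_options to work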
5
5
68,759,330
2021-8-12
https://stackoverflow.com/questions/68759330/python-appending-dataframe-to-exsiting-excel-file-and-sheet
I have a question about appending a dataframe to existing sheet on existing file. I tried to write the code by myself writer = pd.ExcelWriter('existingFile.xlsx', engine='openpyxl', mode='a') df.to_excel(writer, sheet_name="existingSheet", startrow=writer.sheets["existingSheet"].max_row, index=False, header=False) and this would cause an error ValueError: Sheet 'existingSheet' already exists and if_sheet_exists is set to 'error'. and I googled and found this function in here; Append existing excel sheet with new dataframe using python pandas and even with this function, it still causes the same error, even though i think this function prevents this exact error from what i think. Could you please help? Thank you very much!
It seems this function is broken in pandas 1.3.0. Just look at this document: when trying to append to an existing sheet it will either raise an error, create a new sheet, or replace it. if_sheet_exists{‘error’, ‘new’, ‘replace’}, default ‘error’ How to behave when trying to write to a sheet that already exists (append mode only). error: raise a ValueError. new: Create a new sheet, with a name determined by the engine. replace: Delete the contents of the sheet before writing to it. New in version 1.3.0 The only solution is to downgrade pandas (1.2.3 is working for me): pip install pandas==1.2.3
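As a hedged aside for readers who cannot downgrade: later pandas releases (1.4.0+) added an 'overlay' choice for if_sheet_exists, which makes appending to an existing sheet possible again, roughly like this:

    import pandas as pd

    with pd.ExcelWriter('existingFile.xlsx', engine='openpyxl', mode='a',
                        if_sheet_exists='overlay') as writer:
        df.to_excel(writer, sheet_name='existingSheet',
                    startrow=writer.sheets['existingSheet'].max_row,
                    index=False, header=False)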
10
6
68,803,345
2021-8-16
https://stackoverflow.com/questions/68803345/syntax-for-creating-a-new-empty-geodataframe
I have a GeoDataFrame (let's call it Destinations) that was made from a point shapefile using GeoPandas. For every feature (correct me if the terminology is wrong) in Destinations, I need to find the nearest node on a graph and save that node to another GeoDataFrame (let's call it Destination_nodes, it will be used later for shortest-path analysis). Presumably, this requires creating a new, blank Destination_nodes GeoDataFrame and appending nodes to it as I loop through Destinations. But what is the syntax for creating a new empty GeoDataFrame?
For a GeoDataFrame you need to say which column will be the geometry, i.e. contain features as you say. So you could simply specify the columns of your empty dataframe at creation, without specifying any data: >>> dest = geopandas.GeoDataFrame(columns=['id', 'distance', 'feature'], geometry='feature') >>> dest Empty GeoDataFrame Columns: [id, distance, feature] Index: [] >>> dest.geometry GeoSeries([], Name: feature, dtype: geometry)
8
9
68,736,735
2021-8-11
https://stackoverflow.com/questions/68736735/django-prints-error-when-permissiondenied-exception-raises
In our project, we have used django SessionMiddleware to handle users sessions and it's working fine. The only problem here is when PermissionDenied exceptions happens, an error and its traceback will be printed out in the console! However as expected, by raising that exception, the 403 page will show to the user, but I think it doesn't seem rational, because the middleware here is handling the exception! Just like not found exception, I expect no error in the console. Is there anything wrong?! here is the middleware settings: MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django_otp.middleware.OTPMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', 'axes.middleware.AxesMiddleware', ] And here's the printed error: Forbidden (Permission denied): /the/not_allowed/page Traceback (most recent call last): File "/venv/lib/python3.8/site-packages/django/core/handlers/exception.py", line 34, in inner response = get_response(request) File "/venv/lib/python3.8/site-packages/django/core/handlers/base.py", line 115, in _get_response response = self.process_exception_by_middleware(e, request) File "/venv/lib/python3.8/site-packages/django/core/handlers/base.py", line 113, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/usr/lib/python3.8/contextlib.py", line 75, in inner return func(*args, **kwds) File "/our_project/base/decorators.py", line 88, in wrapper return view_func(request, *args, **kwargs) File "/venv/lib/python3.8/site-packages/django/contrib/auth/decorators.py", line 21, in _wrapped_view return view_func(request, *args, **kwargs) File "/venv/lib/python3.8/site-packages/django/contrib/auth/decorators.py", line 20, in _wrapped_view if test_func(request.user): File "/venv/lib/python3.8/site-packages/django/contrib/auth/decorators.py", line 70, in check_perms raise PermissionDenied django.core.exceptions.PermissionDenied
From this comment In this example the exception is raised from permission_required decorator in django.contrib.auth.decorators. I passed raise_exception=True to this decorator to make it raise exception instead of redirecting to login page So, it is clear that you have set raise_exception=True in your decorator. and from the doc If the raise_exception parameter is given, the decorator will raise PermissionDenied, prompting the 403 (HTTP Forbidden) view instead of redirecting to the login page. So, technically, when the conditions are not met, Django will raise an exception. But, depending on the value of 403.html, Django will show you either the plain 403 page or custom HTML response. I expect no error in the console. Is there anything wrong? There is nothing wrong here (or I couldn't see anything). So, if you want to omit the traceback from the console, you may need to write a error handling middleware to handle this exception. # error handling middleware from django.core.exceptions import PermissionDenied from django.shortcuts import render class PermissionDeniedErrorHandler: def __init__(self, get_response): self.get_response = get_response def __call__(self, request): response = self.get_response(request) return response def process_exception(self, request, exception): # This is the method that responsible for the safe-exception handling if isinstance(exception, PermissionDenied): return render( request=request, template_name="your_custom_403.html", status=403 ) return None Note: Do not forgot to bind this middleware in your MIDDLEWARE settings. and thus, you will not get any error tracebacks in the console. Cheers!!!
5
3
68,796,752
2021-8-16
https://stackoverflow.com/questions/68796752/whats-the-difference-between-spark-sql-and-spark-read-formatjdbc-option
I assumed that spark.sql(query) is used when we are using spark sql and that spark.read.format("jdbc").option("query", "") is used when we are using oracle sql syntax. Would I be right in assuming so?
Yes, and Spark does more than that too! Spark-JDBC: From the Spark docs, JDBC (Java Database Connectivity) is used to read/write data from other databases (Oracle, MySQL, SQL Server, Postgres, DB2, etc.). spark.read.format("jdbc").option("query", "(select * from <db>.<tb>)e") Spark SQL: From the docs, it is Spark's module for working with structured data and lets you query data using the DataFrame API or the SQL API. We can use Spark SQL to read data from Hive/Presto/Athena/Delta/CSV/Parquet files, etc. Create a temporary view/table on a DataFrame, then run SQL queries against it (see the sketch below). Easily write RDDs/DataFrames to Hive or Parquet files.
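A minimal side-by-side sketch of the two approaches; the connection URL, credentials, and table names are placeholders, not taken from the original question:

    # Spark-JDBC: the query is pushed down to the external database
    jdbc_df = (spark.read.format("jdbc")
               .option("url", "jdbc:oracle:thin:@//dbhost:1521/service")
               .option("query", "select * from some_schema.some_table")
               .option("user", "db_user")
               .option("password", "db_password")
               .load())

    # Spark SQL: query data Spark already holds (DataFrames, Hive tables, files)
    jdbc_df.createOrReplaceTempView("some_table")
    spark.sql("select count(*) from some_table").show()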
5
4
68,796,549
2021-8-16
https://stackoverflow.com/questions/68796549/how-to-calculate-ratio-of-values-in-a-pandas-dataframe-column
I'm new to pandas and decided to learn it by playing around with some data I pulled from my favorite game's API. I have a dataframe with two columns "playerId" and "winner" like so: playerStatus: ______________________ playerId winner 0 1848 True 1 1988 False 2 3543 True 3 1848 False 4 1988 False ... Each row represents a match the player participated in. My goal is to either transform this dataframe or create a new one such that the win percentage for each playerId is calculated. For example, the above dataframe would become: playerWinsAndTotals _________________________________________ playerId wins totalPlayed winPct 0 1848 1 2 50.0000 1 1988 0 2 0.0000 2 3543 1 1 100.0000 ... It took quite a while of reading pandas docs, but I actually managed to achieve this by essentially creating two different tables (one to find the number of wins for each player, one to find the total games for each player), and merging them, then taking the ratio of wins to games played. Creating the "wins" dataframe: temp_df = playerStatus[['playerId', 'winner']].value_counts().reset_index(name='wins') onlyWins = temp_df[temp_df['winner'] == True][['playerId', 'wins']] onlyWins _________________________ playerId wins 1 1670 483 3 1748 474 4 2179 468 6 4006 434 8 1668 392 ... Creating the "totals" dataframe: totalPlayed = playerStatus['playerId'].value_counts().reset_index(name='totalCount').rename(columns={'index': 'playerId'}) totalPlayed ____________________ playerId totalCount 0 1670 961 1 1748 919 2 1872 877 3 4006 839 4 2179 837 ... Finally, merging them and adding the "winPct" column. playerWinsAndTotals = onlyWins.merge(totalPlayed, on='playerId', how='left') playerWinsAndTotals['winPct'] = playerWinsAndTotals['wins']/playerWinsAndTotals['totalCount'] * 100 playerWinsAndTotals _____________________________________________ playerId wins totalCount winPct 0 1670 483 961 50.260146 1 1748 474 919 51.577802 2 2179 468 837 55.913978 3 4006 434 839 51.728248 4 1668 392 712 55.056180 ... Now, the reason I am posting this here is because I know I'm not taking full advantage of what pandas has to offer. Creating and merging two different dataframes just to find the ratio of player wins seems unnecessary. I feel like I took the "scenic" route on this one. To anyone more experienced than me, how would you tackle this problem?
We can take advantage of the way that Boolean values are handled mathematically (True being 1 and False being 0) and use 3 aggregation functions sum, count and mean per group (groupby aggregate). We can also take advantage of Named Aggregation to both create and rename the columns in one step: df = ( df.groupby('playerId', as_index=False) .agg(wins=('winner', 'sum'), totalCount=('winner', 'count'), winPct=('winner', 'mean')) ) # Scale up winPct df['winPct'] *= 100 df: playerId wins totalCount winPct 0 1848 1 2 50.0 1 1988 0 2 0.0 2 3543 1 1 100.0 DataFrame and imports: import pandas as pd df = pd.DataFrame({ 'playerId': [1848, 1988, 3543, 1848, 1988], 'winner': [True, False, True, False, False] })
8
6
68,766,563
2021-8-13
https://stackoverflow.com/questions/68766563/fastapi-response-not-formatted-correctly-for-sqlite-db-with-a-json-column
I have a fast api app with sqlite, I am trying to get an output as json which is valid. One of the columns in sqlite database is a list stored in Text column and another column has json data in Text column. code sample below database = Database("sqlite:///db/database.sqlite") app = FastAPI() @app.get("/flow_json") async def get_data(select: str='*'): query = query_formatter(table='api_flow_json',select=select) logger.info(query) results = await database.fetch_all(query=query) print(results) # this result is a list of tuples which i can confirm output stated below return results List of tuples printed [('182', 'ABC', 'response_name', '[["ABC","DEF","GHI"]]', 'GHI', '{"metadata":{"contentId":"ABC"}}', '2', 'false', '39', '72', 'true')] sqlite db row example below "id","customer_name","response_name","entities","abstract","json_col","revision","disabled","customer_id","id2","auth" 182,"ABC","response_name","[[""ABC"",""DEF"",""GHI""]]","GHI","{""metadata"":{""contentId"":""ABC""}}",2,false,39,72,true result using http call [{"id":"182","customer_name":"ABC","response_name":"response_name","entities":"[[\"ABC\",\"DEF\",\"GHI\"]]","abstract":"GHI","json_col":"{\"metadata\":{\"contentId\":\"ABC\"}}","revision":"2","disabled":"false","customer_id":"39","id2":"72","auth":"true"}] expected result [{"id":"182","customer_name":"ABC","response_name":"response_name","entities":[["ABC","DEF","GHI"]],"abstract":"GHI","json_col":{metadata:{contentId:ABC}},"revision":"2","disabled":"false","customer_id":"39","id2":"72","auth":"true"}] What did I try: transforming list to be more json friendly after I get the list of tuples tried the json1 extension for sqlite but doesn't work. I know that this will involve a formatting after response from database but can't figure out the formatting to return to client.
Expanding on this answer I would use the Json datatype for entities and json_col: class ApiFlowJson(BaseModel): id: int customer_name: str response_name: str entities: Json abstract: str json_col: Json revision: int disabled: bool customer_id: int id2: int auth: bool class Config: orm_mode = True Sample from typing import List import sqlalchemy from databases import Database from fastapi import FastAPI from pydantic import BaseModel, Json DATABASE_URL = "sqlite:///test.db" app = FastAPI() # database database = Database(DATABASE_URL) metadata = sqlalchemy.MetaData() api_flow_json = sqlalchemy.Table( "api_flow_json", metadata, sqlalchemy.Column("id", sqlalchemy.Integer, primary_key=True), sqlalchemy.Column("customer_name", sqlalchemy.String), sqlalchemy.Column("response_name", sqlalchemy.String), sqlalchemy.Column("entities", sqlalchemy.JSON), sqlalchemy.Column("abstract", sqlalchemy.String), sqlalchemy.Column("json_col", sqlalchemy.JSON), sqlalchemy.Column("revision", sqlalchemy.Integer), sqlalchemy.Column("disabled", sqlalchemy.Boolean), sqlalchemy.Column("customer_id", sqlalchemy.Integer), sqlalchemy.Column("id2", sqlalchemy.Integer), sqlalchemy.Column("auth", sqlalchemy.Boolean), ) engine = sqlalchemy.create_engine( DATABASE_URL, connect_args={"check_same_thread": False} ) metadata.create_all(engine) # pydantic class ApiFlowJson(BaseModel): id: int customer_name: str response_name: str entities: Json abstract: str json_col: Json revision: int disabled: bool customer_id: int id2: int auth: bool class Config: orm_mode = True # events @app.on_event("startup") async def startup(): await database.connect() @app.on_event("shutdown") async def shutdown(): await database.disconnect() # route handlers @app.get("/") def home(): return "Hello, World!" @app.get("/seed") async def seed(): query = api_flow_json.insert().values( customer_name="ABC", response_name="response_name", entities='["ABC","DEF","GHI"]', abstract="GHI", json_col='{"metadata":{"contentId":"ABC"}}', revision=2, disabled=False, customer_id=39, id2=72, auth=True, ) record_id = await database.execute(query) return {"id": record_id} @app.get("/get", response_model=List[ApiFlowJson]) async def get_data(): query = api_flow_json.select() return await database.fetch_all(query)
5
5
68,774,700
2021-8-13
https://stackoverflow.com/questions/68774700/subset-multi-indexed-dataframe-based-on-multiple-level-1-columns
I have a multi-indexed DataFrame, but I want to keep only two columns per level 1, for each of the level 0 variables (i.e. columns 'one' and 'two'). I can subset them separately, but I would like to do it together so I can keep the values side by side. Here is the DataFrame index = pd.MultiIndex.from_tuples(list(zip(*[['bar1', 'foo1', 'bar1', 'foo2','bar3','foo3'], ['one','two','three','two','one','four']]))) df = pd.DataFrame(np.random.randn(2, 6), columns=index) Here is the way to subset for one column in level 1 df.iloc[:, df.columns.get_level_values(1)== 'one'] # or df.xs('one', level=1, axis=1) # but adding two columns within either command will not work e.g. df.xs(('one','two'), level=1, axis=1) This would be the expected output bar1 foo1 foo2 bar3 one two two one 0 -0.508272 -0.195379 0.865563 2.002205 1 -0.771565 1.360479 1.900931 -1.589277
Here's one way using pd.IndexSlice: idnx = pd.IndexSlice[:, ['one', 'two']] df.loc[:, idnx] Output: bar1 bar3 foo1 foo2 one one two two 0 0.589999 0.261224 -0.106588 -2.309628 1 0.646201 -0.491110 0.430724 1.027424 Another way using a little known argument, axis, of pd.DataFrame.loc: df.loc(axis=1)[:, ['one', 'two']] Output: bar1 bar3 foo1 foo2 one one two two 0 0.589999 0.261224 -0.106588 -2.309628 1 0.646201 -0.491110 0.430724 1.027424 NOTE: This argument is not listed in the documented API for pd.DataFrame.loc, but is referenced in the User Guide in MultiIndex / Advanced indexing section in the Using Slicers paragraph about middle way down with an example.
6
11
68,781,379
2021-8-14
https://stackoverflow.com/questions/68781379/how-do-i-change-the-terminal-inside-visual-studio-code-to-use-the-non-rosetta-on
I am new to python and am trying to run a python 2.7 script. Got pip for python 2.7 and installed a dependency of pyCrypto from the mac terminal shell. The downloaded python script, I want to try, runs fine in the terminal app when I execute it using python2. Now I open it in vscode and try to run the script in its terminal and I get ImportError: dlopen(/Users/xxx/Library/Python/2.7/lib/python/site-packages/Crypto/Cipher/_DES3.so, 2): no suitable image found. Did find: /Users/xxx/Library/Python/2.7/lib/python/site-packages/Crypto/Cipher/_DES3.so: mach-o, but wrong architecture /Users/xxx/Library/Python/2.7/lib/python/site-packages/Crypto/Cipher/_DES3.so: mach-o, but wrong architecture When I run uname -m inside a vscode terminal(zsh) on an M1 Mac, I see an output of x86_64, implying the terminal is running under Rosetta and looking for the intel version of the library. And when I run uname -m in the regular mac terminal app, I see arm64 How do I change the terminal inside vscode to use the non rosetta one? Or how do I get the script to run from within vscode?
I'm not familiar with VSCode, but you can manually force the chosen architecture slice of anything you launch with the arch command (see man arch). If you have a script that you'd normally launch like: ./script.py Then you can force either architecture like so: arch -x86_64 ./script.py arch -arm64 ./script.py
6
8
68,778,470
2021-8-13
https://stackoverflow.com/questions/68778470/python-mysql-error-1-failed-executing-the-operation-could-not-process-para
I am using following code to update many on for my table. However that's fine working on Windows with Python but not working on my Ubuntu machine. Update many keep saying MySQL Error [-1]: Failed executing the operation; Could not process parameters. Is there are any solution to trace what's exact causing this error? def update_wifs(done_data): magicwallet = mysql.connector.connect( host="localhost", user="dumpass", password="dumuser", database="magicwallet" ) mysql_table = "magic97mil" conn = magicwallet.cursor() query = "" values = [] for data_dict in done_data: if not query: columns = ', '.join('`{0}`'.format(k) for k in data_dict) duplicates = ', '.join('{0}=VALUES({0})'.format(k) for k in data_dict) place_holders = ', '.join('%s'.format(k) for k in data_dict) query = "INSERT INTO {0} ({1}) VALUES ({2})".format(mysql_table, columns, place_holders) query = "{0} ON DUPLICATE KEY UPDATE {1}".format(query, duplicates) v = data_dict.values() values.append(v) try: conn.executemany(query, values) except Exception as e: try: print("MySQL Error [%d]: %s" % (e.args[0], e.args[1])) except IndexError: print("MySQL Error: %s" % str(e)) magicwallet.rollback() return False magicwallet.commit() conn.close() magicwallet.close() return done_data done_data coming like it. what's exact like name of the columns in my table. It's been working fine on Windows machine but keep has error on Unix {'id': 73399, 'wif': 'uMx1VuwRT4cKQQyE', 'PublicAddress': 'p62GqtrDtRg', 'PublicAddressP2WPKH': 'dj3krprezquku7wkswv', 'phrase': '0075839', 'index_file': 73399, 'imported': 1} {'id': 73400, 'wif': 'L1Ri4cv3vicfGbESct', 'PublicAddress': 'JfstH24WMHGZz63WEopk', 'PublicAddressP2WPKH': 'ffkt6xxgksktzzkq8ucydrn8ps', 'phrase': '007584', 'index_file': 73400, 'imported': 1} UPD 0. query print INSERT INTO magic97mil (id, wif, PublicAddress, PublicAddressP2WPKH, phrase, index_file, imported) VALUES (%s, %s, %s, %s, %s, %s, %s) ON DUPLICATE KEY UPDATE id=VALUES(id), wif=VALUES(wif), PublicAddress=VALUES(PublicAddress), PublicAddressP2WPKH=VALUES(PublicAddressP2WPKH), phrase=VALUES(phrase), index_file=VALUES(index_file), imported=VALUES(imported) UPD 1. Full traceback Traceback (most recent call last): File "/usr/local/lib/python3.8/dist-packages/mysql/connector/cursor_cext.py", line 313, in _batch_insert prepared = self._cnx.prepare_for_mysql(params) File "/usr/local/lib/python3.8/dist-packages/mysql/connector/connection_cext.py", line 659, in prepare_for_mysql raise ValueError("Could not process parameters") ValueError: Could not process parameters During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/var/www/html/WalletMagic97mil/to_wif.py", line 76, in update_wifs conn.executemany(query, values) File "/usr/local/lib/python3.8/dist-packages/mysql/connector/cursor_cext.py", line 355, in executemany stmt = self._batch_insert(operation, seq_params) File "/usr/local/lib/python3.8/dist-packages/mysql/connector/cursor_cext.py", line 333, in _batch_insert raise errors.InterfaceError( mysql.connector.errors.InterfaceError: Failed executing the operation; Could not process parameters UPD 2. 
Difference on windows and unix values So the difference of print(values) on Unix and Windows is: Windows it's like: [{'id': 73401, 'wif': 'iVVUsHiLPLGq3uPaPyzdQ6xFeB3', 'PublicAddress': 'nLNtv3XUiMs7f1q8vU', 'PublicAddressP2WPKH': 'epw8ae08fnjpuva7x8783', 'phrase': '0075840', 'index_file': 73401, 'imported': 1}, {'id': 73402, 'wif': 'i41ZqWgvoKbUjYsbA41A', 'PublicAddress': 'Vd1D2krnjMucjmLU9', 'PublicAddressP2WPKH': '20g4my4rm4xgt04ph8xcmgyk', 'phrase': '0075841', 'index_file': 73402, 'imported': 1}] Unix it's like: [dict_values([73101, 'bmgHbonEKw4LoUqmwSg', '7K77mEtoiH5x8FnmJi2', 'dx3pdppq0zgacldefnge8ea3', '0075576', 73101, 1]), dict_values([73102, 'nojKY4pzXxJ9TeFX14vpnk', 'qkuVECaPs3WcCj', 'j5sv9q28kzaqs0m6g', '0075577', 73102, 1])] I think maybe this is exact error
You should use one pattern for building queries: if not query: columns = ', '.join('`{0}`'.format(k) for k in data_dict) duplicates = ', '.join('{0}=VALUES({0})'.format(k) for k in data_dict) place_holders = ', '.join('`{0}`'.format(k) for k in data_dict) Truly, you shouldn't string-build queries. It's unsafe, error-prone, and unnecessary. A better method is to use parameter binding with prepared statements.
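A minimal sketch of the parameter binding recommended above, keeping the column names from the question; the important detail is that executemany must receive plain sequences (lists or tuples), not dict_values views:

    rows = [tuple(d.values()) for d in done_data]  # convert each dict to a tuple
    query = ("INSERT INTO magic97mil (id, wif, PublicAddress, PublicAddressP2WPKH, phrase, index_file, imported) "
             "VALUES (%s, %s, %s, %s, %s, %s, %s) "
             "ON DUPLICATE KEY UPDATE wif=VALUES(wif), imported=VALUES(imported)")
    conn.executemany(query, rows)
    magicwallet.commit()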
5
2
68,774,082
2021-8-13
https://stackoverflow.com/questions/68774082/why-does-pythons-exceptions-repr-keep-track-of-passed-objects-to-init
Please see the below code snippet: In [1]: class A(Exception): ...: def __init__(self, b): ...: self.message = b.message ...: In [2]: class B: ...: message = "hello" ...: In [3]: A(B()) Out[3]: __main__.A(<__main__.B at 0x14af96790>) In [4]: class A: ...: def __init__(self, b): ...: self.message = b.message ...: In [5]: A(B()) Out[5]: <__main__.A at 0x10445b0a0> if A subclasses from Exception, its repr returns a reference to B() even though we only pass B()'s message attribute. Why is this the intentional behavior in Exception.repr and how does it work in python psuedocode if possible given the cpython code isn't too readable ?
When a Python object is created, it is the class's __new__ method that is called, and __init__ is then called on the new instance that the __new__ method returns (assuming it returned a new instance, which sometimes it doesn't). Your overridden __init__ method doesn't keep a reference to b, but you didn't override __new__, so you inherit the __new__ method defined here (CPython source link): static PyObject * BaseException_new(PyTypeObject *type, PyObject *args, PyObject *kwds) { // ... if (args) { self->args = args; Py_INCREF(args); return (PyObject *)self; } // ... } I omitted the parts that are not relevant. As you can see, the __new__ method of the BaseException class stores a reference to the tuple of arguments used when creating the exception, and this tuple is therefore available for the __repr__ method to print a reference to the objects used to instantiate the exception. So it's this tuple which retains a reference to the original argument b. This is consistent with the general expectation that repr should return Python code which would create an object in the same state, if it can. Note that it is only args, not kwds, which has this behaviour; the __new__ method doesn't store a reference to kwds and __repr__ doesn't print it, so we should expect not to see the same behaviour if the constructor is called with a keyword argument instead of a positional argument. Indeed, that is what we observe: >>> A(B()) A(<__main__.B object at 0x7fa8e7a23860>,) >>> A(b=B()) A() A bit strange since the two A objects are supposed to have the same state, but that's how the code is written, anyway.
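You can observe the retained reference directly from Python with a small sketch built on the classes in the question: even though the overridden __init__ never stores b, the args tuple set by BaseException.__new__ still does:

    b = B()
    a = A(b)
    print(a.args)          # (<__main__.B object at 0x...>,) the tuple saved by __new__
    print(a.args[0] is b)  # True: the exception keeps a reference to b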
7
9
68,775,783
2021-8-13
https://stackoverflow.com/questions/68775783/pythons-requests-library-removing-appending-question-mark-from-url
Goal Make request to http://example.com/page? using requests.get() Problem The question mark ("?") is automatically stripped from the request if it is the last character in the URL (eg. http://example.com/page?1 and http://example.com/page?! work, http://example.com/page? does not) Sample code import requests endpoint = "http://example.com/page?" r = requests.get(endpoint) print(r.url) # -> "http://example.com/page" assert r.url == endpoint # Raises AssertionError Question Without modifying the library, is it possible to reach the intended endpoint? Both intended solutions (if such exist) and workarounds are welcome. Thanks!
This is not possible with the requests library. URLs passed into requests are parsed by urllib3.util.url.parse_url() into separate parts: scheme auth host port path query fragment The logic for getting the query part of a URL assumes that the querystring starts after ?, but since there is nothing after the question mark, it gives a blank query. The URL is then reconstructed as a string when you print r.url. That is why the URL does not have the trailing question mark. I found that the behavior you are looking for is possible with urllib.request, though. Here's an example: import urllib.request, urllib.error try: response = urllib.request.urlopen("http://example.com/page?") print(response.url) # -> http://example.com/page? except urllib.error.HTTPError as e: print(e.url) # -> http://example.com/page? print(e.code) # -> 404 I have surrounded the request in a try/except because if the page you are trying to get gives a 404, urllib will raise an error, where requests will simply put up with it.
6
8
68,765,846
2021-8-13
https://stackoverflow.com/questions/68765846/combining-rows-in-a-geopandas-dataframe
TLDR: I'm trying to combine rows of a GeoPandas Dataframe into one row where their shapes are combined into one. I'm currently working on a little project that requires me to create interactive choropleth plots of Canadian health regions using a few different metrics. I had merged two Dataframes, one containing population estimates by year for each health region, and another GeoDataframe containing the geometry for the health regions, when I noticed that the number of rows wasn't the same. Upon further inspection, I realized the two datasets I had been using didn't include the exact same health regions. The shape-files I got had a few more health regions than the population data, which had amalgamated a few of them for methodological reasons. After noticing the difference, I redid the merge to show me the differences so I could figure out what I need to roll up. merged_gdf = gdf.merge(df, on='HR_UID') #HR_UID is just the name of the column with the codes for the health regions, since they #have slightly different names in different datasets, it's easier to merge on code. print(list(set(df['HEALTH_REGION'])-set(merged_gdf['HEALTH_REGION_y'])),list(set(gdf['HR_UID'])-set(df['HR_UID'].unique()))) Here I was shown the missing health region was ['Mamawetan/Keewatin/Athabasca, Saskatchewan']. The GeoDataframe has those three regions separate, with codes 4711, 4712, 4713, while the population data has them rolled up into one region with code 4714. I intend on combining the rows of my GeoDataframe that correspond to the health regions combined in the population data, to combine their polygons. I went back to the GeoDataframe to try and combine the three rows corresponding to those regions: old_hr=gdf[gdf['HR_UID'].isin({'4711','4712','4713'})] HR_UID HEALTH_REGION SHAPE_AREA \ 31 4711 Mamawetan Churchill River Regional Health Auth... 1.282120e+11 32 4712 Keewatin Yatthé Regional Health Authority 1.095536e+11 33 4713 Athabasca Health Authority 5.657720e+10 SHAPE_LEN geometry 31 1.707619e+06 POLYGON ((5602074.666 2364598.029, 5591985.366... 32 1.616297e+06 POLYGON ((5212469.723 2642030.691, 5273110.000... 33 1.142962e+06 POLYGON ((5248633.914 2767057.263, 5249285.640... Now I've come to the realization that I'm not exactly sure how to combine polygons in a GeoDataframe. I have tried using dissolve(on='HEALTH_REGION'), although that didn't work. I've spent a while looking around online, but thus far it seems I can't find anyone asking this particular question - perhaps I'm missing something..
Turns out it was actually simpler than I had imagined, and I was just confused about some additional columns in the dataframe that weren't actually necessary for the mapping. I'm new to Geopandas and mapping in general, so I hadn't realized the SHAPE_AREA and SHAPE_LEN weren't actually needed. Here was the code I used to import the dataframe without the extra columns and then combine the 3 polygons: # if this is not "pythonic" let me know, I'm still a python rookie, but this # worked for me. gdf = gpd.read_file('data/HR_Boundary_Files/HR_000b18a_e.shp', encoding='utf-8').drop(columns={'FRENAME', 'SHAPE_AREA','SHAPE_LEN'}) gdf.rename(columns={'ENGNAME':'HEALTH_REGION'}, inplace=True) old_hr=gdf[gdf['HR_UID'].isin({'4711','4712','4713'})] gdf=gdf[~gdf['HR_UID'].isin({'4711','4712','4713'})] new_region_geometry = old_hr['geometry'].unary_union gdf=gdf.append(pd.Series(['4714', 'Mamawetan/Keewatin/Athabasca Health Region', new_region_geometry], index=gdf.columns), ignore_index=True) The unary_union property of GeoSeries returns the union of all the geometries, which gave me the new shape I needed. I just added that into the dataframe with the correct region name and code, and dropped the old regions that made up the new one.
5
3
68,765,137
2021-8-12
https://stackoverflow.com/questions/68765137/displacy-custom-colors-for-custom-entities-using-displacy
I have a list of words, noun-verb phrases and I want to: Search dependency patterns, words, in a corpus of text identify the paragraph that matches appears in extract the paragraph highlight the matched words in the paragraph create a snip/jpeg of the paragraph with matched words highlighted save the image in an excel. The MWE below pertains to highlighting the matched words and displaying them using displacy. I have mentioned the rest of my task just to provide the context. The output isn't coloring the custom entities with custom colors. import spacy from spacy.matcher import PhraseMatcher from spacy.tokens import Span good = ['bacon', 'chicken', 'lamb','hot dog'] bad = [ 'apple', 'carrot'] nlp = spacy.load('en_core_web_sm') patterns1 = [nlp(good) for good in good] patterns2 = [nlp(bad) for bad in bad] matcher = PhraseMatcher(nlp.vocab) matcher.add('good', None, *patterns1) matcher.add('bad', None, *patterns2) doc = nlp("I like bacon and chicken but unfortunately I only had an apple and a carrot in the fridge") matches = matcher(doc) for match_id, start, end in matches: span = Span(doc, start, end, label=match_id) doc.ents = list(doc.ents) + [span] # add span to doc.ents print([(ent.text, ent.label_) for ent in doc.ents]) The code above produces this output: [('bacon', 'good'), ('chicken', 'good'), ('apple', 'bad'), ('carrot', 'bad')] But when I try to custom color the entities, it doesn't seem to be working. from spacy import displacy colors = {'good': "#85C1E9", "bad": "#ff6961"} options = {"ents": ['good', 'bad'], "colors": colors} displacy.serve(doc, style='ent',options=options) This is the output I get:
I just copy/pasted your code and it works fine here. I'm using spaCy v3.1.1. What does the HTML output source look like? I was able to reproduce your issue on spaCy 2.3.5. I was able to fix it by making the labels upper-case (GOOD and BAD). I can't find a bug about this but since the models normally only use uppercase labels I guess this is an issue with older versions.
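A minimal sketch of that workaround applied to the question's code, assuming spaCy 2.x; only the label strings change to upper-case, so the spans then carry GOOD/BAD labels that match the colors dict:

    matcher.add('GOOD', None, *patterns1)
    matcher.add('BAD', None, *patterns2)
    # ...build doc.ents exactly as in the question; ent.label_ now resolves to GOOD/BAD...
    colors = {'GOOD': "#85C1E9", 'BAD': "#ff6961"}
    options = {"ents": ['GOOD', 'BAD'], "colors": colors}
    displacy.serve(doc, style='ent', options=options)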
5
7
68,766,136
2021-8-13
https://stackoverflow.com/questions/68766136/including-special-character-in-basemodel-for-pydantic
I am trying to create a Pydantic basemodel with a key including a '$' sign. It looks like this: class someModel(BaseModel): $something:Optional[str] = None Then I get SyntaxError: invalid syntax. But I need to keep the key name $something to use in other parts. Is there a way to allow the dollar sign in this case?
You can use Field(alias=...) to use a different (valid) variable name. from pydantic import BaseModel, Field class SomeModel(BaseModel): something: Optional[str] = Field(alias="$something", default=None) I've also added a default value of None since you had that in your code. Here's a working example (EDIT: Updated to show how to get alias for your variable name if you need to use it later): import logging from typing import Optional from fastapi import FastAPI, Request from pydantic import BaseModel, Field logging.basicConfig(level=logging.INFO, format="%(levelname)-9s %(asctime)s - %(name)s - %(message)s") LOGGER = logging.getLogger(__name__) app = FastAPI() class SomeModel(BaseModel): something: Optional[str] = Field(alias="$something", default=None) @app.post("/") async def root(request: Request, parsed_body: SomeModel): # A dict of all the model fields and their properties LOGGER.info(f"SomeModel.__fields__: {SomeModel.__fields__}") # To get the alias of the variable name something_alias = SomeModel.__fields__["something"].alias LOGGER.info(f"something_alias: {something_alias}") # Edit: prefer to use "parsed_body_by_alias" than raw_body. Leaving here to show the difference. raw_body: bytes = await request.body() LOGGER.info(f"raw_body: {raw_body}") # Edit: This is better as you get validated / parsed values, including defaults if applicable. parsed_body_by_alias = parsed_body.dict(by_alias=True) LOGGER.info(f"parsed_body_by_alias: {parsed_body_by_alias}") # If you just want "something" instead of "$something" LOGGER.info(f"parsed_body: {parsed_body}") LOGGER.info(f"parsed_body.something: {parsed_body.something}") return 1 if __name__ == "__main__": import uvicorn uvicorn.run(app, host="127.0.0.1", port=8080) If you run the code then send a POST with body: {"$something": "bar"} you'll see something like: INFO xxx - __main__ - SomeModel.__fields__: {'something': ModelField(name='something', type=Optional[str], required=False, default=None, alias='$something')} INFO xxx - __main__ - something_alias: $something INFO xxx - __main__ - raw_body: b'{"$something": "bar"}' INFO xxx - __main__ - parsed_body_by_alias: {'$something': 'bar'} INFO xxx - __main__ - parsed_body: something='bar' INFO xxx - __main__ - parsed_body.something: bar INFO: 127.0.0.1:xxxxx - "POST / HTTP/1.1" 200 OK
5
9
68,766,331
2021-8-13
https://stackoverflow.com/questions/68766331/how-to-apply-predict-to-xgboost-cross-validation
After some time searching google I feel this might be a nonsensical question, but here it goes. If I use the following code I can produce an xgb regression model, which I can then use to fit on the training set and evaluate the model xgb_reg = xgb.XGBRegressor(objective='binary:logistic', gamme = .12, eval_metric = 'logloss', #eval_metric = 'auc', eta = .068, subsample = .78, colsample_bytree = .76, min_child_weight = 9, max_delta_step = 5, nthread = 4) start = time.time() xgb_reg.fit(X_train, y_train) print(start-time.time()) y_pred = xgb_reg.predict(X_test) print(log_loss(y_test, y_pred)) Now, I want to take that a step further and use kfold cv to improve the model, so I have this data_dmatrix = xgb.DMatrix(data=X_train,label=y_train) params = {'objective':'binary:logistic','eval_metric':'logloss','eta':.068, 'subsample':.78,'colsample_bytree':.76,'min_child_weight':9, 'max_delta_step':5,'nthread':4} xgb_cv = cv(dtrain=data_dmatrix, params=params, nfold=5, num_boost_round=20, metrics = 'logloss',seed=42) However, this spits out a data frame and I can't use the .predict() on the test set. I'm thinking I might not be understanding the fundamental concept of this but I'm hoping I'm just overlooking something simple.
kfold cv doesn't make the model more accurate per se. In your example with xgb, there are many hyper parameters eg(subsample, eta) to be specified, and to get a sense of how the parameters chosen perform on unseen data, we use kfold cv to partition the data into many training and test samples and measure out-of-sample accuracy. We usually try this for several possible values of a parameter and what gives the lowest average error. After this you would refit your model with the parameters. This post and its answers discusses it. For example, below we run something like what you did and we get only the train / test error for 1 set of values : import pandas as pd import numpy as np import xgboost as xgb from sklearn.model_selection import train_test_split from sklearn.datasets import make_classification X, y = make_classification(n_samples=500,class_sep=0.7) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42) data_dmatrix = xgb.DMatrix(data=X_train,label=y_train) params = {'objective':'binary:logistic','eval_metric':'logloss', 'eta':0.01, 'subsample':0.1} xgb_cv = xgb.cv(dtrain=data_dmatrix, params=params, nfold=5, metrics = 'logloss',seed=42) train-logloss-mean train-logloss-std test-logloss-mean test-logloss-std 0 0.689600 0.000517 0.689820 0.001009 1 0.686462 0.001612 0.687151 0.002089 2 0.683626 0.001438 0.684667 0.003009 3 0.680450 0.001100 0.681929 0.003604 4 0.678269 0.001399 0.680310 0.002781 5 0.675170 0.001867 0.677254 0.003086 6 0.672349 0.002483 0.674432 0.004349 7 0.668964 0.002484 0.671493 0.004579 8 0.666361 0.002831 0.668978 0.004200 9 0.663682 0.003881 0.666744 0.003598 The last row is the result from last round, which is what we use for evaluation. If we test over multiple values of eta ( and subsample for example: grid = pd.DataFrame({'eta':[0.01,0.05,0.1]*2, 'subsample':np.repeat([0.1,0.3],3)}) eta subsample 0 0.01 0.1 1 0.05 0.1 2 0.10 0.1 3 0.01 0.3 4 0.05 0.3 5 0.10 0.3 Normally we can use GridSearchCV for this, but below is something that uses xgb.cv: def fit(x): params = {'objective':'binary:logistic', 'eval_metric':'logloss', 'eta':x[0], 'subsample':x[1]} xgb_cv = xgb.cv(dtrain=data_dmatrix, params=params, nfold=5, metrics = 'logloss',seed=42) return xgb_cv[-1:].values[0] grid[['train-logloss-mean','train-logloss-std', 'test-logloss-mean','test-logloss-std']] = grid.apply(fit,axis=1,result_type='expand') eta subsample train-logloss-mean train-logloss-std test-logloss-mean test-logloss-std 0 0.01 0.1 0.663682 0.003881 0.666744 0.003598 1 0.05 0.1 0.570629 0.012555 0.580309 0.023561 2 0.10 0.1 0.503440 0.017761 0.526891 0.031659 3 0.01 0.3 0.646587 0.002063 0.653741 0.004201 4 0.05 0.3 0.512229 0.008013 0.545113 0.018700 5 0.10 0.3 0.414103 0.012427 0.472379 0.032606 We can see for eta = 0.10 and subsample = 0.3 gives the best result, so next you just need to refit the model with these parameters: xgb_reg = xgb.XGBRegressor(objective='binary:logistic', eval_metric = 'logloss', eta = 0.1, subsample = 0.3) xgb_reg.fit(X_train, y_train)
11
16
68,763,671
2021-8-12
https://stackoverflow.com/questions/68763671/running-async-code-from-a-sync-function-with-an-event-loop-already-running
I know, it's a mouthful. I am using pytest-asyncio, which gives you an async event loop for running async code in your tests. I want to use factory-boy with an async ORM. The only problem is that factory-boy does not support async whatsoever. I want to override the _create function on a factory (which is a synchronous function); however, I need that function to run asynchronous code. It doesn't matter if it blocks, it's just tests. Something like this:

class AsyncModelFactory(factory.alchemy.SQLAlchemyFactory):
    class Meta:
        sqlalchemy_session = async_session
        abstract = True

    @classmethod
    def _create(cls, model_class, *args, **kwargs):
        # run_until_complete only works if there isn't already an event loop running
        return asyncio.get_event_loop().run_until_complete(
            cls._async_create(model_class, *args, **kwargs)
        )

    @classmethod
    async def _async_create(cls, model_class, *args, **kwargs):
        obj = model_class(*args, **kwargs)
        session = cls._meta.sqlalchemy_session
        session.add(obj)
        await session.commit()  # needs to be awaited
        return obj

The above actually works, except in the case where an async event loop is already running. Ironically, that factory works from synchronous tests, but not from asynchronous tests. Is there a way to await _async_create from _create in an asynchronous context? All of factory-boy, with factories creating sub-factories, etc., is synchronous, so this can only work if _create remains a synchronous call.
You can't await anything in an ordinary function, of course. But there are ways that ordinary functions and async code can interact. An ordinary function can create tasks and return awaitables, for example. Look at this little program: import asyncio import time async def main(): print("Start", time.asctime()) awaitable1 = f1() awaitable2 = f2() await asyncio.gather(awaitable1, awaitable2) print("Finish", time.asctime()) def f1(): return asyncio.create_task(f3()) def f2(): return asyncio.sleep(2.0) async def f3(): await asyncio.sleep(1.0) print("Task f3", time.asctime()) asyncio.run(main()) Result: Start Thu Aug 12 18:55:20 2021 Task f3 Thu Aug 12 18:55:21 2021 Finish Thu Aug 12 18:55:22 2021 Here are two ordinary functions, one of which creates a task and one which creates a simple awaitable. Those objects are returned up the stack to an async function, which then awaits on them. You asked specifically about the situation where the event loop is already running. When that's the case, except for code that's invoked by callback functions, everything that runs in your program is part of some task or other. Somewhere in the call stack will be an async function. If you figure out how to pass awaitable objects up to that level, I think you can solve your problem.
8
3
68,763,357
2021-8-12
https://stackoverflow.com/questions/68763357/strange-behaviour-when-mixing-abstractmethod-classmethod-and-property-decorator
I've been trying to see whether one can create an abstract class property by mixing the three decorators (in Python 3.9.6, if that matters), and I noticed some strange behaviour. Consider the following code:

from abc import ABC, abstractmethod

class Foo(ABC):
    @classmethod
    @property
    @abstractmethod
    def x(cls):
        print(cls)
        return None

class Bar(Foo):
    @classmethod
    @property
    def x(cls):
        print("this is executed")
        return super().x

This outputs

this is executed
<class '__main__.Bar'>

This means that somehow, Bar.x ends up being called. PyCharm warns me that Property 'self' cannot be deleted. If I reverse the order of @classmethod and @property, Bar.x is not called, but I still get the same warning, and also another one: This decorator will not receive a callable it may expect; the built-in decorator returns a special object (this also appears whenever I put @property above @classmethod). Removing any of the three decorators (with the appropriate changes: adding () when removing @property or changing cls to self when removing @classmethod) also prevents Bar.x from being called. I suppose all of this means that it's probably just a bad idea to directly mix those decorators (as indicated by discussion about class properties in other threads here). Nevertheless, I am curious: what is happening here? Why is Bar.x called?
This looks like a bug in the logic that checks for inherited abstract methods. An object in a class dict is considered abstract if retrieving its __isabstractmethod__ attribute produces True. When Bar subclasses Foo, Python needs to determine whether Bar overrides the abstract Foo.x, and if so, whether the override is itself abstract. It should do this by searching the MRO for an 'x' entry in a class dict, so it can examine __isabstractmethod__ on descriptors directly without invoking the descriptor protocol, but instead, it performs a simple Bar.x attribute access. The Bar.x attribute access invokes the class property. It also returns None instead of the abstract property, and None isn't abstract, so Python gets confused about whether Bar.x is abstract. Python ends up still thinking Bar.x is abstract due to a different check, but if you change the example a bit: >>> from abc import ABC, abstractmethod >>> >>> class Foo(ABC): ... @classmethod ... @property ... @abstractmethod ... def x(cls): ... print(cls) ... return None ... >>> class Bar(Foo): pass ... <class '__main__.Bar'> >>> Bar() <__main__.Bar object at 0x7f46eca8ab80> Python ends up thinking Bar is a concrete class, even though the changed example doesn't override x at all.
6
5
68,760,228
2021-8-12
https://stackoverflow.com/questions/68760228/is-there-something-in-python-that-is-similar-to-want-in-perl
In Perl there is a way to find out what context a function is called in - whether it's a list or scalar context - or even finer granularity. See wantarray and the want module. Is there something similar in Python?
No, Python doesn't have the scalar/array distinction that Perl does. Values are simply bound to names with no regard for the type of the value. The closest analogy I can think of would be something like x += y This is desugared as x.__iadd__(y), so the method __iadd__ could examine the type of its argument y if it wanted to. For example, even though list.__iadd__ doesn't do this, it could theoretically have been defined to allow things like x = [] x += 9 # x == [9]; append a non-iterable argument x += [11, 12] # x == [9, 11, 12]; extend with an iterable argument
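To make that last point concrete, here is a small, purely hypothetical sketch (the Bag class and its behaviour are illustrative assumptions, not anything from the standard library) of a class whose __iadd__ inspects the type of its right-hand operand, which is about as close as Python gets to "caller context" sensitivity:

from collections.abc import Iterable

class Bag:
    """Toy container whose += accepts either a single item or an iterable."""
    def __init__(self):
        self.items = []

    def __iadd__(self, other):
        # Inspect the argument's type, loosely analogous to Perl's context checks
        if isinstance(other, Iterable) and not isinstance(other, (str, bytes)):
            self.items.extend(other)   # iterable: extend with its elements
        else:
            self.items.append(other)   # scalar: append the single value
        return self

b = Bag()
b += 9            # items == [9]
b += [11, 12]     # items == [9, 11, 12]
print(b.items)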
5
5
68,746,351
2021-8-11
https://stackoverflow.com/questions/68746351/using-pydantic-to-deserialize-sublasses-of-a-model
I'm using data that follows a class inheritance pattern... I'm having trouble getting pydantic to deserialize it correctly for some use cases. Given the code below, it appears that the validators are not called when using the parse_* methods. The type for "fluffy" and "tiger" are Animal, however when deserializing the "bob" the Person, his pet is the correct Dog type. Is there another way to approach this problem? Using pydantic is not a requirement, but the ability to deserialize nested complex types (including List and Dict of objects) is. # modified from the following examples # - https://github.com/samuelcolvin/pydantic/issues/2177#issuecomment-739578307 # - https://github.com/samuelcolvin/pydantic/issues/619#issuecomment-713508861 from pydantic import BaseModel TIGER = """{ "type": "cat", "name": "Tiger the Cat", "color": "tabby" }""" FLUFFY = """{ "type": "dog", "name": "Fluffy the Dog", "color": "brown", "breed": "rottweiler" }""" ALICE = """{ "name": "Alice the Person" }""" BOB = f"""{{ "name": "Bob the Person", "pet": {FLUFFY} }}""" class Animal(BaseModel): type: str name: str color: str = None _subtypes_ = dict() def __init_subclass__(cls, type=None): cls._subtypes_[type or cls.__name__.lower()] = cls @classmethod def __get_validators__(cls): yield cls._convert_to_real_type_ @classmethod def _convert_to_real_type_(cls, data): data_type = data.get("type") if data_type is None: raise ValueError("Missing 'type' in Animal") sub = cls._subtypes_.get(data_type) if sub is None: raise TypeError(f"Unsupport sub-type: {data_type}") return sub(**data) class Cat(Animal, type="cat"): hairless: bool = False class Dog(Animal, type="dog"): breed: str class Person(BaseModel): name: str pet: Animal = None tiger = Animal.parse_raw(TIGER) print(f"tiger [{tiger.name}] => {type(tiger)} [{tiger.type}]") fluffy = Animal.parse_raw(FLUFFY) print(f"fluffy [{fluffy.name}] => {type(fluffy)} [{fluffy.type}]") bob = Person.parse_raw(BOB) pet = bob.pet print(f"bob [{bob.name}] => {type(bob)}") print(f"pet [{pet.name}] => {type(pet)}") Output: tiger [Tiger the Cat] => <class '__main__.Animal'> fluffy [Fluffy the Dog] => <class '__main__.Animal'> bob [Bob the Person] => <class '__main__.Person'> pet [Fluffy the Dog] => <class '__main__.Dog'>
Answer from pydantic project discussion: You can simply add class Animal(BaseModel): ... @classmethod def parse_obj(cls, obj): return cls._convert_to_real_type_(obj)
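For illustration, here is a condensed, hypothetical version of the question's classes with that parse_obj override in place; it assumes pydantic v1, where parse_raw delegates to parse_obj, so deserializing the raw JSON now yields the concrete subclass:

from pydantic import BaseModel

class Animal(BaseModel):
    type: str
    name: str
    _subtypes_ = dict()

    def __init_subclass__(cls, type=None):
        cls._subtypes_[type or cls.__name__.lower()] = cls

    @classmethod
    def _convert_to_real_type_(cls, data):
        sub = cls._subtypes_.get(data.get("type"))
        if sub is None:
            raise TypeError(f"Unsupported sub-type: {data.get('type')}")
        return sub(**data)

    @classmethod
    def parse_obj(cls, obj):
        return cls._convert_to_real_type_(obj)

class Dog(Animal, type="dog"):
    breed: str

FLUFFY = '{"type": "dog", "name": "Fluffy the Dog", "breed": "rottweiler"}'
fluffy = Animal.parse_raw(FLUFFY)
print(type(fluffy))   # expected: <class '__main__.Dog'>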
5
3
68,753,693
2021-8-12
https://stackoverflow.com/questions/68753693/mqtt-subscribing-does-not-work-properly-in-multithreading
I have code like below threads = [] t = threading.Thread(target=Subscribe('username', "password", "topic", "host",port).start) t1 = threading.Thread(target=Subscribe('username2', "password2", "topic2", "host",port).start) threads.append(t) threads.append(t1) for thread in threads: thread.start() for thread in threads: thread.join() My topics send data in every 5 minutes. When I use the code above,it does not work properly, sometimes it sends data sometimes it does not. But when I use one topic without thread , it works fine and data is coming in every 5 minutes perfectly. How can I solve this problem? I want to subscribe two topics simultaneously. I am using Paho for MQTT My subscribe class is class Subscribe: def __init__(self, username, passowrd, topic, host, port): self.username = username self.password = passowrd self.topic = topic self.host = host self.port = port def start(self): self.mqttc = mqtt.Client(client_id="Python") self.connected = False self.mqtt_message = "" self.mqttc.username_pw_set(self.username, self.password) self.mqttc.on_connect = self.on_connect self.mqttc.on_message = self.on_message self.mqttc.connect(self.host, self.port) self.mqttc.loop_forever() def on_message(self, client, userdata, message): """ Fetch data when data coming to Broker """ topic = message.topic m = json.loads(message.payload.decode("utf-8")) print(m) def on_connect(self, client, userdata, flags, rc): if rc == 0: print("Connected to broker", self.topic) self.mqttc.subscribe(self.topic) self.connected = True else: print("could not connect", self.topic)
I had to give two different client_ids for the two instances of Client and it solved the issue.
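As a sketch of what that change looks like with paho-mqtt (the naming scheme is just an assumption; any two distinct, broker-unique strings will do):

import paho.mqtt.client as mqtt

# Two connections must not share a client id, otherwise the broker keeps
# dropping one of them. Deriving the id from the username is one simple option.
client_a = mqtt.Client(client_id="subscriber-username")
client_b = mqtt.Client(client_id="subscriber-username2")

# In the Subscribe class from the question this corresponds to changing
#   self.mqttc = mqtt.Client(client_id="Python")
# to something like
#   self.mqttc = mqtt.Client(client_id=f"subscriber-{self.username}")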
5
1
68,754,499
2021-8-12
https://stackoverflow.com/questions/68754499/sphinx-meth-role-does-not-create-a-link
In a python module, in the docstring of the module I have the following:

:meth:`my_method`

and I have the following class in the current module:

class GameP:
    ...
    def my_method(self):
        return f"{self._name} {self.selected}"

Sphinx does not create a link for that, whilst in the Sphinx documentation we have:

Normally, names in these roles are searched first without any further qualification, then with the current module name prepended, then with the current module and class name (if any) prepended. If you prefix the name with a dot, this order is reversed. For example, in the documentation of Python's codecs module, :py:func:`open` always refers to the built-in function, while :py:func:`.open` refers to codecs.open().

Why does the bolded section not hold for me? :meth: role does not make a link for me.
The documentation is not crystal clear IMHO, but it works if you use :meth:`.my_method` (with a dot). The dot makes Sphinx look for a match for my_method anywhere. The dot is not needed if the cross-reference is in the docstring of the GameP class. But in this case the cross-reference is in the module docstring, and on the module level there is no "current class name".
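A tiny hypothetical module sketch of the two cases (file and member names are made up):

"""game.py -- module docstring.

From module level there is no current class, so the leading dot is needed:
:meth:`.my_method`
"""


class GameP:
    """Inside the class docstring, :meth:`my_method` resolves without the dot."""

    def my_method(self):
        return "..."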
5
4
68,741,263
2021-8-11
https://stackoverflow.com/questions/68741263/pycharm-like-console-in-visual-code-studio
I wanted to give VSC a try for developing some Python programs, having only used PyCharm before. One of the most helpful features for me in PyCharm was the PyDev Console, where I can quickly try small snippets of code (think 3-10 lines) and adjust them to work the way I want. I see VSC has a console, but it's much more like the regular IDLE console, where it's kind of hard to write these snippets of code (fixing something 2 lines prior, for example, is pretty much impossible). I've been searching for an extension that'll give me a PyCharm-like console experience in VSC, but have been unable to find one. Is it out there? Or is there another way to get to the same result (like setting up a custom console based on the same PyDev console)?
Have you tried Jupyter Notebook and Interactive? These are provided by the Jupyter Extension, which comes bundled with the Python Extension. Open the Command Palette (Ctrl+Shift+P) and run one of these commands:

Jupyter: Create New Blank Notebook
Jupyter: Create Interactive Window

You can refer to the official docs for more details.
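If it helps, the Interactive Window can also be driven from a plain .py file by marking cells with # %% comments, a convention the Jupyter extension recognises; a rough sketch:

# %% a cell you can run on its own ("Run Cell") in VS Code
numbers = [1, 2, 3]
total = sum(numbers)

# %% tweak and re-run individual cells, similar to a PyCharm console workflow
print(total * 2)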
6
9
68,750,375
2021-8-12
https://stackoverflow.com/questions/68750375/no-matching-distribution-found-for-pandas-1-3-1
I currently have Pandas with version 1.1.5 and I am trying to install the newest version of Pandas by using the following command $ pip install pandas==1.3.1 However, I get an error as follows: ERROR: Could not find a version that satisfies the requirement pandas==1.3.1 (from versions: 0.1, 0.2b0, 0.2b1, 0.2, 0.3.0b0, 0.3.0b2, 0.3.0, 0.4.0, 0.4.1, 0.4.2, 0.4.3, 0.5.0, 0.6.0, 0.6.1, 0.7.0, 0.7.1, 0.7.2, 0.7.3, 0.8.0rc1, 0.8.0, 0.8.1, 0.9.0, 0.9.1, 0.10.0, 0.10.1, 0.11.0, 0.12.0, 0.13.0, 0.13.1, 0.14.0, 0.14.1, 0.15.0, 0.15.1, 0.15.2, 0.16.0, 0.16.1, 0.16.2, 0.17.0, 0.17.1, 0.18.0, 0.18.1, 0.19.0, 0.19.1, 0.19.2, 0.20.0, 0.20.1, 0.20.2, 0.20.3, 0.21.0, 0.21.1, 0.22.0, 0.23.0, 0.23.1, 0.23.2, 0.23.3, 0.23.4, 0.24.0, 0.24.1, 0.24.2, 0.25.0, 0.25.1, 0.25.2, 0.25.3, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.1.0, 1.1.1, 1.1.2, 1.1.3, 1.1.4, 1.1.5) ERROR: No matching distribution found for pandas==1.3.1 I am using Python 3.6.8 How can I solve this? Thanks!
Pandas dropped support for Python 3.6 in version 1.2.0 (December, 2020). Either upgrade to Python 3.7+ or make do with Pandas 1.1.5, which is the last version to support Python 3.6.
6
20
68,744,441
2021-8-11
https://stackoverflow.com/questions/68744441/why-keyword-match-in-python-3-10-can-be-as-a-variable-or-function-name
I don't fully understand why the keyword match can be used as a variable or function name, unlike other keywords if, while, etc.? >>> match "abc": ... case "abc": ... print('Hello!') ... Hello! >>> from re import match >>> match('A', 'A Something A') <re.Match object; span=(0, 1), match='A'> >>> match = '????' >>> match '????' >>> case = 'something' >>> case 'something'
Per PEP 622, match and case are being added as "soft keywords", so they will remain valid identifiers: This PEP is fully backwards compatible: the match and case keywords are proposed to be (and stay!) soft keywords, so their use as variable, function, class, module or attribute names is not impeded at all.
6
12
68,742,663
2021-8-11
https://stackoverflow.com/questions/68742663/how-to-upgrade-tensorflow-in-anaconda-environment
I am having TensorFlow version 2.0.0 in my anaconda environment. I want to upgrade it to an upgraded version. How can I do it?
conda update <package name> or conda install <package name> Note: A second install equals an Override... Specific to your case: conda install -c conda-forge -n <environment> tensorflow==<wanted version>
6
7
68,741,944
2021-8-11
https://stackoverflow.com/questions/68741944/using-decorators-of-optional-dependency
Let's say I have the following code:

try:
    import bar
except ImportError:
    bar = None

@bar.SomeProvidedDecorator
def foo():
    pass

where bar is an optional dependency. The code above will fail if bar isn't imported. Is there a recommended way of dealing with this problem? I came up with:

try:
    import bar
except ImportError:
    bar = None

def foo():
    pass

if bar is not None:
    foo = bar.SomeProvidedDecorator(foo)

but I'm wondering if there are better ways of handling this (i.e. is there a way to keep the decorator syntax)?
Provide an identity decorator in case of bar unavailability: try: import bar except ImportError: class bar: SomeProvidedDecorator = lambda f: f
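For completeness, a self-contained sketch of how that fallback reads in context (bar stands in for the optional dependency from the question, so its import will normally fail here):

try:
    import bar                      # optional dependency
except ImportError:
    class bar:                      # fallback: decorator that returns the function unchanged
        SomeProvidedDecorator = lambda f: f

@bar.SomeProvidedDecorator
def foo():
    return "hello"

print(foo())                        # works whether or not bar is installed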
5
6
68,739,824
2021-8-11
https://stackoverflow.com/questions/68739824/use-of-generic-and-typevar
I'm not able to understand the use of Generic and TypeVar, and how they are related. https://docs.python.org/3/library/typing.html#building-generic-types The docs have this example: class Mapping(Generic[KT, VT]): def __getitem__(self, key: KT) -> VT: ... # Etc. X = TypeVar('X') Y = TypeVar('Y') def lookup_name(mapping: Mapping[X, Y], key: X, default: Y) -> Y: try: return mapping[key] except KeyError: return default Type variables exist primarily for the benefit of static type checkers. They serve as the parameters for generic types as well as for generic function definitions. Why can't I simply use Mapping with some existing type, like int, instead of creating X and Y?
Type variables are literally "variables for types". Similar to how regular variables allow code to apply to multiple values, type variables allow code to apply to multiple types. At the same time, just like code is not required to apply to multiple values, it is not required to depend on multiple types. A literal value can be used instead of variables, and a literal type can be used instead of type variables – provided these are the only values/types applicable. Since the Python language semantically only knows values – runtime types are also values – it does not have the facilities to express type variability. Namely, it cannot define, reference or scope type variables. Thus, typing represents these two concepts via concrete things: A typing.TypeVar represents the definition and reference to a type variable. A typing.Generic represents the scoping of types, specifically to class scope. Notably, it is possible to use TypeVar without Generic – functions are naturally scoped – and Generic without TypeVar – scopes may use literal types. Consider a function to add two things. The most naive implementation adds two literal things: def add(): return 5 + 12 That is valid but needlessly restricted. One would like to parameterise the two things to add – this is what regular variables are used for: def add(a, b): return a + b Now consider a function to add two typed things. The most naive implementations adds two things of literal type: def add(a: int, b: int) -> int: return a + b That is valid but needlessly restricted. One would like to parameterise the types of the two things to add – this is what type variables are used for: T = TypeVar("T") def add(a: T, b: T) -> T: return a + b Now, in the case of values we defined two variables – a and b but in the case of types we defined one variable – the single T – but used for both variables! Just like the expression a + a would mean both operands are the same value, the annotation a: T, b: T means both parameters are the same type. This is because our function has a strong relation between the types but not the values. While type variables are automatically scoped in functions – to the function scope – this is not the case for classes: a type variable might be scoped across all methods/attributes of a class or specific to some method/attribute. When we define a class, we may scope type variables to the class scope by adding them as parameters to the class. Notably, parameters are always variables – this applies to regular parameters just as for type parameters. It just does not make sense to parameterise a literal. # v value parameters of the function are "value variables" def mapping(keys, values): ... # v type parameters of the class are "type variables" class Mapping(Generic[KT, VT]): ... When we use a class, the scope of its parameters has already been defined. Notably, the arguments passed in may be literal or variable – this again applies to regular arguments just as for type arguments. # v pass in arguments via literals mapping([0, 1, 2, 3], ['zero', 'one', 'two', 'three']) # v pass in arguments via variables mapping(ks, vs) # v pass in arguments via literals m: Mapping[int, str] # v pass in arguments via variables m: Mapping[KT, VT] Whether to use literals or variables and whether to scope them or not depends on the use-case. But we are free to do either as required.
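To round this out, here is a small illustrative sketch (not from the typing docs) of a user-defined generic class, showing a class-scoped type variable shared by several methods:

from typing import Generic, List, TypeVar

T = TypeVar("T")

class Stack(Generic[T]):
    """A stack whose element type is a parameter of the class."""
    def __init__(self) -> None:
        self._items: List[T] = []

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()

ints: Stack[int] = Stack()     # T is bound to int for this variable
ints.push(1)
ints.push(2)
print(ints.pop())              # a type checker would flag ints.push("three") as an error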
24
40
68,739,467
2021-8-11
https://stackoverflow.com/questions/68739467/scalars-returning-first-column-and-leaves-out-all-other-columns
I have the following code that should get the last known action of users and insert it's timestamp in the user object: async def get_last_activity(users, db): user_ids = [user.id for user in users] event_query = select(func.max(EventModel.datetime), EventModel.user_id).\ where(EventModel.user_id.in_(user_ids)).\ group_by(EventModel.user_id) events = (await db.execute(event_query)).scalars() for event in events: print(event) for user in users: if event.user_id == user.id: user.last_activity = event.datetime return users @router.get( '/manageable', response_model=Users, ) async def get_manageable_users( db: Session = Depends(get_db) ): """ Returns a list of users that the current user can manage """ # @TODO: get users based on authorization helper. users = (await db.execute(select(UserModel))).scalars().all() users = await get_last_activity(users, db) return [{ 'id': user.id, 'name': user.name, 'email': user.email, 'last_activity': user.last_activity } for user in users] This results in the following error: File "./app/api/v1/users/read.py", line 26, in get_last_activity if event.user_id == user.id: AttributeError: 'datetime.datetime' object has no attribute 'user_id Even though the column is being selected in the generated query: api001_1 | 2021-08-11 09:19:16,514 INFO sqlalchemy.engine.Engine SELECT max(events.datetime) AS max_1, events.user_id api001_1 | FROM events api001_1 | WHERE events.user_id IN (%s, %s) GROUP BY events.user_id api001_1 | 2021-08-11 09:19:16,515 INFO sqlalchemy.engine.Engine [generated in 0.00323s] (1, 2) api001_1 | 2016-10-07 22:00:29 <---------- print(event) Does anyone know why the user_id column is not showing up in the Event objects? Update: It's scalars() that is causing the issue. Without it this is the output of print(event): (datetime.datetime(2016, 10, 7, 22, 0, 29), 1) After running scalars only the datetime is picked up So scalars seems to completely disregard the last number which is the user_id. Why?
I was under the impression that scalars() was supposed to map the result to an object. Upon reading the docs more closely, it can only do this with one column: scalars() takes an optional column-index parameter that defaults to 0, hence only the datetime (the first column) is picked up. I'm not using .scalars() anymore and instead have to do row[0], row[1].
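A hedged sketch of that row-based alternative, adapted from the question's get_last_activity (names as in the question):

# Inside get_last_activity, keep the Result as rows instead of scalars;
# each Row is tuple-like, so both selected columns survive:
events = (await db.execute(event_query)).all()
for last_seen, user_id in events:              # (max(datetime), user_id) per row
    for user in users:
        if user.id == user_id:
            user.last_activity = last_seen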
5
3
68,740,412
2021-8-11
https://stackoverflow.com/questions/68740412/type-hinting-a-tuple-without-being-too-verbose
Is there a way to type hint a tuple of elements without having to define each inner element a bunch of times? Example: a = ((1, 2), (2, 3), (3, 4), (4, 5)) a: Tuple[Tuple[int, int], Tuple[int, int], Tuple[int, int], Tuple[int, int]] I am looking for something that may look like this a: Tuple[5*Tuple[int, int]] as otherwise my code would become very verbose in order to indicate something like that (tuple containing 5 tuples of 4 ints)
There's a few options here. (All examples here assume a = ((1, 2), (3, 4), (5, 6), (7, 8)), i.e., a tuple consisting of 4 (<int>, <int>) tuples.) You could, as has been suggested in the comments, just use a type alias for the inner type:

from typing import Tuple

X = Tuple[int, int]
a: Tuple[X, X, X, X]

You could also use a type alias for the whole annotation if it's still too long, perhaps using typing.Annotated to clarify what's going on in your code:

from typing import Tuple, Annotated

X = Tuple[int, int]

IntTupleSequence = Annotated[
    Tuple[X, X, X, X],
    "Some useful metadata here giving "
    "some documentation about this type alias"
]

a: IntTupleSequence

You could also use typing.Sequence, which is useful if you have a homogenous sequence (that may be mutable or immutable) that's either of indeterminate length or would be too long to feasibly annotate using typing.Tuple:

from typing import Sequence, Tuple

a: Sequence[Tuple[int, int]]

You can achieve the same thing as Sequence by using Tuple with a literal ellipsis. The ellipsis here signifies that the tuple will be of indeterminate length but homogenous type.

from typing import Sequence, Tuple

a: Tuple[Tuple[int, int], ...]

I prefer using Sequence to using Tuple with an ellipsis, however. It's more concise, seems like a more intuitive syntax to me (... is used to mean something similar to Any in other parts of the typing module such as Callable), and I rarely find there's anything gained by telling the type-checker that my immutable Sequence is specifically of type tuple. That's just my opinion, however. A final option might be to use a NamedTuple instead of a tuple for variable a. However, that might be overkill if all items in your tuple are of a homogenous type. (NamedTuple is most useful for annotating heterogeneous tuples.)
5
7
68,736,211
2021-8-11
https://stackoverflow.com/questions/68736211/recursionerror-maximum-recursion-depth-exceeded-python-property-getter-setter
I am trying to do getter setter in this simple class, class Person: def __init__(self, n): self.name = n def get_name(self): return self.name def set_name(self, n): self.name = n name = property(get_name, set_name) p = Person('Lewis') p.name = 'Philo' Looks pretty simple and straightforward, but somehow it's not working, what am I missing, help me find what is the understanding that I am going wrong. I get the following error. Traceback (most recent call last): File "/Users/napoleon/python-play/oops/person.py", line 15, in <module> p = Person('Lewis') File "/Users/napoleon/python-play/oops/person.py", line 4, in __init__ self.name = n File "/Users/napoleon/python-play/oops/person.py", line 10, in set_name self.name = n File "/Users/napoleon/python-play/oops/person.py", line 10, in set_name self.name = n File "/Users/napoleon/python-play/oops/person.py", line 10, in set_name self.name = n [Previous line repeated 994 more times] RecursionError: maximum recursion depth exceeded
Since name is a property that you defined with get_name and set_name, calling self.name = n actually invokes the property's setter, which is set_name. When you initialize the Person object, __init__ calls set_name because of the line self.name = n, and the same happens inside set_name itself (since it contains the same line), so the setter keeps calling itself recursively. A solution is to create a _name member and use it in the getter and setter. In more complex cases you may check whether the member already holds the value you passed to the setter, and only change it if it doesn't.

class Person:
    _name = None

    def __init__(self, n):
        self._name = n

    def get_name(self):
        return self._name

    def set_name(self, n):
        self._name = n

    name = property(get_name, set_name)
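Equivalently — just as a stylistic alternative, not something the fix requires — the same class can be written with the decorator form of property:

class Person:
    def __init__(self, n):
        self._name = n          # store on a differently named attribute

    @property
    def name(self):
        return self._name

    @name.setter
    def name(self, n):
        self._name = n

p = Person('Lewis')
p.name = 'Philo'
print(p.name)                   # Philo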
5
3
68,666,464
2021-8-5
https://stackoverflow.com/questions/68666464/how-to-use-pytest-with-subprocess
The following is a test. How to make this test fail if the subprocess.run results in an error?

import pytest
import subprocess

@pytest.mark.parametrize("input_json", ["input.json"])
def test_main(input_json):
    subprocess.run(['python', 'main.py', input_json])
subprocess.run returns a CompletedProcess instance, which you can inspect. I'm not sure what exactly you mean by "results in an error"—if it's the return code of the process being non-zero, you can check returncode. If it's a specific error message being output, check stdout or stderr instead. For example: import pytest import subprocess @pytest.mark.parametrize("input_json", ["input.json"]) def test_main(input_json): completed_process = subprocess.run(['python', 'main.py', 'input_json']) assert completed_process.returncode == 0 NB. You don't seem to use the parametrized argument input_json within your test function. Just an observation.
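Another option — shown here only as a hedged variant, not the one true way — is to let subprocess raise on failure via check=True, which pytest then reports as a test failure; capture_output/text keep the script's output available for the failure report:

import subprocess

import pytest

@pytest.mark.parametrize("input_json", ["input.json"])
def test_main(input_json):
    subprocess.run(
        ["python", "main.py", input_json],
        check=True,              # raises CalledProcessError on a non-zero exit code
        capture_output=True,     # stdout/stderr end up on the exception / in the report
        text=True,
    )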
8
9
68,684,670
2021-8-6
https://stackoverflow.com/questions/68684670/how-poetry-knows-my-package-is-located-in-the-src-folder
I have a simple question. I used to create a poetry project with my package at root.

project.toml
mypackage
 +- __init__.py
 +- mypackage.py
 +- test_mypackage.py

I recently moved my tests into another directory, so that the folder now looks like this:

project.toml
src
 +- mypackage
     +- __init__.py
     +- mypackage.py
tests
 +- test_mypackage.py

Nothing changed for poetry, which still works fine when I run poetry build. How does it search for the package root folder? Does it search for a folder with the package name in project.toml? Thanks for your help.
Having a folder called src to contain the package code is just a pre-defined standard that poetry recognizes without being told. It works via the packages section in your project file, which by default scans for mypackage and src/mypackage. If you provide your own value, it will stop auto-detecting those two.
35
23
68,695,228
2021-8-7
https://stackoverflow.com/questions/68695228/is-there-a-liveness-probe-in-kubernetes-that-can-catch-when-a-python-container-f
I have a python program that runs an infinite loop, however, every once in a while the code freezes. No errors are raised or any other message that would alert me something's wrong. I was wondering if Kubernetes has any liveness probe that could possibly help catch when the code freezes so it can kill and restart that container. I have an idea of having the python code make a periodic log every time it completes the loop. This way I can have a liveness probe check the log file every 30 seconds or so to see if the file has been updated. If the file has not been updated after the allotted time, then its is assumed the program froze and the container is killed and restarted. I am currently using the following python code to test with: #Libraries import logging import random as r from time import sleep #Global Veriables FREEZE_TIME = 60 '''Starts an infinate loop that has a 10% chance of freezing...........................................''' def main(): #Create .log file to hold logged info. logging.basicConfig(filename="freeze.log", level=logging.INFO) #Start infinate loop while True: freeze = r.randint(1, 10) #10% chance of freezing. sleep(2) logging.info('Running infinate loop...') print("Running infinate loop...") #Simulate a freeze. if freeze == 1: print(f"Simulating freeze for {FREEZE_TIME} sec.") sleep(FREEZE_TIME) #Start code with main() if __name__ == "__main__": main() If anyone could tell me how to implement this log idea or if there is a better way to do this I would be most grateful! I am currently using Kubernetes on Docker-Desktop for windows 10 if this makes a difference. Also, I am fairly new to this so if you could keep your answers to a "Kubernetes for dummies" level I would appreciate it.
A common approach to liveness probes in Kubernetes is to access an HTTP endpoint (if the application has one). Kubernetes checks whether the response status code falls into the 200-399 range (=success) or not (=failure). Running an HTTP server is not mandatory, as you can run a command or sequence of commands instead. In this case health status is based on the exit code (0 - ok, anything else - failure). Given the nature of your script and the idea with the log, I would write another python script to read the last line of that log and parse the timestamp. Then, if the difference between the current time and the timestamp is greater than [insert reasonable amount], exit(1), else exit(0). If you have prepared the health-check script, you can enable it in this way:

spec:
  containers:
  - name: my_app
    image: my_image
    livenessProbe:
      exec:
        command:                 # the command to run
        - python3
        - check_health.py
      initialDelaySeconds: 5     # wait 5 sec after start for the log to appear
      periodSeconds: 5           # run every 5 seconds

The documentation has a detailed explanation with some great examples.
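For reference, one possible shape of check_health.py — purely a sketch. It sidesteps timestamp parsing by looking at the log file's modification time instead, and the freeze.log path and 90-second threshold are assumptions you would adjust to your own logging setup:

import os
import sys
import time

LOG_FILE = "freeze.log"     # file the main loop appends to (assumed path)
MAX_SILENCE = 90            # seconds without a write before we call it frozen

def main() -> int:
    try:
        age = time.time() - os.path.getmtime(LOG_FILE)
    except OSError:
        return 1            # log missing or unreadable -> report unhealthy
    return 0 if age <= MAX_SILENCE else 1

if __name__ == "__main__":
    sys.exit(main())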
5
3
68,668,417
2021-8-5
https://stackoverflow.com/questions/68668417/is-it-possible-to-pass-path-arguments-into-fastapi-dependency-functions
Is there any way for a FastAPI "dependency" to interpret Path parameters? I have a lot of functions of the form: @app.post("/item/{item_id}/process", response_class=ProcessResponse) async def process_item(item_id: UUID, session: UserSession = Depends(security.user_session)) -> ProcessResponse: item = await get_item(client_id=session.client_id, item_id=item_id) await item.process() Over and over, I need to pass in [multiple] arguments to fetch the required item before doing something with it. This is very repetitive and makes the code very verbose. What I'd really like to do is pass the item in as an argument to the method. Ideally I'd like to make get_item a dependency or embed it somehow in the router. This would dramatically reduce the repetitive logic and excessively verbose function arguments. The problem is that some critical arguments are passed by the client in the Path. Is it possible to pass Path arguments into a dependency or perhaps execute the dependency in the router and pass the result?
A FastAPI dependency function can take any of the arguments that a normal endpoint function can take. So in a normal endpoint you might define a path parameter like so: from fastapi import FastAPI app = FastAPI() @app.get("/items/{item_id}") async def read_item(item_id): return {"item_id": item_id} Now if you want to use that parameter in a dependency, you can simply do: from fastapi import Depends, FastAPI app = FastAPI() async def my_dependency_function(item_id: int): return {"item_id": item_id} @app.get("/items/{item_id}") async def read_item(item_id: int, my_dependency: dict = Depends(my_dependency_function)): return my_dependency The parameters will simply be passed on through to the dependency function if they are present there. You can also use things like Path and Query within the dependency function to define where these are coming from. It will just analyze the request object to pull these values. Here is an example using the Path function from FastAPI: from fastapi import Depends, FastAPI, Path app = FastAPI() async def my_dependency_function(item_id: int = Path(...)): return {"item_id": item_id} @app.get("/items/{item_id}") async def read_item(my_dependency: dict = Depends(my_dependency_function)): return my_dependency As for your concern of implementing it as a dependency in the router, you can do something like this when creating the router: items_router = APIRouter( prefix="/items", tags=["items"], dependencies=[Depends(my_dependency_function)], ) Or you can do it when you run include_router on the app like: app.include_router( items_router, prefix="/items", dependencies=[Depends(my_dependency_function)], ) For more on dependencies and more examples like this see https://fastapi.tiangolo.com/tutorial/dependencies/
27
45