question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
65,412,984 | 2020-12-22 | https://stackoverflow.com/questions/65412984/using-pandas-data-frame-as-a-type-in-pydantic | I'm using pydantic and want to create classes which contain pandas dataframes. I was looking for this online for quite some time and did not find anything. My code for the custom types looks as follows. I named the type for dataframes pd.DataFrame but obviously it's not correct. Does anyone know how to declare a pandas dataframe type? import pandas as pd from pydantic import BaseModel class SubModelInput(BaseModel): a: pd.DataFrame b: pd.DataFrame class ModelInput(BaseModel): SubModelInput: SubModelInput a: pd.DataFrame b: pd.DataFrame c: pd.DataFrame Thanks for any help! | You can activate Arbitrary Types Allowed: import pandas as pd from pydantic import BaseModel class SubModelInput(BaseModel): a: pd.DataFrame b: pd.DataFrame class Config: arbitrary_types_allowed = True class ModelInput(BaseModel): SubModelInput: SubModelInput a: pd.DataFrame b: pd.DataFrame c: pd.DataFrame class Config: arbitrary_types_allowed = True | 8 | 13 |
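A minimal runnable sketch of the accepted answer's approach (pydantic v1-style `Config`, as used in the answer); the DataFrame contents below are made up for illustration:

```python
import pandas as pd
from pydantic import BaseModel


class SubModelInput(BaseModel):
    a: pd.DataFrame
    b: pd.DataFrame

    class Config:
        arbitrary_types_allowed = True  # accept non-pydantic types such as pd.DataFrame


# Constructing an instance works because pydantic no longer rejects the unknown type.
m = SubModelInput(a=pd.DataFrame({"x": [1, 2]}), b=pd.DataFrame({"y": [3.0]}))
print(type(m.a))  # <class 'pandas.core.frame.DataFrame'>
```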
65,316,863 | 2020-12-16 | https://stackoverflow.com/questions/65316863/is-asyncio-to-thread-method-different-to-threadpoolexecutor | I see that the asyncio.to_thread() method has been added in Python 3.9+; its description says it runs blocking code on a separate thread so other work can run at once. See the example below: def blocking_io(): print(f"start blocking_io at {time.strftime('%X')}") # Note that time.sleep() can be replaced with any blocking # IO-bound operation, such as file operations. time.sleep(1) print(f"blocking_io complete at {time.strftime('%X')}") async def main(): print(f"started main at {time.strftime('%X')}") await asyncio.gather( asyncio.to_thread(blocking_io), asyncio.sleep(1)) print(f"finished main at {time.strftime('%X')}") asyncio.run(main()) # Expected output: # # started main at 19:50:53 # start blocking_io at 19:50:53 # blocking_io complete at 19:50:54 # finished main at 19:50:54 From the explanation, it seems to use a thread mechanism rather than coroutine context switching. Does this mean it is not actually async after all? Is it the same as traditional multi-threading as in concurrent.futures.ThreadPoolExecutor? What is the benefit of using a thread this way then? | The source code of to_thread is quite simple. It boils down to awaiting run_in_executor with a default executor (executor argument is None), which is ThreadPoolExecutor. In fact, yes, this is traditional multithreading: code intended to run on a separate thread is not asynchronous, but to_thread allows you to await its result asynchronously. Also note that the function runs in the context of the current task, so its context variable values will be available inside the func. async def to_thread(func, /, *args, **kwargs): """Asynchronously run function *func* in a separate thread. Any *args and **kwargs supplied for this function are directly passed to *func*. Also, the current :class:`contextvars.Context` is propogated, allowing context variables from the main thread to be accessed in the separate thread. Return a coroutine that can be awaited to get the eventual result of *func*. """ loop = events.get_running_loop() ctx = contextvars.copy_context() func_call = functools.partial(ctx.run, func, *args, **kwargs) return await loop.run_in_executor(None, func_call) | 37 | 53 |
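For readers on Python < 3.9, the following is a hedged sketch of the same idea built directly on `loop.run_in_executor`, mirroring the source quoted in the answer; `to_thread_compat` is an illustrative name, not a stdlib function:

```python
import asyncio
import contextvars
import functools
import time


async def to_thread_compat(func, *args, **kwargs):
    """Run func in the default ThreadPoolExecutor and await its result."""
    loop = asyncio.get_running_loop()
    ctx = contextvars.copy_context()                 # propagate context variables
    call = functools.partial(ctx.run, func, *args, **kwargs)
    return await loop.run_in_executor(None, call)    # None -> default ThreadPoolExecutor


async def main():
    # time.sleep(1) blocks a worker thread, not the event loop,
    # so both awaitables finish after roughly one second.
    await asyncio.gather(to_thread_compat(time.sleep, 1), asyncio.sleep(1))


asyncio.run(main())
```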
65,322,928 | 2020-12-16 | https://stackoverflow.com/questions/65322928/cv2-imwrite-systemerror-built-in-function-imwrite-returned-null-without-set | I'm trying to save an np array as an image. The problem is that, if I write the path in the imwrite function it works, but if I store it in a variable and then use this variable as the path it doesn't work and returns an error. This works: cv2.imwrite('my/path/to/image.png', myarray[...,::-1]) This doesn't work: new_image_path = path/'{}+.png'.format(np.random.randint(1000)) cv2.imwrite(new_image_path, myarray[...,::-1]) And it returns this error: SystemError: <built-in function imwrite> returned NULL without setting an error | The method cv2.imwrite does not work with the Path object. Simply convert it to string when you call the method: cv2.imwrite(str(new_image_path), myarray[...,::-1]) | 7 | 11 |
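A short runnable sketch of the fix, with an assumed output directory and a dummy RGB array standing in for the poster's data:

```python
from pathlib import Path

import cv2
import numpy as np

out_dir = Path("my/path/to")               # assumed location
out_dir.mkdir(parents=True, exist_ok=True)
new_image_path = out_dir / f"{np.random.randint(1000)}.png"

myarray = np.zeros((32, 32, 3), dtype=np.uint8)        # dummy RGB image
# cv2.imwrite expects a plain string path, so convert the Path explicitly.
cv2.imwrite(str(new_image_path), myarray[..., ::-1])   # RGB -> BGR channel flip
```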
65,331,736 | 2020-12-16 | https://stackoverflow.com/questions/65331736/how-can-i-publish-python-packages-to-codeartifact-using-poetry | Trying to publish a Poetry package to AWS CodeArtifact. It supports pip which should indicate that it supports poetry as well since poetry can upload to PyPi servers. I've configured the domain like so: export CODEARTIFACT_AUTH_TOKEN=`aws codeartifact get-authorization-token --domain XXXX --domain-owner XXXXXXXXXXXX --query authorizationToken --output text` poetry config repositories.the_aws_repo https://aws:$CODEARTIFACT_AUTH_TOKEN@XXXX-XXXXXXXXXXXX.d.codeartifact.eu-central-1.amazonaws.com/pypi/XXXX/simple/ poetry config pypi-token.the_aws_repo $CODEARTIFACT_AUTH_TOKEN But I'm getting 404 when trying to publish the package: β― poetry publish --repository the_aws_repo -vvv No suitable keyring backend found No suitable keyring backends were found Using a plaintext file to store and retrieve credentials Publishing xxx (0.1.5) to the_aws_repo - Uploading xxx-0.1.5-py3-none-any.whl 100% Stack trace: 7 ~/.poetry/lib/poetry/_vendor/py3.8/clikit/console_application.py:131 in run 129β parsed_args = resolved_command.args 130β β 131β status_code = command.handle(parsed_args, io) 132β except KeyboardInterrupt: 133β status_code = 1 6 ~/.poetry/lib/poetry/_vendor/py3.8/clikit/api/command/command.py:120 in handle 118β def handle(self, args, io): # type: (Args, IO) -> int 119β try: β 120β status_code = self._do_handle(args, io) 121β except KeyboardInterrupt: 122β if io.is_debug(): 5 ~/.poetry/lib/poetry/_vendor/py3.8/clikit/api/command/command.py:171 in _do_handle 169β handler_method = self._config.handler_method 170β β 171β return getattr(handler, handler_method)(args, io, self) 172β 173β def __repr__(self): # type: () -> str 4 ~/.poetry/lib/poetry/_vendor/py3.8/cleo/commands/command.py:92 in wrap_handle 90β self._command = command 91β β 92β return self.handle() 93β 94β def handle(self): # type: () -> Optional[int] 3 ~/.poetry/lib/poetry/console/commands/publish.py:77 in handle 75β ) 76β β 77β publisher.publish( 78β self.option("repository"), 79β self.option("username"), 2 ~/.poetry/lib/poetry/publishing/publisher.py:93 in publish 91β ) 92β β 93β self._uploader.upload( 94β url, 95β cert=cert or get_cert(self._poetry.config, repository_name), 1 ~/.poetry/lib/poetry/publishing/uploader.py:119 in upload 117β 118β try: β 119β self._upload(session, url, dry_run) 120β finally: 121β session.close() UploadError HTTP Error 404: Not Found at ~/.poetry/lib/poetry/publishing/uploader.py:216 in _upload 212β self._register(session, url) 213β except HTTPError as e: 214β raise UploadError(e) 215β β 216β raise UploadError(e) 217β 218β def _do_upload( 219β self, session, url, dry_run=False 220β ): # type: (requests.Session, str, Optional[bool]) -> None My AWS IAM user has permission to do this since I gave it the relevant permissions in the repo. 
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::XXXXXXXXXXXX:user/ShayN" }, "Action": [ "codeartifact:AssociateExternalConnection", "codeartifact:CopyPackageVersions", "codeartifact:DeletePackageVersions", "codeartifact:DeleteRepository", "codeartifact:DeleteRepositoryPermissionsPolicy", "codeartifact:DescribePackageVersion", "codeartifact:DescribeRepository", "codeartifact:DisassociateExternalConnection", "codeartifact:DisposePackageVersions", "codeartifact:GetPackageVersionReadme", "codeartifact:GetRepositoryEndpoint", "codeartifact:ListPackageVersionAssets", "codeartifact:ListPackageVersionDependencies", "codeartifact:ListPackageVersions", "codeartifact:ListPackages", "codeartifact:PublishPackageVersion", "codeartifact:PutPackageMetadata", "codeartifact:PutRepositoryPermissionsPolicy", "codeartifact:ReadFromRepository", "codeartifact:UpdatePackageVersionsStatus", "codeartifact:UpdateRepository" ], "Resource": "*" } ] } What am I missing? | The problem is the /simple/ at the end of the repo url. This part should only be added when pulling from that repo, not when publishing to it. If you look closely to the documentation of AWS CodeArtifact on how to publish with twine, you'll see that it's also not there. This works: # This will give the repo url without the /simple/ part # Example: https://<my-domain>-<domain-owner-id>.d.codeartifact.<region>.amazonaws.com/pypi/<my-repo>/ # Note the lack of the "aws:auth-token@" part export CODEARTIFACT_REPOSITORY_URL=`aws codeartifact get-repository-endpoint --domain my-domain --domain-owner domain-owner-id --repository my-repo --format pypi --query repositoryEndpoint --output text` # This will give the token to access the repo export CODEARTIFACT_AUTH_TOKEN=`aws codeartifact get-authorization-token --domain my-domain --domain-owner domain-owner-id --query authorizationToken --output text` # This specifies the user who accesses the repo export CODEARTIFACT_USER=aws # Now use all of these when configuring the repo in poetry poetry config repositories.<my-repo-name-for-poetry> $CODEARTIFACT_REPOSITORY_URL poetry config http-basic.<my-repo-name-for-poetry> $CODEARTIFACT_USER $CODEARTIFACT_AUTH_TOKEN Note that the authentication token will expire when your AWS login session ends. Hence, you'll have to set the http-basic.<my-repo-name-for-poetry> with the new token every time it expires. FYI, I had the same problem and it took me hours to figure this out. But in the end, more carefully reading the documentation should have helped me. | 16 | 26 |
65,394,319 | 2020-12-21 | https://stackoverflow.com/questions/65394319/boto3-how-to-interract-with-digitalocean-s3-spaces-when-cdn-is-enabled | I'm working with DigitalOcean Spaces (S3 storage protocol) which has enabled CDN. Any file on s3 can be accessed via direct URL in the given form: https://my-bucket.fra1.digitaloceanspaces.com/<file_key> If CDN is enabled, the file can be accessed via additional CDN URL: https://my-bucket.fra1.cdn.digitaloceanspaces.com/<file_key> where fra1 is a region_name. When I'm using boto3 SDK for Python, the file URL is the following (generated by boto3): https://fra1.digitaloceanspaces.com/my-bucket/<file_key> # just note that bucket name is no more a domain part! This format also works fine. But, if CDN is enabled - file url causes an error: EndpointConnectionError: Could not connect to the endpoint URL: https://fra1.cdn.digitaloceanspaces.com/my-bucket/<file_key> assuming the endpoint_url was changed from default_endpoint=https://fra1.digitaloceanspaces.com to default_endpoint=https://fra1.cdn.digitaloceanspaces.com How to connect to CDN with proper URL without getting an error? And why boto3 uses different URL format? Is any workaround can be applied in this case? code: s3_client = boto3.client('s3', region_name=s3_configs['default_region'], endpoint_url=s3_configs['default_endpoint'], aws_access_key_id=s3_configs['bucket_access_key'], aws_secret_access_key=s3_configs['bucket_secret_key']) s3_client.download_file(bucket_name,key,local_filepath) boto3 guide for DigitalOcean Spaces. Here is what I've also tried but It didn't work: Generate presigned url's UPDATE Based on @Amit Singh's answer: As I mentioned before, I've already tried this trick with presigned URLs. I've got Urls like this https://fra1.digitaloceanspaces.com/<my-bucket>/interiors/uploaded/images/07IRgHJ2PFhVqVrJDCIpzhghqe4TwK1cSSUXaC4T.jpeg?<presigned-url-params> The bucket name appears after endpoint. I had to move It to domain-level manually: https://<my-bucket>.fra1.cdn.digitaloceanspaces.com/interiors/uploaded/images/07IRgHJ2PFhVqVrJDCIpzhghqe4TwK1cSSUXaC4T.jpeg?<presigned-url-params> With this URL I can now connect to Digital ocean, but another arror occures: This XML file does not appear to have any style information associated with it. The document tree is shown below. <Error> <Code>SignatureDoesNotMatch</Code> <RequestId>tx00000000000008dfdbc88-006005347c-604235a-fra1a</RequestId> <HostId>604235a-fra1a-fra1</HostId> </Error> As a workaround I've tired to use signature s3v4: s3_client = boto3.client('s3', region_name=configs['default_region'], endpoint_url=configs['default_endpoint'], aws_access_key_id=configs['bucket_access_key'], aws_secret_access_key=configs['bucket_secret_key'], config= boto3.session.Config(signature_version='s3v4')) but It still fails. | Based on @Amit Singh's answer, I've made an additional research of this issue. Answers that helped me were found here and here. To make boto3 presigned URLs work, I've made the following update to client and generate_presigned_url() params. s3_client = boto3.client('s3', region_name=configs['default_region'], endpoint_url=configs['default_endpoint'], aws_access_key_id=configs['bucket_access_key'], aws_secret_access_key=configs['bucket_secret_key'], config=boto3.session.Config(signature_version='s3v4', retries={ 'max_attempts': 10, 'mode': 'standard' }, s3={'addressing_style': "virtual"}, )) ... 
response = s3_client.generate_presigned_url('get_object', Params={'Bucket': bucket_name, 'Key': object_name}, ExpiresIn=3600, HttpMethod=None ) After that, the .cdn domain part should be added after the region name. | 7 | 2 |
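To illustrate that last step (inserting the `.cdn` part into the host), here is a small hypothetical helper that rewrites a presigned URL; it is an assumption that the signature remains valid only with the virtual addressing style and s3v4 signing configured as shown above:

```python
from urllib.parse import urlsplit, urlunsplit


def to_cdn_url(presigned_url: str) -> str:
    """Rewrite my-bucket.fra1.digitaloceanspaces.com -> my-bucket.fra1.cdn.digitaloceanspaces.com."""
    parts = urlsplit(presigned_url)
    host = parts.netloc.replace(".digitaloceanspaces.com", ".cdn.digitaloceanspaces.com")
    return urlunsplit((parts.scheme, host, parts.path, parts.query, parts.fragment))


print(to_cdn_url("https://my-bucket.fra1.digitaloceanspaces.com/key.png?X-Amz-Signature=abc"))
```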
65,379,879 | 2020-12-20 | https://stackoverflow.com/questions/65379879/define-a-ipython-magic-which-replaces-the-content-of-the-next-cell | The %load line-magic command loads the content of a given file into the current cell, for instance, executing: [cell 1] %load hello_world.py ... transform the cell into: [cell 1] # %load hello_world.py print("hello, world") I would like to create a %load_next line-magic command which would instead load this file into the next cell. For example, executing cell 1 in the following notebook: [cell 1] %load_next hello_world.py [cell 2] print("hello, cruel world") # original content ... would keep cell 1 unchanged and update cell 2 with the new content: [cell 1] %load_next hello_world.py [cell 2] print("hello, world") I have tried this: from IPython.core.magic import Magics, magics_class, line_magic from pathlib import Path @magics_class class MyMagics(Magics): @line_magic def load_next(self, line): new_content = Path(line).read_text() self.shell.set_next_input(new_content, replace=False) ip = get_ipython() ip.register_magics(MyMagics) But it inserts the content between the current and the next cell: [cell 1] %load_next hello_world.py [cell 2] print("hello, world") [cell 3] print("hello, cruel world") # original content Is it possible to make it either replace the next cell, or delete the next cell before inserting it? | You can run below script. There is no way to get all cells, so I decided to run javascript code to remove the next cell. Js part finds all cells and remove the next cell from the current cell. I have tested on Jupyter Notebook and Jupyter Lab. from IPython.display import display, HTML, Javascript from IPython.core.magic import Magics, magics_class, line_magic from pathlib import Path @magics_class class MyMagics(Magics): @line_magic def load_next(self, line): js_script = r"""<script> if (document.getElementById('notebook-container')) { //console.log('Jupyter Notebook'); allCells = document.getElementById('notebook-container').children; selectionClass = /\bselected\b/; jupyter = 'notebook'; } else if (document.getElementsByClassName('jp-Notebook-cell').length){ //console.log('Jupyter Lab'); allCells = document.getElementsByClassName('jp-Notebook-cell'); selectionClass = /\bjp-mod-selected\b/; jupyter = 'lab'; } else { console.log('Unknown Environment'); } if (typeof allCells !== 'undefined') { for (i = 0; i < allCells.length - 1; i++) { if(selectionClass.test(allCells[i].getAttribute('class'))){ allCells[i + 1].remove(); // remove output indicators of current cell window.setTimeout(function(){ if(jupyter === 'lab') { allCells[i].setAttribute('class', allCells[i].getAttribute('class') + ' jp-mod-noOutputs'); allCells[i].getElementsByClassName('jp-OutputArea jp-Cell-outputArea')[0].innerHTML = ''; } else if(jupyter === 'notebook'){ allCells[i].getElementsByClassName('output')[0].innerHTML = ''; } }, 20); break; } } } </script>""" # remove next cell display(HTML(js_script)) new_content = Path(line).read_text() self.shell.set_next_input(new_content, replace=False) ip = get_ipython() ip.register_magics(MyMagics) | 6 | 2 |
65,349,950 | 2020-12-17 | https://stackoverflow.com/questions/65349950/how-to-make-python-telegram-bot-send-a-message-without-getting-a-commad | I'm making a telegram bot using Python-Telegram-bot. I want to make it send a message to one specific user (myself in this case) to select an option. After that, it should take that option as a command and work as usual. But after 30 min... it should send me the same message making me choose an option just like before. How can I make it work? def start(update: Update, context: CallbackContext) -> None: user = update.message.from_user.username print(user) context.bot.send_message(chat_id=update.effective_chat.id, text= "Choose an option. ('/option1' , '/option 2', '/...')") def main(): updater = Updater("<MY-BOT-TOKEN>", use_context=True) updater.dispatcher.add_handler(CommandHandler('start', start)) updater.start_polling() updater.idle() if __name__ == '__main__': main() I want this to run without getting the /start command (the update). But after this... there will be normal functions that'll get updates, and after 30 min I want to run this "start" function again without getting an update. | You can get the bot object either from the updater or the dispatcher: updater = Updater('<bot-token>') updater.bot.sendMessage(chat_id='<user-id>', text='Hello there!') # alternative: updater.dispatcher.bot.sendMessage(chat_id='<user-id>', text='Hello there!') | 11 | 19 |
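For the 30-minute repetition part of the question, a hedged sketch using the job queue of python-telegram-bot's v13-style API (the chat id is a placeholder you would replace with your own):

```python
from telegram.ext import CallbackContext, Updater

MY_CHAT_ID = 123456789  # placeholder: your own chat id


def ask_options(context: CallbackContext) -> None:
    # Send the prompt without waiting for any incoming update.
    context.bot.send_message(
        chat_id=MY_CHAT_ID,
        text="Choose an option. ('/option1', '/option2', '/...')",
    )


updater = Updater("<MY-BOT-TOKEN>", use_context=True)
# first=0 sends the prompt immediately, then again every 30 minutes.
updater.job_queue.run_repeating(ask_options, interval=30 * 60, first=0)
updater.start_polling()
updater.idle()
```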
65,411,425 | 2020-12-22 | https://stackoverflow.com/questions/65411425/running-two-dask-ml-imputers-simultaneously-instead-of-sequentially | I can impute the mean and most frequent value using dask-ml like so, this works fine: mean_imputer = impute.SimpleImputer(strategy='mean') most_frequent_imputer = impute.SimpleImputer(strategy='most_frequent') data = [[100, 2, 5], [np.nan, np.nan, np.nan], [70, 7, 5]] df = pd.DataFrame(data, columns = ['Weight', 'Age', 'Height']) df.iloc[:, [0,1]] = mean_imputer.fit_transform(df.iloc[:,[0,1]]) df.iloc[:, [2]] = most_frequent_imputer.fit_transform(df.iloc[:,[2]]) print(df) Weight Age Height 0 100.0 2.0 5.0 1 85.0 4.5 5.0 2 70.0 7.0 5.0 But what if I have 100 million rows of data it seems that dask would do two loops when it could have done only one, is it possible to run both imputers simultaneously and/or in parallel instead of sequentially? What would be a sample code to achieve that? | You can used dask.delayed as suggested in docs and Dask Toutorial to parallelise the computation if entities are independent of one another. Your code would look like: from dask.distributed import Client client = Client(n_workers=4) from dask import delayed import numpy as np import pandas as pd from dask_ml import impute mean_imputer = impute.SimpleImputer(strategy='mean') most_frequent_imputer = impute.SimpleImputer(strategy='most_frequent') def fit_transform_mi(d): return mean_imputer.fit_transform(d) def fit_transform_mfi(d): return most_frequent_imputer.fit_transform(d) def setdf(a,b,df): df.iloc[:, [0,1]]=a df.iloc[:, [2]]=b return df data = [[100, 2, 5], [np.nan, np.nan, np.nan], [70, 7, 5]] df = pd.DataFrame(data, columns = ['Weight', 'Age', 'Height']) a = delayed(fit_transform_mi)(df.iloc[:,[0,1]]) b = delayed(fit_transform_mfi)(df.iloc[:,[2]]) c = delayed(setdf)(a,b,df) df= c.compute() print(df) client.close() The c object is a lazy Delayed object. This object holds everything we need to compute the final result, including references to all of the functions that are required and their inputs and relationship to one-another. | 6 | 2 |
65,380,093 | 2020-12-20 | https://stackoverflow.com/questions/65380093/is-there-an-essential-difference-between-await-and-async-with-while-doing-reques | My question is about the right way of making response in aiohttp Official aiohttp documentation gives us the example of making an async query: session = aiohttp.ClientSession() async with session.get('http://httpbin.org/get') as resp: print(resp.status) print(await resp.text()) await session.close() I cannot understand, why is the context manager here. All i have found is that __aexit__() method awaits resp.release() method. But the documentation also tells that awaiting resp.release() is not necessary at general. That all really confuses me. Why should i do that way if i find the code below more readable and not so nested? session = aiohttp.ClientSession() resp = await session.get('http://httpbin.org/get') print(resp.status) print(await resp.text()) # I finally have not get the essence of this method. # I've tried both using and not using this method in my code, # I've not found any difference in behaviour. # await resp.release() await session.close() I have dug into aiohttp.ClientSession and its context manager sources, but i have not found anything that could clarify the situation. In the end, my question: what's the difference? | Explicitly managing a response via async with, is not necessary but advisable. The purpose of async with for response objects is to safely and promptly release resources used by the response (via a call to resp.release()). That is, even if an error occurs the resources are freed and available for further requests/responses. Otherwise, aiohttp will also release the response resources but without guarantee of promptness. The worst case is that this is delayed for an arbitrary time, i.e. up to the end of the application and timeout of external resources (such as sockets). The difference is not noticeable if no errors occur (in which case aiohttp cleans up unused resources) and/or if the application is short (in which case there are enough resources to not need re-use). However, since errors may occur unexpectedly and aiohttp is designed for many requests/responses, it is advisable to always default to prompt cleanup via async with. | 9 | 3 |
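To make that guarantee concrete, the following sketch spells out roughly what the context manager does for you with an explicit try/finally (error handling simplified):

```python
import asyncio

import aiohttp


async def main():
    async with aiohttp.ClientSession() as session:
        resp = await session.get("http://httpbin.org/get")
        try:
            print(resp.status)
            print(await resp.text())
        finally:
            resp.release()  # resources are freed promptly even if reading raised


asyncio.run(main())
```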
65,321,798 | 2020-12-16 | https://stackoverflow.com/questions/65321798/how-to-config-completer-use-jedi-to-false-in-juypter-notebook-permanently | Every time a new Jupyter Notebook instance is opened, the %config Completer.use_jedi = False command has to be run before autocomplete functionality starts working. Having to set use_jedi to False before coding every time is tiring. Kindly suggest if there is a permanent fix to have autocomplete in Jupyter Notebook. | I launch my JupyterLab from Docker and hit this problem. I solved it like this: COPY ipython_kernel_config.py /root/.ipython/profile_default/ipython_kernel_config.py Content of ipython_kernel_config.py: c.Completer.use_jedi = False Idea: https://github.com/ipython/ipython/issues/11530 | 8 | 6 |
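Outside Docker, the same fix can be applied by appending that line to the default profile's kernel config; a small hedged helper follows (the path is IPython's default profile location and may differ if you use custom profiles):

```python
from pathlib import Path

cfg = Path.home() / ".ipython" / "profile_default" / "ipython_kernel_config.py"
cfg.parent.mkdir(parents=True, exist_ok=True)

line = "c.Completer.use_jedi = False\n"
# Append the setting only if it is not already present.
if not cfg.exists() or line not in cfg.read_text():
    with cfg.open("a") as f:
        f.write(line)
print(f"wrote {cfg}")
```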
65,339,479 | 2020-12-17 | https://stackoverflow.com/questions/65339479/if-you-store-optional-functionality-of-a-base-class-in-a-secondary-class-should | I know the title is probably a bit confusing, so let me give you an example. Suppose you have a base class Base which is intended to be subclassed to create more complex objects. But you also have optional functionality that you don't need for every subclass, so you put it in a secondary class OptionalStuffA that is always intended to be subclassed together with the base class. Should you also make that secondary class a subclass of Base? This is of course only relevant if you have more than one OptionalStuff class and you want to combine them in different ways, because otherwise you don't need to subclass both Base and OptionalStuffA (and just have OptionalStuffA be a subclass of Base so you only need to subclass OptionalStuffA). I understand that it shouldn't make a difference for the MRO if Base is inherited from more than once, but I'm not sure if there are any drawbacks to making all the secondary classes inherit from Base. Below is an example scenario. I've also thrown in the QObject class as a 'third party' token class whose functionality is necessary for one of the secondary classes to work. Where do I subclass it? The example below shows how I've done it so far, but I doubt this is the way to go. from PyQt5.QtCore import QObject class Base: def __init__(self): self._basic_stuff = None def reset(self): self._basic_stuff = None class OptionalStuffA: def __init__(self): super().__init__() self._optional_stuff_a = None def reset(self): if hasattr(super(), 'reset'): super().reset() self._optional_stuff_a = None def do_stuff_that_only_works_if_my_children_also_inherited_from_Base(self): self._basic_stuff = not None class OptionalStuffB: def __init__(self): super().__init__() self._optional_stuff_b = None def reset(self): if hasattr(super(), 'reset'): super().reset() self._optional_stuff_b = None def do_stuff_that_only_works_if_my_children_also_inherited_from_QObject(self): print(self.objectName()) class ClassThatIsActuallyUsed(Base, OptionalStuffA, OptionalStuffB, QObject): def __init__(self): super().__init__() self._unique_stuff = None def reset(self): if hasattr(super(), 'reset'): super().reset() self._unique_stuff = None | What I can get from your problem is that you want to have different functions and properties based on different condition, that sounds like good reason to use MetaClass. It all depends how complex your each class is, and what are you building, if it is for some library or API then MetaClass can do magic if used rightly. MetaClass is perfect to add functions and property to the class based on some sort of condition, you just have to add all your subclass function into one meta class and add that MetaClass to your main class From Where to start you can read about MetaClass here, or you can watch it here. After you have better understanding about MetaClass see the source code of Django ModelForm from here and here, but before that take a brief look on how the Django Form works from outside this will give You an idea on how to implement it. This is how I would implement it. #You can also inherit it from other MetaClass but type has to be top of inheritance class meta_class(type): # create class based on condition """ msc: meta_class, behaves much like self (not exactly sure). name: name of the new class (ClassThatIsActuallyUsed). base: base of the new class (Base). attrs: attrs of the new class (Meta,...). 
""" def __new__(mcs, name, bases, attrs): meta = attrs.get('Meta') if(meta.optionA){ attrs['reset'] = resetA }if(meta.optionB){ attrs['reset'] = resetB }if(meta.optionC){ attrs['reset'] = resetC } if("QObject" in bases){ attrs['do_stuff_that_only_works_if_my_children_also_inherited_from_QObject'] = functionA } return type(name, bases, attrs) class Base(metaclass=meta_class): #you can also pass kwargs to metaclass here #define some common functions here class Meta: # Set default values here for the class optionA = False optionB = False optionC = False class ClassThatIsActuallyUsed(Base): class Meta: optionA = True # optionB is False by default optionC = True EDIT: Elaborated on how to implement MetaClass. | 8 | 2 |
65,369,447 | 2020-12-19 | https://stackoverflow.com/questions/65369447/how-to-intercept-the-first-value-of-a-generator-and-transparently-yield-from-the | Update: I've started a thread on python-ideas to propose additional syntax or a stdlib function for this purpose (i.e. specifying the first value sent by yield from). So far 0 replies... :/ How do I intercept the first yielded value of a subgenerator but delegate the rest of the iteration to the latter using yield from? For example, suppose we have an arbitrary bidirectional generator subgen, and we want to wrap this in another generator gen. The purpose of gen is to intercept the first yielded value of subgen and delegate the rest of the generationβincluding sent values, thrown exceptions, .close(), etc.βto the sub-generator. The first thing that might come to mind could be this: def gen(): g = subgen() first = next(g) # do something with first... yield "intercepted" # delegate the rest yield from g But this is wrong, because when the caller .sends something back to the generator after getting the first value, it will end up as the value of the yield "intercepted" expression, which is ignored, and instead g will receive None as the first .send value, as part of the semantics of yield from. So we might think to do this: def gen(): g = subgen() first = next(g) # do something with first... received = yield "intercepted" g.send(received) # delegate the rest yield from g But what we've done here is just moving the problem back by one step: as soon as we call g.send(received), the generator resumes its execution and doesn't stop until it reaches the next yield statement, whose value becomes the return value of the .send call. So we'd also have to intercept that and re-send it. And then send that, and that again, and so on... So this won't do. Basically, what I'm asking for is a yield from with a way to customize what the first value sent to the generator is: def gen(): g = subgen() first = next(g) # do something with first... received = yield "intercepted" # delegate the rest yield from g start with received # pseudocode; not valid Python ...but without having to re-implement all of the semantics of yield from myself. That is, the laborious and poorly maintainable solution would be: def adaptor(generator, init_send_value=None): send = init_send_value try: while True: send = yield generator.send(send) except StopIteration as e: return e.value which is basically a bad re-implementation of yield from (it's missing handling of throw, close, etc.). Ideally I would like something more elegant and less redundant. | If you're trying to implement this generator wrapper as a generator function using yield from, then your question basically boils down to whether it is possible to specify the first value sent to the "yielded from" generator. Which it is not. If you look at the formal specification of the yield from expression in PEP 380, you can see why. The specification contains a (surprisingly complex) piece of sample code that behaves the same as a yield from expression. The first few lines are: _i = iter(EXPR) try: _y = next(_i) except StopIteration as _e: _r = _e.value else: ... You can see that the first thing that is done to the iterator is to call next() on it, which is basically equivalent to .send(None). There is no way to skip that step and your generator will always receive another None whenever yield from is used. 
The solution I've come up with is to implement the generator protocol using a class instead of a generator function: class Intercept: def __init__(self, generator): self._generator = generator self._intercepted = False def __next__(self): return self.send(None) def send(self, value): yielded_value = self._generator.send(value) # Intercept the first value yielded by the wrapped generator and # replace it with a different value. if not self._intercepted: self._intercepted = True print(f'Intercepted value: {yielded_value}') yielded_value = 'intercepted' return yielded_value def throw(self, type, *args): return self._generator.throw(type, *args) def close(self): self._generator.close() __next__(), send(), throw(), close() are described in the Python Reference Manual. The class wraps the generator passed to it when created will mimic its behavior. The only thing it changes is that the first value yielded by the generator is replaced by a different value before it is returned to the caller. We can test the behavior with an example generator f() which yields two values and a function main() which sends values into the generator until the generator terminates: def f(): y = yield 'first' print(f'f(): {y}') y = yield 'second' print(f'f(): {y}') def main(): value_to_send = 0 gen = f() try: x = gen.send(None) while True: print(f'main(): {x}') # Send incrementing integers to the generator. value_to_send += 1 x = gen.send(value_to_send) except StopIteration: print('main(): StopIteration') main() When ran, this example will produce the following output, showing which values arrive in the generator and which are returned by the generator: main(): first f(): 1 main(): second f(): 2 main(): StopIteration Wrapping the generator f() by changing the statement gen = f() to gen = Intercept(f()), produces the following output, showing that the first yielded value has been replaced: Intercepted value: first main(): intercepted f(): 1 main(): second f(): 2 As all other calls to any of the generator API are forwarded directly to the wrapped generator, it should behave equivalently to the wrapped generator itself. | 12 | 2 |
65,413,501 | 2020-12-22 | https://stackoverflow.com/questions/65413501/annoying-diff-format-for-long-strings-using-pytest-pycharm | Hi have this very basic test: def test_long_diff(): long_str1 = "ABCDEFGHIJ " * 10 long_str2 = "ABCDEFGHIJ " * 5 + "* " + "ABCDEFGHIJ " * 5 assert long_str1 == long_str2 Using: Python 3.8.5, pytest-6.2.1, PyCharm 2020.2, MacOs Running with pytest from a shell, the output is "useable" and the error message will point out the faulty character in the long string: (venv) ~/dev/testdiff/> pytest longdiff.py ========== test session starts =========== platform darwin -- Python 3.8.5, pytest-6.2.1, py-1.10.0, pluggy-0.13.1 [...] > assert long_str1 == long_str2 E AssertionError: assert 'ABCDEFGHIJ A...J ABCDEFGHIJ ' == 'ABCDEFGHIJ A...J ABCDEFGHIJ ' E Skipping 45 identical leading characters in diff, use -v to show E - BCDEFGHIJ * ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ E ? -- E + BCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ Using pytest-clarity and the -vv option, I get coloured diff (not rendered bellow) and different details: E left: "ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ " E right: "ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ * ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ " E E left and right have different lengths: E len(left) == 110, len(right) == 112 But if I let Pycharm run the test (same Python version, same .venv, I just right-click the test and pick "Run 'pytest for ...'"), the output in the Run Console is almost unusable because "something along the way" turns the long strings into tuples of shorter strings before applying the diff: FAILED [100%] longdiff.py:0 (test_long_diff) ('ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ' 'ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ') != ('ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ * ABCDEFGHIJ ' 'ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ') <Click to see difference> Clicking <Click to see difference> basically shows the same garbled output in a larger window What's causing this Pycharm output? And is there a way to prevent this behaviour ? Ideally I'd like to see the output from pytest-clarity in the Run console. | So it turns out this is an hard-coded behaviour of the pytest plugin used by PyCharm. The plugin always applies pprint.pformat() to the left and right values. The behaviour described in the question then occurs when the strings are longer than 80 characters and contain white spaces. One possible workaround is to override the plugin's pytest_assertrepr_compare hook. Here's a version that worked for me. Simply paste it in your conftest.py. import pprint import pytest @pytest.hookimpl(tryfirst=True) def pytest_assertrepr_compare(config, op, left, right): if op in ('==', '!='): return ['{0} {1} {2}'.format(pprint.pformat(left, width=999), op, pprint.pformat(right, width=999))] Another possible hack it to monkeypatch pprint.pformat: import pytest import pprint from functools import partial @pytest.fixture(scope='function', autouse=True) def fix_long_string_diffs(monkeypatch): monkeypatch.setattr(pprint, 'pformat', partial(pprint.pformat, width=999)) | 6 | 3 |
65,366,434 | 2020-12-19 | https://stackoverflow.com/questions/65366434/discrepancy-between-two-hosts-running-the-same-docker-commands | A colleague and I have a big Docker puzzle. When we run the following commands we get different results. docker run -it python:3.8.6 /bin/bash pip install fbprophet For me, it installs perfectly, while for him it produces an error and fails to install. I thought the whole point of docker is to prevent this kind of issue, so I'm really puzzled. I'm giving more details below, but my main question is: How is it possible that we get different results? More details: We both are running Docker in a new MacBook Pro with similar specs, on Catalina. His Docker engine version 20.x.x is slightly newer than mine 19.X.X. Also: He tried all the commands he could think of to clean up things in Docker. We verified that the hashes of the image IDs were the same. Our resource settings were also the same. He tried reinstalling Docker and changing to other versions of python (3.7). We tried simultaneously on multiple occasions during the last three days. The result was always the same: He gets the error and I don't. The error he gets is the following. Error: Installing collected packages: six, pytz, python-dateutil, pymeeus, numpy, pyparsing, pillow, pandas, korean-lunar-calendar, kiwisolver, ephem, Cython, cycler, convertdate, tqdm, setuptools-git, pystan, matplotlib, LunarCalendar, holidays, cmdstanpy, fbprophet Running setup.py install for fbprophet ... error ERROR: Command errored out with exit status 1: command: /usr/local/bin/python -u -c βimport sys, setuptools, tokenize; sys.argv[0] = βββββ/tmp/pip-install-l516b8ts/fbprophet_80d5f400081541a2bf6ee26d2785e363/setup.pyβββββ; __file__=βββββ/tmp/pip-install-l516b8ts/fbprophet_80d5f400081541a2bf6ee26d2785e363/setup.pyβββββ;f=getattr(tokenize, βββββopenβββββ, open)(__file__);code=f.read().replace(βββββ\r\nβββββ, βββββ\nβββββ);f.close();exec(compile(code, __file__, βββββexecβββββ))' install --record /tmp/pip-record-7n8tvfkb/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python3.8/fbprophet cwd: /tmp/pip-install-l516b8ts/fbprophet_80d5f400081541a2bf6ee26d2785e363/ Complete output (10 lines): running install running build running build_py creating build creating build/lib creating build/lib/fbprophet creating build/lib/fbprophet/stan_model Importing plotly failed. Interactive plots will not work. INFO:pystan:COMPILING THE C++ CODE FOR MODEL anon_model_dfdaf2b8ece8a02eb11f050ec701c0ec NOW. error: command βgccβ failed with exit status 1 ---------------------------------------- ERROR: Command errored out with exit status 1: /usr/local/bin/python -u -c βimport sys, setuptools, tokenize; sys.argv[0] = βββββ/tmp/pip-install-l516b8ts/fbprophet_80d5f400081541a2bf6ee26d2785e363/setup.pyβββββ; __file__=βββββ/tmp/pip-install-l516b8ts/fbprophet_80d5f400081541a2bf6ee26d2785e363/setup.pyβββββ;f=getattr(tokenize, βββββopenβββββ, open)(__file__);code=f.read().replace(βββββ\r\nβββββ, βββββ\nβββββ);f.close();exec(compile(code, __file__, βββββexecβββββ))' install --record /tmp/pip-record-7n8tvfkb/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python3.8/fbprophet Check the logs for full command output. Note that running the two commands I provided always produce errors, but they are not critical. Upgrading setuptools and installing the dependencies before fbprophet solves those minor errors. 
The error shown above is different, related to gcc, and only happens to some people. Optional additional questions: How do we fix it? How do we prevent non-reproducible results like this one? Can upgrading the docker engine version break a container? | How do we fix it? Your error reports a GCC / compilation problem. A quick search shows mostly problems related to python / gcc version (one, two, three). But you are right, this doesn't look like as it could happen inside a one particular container. What it does look like is some kind of OOM problem. Also, is this a VM? Stan requires a significant amount of memory to compile the models, and this error can occur if you run out of RAM while it is compiling. I did a bit of testing. On my machine the compilation process consumed up to 2.4 Gb of RAM. cat /etc/redhat-release CentOS Linux release 7.9.2009 (Core) uname -r 3.10.0-1160.6.1.el7.x86_64 docker --version Docker version 20.10.1, build 831ebea # works fine docker run --rm -it -m 3G python:3.8.6 /bin/bash # fails with error: command 'gcc' failed with exit status 1 # actually it was killed by OOM killer docker run --rm -it -m 2G python:3.8.6 /bin/bash # yes, here he is tail -f /var/log/messages | grep -i 'killed process' Dec 22 08:34:09 cent7-1 kernel: Killed process 5631 (cc1plus), UID 0, total-vm:2073600kB, anon-rss:1962404kB, file-rss:15332kB, shmem-rss:0kB Dec 22 08:35:56 cent7-1 kernel: Killed process 5640 (cc1plus), UID 0, total-vm:2056816kB, anon-rss:1947392kB, file-rss:15308kB, shmem-rss:0kB Check OOM killer log on problematic machine. Is there enough RAM available for Docker? Can upgrading the docker engine version break a container? Generally, it shouldn't be the case. But for v20.10.0 Docker introduced a very big set of changes related to memory and cgroups. After you rule out all obvious reasons (like your friend's machine just not having enough RAM), you might need to dig into your docker daemon settings related to memory / cgroups / etc. How can the same container produce different results on two computers? Well, technically it's quite possible. Containerized programs still use host OS kernel. Not all kernel settings are "namespaced", i. e. can be set exclusively for one particular container. A lot of them (actually, most) are still global and can affect your program's behavior. Though I don't think it's related to your problem. But for complicated programs relying on specific kernel setting that must be taken into account. | 7 | 13 |
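As a small diagnostic aid for the memory theory, this sketch can be run inside the container to print the limit the OOM killer will enforce (the paths cover cgroup v1 and v2; which one exists depends on the host):

```python
from pathlib import Path

# cgroup v1 and v2 expose the memory limit under different paths.
for p in ("/sys/fs/cgroup/memory/memory.limit_in_bytes",   # cgroup v1
          "/sys/fs/cgroup/memory.max"):                    # cgroup v2
    f = Path(p)
    if f.exists():
        print(p, "->", f.read_text().strip())
        break
else:
    print("no cgroup memory limit file found")
```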
65,410,758 | 2020-12-22 | https://stackoverflow.com/questions/65410758/problem-formatting-python-when-using-prettier-in-vscode | In vscode I want to use Prettier as my default formatter, but not for Python, where I will just use autopep8. I have the following settings now: { "workbench.iconTheme": "vscode-icons", "workbench.editorAssociations": [ { "viewType": "jupyter.notebook.ipynb", "filenamePattern": "*.ipynb" } ], "git.confirmSync": false, "editor.formatOnPaste": true, "editor.formatOnSave": true, "editor.defaultFormatter": "esbenp.prettier-vscode", "python.formatting.provider": "autopep8", "explorer.confirmDelete": false, "python.showStartPage": false, "explorer.confirmDragAndDrop": false } When I save a python file, it gives me the message: "Extension 'Pretier - code formatter cannot format etc...'. So, apparently it still uses the wrong formatter for python files. How do I change this?! | If I disabled Prettier as the default formatter, it would not format on save anymore, but my Python would be formatted by autopep8 on save. With this in mind, the following solution worked for me to have both Prettier working for other languages and autopep8 for Python: { "workbench.iconTheme": "vscode-icons", "workbench.editorAssociations": [ { "viewType": "jupyter.notebook.ipynb", "filenamePattern": "*.ipynb" } ], "git.confirmSync": false, "editor.formatOnPaste": true, "editor.formatOnSave": true, "python.formatting.provider": "autopep8", "explorer.confirmDelete": false, "python.showStartPage": false, "explorer.confirmDragAndDrop": false, "python.linting.pylintArgs": ["--load-plugins=pylint_django"], "javascript.updateImportsOnFileMove.enabled": "always", "editor.defaultFormatter": "esbenp.prettier-vscode", "[python]": { "editor.defaultFormatter": "ms-python.python" } } Let me know if somebody finds a better solution! | 19 | 40 |
65,383,964 | 2020-12-20 | https://stackoverflow.com/questions/65383964/typeerror-could-not-build-a-typespec-with-type-kerastensor | I am a newbie to deep learning so while I am trying to build a Masked R-CNN model for training my Custom Dataset I am getting an error which reads: TypeError: Could not build a TypeSpec for <KerasTensor: shape=(None, None, 4) dtype=float32 (created by layer 'tf.math.truediv')> with type KerasTensor Below is the PYTHON CODE I am trying to implement for building my ** Masked R-CNN model**: Mask R-CNN Configurations and data loading code for MS COCO. Copyright (c) 2017 Matterport, Inc. Licensed under the MIT License (see LICENSE for details) Written by Waleed Abdulla ------------------------------------------------------------ Usage: import the module (see Jupyter notebooks for examples), or run from the command line as such: # Train a new model starting from pre-trained COCO weights python3 coco.py train --dataset=/path/to/coco/ --model=coco # Train a new model starting from ImageNet weights. Also auto download COCO dataset python3 coco.py train --dataset=/path/to/coco/ --model=imagenet --download=True # Continue training a model that you had trained earlier python3 coco.py train --dataset=/path/to/coco/ --model=/path/to/weights.h5 # Continue training the last model you trained python3 coco.py train --dataset=/path/to/coco/ --model=last # Run COCO evaluatoin on the last model you trained python3 coco.py evaluate --dataset=/path/to/coco/ --model=last """ import os import sys import time import numpy as np import imgaug # https://github.com/aleju/imgaug (pip3 install imgaug) # Download and install the Python COCO tools from https://github.com/waleedka/coco # That's a fork from the original https://github.com/pdollar/coco with a bug # fix for Python 3. # I submitted a pull request https://github.com/cocodataset/cocoapi/pull/50 # If the PR is merged then use the original repo. # Note: Edit PythonAPI/Makefile and replace "python" with "python3". from pycocotools.coco import COCO from pycocotools.cocoeval import COCOeval from pycocotools import mask as maskUtils import zipfile import urllib.request import shutil # Root directory of the project ROOT_DIR = os.path.abspath("../../") # Import Mask RCNN sys.path.append(ROOT_DIR) # To find local version of the library from mrcnn.config import Config from mrcnn import model as modellib, utils # Path to trained weights file COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5") # Directory to save logs and model checkpoints, if not provided # through the command line argument --logs DEFAULT_LOGS_DIR = os.path.join(ROOT_DIR, "Mask_RCNN\\logs") DEFAULT_DATASET_YEAR = "2014" ############################################################ # Configurations ############################################################ class CocoConfig(Config): """Configuration for training on MS COCO. Derives from the base Config class and overrides values specific to the COCO dataset. """ # Give the configuration a recognizable name NAME = "coco" # We use a GPU with 12GB memory, which can fit two images. # Adjust down if you use a smaller GPU. 
IMAGES_PER_GPU = 2 # Uncomment to train on 8 GPUs (default is 1) # GPU_COUNT = 8 # Number of classes (including background) NUM_CLASSES = 1 + 80 # COCO has 80 classes ############################################################ # Dataset ############################################################ class CocoDataset(utils.Dataset): def load_coco(self, dataset_dir, subset, year=DEFAULT_DATASET_YEAR, class_ids=None, class_map=None, return_coco=False, auto_download=False): """Load a subset of the COCO dataset. dataset_dir: The root directory of the COCO dataset. subset: What to load (train, val, minival, valminusminival) year: What dataset year to load (2014, 2017) as a string, not an integer class_ids: If provided, only loads images that have the given classes. class_map: TODO: Not implemented yet. Supports maping classes from different datasets to the same class ID. return_coco: If True, returns the COCO object. auto_download: Automatically download and unzip MS-COCO images and annotations """ if auto_download is True: self.auto_download(dataset_dir, subset, year) coco = COCO("{}/annotations/instances_{}{}.json".format(dataset_dir, subset, year)) if subset == "minival" or subset == "valminusminival": subset = "val" image_dir = "{}/{}{}".format(dataset_dir, subset, year) # Load all classes or a subset? if not class_ids: # All classes class_ids = sorted(coco.getCatIds()) # All images or a subset? if class_ids: image_ids = [] for id in class_ids: image_ids.extend(list(coco.getImgIds(catIds=[id]))) # Remove duplicates image_ids = list(set(image_ids)) else: # All images image_ids = list(coco.imgs.keys()) # Add classes for i in class_ids: self.add_class("coco", i, coco.loadCats(i)[0]["name"]) # Add images for i in image_ids: self.add_image( "coco", image_id=i, path=os.path.join(image_dir, coco.imgs[i]['file_name']), width=coco.imgs[i]["width"], height=coco.imgs[i]["height"], annotations=coco.loadAnns(coco.getAnnIds( imgIds=[i], catIds=class_ids, iscrowd=None))) if return_coco: return coco def auto_download(self, dataDir, dataType, dataYear): """Download the COCO dataset/annotations if requested. dataDir: The root directory of the COCO dataset. dataType: What to load (train, val, minival, valminusminival) dataYear: What dataset year to load (2014, 2017) as a string, not an integer Note: For 2014, use "train", "val", "minival", or "valminusminival" For 2017, only "train" and "val" annotations are available """ # Setup paths and file names if dataType == "minival" or dataType == "valminusminival": imgDir = "{}/{}{}".format(dataDir, "val", dataYear) imgZipFile = "{}/{}{}.zip".format(dataDir, "val", dataYear) imgURL = "http://images.cocodataset.org/zips/{}{}.zip".format("val", dataYear) else: imgDir = "{}/{}{}".format(dataDir, dataType, dataYear) imgZipFile = "{}/{}{}.zip".format(dataDir, dataType, dataYear) imgURL = "http://images.cocodataset.org/zips/{}{}.zip".format(dataType, dataYear) # print("Image paths:"); print(imgDir); print(imgZipFile); print(imgURL) # Create main folder if it doesn't exist yet if not os.path.exists(dataDir): os.makedirs(dataDir) # Download images if not available locally if not os.path.exists(imgDir): os.makedirs(imgDir) print("Downloading images to " + imgZipFile + " ...") with urllib.request.urlopen(imgURL) as resp, open(imgZipFile, 'wb') as out: shutil.copyfileobj(resp, out) print("... done downloading.") print("Unzipping " + imgZipFile) with zipfile.ZipFile(imgZipFile, "r") as zip_ref: zip_ref.extractall(dataDir) print("... 
done unzipping") print("Will use images in " + imgDir) # Setup annotations data paths annDir = "{}/annotations".format(dataDir) if dataType == "minival": annZipFile = "{}/instances_minival2014.json.zip".format(dataDir) annFile = "{}/instances_minival2014.json".format(annDir) annURL = "https://dl.dropboxusercontent.com/s/o43o90bna78omob/instances_minival2014.json.zip?dl=0" unZipDir = annDir elif dataType == "valminusminival": annZipFile = "{}/instances_valminusminival2014.json.zip".format(dataDir) annFile = "{}/instances_valminusminival2014.json".format(annDir) annURL = "https://dl.dropboxusercontent.com/s/s3tw5zcg7395368/instances_valminusminival2014.json.zip?dl=0" unZipDir = annDir else: annZipFile = "{}/annotations_trainval{}.zip".format(dataDir, dataYear) annFile = "{}/instances_{}{}.json".format(annDir, dataType, dataYear) annURL = "http://images.cocodataset.org/annotations/annotations_trainval{}.zip".format(dataYear) unZipDir = dataDir # print("Annotations paths:"); print(annDir); print(annFile); print(annZipFile); print(annURL) # Download annotations if not available locally if not os.path.exists(annDir): os.makedirs(annDir) if not os.path.exists(annFile): if not os.path.exists(annZipFile): print("Downloading zipped annotations to " + annZipFile + " ...") with urllib.request.urlopen(annURL) as resp, open(annZipFile, 'wb') as out: shutil.copyfileobj(resp, out) print("... done downloading.") print("Unzipping " + annZipFile) with zipfile.ZipFile(annZipFile, "r") as zip_ref: zip_ref.extractall(unZipDir) print("... done unzipping") print("Will use annotations in " + annFile) def load_mask(self, image_id): """Load instance masks for the given image. Different datasets use different ways to store masks. This function converts the different mask format to one format in the form of a bitmap [height, width, instances]. Returns: masks: A bool array of shape [height, width, instance count] with one mask per instance. class_ids: a 1D array of class IDs of the instance masks. """ # If not a COCO image, delegate to parent class. image_info = self.image_info[image_id] if image_info["source"] != "coco": return super(CocoDataset, self).load_mask(image_id) instance_masks = [] class_ids = [] annotations = self.image_info[image_id]["annotations"] # Build mask of shape [height, width, instance_count] and list # of class IDs that correspond to each channel of the mask. for annotation in annotations: class_id = self.map_source_class_id( "coco.{}".format(annotation['category_id'])) if class_id: m = self.annToMask(annotation, image_info["height"], image_info["width"]) # Some objects are so small that they're less than 1 pixel area # and end up rounded out. Skip those objects. if m.max() < 1: continue # Is it a crowd? If so, use a negative class ID. if annotation['iscrowd']: # Use negative class ID for crowds class_id *= -1 # For crowd masks, annToMask() sometimes returns a mask # smaller than the given dimensions. If so, resize it. 
if m.shape[0] != image_info["height"] or m.shape[1] != image_info["width"]: m = np.ones([image_info["height"], image_info["width"]], dtype=bool) instance_masks.append(m) class_ids.append(class_id) # Pack instance masks into an array if class_ids: mask = np.stack(instance_masks, axis=2).astype(np.bool) class_ids = np.array(class_ids, dtype=np.int32) return mask, class_ids else: # Call super class to return an empty mask return super(CocoDataset, self).load_mask(image_id) def image_reference(self, image_id): """Return a link to the image in the COCO Website.""" info = self.image_info[image_id] if info["source"] == "coco": return "http://cocodataset.org/#explore?id={}".format(info["id"]) else: super(CocoDataset, self).image_reference(image_id) # The following two functions are from pycocotools with a few changes. def annToRLE(self, ann, height, width): """ Convert annotation which can be polygons, uncompressed RLE to RLE. :return: binary mask (numpy 2D array) """ segm = ann['segmentation'] if isinstance(segm, list): # polygon -- a single object might consist of multiple parts # we merge all parts into one mask rle code rles = maskUtils.frPyObjects(segm, height, width) rle = maskUtils.merge(rles) elif isinstance(segm['counts'], list): # uncompressed RLE rle = maskUtils.frPyObjects(segm, height, width) else: # rle rle = ann['segmentation'] return rle def annToMask(self, ann, height, width): """ Convert annotation which can be polygons, uncompressed RLE, or RLE to binary mask. :return: binary mask (numpy 2D array) """ rle = self.annToRLE(ann, height, width) m = maskUtils.decode(rle) return m ############################################################ # COCO Evaluation ############################################################ def build_coco_results(dataset, image_ids, rois, class_ids, scores, masks): """Arrange resutls to match COCO specs in http://cocodataset.org/#format """ # If no results, return an empty list if rois is None: return [] results = [] for image_id in image_ids: # Loop through detections for i in range(rois.shape[0]): class_id = class_ids[i] score = scores[i] bbox = np.around(rois[i], 1) mask = masks[:, :, i] result = { "image_id": image_id, "category_id": dataset.get_source_class_id(class_id, "coco"), "bbox": [bbox[1], bbox[0], bbox[3] - bbox[1], bbox[2] - bbox[0]], "score": score, "segmentation": maskUtils.encode(np.asfortranarray(mask)) } results.append(result) return results def evaluate_coco(model, dataset, coco, eval_type="bbox", limit=0, image_ids=None): """Runs official COCO evaluation. dataset: A Dataset object with valiadtion data eval_type: "bbox" or "segm" for bounding box or segmentation evaluation limit: if not 0, it's the number of images to use for evaluation """ # Pick COCO images from the dataset image_ids = image_ids or dataset.image_ids # Limit to a subset if limit: image_ids = image_ids[:limit] # Get corresponding COCO image IDs. coco_image_ids = [dataset.image_info[id]["id"] for id in image_ids] t_prediction = 0 t_start = time.time() results = [] for i, image_id in enumerate(image_ids): # Load image image = dataset.load_image(image_id) # Run detection t = time.time() r = model.detect([image], verbose=0)[0] t_prediction += (time.time() - t) # Convert results to COCO format # Cast masks to uint8 because COCO tools errors out on bool image_results = build_coco_results(dataset, coco_image_ids[i:i + 1], r["rois"], r["class_ids"], r["scores"], r["masks"].astype(np.uint8)) results.extend(image_results) # Load results. 
This modifies results with additional attributes. coco_results = coco.loadRes(results) # Evaluate cocoEval = COCOeval(coco, coco_results, eval_type) cocoEval.params.imgIds = coco_image_ids cocoEval.evaluate() cocoEval.accumulate() cocoEval.summarize() print("Prediction time: {}. Average {}/image".format( t_prediction, t_prediction / len(image_ids))) print("Total time: ", time.time() - t_start) ############################################################ # Training ############################################################ if __name__ == '__main__': import argparse # Parse command line arguments parser = argparse.ArgumentParser( description='Train Mask R-CNN on MS COCO.') parser.add_argument("command", metavar="<command>", help="'train' or 'evaluate' on MS COCO") parser.add_argument('--dataset', required=True, metavar="C:\\Users\\HP\\TEST\\Train", help='Directory of the MS-COCO dataset') parser.add_argument('--year', required=False, default=DEFAULT_DATASET_YEAR, metavar="<year>", help='Year of the MS-COCO dataset (2014 or 2017) (default=2014)') parser.add_argument('--model', required=False, #metavar="C:\\Users\\HP\\mask_rcnn_coco.h5" metavar="C:\\Users\\HP\\Mask_RCNN\\samples\\coco\\coco.py", help="Path to weights .h5 file or 'coco'") parser.add_argument('--logs', required=False, default=DEFAULT_LOGS_DIR, metavar="/path/to/logs/", help='Logs and checkpoints directory (default=logs/)') parser.add_argument('--limit', required=False, default=500, metavar="<image count>", help='Images to use for evaluation (default=500)') parser.add_argument('--download', required=False, default=False, metavar="<True|False>", help='Automatically download and unzip MS-COCO files (default=False)', type=bool) args = parser.parse_args() print("Command: ", args.command) print("Model: ", args.model) print("Dataset: ", args.dataset) print("Year: ", args.year) print("Logs: ", args.logs) print("Auto Download: ", args.download) # Configurations if args.command == "train": config = CocoConfig() else: class InferenceConfig(CocoConfig): # Set batch size to 1 since we'll be running inference on # one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU GPU_COUNT = 1 IMAGES_PER_GPU = 1 DETECTION_MIN_CONFIDENCE = 0 config = InferenceConfig() config.display() # Create model if args.command == "train": model = modellib.MaskRCNN(mode="training", config=config, model_dir=args.logs) else: model = modellib.MaskRCNN(mode="inference", config=config, model_dir=args.logs) # Select weights file to load if args.model.lower() == "coco": model_path = COCO_MODEL_PATH elif args.model.lower() == "last": # Find last trained weights model_path = model.find_last() elif args.model.lower() == "imagenet": # Start from ImageNet trained weights model_path = model.get_imagenet_weights() else: model_path = args.model # Load weights print("Loading weights ", model_path) model.load_weights(model_path, by_name=True) # Train or evaluate if args.command == "train": # Training dataset. Use the training set and 35K from the # validation set, as as in the Mask RCNN paper. 
dataset_train = CocoDataset() dataset_train.load_coco(args.dataset, "train", year=args.year, auto_download=args.download) if args.year in '2014': dataset_train.load_coco(args.dataset, "valminusminival", year=args.year, auto_download=args.download) dataset_train.prepare() # Validation dataset dataset_val = CocoDataset() val_type = "val" if args.year in '2017' else "minival" dataset_val.load_coco(args.dataset, val_type, year=args.year, auto_download=args.download) dataset_val.prepare() # Image Augmentation # Right/Left flip 50% of the time augmentation = imgaug.augmenters.Fliplr(0.5) # *** This training schedule is an example. Update to your needs *** # Training - Stage 1 print("Training network heads") model.train(dataset_train, dataset_val, learning_rate=config.LEARNING_RATE, epochs=40, layers='heads', augmentation=augmentation) # Training - Stage 2 # Finetune layers from ResNet stage 4 and up print("Fine tune Resnet stage 4 and up") model.train(dataset_train, dataset_val, learning_rate=config.LEARNING_RATE, epochs=120, layers='4+', augmentation=augmentation) # Training - Stage 3 # Fine tune all layers print("Fine tune all layers") model.train(dataset_train, dataset_val, learning_rate=config.LEARNING_RATE / 10, epochs=160, layers='all', augmentation=augmentation) elif args.command == "evaluate": # Validation dataset dataset_val = CocoDataset() val_type = "val" if args.year in '2017' else "minival" coco = dataset_val.load_coco(args.dataset, val_type, year=args.year, return_coco=True,auto_download=args.download) dataset_val.prepare() print("Running COCO evaluation on {} images.".format(args.limit)) evaluate_coco(model, dataset_val, coco, "bbox", limit=int(args.limit)) else: print("'{}' is not recognized. " "Use 'train' or 'evaluate'".format(args.command)) Now after I saved this code as a .py file and executed the following command on my terminal: (base) C:\Users\HP>python C:\Users\HP\Mask_RCNN\samples\coco\coco.py train --dataset=C:\Users\HP\Test\Train --model=coco I got the following: 2020-12-21 00:41:06.252236: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found 2020-12-21 00:41:06.260248: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. (base) C:\Users\HP>python C:\Users\HP\Desktop\try.py train --dataset=C:\Users\HP\Test\Train --model=C:\Users\HP\mask_rcnn_coco.h5 2020-12-21 00:42:34.586446: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found 2020-12-21 00:42:34.594568: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. (base) C:\Users\HP>python C:\Users\HP\Mask_RCNN\samples\coco\coco.py train --dataset=C:\Users\HP\Test\Train --model=coco 2020-12-21 00:44:41.479421: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found 2020-12-21 00:44:41.490317: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. 
Command: train Model: coco Dataset: C:\Users\HP\Test\Train Year: 2014 Logs: C:\Mask_RCNN\logs Auto Download: False Configurations: BACKBONE resnet101 BACKBONE_STRIDES [4, 8, 16, 32, 64] BATCH_SIZE 2 BBOX_STD_DEV [0.1 0.1 0.2 0.2] COMPUTE_BACKBONE_SHAPE None DETECTION_MAX_INSTANCES 100 DETECTION_MIN_CONFIDENCE 0.7 DETECTION_NMS_THRESHOLD 0.3 FPN_CLASSIF_FC_LAYERS_SIZE 1024 GPU_COUNT 1 GRADIENT_CLIP_NORM 5.0 IMAGES_PER_GPU 2 IMAGE_MAX_DIM 1024 IMAGE_META_SIZE 93 IMAGE_MIN_DIM 800 IMAGE_MIN_SCALE 0 IMAGE_RESIZE_MODE square IMAGE_SHAPE [1024 1024 3] LEARNING_MOMENTUM 0.9 LEARNING_RATE 0.001 LOSS_WEIGHTS {'rpn_class_loss': 1.0, 'rpn_bbox_loss': 1.0, 'mrcnn_class_loss': 1.0, 'mrcnn_bbox_loss': 1.0, 'mrcnn_mask_loss': 1.0} MASK_POOL_SIZE 14 MASK_SHAPE [28, 28] MAX_GT_INSTANCES 100 MEAN_PIXEL [123.7 116.8 103.9] MINI_MASK_SHAPE (56, 56) NAME coco NUM_CLASSES 81 POOL_SIZE 7 POST_NMS_ROIS_INFERENCE 1000 POST_NMS_ROIS_TRAINING 2000 ROI_POSITIVE_RATIO 0.33 RPN_ANCHOR_RATIOS [0.5, 1, 2] RPN_ANCHOR_SCALES (32, 64, 128, 256, 512) RPN_ANCHOR_STRIDE 1 RPN_BBOX_STD_DEV [0.1 0.1 0.2 0.2] RPN_NMS_THRESHOLD 0.7 RPN_TRAIN_ANCHORS_PER_IMAGE 256 STEPS_PER_EPOCH 1000 TOP_DOWN_PYRAMID_SIZE 256 TRAIN_BN False TRAIN_ROIS_PER_IMAGE 200 USE_MINI_MASK True USE_RPN_ROIS True VALIDATION_STEPS 50 WEIGHT_DECAY 0.0001 Traceback (most recent call last): File "C:\Users\HP\Mask_RCNN\samples\coco\coco.py", line 456, in <module> model_dir=args.logs) File "C:\Users\HP\anaconda3\lib\site-packages\mrcnn\model.py", line 1832, in __init__ self.keras_model = self.build(mode=mode, config=config) File "C:\Users\HP\anaconda3\lib\site-packages\mrcnn\model.py", line 1871, in build x, K.shape(input_image)[1:3]))(input_gt_boxes) File "C:\Users\HP\anaconda3\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 952, in __call__ input_list) File "C:\Users\HP\anaconda3\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 1091, in _functional_construction_call inputs, input_masks, args, kwargs) File "C:\Users\HP\anaconda3\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 822, in _keras_tensor_symbolic_call return self._infer_output_signature(inputs, args, kwargs, input_masks) File "C:\Users\HP\anaconda3\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 869, in _infer_output_signature keras_tensor.keras_tensor_from_tensor, outputs) File "C:\Users\HP\anaconda3\lib\site-packages\tensorflow\python\util\nest.py", line 659, in map_structure structure[0], [func(*x) for x in entries], File "C:\Users\HP\anaconda3\lib\site-packages\tensorflow\python\util\nest.py", line 659, in <listcomp> structure[0], [func(*x) for x in entries], File "C:\Users\HP\anaconda3\lib\site-packages\tensorflow\python\keras\engine\keras_tensor.py", line 606, in keras_tensor_from_tensor out = keras_tensor_cls.from_tensor(tensor) File "C:\Users\HP\anaconda3\lib\site-packages\tensorflow\python\keras\engine\keras_tensor.py", line 205, in from_tensor type_spec = type_spec_module.type_spec_from_value(tensor) File "C:\Users\HP\anaconda3\lib\site-packages\tensorflow\python\framework\type_spec.py", line 554, in type_spec_from_value (value, type(value).__name__)) TypeError: Could not build a TypeSpec for <KerasTensor: shape=(None, None, 4) dtype=float32 (created by layer 'tf.math.truediv')> with type KerasTensor | You should using Tensorflow 1.x. Change TF version on colab using %tensorflow_version 1.x After that I think you will get other problem with keras version, add command to install keras 2.1.5. 
!pip install keras==2.1.5 | 14 | 2 |
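Before launching a long training run, it can save time to confirm that the interpreter really has the downgraded stack the answer describes. A minimal sanity check, assuming the TF 1.x / Keras 2.1.5 combination suggested above:
import tensorflow as tf
import keras

print("TensorFlow:", tf.__version__)  # expected to start with "1." for this Mask R-CNN code
print("Keras:", keras.__version__)    # expected "2.1.5" per the answer

assert tf.__version__.startswith("1."), "this Mask R-CNN build expects TensorFlow 1.x"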
65,408,027 | 2020-12-22 | https://stackoverflow.com/questions/65408027/how-to-correctly-use-cross-entropy-loss-vs-softmax-for-classification | I want to train a multi class classifier using Pytorch. Following the official Pytorch doc shows how to use a nn.CrossEntropyLoss() after a last layer of type nn.Linear(84, 10). However, I remember this is what Softmax does. This leaves me confused. How to train a "standard" classification network in the best way? If the network has a final linear layer, how to infer the probabilities per class? If the network has a final softmax layer, how to train the network (which loss, and how)? I found this thread on the Pytorch forum, which likely answers all that, but I couldn't compile it into working and readable Pytorch code. My assumed answers: Like the doc says. Exponentiation of the outputs of the linear layer, which are really logits (log probalbilities). I don't understand. | I think that it's important to understand softmax and cross-entropy, at least from a practical point of view. Once you have a grasp on these two concepts then it should be clear how they may be "correctly" used in the context of ML. Cross Entropy H(p, q) Cross-entropy is a function that compares two probability distributions. From a practical standpoint it's probably not worth getting into the formal motivation of cross-entropy, though if you're interested I would recommend Elements of Information Theory by Cover and Thomas as an introductory text. This concept is introduced pretty early on (chapter 2 I believe). This is the intro text I used in grad school and I thought it did a very good job (granted I had a wonderful instructor as well). The key thing to pay attention to is that cross-entropy is a function that takes, as input, two probability distributions: q and p and returns a value that is minimal when q and p are equal. q represents an estimated distribution, and p represents a true distribution. In the context of ML classification we know the actual label of the training data, so the true/target distribution, p, has a probability of 1 for the true label and 0 elsewhere, i.e. p is a one-hot vector. On the other hand, the estimated distribution (output of a model), q, generally contains some uncertainty, so the probability of any class in q will be between 0 and 1. By training a system to minimize cross entropy we are telling the system that we want it to try and make the estimated distribution as close as it can to the true distribution. Therefore, the class that your model thinks is most likely is the class corresponding to the highest value of q. Softmax Again, there are some complicated statistical ways to interpret softmax that we won't discuss here. The key thing from a practical standpoint is that softmax is a function that takes a list of unbounded values as input, and outputs a valid probability mass function with the relative ordering maintained. It's important to stress the second point about relative ordering. This implies that the maximum element in the input to softmax corresponds to the maximum element in the output of softmax. Consider a softmax activated model trained to minimize cross-entropy. In this case, prior to softmax, the model's goal is to produce the highest value possible for the correct label and the lowest value possible for the incorrect label. CrossEntropyLoss in PyTorch The definition of CrossEntropyLoss in PyTorch is a combination of softmax and cross-entropy. 
Specifically CrossEntropyLoss(x, y) := H(one_hot(y), softmax(x)) Note that one_hot is a function that takes an index y, and expands it into a one-hot vector. Equivalently you can formulate CrossEntropyLoss as a combination of LogSoftmax and negative log-likelihood loss (i.e. NLLLoss in PyTorch) LogSoftmax(x) := ln(softmax(x)) CrossEntropyLoss(x, y) := NLLLoss(LogSoftmax(x), y) Due to the exponentiation in softmax, there are some computational "tricks" that make directly using CrossEntropyLoss more stable (more accurate, less likely to get NaN), than computing it in stages. Conclusion Based on the above discussion the answers to your questions are 1. How to train a "standard" classification network in the best way? Like the doc says. 2. If the network has a final linear layer, how to infer the probabilities per class? Apply softmax to the output of the network to infer the probabilities per class. If the goal is to just find the relative ordering or highest probability class then just apply argsort or argmax to the output directly (since softmax maintains relative ordering). 3. If the network has a final softmax layer, how to train the network (which loss, and how)? Generally, you don't want to train a network that outputs softmaxed outputs for stability reasons mentioned above. That said, if you absolutely needed to for some reason, you would take the log of the outputs and provide them to NLLLoss criterion = nn.NLLLoss() ... x = model(data) # assuming the output of the model is softmax activated loss = criterion(torch.log(x), y) which is mathematically equivalent to using CrossEntropyLoss with a model that does not use softmax activation. criterion = nn.CrossEntropyLoss() ... x = model(data) # assuming the output of the model is NOT softmax activated loss = criterion(x, y) | 8 | 26 |
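The equivalence described above is easy to verify numerically. A small sketch, assuming PyTorch is installed; the tensors are random stand-ins for model outputs and labels:
import torch
import torch.nn as nn

torch.manual_seed(0)
logits = torch.randn(4, 10)           # raw, un-activated outputs of a final linear layer
targets = torch.tensor([1, 0, 3, 9])  # ground-truth class indices

ce = nn.CrossEntropyLoss()(logits, targets)
nll = nn.NLLLoss()(torch.log_softmax(logits, dim=1), targets)

print(ce.item(), nll.item())  # identical up to floating-point error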
65,369,567 | 2020-12-19 | https://stackoverflow.com/questions/65369567/import-rest-framework-could-not-be-resolved-but-i-have-installed-djangorestfr | Here's my settings.py: INSTALLED_APPS = [ 'rest_framework', 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'api.apps.ApiConfig' ] | If you are using VSCode, Ctrl + Shift + P -> Type and select 'Python: Select Interpreter' and enter into your projects virtual environment. This is what worked for me. | 29 | 102 |
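A quick way to check, from the interpreter VS Code is actually using, whether the package is visible to it (run this in the editor's Python terminal; the project layout is assumed to be as in the question):
import sys

print(sys.executable)  # should point inside the project's virtual environment

try:
    import rest_framework
    print("djangorestframework found at", rest_framework.__file__)
except ImportError:
    print("rest_framework is not installed for this interpreter; "
          "install it with this interpreter's pip or pick another interpreter")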
65,398,794 | 2020-12-21 | https://stackoverflow.com/questions/65398794/what-does-this-mean-warningrootpyarrow-ignore-timezone-environment-variabl | I am working in Python on a Jupyter Notebook, and I got this warning: WARNING:root:'PYARROW_IGNORE_TIMEZONE' environment variable was not set. I tried to remove it, but I couldn't. I tried to set PYARROW_IGNORE_TIMEZONE to 1, as I saw on some forums but it didn't work. Here is my code : PYARROW_IGNORE_TIMEZONE=1 import databricks.koalas as ks import pyspark from pyspark.sql import SparkSession, functions from pyspark.sql.types import * import datetime What's wrong with it ? I am using spark and koalas. | If you want to set environment variable, you should use os. Otherwise you're just setting the variable in Python, but it doesn't get exported to the environment. import os os.environ["PYARROW_IGNORE_TIMEZONE"] = "1" | 11 | 20 |
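Note that the assignment only helps if it happens before the library that reads the variable is imported. A minimal reordering of the question's imports:
import os
os.environ["PYARROW_IGNORE_TIMEZONE"] = "1"  # must be set before the import below

import databricks.koalas as ks  # the warning should no longer be emitted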
65,398,299 | 2020-12-21 | https://stackoverflow.com/questions/65398299/proper-inputs-for-scikit-learn-roc-auc-score-and-roc-plot | I am trying to determine roc_auc_score for a fit model on a validation set. I am seeing some conflicting information on function inputs. Documentation says: "y_score array-like of shape (n_samples,) or (n_samples, n_classes) Target scores. In the binary and multilabel cases, these can be either probability estimates or non-thresholded decision values (as returned by decision_function on some classifiers). In the multiclass case, these must be probability estimates which sum to 1. The binary case expects a shape (n_samples,), and the scores must be the scores of the class with the greater label. The multiclass and multilabel cases expect a shape (n_samples, n_classes). In the multiclass case, the order of the class scores must correspond to the order of labels, if provided, or else to the numerical or lexicographical order of the labels in y_true." Not sure exactly what this calls for: 1) predicted probabilities against the actual y values in the test set or 2) class predictions against the actual y values in the test set I've been searching and, in the binary classification case (my interest), some people use predicted probabilities while others use actual predictions (0 or 1). In other words: Fit model: model.fit(X_train, y_train) Use either: y_preds = model.predict(X_test) or: y_probas = model.predict_proba(X_test) I find that: roc_auc_score(y_test, y_preds) and: roc_auc_score(y_test, y_probas[:,1]) # probabilites for the 1 class yield very different results. Which one is correct? I also find that to actually plot the ROC Curve I need to use probabilities. Any guidance appreciated. | model.predict(...) will give you the predicted label for each observation. That is, it will return an array full of ones and zeros. model.predict_proba(...)[:, 1] will give you the probability for each observation being equal to one. That is, it will return an array full of numbers between zero and one, inclusive. A ROC curve is calculated by taking each possible probability, using it as a threshold and calculating the resulting True Positive and False Positive rates. Hence, if you pass model.predict(...) to metrics.roc_auc_score(), you are calculating the AUC for a ROC curve that only used two thresholds (either one or zero). This is incorrect, as these are not the predicted probabilities of your model. To get the AUC of your model, you need to pass the predicted probabilities to roc_auc_score(...): from sklearn.metrics import roc_auc_score roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]) | 7 | 10 |
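Putting both pieces together: the score and the plot both consume the positive-class probabilities. A short sketch, assuming a fitted binary classifier model and a held-out X_test / y_test as in the question:
import matplotlib.pyplot as plt
from sklearn.metrics import roc_auc_score, roc_curve

y_probas = model.predict_proba(X_test)[:, 1]   # probability of the positive class

auc = roc_auc_score(y_test, y_probas)
fpr, tpr, thresholds = roc_curve(y_test, y_probas)

plt.plot(fpr, tpr, label=f"AUC = {auc:.3f}")
plt.plot([0, 1], [0, 1], linestyle="--")       # chance line
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()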
65,396,901 | 2020-12-21 | https://stackoverflow.com/questions/65396901/what-is-the-difference-between-pycrypto-and-crypto-packages-in-python | I am new to encryption and hashing in python. I need this for authentication in one of my flask project. So my friend told me to use crypto package but when i searched it up i got crypto and pycrypto packages in the result. The thing is I know both of them are for encryption utility but I am confused as to which one to use. Is one of them better than the other or is one of them just a wrapper of the other? Or is there another better encryption package for python that I should use instead of the two mentioned above? I hope someone who have used these packages could help me. Thanks. | These two packages serve very different goals: crypto is a command line utility, which is intended to encrypt files, while pycrypto is a Python library which can be used from within Python to perform a number of different cryptographic operations (hashing, encryption/decryption, etc). pycrypto would be the more appropriate choice for implementing authentication within Python. I will note that Python also includes some cryptographic primitives in the standard library, which may be more suitable for your use case. Edit: As has been noted in the comments, pycrypto is no longer maintained, and a library such as cryptography or pycryptodome should be used instead. | 6 | 6 |
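If the goal is credential verification rather than encrypting data, the standard library may already be enough. A minimal sketch of salted password hashing with hashlib and secrets (the use case is an assumption, not part of the original answer):
import hashlib
import secrets

def hash_password(password, salt=None):
    # PBKDF2-HMAC-SHA256 with a random 16-byte salt; store both salt and digest
    salt = salt or secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return secrets.compare_digest(candidate, digest)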
65,396,538 | 2020-12-21 | https://stackoverflow.com/questions/65396538/python-requests-jsondecodeerror | I have this code: import requests r = requests.get('https://www.instagram.com/p/CJDxE7Yp5Oj/?__a=1') data = r.json()['graphql']['shortcode_media'] Why do I get an error like this? C:\ProgramData\Anaconda3\envs\test\python.exe C:/Users/Solba/PycharmProjects/test/main.py Traceback (most recent call last): File "C:/Users/Solba/PycharmProjects/test/main.py", line 4, in <module> data = r.json() File "C:\ProgramData\Anaconda3\envs\test\lib\site-packages\requests\models.py", line 900, in json return complexjson.loads(self.text, **kwargs) File "C:\ProgramData\Anaconda3\envs\test\lib\json\__init__.py", line 357, in loads return _default_decoder.decode(s) File "C:\ProgramData\Anaconda3\envs\test\lib\json\decoder.py", line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "C:\ProgramData\Anaconda3\envs\test\lib\json\decoder.py", line 355, in raw_decode raise JSONDecodeError("Expecting value", s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) Process finished with exit code 1 | r.json() expects a JSON string to be returned by the API. The API should explicitly say it is responding with JSON through response headers. In this case, the URL you are requesting is either not responding with a proper JSON or not explicitly saying it is responding with a JSON. You can first check the response sent by the URL by: data = r.text print(data) If the response can be treated as a JSON string, then you can process it with: import json data = json.loads(r.text) Note: You can also check the content-type and Accept headers to ensure the request and response are in the required datatype | 10 | 9 |
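A defensive version of the same request that reports what actually came back instead of crashing:
import requests

r = requests.get('https://www.instagram.com/p/CJDxE7Yp5Oj/?__a=1')
print(r.status_code)
print(r.headers.get("Content-Type"))  # should contain "application/json" for a JSON body

try:
    data = r.json()
except ValueError:  # json.JSONDecodeError is a subclass of ValueError
    print("Response was not JSON; first 200 characters:")
    print(r.text[:200])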
65,385,500 | 2020-12-20 | https://stackoverflow.com/questions/65385500/valueerror-invalid-literal-for-int-with-base-10-30-0-when-running-unittest | I'm trying to run a test that was previously working but has suddenly stopped running but now i seem to get an error on all my tests e.g. from httmock import HTTMock from unittest import TestCase from unittest.mock import patch, call, mock_open, MagicMock, Mock, ANY import os.path import os from src.operators import InjestDictDescriptionOperator from airflow.hooks.base_hook import BaseHook from airflow.hooks.postgres_hook import PostgresHook from airflow.hooks.S3_hook import S3Hook class TestInjestDictDescriptionOperator(TestCase): def setUp(self): # hook patches self.open_file_mock = patch('builtins.open').start() self.os_path_isdir = patch.object(os.path, 'isdir').start() self.os_makedirs = patch.object(os, 'makedirs').start() self.open_file_write_mock = self.open_file_mock.return_value.__enter__.return_value.write # prepare the target self.target = InjestDictDescriptionOperator( task_id='InjestDictDescriptionOperatorTest', sql=None, postgres_conn_id='test', aws_conn_id='s3-conn-1', s3_bucket_name=βdataβ, output_path='output/path/1') def tearDown(self): patch.stopall() def testTmpFolderCreationIfItDoesntExist(self): self.os_path_isdir.return_value = False self.target.execute(None) self.os_makedirs.assert_called_with('/tmp/') def testTmpFolderNotCreatedIfItExists(self): self.os_path_isdir.return_value = True self.target.execute(None) self.os_makedirs.assert_not_called() def testTmpFileCreation(self): self.target.execute(None) self.open_file_mock.assert_called_with( '/tmp/modelling/temp.txt', 'w+', encoding='utf-8') def testTmpFileDataDump(self): self.target.execute(None) self.open_file_write_mock.assert_has_calls( [ call(f"{doc['name']}\n") for doc in self.dummy_data ] , any_order=False) The traceback details of the problem are ests/operators/modelling/language/test_injest_dict_description_operator.py:9: in <module> from src.operators.modelling.language import InjestDictDescriptionOperator src/operators/modelling/language/__init__.py:1: in <module> from .injest_onboarded_commands_operator import Operator as InjestOnboardedCommandsOperator src/operators/modelling/language/injest_onboarded_commands_operator.py:9: in <module> from airflow.models import BaseOperator ../../../../.pyenv/versions/3.6.10/lib/python3.6/site-packages/airflow/__init__.py:50: in <module> from airflow.models import DAG # noqa: E402 ../../../../.pyenv/versions/3.6.10/lib/python3.6/site-packages/airflow/models/__init__.py:21: in <module> from airflow.models.baseoperator import BaseOperator, BaseOperatorLink # noqa: F401 ../../../../.pyenv/versions/3.6.10/lib/python3.6/site-packages/airflow/models/baseoperator.py:43: in <module> from airflow.models.dag import DAG ../../../../.pyenv/versions/3.6.10/lib/python3.6/site-packages/airflow/models/dag.py:52: in <module> from airflow.models.dagbag import DagBag ../../../../.pyenv/versions/3.6.10/lib/python3.6/site-packages/airflow/models/dagbag.py:50: in <module> class DagBag(BaseDagBag, LoggingMixin): ../../../../.pyenv/versions/3.6.10/lib/python3.6/site-packages/airflow/models/dagbag.py:80: in DagBag DAGBAG_IMPORT_TIMEOUT = conf.getint('core', 'DAGBAG_IMPORT_TIMEOUT') ../../../../.pyenv/versions/3.6.10/lib/python3.6/site-packages/airflow/configuration.py:414: in getint return int(self.get(section, key, **kwargs)) E ValueError: invalid literal for int() with base 10: '30.0'collection failure 
tests/operators/modelling/language/test_injest_dict_description_operator.py:9: in <module> from src.operators.modelling.language import InjestDictDescriptionOperator src/operators/modelling/language/__init__.py:1: in <module> from .injest_onboarded_commands_operator import Operator as InjestOnboardedCommandsOperator src/operators/modelling/language/injest_onboarded_commands_operator.py:9: in <module> from airflow.models import BaseOperator ../../../../.pyenv/versions/3.6.10/lib/python3.6/site-packages/airflow/__init__.py:50: in <module> from airflow.models import DAG # noqa: E402 ../../../../.pyenv/versions/3.6.10/lib/python3.6/site-packages/airflow/models/__init__.py:21: in <module> from airflow.models.baseoperator import BaseOperator, BaseOperatorLink # noqa: F401 ../../../../.pyenv/versions/3.6.10/lib/python3.6/site-packages/airflow/models/baseoperator.py:43: in <module> from airflow.models.dag import DAG ../../../../.pyenv/versions/3.6.10/lib/python3.6/site-packages/airflow/models/dag.py:52: in <module> from airflow.models.dagbag import DagBag ../../../../.pyenv/versions/3.6.10/lib/python3.6/site-packages/airflow/models/dagbag.py:50: in <module> class DagBag(BaseDagBag, LoggingMixin): ../../../../.pyenv/versions/3.6.10/lib/python3.6/site-packages/airflow/models/dagbag.py:80: in DagBag DAGBAG_IMPORT_TIMEOUT = conf.getint('core', 'DAGBAG_IMPORT_TIMEOUT') ../../../../.pyenv/versions/3.6.10/lib/python3.6/site-packages/airflow/configuration.py:414: in getint return int(self.get(section, key, **kwargs)) E ValueError: invalid literal for int() with base 10: '30.0' The only thing that changed was apache-airflow. I upgraded to the latest version 2.0 but then realised i would need to refactor parts of my code so downgraded to a later version. | The DAGBAG_IMPORT_TIMEOUT had been upgraded in the config files to float for 2.0 and for 1.10.14 it needed to be float. Deleted airflow completely including cfg files and re-installed the version | 15 | 10 |
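If a full reinstall is not an option, the offending value can also be located and corrected by hand. A minimal check of the existing config (locating airflow.cfg via AIRFLOW_HOME is an assumption):
import configparser
import os

cfg_path = os.path.join(
    os.environ.get("AIRFLOW_HOME", os.path.expanduser("~/airflow")), "airflow.cfg")

parser = configparser.ConfigParser()
parser.read(cfg_path)

value = parser.get("core", "dagbag_import_timeout", fallback=None)
print("dagbag_import_timeout =", value)  # '30.0' breaks 1.10.x's conf.getint(); change it to '30'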
65,388,539 | 2020-12-21 | https://stackoverflow.com/questions/65388539/using-python-i-cant-access-shared-drive-folders-from-google-drive-api-v3 | I can get mydrive folders, but I can't access shared drive folders from Google Drive API. This is my code.(almost same to the Guides' code here) I followed the Guides, finished "Enable the Drive API", execute the pip command on VScode, and put credentials.json to the working directory. (I got no error, only got filename list of mydrive or 'No files found' that is printed by the code.) from __future__ import print_function import pickle import os.path from googleapiclient.discovery import build from google_auth_oauthlib.flow import InstalledAppFlow from google.auth.transport.requests import Request # If modifying these scopes, delete the file token.pickle. SCOPES = ['https://www.googleapis.com/auth/drive.metadata.readonly'] def main(): """Shows basic usage of the Drive v3 API. Prints the names and ids of the first 10 files the user has access to. """ creds = None # The file token.pickle stores the user's access and refresh tokens, and is # created automatically when the authorization flow completes for the first # time. if os.path.exists('token.pickle'): with open('token.pickle', 'rb') as token: creds = pickle.load(token) # If there are no (valid) credentials available, let the user log in. if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( 'credentials.json', SCOPES) creds = flow.run_local_server(port=0) # Save the credentials for the next run with open('token.pickle', 'wb') as token: pickle.dump(creds, token) service = build('drive', 'v3', credentials=creds) # Call the Drive v3 API results = service.files().list( pageSize=10, fields="nextPageToken, files(id, name)").execute() fields="nextPageToken, files(id, name)").execute() items = results.get('files', []) if not items: print('No files found.') else: print('Files:') for item in items: print(u'{0} ({1})'.format(item['name'], item['id'])) if __name__ == '__main__': main() | Notice that the API has the includeItemsFromAllDrives parameter in order to determine whether shared drive items show up or not in the results. The Python API V3 wrapper also has this parameter included on it's list method implementation that needs to be included when calling the list() method: ... service = build('drive', 'v3', credentials=creds) # Call the Drive v3 API results = service.files().list( pageSize=10, includeItemsFromAllDrives=True, supportsAllDrives=True, fields="nextPageToken, files(id, name)").execute() items = results.get('files', []) ... | 6 | 14 |
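If only one particular shared drive matters, the listing can be narrowed further. A sketch extending the call above (the drive ID is a placeholder):
results = service.files().list(
    pageSize=10,
    corpora="drive",                  # restrict the listing to a single shared drive
    driveId="YOUR_SHARED_DRIVE_ID",   # placeholder - the ID of the shared drive
    includeItemsFromAllDrives=True,
    supportsAllDrives=True,
    fields="nextPageToken, files(id, name)").execute()
items = results.get('files', [])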
65,387,500 | 2020-12-21 | https://stackoverflow.com/questions/65387500/insert-a-png-image-in-a-matplotlib-figure | I'm trying to insert a png image in matplotlib figure (ref) import matplotlib.pyplot as plt import numpy as np from matplotlib.figure import Figure from matplotlib.offsetbox import OffsetImage, AnnotationBbox ax = plt.subplot(111) ax.plot( [1, 2, 3], [1, 2, 3], 'go-', label='line 1', linewidth=2 ) arr_img = plt.imread("stinkbug.png") im = OffsetImage(arr_img) ab = AnnotationBbox(im, (1, 0), xycoords='axes fraction') ax.add_artist(ab) plt.show() Inset image: Output obtained: I'd like to know how to resize the image that has to be inserted to avoid overlaps. EDIT: Saving the figure ax.figure.savefig("output.svg", transparent=True, dpi=600, bbox_inches="tight") | You can zoom the image and the set the box alignment to the lower right corner (0,1) plus some extra for the margins: im = OffsetImage(arr_img, zoom=.45) ab = AnnotationBbox(im, (1, 0), xycoords='axes fraction', box_alignment=(1.1,-0.1)) You may also want to use data coordinates, which is the default, and use the default box_alignment to the center, e.g. ab = AnnotationBbox(im, (2.6, 1.45)). See the xycoords parameter doc for more information about various coordinate options. | 8 | 9 |
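An alternative that avoids tuning zoom by hand is to reserve a fixed fraction of the axes for the image with an inset axes. A sketch using the same stinkbug.png as the question:
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1.inset_locator import inset_axes

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [1, 2, 3], 'go-', label='line 1', linewidth=2)

arr_img = plt.imread("stinkbug.png")

# reserve 25% x 25% of the axes in the lower-right corner for the image
axins = inset_axes(ax, width="25%", height="25%", loc="lower right")
axins.imshow(arr_img)
axins.axis("off")

plt.show()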
65,388,213 | 2020-12-21 | https://stackoverflow.com/questions/65388213/why-is-pathlib-path-file-parent-parent-sensitive-to-my-working-directory | I have a script that's two directories down. β― tree . βββ foo βββ bar βββ test.py β― cd foo/bar β― cat test.py from pathlib import Path print(Path(__file__).parent) print(Path(__file__).parent.parent) When I run it from the directory that contains it, PathLib thinks that the file's grandparent is the same as its parent. β― python test.py . # <-- same . # <-- directories But when I run it from the top level, PathLib behaves correctly. β― cd ../.. β― python foo/bar/test.py foo/bar # <-- different foo # <-- directories Am I misunderstanding something about PathLib's API, or is something else causing its output to be sensitive to my working directory? | You need to call Path.resolve() to make your path absolute (a full path including all parent directories and removing all symlinks) from pathlib import Path print(Path(__file__).resolve().parent) print(Path(__file__).resolve().parent.parent) This will cause the results to include the entire path to each directory, but the behaviour will work wherever it is called from | 7 | 10 |
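A small demonstration to drop into test.py; only the resolved variant is independent of where the script is launched from:
from pathlib import Path

here = Path(__file__).resolve()  # absolute path to this file, symlinks removed
print(here.parent)               # .../foo/bar
print(here.parent.parent)        # .../foo
print(Path.cwd())                # the working directory - unrelated to the file's location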
65,383,467 | 2020-12-20 | https://stackoverflow.com/questions/65383467/can-conda-be-configured-to-use-a-private-pypi-repo | I have users that create both conda and pip packages- I have no control over this I use artifactory to host private conda and pip repos, for example this is how a private pip repo works: https://www.jfrog.com/confluence/display/JFROG/PyPI+Repositories Sometimes there is a private pip package a conda environment or package needs. How can I configure conda to get my private pip packages from my private repo? I haven't found documentation on this. I would like for this to be transparent for users as much as possible- so they set up their config once and in their conda environment they can easily specify a private pip package and it just works | Conda won't search PyPI or alternative pip-compatible indexes automatically, but one can still use the --index-url or --extra-index-url flags when using pip install. E.g., Ad Hoc Installation # activate environment conda activate foo # ensure it has `pip` installed conda list pip # install with `pip` pip install --extra-index-url http://localhost:8888 bar YAML-based environments foo.yaml name: foo channels: - defaults dependencies: - python - pip - pip: - --extra-index-url http://localhost:8888 - bar Environment creation conda env create -f foo.yaml | 12 | 16 |
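For ad hoc installs inside the activated environment, the same index can also be supplied through the environment variable pip reads, so users do not have to remember the flag (the URL below is a placeholder for the private Artifactory index):
PIP_EXTRA_INDEX_URL=http://localhost:8888 pip install bar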
65,357,675 | 2020-12-18 | https://stackoverflow.com/questions/65357675/a-function-to-return-the-frequency-counts-of-all-or-specific-columns | I can return the frequency of all columns in a nice dataframe with a total column. for column in df: df.groupby(column).size().reset_index(name="total") Count total 0 1 423 1 2 488 2 3 454 3 4 408 4 5 343 Precipitation total 0 Fine 7490 1 Fog 23 2 Other 51 3 Raining 808 Month total 0 1 717 1 2 648 2 3 710 3 4 701 I put the loop in a function, but this returns the first column "Count" only. def count_all_columns_freq(dataframe_x): for column in dataframe_x: return dataframe_x.groupby(column).size().reset_index(name="total") count_all_columns_freq(df) Count total 0 1 423 1 2 488 2 3 454 3 4 408 4 5 343 Is there a way to do this using slicing or other method e.g. for column in dataframe_x[1:]: | Based on your comment, you just want to return a list of dataframe: def count_all_columns_freq(df): return [df.groupby(column).size().reset_index(name="total") for column in df] You can select columns in many ways in pandas, e.g. by slicing or by passing a list of columns like in df[['colA', 'colB']]. You don't need to change the function for that. Personally, I would return a dictionary instead: def frequency_dict(df): return {column: df.groupby(column).size() for column in df} # so that I could use it like this: freq = frequency_dict(df) freq['someColumn'].loc[value] EDIT: "What if I want to count the number of NaN?" In that case, you can pass dropna=False to groupby (this works for pandas >= 1.1.0): def count_all_columns_freq(df): return [df.groupby(column, dropna=False).size().reset_index(name="total") for column in df] | 6 | 2 |
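pandas also has this built in: value_counts can produce the same per-column frequencies (including NaN counts) without a groupby. A sketch covering both the "all columns" and "specific columns" cases, assuming df as in the question:
def count_all_columns_freq(df, columns=None):
    # value_counts(dropna=False) also counts missing values
    columns = df.columns if columns is None else columns
    return {col: df[col].value_counts(dropna=False).rename("total") for col in columns}

freq = count_all_columns_freq(df, columns=["Count", "Month"])
print(freq["Month"])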
65,381,289 | 2020-12-20 | https://stackoverflow.com/questions/65381289/what-is-the-purpose-of-the-pyautogui-failsafe | So, I was just messing around with pyautogui moving the mouse to random positions on the screen, when I manually moved the mouse to the top left corner of the screen and ran the program, it raised a pyautogui failsafe. I do know how to disable it and all that, but I want to know why is is there in the first place and possible use cases Code: import pyautogui pyautogui.click(x=25, y=1048) time.sleep(2) # I moved the move to the corner of the screen during this time delay pyautogui.click(x=701, y=430) Error: Traceback (most recent call last): File "C:\1 Files and Folders\folder\Python Project\Python\CODE\My Projects\Automation.py", line 23, in <module> job() File "C:\1 Files and Folders\folder\Python Project\Python\CODE\My Projects\Automation.py", line 18, in job pyautogui.click(x=1475, y=141) File "C:\Users\My_user\AppData\Local\Programs\Python\Python39-32\lib\site-packages\pyautogui\__init__.py", line 585, in wrapper failSafeCheck() File "C:\Users\My_user\AppData\Local\Programs\Python\Python39-32\lib\site-packages\pyautogui\__init__.py", line 1710, in failSafeCheck raise FailSafeException( pyautogui.FailSafeException: PyAutoGUI fail-safe triggered from mouse moving to a corner of the screen. To disable this fail-safe, set pyautogui.FAILSAFE to False. DISABLING FAIL-SAFE IS NOT RECOMMENDED. | According to pyautogui docs Itβs hard to use the mouse to close a program if the mouse cursor is moving around on its own. As a safety feature, a fail-safe feature is enabled by default. When a PyAutoGUI function is called, if the mouse is in any of the four corners of the primary monitor, they will raise a pyautogui.FailSafeException. So simply FailSafe is just thing that if your program just spams the mouse or doing something that you cannot stop it, You can close that in this way, For more information read it docs pyautogui docs | 7 | 10 |
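Rather than disabling the fail-safe, it can be caught so a runaway script still ends cleanly when the user slams the mouse into a corner. A minimal sketch around the question's clicks:
import pyautogui

try:
    pyautogui.click(x=25, y=1048)
    pyautogui.click(x=701, y=430)
except pyautogui.FailSafeException:
    # the user moved the mouse to a screen corner - treat it as "stop now"
    print("Fail-safe triggered, aborting the automation.")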
65,379,408 | 2020-12-20 | https://stackoverflow.com/questions/65379408/pandas-groupby-and-find-difference-between-max-and-min | I have a dataframe that I have aggregated as below, but I want the difference between the max and min values: dnm=df.groupby('Type').agg({'Vehicle_Age': ['max','min']}) Expect: | You can use np.ptp, this does the max - min calculation for you: df.groupby('Type').agg({'Vehicle_Age': np.ptp}) Or, if you want a Series as the output: df.groupby('Type')['Vehicle_Age'].agg(np.ptp) | 5 | 12 |
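An equivalent without np.ptp, if passing a NumPy function to agg is unwanted:
g = df.groupby('Type')['Vehicle_Age']
print(g.max() - g.min())   # Series indexed by Type, same numbers as np.ptp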
65,317,215 | 2020-12-16 | https://stackoverflow.com/questions/65317215/using-unittest-mocks-patch-in-same-module-getting-does-not-have-the-attribute | I have what should've been a simple task, and it has stumped me for a while. I am trying to patch an object imported into the current module. Per the answers to Mock patching from/import statement in Python I should just be able to patch("__main__.imported_obj"). However, this isn't working for me. Please see my below minimal repro (I am running the tests via pytest): Minimal Repro This is run using Python 3.8.6. from random import random from unittest.mock import patch import pytest @pytest.fixture def foo(): with patch("__main__.random"): return def test(foo) -> None: pass When I run this code using PyCharm, I get an AttributeError: AttributeError: <module '__main__' from '/Applications/PyCharm.app/Contents/plugins/python/helpers/pycharm/_jb_pytest_runner.py'> does not have the attribute 'random' Furthermore, when I enter debugger mode in the line before the with patch, I see the attribute __main__ is not defined. I am not sure if it needs to be defined for patch to work its magic. NOTE: I know I can use patch.object and it becomes much easier. However, I am trying to figure out how to use patch in this question. Research Unable to mock open, even when using the example from the documentation This question is related because it's both a similar error message and use case. Their solution was to use builtins instead of __main__, but that's because they were trying to patch a built-in function (open). | You are assuming that the module the test is running in is __main__, but that would only be the case if it were called via main. This is usually the case if you are using unittest. With pytest, the tests live in the module they are defined in. You have to patch the current module, the name of which is accessible via __name__, instead of assuming a specific module name: from random import random from unittest.mock import patch import pytest @pytest.fixture def foo(): with patch(__name__ + ".random"): yield | 6 | 6 |
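With pytest you can also reach the same goal through the monkeypatch fixture, targeting the name as it is bound in the current test module (same layout as the question, i.e. from random import random at the top):
import sys

def test_foo_with_monkeypatch(monkeypatch):
    # equivalent to patch(__name__ + ".random"), undone automatically after the test
    monkeypatch.setattr(sys.modules[__name__], "random", lambda: 0.5)
    assert random() == 0.5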
65,372,252 | 2020-12-19 | https://stackoverflow.com/questions/65372252/selenium-python-page-down-unknown-error-neterr-name-not-resolved | So I'm currently working on a python scraper to collect website info with selenium in python. The issue I'm having is if I head to a page that isn't live I get the error: unknown error: net::ERR_NAME_NOT_RESOLVED I haven't used python in a while so my knowledge isn't the best. Here is my code driver = webdriver.Chrome(ChromeDriverManager().install()) try: driver.get('%s' %link) except ERR_NAME_NOT_RESOLVED: print ("page down") example website: http://www.whitefoxcatering.co.uk error Traceback (most recent call last): File "C:\Users\STE\AppData\Local\Programs\Python\Python39\selenium email\selenium test.py", line 106, in <module> driver.get('%s' %url) File "C:\Users\STE\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\selenium\webdriver\remote\webdriver.py", line 333, in get self.execute(Command.GET, {'url': url}) File "C:\Users\STE\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute self.error_handler.check_response(response) File "C:\Users\STE\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.WebDriverException: Message: unknown error: net::ERR_NAME_NOT_RESOLVED (Session info: chrome=87.0.4280.88) I've searched through the previous questions and cannot seem to find a solution to this. Any help would be hugely appreciated :) | If you want to catch the error you have to change your except to what selenium is raising for you. In this case: selenium.common.exceptions.WebDriverException. So first import it: from selenium.common.exceptions import WebDriverException Then you can catch: try: driver.get('http://www.whitefoxcatering.co.uk') except WebDriverException: print("page down") | 6 | 8 |
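To tell a dead domain apart from other WebDriver failures, the hostname can be resolved up front. A sketch, assuming driver is the same Chrome instance as in the question:
import socket
from urllib.parse import urlparse
from selenium.common.exceptions import WebDriverException

link = 'http://www.whitefoxcatering.co.uk'

try:
    socket.gethostbyname(urlparse(link).hostname)
except socket.gaierror:
    print("page down (DNS lookup failed)")
else:
    try:
        driver.get(link)
    except WebDriverException:
        print("page down")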
65,369,612 | 2020-12-19 | https://stackoverflow.com/questions/65369612/unknown-distribution-format-when-uploading-to-pypi-via-twine | I am trying to update the version of infixpy using twine. Here is my ~/.pypirc: index-servers = pypi pypitest [pypi] repository: https://upload.pypi.org/legacy/ username: myuser password: mypassword [pypitest] repository: https://upload.testpypi.org/legacy username: myuser password: mypassword Here is the command line: python setup.py build twine upload -r pypi dist/ The upload errors out with InvalidDistribution: Unknown distribution format: '' Here is the full output: Processing dependencies for infixpy==0.0.6 Finished processing dependencies for infixpy==0.0.6 Processing /Users/steve/git/infixpy Building wheels for collected packages: infixpy Building wheel for infixpy (setup.py) ... done Created wheel for infixpy: filename=infixpy-0.0.6-py3-none-any.whl size=43459 sha256=01fed46f42fa86475079636a55685c93521989aa0ba6558726a9d35c01004b7a Stored in directory: /private/var/folders/d6/m67jyndd7h754m3810cl3bpm0000gp/T/pip-ephem-wheel-cache-1bizg6_y/wheels/47/66/74/d79a56979feba04c8ef05e12fe861cacf813cecd397e57071f Successfully built infixpy Installing collected packages: infixpy Attempting uninstall: infixpy Found existing installation: infixpy 0.0.6 Uninstalling infixpy-0.0.6: Successfully uninstalled infixpy-0.0.6 Successfully installed infixpy-0.0.6 Uploading distributions to https://upload.pypi.org/legacy/ InvalidDistribution: Unknown distribution format: '' What corrections are needed to my publishing process? I am on Python 3.7 on macOS. | Per the docs for twine upload (emphasis mine): positional arguments: dist The distribution files to upload to the repository (package index). Usually dist/* . May additionally contain a .asc file to include an existing signature with the file upload. You've passed a directory, not files - as the docs suggest, you probably want dist/*. If you pass a directory there are no matches for known distributions, as these are based on file extension, so you end up at this error case: else: raise exceptions.InvalidDistribution( "Unknown distribution format: '%s'" % os.path.basename(filename) ) The basename for a directory is '', hence the not-so-helpful output. | 7 | 10 |
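Worth noting: python setup.py build writes to build/, not dist/, so it produces nothing twine can upload. A typical sequence that does populate dist/ (using the build package) looks like:
python -m pip install --upgrade build twine
python -m build                 # creates dist/<name>-<version>.tar.gz and .whl
twine upload -r pypi dist/*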
65,367,298 | 2020-12-19 | https://stackoverflow.com/questions/65367298/python-while-loop-breakout-issues | The question I have is about the flag I have here for the while loop. This works but not like I think it should. I assume I'm not understanding something so if someone is able to explain, that would be great. From my understanding this should break out of the loop as soon as one of my conditionals is met. So if I input 'q' it should break out and stop the loop. But what happens is it keeps going through the loop and then it breaks out. so it goes through the last prompt and prints the exception. (Python version is 3.8.5) # Statement that tells the user what we need. print("Enter two numbers and I will tell you the sum of the numbers.") # Lets the user know they can press 'q' to exit the program. print("Press 'q' at anytime to exit.") keep_going = True # Loop to make the program keep going until its told to stop. while keep_going: # Prompt for user to input first number and store it in a variable. first_number = input("First number: ") # Create a break when entering the first number. if first_number == 'q': keep_going = False # Prompt for user to input second number and store it in a variable. second_number = input("Second number: ") # Create a break when entering the second number. if second_number == 'q': keep_going = False # Exception for non integers being input "ValueError" try: # Convert input to integers and add them. # storing the answer in a variable. answer = int(first_number) + int(second_number) except ValueError: # Tell the user what they did wrong. print("Please enter a number!") else: # Print the sum of the numbers print(f"\nThe answer is: {answer}") Using this code it breaks out right away like I expect it to. while True: first_number = input("First number: ") if first_number == 'q': break second_number = input("Second number: ") if second_number == 'q': break I just would like to understand what the difference is and if thats how it should work. I feel like I'm missing something or misunderstanding something. | The condition of the while loop is only checked between iterations of the loop body, so if you change the condition in the middle of the loop, the current iteration will finish before the loop terminates. If you want to break a loop immediately, you need to either break (which automatically breaks the loop regardless of the condition) or continue (which jumps to the next iteration, and will therefore terminate the loop if the condition is no longer true). Using while True: with a break when you want to stop the loop is generally much more straightforward than trying to control the loop by setting and unsetting a flag. FWIW, rather than copying and pasting the code to input the two numbers, and have two different ways to break out of the loop, I might put that all into a function and break the loop with an Exception, like this: print("Enter two numbers and I will tell you the sum of the numbers.") print("Press 'q' at anytime to exit.") def input_number(prompt: str) -> int: """Ask the user to input a number, re-prompting on invalid input. 
Exception: raise EOFError if the user enters 'q'.""" while True: try: number = input(f"{prompt} number: ") if number == 'q': raise EOFError return int(number) except ValueError: print("Please enter a number!") while True: try: numbers = (input_number(n) for n in ("First", "Second")) print(f"The answer is: {sum(numbers)}") except EOFError: break This makes it easier to extend the program to handle more than two inputs; try adding a "Third" after where it says "First" and "Second"! :) | 8 | 12 |
65,362,524 | 2020-12-18 | https://stackoverflow.com/questions/65362524/in-json-created-from-a-pydantic-basemodel-exclude-optional-if-not-set | I want to exclude all the Optional values that are not set when I create JSON. In this example: from pydantic import BaseModel from typing import Optional class Foo(BaseModel): x: int y: int = 42 z: Optional[int] print(Foo(x=3).json()) I get {"x": 3, "y": 42, "z": null}. But I would like to exclude z. Not because its value is None, but because it is Optional and there was no keyword argument for z. In the two cases below I would like to have z in the JSON. Foo(x=1, z=None) Foo(x=1, z=77) If there is any other solution to set z to optional in this sense, I would like to see it. | You could exclude only optional model fields that unset by making of union of model fields that are set and those that are not None. Pydantic provides the following arguments for exporting method model.dict(...): exclude_unset: whether fields which were not explicitly set when creating the model should be excluded from the returned dictionary; default False. exclude_none: whether fields which are equal to None should be excluded from the returned dictionary; default False To make union of two dicts we can use the expression a = {**b, **c} (values from c overwrites values from b). Note that since Python 3.9 it could be done just as a = b | c. from pydantic import BaseModel from typing import Optional from pydantic.json import pydantic_encoder import json class Foo(BaseModel): x: int y: int = 42 z: Optional[int] def exclude_optional_dict(model: BaseModel): return {**model.dict(exclude_unset=True), **model.dict(exclude_none=True)} def exclude_optional_json(model: BaseModel): return json.dumps(exclude_optional_dict(model), default=pydantic_encoder) print(exclude_optional_json(Foo(x=3))) # {"x": 3, "y": 42} print(exclude_optional_json(Foo(x=3, z=None))) # {"x": 3, "z": null, "y": 42} print(exclude_optional_json(Foo(x=3, z=77))) # {"x": 3, "z": 77, "y": 42} Update In order for the approach to work with nested models, we need to do a deep union( or merge) of two dictionaries, like so: def union(source, destination): for key, value in source.items(): if isinstance(value, dict): node = destination.setdefault(key, {}) union(value, node) else: destination[key] = value return destination def exclude_optional_dict(model: BaseModel): return union(model.dict(exclude_unset=True), model.dict(exclude_none=True)) class Foo(BaseModel): x: int y: int = 42 z: Optional[int] class Bar(BaseModel): a: int b: int = 52 c: Optional[int] d: Foo print(exclude_optional_json(Bar(a=4, d=Foo(x=3)))) print(exclude_optional_json(Bar(a=4, c=None, d=Foo(x=3, z=None)))) print(exclude_optional_json(Bar(a=4, c=78, d=Foo(x=3, z=77)))) {"a": 4, "b": 52, "d": {"x": 3, "y": 42}} {"a": 4, "b": 52, "d": {"x": 3, "y": 42, "z": null}, "c": null} {"a": 4, "b": 52, "c": 78, "d": {"x": 3, "y": 42, "z": 77}} | 21 | 25 |
65,357,462 | 2020-12-18 | https://stackoverflow.com/questions/65357462/can-you-have-an-empty-string-for-a-python-enum-attribute | I'd really like to use Python enum types for this model I'm working on. The problem is one of the potential values from the data provider is an empty string. Given this model... from sqlalchemy import Column, Enum class Events(Base): __tablename__ = 'events' ... restriction_code = Column(Enum(RestrictionCode)) ... ...one of the enum types would have an "empty string" as an attribute like this... from enum import Enum class RestrictionCode(Enum): A = 'A Description' B = 'B Description' C = 'C Description' D = 'D Description' '' = 'No Restriction' The above produces the not surprising: SyntaxError: can't assign to literal. Is this even possible or is there a workaround? | No, each Enum member must have a name. The intended way to use Enums in this scenario would be to have the value of the member be the value stored in the database, so your Enum should look like: class RestrictionCode(Enum): A = 'A' B = 'B' C = 'C' D = 'D' NONE = '' If you want a description as well, you'll need to design your own Enum -- but it's not hard: class DescriptiveEnum(Enum): # def __new__(cls, value, description): member = object.__new__(cls) member._value_ = value member.description = description return member Then your RestrictionCode will look like: class RestrictionCode(DescriptiveEnum): A = 'A', 'A description' B = 'B', 'B description' C = 'C', 'C description' D = 'D', 'D description' NONE = '', 'No restriction' | 6 | 4 |
65,360,692 | 2020-12-18 | https://stackoverflow.com/questions/65360692/python-patching-new-method | I am trying to patch __new__ method of a class, and it is not working as I expect. from contextlib import contextmanager class A: def __init__(self, arg): print('A init', arg) @contextmanager def patch_a(): new = A.__new__ def fake_new(cls, *args, **kwargs): print('call fake_new') return new(cls, *args, **kwargs) # here I get error: TypeError: object.__new__() takes exactly one argument (the type to instantiate) A.__new__ = fake_new try: yield finally: A.__new__ = new if __name__ == '__main__': A('foo') with patch_a(): A('bar') A('baz') I expect the following output: A init foo call fake_new A init bar A init baz But after call fake_new I get an error (see comment in the code). For me It seems like I just decorate a __new__ method and propagate all args unchanged. It doesn't work and the reason is obscure for me. Also I can write return new(cls) and call A('bar') works fine. But then A('baz') breaks. Can someone explain what is going on? Python version is 3.8 | You've run into a complicated part of Python object instantiation - in which the language opted for a design that would allow one to create a custom __init__ method with parameters, without having to touch __new__. However, the in the base of class hierarchy, object, both __new__ and __init__ take one single parameter each. IIRC, it goes this way: if your class have a custom __init__ and you did not touch __new__ and there are more any parameters to the class instantiation that would be passed to both __init__ and __new__, the parameters will be stripped from the call do __new__, so you don't have to customize it just to swallow the parameters you consume in __init__. The converse is also true: if your class have a custom __new__ with extra parameters, and no custom __init__, these are not passed to object.__init__. With your design, Python sees a custom __new__ and passes it the same extra arguments that are passed to __init__ - and by using *args, **kw, you forward those to object.__new__ which accepts a single parameter - and you get the error you presented us. The fix is to not pass those extra parameters to the original __new__ method - unless they are needed there - so you have to make the same check Python's type does when initiating an object. And an interesting surprise to top it: while making the example work, I found out that even if A.__new__ is deleted when restoring the patch, it is still considered as "touched" by cPython's type instantiation, and the arguments are passed through. 
In order to get your code working I needed to leave a permanent stub A.__new__ that will forward only the cls argument: from contextlib import contextmanager class A: def __init__(self, arg): print('A init', arg) @contextmanager def patch_a(): new = A.__new__ def fake_new(cls, *args, **kwargs): print('call fake_new') if new is object.__new__: return new(cls) return new(cls, *args, **kwargs) # here I get error: TypeError: object.__new__() takes exactly one argument (the type to instantiate) A.__new__ = fake_new try: yield finally: del A.__new__ if new is not object.__new__: A.__new__ = new else: A.__new__ = lambda cls, *args, **kw: object.__new__(cls) print(A.__new__) if __name__ == '__main__': A('foo') with patch_a(): A('bar') A('baz') (I tried inspecting the original __new__ signature instead of the new is object.__new__ comparison - to no avail: object.__new__ signature is *args, **kwargs - possibly made so that it will never fail on static checking) | 8 | 6 |
65,359,261 | 2020-12-18 | https://stackoverflow.com/questions/65359261/can-you-get-all-estimators-from-an-sklearn-grid-search-gridsearchcv | I recently tested many hyperparameter combinations using sklearn.model_selection.GridSearchCV. I want to know if there is a way to call all previous estimators that were trained in the process. search = GridSearchCV(estimator=my_estimator, param_grid=parameters) # `my_estimator` is a gradient boosting classifier object # `parameters` is a dictionary containing all the hyperparameters I want to try I know I can call the best estimator with search.best_estimator_, but I would like to call all other estimators as well so I can test their individual performance. The search took around 35 hours to complete, so I really hope I do not have to do this all over again. NOTE: This was asked a few years ago (here), but sklearn has been updated multiple times since and the anwer may be different now (I hope). | No, none of the tested models are saved, except (optionally, but by default) one final one trained on the entire training set, your best_estimator_. Especially when models store significant amounts of data (e.g. KNNs), saving all the fitted estimators would be very memory-expensive, and usually not of much use. (cross_validate does have a parameter return_estimator, but the hyperparameter tuners do not. If you have a compelling reason to add it, it probably wouldn't take much work and you could open a GitHub Issue at sklearn.) However, you do have the cv_results_ attribute that documents the scores of all of the tested estimators. That's usually enough for inspection purposes. | 6 | 10 |
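A small self-contained sketch of working with cv_results_ after the search; the iris data and the tiny grid are stand-ins for the real 35-hour search, not taken from the question.

```python
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)
search = GridSearchCV(GradientBoostingClassifier(random_state=0),
                      param_grid={"max_depth": [1, 2, 3]}, cv=3)
search.fit(X, y)

# One row per tested hyperparameter combination, with per-fold and mean scores.
results = pd.DataFrame(search.cv_results_)
print(results[["params", "mean_test_score", "std_test_score", "rank_test_score"]])

# Any candidate can be re-fitted later from its stored params.
runner_up = results.sort_values("rank_test_score").iloc[1]["params"]
model = GradientBoostingClassifier(random_state=0, **runner_up).fit(X, y)
```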
65,357,665 | 2020-12-18 | https://stackoverflow.com/questions/65357665/how-to-skip-lines-in-pycharm-debug-mode-with-python | Let's say I put a breakpoint in the first line. I see no option to simply skip the 2nd line and jump straight to the print statement. Is there any hidden option? If not, what is the most non-intrusive way? Commenting out the lines I don't wanna run is not elegant. a = 3 a = 4 print(a) | You can do right click on the third statement and Jump to Cursor. This is a manual action though... I don't think there is a mode to only run breakpointed lines... | 8 | 12 |
65,352,682 | 2020-12-18 | https://stackoverflow.com/questions/65352682/python-asyncio-pythonic-way-of-waiting-until-condition-satisfied | I need to wait until a certain condition/equation becomes True in a asynchronous function in python. Basically it is a flag variable which would be flagged by a coroutine running in asyncio.create_task(). I want to await until it is flagged in the main loop of asyncio. Here's my current code: import asyncio flag = False async def bg_tsk(): await asyncio.sleep(10) flag = True async def waiter(): asyncio.create_task(bg_tsk()) await asyncio.sleep(0) # I find the below part unpythonic while not flag: await asyncio.sleep(0.2) asyncio.run(waiter()) Is there any better implementation of this? Or is this the best way possible? I have tried using 'asyncio.Event', but it doesnt seem to work with 'asyncio.create_task()'. | Using of asyncio.Event is quite straightforward. Sample below. Note: Event should be created from inside coroutine for correct work. import asyncio async def bg_tsk(flag): await asyncio.sleep(3) flag.set() async def waiter(): flag = asyncio.Event() asyncio.create_task(bg_tsk(flag)) await flag.wait() print("After waiting") asyncio.run(waiter()) | 11 | 14 |
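A variation on the accepted answer for the case where the wait should also give up after a deadline; asyncio.wait_for wraps Event.wait(), and the 3 and 5 second values are arbitrary.

```python
import asyncio

async def bg_tsk(flag: asyncio.Event):
    await asyncio.sleep(3)
    flag.set()

async def waiter():
    flag = asyncio.Event()
    asyncio.create_task(bg_tsk(flag))
    try:
        await asyncio.wait_for(flag.wait(), timeout=5)
        print("flag was set")
    except asyncio.TimeoutError:
        print("gave up waiting")

asyncio.run(waiter())
```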
65,349,787 | 2020-12-17 | https://stackoverflow.com/questions/65349787/how-to-find-the-range-of-dates-from-a-datetime-column-in-a-dataframe | Wondering how to print the range of dates in a dataframe. Seems like it would be very simple but I can't find answers anywhere. Is there an easy way to do this with pandas datetime module? So if this was a small version of the dataframe for example: Date Id Value 2020-09-23 14:00:00 4752764 12212 2020-10-25 08:00:00 4752764 12298 2020-10-28 12:00:00 4752764 12291 2020-10-29 18:00:00 4752764 12295 How could I get an output like: date_range = 2020-09-23 to 2020-10-29 OR date_range = 23rd of September, 2020 to 29th of October, 2020 I appreciate any answers :) | Try this df['Date'] = pd.to_datetime(df['Date']) # If your Date column is of the type object otherwise skip this date_range = str(df['Date'].dt.date.min()) + ' to ' +str(df['Date'].dt.date.max()) | 12 | 20 |
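The same approach as the accepted answer, made self-contained on the sample rows from the question and using an f-string for the output:

```python
import pandas as pd

df = pd.DataFrame({
    "Date": ["2020-09-23 14:00:00", "2020-10-25 08:00:00",
             "2020-10-28 12:00:00", "2020-10-29 18:00:00"],
    "Id": 4752764,
    "Value": [12212, 12298, 12291, 12295],
})
df["Date"] = pd.to_datetime(df["Date"])

date_range = f"{df['Date'].dt.date.min()} to {df['Date'].dt.date.max()}"
print(date_range)   # 2020-09-23 to 2020-10-29
```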
65,344,578 | 2020-12-17 | https://stackoverflow.com/questions/65344578/how-to-check-if-a-model-is-in-train-or-eval-mode-in-pytorch | How to check from within a model if it is currently in train or eval mode? | From the Pytorch forum, with a small tweak: use if self.training: # it's in train mode else: # it's in eval mode Always better to have a stack overflow answer than to look at forums. Explanation about the modes | 24 | 34 |
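A quick check of the same flag from outside the model (inside forward() the answer's `self.training` is this attribute); the small Sequential model is only a stand-in.

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.Dropout(0.5), nn.Linear(8, 2))

model.eval()
print(model.training)      # False
print(model[1].training)   # False -> eval()/train() propagate to submodules

model.train()
print(model.training)      # True
```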
65,343,377 | 2020-12-17 | https://stackoverflow.com/questions/65343377/adam-optimizer-with-warmup-on-pytorch | In the paper Attention is all you need, under section 5.3, the authors suggested to increase the learning rate linearly and then decrease proportionally to the inverse square root of steps. How do we implement this in PyTorch with Adam optimizer? Preferably without additional packages. | PyTorch provides learning-rate-schedulers for implementing various methods of adjusting the learning rate during the training process. Some simple LR-schedulers are already implemented and can be found here: https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate In your special case you can - just like the other LR-schedulers do - subclass _LRScheduler for implementing a variable schedule based on the number of epochs. For a bare-bones method you only need to implement __init__() and get_lr() methods. Just note that many of these schedulers expect you to call .step() once per epoch. But you can also update it more frequently or even pass a custom argument just like in the cosine-annealing LR-scheduler: https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html#CosineAnnealingLR | 24 | 16 |
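One possible sketch of the schedule from section 5.3, using the built-in LambdaLR instead of the _LRScheduler subclass the answer suggests; d_model, warmup_steps and the Adam betas follow the paper, the linear model and step count are only placeholders.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import LambdaLR

model = torch.nn.Linear(512, 512)
optimizer = Adam(model.parameters(), lr=1.0, betas=(0.9, 0.98), eps=1e-9)

d_model, warmup_steps = 512, 4000

def noam_lambda(step: int) -> float:
    # lr = d_model**-0.5 * min(step**-0.5, step * warmup_steps**-1.5)
    step = max(step, 1)   # LambdaLR first calls this with step 0; avoid 0 ** -0.5
    return (d_model ** -0.5) * min(step ** -0.5, step * warmup_steps ** -1.5)

scheduler = LambdaLR(optimizer, lr_lambda=noam_lambda)   # scales the base lr of 1.0

for step in range(10):          # training loop: loss and backward() omitted
    optimizer.step()
    scheduler.step()
    print(step, optimizer.param_groups[0]["lr"])
```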
65,334,215 | 2020-12-17 | https://stackoverflow.com/questions/65334215/how-can-request-param-be-annotated-in-indirect-parametrization | In the Indirect parametrization example I want to type hint request.param indicating a specific type, a str for example. The problem is since the argument to fixt must be the request fixture there seems to be no way to indicate what type the parameters passed through the "optional param attribute" should be (quoting the documentation). What are the alternatives? Perhaps documenting the type hint in the fixt docstring, or in the test_indirect docstring? @pytest.fixture def fixt(request): return request.param * 3 @pytest.mark.parametrize("fixt", ["a", "b"], indirect=True) def test_indirect(fixt): assert len(fixt) == 3 | As of now (version 6.2), pytest doesn't provide any type hint for the param attribute. If you need to just type param regardless of the rest of FixtureRequest fields and methods, you can inline your own impl stub: from typing import TYPE_CHECKING if TYPE_CHECKING: class FixtureRequest: param: str else: from typing import Any FixtureRequest = Any @pytest.fixture def fixt(request: FixtureRequest) -> str: return request.param * 3 If you want to extend existing typing of FixtureRequest, the stubbing gets somewhat more complex: from typing import TYPE_CHECKING if TYPE_CHECKING: from pytest import FixtureRequest as __FixtureRequest class FixtureRequest(__FixtureRequest): param: str else: from pytest import FixtureRequest @pytest.fixture def fixt(request: FixtureRequest) -> str: return request.param * 3 Ideally, pytest would allow generic param types in FixtureRequest, e.g. P = TypeVar("P") # generic param type class FixtureRequest(Generic[P]): def __init__(self, param: P, ...): ... You would then just do from pytest import FixtureRequest @pytest.fixture def fixt(request: FixtureRequest[str]) -> str: return request.param * 3 Not sure, however, whether this typing is agreeable with the current pytest's codebase - I guess it was omitted for a reason... | 11 | 16 |
65,337,641 | 2020-12-17 | https://stackoverflow.com/questions/65337641/switching-python-version-3-9-%e2%86%92-3-8-installed-by-homebrew | Itβs a very similar situation like described here, but vice versa. I have Python 3.8 installed via Homebrew and updated that to 3.9: % brew list --formula | grep python [email protected] [email protected] I want to use Python 3.8 as my default version with python3 command and tried β inspired by this answer β the following: brew unlink [email protected] brew unlink [email protected] brew link [email protected] The last gave me the following output: % brew link [email protected] Linking /usr/local/Cellar/[email protected]/3.8.6_2... Error: Could not symlink bin/pip3 Target /usr/local/bin/pip3 already exists. You may want to remove it: rm '/usr/local/bin/pip3' To force the link and overwrite all conflicting files: brew link --overwrite [email protected] To list all files that would be deleted: brew link --overwrite --dry-run [email protected] So I did: % brew link --overwrite --dry-run [email protected] Would remove: /usr/local/bin/pip3 If you need to have this software first in your PATH instead consider running: echo 'export PATH="/usr/local/opt/[email protected]/bin:$PATH"' >> ~/.zshrc I thought it would be a good idea to check that first: ralf@razbook ~ % brew link --force --dry-run [email protected] Would link: /usr/local/bin/2to3 /usr/local/bin/2to3-3.8 /usr/local/bin/easy_install-3.8 /usr/local/bin/idle3 /usr/local/bin/idle3.8 /usr/local/bin/pip3 /usr/local/bin/pip3.8 /usr/local/bin/pydoc3 /usr/local/bin/pydoc3.8 /usr/local/bin/python3 /usr/local/bin/python3-config /usr/local/bin/python3.8 /usr/local/bin/python3.8-config /usr/local/bin/wheel3 /usr/local/share/man/man1/python3.1 /usr/local/share/man/man1/python3.8.1 /usr/local/lib/pkgconfig/python-3.8-embed.pc /usr/local/lib/pkgconfig/python-3.8.pc /usr/local/lib/pkgconfig/python3-embed.pc /usr/local/lib/pkgconfig/python3.pc /usr/local/Frameworks/Python.framework/Headers /usr/local/Frameworks/Python.framework/Python /usr/local/Frameworks/Python.framework/Resources /usr/local/Frameworks/Python.framework/Versions/3.8 /usr/local/Frameworks/Python.framework/Versions/Current If you need to have this software first in your PATH instead consider running: echo 'export PATH="/usr/local/opt/[email protected]/bin:$PATH"' >> ~/.zshrc Sounds good, so let's do it: % brew link --force [email protected] Linking /usr/local/Cellar/[email protected]/3.8.6_2... Error: Could not symlink bin/pip3 Target /usr/local/bin/pip3 already exists. You may want to remove it: rm '/usr/local/bin/pip3' To force the link and overwrite all conflicting files: brew link --overwrite [email protected] To list all files that would be deleted: brew link --overwrite --dry-run [email protected] Unfortunately I skipped the dry-run: % brew link --overwrite [email protected] Linking /usr/local/Cellar/[email protected]/3.8.6_2... 25 symlinks created If you need to have this software first in your PATH instead consider running: echo 'export PATH="/usr/local/opt/[email protected]/bin:$PATH"' >> ~/.zshrc Something seems to have worked: % python3 --version Python 3.8.6 % pip3 --version pip 20.2.4 from /usr/local/lib/python3.8/site-packages/pip (python 3.8) But still something with pipenv was wrong: % pipenv install google-ads Warning: Python 3.9 was not found on your system... Neither 'pyenv' nor 'asdf' could be found to install Python. 
You can specify specific versions of Python with: $ pipenv --python path/to/python Perhaps I simply should reinstall pipenv? % which pipenv /usr/local/bin/pipenv % pip3 uninstall pipenv Found existing installation: pipenv 2020.8.13 Uninstalling pipenv-2020.8.13: Would remove: /usr/local/bin/pipenv /usr/local/bin/pipenv-resolver /usr/local/lib/python3.8/site-packages/pipenv-2020.8.13.dist-info/* /usr/local/lib/python3.8/site-packages/pipenv/* Proceed (y/n)? y Successfully uninstalled pipenv-2020.8.13 % pip3 install pipenv Collecting pipenv Downloading pipenv-2020.11.15-py2.py3-none-any.whl (3.9 MB) |ββββββββββββββββββββββββββββββββ| 3.9 MB 2.9 MB/s Requirement already satisfied: pip>=18.0 in /usr/local/lib/python3.8/site-packages (from pipenv) (20.2.4) Requirement already satisfied: virtualenv in /usr/local/lib/python3.8/site-packages (from pipenv) (20.0.31) Requirement already satisfied: setuptools>=36.2.1 in /usr/local/lib/python3.8/site-packages (from pipenv) (50.3.2) Requirement already satisfied: certifi in /usr/local/lib/python3.8/site-packages (from pipenv) (2020.6.20) Requirement already satisfied: virtualenv-clone>=0.2.5 in /usr/local/lib/python3.8/site-packages (from pipenv) (0.5.4) Requirement already satisfied: distlib<1,>=0.3.1 in /usr/local/lib/python3.8/site-packages (from virtualenv->pipenv) (0.3.1) Requirement already satisfied: filelock<4,>=3.0.0 in /usr/local/lib/python3.8/site-packages (from virtualenv->pipenv) (3.0.12) Collecting six<2,>=1.9.0 Using cached six-1.15.0-py2.py3-none-any.whl (10 kB) Requirement already satisfied: appdirs<2,>=1.4.3 in /usr/local/lib/python3.8/site-packages (from virtualenv->pipenv) (1.4.4) Installing collected packages: pipenv, six Successfully installed pipenv-2020.11.15 six-1.15.0 But still: % pipenv install google-ads Warning: Python 3.9 was not found on your system... Neither 'pyenv' nor 'asdf' could be found to install Python. You can specify specific versions of Python with: $ pipenv --python path/to/python Actually not OK, but let's declare which Python pipenv should use: % pipenv --python /usr/local/opt/[email protected]/bin/python3 install google-ads Creating a virtualenv for this project... Pipfile: /Users/ralf/code/test_snippets/20-12-10_google_ads/Pipfile Using /usr/local/opt/[email protected]/bin/python3 (3.8.6) to create virtualenv... β ¦ Creating virtual environment...created virtual environment CPython3.8.6.final.0-64 in 362ms creator CPython3Posix(dest=/Users/ralf/.local/share/virtualenvs/20-12-10_google_ads-S7vGVfKj, clear=False, global=False) seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/Users/ralf/Library/Application Support/virtualenv) added seed packages: pip==20.2.4, setuptools==50.3.2, wheel==0.35.1 activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator β Successfully created virtual environment! Virtualenv location: /Users/ralf/.local/share/virtualenvs/20-12-10_google_ads-S7vGVfKj Warning: Your Pipfile requires python_version 3.9, but you are using 3.8.6 (/Users/ralf/.local/share/v/2/bin/python). $ pipenv --rm and rebuilding the virtual environment may resolve the issue. $ pipenv check will surely fail. Warning: Your Pipfile requires python_version 3.9, but you are using 3.8.6 (/Users/ralf/.local/share/v/2/bin/python). $ pipenv --rm and rebuilding the virtual environment may resolve the issue. $ pipenv check will surely fail. Installing google-ads... 
pipenv --rm and rebuilding the virtual environment did not help. (I even consider to simply uninstall Python 3.9 and installing Python 3.8 again - but don't know how.) Is there a way to persuade pipenv of using Python 3.8? | Well, sometimes it helps to ask the question to find the solution on your own - one of the great things of StackOverflow, by the way. The hint is in the warning of pipenv: "Your Pipfile requires python_version 3.9". I simply did rm Pipfile rm Pipfile.lock and then it worked: pipenv install google-ads Well, at least pipenv worked correctly with Python 3.8. There is still an issue with google-ads, but that's another story. Probably it would have been enough to change the Pipfile: [requires] python_version = "3.8" | 9 | 1 |
65,326,039 | 2020-12-16 | https://stackoverflow.com/questions/65326039/how-to-handle-job-cancelation-in-slurm | I am using Slurm job manager on an HPC cluster. Sometimes there are situations, when a job is canceled due to time limit and I would like to finish my program gracefully. As far as I understand, the process of cancellation occurs in two stages exactly for a software developer to be able to finish the program gracefully: srun: Job step aborted: Waiting up to 62 seconds for job step to finish. slurmstepd: error: *** JOB 18522559 ON ncm0317 CANCELLED AT 2020-12-14T19:42:43 DUE TO TIME LIMIT *** You can see that I am given 62 seconds to finish the job the way I want it to finish (by saving some files, etc.). Question: how to do this? I understand that first some Unix signal is sent to my job and I need to respond to it correctly. However, I cannot find in the Slurm documentation any information on what this signal is. Besides, I do not exactly how to handle it in Python, probably, through exception handling. | In Slurm, you can decide which signal is sent at which moment before your job hits the time limit. From the sbatch man page: --signal=[[R][B]:]<sig_num>[@<sig_time>] When a job is within sig_time seconds of its end time, send it the signal sig_num. So set #SBATCH --signal=B:TERM@05:00 to get Slurm to signal the job with SIGTERM 5 minutes before the allocation ends. Note that depending on how you start your job, you might need to remove the B: part. In your Python script, use the signal package. You need to define a "signal handler", a function that will be called when the signal is receive, and "register" that function for a specific signal. As that function is disrupting the normal flow when called , you need to keep it short and simple to avoid unwanted side effects, especially with multithreaded code. A typical scheme in a Slurm environment is to have a script skeleton like this: #! /bin/env python import signal, os, sys # Global Boolean variable that indicates that a signal has been received interrupted = False # Global Boolean variable that indicates then natural end of the computations converged = False # Definition of the signal handler. All it does is flip the 'interrupted' variable def signal_handler(signum, frame): global interrupted interrupted = True # Register the signal handler signal.signal(signal.SIGTERM, signal_handler) try: # Try to recover a state file with the relevant variables stored # from previous stop if any with open('state', 'r') as file: vars = file.read() except: # Otherwise bootstrap (start from scratch) vars = init_computation() while not interrupted and not converged: do_computation_iteration() # Save current state if interrupted: with open('state', 'w') as file: file.write(vars) sys.exit(99) sys.exit(0) This first tries to restart computations left by a previous run of the job, and otherwise bootstraps it. If it was interrupted, it lets the current loop iteration finish properly, and then saves the needed variables to disk. It then exits with the 99 return code. This allows, if Slurm is configured for it, to requeue the job automatically for further iteration. If slurm is not configured for it, you can do it manually in the submission script like this: python myscript.py || scontrol requeue $SLURM_JOB_ID | 7 | 9 |
65,337,020 | 2020-12-17 | https://stackoverflow.com/questions/65337020/pyspark-filter-dataframe-if-column-does-not-contain-string | I hope it wasn't asked before, at least I couldn't find. I'm trying to exclude rows where Key column does not contain 'sd' value. Below is the working example for when it contains. values = [("sd123","2"),("kd123","1")] columns = ['Key', 'V1'] df2 = spark.createDataFrame(values, columns) df2.where(F.col('Key').contains('sd')).show() how to do the opposite? | Use ~ as bitwise NOT: df2.where(~F.col('Key').contains('sd')).show() | 18 | 35 |
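The same filter, made end-to-end with a local Spark session and the question's toy rows (the session settings are illustrative):

```python
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.master("local[1]").appName("contains-demo").getOrCreate()

df2 = spark.createDataFrame([("sd123", "2"), ("kd123", "1")], ["Key", "V1"])
df2.where(~F.col("Key").contains("sd")).show()
# +-----+---+
# |  Key| V1|
# +-----+---+
# |kd123|  1|
# +-----+---+
```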
65,336,695 | 2020-12-17 | https://stackoverflow.com/questions/65336695/python-no-module-named-pip | I use Windows 7 32 bit and Python 3.7. I was trying to install a module with pip and this error came up: cd C:\Windows\System32 pip install pyttsx3 Output: Traceback (most recent call last): File "d:\python\python 3.7\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "d:\python\python 3.7\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "D:\Python\Python 3.7\Scripts\pip.exe\__main__.py", line 5, in <module> ModuleNotFoundError: No module named 'pip' How can I fix this? | Could you try? pip3 install pyttsx3 | 16 | 4 |
65,329,555 | 2020-12-16 | https://stackoverflow.com/questions/65329555/standard-library-logging-plus-loguru | Let's say I'm setting up a script or library that has a few dependencies which use Python's standard library logging module but I want to use loguru to capture all logs. My first naive attempt was a complete failure, but I'm not sure how to proceed. To test I have two files main.py: from loguru import logger from base_log import test_func if __name__ == "__main__": logger.debug("In main") test_func() and base_log.py: import logging logger = logging.getLogger(__name__) def test_func(): logger.warning("In test_func") If I run main.py (i.e. python main.py) then I get the following output: 2020-12-16 10:57:48.269 | DEBUG | __main__:<module>:6 - In main In test_func When I expect: 2020-12-16 11:01:34.408 | DEBUG | __main__:<module>:6 - In main 2020-12-16 11:01:34.408 | WARNING | base_log:test_func:9 - In test_func | You can use a custom handler to intercept standard logging messages toward your Loguru sinks as documented here. main.py will then look something like this: import logging from loguru import logger from base_log import test_func class InterceptHandler(logging.Handler): def emit(self, record): try: level = logger.level(record.levelname).name except ValueError: level = record.levelno frame, depth = logging.currentframe(), 2 while frame.f_code.co_filename == logging.__file__: frame = frame.f_back depth += 1 logger.opt(depth=depth, exception=record.exc_info).log( level, record.getMessage() ) if __name__ == "__main__": logging.basicConfig(handlers=[InterceptHandler()], level=0) logger.debug("In main") test_func() Output: 2020-12-16 22:15:55.337 | DEBUG | __main__:<module>:26 - In main 2020-12-16 22:15:55.337 | WARNING | base_log:test_func:7 - In test_func This should work for all well-behaved libraries that do not add any handlers other than NullHandler to the libraries' loggers. The rest may need extra work. | 6 | 6 |
65,331,297 | 2020-12-16 | https://stackoverflow.com/questions/65331297/how-to-make-a-column-based-on-previous-values-in-dataframe | I have a data frame: user_id url 111 google.com 111 youtube.com 111 youtube.com 111 google.com 111 stackoverflow.com 111 google.com 222 twitter.com 222 google.com 222 twitter.com I want to create a column that will show the fact of visiting this URL before. Desired output: user_id url target 111 google.com 0 111 youtube.com 0 111 youtube.com 1 111 google.com 1 111 stackoverflow.com 0 111 google.com 1 222 twitter.com 0 222 google.com 0 222 twitter.com 1 I can do this with a loop but it doesn't look good. Is it possible to make it with pandas? | Use duplicated: df['target'] = df.duplicated().astype(int) print(df) Output user_id url target 0 111 google.com 0 1 111 youtube.com 0 2 111 youtube.com 1 3 111 google.com 1 4 111 stackoverflow.com 0 5 111 google.com 1 6 222 twitter.com 0 7 222 google.com 0 8 222 twitter.com 1 | 6 | 5 |
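If the real frame carries more columns than the example, passing subset keeps the check limited to the user/url pair; the subset argument is an addition here, not part of the accepted answer.

```python
import pandas as pd

df = pd.DataFrame({
    "user_id": [111, 111, 111, 222, 222],
    "url": ["google.com", "youtube.com", "google.com", "twitter.com", "twitter.com"],
    "duration": [3, 10, 4, 7, 2],   # extra column that should not affect the flag
})
df["target"] = df.duplicated(subset=["user_id", "url"]).astype(int)
print(df)
```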
65,327,494 | 2020-12-16 | https://stackoverflow.com/questions/65327494/how-to-join-elements-within-an-array-of-strings | I have this array of strings that I have split element wise so I can do stuff to it. Now I want to return them back into a sentence. my_array = (['T', 'e', 's', 't', ' ', 'w', 'o', 'r', 'd', 's', '!']) Is there a way to join them togeter back into a sentence whilst keeping the formatting? Ideal output would be something similar to the following: joined_array = (['Test Words!']) | Try join: my_array = (['T', 'e', 's', 't', ' ', 'w', 'o', 'r', 'd', 's', '!']) joined_array = ''.join(my_array) Which gives: 'Test words!' | 10 | 15 |
65,323,350 | 2020-12-16 | https://stackoverflow.com/questions/65323350/how-to-sum-rows-in-the-same-column-than-the-category-in-pandas-dataframe-pytho | I have been working on formatting a log file and finally I have arrived to the following dataframe sample, where the categories and numbers I want to add are in the same column: df = pd.DataFrame(dict(a=['Cat. A',1,1,3,'Cat. A',2,2,'Cat. B',3,5,2,6,'Cat. B',1,'Cat. C',4])) >>> a 0 Cat. A 1 1 2 1 3 3 4 Cat. A 5 2 6 2 7 Cat. B 8 3 9 5 10 2 11 6 12 Cat. B 13 1 14 Cat. C 15 4 If I sum all the numbers below each category I want to obtain: Cat. A= 1+1+3+2+2 = 9 Cat. B= 3+5+2+6+1 = 17 Cat. C= 4 I know how to do with going through all the files in the classic way, but I would like to know how to do it in a most pythonic way, considering that the numbers of rows for each category can be variable, and that the numbers of times that the category appears in each dataframe can be different too. | We can use pd.to_numeric to mark non-numeric fields as nan using Series.mask and Series.notna then use for group. Then use GroupBy.sum a = pd.to_numeric(df['a'], errors='coerce') g = df['a'].mask(a.notna()).ffill() a.groupby(g).sum() Cat. A 9.0 Cat. B 17.0 Cat. C 4.0 Name: a, dtype: float64 | 6 | 3 |
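The accepted answer restated with named intermediate steps, to make the mask/ffill/groupby chain easier to follow (the variable names are mine, the logic is unchanged):

```python
import pandas as pd

df = pd.DataFrame(dict(a=['Cat. A',1,1,3,'Cat. A',2,2,'Cat. B',3,5,2,6,'Cat. B',1,'Cat. C',4]))

numbers = pd.to_numeric(df['a'], errors='coerce')      # NaN on the category rows
categories = df['a'].mask(numbers.notna()).ffill()     # forward-fill the labels downwards
totals = numbers.groupby(categories).sum()
print(totals)
# Cat. A     9.0
# Cat. B    17.0
# Cat. C     4.0
```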
65,316,586 | 2020-12-16 | https://stackoverflow.com/questions/65316586/get-the-run-id-for-an-mlflow-experiment-with-the-name | I currently created an experiment in mlflow and created multiple runs in the experiment. from sklearn.ensemble import RandomForestRegressor from sklearn.metrics import mean_squared_error import mlflow experiment_name="experiment-1" mlflow.set_experiment(experiment_name) no_of_trees=[100,200,300] depths=[2,3,4] for trees in no_of_trees: for depth in depths: with mlflow.start_run() as run: model=RandomForestRegressor(n_estimators=trees, criterion='mse',max_depth=depth) model.fit(x_train, y_train) predictions=model.predict(x_cv) mlflow.log_metric('rmse',mean_squared_error(y_cv, predictions)) after creating the runs, I wanted to get the best run_id for this experiment. for now, I can get the best run by looking at the UI of mlflow, but how can we do it programmatically? | we can get the experiment id from the experiment name and we can use python API to get the best runs. experiment_name = "experiment-1" current_experiment=dict(mlflow.get_experiment_by_name(experiment_name)) experiment_id=current_experiment['experiment_id'] By using the experiment id, we can get all the runs and we can sort them based on metrics like below. In the below code, rmse is my metric name (so it may be different for you based on metric name) df = mlflow.search_runs([experiment_id], order_by=["metrics.rmse DESC"]) best_run_id = df.loc[0,'run_id'] | 14 | 30 |
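An alternative sketch with the tracking client, asking the backend for only the top run; it assumes the same experiment and rmse metric names, and sorts ascending since a lower rmse is usually the better run (flip to DESC for a higher-is-better metric).

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()
experiment = client.get_experiment_by_name("experiment-1")

runs = client.search_runs(
    experiment_ids=[experiment.experiment_id],
    order_by=["metrics.rmse ASC"],   # lowest rmse first
    max_results=1,
)
best_run = runs[0]
print(best_run.info.run_id, best_run.data.metrics["rmse"])
```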
65,226,602 | 2020-12-9 | https://stackoverflow.com/questions/65226602/why-is-plus-equals-valid-for-list-and-dictionary | Adding a dictionary to a list using the __iadd__ notation seems to add the keys of the dictionary as elements in the list. Why? For example a = [] b = {'hello': 'world'} a += b print(a) # -> ['hello'] The documentation for plus-equals on collections doesn't imply to me that this should happen: For instance, to execute the statement x += y, where x is an instance of a class that has an __iadd__() method, x.__iadd__(y) is called. If x is an instance of a class that does not define a __iadd__() method, x.__add__(y) and y.__radd__(x) are considered, as with the evaluation of x + y But, logically, both a + b and b + a raise a TypeError. Furthermore, b += a raises a TypeError too. I don't see any special implementation in the source that would explain things, but I'm not 100% sure where to look. The closest question on SO I found is this one, asking about += on dictionaries, but that's just asking about a data structure with itself. This one had a promising title about list self-addition, but it claims "__add__" is being applied under the hood, which shouldn't be defined between lists and dictionaries. My best guess is that the __iadd__ is invoking extend, which is defined here, and then it tries to iterate over the dictionary, which in turn yields its keys. But this seems... weird? And I don't see any intuition of that coming from the docs. | My best guess is that the __iadd__ is invoking extend, which is defined here, and then it tries to iterate over the dictionary, which in turn yields its keys. But this seems... weird? And I don't see any intuition of that coming from the docs. This is the correct answer for why this happens. I've found the relevant docs that say this- In the docs you can see that in fact __iadd__ is equivalent to .extend(), and here it says: list.extend(iterable): Extend the list by appending all the items from the iterable. In the part about dicts it says: Performing list(d) on a dictionary returns a list of all the keys used in the dictionary So to summarize, a_list += a_dict is equivalet to a_list.extend(iter(a_dict)), which is equivalent to a_list.extend(a_dict.keys()), which will extend the list with the list of keys in the dictionary. We can maybe discuss on why this is the way things are, but I don't think we will find a clear-cut answer. I think += is a very useful shorthand for .extend, and also that a dictionary should be iterable (personally I'd prefer it returning .items(), but oh well) Edit: You seem to be interested in the actual implementation of CPython, so here are some code pointers: dict iterator returning keys: static PyObject * dict_iter(PyDictObject *dict) { return dictiter_new(dict, &PyDictIterKey_Type); } list.extend(iterable) calling iter() on its argument: static PyObject * list_extend(PyListObject *self, PyObject *iterable) { ... it = PyObject_GetIter(iterable); ... } += being equivalent to list.extend(): static PyObject * list_inplace_concat(PyListObject *self, PyObject *other) { ... result = list_extend(self, other); ... } and then this method seems to be referenced above inside a PySequenceMethods struct, which seems to be an abstraction of sequences that defines common actions such as concatenating in-place, and concatenating normally (which is defined as list_concat in the same file and you can see is not the same). | 15 | 16 |
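A short interactive-style demonstration of the equivalences described in the answer (the dict contents are arbitrary):

```python
a = []
b = {'hello': 'world', 'foo': 'bar'}

a += b        # in-place add behaves like a.extend(b)
print(a)      # ['hello', 'foo']

c = []
c.extend(b)   # extend takes any iterable; iterating a dict yields its keys
print(c)      # ['hello', 'foo']

try:
    a + b     # plain + has no list/dict overload
except TypeError as e:
    print(e)  # can only concatenate list (not "dict") to list
```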
65,241,321 | 2020-12-10 | https://stackoverflow.com/questions/65241321/python-shift-enter-not-working-in-vscode-with-jupyter | I have a new install of VS Code Version 1.50.1 with the python extension that now added the Jupyter extension. The Jupyter extension build number is 2020.12.411183115 When I press shift enter on the default it adds a new line below. You can see in the video that shift + enter should work to run the line. At this point the only way I can run code in the interactive window is with ctrl + shift + p and select run selected cell. Edit after working with Danny Varod and the comments below: Changing the keyboard shortcut to ctrl + enter and nothing happens (it does not add a new line below). I press ctrl + shift + p and I see that the correct shortcut is crtl + enter but it will not trigger the action Here is a screenshot of my keyboard shortcuts before the change. Changing my keyboard shortcuts to match the comment below. Now when I press ctrl + enter nothing happens. | Please use the following shortcut key settings: { "key": "shift+enter", "command": "jupyter.execSelectionInteractive", "when": "editorTextFocus" }, This shortcut key is set with the use conditions, and it can be used only when it is confirmed (including the control panel is opened). Therefore, we can remove the use conditions for this shortcut key. | 18 | 7 |
65,230,006 | 2020-12-10 | https://stackoverflow.com/questions/65230006/how-to-create-a-figure-of-subplots-of-grouped-bar-charts-in-python | I want to combine multiple grouped bar charts into one figure, as the image below shows. grouped bar charts in a single figure import matplotlib import matplotlib.pyplot as plt import numpy as np labels = ['G1', 'G2', 'G3'] yesterday_test1_mean = [20, 12, 23] yesterday_test2_mean = [21, 14, 25] today_test1_mean = [18, 10, 12] today_test2_mean = [13, 13, 9] Firstly I created each grouped bar chart by plt.subplots() x = np.arange(len(labels)) width = 0.3 fig1, ax = plt.subplots() rects1 = ax.bar(x-width/2, yesterday_test1_mean, width) rects2 = ax.bar(x+width/2, yesterday_test2_mean, width) fig2, ax = plt.subplots() rects3 = ax.bar(x-width/2, today_test1_mean, width) rects4 = ax.bar(x+width/2, today_test2_mean, width) Then, I used add_subplot in an attempt to treat fig1 and fig2 as new axes in a new figure. fig_all = plt.figure() fig1 = fig_all.add_subplot(1,2,1) fig2 = fig_all.add_subplot(1,2,2) fig_all.tight_layout() plt.show() But it didn't work. How can I combined several grouped bar charts into a single figure? Thanks in advance. | Well, I tried something. Here's a rough result. Only thing I changed is that rather using axes, I am just using subplot as I learned over time. So with fig and axes as output, there must be a way too. But this is all I've ever used. I've not added the legend and title yet, but I guess you can try it on your own too. Here's the code with just small change: import matplotlib.pyplot as plt import numpy as np labels = ['G1', 'G2', 'G3'] yesterday_test1_mean = [20, 12, 23] yesterday_test2_mean = [21, 14, 25] today_test1_mean = [18, 10, 12] today_test2_mean = [13, 13, 9] x = np.arange(len(labels)) width = 0.3 plt.figure(figsize=(12,5)) plt.subplot(121) plt.bar(x-width/2, yesterday_test1_mean, width) plt.bar(x+width/2, yesterday_test2_mean, width) plt.subplot(122) plt.bar(x-width/2, today_test1_mean, width) plt.bar(x+width/2, today_test2_mean, width) plt.show() And here's your initial result: While you see the result and try some stuff on your own, let me try to add the labels and legend to it as well as you've provided in the sample image. Edit: The final output So here it is, the exact thing you're looking for: Code: import matplotlib.pyplot as plt import numpy as np labels = ['G1', 'G2', 'G3'] yesterday_test1_mean = [20, 12, 23] yesterday_test2_mean = [21, 14, 25] today_test1_mean = [18, 10, 12] today_test2_mean = [13, 13, 9] x = np.arange(len(labels)) width = 0.3 plt.figure(figsize=(12,5)) plt.subplot(121) plt.title('Yesterday', fontsize=18) plt.bar(x-width/2, yesterday_test1_mean, width, label='test1', hatch='//', color=np.array((199, 66, 92))/255) plt.bar(x+width/2, yesterday_test2_mean, width, label='test2', color=np.array((240, 140, 58))/255) plt.xticks([0,1,2], labels, fontsize=15) plt.subplot(122) plt.title('Today', fontsize=18) plt.bar(x-width/2, today_test1_mean, width, hatch='//', color=np.array((199, 66, 92))/255) plt.bar(x+width/2, today_test2_mean, width, color=np.array((240, 140, 58))/255) plt.xticks([0,1,2], labels, fontsize=15) plt.figlegend(loc='upper right', ncol=1, labelspacing=0.5, fontsize=14, bbox_to_anchor=(1.11, 0.9)) plt.tight_layout(w_pad=6) plt.show() | 6 | 7 |
65,278,110 | 2020-12-13 | https://stackoverflow.com/questions/65278110/how-does-gunicorn-distribute-requests-across-sync-workers | I am using gunicorn to run a simple HTTP server1 using e.g. 8 sync workers (processes). For practical reasons I am interested in knowing how gunicorn distributes incoming requests between these workers. Assume that all requests take the same time to complete. Is the assignment random? Round-robin? Resource-based? The command I use to run the server: gunicorn --workers 8 bind 0.0.0.0:8000 main:app 1 I'm using FastAPI but I believe this is not relevant for this question. | Gunicorn does not distribute requests. Each worker is spawned with the same LISTENERS (e.g. gunicorn.sock.TCPSocket) in Arbiter.spawn_worker(), and calls listener.accept() on its own. The assignment in the blocking OS calls to the socket's accept() method β i.e. whichever worker is later woken up by the OS kernel and given the client connection β is an OS implementation detail that, empirically, is neither round-robin nor resource-based. Reference from the docs From https://docs.gunicorn.org/en/stable/design.html: Gunicorn is based on the pre-fork worker model. ... The master never knows anything about individual clients. All requests and responses are handled completely by worker processes. Gunicorn relies on the operating system to provide all of the load balancing when handling requests. Other reading Is there any way Load balancing on a preforked multi process TCP server build by asyncio? http://www.citi.umich.edu/projects/linux-scalability/reports/accept.html | 13 | 9 |
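A toy pre-fork server (not Gunicorn's actual code) illustrating the mechanism the answer describes: the listening socket is created once, each forked worker blocks in accept() on it, and the kernel alone decides which worker gets a given connection. Unix-only because of os.fork; the port and worker count are arbitrary.

```python
import os
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("127.0.0.1", 8000))
sock.listen(128)

for _ in range(4):                        # pre-fork: 4 workers share one listener
    if os.fork() == 0:                    # child process = worker
        while True:
            conn, _ = sock.accept()       # every worker blocks here; the kernel
            body = str(os.getpid()).encode()  # picks which one wakes up
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: "
                         + str(len(body)).encode() + b"\r\n\r\n" + body)
            conn.close()

for _ in range(4):                        # parent just waits (stop with Ctrl+C)
    os.wait()
```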
65,290,242 | 2020-12-14 | https://stackoverflow.com/questions/65290242/pythons-platform-mac-ver-reports-incorrect-macos-version | I'm using Python platform module to identify the MacOS version like this: import platform print(platform.mac_ver()) Output: In [1]: import platform In [2]: platform.mac_ver() Out[2]: ('10.16', ('', '', ''), 'x86_64') I have updated to BigSur and the version is incorrect, it should be 11.0.1 I looked at the source code of platform and it seems to parse a this file /System/Library/CoreServices/SystemVersion.plist to get the information. When reading this file from Python I get an incorrect version but from bash it is the correct one Bash: Amirs-MacBook-Pro:~ arossert$ cat /System/Library/CoreServices/SystemVersion.plist <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>ProductBuildVersion</key> <string>20B50</string> <key>ProductCopyright</key> <string>1983-2020 Apple Inc.</string> <key>ProductName</key> <string>macOS</string> <key>ProductUserVisibleVersion</key> <string>11.0.1</string> <key>ProductVersion</key> <string>11.0.1</string> <key>iOSSupportVersion</key> <string>14.2</string> </dict> </plist> Python: In [4]: print(open("/System/Library/CoreServices/SystemVersion.plist").read()) <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>ProductBuildVersion</key> <string>20B50</string> <key>ProductCopyright</key> <string>1983-2020 Apple Inc.</string> <key>ProductName</key> <string>Mac OS X</string> <key>ProductUserVisibleVersion</key> <string>10.16</string> <key>ProductVersion</key> <string>10.16</string> <key>iOSSupportVersion</key> <string>14.2</string> </dict> </plist> What am I missing? This is the output from the same ipython session In [3]: print(open("/System/Library/CoreServices/SystemVersion.plist").read()) <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>ProductBuildVersion</key> <string>20B50</string> <key>ProductCopyright</key> <string>1983-2020 Apple Inc.</string> <key>ProductName</key> <string>Mac OS X</string> <key>ProductUserVisibleVersion</key> <string>10.16</string> <key>ProductVersion</key> <string>10.16</string> <key>iOSSupportVersion</key> <string>14.2</string> </dict> </plist> In [4]: cat /System/Library/CoreServices/SystemVersion.plist <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>ProductBuildVersion</key> <string>20B50</string> <key>ProductCopyright</key> <string>1983-2020 Apple Inc.</string> <key>ProductName</key> <string>macOS</string> <key>ProductUserVisibleVersion</key> <string>11.0.1</string> <key>ProductVersion</key> <string>11.0.1</string> <key>iOSSupportVersion</key> <string>14.2</string> </dict> </plist> | In the Known Issues section of the Big Sur release notes, the following is present: Some third-party scripts might produce unexpected results due to the change in macOS version from 10.x to 11. 
(62477208) Workaround: Set SYSTEM_VERSION_COMPAT=1 in the calling environment, for example: $ SYSTEM_VERSION_COMPAT=1 legacy_script.pl There is also a rather extensive 3rd-party writeup at https://eclecticlight.co/2020/08/13/macos-version-numbering-isnt-so-simple/ Applications compiled for Big Sur or later get back "11.0" as the operating system version. Applications compiled for earlier versions get "10.16". This is so logic that assumes 10 as the prefix will not break. The environment variable SYSTEM_VERSION_COMPAT can be used to control which version of the file is returned; SYSTEM_VERSION_COMPAT=0 cat /System/Library/CoreServices/SystemVersion.plist returns 11.0.1, whereas SYSTEM_VERSION_COMPAT=1 cat /System/Library/CoreServices/SystemVersion.plist returns 10.16. (Note that there should be a space, not a newline, between the assignment and the invocation of cat, such that the shell sees this as a transient environment variable assignment rather than the assignment of a non-exported shell variable). | 9 | 12 |
65,292,230 | 2020-12-14 | https://stackoverflow.com/questions/65292230/reply-to-a-message-discord-py | I want to make my bot react to a users message when they type a certain sentence. My code to reply: await ctx.message.reply("I just replied to you") I get the error: ctx.message has no attribute "reply" What code can I do to make the bot reply to the message? When I say reply, I mean the same as a user can press reply on a message | To any new user here, as of the 1.6.0 discord.py-rewrite update, you are now able to reply! Every message or context now has a reply attribute. To reply, simply use await ctx.reply('Hello!') You can also not mention the author in the reply with mention_author=False await ctx.reply('Hello!', mention_author=False) You can also find a basic example Here | 14 | 32 |
65,248,401 | 2020-12-11 | https://stackoverflow.com/questions/65248401/why-is-my-confusion-matrix-returning-only-one-number | I'm doing a binary classification. Whenever my prediction equals the ground truth, I find sklearn.metrics.confusion_matrix to return a single value. Isn't there a problem? from sklearn.metrics import confusion_matrix print(confusion_matrix([True, True], [True, True]) # [[2]] I would expect something like: [[2 0] [0 0]] | You should fill-in labels=[True, False]: from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_true=[True, True], y_pred=[True, True], labels=[True, False]) print(cm) # [[2 0] # [0 0]] Why? From the docs, the output of confusion_matrix(y_true, y_pred) is: C: ndarray of shape (n_classes, n_classes) The variable n_classes is either: guessed as the number of unique values in y_true or y_pred taken from the length of optional parameters labels In your case, because you did not fill in labels, the variable n_classes is guessed from the number of unique values in [True, True] which is 1. Hence the result. | 10 | 13 |
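With the labels pinned down, the common 4-way unpacking also works; ordering the labels as [False, True] puts the true negatives in the top-left cell (same toy vectors as the question):

```python
from sklearn.metrics import confusion_matrix

tn, fp, fn, tp = confusion_matrix(
    y_true=[True, True],
    y_pred=[True, True],
    labels=[False, True],
).ravel()
print(tn, fp, fn, tp)   # 0 0 0 2
```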
65,235,535 | 2020-12-10 | https://stackoverflow.com/questions/65235535/how-long-does-the-event-loop-live-in-a-django-3-1-async-view | I am playing around with the new async views from Django 3.1. Some benefits I would love to have is to do some simple fire-and-forget "tasks" after the view already gave its HttpResponse, like sending a push notification or sending an email. I am not looking for solutions with third-party packages like celery! To test this async views I used some code from this tutorial: https://testdriven.io/blog/django-async-views/ async def http_call_async(): for num in range(1, 200): await asyncio.sleep(1) print(num) # TODO: send email async print('done') async def async_view(request): loop = asyncio.get_event_loop() loop.create_task(http_call_async()) return HttpResponse("Non-blocking HTTP request") I started the django server with uvicorn. When I make a request to this view it will immediately return the HTTP-response "Non-blocking HTTP request". In the meantime and after the HTTP-response, the event loop continues happily to print the numbers up to 200 and then print "done". This is exactly the behaviour I want to utilize for my fire-and-forget tasks. Unfortunatly I could not find any information about the lifetime of the event loop running this code. How long does the event loop live? Is there a timeout? On what does it depend? On uvicorn? Is that configurable? Are there any resources which discuss this topic? | Django (which does not provide an ASGI server) does not limit the lifetime of the event loop. In the case of this question: ASGI server: Uvicorn Event loop: either the built-in asyncio event loop or uvloop (both do not limit their lifetime) For Uvicorn: How long does the event loop live? The event loop lives as long as the worker lives. Is there a timeout? No, tasks run until completion (unless the worker is unresponsive and is killed by Gunicorn). On what does it depend? On Uvicorn? Worker lifetime is mainly limited by --limit-max-requests (Gunicorn: --max-requests). Is that configurable? Yes, but the worker still exits at some point. It may also be killed for other reasons. See the issue for yourself by specifying --limit-max-requests 2 and running the view twice: uvicorn mysite.asgi:application --limit-max-requests 2 Graceful shutdown Regardless of how the lifetime is limited, what we might care about is how to handle a shutdown. How does Uvicorn do graceful shutdown? Let's see how Uvicorn worker (uvicorn.server.Server) does graceful shutdown for requests. uvicorn.protocols.http.h11_impl.H11Protocol#__init__: Store a reference to server tasks # Shared server state self.server_state = server_state self.tasks = server_state.tasks uvicorn.protocols.http.h11_impl.H11Protocol#handle_events: Add request task to tasks self.cycle = RequestResponseCycle( ... ) task = self.loop.create_task(self.cycle.run_asgi(app)) task.add_done_callback(self.tasks.discard) self.tasks.add(task) uvicorn.server.Server#shutdown: Wait for existing tasks to complete if self.server_state.tasks and not self.force_exit: msg = "Waiting for background tasks to complete. (CTRL+C to force quit)" logger.info(msg) while self.server_state.tasks and not self.force_exit: await asyncio.sleep(0.1) How to make Uvicorn worker wait for your task? Let's piggyback on this by adding your tasks to the server tasks. 
async def async_view(request): loop = asyncio.get_event_loop() # loop.create_task(http_call_async()) # Replace this task = loop.create_task(http_call_async()) # with this server_state = await get_server_state() # Add this task.add_done_callback(server_state.tasks.discard) # Add this server_state.tasks.add(task) # Add this return HttpResponse("Non-blocking HTTP request") import gc from uvicorn.server import ServerState _server_state = None @sync_to_async def get_server_state(): global _server_state if not _server_state: objects = gc.get_objects() _server_state = next(o for o in objects if isinstance(o, ServerState)) return _server_state Now, Uvicorn worker will wait for your tasks in a graceful shutdown. | 9 | 4 |
65,273,118 | 2020-12-13 | https://stackoverflow.com/questions/65273118/why-is-tensorflow-not-recognizing-my-gpu-after-conda-install | I am new to deep learning and I have been trying to install tensorflow-gpu version in my pc in vain for the last 2 days. I avoided installing CUDA and cuDNN drivers since several forums online don't recommend it due to numerous compatibility issues. Since I was already using the conda distribution of python before, I went for the conda install -c anaconda tensorflow-gpu as written in their official website here: https://anaconda.org/anaconda/tensorflow-gpu . However even after installing the gpu version in a fresh virtual environment (to avoid potential conflicts with pip installed libraries in the base env), tensorflow doesn't seem to even recognize my GPU for some mysterious reason. Some of the code snippets I ran(in anaconda prompt) to understand that it wasn't recognizing my GPU:- 1. >>>from tensorflow.python.client import device_lib >>>print(device_lib.list_local_devices()) [name: "/device:CPU:0" device_type: "CPU" memory_limit: 268435456 locality { } incarnation: 7692219132769779763 ] As you can see it completely ignores the GPU. 2. >>>tf.debugging.set_log_device_placement(True) >>>a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]) 2020-12-13 10:11:30.902956: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX AVX2 To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. >>>b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]) >>>c = tf.matmul(a, b) >>>print(c) tf.Tensor( [[22. 28.] [49. 64.]], shape=(2, 2), dtype=float32) Here, it was supposed to indicate that it ran with a GPU by showing Executing op MatMul in device /job:localhost/replica:0/task:0/device:GPU:0 (as written here: https://www.tensorflow.org/guide/gpu) but nothing like that is present. Also I am not sure what the message after the 2nd line means. I have also searched for several solutions online including here but almost all of the issues are related to the first manual installation method which I haven't tried yet since everyone recommended this approach. I don't use cmd anymore since the environment variables somehow got messed up after uninstalling tensorflow-cpu from the base env and on re-installing, it worked perfectly with anaconda prompt but not cmd. This is a separate issue (and widespread also) but I mentioned it in case that has a role to play here. I installed the gpu version in a fresh virtual environment to ensure a clean installation and as far as I understand path variables need to be set up only for manual installation of CUDA and cuDNN libraries. The card which I use:-(which is CUDA enabled) C:\WINDOWS\system32>wmic path win32_VideoController get name Name NVIDIA GeForce 940MX Intel(R) HD Graphics 620 Tensorflow and python version I am using currently:- >>> import tensorflow as tf >>> tf.__version__ '2.3.0' Python 3.8.5 (default, Sep 3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32 Type "help", "copyright", "credits" or "license" for more information. System information: Windows 10 Home, 64-bit operating system, x64-based processor. Any help would be really appreciated. Thanks in advance. 
| August 2021 Conda install may be working now, as according to @ComputerScientist in the comments below, conda install tensorflow-gpu==2.4.1 will give cudatoolkit-10.1.243 and cudnn-7.6.5 The following was written in Jan 2021 and is out of date Currently conda install tensorflow-gpu installs tensorflow v2.3.0 and does NOT install the conda cudnn or cudatoolkit packages. Installing them manually (e.g. with conda install cudatoolkit=10.1) does not seem to fix the problem either. A solution is to install an earlier version of tensorflow, which does install cudnn and cudatoolkit, then upgrade with pip conda install tensorflow-gpu=2.1 pip install tensorflow-gpu==2.3.1 (2.4.0 uses cuda 11.0 and cudnn 8.0, however cudnn 8.0 is not in anaconda as of 16/12/2020) Edit: please also see @GZ0's answer, which links to a github discussion with a one-line solution | 30 | 50 |
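Whichever install route ends up being used, a quick way to re-check GPU visibility afterwards, using the public TF 2.x config API:

```python
import tensorflow as tf

print("TF version:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))
```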
65,271,060 | 2020-12-12 | https://stackoverflow.com/questions/65271060/is-there-a-built-in-way-to-use-inline-c-code-in-python | Even if numba, cython (and especially cython.inline) exist, in some cases, it would be interesting to have inline C code in Python. Is there a built-in way (in Python standard library) to have inline C code? PS: scipy.weave used to provide this, but it's Python 2 only. | Directly in the Python standard library, probably not. But it's possible to have something very close to inline C in Python with the cffi module (pip install cffi). Here is an example, inspired by this article and this question, showing how to implement a factorial function in Python + "inline" C: from cffi import FFI ffi = FFI() ffi.set_source("_test", """ long factorial(int n) { long r = n; while(n > 1) { n -= 1; r *= n; } return r; } """) ffi.cdef("""long factorial(int);""") ffi.compile() from _test import lib # import the compiled library print(lib.factorial(10)) # 3628800 Notes: ffi.set_source(...) defines the actual C source code ffi.cdef(...) is the equivalent of the .h header file you can of course add some cleaning code after, if you don't need the compiled library at the end (however, cython.inline does the same and the compiled .pyd files are not cleaned by default, see here) this quick inline use is particularly useful during a prototyping / development phase. Once everything is ready, you can separate the build (that you do only once), and the rest of the code which imports the pre-compiled library It seems too good to be true, but it seems to work! | 6 | 7 |
65,278,555 | 2020-12-13 | https://stackoverflow.com/questions/65278555/typeerror-nan-inf-not-supported-in-write-number-without-nan-inf-to-errors-w | I want to save X (ndarray) with dimensions (3960, 225) in excel file (.xlsx). In X I have some missing values (nan). I made a code for it. However, I am getting the error. Here is the Code: workbook = xlsxwriter.Workbook('arrays.xlsx') worksheet = workbook.add_worksheet() row = 0 for col, data in enumerate(X): worksheet.write_column(row, col, data) workbook.close() df = pd.DataFrame(X) ## save to xlsx file filepath = 'my_excel_file.xlsx' df.to_excel(filepath, index=False) Here is the traceback: Traceback (most recent call last): File "C:\Program Files\JetBrains\PyCharm 2020.2.1\plugins\python\helpers\pydev\pydevd.py", line 1448, in _exec pydev_imports.execfile(file, globals, locals) # execute the script File "C:\Program Files\JetBrains\PyCharm 2020.2.1\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "C:/Users/Nafees Ahmed/PycharmProjects/Extra_Sensory_Experimetns/main.py", line 475, in <module> worksheet.write_column(row, col, data) File "C:\Users\Nafees Ahmed\AppData\Local\Programs\Python\Python38\lib\site-packages\xlsxwriter\worksheet.py", line 69, in cell_wrapper return method(self, *args, **kwargs) File "C:\Users\Nafees Ahmed\AppData\Local\Programs\Python\Python38\lib\site-packages\xlsxwriter\worksheet.py", line 1164, in write_column error = self._write(row, col, token, cell_format) File "C:\Users\Nafees Ahmed\AppData\Local\Programs\Python\Python38\lib\site-packages\xlsxwriter\worksheet.py", line 481, in _write return self._write_number(row, col, *args) File "C:\Users\Nafees Ahmed\AppData\Local\Programs\Python\Python38\lib\site-packages\xlsxwriter\worksheet.py", line 589, in _write_number raise TypeError( TypeError: NAN/INF not supported in write_number() without 'nan_inf_to_errors' Workbook() option Perhaps, it is coming due to nan (missing) values. Is there any simple way to handle this error? | Filling NaN values with zero, does not solve the problem, If you want to keep NaN values as NaN, you should skip filling value in like that: row = 0 for col, data in enumerate(X): try: worksheet.write_column(row, col, data) except: pass | 8 | 10 |
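The error message itself names an alternative to skipping cells: the nan_inf_to_errors Workbook option, which converts NaN/INF into Excel error values instead of raising. A small sketch, separate from the accepted answer, with a made-up array:

```python
import numpy as np
import xlsxwriter

X = np.array([[1.0, np.nan, 3.0],
              [np.inf, 5.0, 6.0]])

workbook = xlsxwriter.Workbook("arrays.xlsx", {"nan_inf_to_errors": True})
worksheet = workbook.add_worksheet()
for col, data in enumerate(X):
    worksheet.write_column(0, col, data)   # NaN -> #NUM!, INF -> #DIV/0!
workbook.close()
```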
65,263,059 | 2020-12-12 | https://stackoverflow.com/questions/65263059/sampling-a-fixed-length-sequence-from-a-numpy-array | I have a data matrix a and I have list of indices stored in array idx. I would like to get 10-length data starting at each of the indices defined by idx . Right now I use a for loop to achieve this. But it is extremely slow as I have to do this data fetch about 1000 times in an iteration. Below is a minimum working example. import numpy as np a = np.random.random(1000) idx = np.array([1, 5, 89, 54]) # I want "data" array to have np.array([a[1:11], a[5:15], a[89:99], a[54:64]]) # I use for loop below but it is slow data = [] for id in idx: data.append(a[id:id+10]) data = np.array(data) Is there anyway to speed up this process? Thanks. EDIT: My question is different from the question asked here. In the question, the size of the chunks is random in contrast to fixed chunk size in my question. Other differences exist. I do not have to use up the entire array a and an element can occur in more than one chunk. My question does not necessarily "split" the array. | (Thanks to suggestion from @MadPhysicist) This should work: a[idx.reshape(-1, 1) + np.arange(10)] Output: Shape (L,10), where L is the length of idx Notes: This does not check for index-out-of-bound situations. I suppose it's easy to first ensure that idx doesn't contain such values. Using np.take(a, idx.reshape(-1, 1) + np.arange(10), mode='wrap') is an alternative, that will handle out-of-bounds indices by wrapping them around a. Passing mode='clip' instead of mode='wrap' would clip the excessive indices to the last index of a. But then, np.take() would probably have a completely different perf. characteristic / scaling characteristic. | 7 | 7 |
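The accepted one-liner combined with a guard against windows that would run past the end of a; the guard and the extra index 995 are additions for illustration.

```python
import numpy as np

a = np.random.random(1000)
idx = np.array([1, 5, 89, 54, 995])

idx = idx[idx + 10 <= len(a)]             # drop start points whose window overflows
data = a[idx[:, None] + np.arange(10)]    # shape (len(idx), 10)

assert data.shape == (4, 10)
assert np.array_equal(data[0], a[1:11])
```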
65,279,115 | 2020-12-13 | https://stackoverflow.com/questions/65279115/how-to-use-collate-fn-with-dataloaders | I am trying to train a pretrained roberta model using 3 inputs, 3 input_masks and a label as tensors of my training dataset. I do this using the following code: from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler batch_size = 32 # Create the DataLoader for our training set. train_data = TensorDataset(train_AT, train_BT, train_CT, train_maskAT, train_maskBT, train_maskCT, labels_trainT) train_dataloader = DataLoader(train_data, batch_size=batch_size) # Create the Dataloader for our validation set. validation_data = TensorDataset(val_AT, val_BT, val_CT, val_maskAT, val_maskBT, val_maskCT, labels_valT) val_dataloader = DataLoader(validation_data, batch_size=batch_size) # Pytorch Training training_args = TrainingArguments( output_dir='C:/Users/samvd/Documents/Master/AppliedMachineLearning/FinalProject/results', # output directory num_train_epochs=1, # total # of training epochs per_device_train_batch_size=32, # batch size per device during training per_device_eval_batch_size=32, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='C:/Users/samvd/Documents/Master/AppliedMachineLearning/FinalProject/logs', # directory for storing logs ) trainer = Trainer( model=model, # the instantiated π€ Transformers model to be trained args=training_args, # training arguments, defined above train_dataset = train_data, # training dataset eval_dataset = validation_data, # evaluation dataset ) trainer.train() However this gives me the following error: TypeError: vars() argument must have dict attribute Now I have found out that it is probably because I don't use collate_fn when using DataLoader, but I can't really find a source that helps me define this correctly so the trainer understands the different tensors I put in. Can anyone point me in the right direction? | Basically, the collate_fn receives a list of tuples if your __getitem__ function from a Dataset subclass returns a tuple, or just a normal list if your Dataset subclass returns only one element. Its main objective is to create your batch without spending much time implementing it manually. Try to see it as a glue that you specify the way examples stick together in a batch. If you donβt use it, PyTorch only put batch_size examples together as you would using torch.stack (not exactly it, but it is simple like that). Suppose for example, you want to create batches of a list of varying dimension tensors. The below code pads sequences with 0 until the maximum sequence size of the batch, that is why we need the collate_fn, because a standard batching algorithm (simply using torch.stack) wonβt work in this case, and we need to manually pad different sequences with variable length to the same size before creating the batch. 
def collate_fn(data): """ data: is a list of tuples with (example, label, length) where 'example' is a tensor of arbitrary shape and label/length are scalars """ _, labels, lengths = zip(*data) max_len = max(lengths) n_ftrs = data[0][0].size(1) features = torch.zeros((len(data), max_len, n_ftrs)) labels = torch.tensor(labels) lengths = torch.tensor(lengths) for i in range(len(data)): j, k = data[i][0].size(0), data[i][0].size(1) features[i] = torch.cat([data[i][0], torch.zeros((max_len - j, k))]) return features.float(), labels.long(), lengths.long() The function above is fed to the collate_fn param in the DataLoader, as this example: DataLoader(toy_dataset, collate_fn=collate_fn, batch_size=5) With this collate_fn function, you always gonna have a tensor where all your examples have the same size. So, when you feed your forward() function with this data, you need to use the length to get the original data back, to not use those meaningless zeros in your computation. Source: Pytorch Forum | 45 | 78 |
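A self-contained usage sketch for the padding collate_fn shown in the answer above; the toy dataset, feature width, and dummy labels are illustrative assumptions, not part of the original answer.

import torch
from torch.utils.data import Dataset, DataLoader

class ToyVarLenDataset(Dataset):
    """Yields (example, label, length) tuples with variable sequence lengths."""
    def __init__(self, n_items=12, n_ftrs=4, max_len=9):
        self.items = []
        for i in range(n_items):
            length = torch.randint(1, max_len + 1, (1,)).item()
            example = torch.randn(length, n_ftrs)    # shape (length, n_ftrs)
            label = i % 3                             # dummy class label
            self.items.append((example, label, length))

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        return self.items[idx]

def collate_fn(data):
    # Same logic as the answer: zero-pad every sequence up to the longest one in the batch.
    _, labels, lengths = zip(*data)
    max_len = max(lengths)
    n_ftrs = data[0][0].size(1)
    features = torch.zeros((len(data), max_len, n_ftrs))
    for i, (example, _, _) in enumerate(data):
        features[i, :example.size(0)] = example
    return features.float(), torch.tensor(labels).long(), torch.tensor(lengths).long()

loader = DataLoader(ToyVarLenDataset(), batch_size=5, collate_fn=collate_fn)
for feats, labels, lengths in loader:
    print(feats.shape, labels.shape, lengths.shape)   # e.g. torch.Size([5, 9, 4]) ...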
65,272,408 | 2020-12-13 | https://stackoverflow.com/questions/65272408/plotly-how-to-embed-a-fully-interactive-plotly-figure-in-excel | I'm trying to embed an interactive plotly (or bokeh) plot into excel. To do this I've tried the following three things: embed a Microsoft Web Browser UserForm into excel, following: How do I embed a browser in an Excel VBA form? This works and enables both online and offline html to be loaded creating a plotly html ''' import plotly import plotly.graph_objects as go x = [0.1, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0] y = [i**2 for i in x] fig = go.Figure() fig.add_trace(go.Scatter(x=x, y=x, mode='markers', name="y=x", marker=dict(color='royalblue', size=8))) fig.add_trace(go.Scatter(x=x, y=y, name="y=x^2", line=dict(width=3))) plotly.offline.plot(fig, filename='C:/Users/.../pythonProject/test1.html') repointing the webbrowser object in excel using .Navigate to the local plotly.html. Banner pops up with ".... restricted this file from showing active content that could access your computer" clicking on the banner, I run into this error: The same HTML can be opened in a browser. Is there any way to show interactive plots in excel? | Finally, I have managed to bring the interactive plot to excel after a discussion from Microsoft QnA and Web Browser Control & Specifying the IE Version To insert a Microsoft webpage to excel you have to change the compatibility Flag in the registry editor Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office\ClickToRun\REGISTRY\MACHINE\Software\Wow6432Node\Microsoft\Office\16.0\Common\COM Compatibility{8856F961-340A-11D0-A96B-00C04FD705A2} Change the DWord 0 instead of 400 Now you can insert the web browser object to excel, Step by step details are here Edit the HTML File generated from plotly manually by adding a tag for Using the X-UA-Compatible HTML Meta Tag Originally generated HTML file from plotly looks like this <html> <head><meta charset="utf-8" /></head> <body> Modified HTML with browser compatibility <html> <head> <meta charset="utf-8" /> <meta http-equiv="X-UA-Compatible" content="IE=edge" /> </head> <body> After this, I can able to view the interactive plot in excel also able to do the interactions same as a web browser Macro used: Sub Button4_Click() ActiveSheet.WebBrowser1.Navigate "file:///C:/Users/vignesh.rajendran/Desktop/test5.html" End Sub | 11 | 1 |
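The manual HTML edit from the answer above can also be scripted; the sketch below is only an illustration (the figure and output path are placeholders) that writes a plotly figure to HTML and injects the X-UA-Compatible meta tag with a plain string replacement.

import plotly
import plotly.graph_objects as go

fig = go.Figure(go.Scatter(x=[0, 1, 2], y=[0, 1, 4], mode='lines+markers'))

path = 'test5.html'  # placeholder; point this at the file the Excel web browser control loads
plotly.offline.plot(fig, filename=path, auto_open=False)

with open(path, 'r', encoding='utf-8') as f:
    html = f.read()

# Insert the IE/Edge compatibility tag right after the charset meta tag, exactly as the
# answer does by hand. If your plotly version emits a differently formatted charset tag,
# adjust the search string; otherwise this replacement is a no-op.
html = html.replace(
    '<meta charset="utf-8" />',
    '<meta charset="utf-8" /><meta http-equiv="X-UA-Compatible" content="IE=edge" />',
    1,
)

with open(path, 'w', encoding='utf-8') as f:
    f.write(html)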
65,314,235 | 2020-12-15 | https://stackoverflow.com/questions/65314235/how-should-i-configure-my-headers-to-make-an-http-2-post-to-apns-to-avoid-recei | I'm pretty new to HTTP stuff, primarily stick to iOS so please bear with me. I'm using the httpx python library to try and send a notification to an iPhone because I have to make an HTTP/2 POST to do so. Apple's Documentation says it requires ":method" and ":path" headers but I when I try to make the POST with these headers included, headers = { ':method' : 'POST', ':path' : '/3/device/{}'.format(deviceToken), ... } I get the error h2.exceptions.ProtocolError: Received duplicate pseudo-header field b':path It's pretty apparent there's a problem with having the ":path" header included but I'm also required to send it so I'm not sure what I'm doing wrong. Apple's Documentation also says to Encode the :path and authorization values as literal header fields without indexing. Encode all other fields as literal header fields with incremental indexing. I really don't know what that means or how to implement that or if it's related. I would think httpx would merge my ":path" header with the default one to eliminate the duplicate but I'm just spitballing here. Full Traceback File "/Users/User/.pyenv/versions/3.9.0/lib/python3.9/site-packages/httpx/_client.py", line 992, in post return self.request( File "/Users/User/.pyenv/versions/3.9.0/lib/python3.9/site-packages/httpx/_client.py", line 733, in request return self.send( File "/Users/User/.pyenv/versions/3.9.0/lib/python3.9/site-packages/httpx/_client.py", line 767, in send response = self._send_handling_auth( File "/Users/User/.pyenv/versions/3.9.0/lib/python3.9/site-packages/httpx/_client.py", line 805, in _send_handling_auth response = self._send_handling_redirects( File "/Users/User/.pyenv/versions/3.9.0/lib/python3.9/site-packages/httpx/_client.py", line 837, in _send_handling_redirects response = self._send_single_request(request, timeout) File "/Users/User/.pyenv/versions/3.9.0/lib/python3.9/site-packages/httpx/_client.py", line 861, in _send_single_request (status_code, headers, stream, ext) = transport.request( File "/Users/User/.pyenv/versions/3.9.0/lib/python3.9/site-packages/httpcore/_sync/connection_pool.py", line 218, in request response = connection.request( File "/Users/User/.pyenv/versions/3.9.0/lib/python3.9/site-packages/httpcore/_sync/connection.py", line 106, in request return self.connection.request(method, url, headers, stream, ext) File "/Users/User/.pyenv/versions/3.9.0/lib/python3.9/site-packages/httpcore/_sync/http2.py", line 119, in request return h2_stream.request(method, url, headers, stream, ext) File "/Users/User/.pyenv/versions/3.9.0/lib/python3.9/site-packages/httpcore/_sync/http2.py", line 292, in request self.send_headers(method, url, headers, has_body, timeout) File "/Users/User/.pyenv/versions/3.9.0/lib/python3.9/site-packages/httpcore/_sync/http2.py", line 330, in send_headers self.connection.send_headers(self.stream_id, headers, end_stream, timeout) File "/Users/User/.pyenv/versions/3.9.0/lib/python3.9/site-packages/httpcore/_sync/http2.py", line 227, in send_headers self.h2_state.send_headers(stream_id, headers, end_stream=end_stream) File "/Users/User/.pyenv/versions/3.9.0/lib/python3.9/site-packages/h2/connection.py", line 770, in send_headers frames = stream.send_headers( File "/Users/User/.pyenv/versions/3.9.0/lib/python3.9/site-packages/h2/stream.py", line 865, in send_headers frames = self._build_headers_frames( File 
"/Users/User/.pyenv/versions/3.9.0/lib/python3.9/site-packages/h2/stream.py", line 1252, in _build_headers_frames encoded_headers = encoder.encode(headers) File "/Users/User/.pyenv/versions/3.9.0/lib/python3.9/site-packages/hpack/hpack.py", line 249, in encode for header in headers: File "/Users/User/.pyenv/versions/3.9.0/lib/python3.9/site-packages/h2/utilities.py", line 496, in inner for header in headers: File "/Users/User/.pyenv/versions/3.9.0/lib/python3.9/site-packages/h2/utilities.py", line 441, in _validate_host_authority_header for header in headers: File "/Users/User/.pyenv/versions/3.9.0/lib/python3.9/site-packages/h2/utilities.py", line 338, in _reject_pseudo_header_fields raise ProtocolError( h2.exceptions.ProtocolError: Received duplicate pseudo-header field b':method' Request: devServer = "https://api.sandbox.push.apple.com:443" title = "some title" notification = { "aps": { "alert": title, "sound": "someSound.caf" } } client = httpx.Client(http2=True) try: r = client.post(devServer, json=notification, headers=headers) finally: client.close() | Just need to append '/3/device/{}'.format(deviceToken) to the devServer url as the path, and the ":path" pseudo-header will be automatically set to it. that is, devServer = 'https://api.sandbox.push.apple.com:443/3/device/{}'.format(deviceToken) Explanation: The ":path", ":method" and ":scheme" pseudo-headers generally wouldn't need to be added manually in http2 Reference: Hypertext Transfer Protocol Version 2 (HTTP/2) | 7 | 4 |
65,305,864 | 2020-12-15 | https://stackoverflow.com/questions/65305864/understanding-weightedkappaloss-using-keras | I'm using Keras to try to predict a vector of scores (0-1) using a sequence of events. For example, X is a sequence of 3 vectors comprised of 6 features each, while y is a vector of 3 scores: X [ [1,2,3,4,5,6], <--- dummy data [1,2,3,4,5,6], [1,2,3,4,5,6] ] y [0.34 ,0.12 ,0.46] <--- dummy data I want to adress the problem as ordinal classification, so if the actual values are [0.5,0.5,0.5] the prediction [0.49,0.49,0.49] is better then [0.3,0.3,0.3]. My Original solution, was to use sigmoid activation on my last layer and mse as the loss function, so the output is ranged between 0-1 for each of the output neurons: def get_model(num_samples, num_features, output_size): opt = Adam() model = Sequential() model.add(LSTM(config['lstm_neurons'], activation=config['lstm_activation'], input_shape=(num_samples, num_features))) model.add(Dropout(config['dropout_rate'])) for layer in config['dense_layers']: model.add(Dense(layer['neurons'], activation=layer['activation'])) model.add(Dense(output_size, activation='sigmoid')) model.compile(loss='mse', optimizer=opt, metrics=['mae', 'mse']) return model My Goal is to understand the usage of WeightedKappaLoss and to implement it on my actual data. I've created this Colab to fiddle around with the idea. In the Colab, my data is a sequence shaped (5000,3,3) and my targets shape is (5000, 4) representing 1 of 4 possible classes. I want the model to understand that it needs to trim the floating point of the X in order to predict the right y class: [[3.49877793, 3.65873511, 3.20218196], [3.20258153, 3.7578669 , 3.83365481], [3.9579924 , 3.41765455, 3.89652426]], ----> y is 3 [0,0,1,0] [[1.74290875, 1.41573056, 1.31195701], [1.89952004, 1.95459796, 1.93148095], [1.18668981, 1.98982041, 1.89025326]], ----> y is 1 [1,0,0,0] New model code: def get_model(num_samples, num_features, output_size): opt = Adam(learning_rate=config['learning_rate']) model = Sequential() model.add(LSTM(config['lstm_neurons'], activation=config['lstm_activation'], input_shape=(num_samples, num_features))) model.add(Dropout(config['dropout_rate'])) for layer in config['dense_layers']: model.add(Dense(layer['neurons'], activation=layer['activation'])) model.add(Dense(output_size, activation='softmax')) model.compile(loss=tfa.losses.WeightedKappaLoss(num_classes=4), optimizer=opt, metrics=[tfa.metrics.CohenKappa(num_classes=4)]) return model When fitting the model I can see the following metrics on TensorBoard: I'm not sure about the following points and would appreciate clarification: Am I using it right? In my original problem, I'm predicting 3 scores, as opposed of the Colab example, where I'm predicting only 1. If I'm using WeightedKappaLoss, does it mean I'll need to convert each of the scores to a vector of 100 one-hot encoding? Is there a way to use the WeightedKappaLoss on the original floating point scores without converting to a classification problem? 
| Let we separate the goal to two sub-goals, we walk through the purpose, concept, mathematical details of Weighted Kappa first, after that we summarize the things to note when we try to use WeightedKappaLoss in tensorflow PS: you can skip the understand part if you only care about usage Weighted Kappa detailed explanation Since the Weighted Kappa can be see as Cohen's kappa + weights, so we need to understand the Cohen's kappa first Example of Cohen's kappa Suppose we have two classifier (A and B) trying to classify 50 statements into two categories (True and False), the way they classify those statements wrt each other in a contingency table: B True False A True 20 5 25 statements A think is true False 10 15 25 statements A think is false 30 statements B think is true 20 statements B think is false Now suppose we want know: How reliable the prediction A and B made? What we can do is simply take the percentage of classified statements which A and B agree with each other, i.e proportion of observed agreement denote as Po, so: Po = (20 + 15) / 50 = 0.7 But this is problematic, because there have probability that A and B agree with each other by random chance, i.e proportion of expected chance agreement denote as Pe, if we use observed percentage as expect probability, then: Pe = (probability statement A think is true) * (probability statement B think is true) + (probability statement A think is false) * (probability statement B think is false) = (25 / 50) * (30 / 50) + (25 / 50) * (20 / 50) = 0.5 Cohen's kappa coefficient denote as K that incorporate Po and Pe to give us more robust prediction about reliability of prediction A and B made: K = (Po - Pe) / (1 - Pe) = 1 - (1 - Po) / (1 - Pe) = 1 - (1 - 0.7) / (1 - 0.5) = 0.4 We can see the more A and B are agree with each other (Po higher) and less they agree because of chance (Pe lower), the more Cohen's kappa "think" the result is reliable Now assume A is the labels (ground truth) of statements, then K is telling us how reliable the B's prediction are, i.e how much prediction agree with labels when take random chance into consideration Weights for Cohen's kappa We define the contingency table with m classes formally: classifier 2 class.1 class.2 class... class.k Sum over row class.1 n11 n12 ... n1k n1+ class.2 n21 n22 ... n2k n2+ classifier 1 class... ... ... ... ... ... class.k nk1 nk2 ... nkk nk+ Sum over column n+1 n+2 ... 
n+k N # total sum of all table cells The table cells contain the counts of cross-classified categories denote as nij, i,j for row and column index respectively Consider those k ordinal classes are separate from two categorical classes, e.g separate 1, 0 into five classes 1, 0.75, 0.5, 0.25, 0 which have a smooth ordered transition, we cannot say the classes are independent except the first and last class, e.g very good, good, normal, bad, very bad, the very good and good are not independent and the good should closer to bad than to very bad Since the adjacent classes are interdependent then in order to calculate the quantity related to agreement we need define this dependency, i.e Weights denote as Wij, it assigned to each cell in the contingency table, value of weight (within range [0, 1]) depend on how close two classes are Now let's look at Po and Pe formula in Weighted Kappa: And Po and Pe formula in Cohen's kappa: We can see Po and Pe formula in Cohen's kappa is special case of formula in Weighted Kappa, where weight = 1 assigned to all diagonal cells and weight = 0 elsewhere, when we calculate K (Cohen's kappa coefficient) using Po and Pe formula in Weighted Kappa we also take dependency between adjacent classes into consideration Here are two commonly used weighting system: Linear weight: Quadratic weight: Where, |i-j| is the distance between classes and k is the number of classes Weighted Kappa Loss This loss is use in case we mentioned before where one classifier is the labels, and the purpose of this loss is to make the model's (another classifier) prediction as reliable as possible, i.e encourage model to make more prediction agree with labels while make less random guess when take dependency between adjacent classes into consideration The formula of Weighted Kappa Loss given by: It just take formula of negative Cohen's kappa coefficient and get rid of constant -1 then apply natural logarithm on it, where dij = |i-j| for Linear weight, dij = (|i-j|)^2 for Quadratic weight Following is the source code of Weighted Kappa Loss written with tensroflow, as you can see it just implement the formula of Weighted Kappa Loss above: import warnings from typing import Optional import tensorflow as tf from typeguard import typechecked from tensorflow_addons.utils.types import Number class WeightedKappaLoss(tf.keras.losses.Loss): @typechecked def __init__( self, num_classes: int, weightage: Optional[str] = "quadratic", name: Optional[str] = "cohen_kappa_loss", epsilon: Optional[Number] = 1e-6, dtype: Optional[tf.DType] = tf.float32, reduction: str = tf.keras.losses.Reduction.NONE, ): super().__init__(name=name, reduction=reduction) warnings.warn( "The data type for `WeightedKappaLoss` defaults to " "`tf.keras.backend.floatx()`." 
"The argument `dtype` will be removed in Addons `0.12`.", DeprecationWarning, ) if weightage not in ("linear", "quadratic"): raise ValueError("Unknown kappa weighting type.") self.weightage = weightage self.num_classes = num_classes self.epsilon = epsilon or tf.keras.backend.epsilon() label_vec = tf.range(num_classes, dtype=tf.keras.backend.floatx()) self.row_label_vec = tf.reshape(label_vec, [1, num_classes]) self.col_label_vec = tf.reshape(label_vec, [num_classes, 1]) col_mat = tf.tile(self.col_label_vec, [1, num_classes]) row_mat = tf.tile(self.row_label_vec, [num_classes, 1]) if weightage == "linear": self.weight_mat = tf.abs(col_mat - row_mat) else: self.weight_mat = (col_mat - row_mat) ** 2 def call(self, y_true, y_pred): y_true = tf.cast(y_true, dtype=self.col_label_vec.dtype) y_pred = tf.cast(y_pred, dtype=self.weight_mat.dtype) batch_size = tf.shape(y_true)[0] cat_labels = tf.matmul(y_true, self.col_label_vec) cat_label_mat = tf.tile(cat_labels, [1, self.num_classes]) row_label_mat = tf.tile(self.row_label_vec, [batch_size, 1]) if self.weightage == "linear": weight = tf.abs(cat_label_mat - row_label_mat) else: weight = (cat_label_mat - row_label_mat) ** 2 numerator = tf.reduce_sum(weight * y_pred) label_dist = tf.reduce_sum(y_true, axis=0, keepdims=True) pred_dist = tf.reduce_sum(y_pred, axis=0, keepdims=True) w_pred_dist = tf.matmul(self.weight_mat, pred_dist, transpose_b=True) denominator = tf.reduce_sum(tf.matmul(label_dist, w_pred_dist)) denominator /= tf.cast(batch_size, dtype=denominator.dtype) loss = tf.math.divide_no_nan(numerator, denominator) return tf.math.log(loss + self.epsilon) def get_config(self): config = { "num_classes": self.num_classes, "weightage": self.weightage, "epsilon": self.epsilon, } base_config = super().get_config() return {**base_config, **config} Usage of Weighted Kappa Loss We can using Weighted Kappa Loss whenever we can form our problem to Ordinal Classification Problems, i.e the classes form a smooth ordered transition and adjacent classes are interdependent, like ranking something with very good, good, normal, bad, very bad, and the output of the model should be like Softmax results We cannot using Weighted Kappa Loss when we try to predict the vector of scores (0-1) even if they can sum to 1, since the Weights in each elements of vector is different and this loss not ask how different is the value by subtract, but ask how many are the number by multiplication, e.g: import tensorflow as tf from tensorflow_addons.losses import WeightedKappaLoss y_true = tf.constant([[0.1, 0.2, 0.6, 0.1], [0.1, 0.5, 0.3, 0.1], [0.8, 0.05, 0.05, 0.1], [0.01, 0.09, 0.1, 0.8]]) y_pred_0 = tf.constant([[0.1, 0.2, 0.6, 0.1], [0.1, 0.5, 0.3, 0.1], [0.8, 0.05, 0.05, 0.1], [0.01, 0.09, 0.1, 0.8]]) y_pred_1 = tf.constant([[0.0, 0.1, 0.9, 0.0], [0.1, 0.5, 0.3, 0.1], [0.8, 0.05, 0.05, 0.1], [0.01, 0.09, 0.1, 0.8]]) kappa_loss = WeightedKappaLoss(weightage='linear', num_classes=4) loss_0 = kappa_loss(y_true, y_pred_0) loss_1 = kappa_loss(y_true, y_pred_1) print('Loss_0: {}, loss_1: {}'.format(loss_0.numpy(), loss_1.numpy())) Outputs: # y_pred_0 equal to y_true yet loss_1 is smaller than loss_0 Loss_0: -0.7053321599960327, loss_1: -0.8015820980072021 Your code in Colab is working correctly in the context of Ordinal Classification Problems, since the function you form X->Y is very simple (int of X is Y index + 1), so the model learn it fairly quick and accurate, as we can see K (Cohen's kappa coefficient) up to 1.0 and Weighted Kappa Loss drop below -13.0 (which in practice 
usually is minimal we can expect) In summary, you can using Weighted Kappa Loss unless you can form your problem to Ordinal Classification Problems which have labels in one-hot fashion, if you can and trying to solve the LTR (Learning to rank) problems, then you can check this tutorial of implement ListNet and this tutorial of tensorflow_ranking for better result, otherwise you shouldn't using Weighted Kappa Loss, if you can only form your problem to Regression Problems, then you should do the same as your original solution Reference: Cohen's kappa on Wikipedia Weighted Kappa in R: For Two Ordinal Variables source code of WeightedKappaLoss in tensroflow-addons Documentation of tfa.losses.WeightedKappaLoss Difference between categorical, ordinal and numerical variables | 6 | 11 |
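A small numeric check of the worked Cohen's kappa example from the answer above (the 50-statement contingency table giving Po = 0.7, Pe = 0.5, K = 0.4):

import numpy as np

# Rows = classifier A, columns = classifier B, categories (True, False), 50 statements.
table = np.array([[20, 5],
                  [10, 15]], dtype=float)
n = table.sum()

po = np.trace(table) / n                     # observed agreement: (20 + 15) / 50 = 0.7
row_marginals = table.sum(axis=1) / n        # A's class proportions: [0.5, 0.5]
col_marginals = table.sum(axis=0) / n        # B's class proportions: [0.6, 0.4]
pe = np.sum(row_marginals * col_marginals)   # chance agreement: 0.5

kappa = (po - pe) / (1 - pe)                 # 0.4, matching the answer
print(po, pe, kappa)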
65,304,455 | 2020-12-15 | https://stackoverflow.com/questions/65304455/something-wrong-when-implementing-svm-one-vs-all-in-python | I was trying to verify that I had correctly understood how SVM - OVA (One-versus-All) works, by comparing the function OneVsRestClassifier with my own implementation. In the following code, I implemented num_classes classifiers in the training phase, and then tested all of them on the testset and selected the one returning the highest probability value. import pandas as pd import numpy as np from sklearn.svm import SVC from sklearn.metrics import accuracy_score,classification_report from sklearn.preprocessing import scale # Read dataset df = pd.read_csv('In/winequality-white.csv', delimiter=';') X = df.loc[:, df.columns != 'quality'] Y = df.loc[:, df.columns == 'quality'] my_classes = np.unique(Y) num_classes = len(my_classes) # Train-test split np.random.seed(42) msk = np.random.rand(len(df)) <= 0.8 train = df[msk] test = df[~msk] # From dataset to features and labels X_train = train.loc[:, train.columns != 'quality'] Y_train = train.loc[:, train.columns == 'quality'] X_test = test.loc[:, test.columns != 'quality'] Y_test = test.loc[:, test.columns == 'quality'] # Models clf = [None] * num_classes for k in np.arange(0,num_classes): my_model = SVC(gamma='auto', C=1000, kernel='rbf', class_weight='balanced', probability=True) clf[k] = my_model.fit(X_train, Y_train==my_classes[k]) # Prediction prob_table = np.zeros((len(Y_test), num_classes)) for k in np.arange(0,num_classes): p = clf[k].predict_proba(X_test) prob_table[:,k] = p[:,list(clf[k].classes_).index(True)] Y_pred = prob_table.argmax(axis=1) print("Test accuracy = ", accuracy_score( Y_test, Y_pred) * 100,"\n\n") Test accuracy is equal to 0.21, while when using the function OneVsRestClassifier, it returns 0.59. For completeness, I also report the other code (the pre-processing steps are the same as before): .... clf = OneVsRestClassifier(SVC(gamma='auto', C=1000, kernel='rbf', class_weight='balanced')) clf.fit(X_train, Y_train) Y_pred = clf.predict(X_test) print("Test accuracy = ", accuracy_score( Y_test, Y_pred) * 100,"\n\n") Is there something wrong in my own implementation of SVM - OVA? | Is there something wrong in my own implementation of SVM - OVA? You have unique classes array([3, 4, 5, 6, 7, 8, 9]), however the line Y_pred = prob_table.argmax(axis=1) assumes they are 0-indexed. 
Try refactoring your code to be less error prone to assumptions like that: from sklearn.svm import SVC from sklearn.metrics import accuracy_score,classification_report from sklearn.preprocessing import scale from sklearn.model_selection import train_test_split df = pd.read_csv('winequality-white.csv', delimiter=';') y = df["quality"] my_classes = np.unique(y) X = df.drop("quality", axis=1) X_train, X_test, Y_train, Y_test = train_test_split(X,y, random_state=42) # Models clfs = [] for k in my_classes: my_model = SVC(gamma='auto', C=1000, kernel='rbf', class_weight='balanced' , probability=True, random_state=42) clfs.append(my_model.fit(X_train, Y_train==k)) # Prediction prob_table = np.zeros((len(X_test),len(my_classes))) for i,clf in enumerate(clfs): probs = clf.predict_proba(X_test)[:,1] prob_table[:,i] = probs Y_pred = my_classes[prob_table.argmax(1)] print("Test accuracy = ", accuracy_score(Y_test, Y_pred) * 100,) from sklearn.multiclass import OneVsRestClassifier clf = OneVsRestClassifier(SVC(gamma='auto', C=1000, kernel='rbf' ,class_weight='balanced', random_state=42)) clf.fit(X_train, Y_train) Y_pred = clf.predict(X_test) print("Test accuracy = ", accuracy_score(Y_test, Y_pred) * 100,) Test accuracy = 61.795918367346935 Test accuracy = 58.93877551020408 Note the difference in OVR based on probabilities, which is more fine grained and yields better results, vs one based on labels. For further experiments you may wish to wrap classifier into a reusable class: class OVRBinomial(BaseEstimator, ClassifierMixin): def __init__(self, cls): self.cls = cls def fit(self, X, y, **kwargs): self.classes_ = np.unique(y) self.clfs_ = [] for c in self.classes_: clf = self.cls(**kwargs) clf.fit(X, y == c) self.clfs_.append(clf) return self def predict(self, X, **kwargs): probs = np.zeros((len(X), len(self.classes_))) for i, c in enumerate(self.classes_): prob = self.clfs_[i].predict_proba(X, **kwargs)[:, 1] probs[:, i] = prob idx_max = np.argmax(probs, 1) return self.classes_[idx_max] | 5 | 6 |
65,233,000 | 2020-12-10 | https://stackoverflow.com/questions/65233000/how-to-find-an-existing-html-element-with-python-selenium-in-a-jupyterhub-page | I have the following construct in a HTML page and I want to select the li element (with python-selenium): <li class="p-Menu-item p-mod-disabled" data-type="command" data-command="notebook:run-all-below"> <div class="p-Menu-itemIcon"></div> <div class="p-Menu-itemLabel" style="">Run Selected Cell and All Below</div> <div class="p-Menu-itemShortcut" style=""></div> <div class="p-Menu-itemSubmenuIcon"></div> </li> I am using the following xpath: //li[@data-command='notebook:run-all-below'] But the element does not seem to be found. Complete, minimal working example code: import time from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC driver = webdriver.Firefox() driver.get("https://mybinder.org/v2/gh/jupyterlab/jupyterlab-demo/master?urlpath=lab/tree/demo") # Wait for the page to be loaded xpath = "//button[@title='Save the notebook contents and create checkpoint']" element = WebDriverWait(driver, 600).until( EC.presence_of_element_located((By.XPATH, xpath)) ) time.sleep(10) print("Page loaded") # Find and click on menu "Run" xpath_run = "//div[text()='Run']" element = WebDriverWait(driver, 60).until( EC.element_to_be_clickable((By.XPATH, xpath_run)) ) element.click() print("Clicked on 'Run'") # Find and click on menu entry "Run Selected Cell and All Below" xpath_runall = "//li[@data-command='notebook:run-all-below']" element = WebDriverWait(driver, 600).until( EC.element_to_be_clickable((By.XPATH, xpath_runall)) ) print("Found element 'Run Selected Cell and All Below'") element.click() print("Clicked on 'Run Selected Cell and All Below'") driver.close() Environment: MacOS Mojave (10.14.6) python 3.8.6 selenium 3.8.0 geckodriver 0.26.0 Addendum I have been trying to record the steps with the Firefox "Selenium IDE" add-on which gives the following steps for python: sdriver.get("https://hub.gke2.mybinder.org/user/jupyterlab-jupyterlab-demo-y0bp97e4/lab/tree/demo") driver.set_window_size(1650, 916) driver.execute_script("window.scrollTo(0,0)") driver.find_element(By.CSS_SELECTOR, ".lm-mod-active > .lm-MenuBar-itemLabel").click() which, of course, also does not work. With that code lines I get an error selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: .lm-mod-active > .lm-MenuBar-itemLabel | You were close enough. Factually your entire program had only a single issue as follows: The xpath_runall = "//li[@data-command='notebook:run-all-below']" doesn't identify the visible element with text as Run Selected Cell and All Below uniquely as the first matched element is a hidden element. Additional considerations Some more optimizations: The element identified as xpath = "//button[@title='Save the notebook contents and create checkpoint']" is a clickable element. So instead of EC as presence_of_element_located() you can use element_to_be_clickable() Once the element is returned through EC as element_to_be_clickable() you can invoke the click() on the same line. The xpath to identify the element with text as Run Selected Cell and All Below would be: //li[@data-command='notebook:run-all-below']//div[@class='lm-Menu-itemLabel p-Menu-itemLabel' and text()='Run Selected Cell and All Below'] As the application is built through JavaScript you need to use ActionChains. 
Solution Your optimized solution will be: Code Block: from selenium import webdriver from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.action_chains import ActionChains driver = webdriver.Firefox(executable_path=r'C:\WebDrivers\geckodriver.exe') driver.get("https://mybinder.org/v2/gh/jupyterlab/jupyterlab-demo/master?urlpath=lab/tree/demo") WebDriverWait(driver, 60).until(EC.element_to_be_clickable((By.XPATH, "//button[@title='Save the notebook contents and create checkpoint']"))) print("Page loaded") WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//div[text()='Run']"))).click() print("Clicked on Run") element = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//li[@data-command='notebook:run-all-below']//div[@class='lm-Menu-itemLabel p-Menu-itemLabel' and text()='Run Selected Cell and All Below']"))) ActionChains(driver).move_to_element(element).click(element).perform() print("Clicked on Run Selected Cell and All Below") Console Output: Page loaded Clicked on Run Clicked on Run Selected Cell and All Below | 6 | 4 |
65,296,604 | 2020-12-14 | https://stackoverflow.com/questions/65296604/how-to-return-a-htmlresponse-with-fastapi | Is it possible to display an HTML file at the endpoint? For example, the home page when the user visits "/"? | Yes, it's possible: FastAPI has HTMLResponse. You can return an HTMLResponse: from fastapi import FastAPI from fastapi.responses import HTMLResponse app = FastAPI() @app.get("/", response_class=HTMLResponse) async def read_items(): html_content = """ <html> <head> <title>Some HTML in here</title> </head> <body> <h1>Look ma! HTML!</h1> </body> </html> """ return HTMLResponse(content=html_content, status_code=200) You can also render templates with Jinja2: from fastapi import FastAPI, Request from fastapi.responses import HTMLResponse from fastapi.staticfiles import StaticFiles from fastapi.templating import Jinja2Templates app = FastAPI() app.mount("/static", StaticFiles(directory="static"), name="static") templates = Jinja2Templates(directory="templates") @app.get("/items/{id}", response_class=HTMLResponse) async def read_item(request: Request, id: str): return templates.TemplateResponse("item.html", {"request": request, "id": id}) | 33 | 56 |
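A quick way to exercise the first snippet above without a browser, assuming it is saved as main.py and the test client's HTTP dependency is installed; the assertions simply confirm the HTMLResponse body and content type.

from fastapi.testclient import TestClient
from main import app   # assumes the HTMLResponse example above lives in main.py

client = TestClient(app)
resp = client.get("/")
assert resp.status_code == 200
assert "Look ma! HTML!" in resp.text
assert resp.headers["content-type"].startswith("text/html")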
65,315,077 | 2020-12-15 | https://stackoverflow.com/questions/65315077/interpreter-crashes-trying-to-use-tkinter-library | I have tried to start the application in VS Code with Python 3. This is the code: from tkinter import * window = Tk() window.mainloop() only 3 lines :)), but when I try to execute the file in the terminal it gives me the error which you can see below. arash@Arash-MacBook-Pro tkinter % python3 main.py macOS 11 or later required ! zsh: abort ------ python3 main.py arash@Arash-MacBook-Pro tkinter % My Mac runs "Big Sur" and has Python 3.9. | This is an issue in the way brew installs Python (source). If you install Python directly via the official installer here then tkinter should work as expected. | 8 | 12 |
65,216,292 | 2020-12-9 | https://stackoverflow.com/questions/65216292/how-to-define-a-dataclass-so-each-of-its-attributes-is-the-list-of-its-subclass | I have this code: from dataclasses import dataclass from typing import List @dataclass class Position: name: str lon: float lat: float @dataclass class Section: positions: List[Position] pos1 = Position('a', 52, 10) pos2 = Position('b', 46, -10) pos3 = Position('c', 45, -10) sec = Section([pos1, pos2 , pos3]) print(sec.positions) How can I create additional attributes in the dataclass Section so they would be a list of the attribute of its subclass Position? In my example, I would like that the section object also returns: sec.name = ['a', 'b', 'c'] #[pos1.name,pos2.name,pos3.name] sec.lon = [52, 46, 45] #[pos1.lon,pos2.lon,pos3.lon] sec.lat = [10, -10, -10] #[pos1.lat,pos2.lat,pos3.lat] I tried to define the dataclass as: @dataclass class Section: positions: List[Position] names : List[Position.name] But it is not working because name is not an attribute of position. I can define the object attributed later in the code (e.g. by doing secs.name = [x.name for x in section.positions]). But it would be nicer if it can be done at the dataclass definition level. After posting this question I found a beginning of answer (https://stackoverflow.com/a/65222586/13890678). But I was wondering if there was not a more generic/"automatic" way of defining the Section methods : .names(), .lons(), .lats(), ... ? So the developer doesn't have to define each method individually but instead, these methods are created based on the Positions object attributes? | You could create a new field after __init__ was called: from dataclasses import dataclass, field, fields from typing import List @dataclass class Position: name: str lon: float lat: float @dataclass class Section: positions: List[Position] _pos: dict = field(init=False, repr=False) def __post_init__(self): # create _pos after init is done, read only! Section._pos = property(Section._get_positions) def _get_positions(self): _pos = {} # iterate over all fields and add to _pos for field in [f.name for f in fields(self.positions[0])]: if field not in _pos: _pos[field] = [] for p in self.positions: _pos[field].append(getattr(p, field)) return _pos pos1 = Position('a', 52, 10) pos2 = Position('b', 46, -10) pos3 = Position('c', 45, -10) sec = Section([pos1, pos2, pos3]) print(sec.positions) print(sec._pos['name']) print(sec._pos['lon']) print(sec._pos['lat']) Out: [Position(name='a', lon=52, lat=10), Position(name='b', lon=46, lat=-10), Position(name='c', lon=45, lat=-10)] ['a', 'b', 'c'] [52, 46, 45] [10, -10, -10] Edit: In case you just need it more generic, you could overwrite __getattr__: from dataclasses import dataclass, field, fields from typing import List @dataclass class Position: name: str lon: float lat: float @dataclass class Section: positions: List[Position] def __getattr__(self, keyName): for f in fields(self.positions[0]): if f"{f.name}s" == keyName: return [getattr(x, f.name) for x in self.positions] # Error handling here: Return empty list, raise AttributeError, ... pos1 = Position('a', 52, 10) pos2 = Position('b', 46, -10) pos3 = Position('c', 45, -10) sec = Section([pos1, pos2, pos3]) print(sec.names) print(sec.lons) print(sec.lats) Out: ['a', 'b', 'c'] [52, 46, 45] [10, -10, -10] | 6 | 6 |
65,298,241 | 2020-12-15 | https://stackoverflow.com/questions/65298241/what-does-this-tensorflow-message-mean-any-side-effect-was-the-installation-su | I just installed tensorflow v2.3 on anaconda python. I tried to test out the installation using the python command below; $ python -c "import tensorflow as tf; x = [[2.]]; print('tensorflow version', tf.__version__); print('hello, {}'.format(tf.matmul(x, x)))" I got the following message; 2020-12-15 07:59:12.411952: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX AVX2 To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. hello, [[4.]] From the message, it seems that the installation was installed successfully. But what does This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX AVX2 mean exactly? Am I using a tensorflow version with some limited features? Any side effects? I am using Windows 10. | An important part of Tensorflow is that it is supposed to be fast. With a suitable installation, it works with CPUs, GPUs, or TPUs. Part of going fast means that it uses different code depending on your hardware. Some CPUs support operations that other CPUs do not, such as vectorized addition (adding multiple variables at once). Tensorflow is simply telling you that the version you have installed can use the AVX and AVX2 operations and is set to do so by default in certain situations (say inside a forward or back-prop matrix multiply), which can speed things up. This is not an error, it is just telling you that it can and will take advantage of your CPU to get that extra speed out. Note: AVX stands for Advanced Vector Extensions. | 159 | 302 |
65,311,659 | 2020-12-15 | https://stackoverflow.com/questions/65311659/getting-the-latest-python-3-version-programmatically | I want to get the latest Python source from https://www.python.org/ftp/python/. While posting this, the latest version is 3.9.1. I do not want to hardcode 3.9.1 in my code to get the latest version and keep on updating the version when a new version comes out. I am using Ubuntu 16.04. Is there a programmatic way to get the latest Python version (using curl) and use that version to get the latest source? | I had a similar problem and couldn't find anything better than scraping the downloads page. You mentioned curl, so I'm assuming you want a shell script. I ended up with this: url='https://www.python.org/ftp/python/' curl --silent "$url" | sed -n 's!.*href="\([0-9]\+\.[0-9]\+\.[0-9]\+\)/".*!\1!p' | sort -rV | while read -r version; do filename="Python-$version.tar.xz" # Versions which only have alpha, beta, or rc releases will fail here. # Stop when we find one with a final release. if curl --fail --silent -O "$url/$version/$filename"; then echo "$filename" break fi done This relies on sort -V, which I believe is specific to GNU coreutils. That shouldn't be a problem on Ubuntu. If you're using this in a larger script and want to use the version or filename variables after the loop, see How to pipe input to a Bash while loop and preserve variables after loop ends. | 6 | 5 |
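For completeness, a rough standard-library Python port of the shell loop in the answer above; it assumes the same directory-listing layout at python.org and the same Python-X.Y.Z.tar.xz naming for final releases.

import re
from urllib.request import urlopen
from urllib.error import HTTPError

BASE = "https://www.python.org/ftp/python/"

with urlopen(BASE) as resp:
    index = resp.read().decode()

versions = re.findall(r'href="(\d+\.\d+\.\d+)/"', index)
versions.sort(key=lambda v: tuple(map(int, v.split('.'))), reverse=True)

for version in versions:
    url = f"{BASE}{version}/Python-{version}.tar.xz"
    try:
        # Versions with only alpha/beta/rc releases have no final tarball -> 404.
        with urlopen(url):
            print("latest release:", version, url)
            break
    except HTTPError:
        continue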
65,289,591 | 2020-12-14 | https://stackoverflow.com/questions/65289591/stacked-grouped-bar-chart | I'm trying to create a bar chart using plotly in python, which is both stacked and grouped. Toy example (money spent and earned in different years): import pandas as pd import plotly.graph_objs as go data = pd.DataFrame( dict( year=[2000,2010,2020], var1=[10,20,15], var2=[12,8,18], var3=[10,17,13], var4=[12,11,20], ) ) fig = go.Figure( data = [ go.Bar(x=data['year'], y=data['var1'], offsetgroup=0, name='spent on fruit'), go.Bar(x=data['year'], y=data['var2'], offsetgroup=0, base=data['var1'], name='spent on toys'), go.Bar(x=data['year'], y=data['var3'], offsetgroup=1, name='earned from stocks'), go.Bar(x=data['year'], y=data['var4'], offsetgroup=1, base=data['var3'], name='earned from gambling'), ] ) fig.show() The result seems fine at first: But watch what happens when I turn off e.g. "spent on fruit": The "spent on toys" trace remains floating instead of starting from 0. Can this be fixed? or maybe the whole offsetgroup + base approach won't work here. But what else can I do? Thanks! Update: according to this Github issue, stacked, grouped bar plots are being developed for future plotly versions, so this probably won't be an issue anymore. | There doesn't seem to be a way to create both stacked and grouped bar charts in Plotly, but there is a workaround that might resolve your issue. You will need to create subgroups, then use a stacked bar in Plotly to plot the bars one at a time, plotting var1 and var2 with subgroup1, and var3 and var4 with subgroup2. This solution gives you the functionality you want, but changes the formatting and aesthetic of the bar chart. There will be equal spacing between each bar as from Plotly's point of view these are stacked bars (and not grouped bars), and I couldn't figure out a way to eliminate the subgroup1 and subgroup2 text without also getting rid of the years in the x-axis ticks. Any Plotly experts please feel free to chime in and improve my answer! import pandas as pd import plotly.graph_objs as go df = pd.DataFrame( dict( year=[2000,2010,2020], var1=[10,20,15], var2=[12,8,18], var3=[10,17,13], var4=[12,11,20], ) ) fig = go.Figure() fig.update_layout( template="simple_white", xaxis=dict(title_text="Year"), yaxis=dict(title_text="Count"), barmode="stack", ) groups = ['var1','var2','var3','var4'] colors = ["blue","red","green","purple"] names = ['spent on fruit','spent on toys','earned from stocks','earned from gambling'] i = 0 for r, n, c in zip(groups, names, colors): ## put var1 and var2 together on the first subgrouped bar if i <= 1: fig.add_trace( go.Bar(x=[df.year, ['subgroup1']*len(df.year)], y=df[r], name=n, marker_color=c), ) ## put var3 and var4 together on the first subgrouped bar else: fig.add_trace( go.Bar(x=[df.year, ['subgroup2']*len(df.year)], y=df[r], name=n, marker_color=c), ) i+=1 fig.show() | 9 | 8 |
65,271,399 | 2020-12-13 | https://stackoverflow.com/questions/65271399/vs-code-pylance-pylint-cannot-resolve-import | The Summary I have a python import that works when run from the VS Code terminal, but that VS Code's editor is giving warnings about. Also, "Go to Definition" doesn't work. The Problem I have created a docker container from the image tensorflow/tensorflow:1.15.2-py3, then attach to it using VS Code's "Remote- Containers" extension. Then I've created the following file in the container. main.py: import tensorflow.compat.v1 as tf print(tf.__version__) This runs fine in the VS Code terminal, but the Editor and the Problems pane both give me an unresolved import 'tensorflow.compat' warning. Also "Go to Definition" doesn't work on tf.__version__. I'm using several extensions but I believe the relevant ones are the Microsoft Python extension (installed in the container), as well as the Remote - Containers extension, and now the Pylance extension (installed in the container). The Things I've Tried I've tried this with the default pylint, and then also after installing pylance with similar results. I've also seen some docs about similar issues, but they were related to setting the correct source folder location for modules that were part of a project. In contrast, my code within my project seems to work fine with imports/go-to-definition. It's external libraries that don't seem to work. Also, for the sake of this minimal example, I've attached to the container as root, so I am guessing it's not an issue of elevated permissions. I've also tried disabling all the extensions except the following, but got the same results: Remote - Containers (local) Remote - WSL (local) Python (on container) Jupyter (on container, required by Python for some reason) All the extensions above are on the latest versions. I've also fiddled around with setting python.autocomplete.extraPaths, but I'm not sure what the right path is. It also seems like the wrong thing to have to add libraries to the path that are installed in the global python installation, especially since I'm not using a virtual environment (it being in a docker container and all). The Question How do I fix VS Code so that it recognizes this import and I can use "Go to Definition" to explore these tensorflow functions/classes/etc? | tldr; TensorFlow defines some of its modules in a way that pylint & pylance aren't able to recognize. These errors don't necessarily indicate an incorrect setup. To Fix: pylint: The pylint warnings are safely ignored. Intellisense: The best way I know of at the moment to fix Intellisense is to replace the imports with the modules they are aliasing (found by importing alias in a repl as x then running help(x)). Because the target of the alias in my case is an internal name, you probably don't want to check in these changes to source control. Not ideal. Details Regarding the linting: It seems that tensorflow defines its modules in a way that the tools can't understand. Also, it appears that the package is an alias of some kind to another package. For example: import tensorflow.compat.v1 as tf tf.estimator.RunConfig() The above code gives the pylint warning and breaks intellisense. But if you manually import the above in a REPL and run help(tf), it shows you the below package, which you can use instead: import tensorflow_core._api.v1.compat.v1 as tf tf.estimator.RunConfig() This second example does not cause the pylint warning. 
Also the Intellisense features (Go to definition, Ctrl+Click, etc) work with this second example. However, based on the _api, it looks like that second package name is an internal namespace, so I'm guessing it is probably best to only use this internal name for local debugging. Confirmation/Tickets pylint: I've found a ticket about pylint having issues with a couple tensorflow imports that looks related. Intellisense: I've opened a ticket with pylance. | 17 | 9 |
65,301,875 | 2020-12-15 | https://stackoverflow.com/questions/65301875/how-to-understand-creating-leaf-tensors-in-pytorch | From PyTorch documentation: b = torch.rand(10, requires_grad=True).cuda() b.is_leaf False # b was created by the operation that cast a cpu Tensor into a cuda Tensor e = torch.rand(10).cuda().requires_grad_() e.is_leaf True # e requires gradients and has no operations creating it f = torch.rand(10, requires_grad=True, device="cuda") f.is_leaf True # f requires grad, has no operation creating it But why are e and f leaf Tensors, when they both were also cast from a CPU Tensor, into a Cuda Tensor (an operation)? Is it because Tensor e was cast into Cuda before the in-place operation requires_grad_()? And because f was cast by assignment device="cuda" rather than by method .cuda()? | When a tensor is first created, it becomes a leaf node. Basically, all inputs and weights of a neural network are leaf nodes of the computational graph. When any operation is performed on a tensor, it is not a leaf node anymore. b = torch.rand(10, requires_grad=True) # create a leaf node b.is_leaf # True b = b.cuda() # perform a casting operation b.is_leaf # False requires_grad_() is not an operation in the same way as cuda() or others are. It creates a new tensor, because tensor which requires gradient (trainable weight) cannot depend on anything else. e = torch.rand(10) # create a leaf node e.is_leaf # True e = e.cuda() # perform a casting operation e.is_leaf # False e = e.requires_grad_() # this creates a NEW tensor e.is_leaf # True Also, detach() operation creates a new tensor which does not require gradient: b = torch.rand(10, requires_grad=True) b.is_leaf # True b = b.detach() b.is_leaf # True In the last example we create a new tensor which is already on a cuda device. We do not need any operation to cast it. f = torch.rand(10, requires_grad=True, device="cuda") # create a leaf node on cuda | 11 | 17 |
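A compact sketch reproducing the is_leaf behaviour discussed above; it needs a CUDA-enabled PyTorch build to run exactly as written, and the printed values follow the question and answer.

import torch

b = torch.rand(10, requires_grad=True).cuda()            # created on CPU, then cast -> not a leaf
print(b.is_leaf)                                          # False

e = torch.rand(10).cuda().requires_grad_()                # requires_grad_ depends on nothing -> leaf
print(e.is_leaf)                                          # True

f = torch.rand(10, requires_grad=True, device="cuda")     # created directly on the device -> leaf
print(f.is_leaf)                                          # True

d = torch.rand(10, requires_grad=True).detach()           # detach() yields a gradient-free leaf
print(d.is_leaf)                                          # True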
65,300,649 | 2020-12-15 | https://stackoverflow.com/questions/65300649/the-command-pip-install-upgrade-pip-install-all-version-of-pip | When I run this command, pip install --upgrade pip, all version of pip is installed (in Linux/2.9.16) I just want to update pip that I'm using to the latest. How could I resolve this? Below is what I got from the command pip install --upgrade pip Requirement already satisfied: pip in /opt/python/run/venv/lib/python3.6/site-packages (20.3.2) Collecting pip Using cached pip-20.3.2-py2.py3-none-any.whl (1.5 MB) Using cached pip-20.3.2.tar.gz (1.5 MB) Installing build dependencies ... done Getting requirements to build wheel ... done Installing backend dependencies ... done Preparing wheel metadata ... done Using cached pip-20.3.1-py2.py3-none-any.whl (1.5 MB) Using cached pip-20.3.1.tar.gz (1.5 MB) Installing build dependencies ... done Getting requirements to build wheel ... done Installing backend dependencies ... done Preparing wheel metadata ... done Using cached pip-20.3-py2.py3-none-any.whl (1.5 MB) Using cached pip-20.3.tar.gz (1.5 MB) Installing build dependencies ... done Getting requirements to build wheel ... done Installing backend dependencies ... done Preparing wheel metadata ... done Using cached pip-20.2.4-py2.py3-none-any.whl (1.5 MB) Using cached pip-20.2.4.tar.gz (1.5 MB) Installing build dependencies ... done Getting requirements to build wheel ... done Installing backend dependencies ... done Preparing wheel metadata ... done Downloading pip-20.2.3-py2.py3-none-any.whl (1.5 MB) |ββββββββββββββββββββββββββββββββ| 1.5 MB 2.2 MB/s Downloading pip-20.2.3.tar.gz (1.5 MB) |ββββββββββββββββββββββββββββββββ| 1.5 MB 8.6 MB/s Installing build dependencies ... done Getting requirements to build wheel ... done Installing backend dependencies ... done Preparing wheel metadata ... done Downloading pip-20.2.2-py2.py3-none-any.whl (1.5 MB) |ββββββββββββββββββββββββββββββββ| 1.5 MB 13.0 MB/s Downloading pip-20.2.2.tar.gz (1.5 MB) |ββββββββββββββββββββββββββββββββ| 1.5 MB 17.1 MB/s Installing build dependencies ... done Getting requirements to build wheel ... done Installing backend dependencies ... done Preparing wheel metadata ... done Downloading pip-20.2.1-py2.py3-none-any.whl (1.5 MB) |ββββββββββββββββββββββββββββββββ| 1.5 MB 18.3 MB/s Downloading pip-20.2.1.tar.gz (1.5 MB) |ββββββββββββββββββββββββββββββββ| 1.5 MB 19.8 MB/s ... --- update --- I run pip install --upgrade pip in Beanstalk using the commands below in .ebextensions/02_python.config commands: 01_install_node: command: | sudo curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash - sudo yum -y install nodejs 02_install_yarn: test: '[ ! -f /usr/bin/yarn ] && echo "Yarn not found, installing..."' command: | sudo wget https://dl.yarnpkg.com/rpm/yarn.repo -O /etc/yum.repos.d/yarn.repo sudo yum -y install yarn 03_upgrade_pip: command: /opt/python/run/venv/bin/pip install --upgrade pip setuptools wheel ignoreErrors: false And below is the log from beanstalk eb-activity [2020-12-15T05:08:40.474Z] INFO [4175] - [Application update app-930a-201215_140751@14/AppDeployStage0/EbExtensionPreBuild/Infra-EmbeddedPreBuild/prebuild_1_mdd_bean/Command 03_upgrade_pip] : Starting activity... 
[2020-12-15T05:16:26.425Z] INFO [4175] - [Application update app-930a-201215_140751@14/AppDeployStage0/EbExtensionPreBuild/Infra-EmbeddedPreBuild/prebuild_1_mdd_bean/Command 03_upgrade_pip] : Activity execution failed, because: Requirement already satisfied: pip in /opt/python/run/venv/lib/python3.6/site-packages (20.3.2) Collecting pip Using cached pip-20.3.2-py2.py3-none-any.whl (1.5 MB) Using cached pip-20.3.2.tar.gz (1.5 MB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing wheel metadata: started Preparing wheel metadata: finished with status 'done' Using cached pip-20.3.1-py2.py3-none-any.whl (1.5 MB) Using cached pip-20.3.1.tar.gz (1.5 MB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' ---- too long ... --- Using cached pip-1.2.tar.gz (94 kB) Using cached pip-1.1.tar.gz (95 kB) Using cached pip-1.0.2.tar.gz (105 kB) Using cached pip-1.0.1.tar.gz (104 kB) Using cached pip-1.0.tar.gz (100 kB) Using cached pip-0.8.3.tar.gz (107 kB) Using cached pip-0.8.2.tar.gz (106 kB) Using cached pip-0.8.1.tar.gz (105 kB) Using cached pip-0.8.tar.gz (98 kB) Using cached pip-0.7.2.tar.gz (68 kB) Using cached pip-0.7.1.tar.gz (82 kB) Using cached pip-0.7.tar.gz (68 kB) Using cached pip-0.6.3.tar.gz (71 kB) Using cached pip-0.6.2.tar.gz (70 kB) Using cached pip-0.6.1.tar.gz (55 kB) Using cached pip-0.6.tar.gz (64 kB) Using cached pip-0.5.1.tar.gz (54 kB) Using cached pip-0.5.tar.gz (53 kB) Using cached pip-0.4.tar.gz (50 kB) Using cached pip-0.3.1.tar.gz (48 kB) Using cached pip-0.3.tar.gz (47 kB) WARNING: Discarding https://files.pythonhosted.org/packages/17/05/f66144ef69b436d07f8eeeb28b7f77137f80de4bf60349ec6f0f9509e801/pip-0.3.tar.gz#sha256=183c72455cb7f8860ac1376f8c4f14d7f545aeab8ee7c22cd4caf79f35a2ed47 (from https://pypi.org/simple/pip/). 
Requested pip from https://files.pythonhosted.org/packages/17/05/f66144ef69b436d07f8eeeb28b7f77137f80de4bf60349ec6f0f9509e801/pip-0.3.tar.gz#sha256=183c72455cb7f8860ac1376f8c4f14d7f545aeab8ee7c22cd4caf79f35a2ed47 has different version in metadata: '0.3.dev0' Using cached pip-0.2.1.tar.gz (39 kB) Using cached pip-0.2.tar.gz (38 kB) Requirement already satisfied: setuptools in /opt/python/run/venv/lib/python3.6/site-packages (51.0.0) Collecting setuptools Using cached setuptools-51.0.0-py3-none-any.whl (785 kB) Using cached setuptools-51.0.0.zip (2.1 MB) Using cached setuptools-50.3.2-py3-none-any.whl (785 kB) Using cached setuptools-50.3.2.zip (2.1 MB) Using cached setuptools-50.3.1-py3-none-any.whl (785 kB) Using cached setuptools-50.3.1.zip (2.1 MB) Using cached setuptools-50.3.0-py3-none-any.whl (785 kB) Using cached setuptools-50.3.0.zip (2.2 MB) Using cached setuptools-50.2.0-py3-none-any.whl (784 kB) Using cached setuptools-50.2.0.zip (2.2 MB) Using cached setuptools-50.1.0-py3-none-any.whl (784 kB) Using cached setuptools-50.1.0.zip (2.2 MB) Using cached setuptools-50.0.3-py3-none-any.whl (784 kB) Using cached setuptools-50.0.3.zip (2.2 MB) Using cached setuptools-50.0.2-py3-none-any.whl (784 kB) Using cached setuptools-50.0.2.zip (2.2 MB) Using cached setuptools-50.0.1-py3-none-any.whl (784 kB) Using cached setuptools-50.0.1.zip (2.2 MB) Using cached setuptools-50.0.0-py3-none-any.whl (783 kB) Using cached setuptools-50.0.0.zip (2.2 MB) Using cached setuptools-49.6.0-py3-none-any.whl (803 kB) Using cached setuptools-49.6.0.zip (2.2 MB) Using cached setuptools-49.5.0-py3-none-any.whl (803 kB) Using cached setuptools-49.5.0.zip (2.2 MB) Using cached setuptools-49.4.0-py3-none-any.whl (803 kB) Using cached setuptools-49.4.0.zip (2.2 MB) Using cached setuptools-49.3.2-py3-none-any.whl (790 kB) Using cached setuptools-49.3.2.zip (2.2 MB) Using cached setuptools-49.3.1-py3-none-any.whl (790 kB) Using cached setuptools-49.3.1.zip (2.2 MB) Using cached setuptools-49.3.0-py3-none-any.whl (790 kB) Using cached setuptools-49.3.0.zip (2.2 MB) Using cached setuptools-49.2.1-py3-none-any.whl (789 kB) Using cached setuptools-49.2.1.zip (2.2 MB) Using cached setuptools-49.2.0-py3-none-any.whl (789 kB) Using cached setuptools-49.2.0.zip (2.2 MB) Using cached setuptools-49.1.3-py3-none-any.whl (789 kB) Using cached setuptools-49.1.3.zip (2.2 MB) Using cached setuptools-49.1.2-py3-none-any.whl (789 kB) Using cached setuptools-49.1.2.zip (2.2 MB) Using cached setuptools-49.1.1-py3-none-any.whl (789 kB) Using cached setuptools-49.1.1.zip (2.2 MB) Using cached setuptools-49.1.0-py3-none-any.whl (789 kB) Using cached setuptools-49.1.0.zip (2.2 MB) Using cached setuptools-49.0.1-py3-none-any.whl (789 kB) Using cached setuptools-49.0.1.zip (2.2 MB) ... Too long ... ---------------------------------------- WARNING: Discarding https://files.pythonhosted.org/packages/4f/b3/51ef01e9af978f6ddc388754a201a5abb316fb8c84293901c92c52344b57/setuptools-0.9.2.zip#sha256=3713572ca0adb93e52a8aabfe1321f616b196dbd2121bc918b1fe829c312f715 (from https://pypi.org/simple/setuptools/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. 
Using cached setuptools-0.9.2.tar.gz (764 kB) ERROR: Command errored out with exit status 1: command: /opt/python/run/venv/bin/python3.6 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-5ym3esw7/setuptools_20237e3eec0a4c279498d1832c687656/setup.py'"'"'; __file__='"'"'/tmp/pip-install-5ym3esw7/setuptools_20237e3eec0a4c279498d1832c687656/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-5fwh_uaz cwd: /tmp/pip-install-5ym3esw7/setuptools_20237e3eec0a4c279498d1832c687656/ Complete output (15 lines): Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-5ym3esw7/setuptools_20237e3eec0a4c279498d1832c687656/setuptools/__init__.py", line 2, in <module> from setuptools.extension import Extension, Library File "/tmp/pip-install-5ym3esw7/setuptools_20237e3eec0a4c279498d1832c687656/setuptools/extension.py", line 5, in <module> from setuptools.dist import _get_unpatched File "/tmp/pip-install-5ym3esw7/setuptools_20237e3eec0a4c279498d1832c687656/setuptools/dist.py", line 7, in <module> from setuptools.command.install import install File "/tmp/pip-install-5ym3esw7/setuptools_20237e3eec0a4c279498d1832c687656/setuptools/command/__init__.py", line 8, in <module> from setuptools.command import install_scripts File "/tmp/pip-install-5ym3esw7/setuptools_20237e3eec0a4c279498d1832c687656/setuptools/command/install_scripts.py", line 3, in <module> from pkg_resources import Distribution, PathMetadata, ensure_directory File "/tmp/pip-install-5ym3esw7/setuptools_20237e3eec0a4c279498d1832c687656/pkg_resources.py", line 1545, in <module> register_loader_type(importlib_bootstrap.SourceFileLoader, DefaultProvider) AttributeError: module 'importlib._bootstrap' has no attribute 'SourceFileLoader' ---------------------------------------- WARNING: Discarding https://files.pythonhosted.org/packages/2c/02/3e1e2e547114b6a659923c9d88fa74adec9aa46d46a48f70cd02b9fb4646/setuptools-0.9.2.tar.gz#sha256=5c35683a5473e803a3e192a55c0d86ac3848e8888940dbebbfc6981aa48aa626 (from https://pypi.org/simple/setuptools/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. 
Using cached setuptools-0.9.1.zip (832 kB) ERROR: Command errored out with exit status 1: command: /opt/python/run/venv/bin/python3.6 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-5ym3esw7/setuptools_b1f18f92b6014cf09a2c38f0c8f5317c/setup.py'"'"'; __file__='"'"'/tmp/pip-install-5ym3esw7/setuptools_b1f18f92b6014cf09a2c38f0c8f5317c/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-p15lass8 cwd: /tmp/pip-install-5ym3esw7/setuptools_b1f18f92b6014cf09a2c38f0c8f5317c/ Complete output (15 lines): Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-5ym3esw7/setuptools_b1f18f92b6014cf09a2c38f0c8f5317c/setuptools/__init__.py", line 2, in <module> from setuptools.extension import Extension, Library File "/tmp/pip-install-5ym3esw7/setuptools_b1f18f92b6014cf09a2c38f0c8f5317c/setuptools/extension.py", line 5, in <module> from setuptools.dist import _get_unpatched File "/tmp/pip-install-5ym3esw7/setuptools_b1f18f92b6014cf09a2c38f0c8f5317c/setuptools/dist.py", line 7, in <module> from setuptools.command.install import install File "/tmp/pip-install-5ym3esw7/setuptools_b1f18f92b6014cf09a2c38f0c8f5317c/setuptools/command/__init__.py", line 8, in <module> from setuptools.command import install_scripts File "/tmp/pip-install-5ym3esw7/setuptools_b1f18f92b6014cf09a2c38f0c8f5317c/setuptools/command/install_scripts.py", line 3, in <module> from pkg_resources import Distribution, PathMetadata, ensure_directory File "/tmp/pip-install-5ym3esw7/setuptools_b1f18f92b6014cf09a2c38f0c8f5317c/pkg_resources.py", line 1545, in <module> register_loader_type(importlib_bootstrap.SourceFileLoader, DefaultProvider) AttributeError: module 'importlib._bootstrap' has no attribute 'SourceFileLoader' ---------------------------------------- WARNING: Discarding https://files.pythonhosted.org/packages/9c/46/83b866faeab163db1c4a9fddf93e7d60e28ce2a97cf2669667551f496294/setuptools-0.9.1.zip#sha256=96beffdca47822f90f8e766edd714f3e1b6ca25ef19ea63105b25c0f8b0a384c (from https://pypi.org/simple/setuptools/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. 
Using cached setuptools-0.9.1.tar.gz (764 kB) ERROR: Command errored out with exit status 1: command: /opt/python/run/venv/bin/python3.6 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-5ym3esw7/setuptools_5101018cccae4f75bea590debe61f9ef/setup.py'"'"'; __file__='"'"'/tmp/pip-install-5ym3esw7/setuptools_5101018cccae4f75bea590debe61f9ef/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-12bms7x7 cwd: /tmp/pip-install-5ym3esw7/setuptools_5101018cccae4f75bea590debe61f9ef/ Complete output (15 lines): Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-5ym3esw7/setuptools_5101018cccae4f75bea590debe61f9ef/setuptools/__init__.py", line 2, in <module> from setuptools.extension import Extension, Library File "/tmp/pip-install-5ym3esw7/setuptools_5101018cccae4f75bea590debe61f9ef/setuptools/extension.py", line 5, in <module> from setuptools.dist import _get_unpatched File "/tmp/pip-install-5ym3esw7/setuptools_5101018cccae4f75bea590debe61f9ef/setuptools/dist.py", line 7, in <module> from setuptools.command.install import install File "/tmp/pip-install-5ym3esw7/setuptools_5101018cccae4f75bea590debe61f9ef/setuptools/command/__init__.py", line 8, in <module> from setuptools.command import install_scripts File "/tmp/pip-install-5ym3esw7/setuptools_5101018cccae4f75bea590debe61f9ef/setuptools/command/install_scripts.py", line 3, in <module> from pkg_resources import Distribution, PathMetadata, ensure_directory File "/tmp/pip-install-5ym3esw7/setuptools_5101018cccae4f75bea590debe61f9ef/pkg_resources.py", line 1545, in <module> register_loader_type(importlib_bootstrap.SourceFileLoader, DefaultProvider) AttributeError: module 'importlib._bootstrap' has no attribute 'SourceFileLoader' ---------------------------------------- WARNING: Discarding https://files.pythonhosted.org/packages/1a/52/645c11a1c57513a43a557cf752833c19223f365771e30c88637170026ef7/setuptools-0.9.1.tar.gz#sha256=00340736e0dd9aa66aed3f52c015080c7fdd7855c4879a13fa5f18afa65ebbb9 (from https://pypi.org/simple/setuptools/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. Using cached setuptools-0.9.zip (824 kB) ERROR: Could not install packages due to an EnvironmentError: [Errno 28] No space left on device (ElasticBeanstalk::ExternalInvocationError) | I answer my question myself. To find the cause of the problem, I created and tested a new virtualenv within the beanstalk instance. At first, pip install --upgrade setuptools, pip install --upgrade pip works properly. But after upgrading pip to the latest (2020.3.2), pip install --upgrade setuptools make the same problem. And When I downgraded pip to 2020.3.1 using pip install --upgrade pip==2020.3.1, it solved the problem! So the root cause is pip2020.3.2. It seems that pip 2020.3.2 is yanked release yet (https://pypi.org/project/pip/20.3.2/). I'm not sure why the yanked release is installed. I tested it in Linux/2.9.16(python 3.6), MacOS11.0.1(python 3.6, 3.9) and got the same results. Anyway, hope this helps others who are having similar problems. Below is the troubleshooting process. [ec2-user@ip-... ~]$ python3 -m venv test-env [ec2-user@ip-... ~]$ source ./test-env/bin/activate (test-env) [ec2-user@ip-... 
~]$ pip --version pip 18.1 from /home/ec2-user/test-env/lib64/python3.6/dist-packages/pip (python 3.6) (test-env) [ec2-user@ip-... ~]$ pip install --upgrade setuptools Collecting setuptools Using cached https://files.pythonhosted.org/packages/3d/f2/1489d3b6c72d68bf79cd0fba6b6c7497df4ebf7d40970e2d7eceb8d0ea9c/setuptools-51.0.0-py3-none-any.whl Installing collected packages: setuptools Found existing installation: setuptools 40.6.2 Uninstalling setuptools-40.6.2: Successfully uninstalled setuptools-40.6.2 Successfully installed setuptools-51.0.0 You are using pip version 18.1, however version 20.3.2 is available. You should consider upgrading via the 'pip install --upgrade pip' command. (test-env) [ec2-user@ip-... ~]$ pip install --upgrade pip Collecting pip Using cached https://files.pythonhosted.org/packages/3d/0c/01014c0442830eb38d6baef0932fdcb389279ce74295350ecb9fe09e048a/pip-20.3.2-py2.py3-none-any.whl Installing collected packages: pip Found existing installation: pip 18.1 Uninstalling pip-18.1: Successfully uninstalled pip-18.1 Successfully installed pip-20.3.2 (test-env) [ec2-user@ip-... ~]$ pip install --upgrade setuptools Requirement already satisfied: setuptools in ./test-env/lib/python3.6/dist-packages (51.0.0) Collecting setuptools Using cached setuptools-51.0.0-py3-none-any.whl (785 kB) Using cached setuptools-51.0.0.zip (2.1 MB) Using cached setuptools-50.3.2-py3-none-any.whl (785 kB) Using cached setuptools-50.3.2.zip (2.1 MB) Using cached setuptools-50.3.1-py3-none-any.whl (785 kB) Using cached setuptools-50.3.1.zip (2.1 MB) Using cached setuptools-50.3.0-py3-none-any.whl (785 kB) Using cached setuptools-50.3.0.zip (2.2 MB) Using cached setuptools-50.2.0-py3-none-any.whl (784 kB) Using cached setuptools-50.2.0.zip (2.2 MB) Using cached setuptools-50.1.0-py3-none-any.whl (784 kB) Using cached setuptools-50.1.0.zip (2.2 MB) Using cached setuptools-50.0.3-py3-none-any.whl (784 kB) Using cached setuptools-50.0.3.zip (2.2 MB) ^CERROR: Operation cancelled by user (test-env) [ec2-user@ip-... ~]$ pip --version pip 20.3.2 from /home/ec2-user/test-env/lib64/python3.6/dist-packages/pip (python 3.6) (test-env) [ec2-user@ip-... ~]$ pip install --upgrade pip==20.3.1 Collecting pip==20.3.1 Using cached pip-20.3.1-py2.py3-none-any.whl (1.5 MB) Installing collected packages: pip Attempting uninstall: pip Found existing installation: pip 20.3.2 Uninstalling pip-20.3.2: Successfully uninstalled pip-20.3.2 Successfully installed pip-20.3.1 (test-env) [ec2-user@ip-... ~]$ pip install --upgrade setuptools Requirement already satisfied: setuptools in ./test-env/lib/python3.6/dist-packages (51.0.0) (test-env) [ec2-user@ip-... ~]$ | 7 | 6 |
65,293,813 | 2020-12-14 | https://stackoverflow.com/questions/65293813/whats-the-best-way-to-find-lines-on-a-very-poor-quality-image-knowing-the-angl | i'm trying to find theses two horizontal lines with the Houghlines transform. As you can see, the picture is very noisy ! Currently my workflow looks like this : crop the image blur it low the noise (for that, I invert the image, and then substract the blured image to the inverted one) open it and dilate it with an "horizontal kernel" (kernel_1 = np.ones((10,1), np.uint8) threshold Houglines the results are not as good as expected... is there a better strategy, knowing that I will always serach for horizontal lines (hence, abs(theta) will always be closed to 0 or pi) | the issue is the noise and the faint signal. you can subdue the noise with averaging/integration, while maintaining the signal because it's replicated along a dimension (signal is a line). your approach using a very wide but narrow kernel can be extended to simply integrating along the whole image. rotate the image so the suspected line is aligned with an axis (let's say horizontal) sum up all pixels of one scanline (horizontal line), np.sum(axis=1) or mean, either way mind the data type. working with floats is convenient. work with the 1-dimensional series of values. this will not tell you how long the line is, only that it's there and potentially spanning the whole width. edit: since my answer got a reaction, I'll elaborate as well: I think you can lowpass that to get the "gray" baseline, then subtract ("difference of gaussians"). that should give you a nice signal. import numpy as np import cv2 as cv import matplotlib.pyplot as plt import scipy.ndimage im = cv.imread("0gczo.png", cv.IMREAD_GRAYSCALE) / np.float32(255) relief = im.mean(axis=1) smoothed = scipy.ndimage.gaussian_filter(relief, sigma=2.0) baseline = scipy.ndimage.gaussian_filter(relief, sigma=10.0) difference = smoothed - baseline std = np.std(difference) level = 2 outliers = (difference <= std * -level) plt.plot(difference) plt.hlines([std * +level, std * -level], xmin=0, xmax=len(relief)) plt.plot(std * -level + outliers * std) plt.show() # where those peaks are: edgemap = np.diff(outliers.astype(np.int8)) (edges,) = edgemap.nonzero() print(edges) # [392 398 421 427] print(edgemap[edges]) # [ 1 -1 1 -1] | 5 | 9 |
65,295,837 | 2020-12-14 | https://stackoverflow.com/questions/65295837/turn-string-representation-of-interval-into-actual-interval-in-pandas | My problem is kind of simple, but I'm not sure there's a way to do what I'm looking for: I had to store in a SQL database some data, that includes some intervals that will later be used. Because of this, I had to store it as a string, like this: variable interval A (-0.001, 2.0] A (2.0, 6.0] So, then, I want to use said intervals to cut another variable, like this: df1 = pd.DataFrame({'interval': {4: '(-0.001, 2.0]', 5: '(2.0, 6.0]'}, 'variable': {4: 'A', 5: 'A', }}) df2 = pd.DataFrame({'A': [1,1,3]}) bins = df1[df1.variable.eq('A')].interval new_series = pd.cut(df2['A'], bins=bins) But this brings: ValueError: could not convert string to float: '(-0.001, 2.0]' Tried: bins = bins.astype('interval') But this brings: TypeError: type <class 'str'> with value (-0.001, 2.0] is not an interval Is there something I can do? Thanks | IIUC, you could parse the string by hand, then convert bins to IntervalIndex: import ast import pandas as pd def interval_type(s): """Parse interval string to Interval""" table = str.maketrans({'[': '(', ']': ')'}) left_closed = s.startswith('[') right_closed = s.endswith(']') left, right = ast.literal_eval(s.translate(table)) t = 'neither' if left_closed and right_closed: t = 'both' elif left_closed: t = 'left' elif right_closed: t = 'right' return pd.Interval(left, right, closed=t) df1 = pd.DataFrame({'interval': {4: '(-0.001, 2.0]', 5: '(2.0, 6.0]'}, 'variable': {4: 'A', 5: 'A'}}) df1['interval'] = df1['interval'].apply(interval_type) df2 = pd.DataFrame({'A': [1, 1, 3]}) bins = df1[df1.variable.eq('A')].interval new_series = pd.cut(df2['A'], bins=pd.IntervalIndex(bins)) print(new_series) Output 0 (-0.001, 2.0] 1 (-0.001, 2.0] 2 (2.0, 6.0] Name: A, dtype: category Categories (2, interval[float64]): [(-0.001, 2.0] < (2.0, 6.0]] | 6 | 9 |
65,285,516 | 2020-12-14 | https://stackoverflow.com/questions/65285516/ipython-display-how-to-change-width-height-and-resolution-of-a-displayed-image | I am displaying an image of a molecule using IPython.display in Jupyter. The resolution of the image is quite low. Is there a way to specify the width and height of the displayed image and its resolution? I googled it and could not find anything. All I need is something like this: display(moleSmilemol, format='svg', width=1000, height=1000) Any pointers would be appreciated. Update: I could add custom css, which will blow up the picture that was generate, but it is still low quality. I am interested increasing the quality of the picture and its size too. So something deeper than CSS is needed. | Try changing the variables in the rdkit.Chem.Draw.IPythonConsole module: from rdkit.Chem.Draw import IPythonConsole IPythonConsole.molSize = (800, 800) # Change image size IPythonConsole.ipython_useSVG = True # Change output to SVG mol = Chem.MolFromSmiles('N#Cc1cccc(-c2nc(-c3cccnc3)no2)c1') display(mol) Otherwise, you need to use the rdMolDraw2D module to create the drawings yourself with the parameters you require. | 6 | 3 |
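The accepted answer above names the rdMolDraw2D module without showing it, so here is a minimal hedged sketch of drawing a molecule at an explicit canvas size with that module (the SMILES string is taken from the answer; the rest is an illustrative assumption, not the answer author's code):

from rdkit import Chem
from rdkit.Chem.Draw import rdMolDraw2D

mol = Chem.MolFromSmiles('N#Cc1cccc(-c2nc(-c3cccnc3)no2)c1')

drawer = rdMolDraw2D.MolDraw2DSVG(800, 800)   # width, height in pixels
drawer.DrawMolecule(mol)
drawer.FinishDrawing()
svg = drawer.GetDrawingText()                 # SVG markup as a string

# In a notebook the SVG can then be shown with IPython.display:
# from IPython.display import SVG, display
# display(SVG(svg))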
65,284,942 | 2020-12-14 | https://stackoverflow.com/questions/65284942/what-is-a-python-pandas-equivalent-to-rs-with | In R I can have a data.frame or a list with several arguments, and I can operate on them using the with function. For example: d <- data.frame(x = 1:3, y = 2:4, z = 3:5) # I can use: d$x+d$y*d$z-5 # Or, more simply, I can use: with(d, x+y*z-5) # [1] 2 9 18 In pandas DataFrame I can use: d = {'x': [1, 2, 3], 'y': [2, 3, 4], 'z': [3, 4, 5]} df = pd.DataFrame(data=d) df.x+df.y*df.z-5 # 0 2 # 1 9 # 2 18 # dtype: int64 But is there a way to do some "with" like statement? | One idea is to use DataFrame.eval if you need to evaluate simple arithmetic expressions over column names: print (df.x+df.y*df.z-5) 0 2 1 9 2 18 dtype: int64 print (df.eval('x+y*z-5')) 0 2 1 9 2 18 dtype: int64 | 8 | 8 |
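A short hedged follow-up to the eval-based answer above: DataFrame.eval also resolves local Python variables with @ and can assign a new column in the same expression, which gets a little closer to R's with/within. The variable names below are illustrative assumptions:

import pandas as pd

df = pd.DataFrame({'x': [1, 2, 3], 'y': [2, 3, 4], 'z': [3, 4, 5]})

offset = 5                              # local variable, referenced via @
print(df.eval('x + y*z - @offset'))     # 2, 9, 18 as in the question

df = df.eval('w = x + y*z - @offset')   # adds a column w holding the result
print(df)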
65,233,882 | 2020-12-10 | https://stackoverflow.com/questions/65233882/among-the-many-python-file-copy-functions-which-ones-are-safe-if-the-copy-is-in | As seen in How do I copy a file in Python?, there are many file copy functions: shutil.copy shutil.copy2 shutil.copyfile (and also shutil.copyfileobj) or even a naive method: with open('sourcefile', 'rb') as f, open('destfile', 'wb') as g: while True: block = f.read(16*1024*1024) # work by blocks of 16 MB if not block: # EOF break g.write(block) Among all these methods, which ones are safe in the case of a copy interruption (example: kill the Python process)? The last one in the list looks ok. By safe I mean: if a 1 GB file copy is not 100% finished (let's say it's interrupted in the middle of the copy, after 400MB), the file size should not be reported as 1GB in the filesystem, it should: either report the size the file had when the last bytes were written (e.g. 400MB) or be deleted The worst would be that the final filesize is written first (internally with an fallocate or ftruncate?). This would be a problem if the copy is interrupted: by looking at the file-size, we would think the file is correctly written. Many incremental backup programs (I'm coding one) use "filename+mtime+fsize" to check if a file has to be copied or if it's already there (of course a better solution is to SHA256 source and destination files but this is not done for every sync, too much time-consuming; off-topic here). So I want to make sure that the "copy file" function does not store the final file size immediately (then it could fool the fsize comparison), before copying the actual file content. Note: I'm asking the question because, while shutil.filecopy was rather straighforward on Python 3.7 and below, see source (which is more or less the naive method above), it seems much more complicated on Python 3.9, see source, with many different cases for Windows, Linux, MacOS, "fastcopy" tricks, etc. | Assuming that destfile does not exist prior to the copy, the naive method is safe, per your definition of safe. shutil.copyfileobj() and shutil.copyfile() are close second in the ranking. shutils.copy() is next, and shutils.copy2() would be last. Explanation: It is a filesystem's job to guarantee consistency based on application requests. If you are only writing X bytes to a file, the file size will only account for these X bytes. Therefore, doing direct FS operations like the naive method will work. It is now a matter of what these higher-level functions do with the filesystem. The API doesn't state what happens if python crashes mid-copy, but it is a de-facto expectation from everyone that these functions behave like Unix cp, i.e. don't mess with the file size. Assuming that the maintainers of CPython don't want to break people's expectations, all these functions should be safe per your definition. That said, it isn't guaranteed anywhere, AFAICT. However, shutil.copyfileobj() and shutil.copyfile() expressly have their API promise to not copy metadata, so they're not likely to try and set the size. shutils.copy() wouldn't try to set the file size, only the mode, and in most filesystems setting the size and the mode require two different filesystem operations, so it should still be safe. shutils.copy2() says it will copy metadata, and if you look at its source code, you'll see that it only copies the metadata after copying the data, so even that should be safe. Even more, copying the metadata doesn't copy the size. 
So this would only be a problem if some of the internal functions Python uses try to optimize using ftruncate(), fallocate(), or some such, which is unlikely given that people who write system APIs (like the Python maintainers) are very aware of people's expectations. | 15 | 8 |
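A hedged addendum to the copy-safety discussion above: if the real requirement is that an interrupted copy can never be mistaken for a finished file, writing to a temporary name and renaming at the end sidesteps the question entirely. This is a sketch, not one of the stdlib functions ranked above; the helper name and the ".part" suffix are made up for illustration:

import os
import shutil

def copy_atomically(src, dst):
    # Copy src to dst so that dst only ever appears once it is fully written.
    tmp = dst + ".part"
    with open(src, "rb") as f, open(tmp, "wb") as g:
        shutil.copyfileobj(f, g, length=16 * 1024 * 1024)
        g.flush()
        os.fsync(g.fileno())    # push the bytes to disk before exposing the file
    os.replace(tmp, dst)        # single rename step; no half-written dst is ever visible

copy_atomically("sourcefile", "destfile")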
65,282,049 | 2020-12-14 | https://stackoverflow.com/questions/65282049/local-scope-vs-relative-imports-inside-init-py | I've noticed that asyncio/init.py from python 3.6 uses the following construct: from .base_events import * ... __all__ = (base_events.__all__ + ...) The base_events symbol is not imported anywhere in the source code, yet the module still contains a local variable for it. I've checked this behavior with the following code, put into an __init__.py with a dummy test.py next to it: test = "not a module" print(test) from .test import * print(test) not a module <module 'testpy.test' from 'C:\Users\MrM\Desktop\testpy\test.py'> Which means that the test variable got shadowed after using a star import. I fiddled with it a bit, and it turns out that it doesn't have to be a star import, but it has to be inside an __init__.py, and it has to be relative. Otherwise the module object is not being assigned anywhere. Without the assignment, running the above example from a file that isn't an __init__.py will raise a NameError. Where is this behavior coming from? Has this been outlined in the spec for import system somewhere? What's the reason behind __init__.py having to be special in this way? It's not in the reference, or at least I couldn't find it. | This behavior is defined in The import system documentation section 5.4.2 Submodules When a submodule is loaded using any mechanism (e.g. importlib APIs, the import or import-from statements, or built-in import()) a binding is placed in the parent moduleβs namespace to the submodule object. For example, if package spam has a submodule foo, after importing spam.foo, spam will have an attribute foo which is bound to the submodule. A package namespace includes the namespace created in __init__.py plus extras added by the import system. The why is for namespace consistency. Given Pythonβs familiar name binding rules this might seem surprising, but itβs actually a fundamental feature of the import system. The invariant holding is that if you have sys.modules['spam'] and sys.modules['spam.foo'] (as you would after the above import), the latter must appear as the foo attribute of the former. | 9 | 8 |
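A small hedged illustration of the binding described in the answer above; the package and module names are invented for the example:

# Layout: pkg/__init__.py is empty; pkg/helper.py contains a single line: VALUE = 42
import pkg.helper

print(pkg.helper.VALUE)   # 42
import pkg                # already cached; just to make the point explicit
print(pkg.helper)         # <module 'pkg.helper' ...>: the binding the import system
                          # placed on the parent package, exactly as the docs describe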
65,280,790 | 2020-12-13 | https://stackoverflow.com/questions/65280790/install-newer-version-of-sqlite3-on-aws-lambda | I want to use Window functions on sqlite3 on my python3.8 code running on AWS Lambda. They are available since version 3.25. Unfortunately, on AWS Lambda Python3.8, sqlite3 library is outdated: >>> sqlite3.sqlite_version '3.7.17' while locally, on my homebrew install of Python3.8: (working) >>> import sqlite3 >>> sqlite3.sqlite_version '3.31.1' How can I get an sqlite3 version > 3.25 on AWS Lambda Python 3.8 ? | I found a way: I used the external package pysqlite3, in the binary version. in my requirements.txt pysqlite3-binary==0.4.4 in the code try: import pysqlite3 as sqlite3 except ModuleNotFoundError: import sqlite3 # for local testing because pysqlite3-binary couldn't be installed on macos | 6 | 5 |
65,270,624 | 2020-12-12 | https://stackoverflow.com/questions/65270624/how-to-connect-to-a-sqlite3-db-file-and-fetch-contents-in-fastapi | I have a sqlite.db file which has 5 columns and 10million rows. I have created a api using fastapi, now in one of the api methods I want to connect to that sqlite.db file and fetch content based on certain conditions (based on the columns present). I mostly will be using SELECT and WHERE. How can I do it by also taking advantage of async requests. I have came across Tortoise ORM but I am not sure how to properly use it to fetch results. from fastapi import FastAPI, UploadFile, File, Form from fastapi.middleware.cors import CORSMiddleware DATABASE_URL = "sqlite:///test.db" @app.post("/test") async def fetch_data(id: int): query = "SELECT * FROM tablename WHERE ID={}".format(str(id)) # how can I fetch such query faster from 10 million records while taking advantage of async func return results | You are missing a point here, defining a function with async is not enough. You need to use an asynchronous Database Driver to taking the advantage of using a coroutine. Encode's Databases library is great for this purpose. pip install databases You can also install the required database drivers with: pip install databases[sqlite] In your case, this should do good. from fastapi import FastAPI, UploadFile, File, Form from fastapi.middleware.cors import CORSMiddleware from databases import Database database = Database("sqlite:///test.db") @app.on_event("startup") async def database_connect(): await database.connect() @app.on_event("shutdown") async def database_disconnect(): await database.disconnect() @app.post("/test") async def fetch_data(id: int): query = "SELECT * FROM tablename WHERE ID={}".format(str(id)) results = await database.fetch_all(query=query) return results | 6 | 15 |
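A hedged variation on the accepted answer above, reusing its app and database objects: the databases library also takes named parameters through a values mapping, which avoids building the SQL string with format():

@app.post("/test")
async def fetch_data(id: int):
    query = "SELECT * FROM tablename WHERE ID = :id"
    results = await database.fetch_all(query=query, values={"id": id})
    return results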
65,252,463 | 2020-12-11 | https://stackoverflow.com/questions/65252463/mypy-class-forward-references-in-type-alias-gives-error-when-in-other-module | I want to keep my type aliases in one module, say my_types, to be able to use them anywhere in my application (similar to the standard typing module). But mypy complains that the forward reference to class X is not defined. If I define class X later in that same module, itβs okay, but if it defined in another one, mypy gets upset. So my question is, how do I keep all my type aliases in one module without mypy producing an error about forward references that are not defined in the same module? Or is that a wrong approach somehow? Here is my example code: from my_types import SomeXs class X: pass Type aliases are defined like so: # my_types.py from typing import List SomeXs = List['X'] When I run mypy, I get an error that X is not defined: $ mypy module.py my_types.py:4: error: Name 'X' is not defined Found 1 error in 1 file (checked 1 source file) | Iβm gonna share the solution I found in the mypy documentation common issues section here: It is necessary for mypy to have access to the definition of X. To avoid an import cycle, the mypy documentation recommends a trick - only import the definition that would create a cyclic import when type checking. It goes like this: from typing import List, TYPE_CHECKING if TYPE_CHECKING: from main import X SomeXs = List['X'] This makes the Python interpreter ignore the import when executing this code, but mypy still uses the definition of X from main. Et voilΓ‘: $ mypy *.py Success: no issues found in 2 source files | 6 | 12 |
65,263,061 | 2020-12-12 | https://stackoverflow.com/questions/65263061/selecting-multiple-columns-to-plot-with-plotly-python | I have the following code: def campaign_plot(col1,col2): grouper = df.groupby(['Day','Campaign']).agg({col1: 'sum', col2: 'mean'}).unstack() result = grouper.fillna(0) fig = go.Figure() fig.add_trace(go.Scatter( x = result.index, y = result.iloc[:, [0, 4]], #<--- name = '1', line = dict( color = ('rgb(205, 12, 24)'), width = 2) )) fig.show() I want to create a plot using the first and fifth columns of a dataframe. If i just do result.iloc[:, [0, 4]] this outputs the correct columns of the dataframe. But in the plot it just outputs two points. How can I fix this? EDIT: Here is a snippet of the dataframe, which is grouped: Day Campaign Clicks CTR 0 2013-08-05 1 0 0 1 2013-08-05 3 1 0.5 2 2013-08-05 7 0 0.2 3 2013-08-05 15 5 3 4 2013-08-08 1 6 0.1 5 2013-08-08 3 1 0 6 2013-08-08 7 15 4.5 7 2013-08-08 15 0 1 8 2013-08-10 1 6 2.2 9 2013-08-10 3 20 0 10 2013-08-10 7 1 0.2 11 2013-08-10 15 1 0.1 So in the function, col1 is Clicks and col2 is CTR. Clicks is summed while CTR is averaged. The above dataframe is then grouped by Campaign and by Day, so that in the graph the x axis is the day and each Campaign has a separate line. | Another approach would be to melt you dataframe. Here is an example of how you could do this.Suppose that you have the following dataframe: Date High Low Open Close Volume \ 0 2019-01-02 19.000000 17.980000 18.010000 18.830000 87148700 1 2019-01-03 18.680000 16.940001 18.420000 17.049999 117277600 2 2019-01-04 19.070000 17.430000 17.549999 19.000000 111878600 3 2019-01-07 20.680000 19.000000 19.440001 20.570000 107157000 4 2019-01-08 21.200001 19.680000 21.190001 20.750000 121271000 .. ... ... ... ... ... ... 458 2020-10-26 84.970001 80.860001 82.550003 82.230003 69423700 459 2020-10-27 82.370003 77.570000 82.000000 78.879997 156669500 460 2020-10-28 78.959999 75.760002 78.730003 76.400002 76529900 461 2020-10-29 79.180000 76.290001 76.750000 78.019997 52784100 462 2020-10-30 77.699997 74.230003 77.089996 75.290001 51349000 and that you wish to plot column High and Close. Then, an easy way to do this would be: pd.options.plotting.backend = "plotly" df.plot(x='Date', y=[ 'High', 'Close']) df_melt = df.melt(id_vars='Date', value_vars=['High', 'Close']) px.line(df_melt, x='Date' , y='value' , color='variable') EDIT: Adaptation of solution to the actual data The problem you are facing is the facts that that you have, after grouping, a multi-level indexing, which makes it hard to work with in this context. A work-around is to drop them. I am not an expert but I do this this way (usually). First, I want to drop the indexes in such a way that I will keep track of the columns (the name need to correspond to Clicks and CTR AND Campaign). I therefore need to make the Campaign number a string, then do the groupby that you did df['Campaign'] = df['Campaign'].astype(str) grouper = df.groupby(['Day','Campaign']).agg({'Clicks': 'sum', 'CTR': 'mean'}).unstack() Now, comes the tricky part of reindexing (uggly but it works) a = grouper.columns ind = pd.Index([e[0] + e[1] for e in a.tolist()]) grouper.columns = ind result = grouper.reset_index() which gives: Day Clicks1 Clicks15 Clicks3 Clicks7 CTR1 CTR15 CTR3 CTR7 0 2013-08-05 0 5 1 0 0.0 3.0 0.5 0.2 1 2013-08-08 6 0 1 15 0.1 1.0 0.0 4.5 2 2013-08-10 6 1 20 1 2.2 0.1 0.0 0.2 The last step is the plotting. 
pd.options.plotting.backend = "plotly" result.plot(x='Day', y=[ 'Clicks1', 'CTR1']) result_melt = result.melt(id_vars='Day', value_vars= ['Clicks1', 'CTR1']) px.line(result_melt, x='Day' , y='value' , color='variable') In your function, you'll have to replace ['Clicks1', 'CTR1'] by ['col1', 'col2'] which returns the following plot: | 6 | 5 |
65,258,942 | 2020-12-11 | https://stackoverflow.com/questions/65258942/remove-duplicate-value-from-list-of-tuples-based-on-values-from-another-list | I have 2 lists similar to these: l1 = [('zero', 0),('one', 2),('two', 3),('three', 3),('four', 5)] l2 = [('zero', 0),('one', 3),('four', 2),('ten', 3),('twelve', 8)] I want to compare the lists and remove duplicates from both if both values are the same if the first value is a match remove the tuple from the list where the second value is lower I can do the first with l3 = [(a,b) for (a,b) in l1 if (a,b) not in l2] l4 = [(a,b) for (a,b) in l2 if (a,b) not in l1] or using set though it doesn't preserve order l3 = set(l1) - set(l2) but I'm having a hard time figuring out the second. I tried to start with removing based on just the first value with l3 = [(a,b) for (a,b) in l1 if a not in l2] but that doesn't work. My desired output for l3 & l4 is: l3 [('two', 3),('three', 3),('four', 5)] l4 [('one', 3),('ten', 3),('twelve', 8)] Any guidance would be appreciated. | You could do: d1 = dict(l1) d2 = dict(l2) l3 = [(k, v) for k, v in d1.items() if k not in d2 or d2[k] < v] l4 = [(k, v) for k, v in d2.items() if k not in d1 or d1[k] < v] print(l3) print(l4) Output [('two', 3), ('three', 3), ('four', 5)] [('one', 3), ('ten', 3), ('twelve', 8)] The idea is to use dictionaries for fast lookups of matching first values, if any, and then check if the corresponding second value is less to the one in the current list. | 7 | 6 |
65,247,307 | 2020-12-11 | https://stackoverflow.com/questions/65247307/find-the-minimum-possible-difference-between-two-arrays | I am struggling to figure out an efficient algorithm to perform the following task: Given two arrays A and B with equal length, the difference between the two arrays is defined as: diff = |a[0]-b[0]| + |a[1]-b[1]| +...+|a[a.length-1]-b[b.length-1]| I am required to find the minimum possible difference between A and B, and I am allowed to replace at most one element from A with any other element in A. Note that you are not required to perform a replacement. For example: A = [1,3,5] B = [5,3,1] If we replace A[2] with A[0], then the difference between the two arrays is: |1-5| + |3-3| + |1-1| = 4 This is the minimal possible difference between the two arrays. No other replacement of an element in A with another element in A would result in a smaller difference between A and B. How would I go about solving this problem? I know how to solved the problem in O(n^2) (brute force), but struggling to figure out a more efficient way. Thanks! | I'll implement Shridhar's suggestion of identifying the best modification for each element individually in O(n log n) time and taking the best one. import bisect def abs_diff(x, y): return abs(x - y) def find_nearest(sorted_a, y): i = bisect.bisect(sorted_a, y) return min( sorted_a[max(i - 1, 0) : min(i + 1, len(sorted_a))], key=lambda z: abs_diff(z, y), ) def improvement(x, y, z): return abs_diff(x, y) - abs_diff(z, y) def min_diff(a, b): sorted_a = sorted(a) nearest = [find_nearest(sorted_a, y) for y in b] return sum(map(abs_diff, a, b)) - max(map(improvement, a, b, nearest)) print(min_diff([1, 3, 5], [5, 3, 1])) | 9 | 3 |
65,246,703 | 2020-12-11 | https://stackoverflow.com/questions/65246703/how-does-max-length-padding-and-truncation-arguments-work-in-huggingface-bertt | I am working with Text Classification problem where I want to use the BERT model as the base followed by Dense layers. I want to know how does the 3 arguments work? For example, if I have 3 sentences as: 'My name is slim shade and I am an aspiring AI Engineer', 'I am an aspiring AI Engineer', 'My name is Slim' SO what will these 3 arguments do? What I think is as follows: max_length=5 will keep all the sentences as of length 5 strictly padding=max_length will add a padding of 1 to the third sentence truncate=True will truncate the first and second sentence so that their length will be strictly 5. Please correct me if I am wrong. Below is my code which I have used. ! pip install transformers==3.5.1 from transformers import BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased') tokens = tokenizer.batch_encode_plus(text,max_length=5,padding='max_length', truncation=True) text_seq = torch.tensor(tokens['input_ids']) text_mask = torch.tensor(tokens['attention_mask']) | What you have assumed is almost correct, however, there are few differences. max_length=5, the max_length specifies the length of the tokenized text. By default, BERT performs word-piece tokenization. For example the word "playing" can be split into "play" and "##ing" (This may not be very precise, but just to help you understand about word-piece tokenization), followed by adding [CLS] token at the beginning of the sentence, and [SEP] token at the end of sentence. Thus, it first tokenizes the sentence, truncates it to max_length-2 (if truncation=True), then prepend [CLS] at the beginning and [SEP] token at the end.(So a total length of max_length) padding='max_length', In this example it is not very evident that the 3rd example will be padded, as the length exceeds 5 after appending [CLS] and [SEP] tokens. However, if you have a max_length of 10. The tokenized text corresponds to [101, 2026, 2171, 2003, 11754, 102, 0, 0, 0, 0], where 101 is id of [CLS] and 102 is id of [SEP] tokens. Thus, padded by zeros to make all the text to the length of max_length Likewise, truncate=True will ensure that the max_length is strictly adhered, i.e, longer sentences are truncated to max_length only if truncate=True | 27 | 35 |
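To make the explanation above concrete, here is a small hedged sketch using the same fast tokenizer as the question, with max_length=10 as in the answer's walk-through (exact ids depend on the bert-base-uncased vocabulary):

from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')

enc = tokenizer('My name is Slim', max_length=10, padding='max_length', truncation=True)
print(enc['input_ids'])       # [CLS] my name is slim [SEP] then zeros up to length 10
print(enc['attention_mask'])  # 1 for real tokens, 0 for the padded positions
print(tokenizer.convert_ids_to_tokens(enc['input_ids']))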
65,244,798 | 2020-12-11 | https://stackoverflow.com/questions/65244798/in-python-how-do-i-type-hint-has-attribute | Consider this contrived example; @dataclass class A: name: str = "John" .... @dataclass class B: name: str = "Doe" Q: How do I type hint an object that has an attribute, such as the following? def print_name(obj: HasAttr['name']) print(obj.name) I understand the SO rule on showing what you have tried. The best I can offer is that I've searched the docs; Pep526, PythonSheets, Docs, and am aware of this SO Question. None seem to help (or maybe I missed it.) [Edit] I recognize that you can get there with inheritance, but I don't want to go that route. | So, what you are describing is structural typing. This is distinct from the class-based nominal subtyping that the python typing system is based on. However, structural subtyping is sort of the statically typed version of Python's dynamic duck typing. Python's typing system allows a form of this through typing.Protocol. An example, suppose we have a Python module, test_typing.py: from typing import Protocol from dataclasses import dataclass class Named(Protocol): name: str @dataclass class A: name: str id: int @dataclass class B: name: int @dataclass class C: foo: str def frobnicate(obj: Named) -> int: return sum(map(ord, obj.name)) frobnicate(A('Juan', 1)) frobnicate(B(8)) frobnicate(C('Jon')) Using mypy version 0.790: (py38) juanarrivillaga@Juan-Arrivillaga-MacBook-Pro ~ % mypy test_typing.py test_typing.py:28: error: Argument 1 to "frobnicate" has incompatible type "B"; expected "Named" test_typing.py:28: note: Following member(s) of "B" have conflicts: test_typing.py:28: note: name: expected "str", got "int" test_typing.py:29: error: Argument 1 to "frobnicate" has incompatible type "C"; expected "Named" Found 2 errors in 1 file (checked 1 source file) | 9 | 11 |
65,240,677 | 2020-12-10 | https://stackoverflow.com/questions/65240677/django-admin-interface-how-to-change-user-password | In Django: I have created a super user and can view all the users I have also implemented forgot password for my user, who can input their email and a password reset link is sent to their email and then the user can reset his password But how can admin change some users password from the admin dashboard | This answer is just an extension of answer by @kunal Sharma To change user password from Django admin Go into the user and click this form, and a form below will be shown, change password there | 19 | 7 |
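The screenshots the answer above relies on do not survive in text form, so as a hedged alternative: an admin can also set a user's password from the Django shell with the standard auth API (the username below is a placeholder):

# python manage.py shell
from django.contrib.auth import get_user_model

User = get_user_model()
user = User.objects.get(username="some_user")   # placeholder username
user.set_password("new-secure-password")        # stores a properly hashed password
user.save()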
65,233,123 | 2020-12-10 | https://stackoverflow.com/questions/65233123/adding-percentage-of-count-to-a-stacked-bar-chart-in-plotly | Given the following chart created in plotly. I want to add the percentage values of each count for M and F categories inside each block. The code used to generate this plot. arr = np.array([ ['Dog', 'M'], ['Dog', 'M'], ['Dog', 'F'], ['Dog', 'F'], ['Cat', 'F'], ['Cat', 'F'], ['Cat', 'F'], ['Cat', 'M'], ['Fox', 'M'], ['Fox', 'M'], ['Fox', 'M'], ['Fox', 'F'], ['Dog', 'F'], ['Dog', 'F'], ['Cat', 'F'], ['Dog', 'M'] ]) df = pd.DataFrame(arr, columns=['A', 'G']) fig = px.histogram(df, x="A", color='G', barmode="stack") fig.update_layout(height=400, width=800) fig.show() | As far as I know histograms in Plotly don't have a text attribute. But you could generate the bar chart yourself and then add the percentage via the text attribute. import numpy as np import pandas as pd import plotly.express as px arr = np.array([ ['Dog', 'M'], ['Dog', 'M'], ['Dog', 'F'], ['Dog', 'F'], ['Cat', 'F'], ['Cat', 'F'], ['Cat', 'F'], ['Cat', 'M'], ['Fox', 'M'], ['Fox', 'M'], ['Fox', 'M'], ['Fox', 'F'], ['Dog', 'F'], ['Dog', 'F'], ['Cat', 'F'], ['Dog', 'M'] ]) df = pd.DataFrame(arr, columns=['A', 'G']) df_g = df.groupby(['A', 'G']).size().reset_index() df_g['percentage'] = df.groupby(['A', 'G']).size().groupby(level=0).apply(lambda x: 100 * x / float(x.sum())).values df_g.columns = ['A', 'G', 'Counts', 'Percentage'] px.bar(df_g, x='A', y=['Counts'], color='G', text=df_g['Percentage'].apply(lambda x: '{0:1.2f}%'.format(x))) | 5 | 16 |
65,238,459 | 2020-12-10 | https://stackoverflow.com/questions/65238459/templatedoesnotexist-at-users-register-bootstrap5-uni-form-html | I am building a registration form for my django project, and for styling it I am using crispy forms. But, when I run my server and go to my registration page, I see this error: Internal Server Error: /users/register/ Traceback (most recent call last): File "C:\Users\Dell\Desktop\Django\microblog\venv\lib\site-packages\django\template\backends\django.py", line 61, in render return self.template.render(context) File "C:\Users\Dell\Desktop\Django\microblog\venv\lib\site-packages\django\template\base.py", line 170, in render return self._render(context) File "C:\Users\Dell\Desktop\Django\microblog\venv\lib\site-packages\django\template\base.py", line 162, in _render return self.nodelist.render(context) File "C:\Users\Dell\Desktop\Django\microblog\venv\lib\site-packages\django\template\base.py", line 938, in render bit = node.render_annotated(context) File "C:\Users\Dell\Desktop\Django\microblog\venv\lib\site-packages\django\template\base.py", line 905, in render_annotated return self.render(context) File "C:\Users\Dell\Desktop\Django\microblog\venv\lib\site-packages\django\template\loader_tags.py", line 150, in render return compiled_parent._render(context) File "C:\Users\Dell\Desktop\Django\microblog\venv\lib\site-packages\django\template\base.py", line 162, in _render return self.nodelist.render(context) File "C:\Users\Dell\Desktop\Django\microblog\venv\lib\site-packages\django\template\base.py", line 938, in render bit = node.render_annotated(context) File "C:\Users\Dell\Desktop\Django\microblog\venv\lib\site-packages\django\template\base.py", line 905, in render_annotated return self.render(context) File "C:\Users\Dell\Desktop\Django\microblog\venv\lib\site-packages\django\template\loader_tags.py", line 62, in render result = block.nodelist.render(context) File "C:\Users\Dell\Desktop\Django\microblog\venv\lib\site-packages\django\template\base.py", line 938, in render bit = node.render_annotated(context) File "C:\Users\Dell\Desktop\Django\microblog\venv\lib\site-packages\django\template\base.py", line 905, in render_annotated return self.render(context) File "C:\Users\Dell\Desktop\Django\microblog\venv\lib\site-packages\django\template\base.py", line 988, in render output = self.filter_expression.resolve(context) File "C:\Users\Dell\Desktop\Django\microblog\venv\lib\site-packages\django\template\base.py", line 698, in resolve new_obj = func(obj, *arg_vals) File "C:\Users\Dell\Desktop\Django\microblog\venv\lib\site-packages\crispy_forms\templatetags\crispy_forms_filters.py", line 60, in as_crispy_form template = uni_form_template(template_pack) File "C:\Users\Dell\Desktop\Django\microblog\venv\lib\site-packages\crispy_forms\templatetags\crispy_forms_filters.py", line 21, in uni_form_template return get_template("%s/uni_form.html" % template_pack) File "C:\Users\Dell\Desktop\Django\microblog\venv\lib\site-packages\django\template\loader.py", line 19, in get_template raise TemplateDoesNotExist(template_name, chain=chain) django.template.exceptions.TemplateDoesNotExist: bootstrap5/uni_form.html The above exception was the direct cause of the following exception: Traceback (most recent call last): File "C:\Users\Dell\Desktop\Django\microblog\venv\lib\site-packages\django\core\handlers\exception.py", line 47, in inner response = get_response(request) File "C:\Users\Dell\Desktop\Django\microblog\venv\lib\site-packages\django\core\handlers\base.py", line 179, in 
_get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "C:\Users\Dell\Desktop\Django\microblog\microblog_project\users\views.py", line 17, in register return render(request, 'users/register.html',context) File "C:\Users\Dell\Desktop\Django\microblog\venv\lib\site-packages\django\shortcuts.py", line 19, in render content = loader.render_to_string(template_name, context, request, using=using) File "C:\Users\Dell\Desktop\Django\microblog\venv\lib\site-packages\django\template\loader.py", line 62, in render_to_string return template.render(context, request) File "C:\Users\Dell\Desktop\Django\microblog\venv\lib\site-packages\django\template\backends\django.py", line 63, in render reraise(exc, self.backend) File "C:\Users\Dell\Desktop\Django\microblog\venv\lib\site-packages\django\template\backends\django.py", line 84, in reraise raise new from exc django.template.exceptions.TemplateDoesNotExist: bootstrap5/uni_form.html This doesn't look like the usual TemplateDoesNotExistError I get. I think this is the error in my crispy form, because if I remove the crispy template tags in my form, everything works fine. Here is my register template: {% extends 'base.html' %} {% load crispy_forms_tags %} {% block title %}Register{% endblock %} {% block content %} <h1>Sign up for a new account</h1> <hr> <form action="" method="post"> {% csrf_token %} {{ form|crispy }} <button type="submit" class="btn btn-sm btn-primary">Register</button> </form> <p> Already have an account? <a href="#">Log in</a> </p> {% endblock %} And just in case it is required, here is my view function: def register(request): form = RegistrationForm() if request.method == 'POST': form = RegistrationForm(data=request.POST) if form.is_valid(): form.save() return HttpResponse("Successfully Registered!!!") context = { 'form':form, } return render(request, 'users/register.html',context) I have specefied CRISPY_TEMPLATE_PACK in my settings.py and also added crispy_forms to my INSTALLED_APPS. Where am I going wrong? EDIT: So I did a bit more research on this topic and this error is probably because crispy forms does not support bootstrap 5 yet. So, by changing the bootstrap5 to bootstrap4 in my CRISPY_TEMPLATE_PACK, the error is solved | Based on the latest crispy form doc, it seems that there is no built-in bootstrap5 for it. Are you sure you are using bootstrap5? Currently, there are only bootstrap, bootstrap3, bootstrap4, and uni-form. You can take a look at your file structure if you even see bootstrap5 folder. | 14 | 4 |
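For reference, a minimal settings.py sketch matching the fix described in the EDIT and the answer above (the app list is trimmed to the relevant entry):

# settings.py
INSTALLED_APPS = [
    # ...
    'crispy_forms',
]

# django-crispy-forms (as of late 2020) ships no bootstrap5 template pack,
# so point it at the bootstrap4 pack instead.
CRISPY_TEMPLATE_PACK = 'bootstrap4'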
65,234,748 | 2020-12-10 | https://stackoverflow.com/questions/65234748/what-is-the-numpy-equivalent-of-expand-in-pytorch | Suppose I have a numpy array x of shape [1,5]. I want to expand it along axis 0 such that the resulting array y has shape [10,5] and y[i:i+1,:] is equal to x for each i. If x were a pytorch tensor I could simply do y = x.expand(10,-1) But there is no expand in numpy and the ones that look like it (expand_dims and repeat) don't seem to behave like it. Example: >>> import torch >>> x = torch.randn(1,5) >>> print(x) tensor([[ 1.3306, 0.0627, 0.5585, -1.3128, -1.4724]]) >>> print(x.expand(10,-1)) tensor([[ 1.3306, 0.0627, 0.5585, -1.3128, -1.4724], [ 1.3306, 0.0627, 0.5585, -1.3128, -1.4724], [ 1.3306, 0.0627, 0.5585, -1.3128, -1.4724], [ 1.3306, 0.0627, 0.5585, -1.3128, -1.4724], [ 1.3306, 0.0627, 0.5585, -1.3128, -1.4724], [ 1.3306, 0.0627, 0.5585, -1.3128, -1.4724], [ 1.3306, 0.0627, 0.5585, -1.3128, -1.4724], [ 1.3306, 0.0627, 0.5585, -1.3128, -1.4724], [ 1.3306, 0.0627, 0.5585, -1.3128, -1.4724], [ 1.3306, 0.0627, 0.5585, -1.3128, -1.4724]]) | You can achieve that with np.broadcast_to. But you can't use negative numbers: >>> import numpy as np >>> x = np.array([[ 1.3306, 0.0627, 0.5585, -1.3128, -1.4724]]) >>> print(np.broadcast_to(x,(10,5))) [[ 1.3306 0.0627 0.5585 -1.3128 -1.4724] [ 1.3306 0.0627 0.5585 -1.3128 -1.4724] [ 1.3306 0.0627 0.5585 -1.3128 -1.4724] [ 1.3306 0.0627 0.5585 -1.3128 -1.4724] [ 1.3306 0.0627 0.5585 -1.3128 -1.4724] [ 1.3306 0.0627 0.5585 -1.3128 -1.4724] [ 1.3306 0.0627 0.5585 -1.3128 -1.4724] [ 1.3306 0.0627 0.5585 -1.3128 -1.4724] [ 1.3306 0.0627 0.5585 -1.3128 -1.4724] [ 1.3306 0.0627 0.5585 -1.3128 -1.4724]] | 6 | 13 |
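A short hedged addendum to the answer above: like torch.expand, np.broadcast_to returns a view without copying data, and that view is read-only; np.tile (or np.repeat) produces a writable copy instead:

import numpy as np

x = np.array([[1.3306, 0.0627, 0.5585, -1.3128, -1.4724]])

y = np.broadcast_to(x, (10, 5))
print(y.flags.writeable)      # False: a read-only view, no data copied

z = np.tile(x, (10, 1))       # shape (10, 5), an independent writable copy
z[0, 0] = 0.0                 # fine; x is untouched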
65,231,702 | 2020-12-10 | https://stackoverflow.com/questions/65231702/how-to-pass-multiple-parameters-to-azure-durable-activity-function | My orchestrator receives a payload, with that payload it contains instructions that need to be passed along with other sets of data to activity functions. how do I pass multiple parameters to an activity function? Or do I have to mash all my data together? def orchestrator_function(context: df.DurableOrchestrationContext): # User defined configuration instructions: str = context.get_input() task_batch = yield context.call_activity("get_tasks", None) # Need to pass in instructions too parallel_tasks = [context.call_activity("perform_task", task) for task in task_batch] results = yield context.task_all(parallel_tasks) return results The perform_task activity needs both the items from task_batch and the user input instructions Do I do something in my function.json? Workaround Not ideal, but I can pass multiple parameters as a single Tuple something = yield context.call_activity("activity", ("param_1", "param_2")) I then just need to reference the correct index of the parameter in the activity. | Seems there's no text-book way to do it. I have opted to give my single parameter a generic name like parameter or payload. Then when passing in the value in the orchestrator I do it like so: payload = {"value_1": some_var, "value_2": another_var} something = yield context.call_activity("activity", payload) then within the activity function, I unpack it again. edit: Some buried documentation seems to show that https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-error-handling?tabs=python | 16 | 18 |
65,222,106 | 2020-12-9 | https://stackoverflow.com/questions/65222106/running-into-java-lang-outofmemoryerror-java-heap-space-when-using-topandas | I'm trying to transform a pyspark dataframe of size [2734984 rows x 11 columns] to a pandas dataframe calling toPandas(). Whereas it is working totally fine (11 seconds) when using an Azure Databricks Notebook, I run into a java.lang.OutOfMemoryError: Java heap space exception when i run the exact same code using databricks-connect (db-connect version and Databricks Runtime Version match and are both 7.1). I already increased the spark driver memory (100g) and the maxResultSize (15g). I suppose that the error lies somewhere in databricks-connect because I cannot replicate it using the Notebooks. Any hint what's going on here? The error is the following one: Exception in thread "serve-Arrow" java.lang.OutOfMemoryError: Java heap space at com.ning.compress.lzf.ChunkDecoder.decode(ChunkDecoder.java:51) at com.ning.compress.lzf.LZFDecoder.decode(LZFDecoder.java:102) at com.databricks.service.SparkServiceRPCClient.executeRPC0(SparkServiceRPCClient.scala:84) at com.databricks.service.SparkServiceRemoteFuncRunner.withRpcRetries(SparkServiceRemoteFuncRunner.scala:234) at com.databricks.service.SparkServiceRemoteFuncRunner.executeRPC(SparkServiceRemoteFuncRunner.scala:156) at com.databricks.service.SparkServiceRemoteFuncRunner.executeRPCHandleCancels(SparkServiceRemoteFuncRunner.scala:287) at com.databricks.service.SparkServiceRemoteFuncRunner.$anonfun$execute0$1(SparkServiceRemoteFuncRunner.scala:118) at com.databricks.service.SparkServiceRemoteFuncRunner$$Lambda$934/2145652039.apply(Unknown Source) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62) at com.databricks.service.SparkServiceRemoteFuncRunner.withRetry(SparkServiceRemoteFuncRunner.scala:135) at com.databricks.service.SparkServiceRemoteFuncRunner.execute0(SparkServiceRemoteFuncRunner.scala:113) at com.databricks.service.SparkServiceRemoteFuncRunner.$anonfun$execute$1(SparkServiceRemoteFuncRunner.scala:86) at com.databricks.service.SparkServiceRemoteFuncRunner$$Lambda$1031/465320026.apply(Unknown Source) at com.databricks.spark.util.Log4jUsageLogger.recordOperation(UsageLogger.scala:210) at com.databricks.spark.util.UsageLogging.recordOperation(UsageLogger.scala:346) at com.databricks.spark.util.UsageLogging.recordOperation$(UsageLogger.scala:325) at com.databricks.service.SparkServiceRPCClientStub.recordOperation(SparkServiceRPCClientStub.scala:61) at com.databricks.service.SparkServiceRemoteFuncRunner.execute(SparkServiceRemoteFuncRunner.scala:78) at com.databricks.service.SparkServiceRemoteFuncRunner.execute$(SparkServiceRemoteFuncRunner.scala:67) at com.databricks.service.SparkServiceRPCClientStub.execute(SparkServiceRPCClientStub.scala:61) at com.databricks.service.SparkServiceRPCClientStub.executeRDD(SparkServiceRPCClientStub.scala:225) at com.databricks.service.SparkClient$.executeRDD(SparkClient.scala:279) at com.databricks.spark.util.SparkClientContext$.executeRDD(SparkClientContext.scala:161) at org.apache.spark.scheduler.DAGScheduler.submitJob(DAGScheduler.scala:864) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:928) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2331) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2426) at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$6(Dataset.scala:3638) at org.apache.spark.sql.Dataset$$Lambda$3567/1086808304.apply$mcV$sp(Unknown Source) at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1581) at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$3(Dataset.scala:3642)``` | This is likely because Databricks-connect is executing the toPandas on the client machine which can then run out of memory. You could increase the local driver memory by setting spark.driver.memory in the (local) config file ${spark_home}/conf/spark-defaults.conf where ${spark_home} can be obtained with databricks-connect get-spark-home. | 8 | 10 |
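A hedged sketch of the configuration change the answer above describes; the 8g value is an illustrative assumption, not a recommendation:

# Locate the Spark home used by databricks-connect on the client machine:
#   databricks-connect get-spark-home
# Then add (or create) ${spark_home}/conf/spark-defaults.conf with a line such as:
spark.driver.memory 8g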