question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
71,099,818 | 2022-2-13 | https://stackoverflow.com/questions/71099818/websocket-not-working-when-trying-to-send-generated-answer-by-keras | I am implementing a simple chatbot using keras and WebSockets. I now have a model that can make a prediction about the user input and send the according answer. When I do it through command line it works fine, however when I try to send the answer through my WebSocket, the WebSocket doesn't even start anymore. Here is my working WebSocket code: @sock.route('/api') def echo(sock): while True: # get user input from browser user_input = sock.receive() # print user input on console print(user_input) # read answer from console response = input() # send response to browser sock.send(response) Here is my code to communicate with the keras model on command line: while True: question = input("") ints = predict(question) answer = response(ints, json_data) print(answer) Used methods are those: def predict(sentence): bag_of_words = convert_sentence_in_bag_of_words(sentence) # pass bag as list and get index 0 prediction = model.predict(np.array([bag_of_words]))[0] ERROR_THRESHOLD = 0.25 accepted_results = [[tag, probability] for tag, probability in enumerate(prediction) if probability > ERROR_THRESHOLD] accepted_results.sort(key=lambda x: x[1], reverse=True) output = [] for accepted_result in accepted_results: output.append({'intent': classes[accepted_result[0]], 'probability': str(accepted_result[1])}) print(output) return output def response(intents, json): tag = intents[0]['intent'] intents_as_list = json['intents'] for i in intents_as_list: if i['tag'] == tag: res = random.choice(i['responses']) break return res So when I start the WebSocket with the working code I get this output: * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit) * Restarting with stat * Serving Flask app 'server' (lazy loading) * Environment: production WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. * Debug mode: on But as soon as I have anything of my model in the server.py class I get this output: 2022-02-13 11:31:38.887640: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support. 2022-02-13 11:31:38.887734: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>) Metal device set to: Apple M1 systemMemory: 16.00 GB maxCacheSize: 5.33 GB It is enough when I just have an import at the top like this: from chatty import response, predict - even though they are unused. | I am devastated, I just wasted 2 days into the dumbest possible issue (and fix) I still had the while True: question = input("") ints = predict(question) answer = response(ints, json_data) print(answer) in my model file, so the server didn't start. The fix was to delete it and now it works fine. | 6 | 4 |
71,134,787 | 2022-2-15 | https://stackoverflow.com/questions/71134787/listing-objects-in-s3-with-suffix-using-boto3 | def get_latest_file_movement(**kwargs): get_last_modified = lambda obj: int(obj['LastModified'].strftime('%s')) s3 = boto3.client('s3') objs = s3.list_objects_v2(Bucket='my-bucket',Prefix='prefix')['Contents'] last_added = [obj['Key'] for obj in sorted(objs, key=get_last_modified, reverse=True)][0] return last_added The above code gets me the latest file; however, I only want the files ending with '.csv'. | You can check if they end with .csv: def get_latest_file_movement(**kwargs): get_last_modified = lambda obj: int(obj['LastModified'].strftime('%s')) s3 = boto3.client('s3') objs = s3.list_objects_v2(Bucket='my-bucket',Prefix='prefix')['Contents'] last_added = [obj['Key'] for obj in sorted(objs, key=get_last_modified, reverse=True) if obj['Key'].endswith('.csv')][0] return last_added | 5 | 1 |
71,111,005 | 2022-2-14 | https://stackoverflow.com/questions/71111005/modulenotfounderror-no-module-named-keras-applications-resnet50-on-google-cola | I am trying to run an image-based project on colab. I found the project on github. Everything runs fine till I reached the cell with the following code: import keras from keras.preprocessing.image import ImageDataGenerator from keras.applications.resnet50 import preprocess_input, ResNet50 from keras.models import Model from keras.layers import Dense, MaxPool2D, Conv2D When I run it, the following output is observed: --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-24-173cbce466d6> in <module>() 1 import keras 2 from keras.preprocessing.image import ImageDataGenerator ----> 3 from keras.applications.resnet50 import preprocess_input, ResNet50 4 from keras.models import Model 5 from keras.layers import Dense, MaxPool2D, Conv2D ModuleNotFoundError: No module named 'keras.applications.resnet50' --------------------------------------------------------------------------- It's running 2.7.0 keras, connected to a TPU runtime. I tried !pip installing the said module but no use. I even tried running a demo resnet50 project too but got the same error. Can anyone please help me solve the error? | from tensorflow.keras.applications.resnet50 import ResNet50 | 8 | 9 |
71,102,782 | 2022-2-13 | https://stackoverflow.com/questions/71102782/python-debugger-in-spyder-stops-at-line-2 | When I am trying to debug my code, the debugger stops at line 2 and doesn't respond to any commands (like go to next line). I am using Python 3.9.7. This is what the console looks like: If I try to stop the debugger this happens: The only thing I can do then is close the console. | I recently had the same problem (using Python 3.8) and the solution was to revert a recent upgrade of qtconsole from 5.1.1 to 5.2.2. In case you use conda, the command to revert would be "conda install qtconsole=5.1.1". | 5 | 9 |
71,120,350 | 2022-2-15 | https://stackoverflow.com/questions/71120350/none-vs-nonetype-for-type-annotation | If a function can return None, shouldn't the type annotation use NoneType? For example, shouldn't we use this: from types import NoneType def my_function(num: int) -> int | NoneType: if num > 0: return num return None instead of: def my_function(num: int) -> int | None: if num > 0: return num return None ? | No. types.NoneType was removed in Python 3. Attempting to import NoneType from types will produce an ImportError in Python 3, before Python 3.10. (For Python 3.10, types.NoneType was reintroduced; however, for the purposes of type hinting, types.NoneType and None are equivalent, and you should prefer the latter for conciseness.) In Python 3.10, int | None is the way to describe a return type that could possibly be None. However, for Python versions earlier than 3.10, this syntax is unsupported, so you should use Optional[int] instead. | 7 | 12 |
71,119,621 | 2022-2-14 | https://stackoverflow.com/questions/71119621/python-logging-in-aws-lambda | Something just doesn't click internally for me with pythons logging despite reading the documentation. I have this code import logging logging.basicConfig(level=logging.INFO,format='%(levelname)s::%(message)s') LOG = logging.getLogger("__name__") LOG.info("hey") If I run it from bash I get this: INFO::hey If I run it in an aws lambda the "hey" doesn't shows up at all in the logs. I then did a test setting the level on the logger by adding this: LOG.setLevel(logging.INFO) Run from bash I get the same thing I got before (desired format), but run from the lambda this shows up in the logs: [INFO] 2022-02-14T23:30:43.39Z eb94600a-af45-4124-99b6-d9651d6a3cf6 hey Okay... that is pretty odd. The format is not the same as from bash. I thought I could rationalize the first example because the output on bash is actually going to stderr. And I then assumed the aws lamdba logs just don't grab that. But the second example is also going to stderr on bash, yet it shows up in the lambda logs but with the wrong format. So clearly I am missing something. What is going on under the hood here? | When your Lambda runs, a harness is running that does some basic bootstrap and then loads your module and invokes it. Part of that bootstrap in the AWS Lambda Python Runtime replaces the standard Python logger with its own: logger_handler = LambdaLoggerHandler(log_sink) logger_handler.setFormatter( logging.Formatter( "[%(levelname)s]\t%(asctime)s.%(msecs)dZ\t%(aws_request_id)s\t%(message)s\n", "%Y-%m-%dT%H:%M:%S", ) ) logger_handler.addFilter(LambdaLoggerFilter()) This behavior is formally documented by AWS as AWS Lambda function logging in Python , under the section "Logging library". | 6 | 7 |
71,059,123 | 2022-2-10 | https://stackoverflow.com/questions/71059123/pyreverse-not-showing-composition-relationships-in-the-umls-when-using-absolute | I am having trouble generating UMLs with pyreverse, in particular with composition relationships when classes are not part of the same module, and when using absolute imports. To illustrate the problem, I have the following two modules a.py and b.py in the same package: a.py: from b import B class A: def __init__(self, b): self.b: B = b b.py: class B: pass When I run a pyreverse command in a terminal from the package, I get the following UML. It does not show the composition relationship between the two classes A and B: However, when I do a relative import from .b import B in a.py, I get the expected result: It seems like pyreverse does not recognize in the first case that classes B are the same. To resolve the problem, I have tried to add the absolute path of the package to the environment variable PYTHONPATH. However, this did not resolve the problem. Does anybody know how I can make pyreverse generate the right relationships in the UMLs when classes are defined in different modules, and when using absolute imports? I am using python 3.8.8 and pylint version 2.12.2. | Edited answer following additional experimentation I'm not a python expert, but after a couple more experiments I think I get the information that you need First experiments : no package In my first experiment, I used several modules that were not in a package. It appeared when using different ways to do the imports that pyreverse shows only the classes of the modules that are mentioned on the command line. While I initially assumed that the issue was related to a stricter import syntax, it turned out that in reality it just worked as designed and documented in the man page: pyreverse shows in the diagram only the classes of the modules listed in the pyreverse command line. So with a little project with, using almost your definitions in files main.py,a.py and b.py the easy workaround to get all the classes in the diagram was to use pyresverse main.py a.py b.py on a single command line. It generates the diagram: It appears however that the result also depends on the PYTHONPATH, the current directory from where you call pyreverse and the path used to specify the module, as these elements can influence the finding of the right import. Additional experiments : packages I renamed then main.py into __init__.py to make a package. I can then use pyreverse providing only the directory name of the package (or . if it's the current working directory). Then all the files of the packages are processed for making the class diagram, withoout a need to enumerate them. Using pyreverse on the package from within the package directory works with your import syntax and produced a similar diagram as above. However, running pyreverse from its parent directory produces 2 classes side by side without relationship. This means that we are dealing with two different classes B one of which was not found. Using the relative import solved the issue. I found it by the way useful to add option -m y to disambiguate the class names by adding the module name: I could verify experimentally: whenever I get unlinked classes, launching the module in python from the same current working directory as pyreverse caused an import error. | 6 | 4 |
71,117,478 | 2022-2-14 | https://stackoverflow.com/questions/71117478/vs-code-pylint-highlighting-the-whole-function-with-blue-underline-on-missing-fu | This just suddenly started happening: Pylint will highlight the whole function with blue squiggly lines for a missing function docstring warning. How can I get it to only highlight the function definition, or make a small indicator on the definition line? It's super annoying to get the whole file highlighted when you're developing. Here's an example of a missing class docstring; the whole file becomes ridiculous to work with. The desired behaviour is just a small squiggle at the beginning of the line. That's how it used to be. | A solution for this is being actively discussed and developed at the Pylint project. The workarounds until a fix is merged are either to use an earlier version of VS Code (before January 2022) or Pylint (below 2.12.2). If the latter is desired, you can download a local copy and specify a custom path to Pylint in the Python extension settings. | 6 | 7 |
71,113,116 | 2022-2-14 | https://stackoverflow.com/questions/71113116/modulenotfounderror-no-module-named-fastapi | Here is my file structure and requirements.txt: Getting ModuleNotFoundError, any help will be appreciated. main.py from fastapi import FastAPI from .import models from .database import engine from .routers import ratings models.Base.metadata.create_all(bind=engine) app = FastAPI() app.include_router(ratings.router) | The error comes from the fact that you were not using the right environment and python version on VSCODE. Your environment knew your different packages, but VSCode probably did not take them into account. The solution was, in VSCODE: CTRL + SHIFT + P then Python:select interpreter and choose the version of python linked to your environment. You can try to change the version of python to see the consequences on your imports | 19 | 15 |
71,113,281 | 2022-2-14 | https://stackoverflow.com/questions/71113281/how-to-use-constructor-of-generic-type | How do I use the constructor of a python Generic typed class? T = typing.TypeVar('T') class MyClass(typing.Generic[T]): def __init__(self, initialValue: typing.Iterable): self.values: T = T(initialValue) test = MyClass[tuple[int]]([1, 2, 3]) In this case I am expecting T(initialValue) to be equivalent to tuple(initialValue) but instead I get an error. "Exception has occurred: TypeError 'TypeVar' object is not callable" I guess that's not too surprising since that's not what typing was built for, but is there a workaround to accomplish this? | You'll need to take an explicit factory method. Type annotations only exist for compile-time purposes, and at runtime that T is just a TypeVar. Consider class MyClass(Generic[T]): def __init__(self, initialValue: Iterable[int], factory: Callable[[Iterable[int]], T]): self.values: T = factory(initialValue) Then call it as test = MyClass([1, 2, 3], lambda x: tuple(x)) Note: It would be nice to just pass tuple as the second argument, but mypy seems to choke when converting that typename to a Callable. Other type checkers may be able to handle it; your mileage may vary. | 5 | 2 |
71,106,529 | 2022-2-14 | https://stackoverflow.com/questions/71106529/cannot-import-pyi-file-in-stubs-package | I am trying to build a package of .pyi stub files, for use with type annotations. I have this structure: m/ __init__.pyi sub.pyi This works: >>> import m This does not (including in Python 3.10): >>> import m.sub ModuleNotFoundError: No module named 'm.sub' If I rename sub.pyi to sub.py, then I can import m.sub. What am I misunderstanding about where I can use .pyi files, or whether I can make multi-file stubs package? | cf PEP 484 -- Type Hints : While stub files are syntactically valid Python modules, they use the .pyi extension to make it possible to maintain stub files in the same directory as the corresponding real module. This also reinforces the notion that no runtime behavior should be expected of stub files. The .pyi files are not meant to be used by the Python interpreter itself but by type-checking tools. By default, a Python loader will not look for .pyi files, so you can't import them. But you can import a package (a directory whose name matches what you are importing). That's the reason of your error. Looking at the stubs files that PyCharm includes, the only imports it does is to actual .py files, never to .pyi files. | 5 | 5 |
71,079,342 | 2022-2-11 | https://stackoverflow.com/questions/71079342/how-can-i-take-comma-separated-inputs-for-python-anytree-module | Community. I need to accept multiple comma-separated inputs to produce a summary of information ( specifically, how many different employees participated in each group/project)? The program takes employees, managers and groups in the form of strings. I'm using anytree python library to be able to search/count the occurrence of each employee per group. However, this program is only accepting one value/cell at a time instead of multiple values. Here is the tree structure and how I accept input values? Press q to exit, Enter your data: Joe Press q to exit, Enter your data: Manager1 Press q to exit, Enter your data: Group1 Press q to exit, Enter your data: Charles Press q to exit, Enter your data: Manager1 Press q to exit, Enter your data: Group2 Press q to exit, Enter your data: Joe Press q to exit, Enter your data: Manager3 Press q to exit, Enter your data: Group1 Press q to exit, Enter your data: Charles Press q to exit, Enter your data: Manager3 Press q to exit, Enter your data: Group1 Press q to exit, Enter your data: Joe Press q to exit, Enter your data: Manager5 Press q to exit, Enter your data: Group2 Press q to exit, Enter your data: q Employee No of groups JOE 2 CHARLES 2 Group ├── GROUP1 │ ├── JOE │ │ └── MANAGER1 │ ├── JOE │ │ └── MANAGER3 │ └── CHARLES │ └── MANAGER3 └── GROUP2 ├── CHARLES │ └── MANAGER1 └── JOE └── MANAGER5 I need help with this code so that It can accept comma-separated values; for example, to enter Joe, Manager1, Group1 at a time. import anytree from anytree import Node, RenderTree, LevelOrderIter, LevelOrderGroupIter, PreOrderIter import sys # user input io='' lst_input = [] while (io!='q'): io=input('Press q to exit, Enter your data: ') if io!='q': lst_input.append(io.upper()) # change list in to matrix lst=[] for i in range(0, len(lst_input), 3): lst.append(lst_input[i:i + 3]) lst # create tree structure from lst group = Node('Group') storeGroup = {} for i in range(len(lst)): if lst[i][2] in [x.name for x in group.children]: # parent already exist, append childrens storeGroup[lst[i][0]] = Node(lst[i][0], parent=storeGroup[lst[i][2]]) storeGroup[lst[i][1]] = Node(lst[i][1], parent=storeGroup[lst[i][0]]) else: # create parent and append childreds storeGroup[lst[i][2]] = Node(lst[i][2], parent=group) storeGroup[lst[i][0]] = Node(lst[i][0], parent=storeGroup[lst[i][2]]) storeGroup[lst[i][1]] = Node(lst[i][1], parent=storeGroup[lst[i][0]]) store = {} for children in LevelOrderIter(group, maxlevel=3): if children.parent!=None and children.parent.name!='Group': if children.name not in store: store[children.name] = {children.parent.name} else: store[children.name] = store[children.name] | {children.parent.name} print('Employee', ' No of groups') for i in store: print(' '+i+' ', len(store[i])) for pre,fill, node in RenderTree(group): print('{}{}'.format(pre,node.name)) Thank you! Any thoughts are welcomed. | Leverage unpacking to extract elements. Then the if statement can be re-written this way. if io!='q': name, role, grp = io.upper(). split(',') lst_input.append([name,role, grp]) you also need to change lst.append(lst_input[i:i + 3]) in the for loop to this. lst.append(lst_input[0][i:i + 3]) | 6 | 5 |
71,104,319 | 2022-2-13 | https://stackoverflow.com/questions/71104319/postgresql-select-from-table-inside-docker-container-bash | I can access my PostgreSQL db inside a docker container and see the psql prompt with this command : docker exec -it <container-name> psql -U <dataBaseUserName> <dataBaseName> But I need to see the data I inserted into the table with the API. Is there a way to perform a SELECT statement here? | Whenever you access your container using docker exec -it <container-name> psql -U <username> <database> you can run any psql query you like. To list all the tables within your database you can use \dt After you have identified the table you want to query, you can call any SELECT, UPDATE or DELETE statement you like, e.g.: SELECT * FROM <tablename>; See the official PostgreSQL documentation for more details. However, I highly recommend using tools such as DBeaver or the database tools integrated within your IDE to manage your databases. Connecting to your databases running in a container is as easy as running the docker exec command, but the user interface and integrated shortcuts of those tools to e.g. generate SQL queries make it much easier to access and manage your databases. | 6 | 7 |
71,104,227 | 2022-2-13 | https://stackoverflow.com/questions/71104227/display-django-time-variable-as-full-hours-minutes-am-pm | I want to use a time variable instead of a CharField for a model, which displays multiple saved times. My issue is it displays as "9 am" instead of "9:00 am", and "noon" instead of "12:00 pm". Can anyone help? Thanks in advance. Relevant code below- models.py class Post(models.Model): time=models.TimeField() free=models.CharField(max_length=50, default="") player1=models.CharField(max_length=50, default="Player 1") player2=models.CharField(max_length=50, default="Player 2") player3=models.CharField(max_length=50, default="Player 3") player4=models.CharField(max_length=50, default="Player 4") def __str__(self): return self.time HTML <h7>{{post.time}} <a href="{% url 'update0' post.id %}" class="btn-sm btn-success" >Update</a></h7> | You can achieve this by just using the built-in template datetime format like so: {{ post.time|date:"h:i A" }} This will display the datetime as: 09:00 AM. You can read more about this in the Django docs. | 5 | 4 |
71,102,012 | 2022-2-13 | https://stackoverflow.com/questions/71102012/python-how-to-solve-bracket-not-closed-error | Following a tutorial, I'm getting the error "(" is not closed while using the exact same code: compiled_sol = compile_standard( { "language": "Solidity", "sources": {"SimpleStorage.sol": {"content" = simple_storage_file}} } ) I don't know where it's going wrong; I am getting these errors: "{" was not closed (Pylance), Expected parameter name (Pylance) and Expected parameter name (Pylance) | Try replacing "=" with ":". I hope this solves the problem. | 5 | 5 |
71,092,732 | 2022-2-12 | https://stackoverflow.com/questions/71092732/generating-new-unique-uuid4-in-django-for-each-object-of-factory-class | I have a Model Sector which has a id field (pk) which is UUID4 type. I am trying to populate that table(Sector Model) using faker and factory_boy. But, DETAIL: Key (id)=(46f0cf58-7e63-4d0b-9dff-e157261562d2) already exists. This is the error I am getting. Is it possible that the error is due to the fact that everytime I am creating SectorFactory objects (which is in a different django app) and the seed gets reset to some previous number causing the uuid to repeat? Please suggest some ways as to how I shall get unique uuid for each Factory object? SectorFactory class import uuid from factory.django import DjangoModelFactory from factory.faker import Faker from factory import Sequence class SectorFactory(DjangoModelFactory): id = uuid.uuid4() name = Sequence(lambda n: f'Sector-{n}') class Meta: model = 'user.Sector' django_get_or_create = ['name'] Class Sector class Sector(models.Model): id = models.UUIDField(primary_key=True, default = uuid.uuid4, editable=False) name = models.CharField(max_length=100) class Meta: db_table = 'sector' constraints = [ models.UniqueConstraint('name', name = 'unique_sector_name') ] The script which creates the custom command to create SectorFactory objects. from types import NoneType from django.core.management.base import BaseCommand from user.factories import SectorFactory class Command(BaseCommand): help = 'Generate fake data and seed the models with them.' def add_arguments(self, parser) -> None: parser.add_argument( '--amount', type=int, help='The amount of fake objects to create.' ) def _generate_sectors(self, amount): for _ in range(amount): SectorFactory() def handle(self, *args, **options) : amount = options['amount'] if(type(amount) == NoneType): amount = 10 self._generate_sectors(amount) | just use like this: class SectorFactory(DjangoModelFactory): id = Faker('uuid4') name = Sequence(lambda n: f'Sector-{n}') class Meta: model = 'user.Sector' django_get_or_create = ['name'] | 6 | 10 |
71,086,270 | 2022-2-11 | https://stackoverflow.com/questions/71086270/no-module-named-virtualenv-activation-xonsh | I tried to execute pipenv shell in a new environment and I got the following error: Loading .env environment variables… Creating a virtualenv for this project… Using /home/user/.pyenv/shims/python3.9 (3.9.7) to create virtualenv… ModuleNotFoundError: No module named 'virtualenv.activation.xonsh' Error while trying to remove the /home/user/.local/share/virtualenvs/7t env: No such file or directory Virtualenv location: Warning: Your Pipfile requires python_version 3.9, but you are using None (/bin/python). $ pipenv check will surely fail. Spawning environment shell (/usr/bin/zsh). Use 'exit' to leave. I tried to remove pipenv, to install Python with pyenv and to create an alias to python, but nothing works. Any ideas? I got the same error in an existing environment; I tried to remove all the environment folders, but nothing changed. Thanks. | Per a GitHub issue, the solution that worked was the following: sudo apt-get remove python3-virtualenv | 32 | 20 |
71,053,839 | 2022-2-9 | https://stackoverflow.com/questions/71053839/vs-code-jupyter-not-connecting-to-python-kernel | Launching a cell will make this message appear: Connecting to kernel: Python 3.9.6 64-bit: Activating Python Environment 'Python 3.9.6 64-bit' This message will then stay up, loading indefinitely, without anything happening. No actual error message. I've already tried searching for this problem, but every other post seems to have at least an error message, which isn't the case here. I still looked at some of these, which seemed to indicate the problem might have come from the traitlets package. I tried to downgrade it to what was recommended, but it didn't solve anything, so I reverted the downgrade. The main problem here is that I have no idea what could cause such a problem, without even an error message. | Not sure what did the trick, but downgrading VS Code to the November version and after that reinstalling the Jupyter extension worked for me. | 10 | 5 |
71,090,310 | 2022-2-12 | https://stackoverflow.com/questions/71090310/attributeerror-cant-get-attribute-unpickle-block | While using: with open("data_file.pickle", "rb") as pfile: raw_data = pickle.load(pfile) I get the error: AttributeError: Can't get attribute '_unpickle_block' on <module 'pandas._libs.internals' from '/opt/conda/lib/python3.8/site-packages/pandas/_libs/internals.cpython-38-x86_64-linux-gnu.so'> Another answer to a similar question suggests checking the version of pickle I am using. It is the same on my machine, where I developed the code, and on the server, where I am running the code. I have searched everywhere with no answers. Please help. | I don't think the problem is the pickle module but the Pandas version. Your file was probably created with an older version of Pandas. Now that you use a newer version, pickle can't "deserialize" the object because the API changed. Try downgrading your Pandas version and reloading the file. You can also try using pd.read_pickle. | 30 | 26 |
71,087,163 | 2022-2-11 | https://stackoverflow.com/questions/71087163/screenshotting-the-windows-desktop-when-working-through-wsl | I'm primarily using Windows, where I run WSL2. So from a python script running in the subsystem, I would like to screenshot whatever is on windows monitor, as simple as such: v1 import mss import os os.environ['DISPLAY'] = ':0' with mss.mss() as sct: sct.shot() This gives only gives "Segmentation fault" error and no image. So I tried to setup vcxsrv in Windows and I'm able to open stuff from my subsystem in Windows through the server, however I cant get it the other way around.. I just want access to the windows screen so I can screenshot it. Any help about how to access the monitor through wsl would be greatly appreciated, I can't find much on google.. | The problem with your attempted solution is that the WSL/Linux Python's mss, as you've found, isn't able to capture the Windows desktop. Being the Linux version of MSS, it will only be able to communicate with Linux processes and protocols like X. Starting up VcXsrv might get you part of the way there, in that you might be able to capture X output, but you may need to be running the Python app from inside a terminal that is running in a X window. Regardless, you've said that your goal is to capture the entire Windows desktop, not just the X output in VcXsrv. To do that, you'll need to use a Windows process of some sort. But don't worry; using WSL's interop, you can still do that from inside WSL/Linux Python. We just need to call out to a Windows .exe of some sort. There are literally dozens of third-party apps that you could use in Windows to create a screenshot. But I prefer to use a solution that doesn't require any additional installation. So I resorted to PowerShell for this, since you can easily call powershell.exe and pass in a script from WSL. There are a number of examples here on Stack Overflow, but I ended up going slightly "lower tech" to try to simplify a bit. The code here is most similar to this solution, so refer to that if you want to expand on this. From WSL/Linux Python: import os os.system(""" powershell.exe \" Add-Type -AssemblyName System.Windows.Forms [Windows.Forms.Sendkeys]::SendWait('+{Prtsc}') \$img = [Windows.Forms.Clipboard]::GetImage() \$img.Save(\\\"\$env:USERPROFILE\\Pictures\\Screenshots\\screenshot.jpg\\\", [Drawing.Imaging.ImageFormat]::Jpeg)\" """) That essentially sends the ShiftPrintScreen key chord to capture the current desktop to the clipboard, then saves the clipboard. It can get slightly hairy with the quoting, since you are essentially wrapping PowerShell inside a /bin/sh inside a Python script. Note that, even though you are in Linux Python, since it's the Windows PowerShell that we are calling, it takes the Windows path format (C:\Users\Username\Pictures...) rather than the Linux version (/mnt/c/Users/...). While I didn't have any timing issues with this, you may need to insert small delays. Again, refer to the existing answer for that. This solution is primarily to explain how to do it through WSL's Python using PowerShell. | 6 | 2 |
71,082,435 | 2022-2-11 | https://stackoverflow.com/questions/71082435/conversionerror-failed-to-convert-values-to-axis-units-2015-01-01 | I am trying to convert values to axis units. I checked codes with similar problems but none addressed this specific challenge. As can be seen in the image below, expected plot (A) was supposed to show month (Jan, Feb etc.) on the x-axis, but it was showing dates (2015-01 etc) in plot (B). Below is the source code, kindly assist. Thanks. plt.rcParams["font.size"] = 18 plt.figure(figsize=(20,5)) plt.plot(df.air_temperature,label="Air temperature at Frankfurt Int. Airport in 2015") plt.xlim(("2015-01-01","2015-12-31")) plt.xticks(["2015-{:02d}-15".format(x) for x in range(1,13,1)],["Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"]) plt.legend() plt.ylabel("Temperature (°C)") plt.show() | A wise way to draw the plot with datetime is to use datetime format in place of str; so, first of all, you should do this conversion: df = pd.read_csv(r'data/frankfurt_weather.csv') df['time'] = pd.to_datetime(df['time'], format = '%Y-%m-%d %H:%M') Then you can set up the plot as you please, preferably following Object Oriented Interface: plt.rcParams['font.size'] = 18 fig, ax = plt.subplots(figsize = (20,5)) ax.plot(df['time'], df['air_temperature'], label = 'Air temperature at Frankfurt Int. Airport in 2015') ax.legend() ax.set_ylabel('Temperature (°C)') plt.show() Then you can customize: x ticks' labels format and position with matplotlib.dates: ax.xaxis.set_major_locator(md.MonthLocator(interval = 1)) ax.xaxis.set_major_formatter(md.DateFormatter('%b')) x axis limits: ax.set_xlim([pd.to_datetime('2015-01-01', format = '%Y-%m-%d'), pd.to_datetime('2015-12-31', format = '%Y-%m-%d')]) capital first letter of x ticks' labels for months' names fig.canvas.draw() ax.set_xticklabels([month.get_text().title() for month in ax.get_xticklabels()]) Complete Code import pandas as pd import matplotlib.pyplot as plt import matplotlib.dates as md df = pd.read_csv(r'data/frankfurt_weather.csv') df['time'] = pd.to_datetime(df['time'], format = '%Y-%m-%d %H:%M') plt.rcParams['font.size'] = 18 fig, ax = plt.subplots(figsize = (20,5)) ax.plot(df['time'], df['air_temperature'], label = 'Air temperature at Frankfurt Int. Airport in 2015') ax.legend() ax.set_ylabel('Temperature (°C)') ax.xaxis.set_major_locator(md.MonthLocator(interval = 1)) ax.xaxis.set_major_formatter(md.DateFormatter('%b')) ax.set_xlim([pd.to_datetime('2015-01-01', format = '%Y-%m-%d'), pd.to_datetime('2015-12-31', format = '%Y-%m-%d')]) fig.canvas.draw() ax.set_xticklabels([month.get_text().title() for month in ax.get_xticklabels()]) plt.show() | 6 | 8 |
71,075,798 | 2022-2-11 | https://stackoverflow.com/questions/71075798/include-one-yaml-file-inside-another | I want to have a base config file which is used by other config files to share common config. E.g if I have one file base.yml with foo: 1 bar: - 2 - 3 And then a second file some_file.yml with foo: 2 baz: "baz" What I'd want to end up with a merged config file with foo: 2 bar: - 2 - 3 baz: "baz" It's easy enough to write a custom loader that handles an !include tag. class ConfigLoader(yaml.SafeLoader): def __init__(self, stream): super().__init__(stream) self._base = Path(stream.name).parent def include(self, node): file_name = self.construct_scalar(node) file_path = self._base.joinpath(file_name) with file_path.open("rt") as fh: return yaml.load(fh, IncludeLoader) Then I can parse an !include tag. So if my file is inherit: !include base.yml foo: 2 baz: "baz" But now the base config is a mapping. I.e. if I load the the file I'll end up with config = {'a': [42], 'c': [3.6, [1, 2, 3]], 'include': [{'a': 1, 'b': [1.43, 543.55]}]} But if I don't make the tag part of a mapping, e.g. !include base.yml foo: 2 baz: "baz" I get an error. yaml.scanner.ScannerError: mapping values are not allowed here. But I know that the yaml parser can parse tags without needing a mapping. Because I can do things like !!python/object:foo.Bar x: 1.0 y: 3.14 So how do I write a loader and/or structure my YAML file so that I can include another file in my configuration? | In YAML you cannot mix scalars, mapping keys and sequence elements. This is invalid YAML: - abc d: e and so is this some_file_name a: b and that you have that scalar quoted, and provide a tag does of course not change the fact that it is invalid YAML. As you can already found out, you can trick the loader into returning a dict instead of the string (just like the parser already has built in constructors for non-primitive types like datetime.date). That this: !!python/object:foo.Bar x: 1.0 y: 3.14 works is because the whole mapping is tagged, where you just tag a scalar value. What also would be invalid syntax: !include base.yaml foo: 2 baz: baz but you could do: !include filename: base.yaml foo: 2 baz: baz and process the 'filename' key in a special way, or make the !include tag an empty key: !include : base.yaml # : is a valid tag character, so you need the space foo: 2 baz: baz I would however look at using merge keys, as merging is essentially what you are trying to do. The following YAML works: import sys import ruamel.yaml from pathlib import Path yaml_str = """ <<: {x: 42, y: 196, foo: 3} foo: 2 baz: baz """ yaml = ruamel.yaml.YAML(typ='safe') yaml.default_flow_style = False data = yaml.load(yaml_str) yaml.dump(data, sys.stdout) which gives: baz: baz foo: 2 x: 42 y: 196 So you should be able to do: <<: !load base.yaml foo: 2 baz: baz and anyone with knowledge of merge keys would know what happens if base.yaml does include the key foo with value 3, and would also understand: <<: [!load base.yaml, !load config.yaml] foo: 2 baz: baz (As I tend to associate "including" with textual including as in the C preprocessor, I think `!load' might be a more appropriate tag, but that is probably a matter of taste). 
To get the merge keys to work, it is probably easiest to just sublass the Constructor, as merging is done before tag resolving: import sys import ruamel.yaml from ruamel.yaml.nodes import MappingNode, SequenceNode, ScalarNode from ruamel.yaml.constructor import ConstructorError from ruamel.yaml.compat import _F from pathlib import Path class MyConstructor(ruamel.yaml.constructor.SafeConstructor): def flatten_mapping(self, node): # type: (Any) -> Any """ This implements the merge key feature http://yaml.org/type/merge.html by inserting keys from the merge dict/list of dicts if not yet available in this node """ merge = [] # type: List[Any] index = 0 while index < len(node.value): key_node, value_node = node.value[index] if key_node.tag == 'tag:yaml.org,2002:merge': if merge: # double << key if self.allow_duplicate_keys: del node.value[index] index += 1 continue args = [ 'while constructing a mapping', node.start_mark, 'found duplicate key "{}"'.format(key_node.value), key_node.start_mark, """ To suppress this check see: http://yaml.readthedocs.io/en/latest/api.html#duplicate-keys """, """\ Duplicate keys will become an error in future releases, and are errors by default when using the new API. """, ] if self.allow_duplicate_keys is None: warnings.warn(DuplicateKeyFutureWarning(*args)) else: raise DuplicateKeyError(*args) del node.value[index] if isinstance(value_node, ScalarNode) and value_node.tag == '!load': file_path = None try: if self.loader.reader.stream is not None: file_path = Path(self.loader.reader.stream.name).parent / value_node.value except AttributeError: pass if file_path is None: file_path = Path(value_node.value) # there is a bug in ruamel.yaml<=0.17.20 that prevents # the use of a Path as argument to compose() with file_path.open('rb') as fp: merge.extend(ruamel.yaml.YAML().compose(fp).value) elif isinstance(value_node, MappingNode): self.flatten_mapping(value_node) print('vn0', type(value_node.value), value_node.value) merge.extend(value_node.value) elif isinstance(value_node, SequenceNode): submerge = [] for subnode in value_node.value: if not isinstance(subnode, MappingNode): raise ConstructorError( 'while constructing a mapping', node.start_mark, _F( 'expected a mapping for merging, but found {subnode_id!s}', subnode_id=subnode.id, ), subnode.start_mark, ) self.flatten_mapping(subnode) submerge.append(subnode.value) submerge.reverse() for value in submerge: merge.extend(value) else: raise ConstructorError( 'while constructing a mapping', node.start_mark, _F( 'expected a mapping or list of mappings for merging, ' 'but found {value_node_id!s}', value_node_id=value_node.id, ), value_node.start_mark, ) elif key_node.tag == 'tag:yaml.org,2002:value': key_node.tag = 'tag:yaml.org,2002:str' index += 1 else: index += 1 if bool(merge): node.merge = merge # separate merge keys to be able to update without duplicate node.value = merge + node.value yaml = ruamel.yaml.YAML(typ='safe', pure=True) yaml.default_flow_style = False yaml.Constructor = MyConstructor yaml_str = """\ <<: !load base.yaml foo: 2 baz: baz """ data = yaml.load(yaml_str) yaml.dump(data, sys.stdout) print('---') file_name = Path('test.yaml') file_name.write_text("""\ <<: !load base.yaml bar: 2 baz: baz """) data = yaml.load(file_name) yaml.dump(data, sys.stdout) this prints: bar: - 2 - 3 baz: baz foo: 2 --- bar: 2 baz: baz foo: 1 Notes: don't open YAML files as text. They are written binary (UTF-8), and you should load them as such (open(filename, 'rb')). 
If you had included a full working program in your question (or at least included the text of IncludeLoader, it would have been possible to provide a full working example with the merge keys (or find out for you that it doesn't work for some reason) as it is, it is unclear if your yaml.load() is an instance method call (import ruamel.yaml; yaml = ruamel.yaml.YAML()) or calling a function (from ruamel import yaml). You should not use the latter as it is deprecated. | 5 | 3 |
71,078,751 | 2022-2-11 | https://stackoverflow.com/questions/71078751/vs-code-python-formatting-change-max-line-length-with-autopep8-yapf-black | I am experimenting with different python formatters and would like to increase the max line length. Ideally without editing the settings.json file. Is there a way to achieve that? | For all three formatters, the max line length can be increased with additional arguments passed in from settings, i.e.: autopep8 args: --max-line-length=120 black args: --line-length=120 yapf args: --style={based_on_style: google, column_limit: 120, indent_width: 4} Hope that helps someone in the future! | 24 | 67 |
71,077,943 | 2022-2-11 | https://stackoverflow.com/questions/71077943/how-many-processors-should-be-used-with-multiprocessing-pool | I am trying to use multiprocessing.Pool to run my code in parallel. To instantiate Pool, you have to set the number of processes. I am trying to figure out how many I should set for this. I understand this number shouldn't be more than the number of cores you have but I've seen different ways to determine what your system has available. 2 Methods: multiprocessing.cpu_count() len(os.sched_getaffinity(0)) I'm a little confused; what is the difference between the two and which should be implemented with Pool? I am working on a remote cluster, with the first, it outputs that there are 128 cpu, but the second gives 10. | The difference between the two is clearly stated in the doc: multiprocessing.cpu_count() Return the number of CPUs in the system. This number is not equivalent to the number of CPUs the current process can use. The number of usable CPUs can be obtained with len(os.sched_getaffinity(0)). So even if you are on a 128-core system, your program could have been somehow limited to only run on a specific set of 10 out of the 128 available CPUs. Since affinity also applies to child threads and processes, it doesn't make much sense to spawn more than 10. You could however try to increase the number of available CPUs through os.sched_setaffinity() before starting your pool. import os import multiprocessing as mp cpu_count = mp.cpu_count() if len(os.sched_getaffinity(0)) < cpu_count: try: os.sched_setaffinity(0, range(cpu_count)) except OSError: print('Could not set affinity') n = max(len(os.sched_getaffinity(0)), 96) print('Using', n, 'processes for the pool') pool = mp.Pool(n) # ... See also man 2 sched_setaffinity. | 5 | 4 |
71,071,355 | 2022-2-10 | https://stackoverflow.com/questions/71071355/no-numeric-types-to-aggregate-while-using-pandas-expanding | In Pandas 1.1.4, I am receiving a DataError: No numeric types to aggregate when using an ExpandingGroupby. Example dataset: tmp = pd.DataFrame({'col1':['a','b','b','c','d','d'], 'col2': ['red','red','green','green','red','blue']}) print(tmp) col1 col2 a red b red b green c green d red d blue This works: tmp.groupby('col1').agg(lambda x: ','.join(x)) And this works: tmp.groupby('col1').expanding().agg('count') But this returns an error: tmp.groupby('col1').expanding().agg(lambda x: ','.join(x)) DataError: No numeric types to aggregate There is no conceptual reason this shouldn't work, and there are several references online to people using custom functions within an ExpandingGroupby. There is obviously no reason that this should need to be a numeric, especially given that the count works with the non-numeric column. What is happening here? If this can't be done natively for whatever reason, how can I do it manually? | You can use accumulate from itertools module: from itertools import accumulate concat = lambda *args: ','.join(args) expand = lambda x: list(accumulate(x, func=concat)) df['col3'] = df.groupby('col1')['col2'].transform(expand) print(df) # Output col1 col2 col3 0 a red red 1 b red red 2 b green red,green 3 c green green 4 d red red 5 d blue red,blue Update One line version: df['col3'] = df.groupby('col1')['col2'].transform(lambda x: list(accumulate(x, func=lambda *args: ','.join(args)))) | 5 | 1 |
71,058,732 | 2022-2-10 | https://stackoverflow.com/questions/71058732/how-to-load-transformers-pipeline-from-folder | According to here pipeline provides an interface to save a pretrained pipeline locally with a save_pretrained method. When I use it, I see a folder created with a bunch of json and bin files presumably for the tokenizer and the model. But the documentation does not specify a load method. How does one initialize a pipeline using a locally saved pipeline? | Apparently the default initialization works with local folders as well. So one can download a model like this: pipe = pipeline("text-classification") pipe.save_pretrained("my_local_path") And later load it like pipe = pipeline("text-classification", model = "my_local_path") | 13 | 18 |
71,058,888 | 2022-2-10 | https://stackoverflow.com/questions/71058888/zoneinfonotfounderror-no-time-zone-found-with-key-utc | While trying to load my webpage on the browser, I got the message. A server error occurred. Please contact the administrator And when I go back to check my termimal, I see the message zoneinfo._common.ZoneInfoNotFoundError: 'No time zone found with key UTC' I have checked but don't know what's wrong. I even tried changing the UTC to my original timezone, but it still didn't work. P.S: I started getting this error after I tried working on templates inheritance Here's what my settings.py file looks like INSTALLED_APPS = [ 'newapp.apps.NewappConfig', 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', ] MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] ROOT_URLCONF = 'new.urls' TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] WSGI_APPLICATION = 'new.wsgi.application' # Database # https://docs.djangoproject.com/en/4.0/ref/settings/#databases DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': BASE_DIR / 'db.sqlite3', } } # Password validation # https://docs.djangoproject.com/en/4.0/ref/settings/#auth-password-validators AUTH_PASSWORD_VALIDATORS = [ { 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', }, { 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', }, { 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', }, { 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', }, ] # Internationalization # https://docs.djangoproject.com/en/4.0/topics/i18n/ LANGUAGE_CODE = 'en-us' TIME_ZONE = 'UTC' USE_I18N = True USE_TZ = True # Static files (CSS, JavaScript, Images) # https://docs.djangoproject.com/en/4.0/howto/static-files/ STATIC_URL = 'static/' # Default primary key field type # https://docs.djangoproject.com/en/4.0/ref/settings/#default-auto-field DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField' | Add tzdata to your requirements or pip install tzdata | 28 | 70 |
71,062,983 | 2022-2-10 | https://stackoverflow.com/questions/71062983/accessing-the-data-interval-of-a-dag-run-inside-a-task | I'm building an ETL pipeline with Apache Airflow. I have to extract the latest data added to a SQL database (say daily). Therefore, I want to construct a query as follows: SELECT foo FROM bar WHERE insert_date >= "DATA_INTERVAL_START_HERE" AND insert_date < "DATA_INTERVAL_END_HERE" To execute this query in a task (with e.g. pyodbc), I need to access the data interval start and end time of the Dag Run object inside the extract task. How can I retrieve this information? | A similar question was asked here: Airflow ETL pipeline - using schedule date in functions? However, the answer is not updated to the TaskFlow API since Airflow 2.0. A concise way to access the data interval parameters: @dag(schedule_interval="@daily", start_date=datetime(2022, 2, 8), catchup=True) def tutorial_access_data_interval(): @task() def extract(data_interval_start=None, data_interval_end=None, **kwargs): #Use data_interval_start, data_interval_end here The Airflow engine will provide the parameters by default. References: https://airflow.apache.org/docs/apache-airflow/stable/howto/operator/python.html https://airflow.apache.org/docs/apache-airflow/stable/templates-ref.html#variables | 6 | 6 |
71,043,378 | 2022-2-9 | https://stackoverflow.com/questions/71043378/unable-to-create-process-using-python-exe-error-in-virtual-environment | I'm unable to use python within the virtual environment. Python works fine outside of the virtual environment. I'm using Python 3.10.2 I keep on getting the error below when trying to run any python commands. 'C:\Users\User\AppData\Local\Programs\Python\Python310\python.exe' It might be relevant to mention that I was unable to create the virtual environment through the 'python -m venv env' command. Error generated was Error: Command '['C:\\Users\\User\\Documents\\Python Projects\\PDFtoText\\env\\Scripts\\python.exe', '-Im', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 101. I had to add 'without-pip' to the end of the command to create the virtual environment. Weird thing is, I was able to use pip within the virtual environment without having to manually install it. The path to python is in the environmental variables. I tried reinstalling python but that did not help. Lastly, all these errors started occurring after I downloaded Visual Studio Community 2022. | Short answer, I bet you have a space in your Window's account name (say Your Account is where your account is saved so you have C:\Users\Your Account folder, and there is also a text file C:\Users\Your ("Your" being the first part of your user name). MSVS2022 (maybe earlier versions, too) is known to leave this log file which exposes a bug in Python venv's python launcher. Delete this text file, and your problem should be solved. See my question/answer for more details. | 6 | 27 |
71,050,697 | 2022-2-9 | https://stackoverflow.com/questions/71050697/transformers-how-to-use-cuda-for-inferencing | I have fine-tuned my models with GPU but inferencing process is very slow, I think this is because inferencing uses CPU by default. Here is my inferencing code: txt = "This was nice place" model = transformers.BertForSequenceClassification.from_pretrained(model_path, num_labels=24) tokenizer = transformers.BertTokenizer.from_pretrained('TurkuNLP/bert-base-finnish-cased-v1') encoding = tokenizer.encode_plus(txt, add_special_tokens = True, truncation = True, padding = "max_length", return_attention_mask = True, return_tensors = "pt") output = model(**encoding) output = output.logits.softmax(dim=-1).detach().cpu().flatten().numpy().tolist() Here is my second inferencing code, which is using pipeline (for different model): classifier = transformers.pipeline("sentiment-analysis", model="distilbert-base-uncased-finetuned-sst-2-english") result = classifier(txt) How can I force transformers library to do faster inferencing on GPU? I have tried adding model.to(torch.device("cuda")) but that throws error: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu I suppose the problem is related to the data not being sent to GPU. There is a similar issue here: pytorch summary fails with huggingface model II: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu How would I send data to GPU with and without pipeline? Any advise is highly appreciated. | You should transfer your input to CUDA as well before performing the inference: device = torch.device('cuda') # transfer model model.to(device) # define input and transfer to device encoding = tokenizer.encode_plus(txt, add_special_tokens=True, truncation=True, padding="max_length", return_attention_mask=True, return_tensors="pt") encoding = encoding.to(device) # inference output = model(**encoding) Be aware nn.Module.to is in-place, while torch.Tensor.to is not (it does a copy!). | 9 | 13 |
71,050,098 | 2022-2-9 | https://stackoverflow.com/questions/71050098/how-is-cpython-implemented | So I lately came across an explanation for Python's interpreter and compiler (CPython specifically). Please correct me if I'm wrong. I just want to be sure I understand these specific concepts. So CPython gets both compiled (to bytecode) and then interpreted (in the PVM)? And what does the PVM do exactly? Does it read the bytecode line by line, and translate each one to binary instructions that can be executed on a specific computer? Does this mean that a computer based on an Intel processor needs a different PVM from an AMD-based computer? | Yes, CPython is compiled to bytecode which is then executed by the virtual machine. The virtual machine executes instructions one-by-one. It's written in C (but you can write it in another language) and looks like a huge if/else statement like "if the current instruction is this, do this; if the instruction is this, do another thing", and so on. Instructions aren't translated to binary - that's why it's called an interpreter. You can find the list of instructions here: https://docs.python.org/3.10/library/dis.html#python-bytecode-instructions The implementation of the VM is available here: https://github.com/python/cpython/blob/f71a69aa9209cf67cc1060051b147d6afa379bba/Python/ceval.c#L1718 Bytecode doesn't have a concept of "line": it's just a stream of bytes. The interpreter can read one byte at a time and use another if/else statement to decide what instruction it's looking at. For example: curr_byte = read_byte() if curr_byte == 0x00: # Parse instruction with no arguments curr_instruction = DO_THING_A; args = NULL; elif curr_byte == 0x01: another_byte = read_byte() if another_byte == 0x00: # Parse a two-byte instruction curr_instruction = DO_THING_B; args = NULL; else: # Parse a one-byte instruction # with one argument curr_instruction = DO_THING_C; args = another_byte >> 1; # or whatever elif curr_byte == ...: ... # go on and on and on The entire point of bytecode is that it can be executed by another program (the interpreter, or virtual machine) on almost any hardware. For example, in order to get CPython running on new hardware, you'll need a C toolchain (compiler, linker, assembler etc) for this hardware and a bunch of functions that Python can call to do low-level stuff (allocate memory, output text, do networking etc). Once you have that, write C code that can execute the bytecode - and that's it. | 5 | 5 |
71,048,056 | 2022-2-9 | https://stackoverflow.com/questions/71048056/pandas-to-sql-create-table-permission-denied | I am trying to write a df to an existing table with pandas.to_sql with this code: import sqlalchemy #CREATE CONNECTION constring = "mssql+pyodbc://UID:PASSWORD@SERVER/DATABASE?driver=SQL Server" dbEngine = sqlalchemy.create_engine(constring, fast_executemany=True, connect_args={'connect_timeout':10}, echo=False) #WRITE INTO TABLE df.to_sql(con=dbEngine, schema="dbo", name="target_table", if_exists="replace", index=False, chunksize=1000) But it gives the following error: ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]CREATE TABLE permission denied in database 'database'. (262) (SQLExecDirectW)") [SQL: CREATE TABLE [target_table] ( [chLocalIndustryCode] VARCHAR(max) NULL, [vcLocalIndustryDesc] VARCHAR(max) NULL, [chLocalSectorCode] VARCHAR(max) NULL, [vcLocalSectorDesc] VARCHAR(max) NULL, [chLocalClusterCode] VARCHAR(max) NULL, [vcLocalClusterDesc] VARCHAR(max) NULL, [chLocalMegaClusterCode] VARCHAR(max) NULL, [vcLocalMegaClusterDesc] VARCHAR(max) NULL) ] (Background on this error at: https://sqlalche.me/e/14/f405) I have checked the connection and works properly when reading the table , so I assume the problem is with pandas to_sql() function. I have also tried writing into the table with a cursor, but takes to long. I am using pandas 1.3.4 Is there any way of fixing this error or any alternative to pd.to_sql() function that I can use to increase writing speed? | You might need to add create permission to the SQL Server user. You can follow below steps from the link: To add a Windows user that has the login “domainname \username” to the sysadmin fixed server role a. Log on to the computer using the credentials for the domainname\username account. b. Click the Start button, point to All Programs, click Microsoft SQL Server, right-click SQL Server Management Studio, and then click Run as administrator. ps: "Run As Administrator" option elevates the user permissions In the User Access Control dialog box, click Continue. c. In SQL Server Management Studio, connect to an instance of SQL Server. d. Click Security, right-click Logins, and then click New Login. e. In the Login name box, enter the user name. f. In the Select a page pane, click Server Roles, select the sysadmin check box, and then click OK. | 5 | 1 |
70,951,929 | 2022-2-2 | https://stackoverflow.com/questions/70951929/how-come-an-abstract-base-class-in-python-can-be-instantiated | It is very surprising to me that I can instantiate an abstract class in python: from abc import ABC class Duck(ABC): def __init__(self, name): self.name = name if __name__=="__main__": d = Duck("Bob") print(d.name) The above code compiles just fine and prints out the expected result. Doesn't this sort of defeat the purpose of having an ABC? | If you have no abstract method, you will be able to instantiate the class. If you have at least one, you will not. Consider the following code: from abc import ABC, abstractmethod class Duck(ABC): def __init__(self, name): self.name = name @abstractmethod def implement_me(self): ... if __name__=="__main__": d = Duck("Bob") print(d.name) which returns: TypeError: Can't instantiate abstract class Duck with abstract method implement_me It is the combination of the ABC metaclass, and at least one abstractmethod that will lead to you not being able to instantiate a class. If you leave out one of the two, you will be able to do so. | 5 | 9
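As a complement, a subclass that overrides every abstract method can be instantiated normally; a minimal sketch (the Mallard subclass is an invented example):

    from abc import ABC, abstractmethod

    class Duck(ABC):
        @abstractmethod
        def implement_me(self):
            ...

    class Mallard(Duck):
        def implement_me(self):
            return "implemented"

    m = Mallard()            # works: every abstract method is overridden
    print(m.implement_me())  # implemented
    # Duck() would still raise TypeError because implement_me is abstract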
70,975,344 | 2022-2-3 | https://stackoverflow.com/questions/70975344/how-to-post-json-data-to-fastapi-and-retrieve-the-json-data-inside-the-endpoint | I would like to pass a JSON object to a FastAPI backend. Here is what I am doing in the frontend app: data = {'labels': labels, 'sequences': sequences} response = requests.post(api_url, data = data) Here is what the backend API looks like in FastAPI: @app.post("/api/zero-shot/") async def Zero_Shot_Classification(request: Request): data = await request.json() However, I am getting this error: json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) | You should use the json parameter instead (which would change the Content-Type header to application/json): payload = {'labels': labels, 'sequences': sequences} r = requests.post(url, json=payload) not data, which is used for sending form data with the Content-Type being application/x-www-form-urlencoded by default, or multipart/form-data if files are also included in the request—unless you serialized your JSON first and manually set the Content-Type header to application/json, as described in this answer: payload = {'labels': labels, 'sequences': sequences} r = requests.post(url, data=json.dumps(payload), headers={'Content-Type': 'application/json'}) Also, please have a look at the documentation on how to benefit from using Pydantic models when sending JSON request bodies, as well as this answer and this answer for more options and examples on how to define an endpoint expecting JSON data. | 8 | 7
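A hedged sketch of the Pydantic-model approach the answer points to (the Payload model and its fields are assumptions based on the question's dictionary):

    from typing import List

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class Payload(BaseModel):   # hypothetical model mirroring the request body
        labels: List[str]
        sequences: List[str]

    @app.post("/api/zero-shot/")
    async def zero_shot_classification(payload: Payload):
        # FastAPI parses and validates the JSON body into the model
        return {"num_labels": len(payload.labels)}

The client side would then send requests.post(api_url, json={'labels': labels, 'sequences': sequences}).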
70,952,692 | 2022-2-2 | https://stackoverflow.com/questions/70952692/how-to-customize-error-response-in-fastapi | I have the following FastAPI backend: from fastapi import FastAPI app = FastAPI class Demo(BaseModel): content: str = None @app.post("/demo") async def demoFunc(d:Demo): return d.content The issue is that when I send a request to this API with extra data like: data = {"content":"some text here"}aaaa or data = {"content":"some text here"aaaaaa} resp = requests.post(url, json=data) it throws an error with status code 422 unprocessable entity error with Actual("some text here") and Extra("aaaaa") data in the return field in case of data = {"content":"some text here"}aaaa: { "detail": [ { "loc": [ "body", 47 ], "msg": "Extra data: line 4 column 2 (char 47)", "type": "value_error.jsondecode", "ctx": { "msg": "Extra data", "doc": "{\n \"content\": \"some text here\"}aaaaa", "pos": 47, "lineno": 4, "colno": 2 } } ] } I tried to put the line app=FastAPI() in a try-catch block, however, it doesn't work. Is there any way I can handle this issue with own response instead of the above mentioned auto response? Something like this: {"error": {"message": "Invalid JSON body"}, "status": 0} | You are passing an invalid JSON, and hence, the server correctly responds with the 422 Unprocessable Entity error. Your test client shouldn't be able to run at all, without throwing an invalid syntax error. So, I'm guessing you posted the request through the interactive autodocs provided by Swagger UI at /docs, and received the relevant 422 error. If what you actually want is to handle the error, in order to customize the error or something, you can override the request validation exception handler, as described in the documentation (have a look at this discussion, as well as this answer and this answer that demonstrates how to customize the RequestValidationError for specific routes only). Working Example: from fastapi import FastAPI, Body, Request, status from fastapi.encoders import jsonable_encoder from fastapi.exceptions import RequestValidationError from fastapi.responses import JSONResponse from pydantic import BaseModel app = FastAPI() class Demo(BaseModel): content: str = None @app.exception_handler(RequestValidationError) async def validation_exception_handler(request: Request, exc: RequestValidationError): return JSONResponse( status_code=status.HTTP_422_UNPROCESSABLE_ENTITY, content=jsonable_encoder({"detail": exc.errors(), # optionally include the errors "body": exc.body, "custom msg": {"Your error message"}}), ) @app.post("/demo") async def some_func(d: Demo): return d.content Or, you could also return a PlainTextResponse with a custom message: from fastapi.responses import PlainTextResponse @app.exception_handler(RequestValidationError) async def validation_exception_handler(request, exc): return PlainTextResponse(str(exc), status_code=422) | 6 | 9 |
70,968,749 | 2022-2-3 | https://stackoverflow.com/questions/70968749/pandas-replace-equivalent-in-python-polars | Is there an elegant way how to recode values in polars dataframe. For example 1->0, 2->0, 3->1... in Pandas it is simple like that: df.replace([1,2,3,4,97,98,99],[0,0,1,1,2,2,2]) | Edit 2024-07-09 Polars has dedicated replace and replace_strict expressions. df = pl.DataFrame({ "a": [1, 2, 3, 4, 5] }) mapper = { 1: 0, 2: 0, 3: 10, 4: 10 } df.select( pl.all().replace(mapper) ) shape: (5, 1) ┌─────┐ │ a │ │ --- │ │ i64 │ ╞═════╡ │ 0 │ │ 0 │ │ 10 │ │ 10 │ │ 5 │ └─────┘ Before Edit In polars you can build columnar if else statetements called if -> then -> otherwise expressions. So let's say we have this DataFrame. df = pl.DataFrame({ "a": [1, 2, 3, 4, 5] }) And we'd like to replace these with the following values: from_ = [1, 2] to_ = [99, 12] We could write: df.with_columns( pl.when(pl.col("a") == from_[0]) .then(to_[0]) .when(pl.col("a") == from_[1]) .then(to_[1]) .otherwise(pl.col("a")).alias("a") ) shape: (5, 1) ┌─────┐ │ a │ │ --- │ │ i64 │ ╞═════╡ │ 99 │ │ 12 │ │ 3 │ │ 4 │ │ 5 │ └─────┘ Don't repeat yourself Now, this becomes very tedious to write really fast, so we could write a function that generates these expressions for use, we are programmers aren't we! So to replace with the values you have suggested, you could do: from_ = [1,2,3,4,97,98,99] to_ = [0,0,1,1,2,2,2] def replace(column, from_, to_): # initiate the expression with `pl.when` branch = pl.when(pl.col(column) == from_[0]).then(to_[0]) # for every value add a `when.then` for (from_value, to_value) in zip(from_, to_): branch = branch.when(pl.col(column) == from_value).then(to_value) # finish with an `otherwise` return branch.otherwise(pl.col(column)).alias(column) df.with_columns(replace("a", from_, to_)) Which outputs: shape: (5, 1) ┌─────┐ │ a │ │ --- │ │ i64 │ ╞═════╡ │ 0 │ │ 0 │ │ 1 │ │ 1 │ │ 5 │ └─────┘ | 16 | 29 |
71,029,876 | 2022-2-8 | https://stackoverflow.com/questions/71029876/how-can-i-perform-a-type-guard-on-a-property-of-an-object-in-python | PEP 647 introduced type guards to perform complex type narrowing operations using functions. If I have a class where properties can have various types, is there a way that I can perform a similar type narrowing operation on the property of an object given as the function argument? class MyClass: """ If `a` is `None` then `b` is `str` """ a: Optional[int] b: Optional[str] # Some other things def someTypeGuard(my_obj: MyClass) -> ???: return my_obj.a is not None I'm thinking it might be necessary for me to implement something to do with square brackets in type hints, but I really don't know where to start on this. | TypeGuard annotations can be used to annotate subclasses of a class. If parameter types are specified for those classes, then MyPy will recognise the type narrowing operation successfully. class MyClass: a: Optional[int] b: Optional[str] # Some other things # Two hidden classes for the different types class _MyClassInt(MyClass): a: int b: None class _MyClassStr(MyClass): a: None b: str def my_class_has_a(my_obj: MyClass) -> TypeGuard[_MyClassInt]: """Check if my_obj's `a` property is NOT `None`""" return my_obj.a is not None def my_class_has_b(my_obj: MyClass) -> TypeGuard[_MyClassStr]: """Check if my_obj's `b` property is NOT `None`""" return my_obj.b is not None Sadly failure to narrow to one type doesn't automatically narrow to the other type, and I can't find an easy way to do this other than an assert my_class_has_b(obj) in your else block. Even still this seems to be the best solution. | 13 | 10 |
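A small usage sketch of how a type checker consumes the guard above (requires Python 3.10+ for typing.TypeGuard, or typing_extensions on older versions); the use function is an invented example:

    from typing import Optional, TypeGuard

    class MyClass:
        a: Optional[int]
        b: Optional[str]

    class _MyClassInt(MyClass):
        a: int
        b: None

    def my_class_has_a(obj: MyClass) -> TypeGuard[_MyClassInt]:
        return obj.a is not None

    def use(obj: MyClass) -> None:
        if my_class_has_a(obj):
            # Inside this branch a checker such as mypy treats obj as _MyClassInt,
            # so obj.a is an int and needs no extra None check.
            print(obj.a + 1)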
70,977,935 | 2022-2-3 | https://stackoverflow.com/questions/70977935/why-do-i-receive-unable-to-get-local-issuer-certificate-ssl-c997 | When sending a request to a specific URL I get an SSL error and I am not sure why. First please see the error message I am presented with: requests.exceptions.SSLError: HTTPSConnectionPool(host='dicmedia.korean.go.kr', port=443): Max retries exceeded with url: /multimedia/naver/2016/40000/35000/14470_byeon-gyeong.wav (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)'))) I already tried: https://github.com/Unbabel/COMET/issues/29 (this seems to be related to an internal update Python received relating to the use of specific SSL certificates; not an expert here) Downloading the certificate in question and directly linking to it with verify="private/etc/ssl/certs" I am honestly at a loss as to why I receive this error. As the error message itself indicates, it seems that the local issuer certificate could not be found somehow. The script worked until a week before. I did not update Python before then. Right now I use Python 3.10.2 downloaded from the official website. I don't want to set verify=False as this just skips the verification process and leaves me vulnerable, as numerous people already pointed out at different questions. Besides that, it really bothers me that I can't resolve the error. Any help is much appreciated. See the specific request: import requests def request(url): response = requests.get(url, verify="/private/etc/ssl/certs") print(response) request("https://dicmedia.korean.go.kr/multimedia/naver/2016/40000/35000/14470_byeon-gyeong.wav") | The problem was that not all certificates needed were included in Python's cacert.pem file. To tackle this I downloaded the certifi module at first. As this didn't work either, I suppose certifi also missed the necessary certificates. But I suppose not all certificates in the chain were missing. As answers to similar questions indicated as well, mostly what is missing is not the entire chain, but only the intermediate certificates. After: 1. downloading the necessary certificates (see the lock symbol in your browser; if you're on OSX you need to drag and drop the big images of the certificates to your finder or desktop etc.), 2. converting them to .pem files and bundling them together: cat first_cert.pem second_cert.pem > combined_cert.pem and 3. providing the specific path of the bundled certificates as indicated in my question: verify="private/etc/ssl/certs" (you may of course choose a different file path), my request got accepted by the server. I guess my mistake when trying this solution was that I didn't download the entire chain at first, but only the last certificate. I really hope this helps someone else as a point of reference. What I am still dying to know, though, is why the error popped up in the first place. I didn't change my script at all and use it on a regular basis, but suddenly got presented with said error. Was the reason that the server I tried to reach changed its certificates? | 10 | 14
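A Python equivalent of the cat step in the answer, building a combined bundle from certifi's CAs plus the downloaded certificates (the file names are the ones used in the answer and are assumptions about your setup):

    import certifi

    # Concatenate certifi's CA bundle with the downloaded PEM certificates
    # into one file that can be passed to requests via verify=.
    with open("combined_cert.pem", "w") as out:
        for path in (certifi.where(), "first_cert.pem", "second_cert.pem"):
            with open(path) as f:
                out.write(f.read())
                out.write("\n")

    # Then: requests.get(url, verify="combined_cert.pem")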
71,011,161 | 2022-2-6 | https://stackoverflow.com/questions/71011161/compare-two-polars-dataframes-for-equality | How do I compare two polars DataFrames for value equality? It appears that == is only true if the two tables are the same object: import polars as pl pl.DataFrame({"x": [1,2,3]}) == pl.DataFrame({"x": [1,2,3]}) # False | It's the equals method of DataFrame: import polars as pl pl.DataFrame({"x": [1,2,3]}).equals(pl.DataFrame({"x": [1,2,3]})) # True Before version 0.19.16, it was called frame_equal. | 12 | 7
70,953,357 | 2022-2-2 | https://stackoverflow.com/questions/70953357/how-to-verify-jwt-produced-by-azure-ad | Problem When I receive a JWK from Azure AD in Python, I would like to validate and decode it. I, however, keep getting the error "Signature verification failed". My Setup I have the following setup: Azure Setup In Azure I have created an app registration with the setting "Personal Microsoft accounts only". Python Setup In Python I use the MSAL package for receiving tokens. And I use a public key from Azure to verify the token. Code Using the credentials from the Azure Portal I set up a client for getting tokens. import msal ad_auth_client = msal.ConfidentialClientApplication( client_id = client_id, client_credential = client_secret, authority = "https://login.microsoftonline.com/consumers" ) my_token = ad_auth_client.acquire_token_for_client(scopes=['https://graph.microsoft.com/.default']) If I throw the token into a site like https://jwt.io/ everything looks good. Next I need public keys from Azure for verifying the token. import requests response = requests.get("https://login.microsoftonline.com/common/discovery/keys") keys = response.json()['keys'] To match up the public key to the token, I use the 'kid' in the token header. I also get which algorithm was used for encryption. import jwt token_headers = jwt.get_unverified_header(my_token['access_token']) token_alg = token_headers['alg'] token_kid = token_headers['kid'] public_key = None for key in keys: if key['kid'] == token_kid: public_key = key Now I have the correct public key from Azure to verify my token, but the problem is that it is a JWT key. Before I can use it for decoding, I need to convert it to a RSA PEM key. from cryptography.hazmat.primitives import serialization rsa_pem_key = jwt.algorithms.RSAAlgorithm.from_jwk(json.dumps(public_key)) rsa_pem_key_bytes = rsa_pem_key.public_bytes( encoding=serialization.Encoding.PEM, format=serialization.PublicFormat.SubjectPublicKeyInfo ) This is what the Azure Public Key looks like: b'-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAyr3v1uETrFfT17zvOiy0\n1w8nO+1t67cmiZLZxq2ISDdte9dw+IxCR7lPV2wezczIRgcWmYgFnsk2j6m10H4t\nKzcqZM0JJ/NigY29pFimxlL7/qXMB1PorFJdlAKvp5SgjSTwLrXjkr1AqWwbpzG2\nyZUNN3GE8GvmTeo4yweQbNCd+yO/Zpozx0J34wHBEMuaw+ZfCUk7mdKKsg+EcE4Z\nv0Xgl9wP2MpKPx0V8gLazxe6UQ9ShzNuruSOncpLYJN/oQ4aKf5ptOp1rsfDY2IK\n9frtmRTKOdQ+MEmSdjGL/88IQcvCs7jqVz53XKoXRlXB8tMIGOcg+ICer6yxe2it\nIQIDAQAB\n-----END PUBLIC KEY-----\n' The last thing I need to do is to verify the token using the public key. decoded_token = jwt.decode( my_token['access_token'], key=rsa_pem_key_bytes, verify=True, algorithms=[token_alg], audience=[client_id], issuer="https://login.microsoftonline.com/consumers" ) The result I get is: jwt.exceptions.InvalidSignatureError: Signature verification failed What I also tried I also tried to follow this popular guide: How to verify JWT id_token produced by MS Azure AD? Placing the x5c into the certificate pre- and postfixes just generated errors of invalid formatting. What is next? Can you guys see any obvious errors? My main guess is that there is something wrong with the audience or the issuer, but I cannot pin down what it is, and Microsoft's documentation is horrible as always. Also, there is a secret key in the app registration in Azure, but it does not seem to work either. Update So it turns out that my verification code was correct, but that I was trying to verify the wrong token. 
After creating slight modifications I now receive an id_token, which can be decoded and verified. | There are at least 2 options to decode Microsoft Azure AD ID tokens: Option 1: Using jwt The code provided by OP gives me the exception InvalidIssuerError. Even replacing the issuer argument by https://login.microsoftonline.com/{your-tenant-id} did not work for me. However omitting this argument all together allowed me to decode the ID token. Here is the full code: # Get the public keys from Microsoft import requests response = requests.get("https://login.microsoftonline.com/common/discovery/keys") keys = response.json()['keys'] # Format keys as PEM from cryptography.hazmat.primitives import serialization rsa_pem_key = jwt.algorithms.RSAAlgorithm.from_jwk(json.dumps(public_key)) rsa_pem_key_bytes = rsa_pem_key.public_bytes( encoding=serialization.Encoding.PEM, format=serialization.PublicFormat.SubjectPublicKeyInfo ) # Get algorithm from token header alg = jwt.get_unverified_header(your-id-token)['alg'] # Decode token jwt.decode( your-id-token, key=rsa_pem_key_bytes, algorithms=[alg], verify=True, audience=[your-client-id], options={"verify_signature": True} ) Option 2: use msal's decode_id_token Microsoft's package msal provides a function to decode the id token. The code simply becomes: from msal.oauth2cli.oidc import decode_id_token decode_id_token(id_token=your-token-id, client_id=your-client-id) | 11 | 2 |
70,966,298 | 2022-2-3 | https://stackoverflow.com/questions/70966298/python-black-code-formatter-doesnt-format-docstring-line-length | I am running the Black code formatter against a Python script however it doesn't reformat the line length for docstrings. For example, given the following code: def my_func(): """ This is a really long docstring. This is a really long docstring. This is a really long docstring. This is a really long docstring. This is a really long docstring. This is a really long docstring. """ return When running Black against this script, the line length does not change. How can I ensure docstrings get formatted when running Black? | maintainer here! :wave: The short answer is no you cannot configure Black to fix line length issues in docstrings currently. It's not likely Black will split or merge lines in docstrings as it would be far too risky, structured data can and does exist in docstrings. While I would hope the added newlines wouldn't break the consumers it's still a valid concern. There's currently an open issue asking for this (although it also wants the line length limit for docstrings and strings to be 79) GH-2289, and specifically for docstrings GH-2865. You can also read GH-1713 which is about splitting comments (and likewise has mixed feelings from maintainers). For the time being, perhaps you can look into https://github.com/PyCQA/docformatter which does seem to wrap docstrings (see the --wrap-descriptions and --wrap-summaries options) P.S. if you're curious whether we'll add a flag to split docstrings or comments, it's once again unlikely since we seek to minimize formatting configurability. Especially as the pre-existing flags only disable certain elements of Black's style (barring --line-length which exists as there's no real consensus what it should be). Feel free to state your arguments in the linked issues tho! | 33 | 48 |
71,031,816 | 2022-2-8 | https://stackoverflow.com/questions/71031816/how-do-you-properly-reuse-an-httpx-asyncclient-within-a-fastapi-application | I have a FastAPI application which, in several different occasions, needs to call external APIs. I use httpx.AsyncClient for these calls. The point is that I don't fully understand how I shoud use it. From httpx' documentation I should use context managers, async def foo(): """" I need to call foo quite often from different parts of my application """ async with httpx.AsyncClient() as aclient: # make some http requests, e.g., await aclient.get("http://example.it") However, I understand that in this way a new client is spawned each time I call foo(), and is precisely what we want to avoid by using a client in the first place. I suppose an alternative would be to have some global client defined somewhere, and just import it whenever I need it like so aclient = httpx.AsyncClient() async def bar(): # make some http requests using the global aclient, e.g., await aclient.get("http://example.it") This second option looks somewhat fishy, though, as nobody is taking care of closing the session and the like. So the question is: how do I properly (re)use httpx.AsyncClient() within a FastAPI application? | You can have a global client that is closed in the FastApi shutdown event. import logging from fastapi import FastAPI import httpx logging.basicConfig(level=logging.INFO, format="%(levelname)-9s %(asctime)s - %(name)s - %(message)s") LOGGER = logging.getLogger(__name__) class HTTPXClientWrapper: async_client = None def start(self): """ Instantiate the client. Call from the FastAPI startup hook.""" self.async_client = httpx.AsyncClient() LOGGER.info(f'httpx AsyncClient instantiated. Id {id(self.async_client)}') async def stop(self): """ Gracefully shutdown. Call from FastAPI shutdown hook.""" LOGGER.info(f'httpx async_client.is_closed(): {self.async_client.is_closed} - Now close it. Id (will be unchanged): {id(self.async_client)}') await self.async_client.aclose() LOGGER.info(f'httpx async_client.is_closed(): {self.async_client.is_closed}. Id (will be unchanged): {id(self.async_client)}') self.async_client = None LOGGER.info('httpx AsyncClient closed') def __call__(self): """ Calling the instantiated HTTPXClientWrapper returns the wrapped singleton.""" # Ensure we don't use it if not started / running assert self.async_client is not None LOGGER.info(f'httpx async_client.is_closed(): {self.async_client.is_closed}. Id (will be unchanged): {id(self.async_client)}') return self.async_client httpx_client_wrapper = HTTPXClientWrapper() app = FastAPI() @app.get('/test-call-external') async def call_external_api(url: str = 'https://stackoverflow.com'): async_client = httpx_client_wrapper() res = await async_client.get(url) result = res.text return { 'result': result, 'status': res.status_code } @app.on_event("startup") async def startup_event(): httpx_client_wrapper.start() @app.on_event("shutdown") async def shutdown_event(): await httpx_client_wrapper.stop() if __name__ == '__main__': import uvicorn LOGGER.info(f'starting...') uvicorn.run(f"{__name__}:app", host="127.0.0.1", port=8000) Note - this answer was inspired by a similar answer I saw elsewhere a long time ago for aiohttp, I can't find the reference but thanks to whoever that was! EDIT I've added uvicorn bootstrapping in the example so that it's now fully functional. 
I've also added logging to show what's going on on startup and shutdown, and you can visit localhost:8000/docs to trigger the endpoint and see what happens (via the logs). The reason for calling the start() method from the startup hook is that by the time the hook is called the eventloop has already started, so we know we will be instantiating the httpx client in an async context. Also I was missing the async on the stop() method, and had a self.async_client = None instead of just async_client = None, so I have fixed those errors in the example. | 24 | 15 |
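On newer FastAPI versions that support the lifespan parameter, roughly the same pattern can be written without the deprecated startup/shutdown hooks; a hedged sketch, not part of the original answer:

    from contextlib import asynccontextmanager

    import httpx
    from fastapi import FastAPI

    @asynccontextmanager
    async def lifespan(app: FastAPI):
        # Create one shared client when the app starts...
        app.state.client = httpx.AsyncClient()
        yield
        # ...and close it gracefully on shutdown.
        await app.state.client.aclose()

    app = FastAPI(lifespan=lifespan)

    @app.get("/test-call-external")
    async def call_external(url: str = "https://stackoverflow.com"):
        res = await app.state.client.get(url)
        return {"status": res.status_code}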
70,975,237 | 2022-2-3 | https://stackoverflow.com/questions/70975237/strange-autocomplete-suggestions-in-ipython-shell | I use the IPython shell fairly often and have just started to notice it giving me strange autocomplete suggestions without any prompting from me. In this example, I just typed "im" and it suggests importing matplotlib? This is very strange for several reasons: I've never seen this kind of grayed out code suggestion before that appears just as I type without the need to press tab or anything like that, the suggestions seem to be very arbitrary (why would typing im mean I want to import matplotlib of all things) and sometimes the suggestions make no sense (image 2: it just asks me to run plt.show() even though I haven't plotted anything yet). Any clues as to what could be going on here? | Try this. import IPython terminal = IPython.get_ipython() terminal.pt_app.auto_suggest = None https://github.com/ipython/ipython/issues/13451 | 5 | 5
70,977,165 | 2022-2-3 | https://stackoverflow.com/questions/70977165/how-to-use-loguru-defaults-and-extra-information | I'm still researching Loguru, but I can't find an easy way to do this. I want to use the default options from Loguru (I believe they are great), but I want to add information to them: the IP of the request that will be logged. If I try this: import sys from loguru import logger logger.info("This is log info!") # This is directly from the Loguru page logger.add(sys.stderr, format="{extra[ip]} {extra[user]} {message}") context_logger = logger.bind(ip="192.168.0.1", user="someone") context_logger.info("Contextualize your logger easily") context_logger.bind(user="someone_else").info("Inline binding of extra attribute") context_logger.info("Use kwargs to add context during formatting: {user}", user="anybody") That logs this: I know that with logger.remove(0) I will remove the default logs, but I want to use it to obtain something like this: 2022-02-03 15:16:54.920 | INFO | __main__:<module>:79 - XXX.XXX.XX.X - Use kwargs to add context during formatting: anybody, with XXX.XXX.XX.X being the IP. Using the default config (for color and everything else) and adding a little thing to the format. I'm trying to access the default configs, but I haven't been able to import them and use them with logger.add. I think I will have to configure everything from scratch. Hope someone can help me, thanks. | I asked the same question in the GitHub repository and this was the answer by Delgan (Loguru maintainer): I think you simply need to add() your handler using a custom format containing the extra information. Here is an example: logger_format = ( "<green>{time:YYYY-MM-DD HH:mm:ss.SSS}</green> | " "<level>{level: <8}</level> | " "<cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan> | " "{extra[ip]} {extra[user]} - <level>{message}</level>" ) logger.configure(extra={"ip": "", "user": ""}) # Default values logger.remove() logger.add(sys.stderr, format=logger_format) Extra: if you want to use the TRACE level, use this when adding the configuration: logger.add(sys.stderr, format=logger_format, level="TRACE") | 19 | 29
71,042,138 | 2022-2-8 | https://stackoverflow.com/questions/71042138/docker-and-playwright | I need to install playwright inside of docker. This is my dockerfile. FROM python:3.9 EXPOSE 8000 WORKDIR /fastanalytics COPY /requirements.txt /fastanalytics/requirements.txt RUN pip install --no-cache-dir --upgrade -r /fastanalytics/requirements.txt RUN playwright install RUN playwright install-deps RUN apt-get update && apt-get upgrade -y But when I am installing I got the below error. I tried installing everything in the error message but it didn't help. E: Package 'ttf-ubuntu-font-family' has no installation candidate E: Unable to locate package libenchant1c2a E: Unable to locate package libicu66 E: Package 'libjpeg-turbo8' has no installation candidate | Microsoft released a python docker image for Playwright Dockerfile # Build Environment: Playwright FROM mcr.microsoft.com/playwright/python:v1.21.0-focal # Add python script to Docker COPY index.py / # Run Python script CMD [ "python", "index.py" ] Check Playwright - Docker docs for the latest playwright version. | 14 | 18 |
70,958,081 | 2022-2-2 | https://stackoverflow.com/questions/70958081/include-indices-in-pandas-groupby-results | With Pandas groupby, I can do things like this: >>> df = pd.DataFrame( ... { ... "A": ["foo", "bar", "bar", "foo", "bar"], ... "B": ["one", "two", "three", "four", "five"], ... } ... ) >>> print(df) A B 0 foo one 1 bar two 2 bar three 3 foo four 4 bar five >>> print(df.groupby('A')['B'].unique()) A bar [two, three, five] foo [one, four] Name: B, dtype: object What I am looking for is output that produces a list of indices instead of a list of column B: A bar [1, 2, 4] foo [0, 3] However, groupby('A').index.unique() doesn't work. What syntax would provide me the output I'm after? I'd be more than happy to do this in some other way than with groupby, although I do need to group by two columns in my real application. | You do not necessarily need to have a label in groupby, you can use a grouping object. This enables things like: df.index.to_series().groupby(df['A']).unique() output: A bar [1, 2, 4] foo [0, 3] dtype: object getting the indices of the unique B values: df[~df[['A', 'B']].duplicated()].index.to_series().groupby(df['A']).unique() | 7 | 4 |
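A one-line alternative, assuming the default RangeIndex, is to move the index into a column first:

    import pandas as pd

    df = pd.DataFrame(
        {
            "A": ["foo", "bar", "bar", "foo", "bar"],
            "B": ["one", "two", "three", "four", "five"],
        }
    )

    # reset_index() exposes the index as a column named "index",
    # which can then be aggregated like any other column.
    print(df.reset_index().groupby("A")["index"].unique())
    # A
    # bar    [1, 2, 4]
    # foo       [0, 3]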
71,023,429 | 2022-2-7 | https://stackoverflow.com/questions/71023429/unexpected-result-when-using-list-append-what-am-i-doing-wrong | I can't understand the following two examples of behaviour of list.append() in Python: list_1 = ['A', 'B'] list_2 = ['C', 'D'] copy_l1 = list_1 copy_l1.append(list_2) Example print(copy_l1) result: ['A', 'B', ['C', 'D']] expected: ['A', 'B', 'C', 'D']. I kind of understand this, but how to get the expected result? Example print(list_1) result: ['A', 'B', ['C', 'D']] expected: ['A', 'B']. This is the most puzzling for me. Why does copy_l1.append(list_2) also affect list_1? Due to my backgound in C, this looks to me like I'm working on pointers, but I gather that should not be the case. What means? | The first result is expected as you are adding the list itself, not its values, to copy_l1. To get the desired result, use either of the following: copy_l1 += list2 copy_l1.extend(list2) The second result is harder to understand, but it has to do with the fact that lists are mutable in Python. To understand this, you must first understand what actually happens when you assign one variable to another. Variable Assignment When you use var_a = var_b, both names point to the same variable - that is, the same space in memory; they are just two different ways of accessing it. This can be tested using is, which checks for identity, not just value: a = 1 b = a print(a is b) # True So you would expect that changing the value of one name would also affect the other. However, this is not usually the case. Changing the Value of an Immutable Variable Most basic data types in Python are immutable. This means that their value can't actually be changed once they are created. Some examples of immutable data types are strs, ints, floats and bools. If the data type of a variable is immutable, then its value can't be changed directly. Instead, when you alter the value of a name pointing it, this is what actually happens: A new variable is created Its value is set to the new value It is given the same name as the name which you are trying to alter - it effectively replaces it During this process, only one name has had its value changed (or rather, replaced) and any other names pointing to the same variable stay unchanged. You can test that the actual variable the name points to has changed using is: a = 5 b = a print(a is b) # True a += 4 print(a is b) # False Because of this, the fact that variable assignment works the way it does can almost always be ignored. But this changes when you use mutable variables, such as lists. Changing the Value of a Mutable Variable Because lists are mutable, their value can actually be changed so this three-step process is not necessary. Therefore, all names pointing to the list get changed. You can see this by using is once again: a = [1, 2] b = a print(a is b) # True a += [3, 4] print(a is b, "again") # True again How to Stop this Happening To stop this from happening, use .copy() to get a shallow copy of the list: copy_l1 = list1.copy() Now both names point to different variables, which happen to have the same value. Operations on one will not affect the other. | 5 | 6 |
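A short sketch contrasting plain assignment with .copy(), which illustrates the identity behaviour the answer describes:

    a = ["A", "B"]
    b = a          # b is another name for the same list object
    c = a.copy()   # c is a new, independent shallow copy

    b.append("C")
    print(a)               # ['A', 'B', 'C'] -- a changed too; a and b are one object
    print(c)               # ['A', 'B']      -- the copy is unaffected
    print(a is b, a is c)  # True False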
70,998,452 | 2022-2-5 | https://stackoverflow.com/questions/70998452/warning-ignoring-invalid-distribution-c-python310-lib-site-packages | Whenever I install a pip library in Python, I get a series of warnings. For example: WARNING: Ignoring invalid distribution -ip (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution -ip (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution -ip (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages) How can I avoid getting these warnings? | The warning can be fixed as follows: go to the lib\site-packages folder, look for folders whose names start with ~ (they correspond to the invalid distributions named in the warning), and remove them. After that the warning no longer appears. | 21 | 40
70,997,997 | 2022-2-5 | https://stackoverflow.com/questions/70997997/not-able-to-save-plotly-plots-using-to-image-or-write-image | fig.write_image("images/fig1.png",format='png',engine='kaleido') This makes my VSCode go bananas, the terminal hangs and the program stops then and there. Everything works fine if I remove just that line. I want to save the plots as pngs, but it is not working. I have kaleido installed. | Try this version of kaleido. pip install kaleido==0.1.0post1 It works for me | 24 | 37 |
70,958,434 | 2022-2-2 | https://stackoverflow.com/questions/70958434/unexpected-python-paths-in-conda-environment | In a Conda environment (base here), I'm surprised by the order of directories in the Python path: python -c "import sys; print(sys.path)" ['', '/export/projects/III-data/wcmp_bioinformatics/db291g/miniconda3/lib/python37.zip', '/export/projects/III-data/wcmp_bioinformatics/db291g/miniconda3/lib/python3.7', '/export/projects/III-data/wcmp_bioinformatics/db291g/miniconda3/lib/python3.7/lib-dynload', '/export/home/db291g/.local/lib/python3.7/site-packages', '/export/projects/III-data/wcmp_bioinformatics/db291g/miniconda3/lib/python3.7/site-packages'] As you can see, my local non-Conda path: /export/home/db291g/.local/lib/python3.7/site-packages comes before the Conda counterpart: /export/projects/III-data/wcmp_bioinformatics/db291g/miniconda3/lib/python3.7/site-packages This means that Python packages installed in miniconda3/lib/python3.7/site-packages will be ignored if they are also found in .local/lib/python3.7/site-packages. In fact, in .local/lib/python3.7/site-packages I have numpy v1.20, but in the Conda environment I need v1.19, which is correctly installed, but superseded by v1.20. This seems to defeat the point of using Conda. Is there something wrong with my configuration or am I missing something here? Some info: which python /export/projects/III-data/wcmp_bioinformatics/db291g/miniconda3/bin/python python -V Python 3.7.12 which conda /export/projects/III-data/wcmp_bioinformatics/db291g/miniconda3/bin/conda conda --version conda 4.11.0 | This is expected behavior (see PEP 370) and partially why Anaconda recommended against user-level package installations. The site module is responsible for setting the sys.path when Python is initializing. The code in site.py specifically appends the user site prior to appending the prefix site, which is what leads to this prioritization. The motivation according to PEP 370 is that users would have a Python installed at system-level, but want to prioritize packages they install at the user level, hence the user site should load prior to the prefix site. Options There are several options for avoiding the user-level site-packages from getting loaded. 1: Environment variable The environment variable PYTHONNOUSERSITE will toggle loading of user-level site-packages. Namely, PYTHONNOUSERSITE=1 python -c "import sys; print(sys.path)" 2: Python -s flag Alternatively, the Python binary has an -s argument to specifically disable user-level site packages. python -s -c "import sys; print(sys.path)" 3: Remove (and avoid future) user-level installs The Conda recommendation is to avoid pip install --user altogether, which would be interpreted that one should remove the ~/.local/lib/python* folders from your system. 4: Automated Conda environment variable Conda Forge package The Conda Forge package conda-ecosystem-user-package-isolation will automatically set PYTHONNOUSERSITE=1 during environment activation. If you would like all environments to have such isolation by default, then consider adding this to the create_default_packages configuration list: conda config --add create_default_packages conda-ecosystem-user-package-isolation Note that this package also sets R_LIBS_USER="-", isolating any R environments from user-level packages. 
Alternative packages If you want a more granular option, I have also created separate packages that set just the PYTHONNOUSERSITE=1 and PYTHONPATH="" environment variables, which can be installed with: ## set PYTHONNOUSERSITE=1 conda install merv::envvar-pythonnousersite-true ## clear PYTHONPATH conda install merv::envvar-pythonpath-null | 9 | 12 |
70,967,266 | 2022-2-3 | https://stackoverflow.com/questions/70967266/what-exactly-is-python-typing-callable | I have seen typing.Callable, but I didn't find any useful docs about it. What exactly is typing.Callable? | typing.Callable is the type you use to indicate a callable. Most python types that support the () operator are of the type collections.abc.Callable. Examples include functions, classmethods, staticmethods, bound methods and lambdas. In summary, anything with a __call__ method (which is how () is implemented), is a callable. PEP 677 attempted to introduce implicit tuple-with-arrow syntax, so that something like Callable[[int, str], list[float]] could be expressed much more intuitively as (int, str) -> list[float]. The PEP was rejected because the benefits of the new syntax were not deemed sufficient given the added maintenance burden and possible room for confusion. | 50 | 55 |
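A minimal usage sketch (the list[float] form needs Python 3.9+; use typing.List[float] on older versions):

    from typing import Callable

    def apply(func: Callable[[int, str], list[float]], n: int, s: str) -> list[float]:
        # Any callable matching the (int, str) -> list[float] signature is accepted:
        # a plain function, a lambda, a bound method, or an object with __call__.
        return func(n, s)

    def scores(n: int, s: str) -> list[float]:
        return [float(n)] * len(s)

    print(apply(scores, 3, "ab"))  # [3.0, 3.0]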
71,027,763 | 2022-2-8 | https://stackoverflow.com/questions/71027763/how-to-open-a-new-mdi-sub-window-in-pyqt5 | What I want to do is to open a new Countrypage sub-window by clicking on the "New" button which is in Countrypage itself. For example, if I click the "New" button in a CountryPage window (window title: "Country page"), one more new Countrypage window will be opened in the MDI area (window title: "Country Page 1"). Now if we click the "New" button in "Country Page 1", one more new window will open in the MDI area (window title: "Country page 2") and so on - and I want to close the windows one by one by pressing the corresponding "Close" button in Countrypage. New window are opened only by pressing a "New" button. And if we close the last opened window by pressing the "Close" button, the text item in the "Country" text-box will be automatically updated in the previous window's "Country" text-box and so on. Main Script : import sys,os from PyQt5.QtWidgets import * from PyQt5.QtCore import * from sample_countrypage import Countrypage class MainPage(QMainWindow): count = 0 def __init__(self): super().__init__() self.mdi = QMdiArea() self.mdi.setFixedSize(1000,400) self.mdi.setHorizontalScrollBarPolicy(Qt.ScrollBarAsNeeded) self.mdi.setVerticalScrollBarPolicy(Qt.ScrollBarAsNeeded) self.setWindowTitle(" Sample Programme") self.setGeometry(100,100,1600,600) self.Ui() self.show() def Ui(self): self.btn1=QPushButton("Country") self.btn1.setFixedSize(100, 30) self.btn1.clicked.connect(self.countrypage) self.left_layout = QVBoxLayout() self.right_layout = QHBoxLayout() self.main_layout = QHBoxLayout() self.left_layout.setContentsMargins(3,5,5,3) self.left_layout.addWidget(self.btn1) self.left_layout.addStretch() self.right_layout.addWidget(self.mdi) self.main_layout.setSpacing(5) self.main_layout.setContentsMargins(0,0,0,0) self.main_layout.addLayout(self.left_layout) self.main_layout.addLayout(self.right_layout) self.main_layout.addStretch() widget = QWidget() widget.setLayout(self.main_layout) self.setCentralWidget(widget) self.subwindow1 = QMdiSubWindow() self.subwindow1.setObjectName("SubWindow_1") # self.subwindow1.setWindowFlag(Qt.FramelessWindowHint) print(Countrypage.btn2click) def countrypage(self): self.countrywindow = Countrypage() self.subwindow1.setWidget(self.countrywindow) self.subwindow1.setWindowTitle("Create Country") self.subwindow1.setFixedWidth(300) self.mdi.addSubWindow(self.subwindow1) self.subwindow1.show() self.mdi.cascadeSubWindows() self.countrywindow.closeRequsted.connect(self.subwindow1close) def subwindow1close(self): print("close activated from mdi programme") self.subwindow1.close() if __name__ == "__main__": app = QApplication(sys.argv) mainwindow = MainPage() app.setStyle("Windows") mainwindow.show() sys.exit(app.exec_()) Countrypage.py import sys,os from PyQt5.QtWidgets import QWidget,QApplication,QPushButton,QLineEdit,QFormLayout,QVBoxLayout,QHBoxLayout from PyQt5.QtCore import pyqtSignal class Countrypage(QWidget): closeRequsted = pyqtSignal() def __init__(self): super().__init__() self.btn1 = QPushButton("close") self.btn2 = QPushButton("New") self.btn1.clicked.connect(self.result) self.btn2.clicked.connect(self.btn2click) self.tb_country = QLineEdit() self.tb_continent =QLineEdit() self.form_layout = QFormLayout() self.form_layout.addRow("Country",self.tb_country) self.form_layout.addRow("continent",self.tb_continent) self.form_layout.addRow("",self.btn2) self.form_layout.addRow("",self.btn1) self.setLayout(self.form_layout) def result(self): 
self.closeRequsted.emit() def btn2click(self): btn2text = (self.btn2.text()) print(btn2text) if __name__=="__main__": app = QApplication(sys.argv) countrywin = Countrypage() countrywin.show() sys.exit(app.exec_()) | The adding and closing of sub-windows is best handled by the main-window. The CountryPage class doesn't need to know anything about the sub-windows. The new/close buttons can be directly connected to methods of the main-window. This makes it easier to manage the sub-windows via the functions of the mdi-area. Below is a re-write of your example which should do what you asked for: Main Script: import sys, os from PyQt5.QtWidgets import * from PyQt5.QtCore import * class MainPage(QMainWindow): def __init__(self): super().__init__() self.mdi = QMdiArea() self.mdi.setFixedSize(1000, 400) self.mdi.setHorizontalScrollBarPolicy(Qt.ScrollBarAsNeeded) self.mdi.setVerticalScrollBarPolicy(Qt.ScrollBarAsNeeded) self.setWindowTitle("Sample Programme") self.setGeometry(100, 100, 1600, 600) self.Ui() def Ui(self): self.btn1 = QPushButton("Country") self.btn1.setFixedSize(100, 30) self.btn1.clicked.connect(self.countrypage) self.left_layout = QVBoxLayout() self.right_layout = QHBoxLayout() self.main_layout = QHBoxLayout() self.left_layout.setContentsMargins(3, 5, 5, 3) self.left_layout.addWidget(self.btn1) self.left_layout.addStretch() self.right_layout.addWidget(self.mdi) self.main_layout.setSpacing(5) self.main_layout.setContentsMargins(0, 0, 0, 0) self.main_layout.addLayout(self.left_layout) self.main_layout.addLayout(self.right_layout) self.main_layout.addStretch() widget = QWidget() widget.setLayout(self.main_layout) self.setCentralWidget(widget) def countrypage(self): page = Countrypage() subwindow = self.mdi.addSubWindow(page) subwindow.setWindowTitle("Create Country") subwindow.setFixedWidth(300) page.btn_close.clicked.connect(self.subwindowclose) page.btn_new.clicked.connect(self.countrypage) subwindow.show() self.mdi.cascadeSubWindows() def subwindowclose(self): print("close activated from mdi programme") current = self.mdi.activeSubWindow() if current is not None: self.mdi.activatePreviousSubWindow() previous = self.mdi.activeSubWindow() if previous is not None: previous.widget().update_fields(current.widget()) current.close() if __name__ == "__main__": app = QApplication(sys.argv) mainwindow = MainPage() app.setStyle("Windows") mainwindow.show() sys.exit(app.exec_()) Countrypage.py: import sys,os from PyQt5.QtWidgets import QWidget,QApplication,QPushButton,QLineEdit,QFormLayout,QVBoxLayout,QHBoxLayout from PyQt5.QtCore import pyqtSignal class Countrypage(QWidget): def __init__(self): super().__init__() self.btn_close = QPushButton("Close") self.btn_new = QPushButton("New") self.tb_country = QLineEdit() self.tb_continent = QLineEdit() self.form_layout = QFormLayout() self.form_layout.addRow("Country", self.tb_country) self.form_layout.addRow("Continent", self.tb_continent) self.form_layout.addRow("", self.btn_close) self.form_layout.addRow("", self.btn_new) self.setLayout(self.form_layout) def update_fields(self, other): if isinstance(other, Countrypage): self.tb_country.setText(other.tb_country.text()) self.tb_continent.setText(other.tb_continent.text()) else: raise TypeError('invalid page type') | 6 | 3 |
70,964,954 | 2022-2-3 | https://stackoverflow.com/questions/70964954/filter-out-everything-before-a-condition-is-met-keep-all-elements-after | I was wondering if there was an easy solution to the the following problem. The problem here is that I want to keep every element occurring inside this list after the initial condition is true. The condition here being that I want to remove everything before the condition that a value is greater than 18 is true, but keep everything after. Example Input: p = [4,9,10,4,20,13,29,3,39] Expected output: p = [20,13,29,3,39] I know that you can filter over the entire list through [x for x in p if x>18] But I want to stop this operation once the first value above 18 is found, and then include the rest of the values regardless if they satisfy the condition or not. It seems like an easy problem but I haven't found the solution to it yet. | You could use enumerate and list slicing in a generator expression and next: out = next((p[i:] for i, item in enumerate(p) if item > 18), []) Output: [20, 13, 29, 3, 39] In terms of runtime, it depends on the data structure. The plots below show the runtime difference among the answers on here for various lengths of p. If the original data is a list, then using a lazy iterator as proposed by @Kelly Bundy is the clear winner: But if the initial data is a ndarray object, then the vectorized operations as proposed by @richardec and @0x263A (for large arrays) are faster. In particular, numpy beats list methods regardless of array size. But for very large arrays, pandas starts to perform better than numpy (I don't know why, I (and I'm sure others) would appreciate it if anyone can explain it). Code used to generate the first plot: import perfplot import numpy as np import pandas as pd import random from itertools import dropwhile def it_dropwhile(p): return list(dropwhile(lambda x: x <= 18, p)) def walrus(p): exceeded = False return [x for x in p if (exceeded := exceeded or x > 18)] def explicit_loop(p): for i, x in enumerate(p): if x > 18: output = p[i:] break else: output = [] return output def genexpr_next(p): return next((p[i:] for i, item in enumerate(p) if item > 18), []) def np_argmax(p): return p[(np.array(p) > 18).argmax():] def pd_idxmax(p): s = pd.Series(p) return s[s.gt(18).idxmax():] def list_index(p): for x in p: if x > 18: return p[p.index(x):] return [] def lazy_iter(p): it = iter(p) for x in it: if x > 18: return [x, *it] return [] perfplot.show( setup=lambda n: random.choices(range(0, 15), k=10*n) + random.choices(range(-20,30), k=10*n), kernels=[it_dropwhile, walrus, explicit_loop, genexpr_next, np_argmax, pd_idxmax, list_index, lazy_iter], labels=['it_dropwhile','walrus','explicit_loop','genexpr_next','np_argmax','pd_idxmax', 'list_index', 'lazy_iter'], n_range=[2 ** k for k in range(18)], equality_check=np.allclose, xlabel='~n/20' ) Code used to generate the second plot (note that I had to modify list_index because numpy doesn't have index method): def list_index(p): for x in p: if x > 18: return p[np.where(p==x)[0][0]:] return [] perfplot.show( setup=lambda n: np.hstack([np.random.randint(0,15,10*n), np.random.randint(-20,30,10*n)]), kernels=[it_dropwhile, walrus, explicit_loop, genexpr_next, np_argmax, pd_idxmax, list_index, lazy_iter], labels=['it_dropwhile','walrus','explicit_loop','genexpr_next','np_argmax','pd_idxmax', 'list_index', 'lazy_iter'], n_range=[2 ** k for k in range(18)], equality_check=np.allclose, xlabel='~n/20' ) | 26 | 23 |
71,039,820 | 2022-2-8 | https://stackoverflow.com/questions/71039820/retrieve-the-pytorch-model-from-a-pytorch-lightning-model | I have trained a PyTorch lightning model that looks like this: In [16]: MLP Out[16]: DecoderMLP( (loss): RMSE() (logging_metrics): ModuleList( (0): SMAPE() (1): MAE() (2): RMSE() (3): MAPE() (4): MASE() ) (input_embeddings): MultiEmbedding( (embeddings): ModuleDict( (LCLid): Embedding(5, 4) (sun): Embedding(5, 4) (day_of_week): Embedding(7, 5) (month): Embedding(12, 6) (year): Embedding(3, 3) (holidays): Embedding(2, 1) (BusinessDay): Embedding(2, 1) (day): Embedding(31, 11) (hour): Embedding(24, 9) ) ) (mlp): FullyConnectedModule( (sequential): Sequential( (0): Linear(in_features=60, out_features=435, bias=True) (1): ReLU() (2): Dropout(p=0.13371112461182535, inplace=False) (3): LayerNorm((435,), eps=1e-05, elementwise_affine=True) (4): Linear(in_features=435, out_features=435, bias=True) (5): ReLU() (6): Dropout(p=0.13371112461182535, inplace=False) (7): LayerNorm((435,), eps=1e-05, elementwise_affine=True) (8): Linear(in_features=435, out_features=435, bias=True) (9): ReLU() (10): Dropout(p=0.13371112461182535, inplace=False) (11): LayerNorm((435,), eps=1e-05, elementwise_affine=True) (12): Linear(in_features=435, out_features=435, bias=True) (13): ReLU() (14): Dropout(p=0.13371112461182535, inplace=False) (15): LayerNorm((435,), eps=1e-05, elementwise_affine=True) (16): Linear(in_features=435, out_features=435, bias=True) (17): ReLU() (18): Dropout(p=0.13371112461182535, inplace=False) (19): LayerNorm((435,), eps=1e-05, elementwise_affine=True) (20): Linear(in_features=435, out_features=435, bias=True) (21): ReLU() (22): Dropout(p=0.13371112461182535, inplace=False) (23): LayerNorm((435,), eps=1e-05, elementwise_affine=True) (24): Linear(in_features=435, out_features=435, bias=True) (25): ReLU() (26): Dropout(p=0.13371112461182535, inplace=False) (27): LayerNorm((435,), eps=1e-05, elementwise_affine=True) (28): Linear(in_features=435, out_features=435, bias=True) (29): ReLU() (30): Dropout(p=0.13371112461182535, inplace=False) (31): LayerNorm((435,), eps=1e-05, elementwise_affine=True) (32): Linear(in_features=435, out_features=435, bias=True) (33): ReLU() (34): Dropout(p=0.13371112461182535, inplace=False) (35): LayerNorm((435,), eps=1e-05, elementwise_affine=True) (36): Linear(in_features=435, out_features=1, bias=True) ) ) ) I need the corresponding PyTorch model to use in one of my other applications. Is there a simple way to do that? I thought of saving the checkpoint but then I don't know how to do it. Can you please help? Thanks | You can manually save the weights of the torch.nn.Modules in the LightningModule. Something like: trainer.fit(model, trainloader, valloader) torch.save( model.input_embeddings.state_dict(), "input_embeddings.pt" ) torch.save(model.mlp.state_dict(), "mlp.pt") Then to load without needing Lightning: # create the "blank" networks like they # were created in the Lightning Module input_embeddings = MultiEmbedding(...) mlp = FullyConnectedModule(...) # Load the models for inference input_embeddings.load_state_dict( torch.load("input_embeddings.pt") ) input_embeddings.eval() mlp.load_state_dict( torch.load("mlp.pt") ) mlp.eval() For more information about saving and loading PyTorch Modules see Saving and Loading Models: Saving & Loading Model for Inference in the PyTorch documentation. 
Since Lightning automatically saves checkpoints to disk (check the lightning_logs folder if using the default Tensorboard logger), you can also load a pretrained LightningModule and then save the state dicts without needing to repeat all the training. Instead of calling trainer.fit in the previous code, try model = DecoderMLP.load_from_checkpoint("path/to/checkpoint.ckpt") | 6 | 6 |
71,035,556 | 2022-2-8 | https://stackoverflow.com/questions/71035556/how-to-do-n-point-circular-convolution-for-1d-signal-with-numpy | I want a circular convolution function where I can set the number N as I like. All examples I looked at like here and here assume that full padding is required but that not what I want. I want to have the result for different values of N so input would N and and two different arrays of values the output should be the N point convolved signal Here is the formula for circular convolution. Sub N can be seen as the modulo operation. taken from this basic introduction update for possible solution This answer is a suitable solution when the array a is piled accordingly to the different cases of N. When I find time I will post a complete answer, meanwhile feel free to do so. Thanks to @André pointing this out in the comments! examples for input/output from here N = 4 N = 7 with zero padding | I think that this should work: def conv(x1, x2, N): n, m = np.ogrid[:N, :N] return (x1[:N] * x2[(n - m) % N]).sum(axis=1) This is a direct translation of the formula posted in the question: To implement this formula, first we compute an array of indices used by x₂. This is done using the code n, m = np.ogrid[:N, :N] indices = (n - m) % N For example, for N=5, the array indices is: [[0 4 3 2 1] [1 0 4 3 2] [2 1 0 4 3] [3 2 1 0 4] [4 3 2 1 0]] The entry in the i-th row and j-th column is (i-j) % N. Then, x2[indices] creates an array consisting of elements of x2 corresponding to these indices. It remains to multiply each row of this array by the first N elements of x1 and take the sum of each row: (x1[:N] * x2[indices]).sum(axis=1) | 6 | 3 |
71,039,131 | 2022-2-8 | https://stackoverflow.com/questions/71039131/windows-python-3-10-2-fails-to-run-python-m-venv-venv | This issue has been solved, resulted in a bug report to Python.org. See the my self-answer below for the workaround until it's fixed in a future release of Python One of my PCs got bitten by this bug which no longer allows me to create venv with the error: Error: Command '['C:\\Users\\kesh\\test\\.venv\\Scripts\\python.exe', '-Im', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 101. This issue has been known, chronologically: v3.7.2, v3.8, v3.?, & v3.10.1 The only known solution is to give up per-user install and use global install by checking "Install for all users" option I'm trying to figure out what exactly is happening, but quickly running out of ideas. Here are what I've tried so far: On my PC, "Install for all users" works as well as per-user install on a dummy account (all using the same v3.10.2 installer). This singles out the issue to be on my Windows account. Changing the install location does not help. Went into venv source by running Python with venv.main(args=('.venv',)), debugging line-by-line and noted that it copies Lib\venv\scripts\nt\python.exe from the python install dir to the local .venv\Scripts folder using shutil.copyfile(). If I run the original Lib\venv\scripts\nt\python.exe in command prompt, it runs with a message No pyvenv.cfg file (which makes sense as the .cfg file is in .venv folder which it couldn't see) If I call the copied .venv\Scripts\python.exe then it returns an error Unable to create process using 'C:\Users\kesh\AppData\Local\Programs\Python\Python310\python.exe' (note that the python.exe path for the process is that of the installed Python exe) If .venv is installed successfully (on the dummy Windows account), the above run starts a Python session as you'd expect. venv\scripts\nt\python.exe is different from the standard python binary and verified that this file and its source in venv\Scripts\nt are identical. All this points to that something in my account configuration is bothering the .venv\Scripts\python.exe to do the right thing, but my environmental variables are pretty clean and python paths are at the top of the user PATH variable. Currently trying to locate the source code of .venv\Scripts\python.exe but not found it yet. Can it be something in registry? If you have any other ideas to try, please share. Update #1: Found the source of the error message PC/launcher.c Line 814 Possibility: CreateProcessW(NULL, cmdline,... where cmdline is the original python path in the error message, without quote. CreateProcessW documentation states executable name is deduced from the first white space–delimited token in the cmdline string. Though I replaced my actual account name with kesh it actually comprises two words and a space... Update #2: Solution found as posted below | Bingo, the finding in the update #1 was the cause. The space in my username was the culprit. Although I have no idea what triggered this behavior change on my account... (anybody with an answer, please follow up.) Let's say the per-user python is installed at C:\Users\User Name\AppData\Local\Programs\Python\Python310 In my case, "Microsoft Visual C++ 2015-2022 Redistributable" installer (VC_redist.x64.exe) left a log file C:\Users\User (a text file with the first part of my account name as its file name). 
This caused python venv to use C:\Users\User as the python executable and promptly failed (see the issue tracker link below for the full explanation). You can fix the issue in 2 ways until Python patches the problem. Easy Fix Simply delete the file C:\Users\User Note: This will work until next time another installer leaves this nasty log file. More Involved Fix In command console, run DIR /X C:\Users which lists something like: 02/08/2022 11:44 AM <DIR> . 02/08/2022 11:44 AM <DIR> .. 11/19/2020 01:48 AM <DIR> Public 02/08/2022 02:47 PM <DIR> USERNA~1 User Name Open the Environmental Variables dialog and edit Path user variable's Python entries from C:\Users\User Name\AppData\Local\Programs\Python\Python310\Scripts C:\Users\User Name\AppData\Local\Programs\Python\Python310 to C:\Users\USERNA~1\AppData\Local\Programs\Python\Python310\Scripts C:\Users\USERNA~1\AppData\Local\Programs\Python\Python310 Re-open python console window or app so the path change is applied to your dev environment. Note: This fix will work as long as you don't update python version. When you do, you need to delete the old path entries manually and update the new path entries. Eventual Fix I reported this bug to python bug tracker: Issue 46686. They've acknowledged the bug and have it labeled as critical with a proposed fix. So, hopefully it will get fixed in a near future release. (>3.10.2) | 8 | 18 |
70,993,385 | 2022-2-4 | https://stackoverflow.com/questions/70993385/how-to-add-kaleido-package-to-poetry-lock-file | When attempting to install "kaleido" via Poetry, I receive the following error message: ~ poetry add kaleido Using version ^0.2.1 for kaleido Updating dependencies Resolving dependencies... (3.1s) Package operations: 1 install, 0 updates, 0 removals • Installing kaleido (0.2.1.post1): Failed RuntimeError Unable to find installation candidates for kaleido (0.2.1.post1) at ~/.poetry/lib/poetry/installation/chooser.py:72 in choose_for 68│ 69│ links.append(link) 70│ 71│ if not links: → 72│ raise RuntimeError( 73│ "Unable to find installation candidates for {}".format(package) 74│ ) 75│ 76│ # Get the best link However, the "kaleido" appears in the poetry.lock file: [[package]] name = "kaleido" version = "0.2.1.post1" description = "Static image export for web-based visualization libraries with zero dependencies" category = "main" optional = false python-versions = "*" If I try to export an image, I unsurprisingly receive the following error message: ValueError: Image export using the "kaleido" engine requires the kaleido package, which can be installed using pip: $ pip install -U kaleido Does anyone know how to install this package via poetry (or amend the .lock file to do it manually)? | Firstly try to use a master version of poetry as advised in Github issue or upgrade it to the latest version pip3 install --upgrade poetry Then try to install with kaleido with locked version: poetry add kaleido==0.2.1 That worked in my case. | 11 | 30 |
70,993,316 | 2022-2-4 | https://stackoverflow.com/questions/70993316/get-feature-names-after-sklearn-pipeline | I want to match the output np array with the features to make a new pandas dataframe Here is my pipeline: from sklearn.pipeline import Pipeline # Categorical pipeline categorical_preprocessing = Pipeline( [ ('Imputation', SimpleImputer(missing_values=np.nan, strategy='most_frequent')), ('Ordinal encoding', OrdinalEncoder(handle_unknown='use_encoded_value', unknown_value=-1)), ] ) # Continuous pipeline continuous_preprocessing = Pipeline( [ ('Imputation', SimpleImputer(missing_values=np.nan, strategy='mean')), ('Scaling', StandardScaler()) ] ) # Creating preprocessing pipeline preprocessing = make_column_transformer( (continuous_preprocessing, continuous_cols), (categorical_preprocessing, categorical_cols), ) # Final pipeline pipeline = Pipeline( [('Preprocessing', preprocessing)] ) Here is how I call it: X_train = pipeline.fit_transform(X_train) X_val = pipeline.transform(X_val) X_test = pipeline.transform(X_test) Here is what I get when trying to get the feature names: pipeline['Preprocessing'].transformers_[1][1]['Ordinal encoding'].get_feature_names() OUT: AttributeError: 'OrdinalEncoder' object has no attribute 'get_feature_names' Here is a SO question that was similar: Sklearn Pipeline: Get feature names after OneHotEncode In ColumnTransformer | Point is that, as of today, some transformers do expose a method .get_feature_names_out() and some others do not, which generates some problems - for instance - whenever you want to create a well-formatted DataFrame from the np.array outputted by a Pipeline or ColumnTransformer instance. (Instead, afaik, .get_feature_names() was deprecated in latest versions in favor of .get_feature_names_out()). For what concerns the transformers that you are using, StandardScaler belongs to the first category of transformers exposing the method, while both SimpleImputer and OrdinalEncoder do belong to the second. The docs show the exposed methods within the Methods paragraphs. As said, this causes problems when doing something like pd.DataFrame(pipeline.fit_transform(X_train), columns=pipeline.get_feature_names_out()) on your pipeline, but it would cause problems as well on your categorical_preprocessing and continuous_preprocessing pipelines (as in both cases at least one transformer lacks of the method) and on the preprocessing ColumnTransformer instance. There's an ongoing attempt in sklearn to enrich all estimators with the .get_feature_names_out() method. It is tracked within github issue #21308, which, as you might see, branches in many PRs (each one dealing with a specific module). For instance, issue #21079 for the preprocessing module, which will enrich the OrdinalEncoder among the others, issue #21078 for the impute module, which will enrich the SimpleImputer. I guess that they'll be available in a new release as soon as all the referenced PR will be merged. In the meanwhile, imo, you should go with a custom solution that might fit your needs. 
Here's a simple example, which do not necessarily resemble your need, but which is meant to give a (possible) way of proceeding: import pandas as pd import numpy as np from sklearn.pipeline import Pipeline from sklearn.impute import SimpleImputer from sklearn.preprocessing import OrdinalEncoder, StandardScaler from sklearn.compose import make_column_transformer, make_column_selector X = pd.DataFrame({'city': ['London', 'London', 'Paris', 'Sallisaw', ''], 'title': ['His Last Bow', 'How Watson Learned the Trick', 'A Moveable Feast', 'The Grapes of Wrath', 'The Jungle'], 'expert_rating': [5, 3, 4, 5, np.NaN], 'user_rating': [4, 5, 4, np.NaN, 3]}) X num_cols = X.select_dtypes(include=np.number).columns.tolist() cat_cols = X.select_dtypes(exclude=np.number).columns.tolist() # Categorical pipeline categorical_preprocessing = Pipeline( [ ('Imputation', SimpleImputer(missing_values='', strategy='most_frequent')), ('Ordinal encoding', OrdinalEncoder(handle_unknown='use_encoded_value', unknown_value=-1)), ] ) # Continuous pipeline continuous_preprocessing = Pipeline( [ ('Imputation', SimpleImputer(missing_values=np.nan, strategy='mean')), ('Scaling', StandardScaler()) ] ) # Creating preprocessing pipeline preprocessing = make_column_transformer( (continuous_preprocessing, num_cols), (categorical_preprocessing, cat_cols), ) # Final pipeline pipeline = Pipeline( [('Preprocessing', preprocessing)] ) X_trans = pipeline.fit_transform(X) pd.DataFrame(X_trans, columns= num_cols + cat_cols) | 8 | 6 |
71,041,284 | 2022-2-8 | https://stackoverflow.com/questions/71041284/how-to-create-an-array-or-list-column-in-sqlalchemy-model | I know it's possible to create array of string in postgres but I want to create a model in sqlalchemy that contains a list or array column but I don't know how Please view the code below class Question(Base): __tablename__= 'questions' id = Column(Integer, nullable=False, primary_key=True) question = Column(String,nullable=False) options = Column([String],nullable=False) answer = Column(String,nullable=False) created_at = Column(TIMESTAMP(timezone=true),server_default=text('now()') | You need to use: from sqlalchemy.dialects.postgresql import ARRAY Here: from datetime import datetime from sqlalchemy import * from sqlalchemy.dialects.postgresql import ARRAY from sqlalchemy.ext.declarative import declarative_base Base = declarative_base() class Question1(Base): __tablename__= 'questions' id = Column(Integer, nullable=False, primary_key=True) question = Column(String,nullable=False) options = Column(ARRAY(String),nullable=False) answer = Column(String,nullable=False) Example: Python 3.8.2 (default, Dec 21 2020, 15:06:04) [Clang 12.0.0 (clang-1200.0.32.29)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from datetime import datetime >>> from sqlalchemy import * >>> from sqlalchemy.dialects.postgresql import ARRAY >>> from sqlalchemy.ext.declarative import declarative_base >>> Base = declarative_base() >>> class Question1(Base): ... __tablename__= 'questions' ... id = Column(Integer, nullable=False, primary_key=True) ... question = Column(String,nullable=False) ... options = Column(ARRAY(String),nullable=False) ... answer = Column(String,nullable=False) ... >>> | 5 | 5 |
71,041,586 | 2022-2-8 | https://stackoverflow.com/questions/71041586/typeerror-type-object-is-not-subscriptable | I’m trying to get the function below to run. However I’m getting an error saying TypeError: ‘type’ object is not subscriptable def dist(loc1: tuple[float], loc2: tuple[float]) -> float: dx = loc1[0] - loc2[0] dy = loc1[1] - loc2[1] return (dx**2 + dy**2)**0.5 | You need to use typing.Tuple, not the tuple class. from typing import Tuple def dist(loc1: Tuple[float], loc2: Tuple[float]) -> float: dx = loc1[0] - loc2[0] dy = loc1[1] - loc2[1] return (dx**2 + dy**2)**0.5 dist((1,2),(2,1)) # output 1.4142135623730951 | 8 | 14 |
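As a side note (not part of the accepted answer): on Python 3.9+ the builtin tuple is subscriptable itself (PEP 585), so no typing import is needed, and a 2-D point is more precisely annotated with two element types:

```python
# Python 3.9+ only: builtin generics, no `from typing import Tuple` required
def dist(loc1: tuple[float, float], loc2: tuple[float, float]) -> float:
    dx = loc1[0] - loc2[0]
    dy = loc1[1] - loc2[1]
    return (dx**2 + dy**2)**0.5

print(dist((1, 2), (2, 1)))  # 1.4142135623730951
```

On 3.7/3.8 the same spelling works in annotations if the module starts with `from __future__ import annotations`.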
71,034,111 | 2022-2-8 | https://stackoverflow.com/questions/71034111/how-to-set-default-python3-to-python-3-9-instead-of-python-3-8-in-ubuntu-20-04-l | I have installed Python 3.9 in the Ubuntu 20.04 LTS. Now the system has both Python 3.8 and Python 3.9. # which python # which python3 /usr/bin/python3 # which python3.8 /usr/bin/python3.8 # which python3.9 /usr/bin/python3.9 # ls -alith /usr/bin/python3 12583916 lrwxrwxrwx 1 root root 9 Jul 19 2021 /usr/bin/python3 -> python3.8 But the pip3 command will still install everything into the Python 3.8 directory. # pip3 install --upgrade --find-links file:///path/to/directory <...> I want to change that default pip3 behavior by updating the symbolic link /usr/bin/python3 to /usr/bin/python3.9. How to do that? # update-alternatives --set python3 /usr/bin/python3.9 This command will not work as expected. Here is the pip3 info: # which pip3 /usr/bin/pip3 # ls -alith /usr/bin/pip3 12589712 -rwxr-xr-x 1 root root 367 Jul 13 2021 /usr/bin/pip3 # pip3 -V pip 20.0.2 from /usr/lib/python3/dist-packages/pip (python 3.8) # The alias command will not work: # alias python3=python3.9 # ls -alith /usr/bin/python3 12583916 lrwxrwxrwx 1 root root 9 Jul 19 2021 /usr/bin/python3 -> python3.8 | You should be able to use python3.9 -m pip install <package> to run pip with a specific python version, in this case 3.9. The full docs on this are here: https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/ If you want python3 to point to python3.9 you could use the quick and dirty. alias python3=python3.9 EDIT: Tried to recreate your problem, # which python3 /usr/bin/python3 # python3 --version Python 3.8.10 # which python3.8 /usr/bin/python3.8 # which python3.9 /usr/bin/python3.9 Then update the alternatives, and set new priority: # sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 1 # sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 2 # sudo update-alternatives --config python3 There are 2 choices for the alternative python3 (providing /usr/bin/python3). Selection Path Priority Status ------------------------------------------------------------ 0 /usr/bin/python3.9 2 auto mode 1 /usr/bin/python3.8 2 manual mode * 2 /usr/bin/python3.9 2 manual mode Press <enter> to keep the current choice[*], or type selection number: 0 Check new version: # ls -alith /usr/bin/python3 3338 lrwxrwxrwx 1 root root 25 Feb 8 14:33 /usr/bin/python3 -> /etc/alternatives/python3 # python3 -V Python 3.9.5 # ls -alith /usr/bin/pip3 48482 -rwxr-xr-x 1 root root 367 Jul 13 2021 /usr/bin/pip3 # pip3 -V pip 20.0.2 from /usr/lib/python3/dist-packages/pip (python 3.9) Hope this helps (tried it in wsl2 Ubuntu 20.04 LTS) | 31 | 28 |
71,029,800 | 2022-2-8 | https://stackoverflow.com/questions/71029800/how-to-dynamically-change-the-scale-ticks-of-y-axis-in-plotly-charts-upon-zoomin | I am trying to make a candle stick chart using plotly. I am using stock data spanning over 10 years. Due to this the candles appear very small as the y axis has a large scale. However if I zoom into a smaller time period (lets say any 1 month in the 10 years) I want the y axis scale to change so that the candle looks big. Below is my code: df_stockData = pdr.DataReader('TSLA', data_source='yahoo', start='2011-11-04', end='2021-11-04') fig = make_subplots(rows=2, cols=1, shared_xaxes=True, row_width=[0.25, 0.75]) fig.add_trace(go.Candlestick( x=df_stockData.index, open=df_stockData['Open'], high=df_stockData['High'], low=df_stockData['Low'], close=df_stockData['Close'], increasing_line_color='green', decreasing_line_color='red', showlegend=False ), row=1, col=1) fig.add_trace(go.Scatter( x=df_stockData.index, y=df_stockData['RSI_14'], line=dict(color='#ff9900', width=2), showlegend=False, ), row=2, col=1 ) fig.show() My chart looks as follows: As you can see the y-axis (stock price) has a very large scale. Even if I zoom in to a smaller time period the y axis scale remains the same. Is there any way to make the y-axis scale change dynamically so that the candles appear bigger when I zoom in? | This approach uses a callback with dash to set the range of y-axis based on values selected in range slider. A significant amount of the code is making your figure an MWE (calc RSI_14) import pandas_datareader as pdr from plotly.subplots import make_subplots import plotly.graph_objects as go import numpy as np import dash from dash.dependencies import Input, Output, State from jupyter_dash import JupyterDash # make dataframe complete ... 
df_stockData = pdr.DataReader( "TSLA", data_source="yahoo", start="2011-11-04", end="2021-11-04" ) def rma(x, n, y0): a = (n - 1) / n ak = a ** np.arange(len(x) - 1, -1, -1) return np.r_[ np.full(n, np.nan), y0, np.cumsum(ak * x) / ak / n + y0 * a ** np.arange(1, len(x) + 1), ] n = 14 df = df_stockData df["change"] = df["Close"].diff() df["gain"] = df.change.mask(df.change < 0, 0.0) df["loss"] = -df.change.mask(df.change > 0, -0.0) df["avg_gain"] = rma( df.gain[n + 1 :].to_numpy(), n, np.nansum(df.gain.to_numpy()[: n + 1]) / n ) df["avg_loss"] = rma( df.loss[n + 1 :].to_numpy(), n, np.nansum(df.loss.to_numpy()[: n + 1]) / n ) df["rs"] = df.avg_gain / df.avg_loss df["RSI_14"] = 100 - (100 / (1 + df.rs)) fig = make_subplots(rows=2, cols=1, shared_xaxes=True, row_width=[0.25, 0.75]) fig.add_trace( go.Candlestick( x=df_stockData.index, open=df_stockData["Open"], high=df_stockData["High"], low=df_stockData["Low"], close=df_stockData["Close"], increasing_line_color="green", decreasing_line_color="red", showlegend=False, ), row=1, col=1, ) fig.add_trace( go.Scatter( x=df_stockData.index, y=df_stockData["RSI_14"], line=dict(color="#ff9900", width=2), showlegend=False, ), row=2, col=1, ) # Build App app = JupyterDash(__name__) app.layout = dash.html.Div( [ dash.dcc.Graph( id="fig", figure=fig, ), ] ) @app.callback( Output("fig", "figure"), Input("fig", "relayoutData"), ) def scaleYaxis(rng): if rng and "xaxis.range" in rng.keys(): try: d = df_stockData.loc[ rng["xaxis.range"][0] : rng["xaxis.range"][1], ["High", "Low", "Open", "Close"], ] if len(d) > 0: fig["layout"]["yaxis"]["range"] = [d.min().min(), d.max().max()] except KeyError: pass finally: fig["layout"]["xaxis"]["range"] = rng["xaxis.range"] return fig app.run_server(mode="inline") | 9 | 7 |
71,031,173 | 2022-2-8 | https://stackoverflow.com/questions/71031173/type-hinting-for-a-function-wrapping-another-function-accepting-the-same-argume | def function(a: int, b: str) -> None: pass def wrapper(extra: int, *args, **kwargs) -> None: do_something_with_extra(extra) function(*args, **kwargs) Is there an easy way for wrapper to inherit function()'s type hints without retyping all of them? Normally I'd write def wrapper(extra: int, a: int, b: str) -> None: But it becomes very verbose with a lot of arguments, and I have to update wrapper() every time I update function()'s arguments, and using *args, **kwargs means there won't be proper autocomplete in vscode and other editors. | The best that you can do is to use a decorator for this purpose, and you can use Concatenate and ParamSpec. I use python 3.8 and I had to install typing_extensions for them. In the code below, if you type function in VSCode, it will show the int argument but without the "extra" name in front of it. from typing import Callable, TypeVar from typing_extensions import Concatenate, ParamSpec P = ParamSpec('P') R = TypeVar('R') def wrapper(func: Callable[P, R]) -> Callable[Concatenate[int, P], R]: def inner(extra, *args, **kwargs): print(extra) func(*args, **kwargs) return inner @wrapper def function(a: int, b: str) -> None: print(a, b) pass # typing function will show 'function: (int, a: int, b: str) -> None' | 11 | 6 |
71,024,254 | 2022-2-7 | https://stackoverflow.com/questions/71024254/jupyter-kernel-dies-when-importing-pipeline-function-from-transformers-class-on | I'm unable to import pipeline function of transformers class as my jupyter kernel keeps dying. Tried on transformers-4.15.0 and 4.16.2. Anyone faced this issue? I tried importing the class in a new notebook as you can see in the image and it keeps killing the kernel. | It works fine for me. You could try creating a fresh conda environment and reinstalling the app. You could also try using jupyterlab instead of jupyter-notebook. Are you on Mac OS? I couldn't get it to run at first using conda install transformers my jupyterlab kept hanging as well. Then I did this, conda install -c huggingface transformers and here is the result. It works fine for me on linux and Mac now. | 5 | 2 |
71,027,193 | 2022-2-8 | https://stackoverflow.com/questions/71027193/datetimeindex-get-loc-is-deprecated | I updated Pandas to 1.4.0 with yfinance 0.1.70. Previously, I had to stay with Pandas 1.3.5 as Pandas and yfinance did't play well together. These latest versions of Pandas and yfinance now work together, BUT Pandas now gives me this warning: Future Warning: Passing method to DatetimeIndex.get_loc is deprecated... Use index.get_indexer([item], method=...) instead I had enough trouble as a novice Python person getting the original get_loc statement to work: last_week = format((df.index[df.index.get_loc(last_week, method='nearest')]).strftime('%Y-%m-%d')) This statement allowed me to get a date from the dataframe that I could use further in determining the value associated with that date: week_value = df.loc[last_week, ans] Truth be known, I am intimidated in trying to change this statement to be compliant with the new and improved get_indexer function. Can someone help me out please? | Should be pretty simple. Just change get_loc(XXX, ...) to get_indexer([XXX], ...)[0]: last_week = format((df.index[df.index.get_indexer([last_week], method='nearest')[0]]).strftime('%Y-%m-%d')) | 7 | 13 |
71,022,619 | 2022-2-7 | https://stackoverflow.com/questions/71022619/pre-commit-vs-tox-whats-the-difference-scope-of-use | Tox: https://tox.wiki/en/latest/ pre-commit: https://pre-commit.com/ I would like to understand the borders for both choices. I know that pre-commit creates a py environment - same as tox. To me, their architecture looks a bit the same. Some people use them in combination... what pre-commit can't do, that tox can? I saw examples where with pre-commit during CI pipeline you can run unit tests, etc. Which one is the best to integrate within the CI build? | In short, pre-commit is a linter/formatter runner, tox is a generic virtual env management and test command line tool. While tox could run linters too, it is tedious to manage the versions of the linters. In pre-commit you can just run pre-commit autoupdate, and all linters get updated. On the other hand tox can run e.g. a test suite or a coverage report for many different Python versions. This is not only helpful for a library, but also for an app - so you can already test the upcoming Python versions. tox is also used to create documentation, and sometimes also to make a release - you can't (shouldn't) do this with pre-commit. And tox is certainly not obsolete, although GitHub actions can test against different Python versions, as you can run tox both local and in CI. I gave a lightning talk on this topic: https://www.youtube.com/watch?v=OnM3KuE7MQM Which one is the best to integrate within the CI build? I like to run pre-commit via tox, both in CI and locally. Example tox configuration https://github.com/jugmac00/flask-reuploaded/blob/6496f8427a06be4a9f6a5699757ca7f9aba78ef6/tox.ini#L24-L26 Example pre-commit configuration https://github.com/jugmac00/flask-reuploaded/blob/6496f8427a06be4a9f6a5699757ca7f9aba78ef6/.pre-commit-config.yaml | 6 | 16 |
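As a rough illustration of the "run pre-commit via tox" setup mentioned in the answer (the environment name and flags here are one possible choice, not taken from the linked project), a tox.ini environment could look like:

```ini
[testenv:lint]
description = run all pre-commit hooks against the whole code base
skip_install = true
deps = pre-commit
commands = pre-commit run --all-files --show-diff-on-failure
```

Locally you run `tox -e lint`, and CI can invoke the exact same environment, which keeps the two in sync.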
71,020,663 | 2022-2-7 | https://stackoverflow.com/questions/71020663/measuring-coverage-in-python-threads | I am trying to measure code coverage with a code that uses threads created by the python threading module. I am using coverage to measure coverage. However I can not get the code that is run within a thread to get measured. I tried following the suggestion on the coverage docs to measure coverage in subprocesses but no luck. Here is a minimal example file untitled.py: import time, threading import numpy as np def thread(): print(f'{threading.get_ident()}: started thread') time.sleep(np.random.uniform(1, 10)) print(f'{threading.get_ident()}: done') def run_threads(): threads = [threading.Thread(target=thread)] for t in threads: t.start() print('started all threads') for t in threads: t.join() print('all threads done') if __name__ == '__main__': run_threads() > coverage run untitled.py 139952541644544: started thread started all threads 139952541644544: done all threads done > coverage combine Combined data file .coverage.248973.677959 > coverage report -m Name Stmts Miss Cover Missing ------------------------------------------- untitled.py 14 3 79% 6-8 ------------------------------------------- TOTAL 14 3 79% > As you can see the lines 6-8 (the thread() function) is executed but not measured. For context I am running on a linux machine, Python 3.9.0, coverage 6.2. The directory contains this .coveragerc file: # .coveragerc to control coverage.py [run] source = ./ parallel = True concurrency = multiprocessing [report] # Regexes for lines to exclude from consideration exclude_lines = # Have to re-enable the standard pragma pragma: no cover # Don't complain about missing debug-only code: def __repr__ if self\.debug # Don't complain if tests don't hit defensive assertion code: raise AssertionError raise NotImplementedError # Don't complain if non-runnable code isn't run: if 0: if __name__ == .__main__.: I am very thankful for any suggestion!! | You need to specify "thread" as part of your concurrency setting: concurrency = multiprocessing,thread I'm not sure you need multiprocessing. Your sample program doesn't use it, maybe your real program doesn't either. | 5 | 6 |
71,022,591 | 2022-2-7 | https://stackoverflow.com/questions/71022591/pandas-replace-values-in-column-with-the-last-character-in-the-column-name | I have a dataframe as follows: import pandas as pd df = pd.DataFrame({'sent.1':[0,1,0,1], 'sent.2':[0,1,1,0], 'sent.3':[0,0,0,1], 'sent.4':[1,1,0,1] }) I am trying to replace the non-zero values with the 5th character in the column names (which is the numeric part of the column names), so the output should be, sent.1 sent.2 sent.3 sent.4 0 0 0 0 4 1 1 2 0 4 2 0 2 0 0 3 1 0 3 4 I have tried the following but it does not work, print(df.replace(1, pd.Series([i[5] for i in df.columns], [i[5] for i in df.columns]))) However when I replace it with column name, the above code works, so I am not sure which part is wrong. print(df.replace(1, pd.Series(df.columns, df.columns))) | Since you're dealing with 1's and 0's, you can actually just use multiply the dataframe by a range: df = df * range(1, df.shape[1] + 1) Output: sent.1 sent.2 sent.3 sent.4 0 0 0 0 4 1 1 2 0 4 2 0 2 0 0 3 1 0 3 4 Or, if you want to take the numbers from the column names: df = df * df.columns.str.split('.').str[-1].astype(int) | 5 | 3 |
71,019,671 | 2022-2-7 | https://stackoverflow.com/questions/71019671/vscode-python-debugger-stops-suddenly | after installing Windows updates today, debugging is not working anymore. This is my active debug configuration: "launch": { "version": "0.2.0", "configurations": [ { "name": "DEBUG CURR", "type": "python", "request": "launch", "program": "${file}", "console": "internalConsole", "justMyCode": false, "stopOnEntry": false, }... When I start the debugger, the menu pops up briefly for 1-2 seconds. But then it closes. There is no output in the console. It does not stop at set breakpoints. Does anybody have the same problem? Is there a solution? System settings OS: Microsoft Windows 10 Enterprise (10.0.17763 Build 17763) VSCode version 1.64.0 Python version: 3.8.11 (in the active Anaconda Environment) Installed VSCode extensions: Python (Microsoft) version: v2022.0.1786462952 Pylance (Microsoft) version: v2022.2.0 | It's an issue with the latest Python Extension for VSCode. Downgrading the python extension to v2021.12.1559732655 fixes the problem. | 16 | 21 |
71,011,333 | 2022-2-6 | https://stackoverflow.com/questions/71011333/runtimeerror-stack-expects-each-tensor-to-be-equal-size-but-got-7-768-at-en | When running this code: embedding_matrix = torch.stack(embeddings) I got this error: RuntimeError: stack expects each tensor to be equal size, but got [7, 768] at entry 0 and [8, 768] at entry 1 I'm trying to get embedding using BERT via: split_sent = sent.split() tokens_embedding = [] j = 0 for full_token in split_sent: curr_token = '' x = 0 for i,_ in enumerate(tokenized_sent[1:]): token = tokenized_sent[i+j] piece_embedding = bert_embedding[i+j] if token == full_token and curr_token == '' : tokens_embedding.append(piece_embedding) j += 1 break sent_embedding = torch.stack(tokens_embedding) embeddings.append(sent_embedding) embedding_matrix = torch.stack(embeddings) Does anyone know how I can fix this? | As per PyTorch Docs about torch.stack() function, it needs the input tensors in the same shape to stack. I don't know how will you be using the embedding_matrix but either you can add padding to your tensors (which will be a list of zeros at the end till a certain user-defined length and is recommended if you will train with this stacked tensor, refer this tutorial) to make them equidimensional or you can simply use something like torch.cat(data,dim=0). | 6 | 6 |
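A minimal sketch of the padding route using pad_sequence (the random tensors stand in for the per-sentence embeddings from the question):

```python
import torch
from torch.nn.utils.rnn import pad_sequence

embeddings = [torch.randn(7, 768), torch.randn(8, 768)]  # unequal first dimensions

padded = pad_sequence(embeddings, batch_first=True)  # (2, 8, 768), shorter one zero-padded
flat = torch.cat(embeddings, dim=0)                  # (15, 768), plain concatenation, no padding

print(padded.shape, flat.shape)
```

Which of the two you want depends on whether downstream code expects a fixed-size batch dimension or just a pool of token vectors.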
70,953,743 | 2022-2-2 | https://stackoverflow.com/questions/70953743/reinterpreting-numpy-arrays-as-a-different-dtype | Say I have a large NumPy array of dtype int32 import numpy as np N = 1000 # (large) number of elements a = np.random.randint(0, 100, N, dtype=np.int32) but now I want the data to be uint32. I could do b = a.astype(np.uint32) or even b = a.astype(np.uint32, copy=False) but in both cases b is a copy of a, whereas I want to simply reinterpret the data in a as being uint32, so as not to duplicate the memory. Similarly, using np.asarray() does not help. What does work is a.dtype = np.uint32 which simply changes the dtype without altering the data at all. Here's a striking example: import numpy as np a = np.array([-1, 0, 1, 2], dtype=np.int32) print(a) a.dtype = np.uint32 print(a) # shows "overflow", which is what I want My questions are about the solution of simply overwriting the dtype of the array: Is this legitimate? Can you point me to where this feature is documented? Does it in fact leave the data of the array untouched, i.e. no duplication of the data? What if I want two arrays a and b sharing the same data, but view it as different dtypes? I've found the following to work, but again I'm concerned if this is really OK to do: import numpy as np a = np.array([0, 1, 2, 3], dtype=np.int32) b = a.view(np.uint32) print(a) # [0 1 2 3] print(b) # [0 1 2 3] a[0] = -1 print(a) # [-1 1 2 3] print(b) # [4294967295 1 2 3] Though this seems to work, I find it weird that the underlying data of the two arrays does not seem to be located at the same place in memory: print(a.data) print(b.data) Actually, it seems that the above gives different results each time it is run, so I don't understand what's going on there at all. This can be extended to other dtypes, the most extreme of which is probably mixing 32 and 64 bit floats: import numpy as np a = np.array([0, 1, 2, np.pi], dtype=np.float32) b = a.view(np.float64) print(a) # [0. 1. 2. 3.1415927] print(b) # [0.0078125 50.12387848] b[0] = 8 print(a) # [0. 2.5 2. 3.1415927] print(b) # [8. 50.12387848] Again, is this condoned, if the obtained behaviour is really what I'm after? | Is this legitimate? Can you point me to where this feature is documented? This is legitimate. However, using np.view (which is equivalent) is better since it is compatible with static analysers (so it is somewhat safer). Indeed, the documentation states: It's possible to mutate the dtype of an array at runtime. [...] This sort of mutation is not allowed by the types. Users who want to write statically typed code should instead use the numpy.ndarray.view method to create a view of the array with a different dtype. Does it in fact leave the data of the array untouched, i.e. no duplication of the data? Yes, since the array is still a view on the same internal memory buffer (a basic byte array); Numpy will just reinterpret it differently (this is done directly in the C code of each Numpy computing function). What if I want two arrays a and b sharing the same data, but view it as different dtypes? [...] np.view can be used in this case as you did in your example. However, the result is platform dependent. Indeed, Numpy just reinterprets bytes of memory, and theoretically the representation of negative numbers can change from one machine to another. Fortunately, nowadays, all mainstream modern processors use the two's complement (source). This means that a np.int32 value like -1 will be reinterpreted as 2**32-1 = 4294967295 with a view of type np.uint32. 
Positive signed values are unchanged. As long as you are aware of this, this is fine and the behaviour is predictable. This can be extended to other dtypes, the most extreme of which is probably mixing 32 and 64 bit floats. Well, to put it shortly, this is really like playing with fire. In this case it is certainly unsafe, although it may work on your specific machine. Let us venture into troubled waters. First of all, the documentation of np.view states: The behavior of the view cannot be predicted just from the superficial appearance of a. It also depends on exactly how a is stored in memory. Therefore if a is C-ordered versus fortran-ordered, versus defined as a slice or transpose, etc., the view may give different results. The thing is that Numpy reinterprets the pointer in C code. Thus, AFAIK, the strict aliasing rule applies. This means that reinterpreting a np.float32 value as a np.float64 causes undefined behaviour. One reason is that the alignment requirements are not the same for np.float32 (typically 4) and np.float64 (typically 8), and so reading an unaligned np.float64 value from memory can cause a crash on some architectures (e.g. POWER), although x86-64 processors support this. Another reason comes from the compiler, which can over-optimize the code due to the strict aliasing rule by making wrong assumptions in your case (like a np.float32 value and a np.float64 value cannot overlap in memory, so the modification of the view should not change the original array). However, since Numpy is called from CPython and no function calls are inlined from the interpreter (probably not with Cython), this last point should not be a problem (it may be the case if you use Numba or any JIT though). Note that it is safe to get a np.uint8 view of a np.float32, since it does not break the strict aliasing rule (and the alignment is OK). This could be useful to efficiently serialize Numpy arrays. The opposite operation is not safe (especially due to the alignment). Update about the last section: a deeper analysis of the Numpy code shows that some parts of the code, like type-conversion functions, perform a safe type punning using the memmove C call, while some other functions, like all basic unary or binary operators, do not appear to do proper type punning yet! Moreover, such a feature is barely tested by users, and tricky corner cases are likely to cause weird bugs (especially if you read and write in two views of the same array). Thus, use it at your own risk. | 9 | 11 |
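A small illustration of the safe np.uint8 round-trip mentioned at the end of the answer (sample values only):

```python
import numpy as np

a = np.array([0.0, 1.0, 2.0, np.pi], dtype=np.float32)

raw = a.view(np.uint8)           # 16 bytes, shares memory with a (no copy)
restored = raw.view(np.float32)  # reinterpret the same buffer back as float32

print(raw.nbytes)                     # 16
print(np.array_equal(a, restored))    # True
print(np.shares_memory(a, restored))  # True
```

This is the pattern that makes byte-level serialization cheap, while the float32-to-float64 view remains the risky direction described above.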
71,012,012 | 2022-2-6 | https://stackoverflow.com/questions/71012012/modulenotfounderror-no-module-named-transformers | This is my first post and I am new to coding, so please let me know if you need more information. I have been running some AI to generate artwork and it has been working, but when I reloaded it the python script won't work and it is now saying "No module named 'transformers'". Can anyone help me out? It was when I upgraded to Google Colab Pro that I started to encounter issues although I am not sure why that would make a difference. ModuleNotFoundError | Probably it is because you have not installed in your (new, since you've upgraded to colabs pro) session the library transformers. Try to run as first cell the following: !pip install transformers (the "!" at the beginning of the instruction is needed to go into "terminal mode" ). This will download the transformers package into the session's environment. | 31 | 32 |
71,010,343 | 2022-2-6 | https://stackoverflow.com/questions/71010343/cannot-load-swrast-and-iris-drivers-in-fedora-35 | Essentially, trying to write the following code results in the error below: Code from matplotlib import pyplot as plt plt.plot([1,2,3,2,1]) plt.show() Error libGL error: MESA-LOADER: failed to open iris: /home/xxx/.conda/envs/stat/lib/python3.8/site-packages/pandas/_libs/window/../../../../../libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by /usr/lib64/dri/iris_dri.so) (search paths /usr/lib64/dri, suffix _dri) libGL error: failed to load driver: iris libGL error: MESA-LOADER: failed to open swrast: /home/xxx/.conda/envs/stat/lib/python3.8/site-packages/pandas/_libs/window/../../../../../libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by /usr/lib64/dri/swrast_dri.so) (search paths /usr/lib64/dri, suffix _dri) libGL error: failed to load driver: swrast I found similar errors on StackOverflow but none were what is needed here. | Short answer: export LD_PRELOAD=/usr/lib64/libstdc++.so.6 Long answer: The underlying problem is that we have a piece of software that was built with an older C++ compiler. Part of the compiler is its implementation of libstdc++ which becomes part of the runtime requirements for anything built by the compiler. The software in question has, evidently, brought its own, older implementation of libstdc++ along for the ride, and given its libstdc++ precedence over the system's libstdc++. Typically, this is done via the $LD_LIBRARY_PATH environment variable. Unfortunately, /usr/lib64/dri/swrast_dri.so is a piece of system software built by the native compiler for that system, and it's more recent than the compiler that built the other software in question. The result of this is that the older compiler's libstdc++ gets loaded first, with its older, more limited symbol set. When it then wants to load swrast, this fails because swrast insists on having the level of compiler/runtime with which it was built. The solution to this whole mess is the force the system's (newer) libstdc++ into use and prevent the older libstdc++ from being brought into play. This is achieved via the code snippet export LD_PRELOAD=/usr/lib64/libstdc++.so.6 where we set the preload environment variable. | 33 | 22 |
71,003,828 | 2022-2-6 | https://stackoverflow.com/questions/71003828/is-there-a-way-to-have-a-default-value-inside-a-dictionary-in-python | I want to know if there is a way to have a default value in a dictionary (without using the get function), so that: dict colors = { "Black":(0, 0, 0), "White":(255, 255, 255), default:(100, 100, 100) }; paint(colors["Blue"]); # Paints the default value (Grey) onto the screen Of course, the code above wouldn't work in Python, and I have serious doubt it is even possible. I Know that by using get I could do this easily, however I'm still curious if there is any other way (just for curiosity). | You can use a defaultdict. from collections import defaultdict colors = defaultdict(lambda: (100, 100, 100)) colors["Black"] = (0, 0, 0), colors["White"] = (255, 255, 255) # Prints (0, 0, 0), because "Black" is mapped to (0, 0, 0) in the dictionary. print(colors["Black"]) # Prints (100, 100, 100), because "Blue" is not a key in the dictionary. print(colors["Blue"]) | 5 | 9 |
71,007,924 | 2022-2-6 | https://stackoverflow.com/questions/71007924/how-can-i-get-a-version-to-the-root-of-a-typer-typer-application | My CLI applications typically have subcommands. I want to have the --version flag at the root of my CLI applications, but with Typer I've only seen ways to put it to a command. I want to add it to the typer.Typer object (the root) itself. How can I do that? What I've tried import typer from typing import Optional __version__ = "0.1.0" def version_callback(value: bool): if value: typer.echo(f"Awesome CLI Version: {__version__}") raise typer.Exit() app = typer.Typer( add_completion=False, ) @app.command() def main( version: Optional[bool] = typer.Option( None, "--version", callback=version_callback ), ) -> None: pass @app.command() def foo() -> None: pass if __name__ == "__main__": app() This gives $ python cli.py --help Usage: cli.py [OPTIONS] COMMAND [ARGS]... Options: --help Show this message and exit. Commands: foo main $ python cli.py main --help Usage: cli.py main [OPTIONS] Options: --version --help Show this message and exit. What I wanted: $ python cli.py --help Usage: cli.py [OPTIONS] COMMAND [ARGS]... Options: --version --help Show this message and exit. Commands: foo | This is addressed in the documentation: But as those CLI parameters are handled by each of those commands, they don't allow us to create CLI parameters for the main CLI application itself. But we can use @app.callback() for that. It's very similar to @app.command(), but it declares the CLI parameters for the main CLI application (before the commands): To do what you want, you could write something like this: import typer from typing import Optional __version__ = "0.1.0" def version_callback(value: bool): if value: typer.echo(f"Awesome CLI Version: {__version__}") raise typer.Exit() app = typer.Typer( add_completion=False, ) @app.callback() def common( ctx: typer.Context, version: bool = typer.Option(None, "--version", callback=version_callback), ): pass @app.command() def main() -> None: pass @app.command() def foo() -> None: pass if __name__ == "__main__": app() Which gives us: $ python typertest.py --help Usage: typertest.py [OPTIONS] COMMAND [ARGS]... Options: --version --help Show this message and exit. Commands: foo main | 6 | 9 |
71,006,708 | 2022-2-6 | https://stackoverflow.com/questions/71006708/getting-sslv3-alert-handshake-failure-when-trying-to-connect-to-imap | i need to do a script for imap backup but when i'm trying to connect to the imap server with my script i'm getting that error: File "c:\Users\Lenovo\Desktop\python\progettoscuola.py", line 5, in <module> imapSrc = imaplib.IMAP4_SSL('mail.safemail.it') File "C:\Program Files\Python310\lib\imaplib.py", line 1323, in __init__ IMAP4.__init__(self, host, port, timeout) File "C:\Program Files\Python310\lib\imaplib.py", line 202, in __init__ self.open(host, port, timeout) File "C:\Program Files\Python310\lib\imaplib.py", line 1336, in open IMAP4.open(self, host, port, timeout) File "C:\Program Files\Python310\lib\imaplib.py", line 312, in open self.sock = self._create_socket(timeout) File "C:\Program Files\Python310\lib\imaplib.py", line 1327, in _create_socket return self.ssl_context.wrap_socket(sock, File "C:\Program Files\Python310\lib\ssl.py", line 512, in wrap_socket return self.sslsocket_class._create( File "C:\Program Files\Python310\lib\ssl.py", line 1070, in _create self.do_handshake() File "C:\Program Files\Python310\lib\ssl.py", line 1341, in do_handshake self._sslobj.do_handshake() ssl.SSLError: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:997)``` | Python 3.10 increased the default security settings of the TLS stack by among other things prohibiting any ciphers which still use the RSA key exchange. RSA key exchange is long considered inferior since it does not provide forward secrecy and is therefore also no longer available in TLS 1.3. So in general the change in Python 3.10 can be considered an improvement. But, some servers still require this obsolete key exchange and mail.safemail.it seems to be among these. Connecting to such servers with the newly hardened TLS settings will now fail, even if it succeeded with older versions of Python. To make connections possible again it is necessary to use weaker security settings. For this specific server it can be done by falling back to the DEFAULT ciphers used by OpenSSL. The following code will create a new SSL context and use it for connecting to the host. The important part here is to use weaker settings using ctx.set_ciphers('DEFAULT') . import imaplib import ssl ctx = ssl.create_default_context() ctx.set_ciphers('DEFAULT') imapSrc = imaplib.IMAP4_SSL('mail.safemail.it', ssl_context = ctx) | 7 | 21 |
71,004,414 | 2022-2-6 | https://stackoverflow.com/questions/71004414/numpy-dot-for-dimensions-2 | I am trying to understand how dot product works for dimensions more than 2. The documentation says: If a is an N-D array and b is an M-D array (where M>=2), it is a sum product over the last # axis of a and the second-to-last axis of b: dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m]) I don't understand this rule -- where does it come from? Why is it the last axis of a and the second-to-last axis of b? | It may be easier to visualize this using the notation of np.einsum. Start with regular 2D matrix multiplication, which is pretty unambiguous, and follows "normal math" rules: a = np.ones((2, 3)) b = np.ones((3, 4)) np.einsum('ij,jk->ik', a, b) # Same as a.dot(b) Now prepend a few dimensions: a = np.ones((2, 3, 4, 5)) b = np.ones((8, 7, 5, 6)) np.einsum('ijkl,nolp->ijknop', a, b) # Same as a.dot(b) In general, a multidimensional array can be thought of as a stack of matrices. There are a number of reasons that numpy generally places the "elements" of an array, whether features, samples, vectors, or matrices along the last dimensions. So, regardless of dot, it is fairly standard to think of a as a 2x3 array of 4x5 matrices. Similarly, b is a 8x7 array of 5x6 matrices. There are two options going forward. The last two dimensions of the result are unambiguous no matter what choice you make: the output will contain 4x6 matrices. You can compute all the possible combinations of 2x3x8x7 such matrices, or you can expect the leading dimensions to broadcast together. np.dot takes the former approach, while @/np.matmul takes the latter. Anecdotally, I've found that most people will find the latter more intitive. The exact reason that np.dot computes the matrix product of all possible combinations of a and b is known only to the developers making that choice. It likely revolves around not wanting to raise an error, and the rules of broadcasting not being as universally cemented at the time dot was first implemented. Once you decide to take the products of all the combinations, you can decide to do it like this: np.einsum('ijkl,nolp->ijnokp', a, b) # Not `np.dot` On the one hand, this would reflect the idea of a 2x3x8x7 array of 4x6 matrices better. On the other hand, it creates an awkward permutation of dimensions with k out of place in the sequence. The developers went with the arguably less awkward approach of grouping the dimensions from each array separately. You can always a transpose an array to the shape you want after all. | 5 | 3 |
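A shape-only comparison of the two conventions discussed in the answer (arrays of ones, purely to show the resulting dimensions):

```python
import numpy as np

a = np.ones((2, 3, 4, 5))
b = np.ones((2, 3, 5, 6))

print(np.matmul(a, b).shape)  # (2, 3, 4, 6)        leading dims broadcast together
print(np.dot(a, b).shape)     # (2, 3, 4, 2, 3, 6)  every combination of stacked matrices
```

For stacked-matrix work, `a @ b` (i.e. np.matmul) is usually what people expect; np.dot's all-combinations behaviour is the historical choice described above.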
70,995,419 | 2022-2-5 | https://stackoverflow.com/questions/70995419/how-to-mock-an-async-instance-method-of-a-patched-class | (The following code can be run in Jupyter.) I have a class B, which uses class A, needs to be tested. class A: async def f(self): pass class B: async def f(self): a = A() x = await a.f() # need to be patched/mocked And I have the following test code. It seems it mocked the class method of A instead of the instance method. from asyncio import Future from unittest.mock import MagicMock, Mock, patch async def test(): sut = B() with patch('__main__.A') as a: # it's __main__ in Jupyter future = Future() future.set_result('result') a.f = MagicMock(return_value=future) await sut.f() await test() However, the code got the error of: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) C:\Users\X~1\AppData\Local\Temp\1/ipykernel_36576/3227724090.py in <module> 20 await sut.f() 21 ---> 22 await test() C:\Users\X~1\AppData\Local\Temp\1/ipykernel_36576/3227724090.py in test() 18 future.set_result('result') 19 a.f = MagicMock(return_value=future) ---> 20 await sut.f() 21 22 await test() C:\Users\X~1\AppData\Local\Temp\1/ipykernel_36576/3227724090.py in f(self) 6 async def f(self): 7 a = A() ----> 8 x = await a.f() # need to be patched/mocked 9 10 from asyncio import Future TypeError: object MagicMock can't be used in 'await' expression | In Python 3.8+, patching an async method gives you an AsyncMock, so providing a result is a little more straightforward. In the docs of the patch method itself: If new is omitted, then the target is replaced with an AsyncMock if the patched object is an async function or a MagicMock otherwise. AsyncMock lets you supply a return value in a much more straightforward manner: import asyncio from unittest.mock import patch class A: async def f(self): return "foo" class B: async def f(self): return await A().f() async def main(): print(await B().f()) with patch("__main__.A.f", return_value="bar") as p: print(await B().f()) if __name__ == "__main__": try: asyncio.run(main()) except KeyboardInterrupt: sys.exit(1) ....prints: $ python example.py foo bar The side_effect kwarg covers most kinds of values you'd be looking to return (e.g. if you need your mock function await something). if side_effect is a function, the async function will return the result of that function, if side_effect is an exception, the async function will raise the exception, if side_effect is an iterable, the async function will return the next value of the iterable, however, if the sequence of result is exhausted, StopAsyncIteration is raised immediately, if side_effect is not defined, the async function will return the value defined by return_value, hence, by default, the async function returns a new AsyncMock object. | 7 | 7 |
70,987,896 | 2022-2-4 | https://stackoverflow.com/questions/70987896/why-is-this-task-faster-in-python-than-julia | I ran the following code in RStudio: exo <- read.csv('exoplanets.csv',TRUE,",") df <- data.frame(exo) ranks <- 570 files <- 3198 datas <- vector() for ( w in 2:files ) { listas <-vector() for ( i in 1:ranks) { name <- as.character(df[i,w]) listas <- append (listas, name) } datas <- append (datas, listas) } It reads a huge NASA CSV file, converts it to a dataframe, converts each element to string, and adds them to a vector. RStudio took 4 min and 15 seconds. So I decided to implement the same code in Julia. I ran the following in VS Code: using CSV, DataFrames df = CSV.read("exoplanets.csv", DataFrame) fil, col = 570, 3198 arr = [] for i in 2:fil for j in 1:col push!(arr, string(df[i, j])) end end The result was good. The Julia code took only 1 minute and 25 seconds! Then for pure curiosity I implemented the same code this time in Python to compare. I ran the following in VS Code: import numpy as np import pandas as pd exo = pd.read_csv("exoplanets.csv") arr = np.array(exo) fil, col = 570, 3198 lis = [] for i in range(1, fil): for j in range(col): lis.append(arr[i][j].astype('str')) The result shocked me! Only 35 seconds!!! And in Spyder from Anaconda only 26 seconds!!! Almost 2 million floats!!! Is Julia slower than Python in data analysis? Can I improve the Julia code? | NOTE: I wrote the below assuming you want the other column order (as in the Python and R examples). It is more efficient in Julia this way; to make it work equivalently to your original behaviour, permute the logic or your data at the right places (left as an exercise). Bogumił's anwer does the right thing already. Put stuff into functions, preallocate where possible, iterate in stride order, use views, and use builtin functions and broadcasting: function tostringvector(d) r, c = size(d) result = Vector{String}(undef, r*c) v = reshape(result, r, c) for (rcol, dcol) in zip(eachcol(v), eachcol(d)) @inbounds rcol .= string.(dcol) end return result end Which certainly can be optimized harder. Or shorter, making use of what DataFrames already provides: tostringvector(d) = vec(Matrix(string.(d))) | 7 | 8 |
70,988,235 | 2022-2-4 | https://stackoverflow.com/questions/70988235/make-bar-charts-x-axis-markers-horizontal-or-45-degree-readable-in-python-altai | I am trying to create a bar chart with year data on x-axis. It works but the year marker on x-xais are all in vertical direction and I want to make them more readable - either horizontal or 45 degree. I tried using the year:T in datatime format but it gave me the year and month markers (I just wanted to have the year markers on xais). How do I make the year markers on x-axis either horizontal or 45 degree? import altair as alt from vega_datasets import data source = data.wheat() source.info() abars = alt.Chart(source).mark_bar().encode( x='year:O', y="wheat:Q" ) | Set labelAngle: alt.Chart(source).mark_bar().encode( x=alt.X('year:O', axis=alt.Axis(labelAngle=-45)), y="wheat:Q" ) | 7 | 11 |
70,988,817 | 2022-2-4 | https://stackoverflow.com/questions/70988817/dealing-with-optional-python-dictionary-fields | I'm dealing with JSON data which I load into Python dictionaries. A lot of these have optional fields, which then may contain dictionaries, that kind of stuff. dictionary1 = {"required": {"value1": "one", "value2": "two"}, "optional": {"value1": "one"}} dictionary2 = {"required": {"value1": "one", "value2": "two"}} If I do this, dictionary1.get("required").get("value1") this works, obviously, because the field "required" is always present. However, when I use the same line on dictionary2 (to get the optional field), this will produce an AttributeError dictionary2.get("optional").get("value1") AttributeError: 'NoneType' object has no attribute 'get' which makes sense, because the first .get() will return None, and the second .get() cannot call .get() on the None object. I can solve this by giving default values in case the optional field is missing, but this will be annoying the more complex the data gets, so I'm calling this a "naive fix": dictionary2.get("optional", {}).get("value1", " ") So the first .get() will return an empty dictionary {}, on which the second .get() can be called, and since it obviously contains nothing, it will return the empty string, as defined per the second default. This will no longer produce errors, but I was wondering if there is a better solution for this - especially for more complex cases (value1 containing an array or another dictionary, etc....) I could also fix this with try - except AttributeError, but this is not my preferred way either. try: value1 = dictionary2.get("optional").get("value1") except AttributeError: value1 = " " I also don't like checking if optional field exists, this produces garbage code lines like optional = dictionary2.get("optional") if optional: value1 = optional.get("value1") else: value1 = " " which seems very non-Pythonic... I was thinking maybe my approach of just chaining .get()s is wrong in the first place? | First of all, you refer to " " as the empty string. This is incorrect; "" is the empty string. Second, if you're checking for membership, I don't see a reason to use the get method in the first place. I'd opt for something like the following. if "optional" in dictionary2: value1 = dictionary2["optional"].get("value1") else: value1 = "" Another alternative to consider (since you're using the get method a lot) is to switch to the defaultdict class. For example, from collections import defaultdict dictionary2 = {"required": {"value1": "one", "value2": "two"}} ddic2 = defaultdict(dict,dictionary2) value1 = ddic2["optional"].get("value1") | 6 | 1 |
70,987,818 | 2022-2-4 | https://stackoverflow.com/questions/70987818/what-does-the-svv-flag-mean-when-running-pytest-via-the-command-line-interfac | Can anyone explain what the -svv flag means/does when running pytest from command line? Such as: pytest -svv This is really driving me crazy, as I can't find it in pytest's official documentation, its code base or through a web search. | This is the same as passing pytest -s -vv where -s disables capturing of stdout/stderr (source) -vv enables verbose output (source) Specifically note that the increased verbosity specifier currently has no effect in base pytest above normal increased verbosity Using higher verbosity levels (-vvv, -vvvv, …) is supported, but has no effect in pytest itself at the moment, however some plugins might make use of higher verbosity | 6 | 8 |
70,986,620 | 2022-2-4 | https://stackoverflow.com/questions/70986620/combining-single-dispatch-and-protocols | I'm having some issues combining single-dispatch overloads with a protocol for structural typing based on having a specific attribute. I have constructed the following snippet to give an idea of what I'm attempting. from typing_extensions import Protocol from functools import singledispatch class HasFoo(Protocol): foo: str class FooBar: def __init__(self): self.foo = 'bar' @singledispatch def f(_) -> None: raise NotImplementedError @f.register def _(item: HasFoo) -> None: print(item.foo) x = FooBar() f(x) The above raises the NotImplementedError instead of the print statement as desired. I've tried various modifications to the above with no success. Any help would be very appreciated. | That apparently doesn't work. Single dispatch uses the MRO (Method resolution order) to find out if a given instance matches a registered type. However, the protocol is not part of the MRO, hence python will not find it. To make it part of the MRO you could inherit from it, but that is not what Protocols are meant to be used for (though I think it also doesn't cost anything). | 5 | 4 |
70,984,947 | 2022-2-4 | https://stackoverflow.com/questions/70984947/efficient-way-to-generate-lime-explanations-for-full-dataset | Am working on a binary classification problem with 1000 rows and 15 features. Currently am using Lime to explain the predictions of each instance. I use the below code to generate explanations for full test dataframe test_indx_list = X_test.index.tolist() test_dict={} for n in test_indx_list: exp = explainer.explain_instance(X_test.loc[n].values, model.predict_proba, num_features=5) a=exp.as_list() test_dict[n] = a But this is not efficient. Is there any alternative approach to generate explanation/ get feature contributions quicker? | From what the docs show, there isn't currently an option to do batch explain_instance, although there are plans for it. This should help a lot with speed on newer versions later on. What seems to be the most appropriate change to get better speed is decreasing the number of samples used to learn the linear model. explainer.explain_instance(... num_features=5, num_samples=2500) The default value for num_samples is 5000, which can be much more than you need depending on your model, and is currently the argument that will most affect the speed of the explainer. Another approach would be to try adding parallelization to the snippet. It's a more complex solution where you run multiple instances of the snippet at the same time, and gather the results at the end. For that, I leave a link, but really it's not something I can give a snippet right out of the box. | 6 | 4 |
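One hedged sketch of the parallelization idea (joblib with the threading backend so the explainer and model do not need to be pickled; the names explainer, model, X_test and test_indx_list are the objects from the question, and the actual speedup depends on how much time is spent in NumPy/model code that releases the GIL):

```python
from joblib import Parallel, delayed

def explain_one(n):
    exp = explainer.explain_instance(X_test.loc[n].values, model.predict_proba,
                                     num_features=5, num_samples=2500)
    return n, exp.as_list()

results = Parallel(n_jobs=4, prefer="threads")(
    delayed(explain_one)(n) for n in test_indx_list)
test_dict = dict(results)
```

Dropping num_samples, as suggested above, remains the first thing to try before adding any parallel machinery.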
70,981,458 | 2022-2-4 | https://stackoverflow.com/questions/70981458/how-to-resolve-this-error-py4jjavaerror-an-error-occurred-while-calling-o70-sh | Currently I'm doing PySpark and working on DataFrame. I've created a DataFrame: from pyspark.sql import * import pandas as pd spark = SparkSession.builder.appName("DataFarme").getOrCreate() df = spark.createDataFrame([("Java", "20000"), ("Python", "100000"), ("Scala", "3000")]) df.printSchema() #Output:- root |-- _1: string (nullable = true) |-- _2: string (nullable = true) But when I do df.show() its showing error as: Py4JJavaError Traceback (most recent call last) C:\Users\PRATIK~1\AppData\Local\Temp/ipykernel_20924/3726558592.py in <module> ----> 1 df.show() C:\Spark\spark-3.2.1-bin-hadoop3.2\python\pyspark\sql\dataframe.py in show(self, n, truncate, vertical) 492 493 if isinstance(truncate, bool) and truncate: --> 494 print(self._jdf.showString(n, 20, vertical)) 495 else: 496 try: C:\Spark\spark-3.2.1-bin-hadoop3.2\python\lib\py4j-0.10.9.3-src.zip\py4j\java_gateway.py in __call__(self, *args) 1319 1320 answer = self.gateway_client.send_command(command) -> 1321 return_value = get_return_value( 1322 answer, self.gateway_client, self.target_id, self.name) 1323 C:\Spark\spark-3.2.1-bin-hadoop3.2\python\pyspark\sql\utils.py in deco(*a, **kw) 109 def deco(*a, **kw): 110 try: --> 111 return f(*a, **kw) 112 except py4j.protocol.Py4JJavaError as e: 113 converted = convert_exception(e.java_exception) C:\Spark\spark-3.2.1-bin-hadoop3.2\python\lib\py4j-0.10.9.3-src.zip\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name) 324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client) 325 if answer[1] == REFERENCE_TYPE: --> 326 raise Py4JJavaError( 327 "An error occurred while calling {0}{1}{2}.\n". 328 format(target_id, ".", name), value) Py4JJavaError: An error occurred while calling o70.showString. : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1) (KPI-PratikTrainee executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last): File "C:\Spark\spark-3.2.1-bin-hadoop3.2\python\lib\pyspark.zip\pyspark\worker.py", line 481, in main RuntimeError: Python in worker has different version 3.9 than that in driver 3.10, PySpark cannot run with different minor versions. Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set. 
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:555) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:713) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:695) at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:508) at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:491) at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460) at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source) at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:759) at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:349) at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898) at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373) at org.apache.spark.rdd.RDD.iterator(RDD.scala:337) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:131) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2454) at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2403) at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2402) at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2402) at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1160) at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1160) at scala.Option.foreach(Option.scala:407) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1160) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2642) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2584) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2573) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:938) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2214) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2235) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2254) at 
org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:476) at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:429) at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:48) at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3715) at org.apache.spark.sql.Dataset.$anonfun$head$1(Dataset.scala:2728) at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3706) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3704) at org.apache.spark.sql.Dataset.head(Dataset.scala:2728) at org.apache.spark.sql.Dataset.take(Dataset.scala:2935) at org.apache.spark.sql.Dataset.getRows(Dataset.scala:287) at org.apache.spark.sql.Dataset.showString(Dataset.scala:326) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182) at py4j.ClientServerConnection.run(ClientServerConnection.java:106) at java.lang.Thread.run(Unknown Source) Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last): File "C:\Spark\spark-3.2.1-bin-hadoop3.2\python\lib\pyspark.zip\pyspark\worker.py", line 481, in main RuntimeError: Python in worker has different version 3.9 than that in driver 3.10, PySpark cannot run with different minor versions. Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set. 
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:555) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:713) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:695) at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:508) at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:491) at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460) at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source) at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:759) at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:349) at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898) at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373) at org.apache.spark.rdd.RDD.iterator(RDD.scala:337) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:131) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) ... 1 more Same thing occured for df.collect(). But the same thing works perfectly fine in PyCharm once I set these 2 zip files in Project Structure: py4j-0.10.9.3-src.zip, pyspark.zip Can anybody tell me how to set these 2 files in Jupyter so that I can run df.show() and df.collect() please? | The key is in this part of the error message: RuntimeError: Python in worker has different version 3.9 than that in driver 3.10, PySpark cannot run with different minor versions. Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set. You need to have exactly the same Python versions in driver and worker nodes. Probably a quick solution would be to downgrade your Python version to 3.9 (assuming driver is running on the client you're using). | 6 | 4 |
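A common way to enforce the matching versions the answer calls for is to point both the driver and the workers at the same interpreter before the session is created, either via spark-env/conf or directly from Python. A minimal sketch, assuming the interpreter running the script is the one you want the workers to use too:

```python
import os
import sys

# Make driver and workers use the same Python interpreter
os.environ["PYSPARK_PYTHON"] = sys.executable
os.environ["PYSPARK_DRIVER_PYTHON"] = sys.executable

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("DataFrame").getOrCreate()
df = spark.createDataFrame([("Java", "20000"), ("Python", "100000"), ("Scala", "3000")])
df.printSchema()
df.show()
```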
70,977,131 | 2022-2-3 | https://stackoverflow.com/questions/70977131/flask-render-template-after-client-post-request | So, I'm working on a small web application that has a small canvas, the user is supposed to draw something and then I want to do some python with the image from that canvas. Like this: This is working fine. When I press "Click me!", I call a JS function that POST the image to my Flask server. And this is also working, the thing is that after I receive the image, I want to render a new page to show some results, but this page is not rendering at all. Now, I'm completely new to Javascript, barely know anything, basically every JS I'm using is copy pasted from the internet, so I'm assuming I must be missing something very simple, I just don't know what. Here is the server code: from flask import Flask, render_template, request import re from io import BytesIO import base64 from PIL import Image app = Flask(__name__) @app.route("/") def hello_world(): return render_template('drawing.html') @app.route("/hook", methods=['POST', 'GET']) def hook(): image_data = re.sub('^data:image/.+;base64,', '', request.form['imageBase64']) im = Image.open(BytesIO(base64.b64decode(image_data))) im.show() return render_template('results.html') The first route just opens the canvas, the second is executed after the client's request, which has this function: function surprise() { var dataURL = canvas.toDataURL(); $.ajax({ type: "POST", url: "http://127.0.0.1:5000/hook", data:{ imageBase64: dataURL } }).done(function() { console.log('sent'); });} Apparently, this is working. I have the line: im.show() Just so I can see the image and I get exactly the drawing from the canvas, which I'm now supposed to work on: but the results page is not rendered afterwards, nothing happens. Why? | The problem is that you are returning the page to view in response to the call that posts the image. Instead, you should return a response (for example in a json format) containing the information regarding the result of the call just made (i.e. the post of the image) and consequently, on the client side, you must redirect the user to the appropriate page . To understand why it is not working, print the content of the response returned from the call made var response = ''; $.ajax({ type: "POST", url: "http://127.0.0.1:5000/hook", data:{ imageBase64: dataURL }, success : function(text) { response = text; } }); alert(response); | 6 | 3 |
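On the server side, one way to implement the suggested flow is to return a small JSON payload with the URL of the results page and let the AJAX success callback navigate there. A sketch of just the Flask half, reusing the route names from the question (the single JavaScript line it relies on is shown as a comment because it lives in the template):

```python
import base64
import re
from io import BytesIO

from flask import Flask, jsonify, render_template, request, url_for
from PIL import Image

app = Flask(__name__)

@app.route("/hook", methods=["POST"])
def hook():
    image_data = re.sub("^data:image/.+;base64,", "", request.form["imageBase64"])
    im = Image.open(BytesIO(base64.b64decode(image_data)))
    im.save("last_drawing.png")                 # do the Python work on the image here
    return jsonify({"redirect": url_for("results")})

@app.route("/results")
def results():
    return render_template("results.html")

# In the AJAX success/done callback on the client:
#   window.location.href = response.redirect;
```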
70,982,008 | 2022-2-4 | https://stackoverflow.com/questions/70982008/vscode-pytest-discovery-not-working-conda-error | I'm having a strange problem with VSCode's python testing functionality. When I try to discover tests I get the following error: > conda run -n sandbox --no-capture-output python ~/.vscode/extensions/ms-python.python-2022.0.1786462952/pythonFiles/get_output_via_markers.py ~/.vscode/extensions/ms-python.python-2022.0.1786462952/pythonFiles/testing_tools/run_adapter.py discover pytest -- --rootdir . -s --cache-clear . cwd: . [ERROR 2022-1-3 21:49:47.851]: Error discovering pytest tests: [r [Error]: EnvironmentLocationNotFound: Not a conda environment: /Users/david.hoffman/miniconda3/envs/sandbox/envs/sandbox But obviously there's a duplication error: /Users/david.hoffman/miniconda3/envs/sandbox/envs/sandbox. If I run this command directly in the terminal I get the expected output and no errors: conda run -n sandbox --no-capture-output python ~/.vscode/extensions/ms-python.python-2022.0.1786462952/pythonFiles/get_output_via_markers.py ~/.vscode/extensions/ms-python.python-2022.0.1786462952/pythonFiles/testing_tools/run_adapter.py discover pytest -- --rootdir . -s --cache-clear I'm completely stumped as there doesn't seem to be any settings that would affect this. I tried reinstalling VSCode from scratch (after removing all the local files) same with conda. | Two ways I've found to fix: Change the name of the conda environment. Just cloning sandbox to boxsand did the trick Add python.condaPath variable to VSCode's preferences | 10 | 6 |
70,964,740 | 2022-2-3 | https://stackoverflow.com/questions/70964740/explode-pandas-column-of-dictionary-with-list-of-tuples-as-value | I have the following dataframe where col2 is a dictionary with a list of tuples as values. The keys are consistantly 'added' and 'deleted' in the whole dataframe. Input df col1 col2 value1 {'added': [(59, 'dep1_v2'), (60, 'dep2_v2')], 'deleted': [(59, 'dep1_v1'), (60, 'dep2_v1')]} value 2 {'added': [(61, 'dep3_v2')], 'deleted': [(61, 'dep3_v1')]} Here's a copy-pasteable example dataframe: jsons = ["{'added': [(59, 'dep1_v2'), (60, 'dep2_v2')], 'deleted': [(59, 'dep1_v1'), (60, 'dep2_v1')]}", "{'added': [(61, 'dep3_v2')], 'deleted': [(61, 'dep3_v1')]}"] df = pd.DataFrame({"col1": ["value1", "value2"], "col2": jsons}) edit col2 directly comes from the diff_parsed field of pydriller output I want to "explode" col2 so that I obtain the following result: Desired output col1 number added deleted value1 59 dep1_v2 dep1_v1 value1 60 dep2_v2 dep2_v1 value2 61 dep3_v2 dep3_v1 So far, I tried the following: df = df.join(pd.json_normalize(df.col2)) df.drop(columns=['col2'], inplace=True) The above code is simplified. I first manipulate the column to convert to proper json. It was in an attempt to first explode on 'added' and 'deleted' and then try to play around with the format to obtain what I want...but the list of tuples is not preserved and I obtain the following: col1 added deleted value1 59, dep1_v2, 60, dep2_v2 59, dep1_v1, 60, dep2_v1 value2 61, dep3_v1 61, dep3_v2 Thanks | Well this certainly isn't elegant, but here's a potential solution that is at least easier to understand and reason about: def explode_records(df): new_records = [] def map_dict_to_row(value, col2_dict): temp = {} for number, added in col2_dict["added"]: temp[number] = {"value": value, "number": number, "added": added} for number, deleted in col2_dict["deleted"]: if number in temp: temp[number] = {**temp[number], "deleted": deleted} else: temp[number] = {"value": value, "deleted": deleted} new_records.extend(list(temp.values())) df.apply(lambda row: map_dict_to_row(row.col1, row.col2), axis=1) # assumes col2 is a dict return pd.DataFrame(new_records) Usage: In [4]: explode_records(df) Out[4]: value number added deleted 0 value1 59 dep1_v2 dep1_v1 1 value1 60 dep2_v2 dep2_v1 2 value 2 61 dep3_v2 dep3_v1 Note that I got value 2 from your original data. I'm assuming it's just a typo, and not that you also need value x -> valuex functionality. I wasn't able to get the other solution working, so I wasn't able to compare its performance vs mine. | 5 | 3 |
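Another way to reach the same shape, shown as a sketch that assumes col2 holds string representations of the dicts exactly as in the copy-pasteable example (hence the ast.literal_eval step), is to build plain records and hand them to the DataFrame constructor:

```python
import ast
import pandas as pd

jsons = ["{'added': [(59, 'dep1_v2'), (60, 'dep2_v2')], 'deleted': [(59, 'dep1_v1'), (60, 'dep2_v1')]}",
         "{'added': [(61, 'dep3_v2')], 'deleted': [(61, 'dep3_v1')]}"]
df = pd.DataFrame({"col1": ["value1", "value2"], "col2": jsons})

records = []
for _, r in df.iterrows():
    d = ast.literal_eval(r.col2)        # skip this step if col2 already holds dicts
    added = dict(d["added"])            # {number: added_name}
    deleted = dict(d["deleted"])        # {number: deleted_name}
    for number in sorted(set(added) | set(deleted)):
        records.append({"col1": r.col1, "number": number,
                        "added": added.get(number), "deleted": deleted.get(number)})

result = pd.DataFrame(records)
print(result)
```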
70,969,592 | 2022-2-3 | https://stackoverflow.com/questions/70969592/pytest-asserting-fixture-after-teardown | I have a test that makes a thing, validates it, deletes the thing and confirms it was deleted. def test_thing(): thing = Thing() # Simplified, it actually takes many lines to make a thing assert thing.exists thing.delete() # Simplified, it also takes a few lines to delete it assert thing.deleted Next I want to make many more tests that all use the thing, so it's a natural next step to move the thing creation/deletion into a fixture @pytest.fixture def thing(): thing = Thing() # Simplified, it actually takes many lines to make a thing yield thing thing.delete() # Simplified, it also takes a few lines to delete it def test_thing(thing): assert thing.exists def test_thing_again(thing): # Do more stuff with thing ... But now I've lost my assert thing.deleted. I feel like I have a few options here, but none are satisfying. I could assert in the fixture but AFAIK it's bad practice to put assertions in the fixture because if it fails it will result in an ERROR instead of a FAIL. I could keep my original test and also create the fixture, but this would result in a lot of duplicated code for creating/deleting the thing. I can't call the fixture directly because I get a Fixture called directly exception so I could move the thing creation out into a generator that is used by both the fixture and the test. This feels clunky though and what happens if my thing fixture needs to use another fixture? What is my best option here? Is there something I haven't thought of? | If you want to test that a "thing" is deleted, make a fixture without teardown, delete it in the test, then assert if it is deleted. @pytest.fixture def thing_create(): # Perform all the creation steps thing = Thing() ... yield thing def thing_delete(thing): # Perform all the deletion steps ... thing.delete() @pytest.fixture def thing_all(thing_create): yield thing_create thing_delete(thing_create) def test_thing(thing_all): assert thing_all.exists def test_thing_again(thing_create): thing_delete(thing_create) assert thing_create.deleted | 5 | 5 |
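Put together as a self-contained sketch, with a dummy Thing class standing in for the real resource, the answer's pattern looks like this: the deletion test takes the creation-only fixture and performs and asserts the teardown itself, while every other test uses the full create-plus-cleanup fixture.

```python
import pytest

class Thing:                      # dummy stand-in for the real resource
    def __init__(self):
        self.exists, self.deleted = True, False
    def delete(self):
        self.exists, self.deleted = False, True

@pytest.fixture
def thing_create():
    yield Thing()                 # creation steps only, no teardown here

def thing_delete(thing):
    thing.delete()                # deletion steps live in a plain helper

@pytest.fixture
def thing_all(thing_create):
    yield thing_create
    thing_delete(thing_create)    # ordinary tests still get automatic cleanup

def test_thing(thing_all):
    assert thing_all.exists

def test_thing_deleted(thing_create):
    thing_delete(thing_create)
    assert thing_create.deleted
```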
70,964,001 | 2022-2-2 | https://stackoverflow.com/questions/70964001/numpy-maxima-of-groups-defined-by-a-label-array | I have two arrays, one is a list of values and one is a list of IDs corresponding to each value. Some IDs have multiple values. I want to create a new array that contains the maximum value recorded for each id, which will have a length equal to the number of unique ids. Example using a for loop: import numpy as np values = np.array([5, 3, 2, 6, 3, 4, 8, 2, 4, 8]) ids = np.array([0, 1, 3, 3, 3, 3, 5, 6, 6, 6]) uniq_ids = np.unique(ids) maximums = np.ones_like(uniq_ids) * np.nan for i, id in enumerate(uniq_ids): maximums[i] = np.max(values[np.where(ids == id)]) print(uniq_ids) print(maximums) [0 1 3 5 6] [5. 3. 6. 8. 8.] Is it possible to vectorize this so it runs fast? I'm imagining a one-liner that can create the "maximums" array using only NumPy functions, but I haven't been able to come up with anything that works. | np.lexsort sorts by multiple columns. However, this is not compulsory. You can sort ids first and then choose maximum item of each divided group using numpy.maximum.reduceat def mathfux(values, ids, return_groups=False): argidx = np.argsort(ids) #70% time ids_sort, values_sort = ids[argidx], values[argidx] #4% time div_points = np.r_[0, np.flatnonzero(np.diff(ids_sort)) + 1] #11% time (the most part for np.flatnonzero) if return_groups: return ids[div_points], np.maximum.reduceat(values_sort, div_points) else: return np.maximum.reduceat(values_sort, div_points) mathfux(values, ids, return_groups=True) >>> (array([0, 1, 3, 5, 6]), array([5, 3, 6, 8, 8])) mathfux(values, ids) >>> mathfux(values, ids) array([5, 3, 6, 8, 8]) Usually, some parts of numpy codes could be optimised further in numba. Note that np.argsort is a bottleneck in majority of groupby problems which can't be replaced by any other method. It is unlikely to be improved soon in numba or numpy. So you are reaching an optimal performance here and can't do much in further optimisations. | 5 | 4 |
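If the sort-based version is ever inconvenient, another vectorized option is np.maximum.at, which scatters each value into the slot of its group; it is concise, though ufunc.at is not always faster than the argsort approach for large arrays. A small sketch on the question's data:

```python
import numpy as np

values = np.array([5, 3, 2, 6, 3, 4, 8, 2, 4, 8])
ids = np.array([0, 1, 3, 3, 3, 3, 5, 6, 6, 6])

uniq_ids, inverse = np.unique(ids, return_inverse=True)
maximums = np.full(uniq_ids.size, np.iinfo(values.dtype).min)
np.maximum.at(maximums, inverse, values)   # unbuffered in-place per-group maximum

print(uniq_ids)    # [0 1 3 5 6]
print(maximums)    # [5 3 6 8 8]
```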
70,953,643 | 2022-2-2 | https://stackoverflow.com/questions/70953643/how-to-turn-a-list-of-lists-into-columns-of-a-pandas-dataframe | I would like to ask how I can unnest a list of list and turn it into different columns of a dataframe. Specifically, I have the following dataframe where the Route_set column is a list of lists: Generation Route_set 0 0 [[20. 19. 47. 56.] [21. 34. 78. 34.]] The desired output is the following dataframe: route1 route2 0 20 21 1 19 34 2 47 78 3 56 34 Any ideas how I can do it? Thank you in advance! | You can try using df.explode and df.apply: import pandas as pd df = pd.DataFrame(data= {'Generation': 0, 'Route_set':[[[20., 19., 47., 56.], [21., 34., 78., 34.]]]}) df['route1']=df['Route_set'].apply(lambda x: x[0]) df['route2']=df['Route_set'].apply(lambda x: x[1]) df = df.explode(['route1', 'route2'], ignore_index=True) df2 = df[df.columns.difference(['Route_set', 'Generation'])] | | route1 | route2 | |---:|---------:|---------:| | 0 | 20 | 21 | | 1 | 19 | 34 | | 2 | 47 | 78 | | 3 | 56 | 34 | Or you can just create a new dataframe with the values like this: import pandas as pd df = pd.DataFrame(data= {'Generation': 0, 'Route_set':[[[20., 19., 47., 56.], [21., 34., 78., 34.]]]}) df1 = pd.DataFrame.from_dict(dict(zip(['route1', 'route2'], df.Route_set.to_numpy()[0])), orient='index').transpose() | | route1 | route2 | |---:|---------:|---------:| | 0 | 20 | 21 | | 1 | 19 | 34 | | 2 | 47 | 78 | | 3 | 56 | 34 | Update 1: import pandas as pd df = pd.DataFrame(data= {'Generation': 0, 'Route_set':[ [[20.0, 19.0, 47.0, 56.0, 43.0, 53.0, 18.0, -1.0, -1.0, -1.0, -1.0, -1.0], [20.0, 51.0, 46.0, 37.0, 2.0, 57.0, 49.0, 36.0, 25.0, 5.0, 4.0, 34.0], [54.0, 23.0, 5.0, 46.0, 34.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [57.0, 48.0, 46.0, 35.0, 25.0, 27.0, 52.0, 8.0, 39.0, 22.0, 51.0, 28.0], [57.0, 16.0, 45.0, 25.0, 49.0, 38.0, 0.0, 46.0, 13.0, 18.0, 19.0, 20.0], [21.0, 11.0, 6.0, 33.0, 25.0, 49.0, 57.0, 29.0, 12.0, 3.0, -1.0, -1.0], [9.0, 15.0, 47.0, 42.0, 25.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [51.0, 25.0, 22.0, 14.0, 39.0, 8.0, 40.0, 0.0, 10.0, 26.0, 32.0, 47.0], [1.0, 33.0, 24.0, 46.0, 56.0, 30.0, 48.0, 51.0, -1.0, -1.0, -1.0, -1.0], [25.0, 31.0, 50.0, 17.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [57.0, 12.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [20.0, 41.0, 47.0, 15.0, 46.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [14.0, 44.0, 39.0, 25.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [20.0, 51.0, 25.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [57.0, 49.0, 5.0, 20.0, 37.0, 46.0, 36.0, 25.0, 39.0, 51.0, 48.0, -1.0], [5.0, 0.0, 33.0, 55.0, 25.0, 48.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [51.0, 32.0, 33.0, 24.0, 35.0, 8.0, 25.0, 4.0, 46.0, 1.0, 7.0, -1.0], [5.0, 25.0, 34.0, 46.0, 1.0, 9.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [38.0, 57.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [12.0, 57.0, 49.0, 25.0, 9.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0]], ]}) data = df.Route_set.to_numpy()[0] df = pd.DataFrame.from_dict(dict(zip(['route{}'.format(i) for i in range(1, len(data)+1)], [data[i] for i in range(len(data))])), orient='index').transpose() df = df.apply(lambda x: x.explode() if 'route' in x.name else x) df[sorted(df.columns)] print(df.to_markdown()) | | route1 | route2 | route3 | route4 | route5 | route6 | route7 | route8 | route9 | route10 | route11 | route12 | route13 | route14 | route15 | route16 | route17 | route18 | route19 | route20 | 
|---:|---------:|---------:|---------:|---------:|---------:|---------:|---------:|---------:|---------:|----------:|----------:|----------:|----------:|----------:|----------:|----------:|----------:|----------:|----------:|----------:| | 0 | 20 | 20 | 54 | 57 | 57 | 21 | 9 | 51 | 1 | 25 | 57 | 20 | 14 | 20 | 57 | 5 | 51 | 5 | 38 | 12 | | 1 | 19 | 51 | 23 | 48 | 16 | 11 | 15 | 25 | 33 | 31 | 12 | 41 | 44 | 51 | 49 | 0 | 32 | 25 | 57 | 57 | | 2 | 47 | 46 | 5 | 46 | 45 | 6 | 47 | 22 | 24 | 50 | -1 | 47 | 39 | 25 | 5 | 33 | 33 | 34 | -1 | 49 | | 3 | 56 | 37 | 46 | 35 | 25 | 33 | 42 | 14 | 46 | 17 | -1 | 15 | 25 | -1 | 20 | 55 | 24 | 46 | -1 | 25 | | 4 | 43 | 2 | 34 | 25 | 49 | 25 | 25 | 39 | 56 | -1 | -1 | 46 | -1 | -1 | 37 | 25 | 35 | 1 | -1 | 9 | | 5 | 53 | 57 | -1 | 27 | 38 | 49 | -1 | 8 | 30 | -1 | -1 | -1 | -1 | -1 | 46 | 48 | 8 | 9 | -1 | -1 | | 6 | 18 | 49 | -1 | 52 | 0 | 57 | -1 | 40 | 48 | -1 | -1 | -1 | -1 | -1 | 36 | -1 | 25 | -1 | -1 | -1 | | 7 | -1 | 36 | -1 | 8 | 46 | 29 | -1 | 0 | 51 | -1 | -1 | -1 | -1 | -1 | 25 | -1 | 4 | -1 | -1 | -1 | | 8 | -1 | 25 | -1 | 39 | 13 | 12 | -1 | 10 | -1 | -1 | -1 | -1 | -1 | -1 | 39 | -1 | 46 | -1 | -1 | -1 | | 9 | -1 | 5 | -1 | 22 | 18 | 3 | -1 | 26 | -1 | -1 | -1 | -1 | -1 | -1 | 51 | -1 | 1 | -1 | -1 | -1 | | 10 | -1 | 4 | -1 | 51 | 19 | -1 | -1 | 32 | -1 | -1 | -1 | -1 | -1 | -1 | 48 | -1 | 7 | -1 | -1 | -1 | | 11 | -1 | 34 | -1 | 28 | 20 | -1 | -1 | 47 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | | 6 | 2 |
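For the simple case in the question, where the inner lists all have the same length, a shorter route is to transpose the nested list and let the DataFrame constructor do the rest; a sketch:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"Generation": [0],
                   "Route_set": [[[20., 19., 47., 56.], [21., 34., 78., 34.]]]})

routes = pd.DataFrame(np.transpose(df.loc[0, "Route_set"]))
routes.columns = [f"route{i + 1}" for i in range(routes.shape[1])]
print(routes)
#    route1  route2
# 0    20.0    21.0
# 1    19.0    34.0
# 2    47.0    78.0
# 3    56.0    34.0
```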
70,952,473 | 2022-2-2 | https://stackoverflow.com/questions/70952473/unicodedecodeerror-utf-8-can-t-decode-byte-0x90-in-position-4024984-invalid | I'm running a subprocess in full trace mode and logging its output with logger.info(): std = subprocess.run(subprocess_cmd, shell=True, universal_newlines=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) all_stdout = all_stdout + std.stdout + '\n' all_stderr = all_stderr + std.stderr + '\n' logger.info('\nstdout:\n' + all_stdout + '\nstderr:\n' + all_stderr) But I'm getting the error below when the subprocess output is printed: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x90 in position 4024984: invalid start byte I have tried setting universal_newlines to False, but then it throws TypeError: must be str, not bytes. I have also tried this: std = subprocess.run(subprocess_cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=environ, encoding='utf-8') This gives me the same UnicodeDecodeError. | Use encoding="unicode_escape" instead of encoding="utf-8". | 5 | 3 |
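If the goal is simply to log whatever the subprocess printed without crashing on stray non-UTF-8 bytes, another option is to keep UTF-8 but relax the error handling, either through the errors argument of subprocess.run or by decoding the captured bytes yourself. A sketch with a placeholder command:

```python
import subprocess

subprocess_cmd = "some_command --full-trace"   # placeholder for the real command

# Option A: let subprocess decode, replacing undecodable bytes with U+FFFD
std = subprocess.run(subprocess_cmd, shell=True, capture_output=True,
                     encoding="utf-8", errors="replace")
print(std.stdout, std.stderr)

# Option B: capture raw bytes and decode explicitly
raw = subprocess.run(subprocess_cmd, shell=True, capture_output=True)
stdout = raw.stdout.decode("utf-8", errors="replace")
stderr = raw.stderr.decode("utf-8", errors="replace")
```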
70,952,155 | 2022-2-2 | https://stackoverflow.com/questions/70952155/how-to-read-a-kubernetes-deployment-with-python-kubernetes-client | What is the Python kubernetes client equivalent of kubectl get deploy -o yaml? I referred to this CRUD Python client example for working with deployments, but there is no read-deployment option. | read_namespaced_deployment() does exactly that: from kubernetes import client, config config.load_kube_config() api = client.AppsV1Api() deployment = api.read_namespaced_deployment(name='foo', namespace='bar') | 6 | 11 |
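To get something closer to kubectl get deploy -o yaml, i.e. YAML text rather than a Python model object, the returned deployment can be serialized and dumped. A sketch that assumes a deployment named foo exists in namespace bar; sanitize_for_serialization is the client helper that, to my knowledge, converts the model back into plain camelCase dicts:

```python
import yaml
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

deployment = apps.read_namespaced_deployment(name="foo", namespace="bar")

# Convert the model object back to a plain dict, then dump it as YAML
api_client = client.ApiClient()
as_dict = api_client.sanitize_for_serialization(deployment)
print(yaml.safe_dump(as_dict, default_flow_style=False))
```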
70,891,687 | 2022-1-28 | https://stackoverflow.com/questions/70891687/how-do-i-get-my-fastapi-applications-console-log-in-json-format-with-a-differen | I have a FastAPI application where I would like to get the default logs written to the STDOUT with the following data in JSON format: App logs should look like this: { "XYZ": { "log": { "level": "info", "type": "app", "timestamp": "2022-01-16T08:30:08.181Z", "file": "api/predictor/predict.py", "line": 34, "threadId": 435454, "message": "API Server started on port 8080 (development)" } } } Access logs should look like this: { "XYZ": { "log": { "level": "info", "type": "access", "timestamp": "2022-01-16T08:30:08.181Z", "message": "GET /app/health 200 6ms" }, "req": { "url": "/app/health", "headers": { "host": "localhost:8080", "user-agent": "curl/7.68.0", "accept": "*/*" }, "method": "GET", "httpVersion": "1.1", "originalUrl": "/app/health", "query": {} }, "res": { "statusCode": 200, "body": { "statusCode": 200, "status": "OK" } } } } What I've tried: I tried using the json-logging package for this. Using this example, I'm able to access the request logs in JSON and change the structure. But, I'm unable to find how to access and change the app logs. Current output logs structure: {"written_at": "2022-01-28T09:31:38.686Z", "written_ts": 1643362298686910000, "msg": "Started server process [12919]", "type": "log", "logger": "uvicorn.error", "thread": "MainThread", "level": "INFO", "module": "server", "line_no": 82, "correlation_id": "-"} {"written_at": "2022-01-28T09:31:38.739Z", "written_ts": 1643362298739838000, "msg": "Started server process [12919]", "type": "log", "logger": "uvicorn.error", "thread": "MainThread", "level": "INFO", "module": "server", "line_no": 82, "correlation_id": "-"} {"written_at": "2022-01-28T09:31:38.739Z", "written_ts": 1643362298739951000, "msg": "Waiting for application startup.", "type": "log", "logger": "uvicorn.error", "thread": "MainThread", "level": "INFO", "module": "on", "line_no": 45, "correlation_id": "-"} | You could do that by creating a custom Formatter, using the built-in logger module in Python. You could use the extra parameter when logging messages to pass contextual information, such as url and headers. Python's JSON module already implements pretty-printing JSON data, using the json.dumps() function and adjusting the indent level (one could instead use, likely faster, alternatives such as orjson). Below is a working example using a custom formatter to log messages in a similar format to the one described in your question. For "app" logs, you could use, for instance, logger.info('sample log message'). Whereas, for "access" logs, you could use logger.info('sample log message', extra={'extra_info': get_extra_info(request, response)}). Passing the Request and Response instances to the get_extra_info() method would allow you extracting information about the request and response. The approach below uses a FastAPI Middleware to log requests/responses, which allows you to handle the request before it is processed by some endpoint, as well as the response, before it is returned to the client. Additionally, the approach demonstrated below uses a BackgroundTask for logging the data (as described in this answer). A background task "will run only once the response has been sent" (as per Starlette documentation), meaning that the client won't have to wait for the logging to complete, before receiving the response. 
It is also worth mentioning that since logging using Python's built-in module is a synchronous operation, the write_log_data() background task is defined with normal def instead of async def, meaning that FastAPI will run it in a separate thread from anyio's external threadpool and then await it; otherwise, it would block the entire server, until is completed—for more details on this subject, as well as the various solutions available, please have a look at this answer. That is also the reason that the log output in the example below shows "thread_name": "AnyIO worker thread", when that background task was called after accessing /docs from a web browser. For more LogRecord attributes, besides asctime, levelname, etc, have a look at the relevant documentation. The example below uses a RotatingFileHandler for writing the logs to a file on disk as well. If you don't need that, please feel free to remove it from the get_logger() method. Finally, I would highly suggest having a look at this answer with regard to logging Request and Response data through a middleware, as well as this answer that demonstrates how to customize and use the uvicorn loggers too, in addition to logging your own custom messages. You might also want to consider initializing the logger within a lifespan handler at application startup, which would allow you to access the logger instance within APIRouter modules, outisde the main file of your application (which is usually the case when building Bigger Applications)—please have a look at this answer on how to achieve that. app_logger.py import logging, sys def get_file_handler(formatter, filename="info.log", maxBytes=1024*1024, backupCount=3): file_handler = logging.handlers.RotatingFileHandler(filename, maxBytes, backupCount) file_handler.setLevel(logging.DEBUG) file_handler.setFormatter(formatter) return file_handler def get_stream_handler(formatter): stream_handler = logging.StreamHandler(sys.stdout) stream_handler.setLevel(logging.DEBUG) stream_handler.setFormatter(formatter) return stream_handler def get_logger(name, formatter): logger = logging.getLogger(name) logger.setLevel(logging.DEBUG) logger.addHandler(get_file_handler(formatter)) logger.addHandler(get_stream_handler(formatter)) return logger app_logger_formatter.py import logging, json class CustomJSONFormatter(logging.Formatter): def __init__(self, fmt): logging.Formatter.__init__(self, fmt) def format(self, record): logging.Formatter.format(self, record) return json.dumps(get_log(record), indent=2) def get_log(record): d = { "time": record.asctime, "process_name": record.processName, "process_id": record.process, "thread_name": record.threadName, "thread_id": record.thread, "level": record.levelname, "logger_name": record.name, "pathname": record.pathname, #'filename': record.filename, "line": record.lineno, "message": record.message, } if hasattr(record, "extra_info"): d["req"] = record.extra_info["req"] d["res"] = record.extra_info["res"] return d app.py from fastapi import FastAPI, Request, Response from starlette.background import BackgroundTask from app_logger_formatter import CustomJSONFormatter from http import HTTPStatus import app_logger import uvicorn app = FastAPI() formatter = CustomJSONFormatter('%(asctime)s') logger = app_logger.get_logger(__name__, formatter) status_reasons = {x.value:x.name for x in list(HTTPStatus)} def get_extra_info(request: Request, response: Response): return { "req": { "url": request.url.path, "headers": { "host": request.headers["host"], "user-agent": 
request.headers["user-agent"], "accept": request.headers["accept"], }, "method": request.method, "http_version": request.scope["http_version"], "original_url": request.url.path, "query": {}, }, "res": { "status_code": response.status_code, "status": status_reasons.get(response.status_code), } } def write_log_data(request, response): logger.info( request.method + " " + request.url.path, extra={"extra_info": get_extra_info(request, response)} ) @app.middleware("http") async def log_request(request: Request, call_next): response = await call_next(request) response.background = BackgroundTask(write_log_data, request, response) return response @app.get("/") async def foo(request: Request): return "success" if __name__ == '__main__': logger.info("Server is listening...") uvicorn.run(app, host='0.0.0.0', port=8000) Output: { "time": "2024-10-27 12:15:00,115", "process_name": "MainProcess", "process_id": 1937, "thread_name": "MainThread", "thread_id": 1495, "level": "INFO", "logger_name": "__main__", "pathname": "C:\\...", "line": 56, "message": "Server started listening on port: 8000" } { "time": "2024-10-27 12:15:10,335", "process_name": "MainProcess", "process_id": 1937, "thread_name": "AnyIO worker thread", "thread_id": 1712, "level": "INFO", "logger_name": "__main__", "pathname": "C:\\...", "line": 37, "message": "GET /docs", "req": { "url": "/docs", "headers": { "host": "127.0.0.1:8000", "user-agent": "Mozilla...", "accept": "text/html,application/xhtml+xml..." }, "method": "GET", "http_version": "1.1", "original_url": "/docs", "query": {} }, "res": { "status_code": 200, "status": "OK" } } | 10 | 21 |
70,872,276 | 2022-1-27 | https://stackoverflow.com/questions/70872276/fastapi-python-how-to-run-a-thread-in-the-background | I'm making a server in python using FastAPI, and I want a function that is not related to my API, to run in background every 5 minutes (like checking stuff from an API and printing stuff depending on the response) I've tried to make a thread that runs the function start_worker, but it doesn't print anything. Does anyone know how to do so ? def start_worker(): print('[main]: starting worker...') my_worker = worker.Worker() my_worker.working_loop() # this function prints "hello" every 5 seconds if __name__ == '__main__': print('[main]: starting...') uvicorn.run(app, host="0.0.0.0", port=8000, reload=True) _worker_thread = Thread(target=start_worker, daemon=False) _worker_thread.start() | Option 1 You should start your Thread before calling uvicorn.run, as uvicorn.run is blocking the thread. from fastapi import FastAPI import threading import uvicorn import time app = FastAPI() class BackgroundTasks(threading.Thread): def run(self,*args,**kwargs): while True: print('Hello') time.sleep(5) if __name__ == '__main__': t = BackgroundTasks() t.start() uvicorn.run(app, host="0.0.0.0", port=8000) You could also start your Thread using FastAPI's startup event, as long as it is ok to run before the application starts (Update: see Option 3 below on how to use lifespan event instead, as startup event is now deprecated). @app.on_event("startup") async def startup_event(): t = BackgroundTasks() t.start() Option 2 Instead of while True: loop, you could use a repeating Event scheduler for the background task, as shown below: from fastapi import FastAPI from threading import Thread import uvicorn import sched, time app = FastAPI() s = sched.scheduler(time.time, time.sleep) def print_event(sc): print("Hello") sc.enter(5, 1, print_event, (sc,)) def start_scheduler(): s.enter(5, 1, print_event, (s,)) s.run() @app.on_event("startup") async def startup_event(): thread = Thread(target=start_scheduler) thread.start() if __name__ == '__main__': uvicorn.run(app, host="0.0.0.0", port=8000) Option 3 If your task is an async def function (see this answer for more details on def vs async def endpoints/background tasks in FastAPI), then you could add the task to the current event loop, using the asyncio.create_task() function. The create_task() function takes a coroutine object (i.e., an async def function) and returns a Task object (which can be used to await the task, if needed, or cancel it , etc). The call creates the task inside the event loop for the current thread, and executes it in the "background" concurrently with all other tasks in the event loop, switching between them at await points. It is required to have an event loop created before calling create_task(), and this is already created when starting the uvicorn server either programmatically (using, for instance, uvicorn.run(app)) or in the terminal (using, for instance, uvicorn app:app). Instead of using asyncio.create_task(), one could also use asyncio.get_running_loop() to get the current event loop, and then call loop.create_task(). The example below uses the recently documented way for adding lifespan events (using a context manager), i.e., code that should be executed before the application starts up, as well as when the application is shutting down (see the documentation, as well as this answer and this answer for more details and examples). 
One could also still use the startup and shutdown events, as demonstrated in the previous options; however, those event handlers might be removed from future FastAPI/Starlette versions. from fastapi import FastAPI from contextlib import asynccontextmanager import asyncio async def print_task(s): while True: print('Hello') await asyncio.sleep(s) @asynccontextmanager async def lifespan(app: FastAPI): # Run at startup asyncio.create_task(print_task(5)) yield # Run on shutdown (if required) print('Shutting down...') app = FastAPI(lifespan=lifespan) Other Solutions An alternative solution might include using ApScheduler, and more specifically, AsyncIOScheduler, as demonstrated in this answer. | 32 | 42 |
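One small refinement to the asyncio variant (Option 3) worth sketching: keep a handle to the created task so it can be cancelled cleanly on shutdown instead of running until the process exits.

```python
import asyncio
from contextlib import asynccontextmanager
from fastapi import FastAPI

async def print_task(seconds: int):
    while True:
        print("Hello")
        await asyncio.sleep(seconds)

@asynccontextmanager
async def lifespan(app: FastAPI):
    task = asyncio.create_task(print_task(5))   # started once the event loop exists
    yield
    task.cancel()                               # stop the loop on shutdown
    try:
        await task
    except asyncio.CancelledError:
        pass

app = FastAPI(lifespan=lifespan)
```

Run it with uvicorn as usual; the background loop starts with the app and stops when the server shuts down.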
70,939,969 | 2022-2-1 | https://stackoverflow.com/questions/70939969/psycopg2-connect-to-postgresql-database-using-a-connection-string | I currently have a connection string in the format of: "localhost://username:password@data_quality:5432" What is the best method of using this to connect to my database using psycopg2? e.g.: connection = psycopg2.connect(connection_string) | You could make use of urlparse, creating a dictionary that matches psycopg's connection arguments: import psycopg2 from urllib.parse import urlparse conStr = "postgres://username:password@localhost:5432/data_quality" p = urlparse(conStr) pg_connection_dict = { 'dbname': p.path[1:], 'user': p.username, 'password': p.password, 'port': p.port, 'host': p.hostname } print(pg_connection_dict) con = psycopg2.connect(**pg_connection_dict) print(con) Out: {'dbname': 'data_quality', 'user': 'username', 'password': 'password', 'port': 5432, 'host': 'localhost'} <connection object at 0x105f58190; dsn: 'user=xxx password=xxx dbname=xxx host=xxx port=xxx', closed: 0> | 13 | 15 |
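It is also worth noting that psycopg2.connect() accepts a libpq connection URI directly, provided it is laid out as postgresql://user:password@host:port/dbname; the string in the question puts the host and database name in the wrong positions. A sketch with the pieces reordered:

```python
import psycopg2

# Reordered into the libpq URI layout: scheme://user:password@host:port/dbname
conn_str = "postgresql://username:password@localhost:5432/data_quality"

connection = psycopg2.connect(conn_str)
print(connection.closed)   # 0 means the connection is open
connection.close()
```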
70,879,159 | 2022-1-27 | https://stackoverflow.com/questions/70879159/get-datetime-format-from-string-python | In Python there are multiple DateTime parsers which can parse a date string automatically without providing the datetime format. My problem is that I don't need to cast the datetime, I only need the datetime format. Example: From "2021-01-01", I want something like "%Y-%m-%d" or "yyyy-MM-dd". My only idea was to try casting with different formats and get the successful one, but I don't want to list every possible format. I'm working with pandas, so I can use methods that work either with series or the string DateTime parser. Any ideas? | In pandas, this is achieved by pandas.tseries.api.guess_datetime_format from pandas.tseries.api import guess_datetime_format guess_datetime_format('2021-01-01') # '%Y-%m-%d' As there will always be an ambiguity on the day/month, you can specify the dayfirst case: guess_datetime_format('2021-01-01', dayfirst=True) # '%Y-%d-%m' | 7 | 6 |
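Since the question mentions pandas Series, one way to put the helper to work is to guess the format from the first non-null value and then pass it explicitly to to_datetime; a sketch, assuming a pandas version where the helper is exposed under pandas.tseries.api:

```python
import pandas as pd
from pandas.tseries.api import guess_datetime_format

s = pd.Series(["2021-01-01", "2021-02-15", "2021-03-31"])

fmt = guess_datetime_format(s.dropna().iloc[0])   # '%Y-%m-%d'
parsed = pd.to_datetime(s, format=fmt)

print(fmt)
print(parsed)
```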
70,891,435 | 2022-1-28 | https://stackoverflow.com/questions/70891435/dash-datatable-with-expandable-collapsable-rows | Similar to qtTree, I would like to have a drill down on a column of a datatable. I guess this is better illustrated with an example. Assume we have a dataframe with three columns: Country, City, Population like: Country City Population USA New-York 19MM China Shanghai 26MM China Beijing 20MM USA Los Angeles 12MM France Paris 11MM Is there a way to present this data ideally in a dash-plotly datatable as follows: Country City Population +USA 31MM /----> New-York 19MM /----> Los Angeles 12MM +China 46MM /----> Shanghai 26MM /----> Beijing 20MM +France 11MM /----> Paris 11MM The grouping Country/City would be expandle (or maybe hidden/shown upon click on the row -?-). At the country level, the population would be the sum of its constituents and the City level, the population would be the one from that city. The library dash_treeview_antd allows for treeview representation but I don't know how to include the population column for instance. Maybe there is a simpler way by doing the groupby in pandas first and then having a callback to hide/show the currentrow selection/clicked? Edit: I have been playing around with .groupby in pandas and the 'active_cell' Dash DataTable property in the callback. def defineDF(): df = pd.DataFrame({'Country': ['USA', 'China', 'China', 'USA', 'France'], 'City': ['New-York', 'Shanghai', 'Beijing', 'Los Angeles', 'Paris'], 'Population': [19, 26, 20, 12, 11], 'Other': [5, 3, 4, 11, 43]}) df.sort_values(by=['Country', 'City'], inplace=True) return df def baseDF(): df = pd.DataFrame({'Country': ['USA', 'China', 'China', 'USA', 'France'], 'City': ['New-York', 'Shanghai', 'Beijing', 'Los Angeles', 'Paris'], 'Population': [19, 26, 20, 12, 11], 'Other': [5, 3, 4, 11, 43]}) df.sort_values(by=['Country', 'City'], inplace=True) f = {'Population': 'sum', 'Other': 'sum'} cols = ['Country'] return df.groupby(cols).agg(f).reset_index() startDF = baseDF() app.layout = html.Div([ html.Div(html.H6("Country/City population"), style={"text-align":"center"}), html.Hr(), dash_table.DataTable( id='table', columns=[{'name': i, 'id': i} for i in startDF.columns], data = startDF.to_dict('records'), selected_rows=[], filter_action='native', ) ]) @app.callback([ Output('table', 'data'), Output('table', 'columns') ], [ Input('table', 'active_cell') ], [ State('table', 'data'), State('table', 'columns') ], ) def updateGrouping(active_cell, power_position, power_position_cols): if active_cell is None: returndf = baseDF() elif active_cell['column'] == 0: returndf = defineDF() else: returndf = baseDF() cols = [{'name': i, 'id': i} for i in returndf.columns] return [returndf.to_dict('records'), cols] I am getting there. At start I only display the country column; it would be nice to have the column City there too but with empty values. Then once the user clicks on a country, only show the Cities for that country (and the corresponding Population/Other columns while the rest of the table is unchanged). I am not using current_df nor current_df_cols in the callback yet, but I suspect they might become handy. Maybe I can filter the country column based on active cell? | Dynamic Python Dash app data_table with row-based dropdowns triggering callbacks This is a little tricky, but hopefully the following example might help achieve what you are attempting. 
The main drawback is probably the requirement for hard-coding the dropdown_conditional parameter (although, you probably wouldn't want to have more columns for a user to interact with then would be prohibitive to hard-code!). So, I think this should provide you with the general functional gist sufficient for you to further customize it as needed. import pandas as pd from collections import OrderedDict from dash import Dash, Input, Output, State, dcc, html, callback from dash.exceptions import PreventUpdate from dash import dash_table start_df = pd.DataFrame( OrderedDict([("Country", ["USA", "China", "France"])]) ) start_df["City"] = "" start_df["Population"] = [31, 46, 11] population_df = pd.DataFrame( OrderedDict( [ ("Country", ["USA", "China", "China", "USA", "France"]), ( "City", ["New-York", "Shanghai", "Beijing", "Los Angeles", "Paris",], ), ("Population", [19, 26, 20, 12, 11]), ] ) ) app = Dash(__name__) app.layout = html.Div( [ html.Div( html.H1("Country/City population"), style={"text-align": "center"} ), html.Hr(), dash_table.DataTable( id="table", columns=[ {"id": "Country", "name": "Country",}, {"id": "City", "name": "City", "presentation": "dropdown",}, {"id": "Population", "name": "Population (Total [M])"}, ], data=start_df.to_dict("records"), editable=True, dropdown_conditional=[ { "if": { "column_id": "City", # skip-id-check "filter_query": '{Country} eq "China"', }, "options": [ {"label": i, "value": i} for i in population_df[ population_df.Country == "China" ].City.values ], }, { "if": { "column_id": "City", # skip-id-check "filter_query": '{Country} eq "USA"', }, "options": [ {"label": i, "value": i} for i in population_df[ population_df.Country == "USA" ].City.values ], }, { "if": { "column_id": "City", # skip-id-check "filter_query": '{Country} eq "France"', }, "options": [ {"label": i, "value": i} for i in population_df[ population_df.Country == "France" ].City.values ], }, ], style_cell={ "fontSize": "0.8rem", "whiteSpace": "normal", "padding": "3px", "textOverflow": "ellipsis", "textAlign": "center", "maxWidth": "300px", }, style_header={ "fontWeight": "500", "fontSize": "0.8rem", "cursor": "pointer", }, ), html.Div(id="table_container"), ] ) @callback( Output("table", "data"), Input("table", "data_timestamp"), State("table", "data"), ) def update_table(timestamp, rows): print(timestamp) for row in rows: country = row["Country"] city = row["City"] if city == "" or city is None: print(country) population = start_df.set_index("Country").loc[country][ "Population" ] print(population) row["Population"] = population elif city in population_df[["City"]].values: print(city) population = population_df.set_index("City").loc[city][ "Population" ] print(population) row["Population"] = population return rows if __name__ == "__main__": print(start_df) app.run(debug=True) gives the following app reactivity: The print() functions were used/are included for debugging purposes. E.g., providing this to stdout in the terminal when running app: Dash is running on http://127.0.0.1:8050/ * Serving Flask app "app" (lazy loading) * Environment: production WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. 
* Debug mode: on Country City Population 0 USA 31 1 China 46 2 France 11 None USA 31 China 46 France 11 1690628030839 New-York 19 China 46 France 11 1690628033725 Los Angeles 12 China 46 France 11 1690628035972 USA 31 China 46 France 11 1690628038742 USA 31 Shanghai 26 France 11 1690628041373 USA 31 Shanghai 26 Paris 11 1690628043825 Los Angeles 12 Shanghai 26 Paris 11 1690628046159 Los Angeles 12 China 46 Paris 11 1690628047975 Los Angeles 12 China 46 France 11 1690628050125 USA 31 China 46 France 11 where you can see every interaction with the data table in the app in browser triggers first a printing of the timestamp recorded for that interaction, and then the changes which become rendered in the data_table triggered by the dropdown components in the "City" column. | 6 | 1 |
70,888,992 | 2022-1-28 | https://stackoverflow.com/questions/70888992/what-are-the-differences-between-unittest-mock-mock-mocker-and-pytest-mock | I am new to Python development and am writing test cases using pytest where I need to mock some behavior. Googling for the best mocking library for pytest has only confused me. I have seen unittest.mock, mock, mocker and pytest-mock, and I'm not really sure which one to use. Can someone please explain the difference between them and recommend one? | pytest-mock is a thin wrapper around mock. Since Python 3.3, mock has been part of the standard library as unittest.mock, so the two are effectively the same thing. mocker is not a separate library; it is the name of the fixture provided by pytest-mock that you use to do the mocking in your tests. I personally use pytest and pytest-mock for my tests, which allows you to write very concise tests like from pytest_mock import MockerFixture @pytest.fixture(autouse=True) def something_to_be_mocked_everywhere(mocker): mocker.patch() def tests_this(mocker: MockerFixture): mocker.patch ... a_mock = mocker.Mock() ... ... But this is mainly due to using fixtures, which, as already pointed out, is what pytest-mock offers. | 29 | 16 |
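A tiny side-by-side sketch of the two styles; app here is a hypothetical module exposing a get_data() function, used only to show that mocker.patch is undone automatically when the test ends, whereas unittest.mock normally needs a decorator or context manager:

```python
from unittest import mock

import app                                  # hypothetical module with get_data()

def test_with_unittest_mock():
    with mock.patch("app.get_data", return_value=42):
        assert app.get_data() == 42

def test_with_pytest_mock(mocker):          # `mocker` fixture from pytest-mock
    mocker.patch("app.get_data", return_value=42)
    assert app.get_data() == 42             # patch is reverted automatically afterwards
```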
70,929,777 | 2022-1-31 | https://stackoverflow.com/questions/70929777/type-annotations-tuple-type-vs-union-type | def func(df_a: pd.DataFrame, df_b: pd.DataFrame) -> (pd.DataFrame, pd.DataFrame): Pylance is advising to modify this line with two solution proposed. What would be the pros and cons of each one if there is any significant difference? Tuple expression not allowed in type annotation Use Tuple[T1, ..., Tn] to indicate a tuple type or Union[T1, T2] to indicate a union type | 2023 edit In newer versions of Python (>=3.10), you should use: tuple[A, B, C] instead of Tuple[A, B, C] (yes, that's the built-in tuple function) A | B instead of Union[A, B] The answer itself is still relevant, even if the newer style makes the difference between Tuple/tuple and Union/| more apparent. Original answer They mean different things: Tuple[A, B, C] means that the function returns a three-element tuple with the A B C data types: def f() -> Tuple[str, int, float]: return 'hello', 10, 3.33 Union[A, B] means that the function returns an object of either A or B data type: import random def f() -> Union[str, int]: if random.random() > 0.5: return 'hello' else: return 10 In your case, it looks like you want to use Tuple[pd.DataFrame, pd.DataFrame]. | 5 | 13 |
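A short illustration of the two in the newer syntax (Python 3.10 or later), in the spirit of the question's signature but without pandas: the tuple form fixes how many values come back and in what order, while the union form means one value of either type.

```python
import random

def min_and_max(xs: list[float]) -> tuple[float, float]:
    # Always returns exactly two floats, in this order
    return min(xs), max(xs)

def sample_or_message(xs: list[float]) -> float | str:
    # Returns a single value that is either a float or a str
    return random.choice(xs) if xs else "empty input"

low, high = min_and_max([3.0, 1.5, 2.2])    # tuple unpacking works as expected
print(low, high, sample_or_message([]))
```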