question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---
69,319,437 | 2021-9-24 | https://stackoverflow.com/questions/69319437/decode-firebase-jwt-in-python-using-pyjwt | I have written the following code : def check_token(token): response = requests.get("https://www.googleapis.com/robot/v1/metadata/x509/[email protected]") key_list = response.json() decoded_token = jwt.decode(token, key=key_list, algorithms=["RS256"]) print(f"Decoded token : {decoded_token}") I am trying to decode the token provided by firebase client side to verify it server-side. The above code is throwing the following exception : TypeError: Expecting a PEM-formatted key. I have tried to not pass a list to the jwt.decode method, only the key content and i have a bigger error that the library could not deserialize the Key. I was following this answer but i am getting this error. Is it a requests conversion problem ? What am i doing wrong ? | The 2nd parameter key in decode() seems to take a string value instead of list. The Google API request returns a dict/map containing multiple keys. The flow goes like: Fetch public keys from the Google API endpoint Then read headers without validation to get the kid claim then use it to get appropriate key from that dict That is a X.509 Certificate and not the public key as in this answer so you need to get public key from that. The following function worked for me: import jwt import requests from cryptography.hazmat.backends import default_backend from cryptography import x509 def check_token(token): n_decoded = jwt.get_unverified_header(token) kid_claim = n_decoded["kid"] response = requests.get("https://www.googleapis.com/robot/v1/metadata/x509/[email protected]") x509_key = response.json()[kid_claim] key = x509.load_pem_x509_certificate(x509_key.encode('utf-8'), backend=default_backend()) public_key = key.public_key() decoded_token = jwt.decode(token, public_key, ["RS256"], options=None, audience="<FIREBASE_PROJECT_ID>") print(f"Decoded token : {decoded_token}") check_token("FIREBASE_ID_TOKEN") | 6 | 8 |
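An aside on the `jwt.get_unverified_header` call in the answer above: the header segment of a JWT is just base64url-encoded JSON, so the `kid` lookup it performs can be sketched with the standard library alone. The token below is a fabricated example, not a real Firebase token:

```python
import base64
import json

def unverified_header(token: str) -> dict:
    """Decode a JWT's header segment without verifying the signature.

    A JWT is three base64url segments joined by dots; the first one
    is a JSON object carrying claims such as "alg" and "kid".
    """
    header_b64 = token.split(".")[0]
    # base64url decoding requires padding to a multiple of 4
    header_b64 += "=" * (-len(header_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(header_b64))

# Build a fake token with a known header to demonstrate the round trip
header = {"alg": "RS256", "kid": "abc123"}
fake_token = (
    base64.urlsafe_b64encode(json.dumps(header).encode()).rstrip(b"=").decode()
    + ".payload.signature"
)
print(unverified_header(fake_token)["kid"])  # → abc123
```

This is only the inspection step; the signature still has to be verified against the matching certificate's public key, exactly as the answer does.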
69,320,328 | 2021-9-24 | https://stackoverflow.com/questions/69320328/rust-loop-performance-same-as-python | I was working on mandelbrot algorithm to learn Rust and I found out that empty 25mil(approx 6k image) loop takes 0.5s. I found it quite slow. So I went to test it in python and found out, it takes almost the same time. Has really python's for loop almost zero cost abstraction? Is this really the best I can get with intel i7? Rust: use std::time::Instant; fn main() { let before = Instant::now(); for i in 0..5000 { for j in 0..5000 {} } println!("Elapsed time: {:.2?}", before.elapsed()); } >>> Elapsed time: 406.90ms Python: import time s = time.time() for i in range(5000): for j in range(5000): pass print(time.time()-s) >>> 0.5715351104736328 UPDATE: If I use initialized tuple instead range, python is even faster than rust -> 0.33s | If you're doing performance testing always build with --release. By default Cargo builds with debugging information enabled and optimizations disabled. The optimizer will completely eliminate these loops. On the Playground it drops from 975ms to 1.25µs. 
Let's take a look at the assembly on Godbolt for just the loops, no timer: pub fn main() { for i in 0..5000 { for j in 0..5000 {} } } Without optimization: <i32 as core::iter::range::Step>::forward_unchecked: push rax mov eax, esi add edi, eax mov dword ptr [rsp + 4], edi mov eax, dword ptr [rsp + 4] mov dword ptr [rsp], eax mov eax, dword ptr [rsp] pop rcx ret core::intrinsics::copy_nonoverlapping: push rax mov qword ptr [rsp], rsi mov rsi, rdi mov rdi, qword ptr [rsp] shl rdx, 2 call memcpy@PLT pop rax ret core::cmp::impls::<impl core::cmp::PartialOrd for i32>::lt: mov eax, dword ptr [rdi] cmp eax, dword ptr [rsi] setl al and al, 1 movzx eax, al ret core::mem::replace: sub rsp, 40 mov qword ptr [rsp], rdi mov dword ptr [rsp + 12], esi mov byte ptr [rsp + 23], 0 mov byte ptr [rsp + 23], 1 mov rax, qword ptr [rip + core::ptr::read@GOTPCREL] call rax mov ecx, eax mov dword ptr [rsp + 16], ecx jmp .LBB3_1 .LBB3_1: mov esi, dword ptr [rsp + 12] mov rdi, qword ptr [rsp] mov byte ptr [rsp + 23], 0 mov rcx, qword ptr [rip + core::ptr::write@GOTPCREL] call rcx jmp .LBB3_4 .LBB3_2: test byte ptr [rsp + 23], 1 jne .LBB3_8 jmp .LBB3_7 mov rcx, rax mov eax, edx mov qword ptr [rsp + 24], rcx mov dword ptr [rsp + 32], eax jmp .LBB3_2 .LBB3_4: mov eax, dword ptr [rsp + 16] add rsp, 40 ret .LBB3_5: jmp .LBB3_2 mov rcx, rax mov eax, edx mov qword ptr [rsp + 24], rcx mov dword ptr [rsp + 32], eax jmp .LBB3_5 .LBB3_7: mov rdi, qword ptr [rsp + 24] call _Unwind_Resume@PLT ud2 .LBB3_8: jmp .LBB3_7 core::ptr::read: sub rsp, 24 mov qword ptr [rsp + 8], rdi mov eax, dword ptr [rsp + 20] mov dword ptr [rsp + 16], eax jmp .LBB4_2 .LBB4_2: mov rdi, qword ptr [rsp + 8] lea rsi, [rsp + 16] mov edx, 1 call qword ptr [rip + core::intrinsics::copy_nonoverlapping@GOTPCREL] mov eax, dword ptr [rsp + 16] mov dword ptr [rsp + 4], eax mov eax, dword ptr [rsp + 4] add rsp, 24 ret core::ptr::write: sub rsp, 4 mov dword ptr [rsp], esi mov eax, dword ptr [rsp] mov dword ptr [rdi], eax add rsp, 4 ret 
core::iter::range::<impl core::iter::traits::iterator::Iterator for core::ops::range::Range<A>>::next: push rax call qword ptr [rip + <core::ops::range::Range<T> as core::iter::range::RangeIteratorImpl>::spec_next@GOTPCREL] mov dword ptr [rsp], eax mov dword ptr [rsp + 4], edx mov edx, dword ptr [rsp + 4] mov eax, dword ptr [rsp] pop rcx ret core::clone::impls::<impl core::clone::Clone for i32>::clone: mov eax, dword ptr [rdi] ret <I as core::iter::traits::collect::IntoIterator>::into_iter: mov edx, esi mov eax, edi ret <core::ops::range::Range<T> as core::iter::range::RangeIteratorImpl>::spec_next: sub rsp, 40 mov rsi, rdi mov qword ptr [rsp + 16], rsi mov rdi, rsi add rsi, 4 call core::cmp::impls::<impl core::cmp::PartialOrd for i32>::lt mov byte ptr [rsp + 31], al mov al, byte ptr [rsp + 31] test al, 1 jne .LBB9_3 jmp .LBB9_2 .LBB9_2: mov dword ptr [rsp + 32], 0 jmp .LBB9_7 .LBB9_3: mov rdi, qword ptr [rsp + 16] call core::clone::impls::<impl core::clone::Clone for i32>::clone mov dword ptr [rsp + 12], eax mov edi, dword ptr [rsp + 12] mov esi, 1 call <i32 as core::iter::range::Step>::forward_unchecked mov dword ptr [rsp + 8], eax mov esi, dword ptr [rsp + 8] mov rdi, qword ptr [rsp + 16] call qword ptr [rip + core::mem::replace@GOTPCREL] mov dword ptr [rsp + 4], eax mov eax, dword ptr [rsp + 4] mov dword ptr [rsp + 36], eax mov dword ptr [rsp + 32], 1 .LBB9_7: mov eax, dword ptr [rsp + 32] mov edx, dword ptr [rsp + 36] add rsp, 40 ret example::main: sub rsp, 72 mov dword ptr [rsp + 24], 0 mov dword ptr [rsp + 28], 5000 mov edi, dword ptr [rsp + 24] mov esi, dword ptr [rsp + 28] call qword ptr [rip + <I as core::iter::traits::collect::IntoIterator>::into_iter@GOTPCREL] mov dword ptr [rsp + 16], eax mov dword ptr [rsp + 20], edx mov eax, dword ptr [rsp + 20] mov ecx, dword ptr [rsp + 16] mov dword ptr [rsp + 32], ecx mov dword ptr [rsp + 36], eax .LBB10_2: mov rax, qword ptr [rip + core::iter::range::<impl core::iter::traits::iterator::Iterator for 
core::ops::range::Range<A>>::next@GOTPCREL] lea rdi, [rsp + 32] call rax mov dword ptr [rsp + 44], edx mov dword ptr [rsp + 40], eax mov eax, dword ptr [rsp + 40] test rax, rax je .LBB10_5 jmp .LBB10_13 .LBB10_13: jmp .LBB10_6 ud2 .LBB10_5: add rsp, 72 ret .LBB10_6: mov dword ptr [rsp + 48], 0 mov dword ptr [rsp + 52], 5000 mov edi, dword ptr [rsp + 48] mov esi, dword ptr [rsp + 52] call qword ptr [rip + <I as core::iter::traits::collect::IntoIterator>::into_iter@GOTPCREL] mov dword ptr [rsp + 8], eax mov dword ptr [rsp + 12], edx mov eax, dword ptr [rsp + 12] mov ecx, dword ptr [rsp + 8] mov dword ptr [rsp + 56], ecx mov dword ptr [rsp + 60], eax .LBB10_8: mov rax, qword ptr [rip + core::iter::range::<impl core::iter::traits::iterator::Iterator for core::ops::range::Range<A>>::next@GOTPCREL] lea rdi, [rsp + 56] call rax mov dword ptr [rsp + 68], edx mov dword ptr [rsp + 64], eax mov eax, dword ptr [rsp + 64] test rax, rax je .LBB10_11 jmp .LBB10_14 .LBB10_14: jmp .LBB10_12 ud2 .LBB10_11: jmp .LBB10_2 .LBB10_12: jmp .LBB10_8 __rustc_debug_gdb_scripts_section__: .asciz "\001gdb_load_rust_pretty_printers.py" DW.ref.rust_eh_personality: .quad rust_eh_personality With optimization example::main: ret | 4 | 13 |
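A note on the comparison above: an unoptimized Rust build and CPython both pay per-iteration dispatch costs. On the Python side this can be made visible by driving the same number of iterations from C instead of the interpreter. A rough timing sketch (absolute numbers vary by machine; N is reduced from the question's 5000 to keep it quick):

```python
import collections
import itertools
import time

N = 2000  # 4 million total iterations; the question used 5000 x 5000

# Pure-Python nested loop: every iteration goes through the interpreter
start = time.perf_counter()
count = 0
for i in range(N):
    for j in range(N):
        count += 1
python_time = time.perf_counter() - start

# Same number of iterations, but driven entirely from C:
# deque(..., maxlen=0) drains an iterator without keeping any items
start = time.perf_counter()
collections.deque(itertools.product(range(N), range(N)), maxlen=0)
c_time = time.perf_counter() - start

print(f"interpreter loop: {python_time:.3f}s, C-driven loop: {c_time:.3f}s")
```

The Rust fix remains `--release`; this only illustrates why the debug build's timing landed in the same ballpark as CPython's.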
69,313,876 | 2021-9-24 | https://stackoverflow.com/questions/69313876/how-to-get-points-of-the-svg-paths | I have an SVG file, for example, this <svg viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg"> <path fill="none" stroke="red" d="M 10,30 A 20,20 0,0,1 50,30 A 20,20 0,0,1 90,30 Q 90,60 50,90 Q 10,60 10,30 z" /> </svg> How can I get the list of the points (x, y) for those paths? I have seen that answer but it's not full, so I'm asking for the full solution. I would like to have an option to choose how many points would be there or control of the density of the points. | Here you can change the scale, offset, and density of the points: from svg.path import parse_path from xml.dom import minidom def get_point_at(path, distance, scale, offset): pos = path.point(distance) pos += offset pos *= scale return pos.real, pos.imag def points_from_path(path, density, scale, offset): step = int(path.length() * density) last_step = step - 1 if last_step == 0: yield get_point_at(path, 0, scale, offset) return for distance in range(step): yield get_point_at( path, distance / last_step, scale, offset) def points_from_doc(doc, density=5, scale=1, offset=0): offset = offset[0] + offset[1] * 1j points = [] for element in doc.getElementsByTagName("path"): for path in parse_path(element.getAttribute("d")): points.extend(points_from_path( path, density, scale, offset)) return points string = """<svg viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg"> <path fill="none" stroke="red" d="M 10,30 A 20,20 0,0,1 50,30 A 20,20 0,0,1 90,30 Q 90,60 50,90 Q 10,60 10,30 z" /> </svg>""" doc = minidom.parseString(string) points = points_from_doc(doc, density=1, scale=5, offset=(0, 5)) doc.unlink() And you can also visualize those points: import pygame from svg.path import parse_path from xml.dom import minidom ... 
# other functions and string def main(): screen = pygame.display.set_mode([500, 500]) screen.fill((255, 255, 255)) doc = minidom.parseString(string) points = points_from_doc(doc, 0.05, 5, (0, 5)) doc.unlink() for point in points: pygame.draw.circle(screen, (0, 0, 255), point, 1) pygame.display.flip() while 1: for event in pygame.event.get(): if event.type == pygame.QUIT: return pygame.init() main() pygame.quit() density == 0.05: density == 0.1: density == 0.5: density == 1: density == 5: | 8 | 5 |
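A note on the sampling in the answer above: for a single quadratic Bézier segment (an SVG `Q` command) the point computation needs no dependency at all. A sketch, sampling evenly in the curve parameter t rather than in arc length:

```python
def quad_bezier_point(p0, p1, p2, t):
    """Point on a quadratic Bezier curve (the math behind SVG 'Q' segments)."""
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return x, y

def sample_quad(p0, p1, p2, n):
    """n points at evenly spaced parameter values from t=0 to t=1 inclusive."""
    return [quad_bezier_point(p0, p1, p2, i / (n - 1)) for i in range(n)]

# One of the path's segments: "Q 90,60 50,90" starting from (90, 30)
points = sample_quad((90, 30), (90, 60), (50, 90), n=5)
print(points[0], points[-1])  # endpoints: (90.0, 30.0) (50.0, 90.0)
```

Libraries like svg.path additionally handle arcs, cubic curves, and chaining segments, which is why the answer leans on them instead.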
69,306,799 | 2021-9-23 | https://stackoverflow.com/questions/69306799/why-does-the-lines-count-differently-using-two-different-way-to-load-text | import pathlib file_path = 'vocab.txt' vocab = pathlib.Path(file_path).read_text().splitlines() print(len(vocab)) count = 0 with open(file_path, 'r', encoding='utf8') as f: for line in f: count += 1 print(count) The two counts are 2122 and 2120. Shouldn't they be same? | So, looking at the documentation for str.splitlines, we see that the line delimiters for this method are a superset of "universal newlines": This method splits on the following line boundaries. In particular, the boundaries are a superset of universal newlines. Representation Description \n Line Feed \r Carriage Return \r\n Carriage Return + Line Feed \v or \x0b Line Tabulation \f or \x0c Form Feed \x1c File Separator \x1d Group Separator \x1e Record Separator \x85 Next Line (C1 Control Code) \u2028 Line Separator \u2029 Paragraph Separator A a line for a text-file will by default use the universal-newlines approach to interpret delimiters, from the docs: When reading input from the stream, if newline is None, universal newlines mode is enabled. Lines in the input can end in '\n', '\r', or '\r\n', and these are translated into '\n' before being returned to the caller. If newline is '', universal newlines mode is enabled, but line endings are returned to the caller untranslated. If newline has any of the other legal values, input lines are only terminated by the given string, and the line ending is returned to the caller untranslated. | 9 | 7 |
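The distinction in the answer above is easy to demonstrate with one of the extra boundaries, U+2028 LINE SEPARATOR, which `splitlines` recognizes but universal-newlines file iteration does not:

```python
s = "first\u2028second\nthird"

# str.splitlines() treats U+2028 as a line boundary (3 lines)...
as_splitlines = s.splitlines()

# ...while universal newlines only recognizes \n, \r and \r\n (2 lines);
# splitting on "\n" approximates what iterating an open() file does here
as_newlines = s.split("\n")

print(len(as_splitlines), len(as_newlines))  # → 3 2
```

Two such characters in the question's vocab.txt would account exactly for the 2122 vs 2120 difference.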
69,292,855 | 2021-9-23 | https://stackoverflow.com/questions/69292855/why-do-i-get-an-unprocessable-entity-error-while-uploading-an-image-with-fasta | I am trying to upload an image but FastAPI is coming back with an error I can't figure out. If I leave out the "file: UploadFile = File(...)" from the function definition, it works correctly. But when I add the file to the function definition, then it throws the error. Here is the complete code. @router.post('/', response_model=schemas.PostItem, status_code=status.HTTP_201_CREATED) def create(request: schemas.Item, file: UploadFile = File(...), db: Session = Depends(get_db)): new_item = models.Item( name=request.name, price=request.price, user_id=1, ) print(file.filename) db.add(new_item) db.commit() db.refresh(new_item) return new_item The Item Pydantic model is just class Item(BaseModel): name: str price: float The error is: Code 422 Error: Unprocessable Entity { "detail": [ { "loc": [ "body", "request", "name" ], "msg": "field required", "type": "value_error.missing" }, { "loc": [ "body", "request", "price" ], "msg": "field required", "type": "value_error.missing" } ] } | The problem is that your route is expecting 2 types of request body: request: schemas.Item This is expecting POSTing an application/json body See the Request Body section of the FastAPI docs: "Read the body of the request as JSON" file: UploadFile = File(...) This is expecting POSTing a multipart/form-data See the Request Files section of the FastAPI docs: "FastAPI will make sure to read that data from the right place instead of JSON. ...when the form includes files, it is encoded as multipart/form-data" That will not work as that breaks not just FastAPI, but general HTTP protocols. 
FastAPI mentions this in a warning when using File: You can declare multiple File and Form parameters in a path operation, but you can't also declare Body fields that you expect to receive as JSON, as the request will have the body encoded using multipart/form-data instead of application/json. This is not a limitation of FastAPI, it's part of the HTTP protocol. The common solutions, as discussed in Posting a File and Associated Data to a RESTful WebService preferably as JSON, is to either: Break the API into 2 POST requests: 1 for the file, 1 for the metadata Send it all in 1 multipart/form-data Fortunately, FastAPI supports solution 2, combining both your Item model and uploading a file into 1 multipart/form-data. See the section on Request Forms and Files: Use File and Form together when you need to receive data and files in the same request. Here's your modified route (I removed db as that's irrelevant to the problem): class Item(BaseModel): name: str price: float class PostItem(BaseModel): name: str @router.post('/', response_model=PostItem, status_code=status.HTTP_201_CREATED) def create( # Here we expect parameters for each field of the model name: str = Form(...), price: float = Form(...), # Here we expect an uploaded file file: UploadFile = File(...), ): new_item = Item(name=name, price=price) print(new_item) print(file.filename) return new_item The Swagger docs present it as 1 form ...and you should be able now to send both Item params and the file in one request. If you don't like splitting your Item model into separate parameters (it would indeed be annoying for models with many fields), see this Q&A on fastapi form data with pydantic model. 
Here's the modified code where Item is changed to ItemForm to support accepting its fields as Form values instead of JSON: class ItemForm(BaseModel): name: str price: float @classmethod def as_form(cls, name: str = Form(...), price: float = Form(...)) -> 'ItemForm': return cls(name=name, price=price) class PostItem(BaseModel): name: str @router.post('/', response_model=PostItem, status_code=status.HTTP_201_CREATED) def create( item: ItemForm = Depends(ItemForm.as_form), file: UploadFile = File(...), ): new_item = Item(name=item.name, price=item.price) print(new_item) print(file.filename) return new_item The Swagger UI should still be the same (all the Item fields and the file upload all in one form). For this: If I leave out the "file: UploadFile = File(...)" from the function definition, it works correctly It's not important to focus on this, but it worked because removing File turned the expected request body back to an application/json type, so the JSON body would work. Finally, as a side note, I strongly suggest NOT using request as a parameter name for your route. Aside from being vague (everything is a request), it could conflict with FastAPI's request: Request parameter when using the Request object directly. | 5 | 5 |
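To make the "one body, one encoding" point from the answer above concrete, here is a standard-library sketch of what a multipart/form-data body looks like on the wire; the boundary and field values are made up. Each form field and the file travel as separate parts, which is why a JSON body cannot ride along in the same request:

```python
from email.parser import BytesParser
from email.policy import default

# A hand-built multipart/form-data request body, the way a browser
# would encode the Item fields plus the uploaded file together
boundary = "x123"
body = (
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="name"\r\n\r\n'
    "Widget\r\n"
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="price"\r\n\r\n'
    "9.99\r\n"
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="file"; filename="a.txt"\r\n'
    "Content-Type: text/plain\r\n\r\n"
    "hello\r\n"
    f"--{boundary}--\r\n"
).encode()

raw = (f"Content-Type: multipart/form-data; boundary={boundary}\r\n"
       "\r\n").encode() + body
msg = BytesParser(policy=default).parsebytes(raw)
parts = list(msg.iter_parts())
print([p.get_param("name", header="content-disposition") for p in parts])
```

FastAPI's `Form(...)` and `File(...)` parameters are what read these named parts back out on the server side.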
69,285,679 | 2021-9-22 | https://stackoverflow.com/questions/69285679/setting-pythonwarnings-to-disable-python-warnings-seems-to-do-nothing | I'm currently running the openstack executable and it generates python deprecation warnings. After some searching I did find this howto. The relevant part is here: Use the PYTHONWARNINGS Environment Variable to Suppress Warnings in Python We can export a new environment variable in Python 2.7 and up. We can export PYTHONWARNINGS and set it to ignore to suppress the warnings raised in the Python program. However, doing this: PYTHONWARNINGS="ignore" openstack image show image name -f value -c id does nothing, deprecation warnings are still displayed. I've tried setting PYTHONWARNINGS to various things: ignore "ignore" "all" "deprecated" "ignore::DeprecationWarning" "error::Warning,default::Warning:has_deprecated_syntax" "error::Warning" but none of them seem to do anything. I was able to work around the issue by appending 2>/dev/null to the end but I would like to know why PYTHONWARNINGS doesn't seem to do anything. | PYTHONWARNINGS certainly does suppress python's warnings. Try running: PYTHONWARNINGS="ignore" python -c "import warnings; warnings.warn('hi')" But in this case you are not calling python, but openstack, which is apparently not inheriting the same environment. Without looking at the source I can't say why. It may even be explicitly settings the warning level, which will override anything you do before hand. If you don't want to see errors, sending STDERR to /dev/null is the proper approach. | 5 | 6 |
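The behaviour described in the answer above can be verified from Python itself by spawning child interpreters with and without the variable set:

```python
import os
import subprocess
import sys

code = "import warnings; warnings.warn('hi')"

# Run a child interpreter with PYTHONWARNINGS removed from the env...
base_env = {k: v for k, v in os.environ.items() if k != "PYTHONWARNINGS"}
plain = subprocess.run([sys.executable, "-c", code],
                       capture_output=True, text=True, env=base_env)

# ...and another one with it set to "ignore"
quiet_env = dict(base_env, PYTHONWARNINGS="ignore")
quiet = subprocess.run([sys.executable, "-c", code],
                       capture_output=True, text=True, env=quiet_env)

print("default:", "UserWarning" in plain.stderr)   # → default: True
print("ignored:", "UserWarning" in quiet.stderr)   # → ignored: False
```

If a wrapper binary like openstack still prints warnings despite the variable, the variable simply isn't reaching (or is being overridden by) the Python process it launches.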
69,297,409 | 2021-9-23 | https://stackoverflow.com/questions/69297409/serializers-validated-data-fields-got-changed-with-source-value-in-drf | I am trying to create an api, where user can create programs and add rules to them. Rules has to be executed in order of priority. I am using Django rest framework to achieve this and I am trying to achieve this using Serializer without using ModelSerializer. Provide your solution using serializers.Serializer class One program can have many rules and one rule can be in many programs. So, I am using many_to_many relationship and also I want the user to change the order of rules in a program, to achieve this I am using a through table called Priority, where I keep track of relationship between a rule and a program using priority field models.py class Program(models.Model): name = models.CharField(max_length=32) description = models.TextField(blank=True) rules = models.ManyToManyField(Rule, through='Priority') class Rule(models.Model): name = models.CharField(max_length=20) description = models.TextField(blank=True) rule = models.CharField(max_length=256) class Priority(models.Model): program = models.ForeignKey(Program, on_delete=models.CASCADE) rule = models.ForeignKey(Rule, on_delete=models.CASCADE) priority = models.PositiveIntegerField() def save(self, *args, **kwargs): super(Priority, self).save(*args, **kwargs) serializers.py class RuleSerializer(serializers.Serializer): name = serializers.CharField(max_length=20) description = serializers.CharField(allow_blank=True) rule = serializers.CharField(max_length=256) def create(self, validated_data): return Rule.objects.create(**validated_data) def update(self, instance, validated_data): instance.name = validated_data.get('name', instance.name) instance.description = validated_data.get('description', instance.description) instance.rule = validated_data.get('rule', instance.rule) instance.save() return instance class PrioritySerializer(serializers.Serializer): rule_id = 
serializers.IntegerField(source='rule.id') rule_name = serializers.CharField(source='rule.name') rule_rule = serializers.CharField(source='rule.rule') priority = serializers.IntegerField() class ProgramSerializer(serializers.Serializer): id = serializers.IntegerField() name = serializers.CharField(max_length=32) description = serializers.CharField(style={'base_template': 'textarea.html'}) rules = PrioritySerializer(source='priority_set', many=True) def create(self, validated_data): rules_data = validated_data.pop('rules') program_obj = Program.objects.create(**validated_data) priority = 0 for rule in rules_data: rule_obj = Rule.objects.get(pk=rule.rule_id) priority += 1 Priority.objects.create(program=program_obj, rule=rule_obj, priority=priority) return program_obj views.py class ProgramList(APIView): """ List all programs, or create a new program. """ def get(self, request, format=None): programs = Program.objects.all() serializers = ProgramSerializer(programs, many=True) return Response(serializers.data) def post(self, request, format=None): serializer = ProgramSerializer(data=request.data, partial=True) print(serializer.initial_data) if serializer.is_valid(): serializer.save() return Response(serializer.data, status=status.HTTP_201_CREATED) return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST) When I make a get request to http://localhost:8000/api/programs/, I got this response [ { "id": 1, "name": "some", "description": "hahah", "rules": [ { "rule_id": 3, "rule_name": "DAIEA", "rule_rule": "date,and,invoice,equal,amount", "priority": 1 }, { "rule_id": 2, "rule_name": "DAIEA", "rule_rule": "date,and,invoice,equal,amount", "priority": 2 }, { "rule_id": 1, "rule_name": "DAI=A", "rule_rule": "date,and,invoice,equal,amount", "priority": 3 } ] }, { "id": 2, "name": "DAI=AS", "description": "Date and Invoice equal Amount Sumif", "rules": [] }, ] I got list of rules under rules field, but when I make post request with this request.data to create a new 
program with rules with the help of rule.id, Post data: { "name": "DAI=AS", "description": "Date and Invoice equal Amount Sumif", "rules": [ { "rule_id": 1 }, { "rule_id": 2 } ] } After the serialization process, validated_data contains priority_set field instead of rules field, like below { 'name': 'DAI=AS', 'description': 'Date and Invoice equal Amount Sumif', 'priority_set': [ OrderedDict([('rule', {'id': 1})]), OrderedDict([('rule', {'id': 2})]) ] } I don't want the serializer to change rules to priority_set And also, I am getting list of OrderedDict in priority_set, instead I need dictionary of rule objects This is what I want after serialization process, { "name": "DAI=AS", "description": "Date and Invoice equal Amount Sumif", "rules": [ { "rule_id": 1 }, { "rule_id": 2 } ] } Thanks in advance | Solution 1: The fields in validated data is renamed with the source attribute. You can use the renamed attribute in create method of serializer. For nested serializers, seems the validated data contains OrderedDict. You can convert it to regular Dict, and can get the rule id. class ProgramSerializer(serializers.Serializer): id = serializers.IntegerField() name = serializers.CharField(max_length=32) description = serializers.CharField(style={'base_template': 'textarea.html'}) rules = PrioritySerializer(source='priority_set', many=True) def create(self, validated_data): rules_data = validated_data.pop('priority_set') program_obj = Program.objects.create(**validated_data) priority = 0 for rule in rules_data: rule_obj = Rule.objects.get(pk=dict(rule)["rule"]["id"]) priority += 1 Priority.objects.create(program=program_obj, rule=rule_obj, priority=priority) return program_obj Solution 2: You can make rules with source name priority_set as read_only and can extract it from request data. 
class ProgramSerializer(serializers.Serializer): id = serializers.IntegerField() name = serializers.CharField(max_length=32) description = serializers.CharField(style={'base_template': 'textarea.html'}) rules = PrioritySerializer(source='priority_set', many=True, read_only=True) def create(self, validated_data): rules_data = self.context["request"].data["rules"] program_obj = Program.objects.create(**validated_data) priority = 0 for rule in rules_data: rule_obj = Rule.objects.get(pk=rule["rule_id"]) priority += 1 Priority.objects.create(program=program_obj, rule=rule_obj, priority=priority) return program_obj You will need to pass request context to serializer serializer = ProgramSerializer(data=request.data, partial=True, context={"request": request}) Solution 3: Add related_name in program Foreign key in Priority model. program = models.ForeignKey(Program, related_name="rule", on_delete=models.CASCADE) Rename rules field in ProgramSerializer to something else e.g. rule, and remove source. class ProgramSerializer(serializers.Serializer): id = serializers.IntegerField() name = serializers.CharField(max_length=32) description = serializers.CharField(style={'base_template': 'textarea.html'}) rule = PrioritySerializer(many=True) def create(self, validated_data): rules_data = validated_data.pop('rule') program_obj = Program.objects.create(**validated_data) priority = 0 for rule in rules_data: rule_obj = Rule.objects.get(pk=dict(rule)["rule"]["id"]) priority += 1 Priority.objects.create(program=program_obj, rule=rule_obj, priority=priority) return program_obj Now you will be able to post and retrieve data with rule key. You can rename it something else as appropriate. | 5 | 3 |
69,299,294 | 2021-9-23 | https://stackoverflow.com/questions/69299294/can-we-call-a-pytest-fixture-conditionally | My use case is to call fixture only if a certain condition is met. But since we need to call the pytest fixture as an argument to a test function it gets called every time I run the test. I want to do something like this: @pytest.parameterize("a", [1, 2, 3]) def test_method(a): if a == 2: method_fixture | Yes, you can use indirect=True for a parameter to have the parameter refer to a fixture. import pytest @pytest.fixture def thing(request): if request.param == 2: return func() return None @pytest.mark.parametrize("thing", [1, 2, 3], indirect=True) def test_indirect(thing): pass # thing will either be the retval of `func()` or NOne With dependent "fixtures" As asked in the edit, if your fixtures are dependent on each other, you'll probably need to use the pytest_generate_tests hook instead. E.g. this will parametrize the test with values that aren't equal. import itertools def pytest_generate_tests(metafunc): if metafunc.function.__name__ == "test_combo": a_values = [1, 2, 3, 4] b_values = [2, 3, 4, 5] all_combos = itertools.product(a_values, b_values) combos = [ pair for pair in all_combos if pair[0] != pair[1] ] metafunc.parametrize(["a", "b"], combos) def test_combo(a, b): assert a != b | 5 | 4 |
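The pair-generation logic inside `pytest_generate_tests` in the answer above can be checked on its own, outside pytest:

```python
import itertools

a_values = [1, 2, 3, 4]
b_values = [2, 3, 4, 5]

all_combos = itertools.product(a_values, b_values)
# keep only the pairs where the two values differ
combos = [pair for pair in all_combos if pair[0] != pair[1]]

# 16 raw pairs minus the 3 equal ones: (2,2), (3,3), (4,4)
print(len(combos))  # → 13
```

`metafunc.parametrize(["a", "b"], combos)` then turns each surviving pair into one test case.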
69,289,547 | 2021-9-22 | https://stackoverflow.com/questions/69289547/how-to-remove-dynamically-fields-from-a-dataclass | I want to inherit my dataclass but remove some of its fields. How can I do that in runtime so I don't need to copy all of the members one by one? Example: from dataclasses import dataclass @dataclass class A: a: int b: int c: int d: int @remove("c", "d") class B(A): pass Such that A would have a, b, c, d defined and B would only have a and b defined. | We can remove the particular fields from the __annotations__ dictionary as well as from __dataclass_fields__ and then rebuild our class using dataclasses.make_dataclass: def remove(*fields): def _(cls): fields_copy = copy.copy(cls.__dataclass_fields__) annotations_copy = copy.deepcopy(cls.__annotations__) for field in fields: del fields_copy[field] del annotations_copy[field] d_cls = dataclasses.make_dataclass(cls.__name__, annotations_copy) d_cls.__dataclass_fields__ = fields_copy return d_cls return _ Note that we copy the annotations and the fields in order to not affect A. otherwise it would remove these fields from A as well and any other attempt to inherit A and remove again a field which we already removed would lead to an error. I.e: from dataclasses import dataclass @dataclass class A: a: int b: int c: int d: int @remove("c", "d") class B(A): pass @remove("b", "c") class C(A): pass This would give us a KeyError since we already removed "c" and this is no longer exists in A's dictionary. | 5 | 2 |
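A more compact variant of the accepted answer's idea, using `dataclasses.fields` as the source of truth instead of copying `__dataclass_fields__` and `__annotations__` by hand. As with the answer's `make_dataclass` approach, the rebuilt class no longer inherits from A:

```python
import dataclasses

def remove(*names):
    """Class decorator: rebuild a dataclass without the named fields."""
    def wrapper(cls):
        kept = [(f.name, f.type) for f in dataclasses.fields(cls)
                if f.name not in names]
        return dataclasses.make_dataclass(cls.__name__, kept)
    return wrapper

@dataclasses.dataclass
class A:
    a: int
    b: int
    c: int
    d: int

@remove("c", "d")
class B(A):  # fields are discovered via inheritance, then C/D are dropped
    pass

print([f.name for f in dataclasses.fields(B)])  # → ['a', 'b']
```

Fields with defaults or other per-field options would need the richer `(name, type, Field)` tuples that `make_dataclass` also accepts.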
69,297,600 | 2021-9-23 | https://stackoverflow.com/questions/69297600/why-isnt-my-dockerignore-file-ignoring-files | When I build the container and I check the files that should have been ignored, most of them haven't been ignored. This is my folder structure. Root/ data/ project/ __pycache__/ media/ static/ app/ __pycache__/ migrations/ templates/ .dockerignore .gitignore .env docker-compose.yml Dockerfile requirements.txt manage.py Let's say i want to ignore the __pycache__ & data(data will be created with the docker-compose up command, when creating the container) folders and the .gitignore & .env files. I will ignore these with the next .dockerignore file .git .gitignore .docker */__pycache__/ **/__pycache__/ .env/ .venv/ venv/ data/ The final result is that only the git & .env files have been ignored. The data folder hasn't been ignored but it's not accesible from the container. And the __pycache__ folders haven't been ignored either. Here are the docker files. docker-compose.yml version: "3.8" services: app: build: . volumes: - .:/django-app ports: - 8000:8000 command: /bin/bash -c "sleep 7; python manage.py migrate; python manage.py runserver 0.0.0.0:8000" container_name: app-container depends_on: - db db: image: postgres volumes: - ./data:/var/lib/postgresql/data environment: - POSTGRES_DB=${DB_NAME} - POSTGRES_USER=${DB_USER} - POSTGRES_PASSWORD=${DB_PASSWORD} container_name: postgres_db_container Dockerfile FROM python:3.9-slim-buster ENV PYTHONUNBUFFERED=1 WORKDIR /django-app EXPOSE 8000 COPY requirements.txt requirements.txt RUN apt-get update \ && adduser --disabled-password --no-create-home userapp \ && apt-get -y install libpq-dev \ && apt-get -y install apt-file \ && apt-get -y install python3-dev build-essential \ && pip install -r requirements.txt USER userapp | You're actually injecting your source code using volumes:, not during the image build, and this doesn't honor .dockerignore. 
Running a Docker application like this happens in two phases: You build a reusable image that contains the application runtime, any OS and language-specific library dependencies, and the application code; then You run a container based on that image. The .dockerignore file is only considered during the first build phase. In your setup, you don't actually COPY anything in the image beyond the requirements.txt file. Instead, you use volumes: to inject parts of the host system into the container. This happens during the second phase, and ignores .dockerignore. The approach I'd recommend for this is to skip the volumes:, and instead COPY the required source code in the Dockerfile. You should also generally indicate the default CMD the container will run in the Dockerfile, rather than requiring it it the docker-compose.yml or docker run command. FROM python:3.9-slim-buster # Do the OS-level setup _first_ so that it's not repeated # if Python dependencies change RUN apt-get update && apt-get install -y ... WORKDIR /django-app # Then install Python dependencies COPY requirements.txt . RUN pip install -r requirements.txt # Then copy in the rest of the application # NOTE: this _does_ honor .dockerignore COPY . . # And explain how to run it ENV PYTHONUNBUFFERED=1 EXPOSE 8000 USER userapp # consider splitting this into an ENTRYPOINT that waits for the # the database, runs migrations, and then `exec "$@"` to run the CMD CMD sleep 7; python manage.py migrate; python manage.py runserver 0.0.0.0:8000 This means, in the docker-compose.yml setup, you don't need volumes:; the application code is already inside the image you built. version: "3.8" services: app: build: . 
ports: - 8000:8000 depends_on: - db # environment: [PGHOST=db] # no volumes: or container_name: db: image: postgres volumes: # do keep for persistent database data - ./data:/var/lib/postgresql/data environment: - POSTGRES_DB=${DB_NAME} - POSTGRES_USER=${DB_USER} - POSTGRES_PASSWORD=${DB_PASSWORD} # ports: ['5433:5432'] This approach also means you need to docker-compose build a new image when your application changes. This is normal in Docker. For day-to-day development, a useful approach here can be to run all of the non-application dependencies in Docker, but the application itself outside a container. # Start the database but not the application docker-compose up -d db # Create a virtual environment and set it up python3 -m venv venv . venv/bin/activate pip install -r requirements.txt # Set environment variables to point at the Docker database export PGHOST=localhost PGPORT=5433 # Run the application locally ./manage.py runserver Doing this requires making the database visible from outside Docker (via ports:), and making the database location configurable (probably via environment variables, set in Compose with environment:). | 9 | 17 |
69,291,738 | 2021-9-22 | https://stackoverflow.com/questions/69291738/how-to-encode-all-logged-messages-as-utf-8-in-python | I have a little logger function that returns potentially two handlers to log to a RotatingFileHandler and sys.stdout simultaneously. import os, logging, sys from logging.handlers import RotatingFileHandler from config import * def get_logger(filename, log_level_stdout=logging.WARNING, log_level_file=logging.INFO, echo=True): logger = logging.getLogger(__name__) if not os.path.exists(PATH + '/Logs'): os.mkdir(PATH + '/Logs') logger.setLevel(logging.DEBUG) if echo: prn_handler = logging.StreamHandler(sys.stdout) prn_handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s: %(message)s')) prn_handler.setLevel(log_level_stdout) logger.addHandler(prn_handler) file_handler = RotatingFileHandler(PATH + '/Logs/' + filename, maxBytes=1048576, backupCount=3) file_handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s: %(message)s')) file_handler.setLevel(log_level_file) logger.addHandler(file_handler) return logger This works fine in general but certain strings being logged appear to be encoded in cp1252 and throw an (non-fatal) error when trying to print them to stdout via logger function. It should be noted that the very same characters can be printed just fine in the error message. Logging them to a file also causes no issues. It's only the console - sys.stdout - that throws this error. 
--- Logging error --- Traceback (most recent call last): File "C:\Program Files\Python38\lib\logging\__init__.py", line 1084, in emit stream.write(msg + self.terminator) File "C:\Program Files\Python38\lib\encodings\cp1252.py", line 19, in encode return codecs.charmap_encode(input,self.errors,encoding_table)[0] UnicodeEncodeError: 'charmap' codec can't encode character '\u1ecd' in position 65: character maps to <undefined> Call stack: File "script.py", line 147, in <module> logger.info(f"F-String with a name in it: '{name}'.") Message: "F-String with a name in it: 'Heimstọð'." Arguments: () A fix to this has been to encode every single message getting as utf8 in the code that's calling the logger function like this: logger.info((f"F-String with a name in it: '{name}'.").encode('utf8')) However I feel like this is neither elegant nor efficient. It should also be noted that the logging of the file works just fine and I already tried setting the PYTHONIOENCODING to utf-8 in the system variables of Windows without any noticeable effect. Update: Turns out I'm stupid. Just because an error message is printed in the console doesn't mean the printing to the console is the cause of the error. I was looking into the answers to the other question that has been suggested to me here and after a while realized that nothing I did to the "if echo" part of the function had any impact on the result. The last check was commenting out the whole block and I still got the error. That's when I realized that the issue was in fact caused by not enforcing UTF8 when writing to the file. Adding the simple kwarg encoding='utf-8' to the RotatingFileHandler as suggested by @michael-ruth fixed the issue for me. P.S. I'm not sure how to handle this case because, while that answer fixed my problem, it wasn't really what I was asking for or what the question suggested because I originally misunderstood the root cause. I'll still check it as solution and upvote both answers. 
I'll also edit the question as to not mislead future readers into believing it would answer that question when it doesn't really. | Set the encoding while instantiating the handler instead of encoding the message explicitly. file_handler = RotatingFileHandler( PATH + '/Logs/' + filename, maxBytes=1048576, backupCount=3, encoding='utf-8' ) help(RotatingFileHandler) is your best friend. Help on class RotatingFileHandler in module logging.handlers: class RotatingFileHandler(BaseRotatingHandler) | RotatingFileHandler(filename, mode='a', maxBytes=0, backupCount=0, encoding=None, delay=False) | 8 | 6 |
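A self-contained round-trip check of the accepted fix (the log path and logger name here are illustrative, not from the question): open the RotatingFileHandler with encoding='utf-8', log a string that cp1252 cannot encode, and read it back.

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

# Illustrative throwaway location for the log file
log_path = os.path.join(tempfile.mkdtemp(), "app.log")

logger = logging.getLogger("utf8_demo")
logger.setLevel(logging.DEBUG)

# encoding='utf-8' is the fix: without it, the platform default
# (cp1252 on Windows) is used and characters like '\u1ecd' fail
handler = RotatingFileHandler(log_path, maxBytes=1048576, backupCount=3,
                              encoding="utf-8")
handler.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))
logger.addHandler(handler)

logger.info("F-String with a name in it: 'Heimstọð'.")
handler.close()

with open(log_path, encoding="utf-8") as f:
    print(f.read())  # INFO: F-String with a name in it: 'Heimstọð'.
```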
69,278,251 | 2021-9-22 | https://stackoverflow.com/questions/69278251/plotly-including-additional-data-in-hovertemplate | hovertemplate= 'Continent: %{df['continent']}'+ 'Country: %{df['country']}'+ 'gdpPercap: %{x:,.4f} '+ 'lifeExp: %{y}'+ '' I'm trying to use hovertemplate to customize hover information. However I can't get it to display what I want. I am getting x & y to work well. I can't figure out how to add other fields to the hovertemplate though. Any help would be appreciated. import numpy as np df = df[df['year'] == 1952] customdata = np.stack((df['continent'], df['country']), axis=-1) fig = go.Figure() for i in df['continent'].unique(): df_by_continent = df[df['continent'] == i] fig.add_trace(go.Scatter(x=df_by_continent['gdpPercap'], y=df_by_continent['lifeExp'], mode='markers', opacity=.7, marker = {'size':15}, name=i, hovertemplate= 'Continent: %{customdata[0]}<br>'+ 'Country: %{customdata[1]}<br>'+ 'gdpPercap: %{x:,.4f} <br>'+ 'lifeExp: %{y}'+ '<extra></extra>', )) fig.update_layout(title="My Plot", xaxis={'title':'GDP Per Cap', 'type':'log'}, yaxis={'title':'Life Expectancy'}, ) fig.show() Updated with more code. The first answer didn't work just returning the text value of comdata. | See below for an additional example of how to use customdata with multiple traces based on the code included in your question. Note that you actually need to add the customdata to the figure traces in order to use it in the hovertemplate, this was also shown in Derek O's answer. 
import numpy as np import pandas as pd import plotly.graph_objects as go df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/gapminderDataFiveYear.csv') df = df[df['year'] == 1952] fig = go.Figure() for continent in df['continent'].unique(): df_by_continent = df[df['continent'] == continent] fig.add_trace( go.Scatter( x=df_by_continent['gdpPercap'], y=df_by_continent['lifeExp'], customdata=np.stack((df_by_continent['country'], df_by_continent['pop']), axis=-1), mode='markers', opacity=0.7, marker={'size': 15}, name=continent, hovertemplate='<b>Country</b>: %{customdata[0]}<br>' + '<b>Population</b>: %{customdata[1]:,.0f}<br>' + '<b>GDP</b>: %{x:$,.4f}<br>' + '<b>Life Expectancy</b>: %{y:,.2f} Years' + '<extra></extra>', ) ) fig.update_layout( xaxis={'title': 'GDP Per Cap', 'type': 'log'}, yaxis={'title': 'Life Expectancy'}, ) fig.write_html('fig.html', auto_open=True) | 14 | 19 |
69,289,275 | 2021-9-22 | https://stackoverflow.com/questions/69289275/web-parser-in-javascript-like-beautiful-soup-in-python | Python has a library called Beautiful Soup that you can use to parse an HTML tree without creating 'get' requests in external web pages. I'm looking for the same in JavaScript, but I've only found jsdom and JSSoup (which seems unused) and if I'm correct, they only allow you to make requests. I want a library in JavaScript which allows me to parse the entire HTML tree without getting CORS policy errors, that is, without making a request, just parsing it. How can I do this? | In a browser context, you can use DOMParser: const html = "<h1>title</h1>"; const parser = new DOMParser(); const parsed = parser.parseFromString(html, "text/html"); console.log(parsed.firstChild.innerText); // "title" and in node you can use node-html-parser: import { parse } from 'node-html-parser'; const html = "<h1>title</h1>"; const parsed = parse(html); console.log(parsed.firstChild.innerText); // "title" | 11 | 8 |
69,287,269 | 2021-9-22 | https://stackoverflow.com/questions/69287269/installing-ruamel-yaml-clib-with-docker | I have a small project in django rest framework and I want to dockerize it. In my requirements.txt file there is a package called ruamel.yaml.clib==0.2.6. While downloading all other requirements is successfull, there is a problem when it tries to download this package. #11 208.5 Collecting ruamel.yaml.clib==0.2.6 #11 208.7 Downloading ruamel.yaml.clib-0.2.6.tar.gz (180 kB) #11 217.8 ERROR: Command errored out with exit status 1: #11 217.8 command: /usr/local/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-b8oectgw/ruamel-yaml-clib_517e9b3f18a94ebea71ec88fbaece43a/setup.py'"'"'; __file__='"'"'/tmp/pip-install-b8oectgw/ruamel-yaml-clib_517e9b3f18a94ebea71ec88fbaece43a/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-n2gr5j35 #11 217.8 cwd: /tmp/pip-install-b8oectgw/ruamel-yaml-clib_517e9b3f18a94ebea71ec88fbaece43a/ #11 217.8 Complete output (3 lines): #11 217.8 sys.argv ['/tmp/pip-install-b8oectgw/ruamel-yaml-clib_517e9b3f18a94ebea71ec88fbaece43a/setup.py', 'egg_info', '--egg-base', '/tmp/pip-pip-egg-info-n2gr5j35'] #11 217.8 test compiling /tmp/tmp_ruamel_erx3efla/test_ruamel_yaml.c -> test_ruamel_yaml compile error: /tmp/tmp_ruamel_erx3efla/test_ruamel_yaml.c #11 217.8 Exception: command 'gcc' failed: No such file or directory #11 217.8 ---------------------------------------- #11 217.8 WARNING: Discarding https://files.pythonhosted.org/packages/8b/25/08e5ad2431a028d0723ca5540b3af6a32f58f25e83c6dda4d0fcef7288a3/ruamel.yaml.clib-0.2.6.tar.gz#sha256=4ff604ce439abb20794f05613c374759ce10e3595d1867764dd1ae675b85acbd (from 
https://pypi.org/simple/ruamel-yaml-clib/) (requires-python:>=3.5). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. #11 217.8 ERROR: Could not find a version that satisfies the requirement ruamel.yaml.clib==0.2.6 (from versions: 0.1.0, 0.1.2, 0.2.0, 0.2.2, 0.2.3, 0.2.4, 0.2.6) #11 217.8 ERROR: No matching distribution found for ruamel.yaml.clib==0.2.6 However, there is no problem when I download this package without docker. Any suggestions? Here is Dockerfile: FROM python:3-alpine # set work directory WORKDIR /usr/src/app # set environment variables ENV PYTHONDONTWRITEBYTECODE 1 ENV PYTHONUNBUFFERED 1 # install dependencies RUN pip install -U pip setuptools wheel ruamel.yaml ruamel.yaml.clib==0.2.6 COPY ./requirements.txt . RUN pip install --default-timeout=100 -r requirements.txt # copy project COPY . . Here is compose file version: '3.8' services: web: build: . command: python manage.py runserver 0.0.0.0:8000 volumes: - .:/usr/src/app ports: - 8000:8000 env_file: - ./.env.dev EDIT As mentioned in comments by @Anthon, the problem was related to alpine. I used python:3.9-slim-buster instead in Dockerfile and problem solved! | I think the problem is with the way your Dockerfile tries to install ruamel.yaml.clib. It should be installed using pip (just as documented for the ruamel.yaml). I suggest you take it out of the requirements.txt and explicitly do a pip install -U pip setuptools wheel ruamel.yaml.clib==0.2.6 in your Dockerfile instead. This should just get you the pre-compiled wheel instead of trying to compile ruamel.yaml.clib from source, which will not work if you don't have a C compiler installed (this is actually what docker complains about) I have ruamel.yaml.clib running succesfully in multiple Docker containers (but I never use a requirements.txt) | 4 | 3 |
69,285,056 | 2021-9-22 | https://stackoverflow.com/questions/69285056/python-locate-elements-in-sublists | given these sublists lst=[['a', 'b', 'c', 'd', 'e'], ['f', 'g', 'h']] I am trying to find the location of its elements, for instance, the letter 'a' is located at 0,0 but this line print(lst.index('a')) instead produces the following error: ValueError: 'a' is not in list | You can use list comprehension: >>> lst=[['a', 'b', 'a', 'd', 'a'], ['f', 'g', 'a'], ['a','a','b']] >>> [(i,j) for i in range(len(lst)) for j in range(len(lst[i])) if lst[i][j]=='a'] [(0, 0), (0, 2), (0, 4), (1, 2), (2, 0), (2, 1)] | 5 | 0 |
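The same search can be written with enumerate, which avoids the range(len(...)) indexing; a small variation on the accepted answer:

```python
lst = [['a', 'b', 'c', 'd', 'e'], ['f', 'g', 'h']]

# enumerate yields (index, value) pairs for both levels of nesting
positions = [(i, j)
             for i, row in enumerate(lst)
             for j, item in enumerate(row)
             if item == 'a']

print(positions)  # [(0, 0)]
```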
69,284,018 | 2021-9-22 | https://stackoverflow.com/questions/69284018/euler-mascheroni-constant | In programming I have only used integers, but this time I need to do some calculations: I need to calculate the Euler-Mascheroni constant γ up to the n-th decimal (n ∈ [30, 150] is enough for me; here [x] = gif(x) = math.floor(x)). I doubt the precision of the numerical algorithm, so I need a higher degree of accuracy using Python. | From the French Wikipedia discussion page, an approximation to 6 decimal places: import math as m EulerMascheroniApp = round( (1.-m.gamma(1+1.e-8))*1.e14 )*1.e-6 print(EulerMascheroniApp) # 0.577216 This constant is also available in the sympy module, under the name EulerGamma: >>> import sympy >>> sympy.EulerGamma EulerGamma >>> sympy.EulerGamma.evalf() 0.577215664901533 >>> - sympy.polygamma(0,1) EulerGamma >>> sympy.stieltjes(0) EulerGamma >>> sympy.stieltjes(0, 1) EulerGamma Documentation: math.gamma; sympy.EulerGamma; sympy.functions.special; sympy: numerical evaluation. On this last documentation link, you can find more information about how to evaluate the constant with more precision, if the default of .evalf() is not enough. If you still want to compute the constant yourself as an exercise, I suggest comparing your results to sympy's constant, to check for accuracy and correctness. | 5 | 4
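For the 30–150 decimals the question asks for, one common route (my suggestion, not part of the answer above) is mpmath, which sympy itself uses for numerics; sympy.EulerGamma.evalf(n) takes a precision argument as well. A sketch, assuming mpmath is installed:

```python
from mpmath import mp, nstr

mp.dps = 150           # work with 150 decimal places
gamma = +mp.euler      # Euler-Mascheroni constant at that precision

print(nstr(gamma, 30))  # 0.5772156649015328606...
# sympy equivalent: sympy.EulerGamma.evalf(150)
```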
69,279,865 | 2021-9-22 | https://stackoverflow.com/questions/69279865/how-to-get-second-highest-value-from-a-column-pyspark | I have a PySpark DataFrame and I would like to get the second highest value of ORDERED_TIME (DateTime Field yyyy-mm-dd format) after a groupBy applied to 2 columns, namely CUSTOMER_ID and ADDRESS_ID. A customer can have many orders associated with an address and I would like to get the second most recent order for a (customer,address) pair My approach was to make a window and partition according to CUSTOMER_ID and ADDRESS_ID, sort by ORDERED_TIME sorted_order_times = Window.partitionBy("CUSTOMER_ID", "ADDRESS_ID").orderBy(col('ORDERED_TIME').desc()) df2 = df2.withColumn("second_recent_order", (df2.select("ORDERED_TIME").collect()[1]).over(sorted_order_times)) However, I get an error saying ValueError: 'over' is not in list Could anyone suggest the right way to go about solving this problem? Please let me know if any other information is needed Sample Data +-----------+----------+-------------------+ |USER_ID |ADDRESS_ID| ORDER DATE | +-----------+----------+-------------------+ | 100| 1000 |2021-01-02 | | 100| 1000 |2021-01-14 | | 100| 1000 |2021-01-03 | | 100| 1000 |2021-01-04 | | 101| 2000 |2020-05-07 | | 101| 2000 |2021-04-14 | +-----------+----------+-------------------+ Expected Output +-----------+----------+-------------------+-------------------+ |USER_ID |ADDRESS_ID| ORDER DATE |second_recent_order +-----------+----------+-------------------+-------------------+ | 100| 1000 |2021-01-02 |2021-01-04 | 100| 1000 |2021-01-14 |2021-01-04 | 100| 1000 |2021-01-03 |2021-01-04 | 100| 1000 |2021-01-04 |2021-01-04 | 101| 2000 |2020-05-07 |2020-05-07 | 101| 2000 |2021-04-14 |2020-05-07 +-----------+----------+-------------------+------------------- | Here is another way to do it. 
Using collect_list import pyspark.sql.functions as F from pyspark.sql import Window sorted_order_times = Window.partitionBy("CUSTOMER_ID", "ADDRESS_ID").orderBy(F.col('ORDERED_TIME').desc()).rangeBetween(Window.unboundedPreceding, Window.unboundedFollowing) df2 = ( df .withColumn("second_recent_order", (F.collect_list(F.col("ORDERED_TIME")).over(sorted_order_times))[1]) ) df2.show() | 5 | 5 |
69,276,878 | 2021-9-22 | https://stackoverflow.com/questions/69276878/pytest-unittest-mock-patch-function-from-module | Given a folder structure like such: dags/ **/ code.py tests/ dags/ **/ test_code.py conftest.py Where dags serves as the root of the src files, with 'dags/a/b/c.py' imported as 'a.b.c'. I want to test the following function in code.py: from dag_common.connections import get_conn from utils.database import dbtypes def select_records( conn_id: str, sql: str, bindings, ): conn: dbtypes.Connection = get_conn(conn_id) with conn.cursor() as cursor: cursor.execute( sql, bindings ) records = cursor.fetchall() return records But I am faced with the issue that I fail to find a way to patch the get_conn from dag_common.connections. I attempted the following: (1) Globally in conftest.py import os import sys # adds dags to sys.path for tests/*.py files to be able to import them sys.path.append(os.path.join(os.path.dirname(__file__), "..", "dags")) {{fixtures}} Where I have tested the following replacements for {{fixtures}}: (1.a) - default @pytest.fixture(autouse=True, scope="function") def mock_get_conn(): with mock.patch("dag_common.connections.get_conn") as mock_getter: yield mock_getter (1.b) - prefixing path with dags @pytest.fixture(autouse=True, scope="function") def mock_get_conn(): with mock.patch("dags.dag_common.connections.get_conn") as mock_getter: yield mock_getter (1.c) - 1.a, with scope="session" (1.d) - 1.b, with scope="session" (1.e) - object patching the module itself @pytest.fixture(autouse=True, scope="function") def mock_get_conn(): import dags.dag_common.connections mock_getter = mock.MagicMock() with mock.patch.object(dags.dag_common.connections, 'get_conn', mock_getter): yield mock_getter (1.f) - 1.a, but using pytest-mock fixture @pytest.fixture(autouse=True, scope="function") def mock_get_conn(mocker): with mocker.patch("dag_common.connections.get_conn") as mock_getter: yield mock_getter (1.g) - 1.b, but using pytest-mock fixture 
(1.h) - 1.a, but using pytest's monkeypatch @pytest.fixture(autouse=True, scope="function") def mock_get_conn(mocker, monkeypatch): import dags.dag_common.connections mock_getter = mocker.MagicMock() monkeypatch.setattr(dags.dag_common.connections, 'get_conn', mock_getter) yield mock_getter (2) Locally applying mock.patch in the test/as a decorator (2.a) - decorator @mock.patch("dag_common.connections.get_conn") @mock.patch("dag_common.connections.get_conn") def test_executes_sql_with_default_bindings(mock_getter, mock_context): # arrange sql = "SELECT * FROM table" records = [RealDictRow(col1=1), RealDictRow(col1=2)] mock_conn = mock_getter.return_value mock_cursor = mock_conn.cursor.return_value mock_cursor.execute.return_value = records # act select_records(conn_id="orca", sql=sql, ) # ... # assert mock_cursor.execute.assert_called_once_with( sql, # ... ) (2.b) - (2.a) but with "dags." prefix (2.c) - context manager def test_executes_sql_with_default_bindings(mock_context): # arrange sql = "SELECT * FROM table" records = [RealDictRow(col1=1), RealDictRow(col1=2)] with mock.patch("dag_common.connections.get_conn") as mock_getter: mock_conn = mock_getter.return_value mock_cursor = mock_conn.cursor.return_value mock_cursor.execute.return_value = records # act select_records(conn_id="orca", sql=sql, ) # ... # assert mock_cursor.execute.assert_called_once_with( sql, # ... ) (2.d) - (2.c) but with "dags." prefix Conclusion But alas, no matter what solution I pick, the function-to-be-mocked still gets called. I made sure to attempt each solution separatedly from each other, and to kill/clear/restart my pytest-watch process in between attempts. I feel like this may be related to me meddling with sys.path in conftest.py, because outside of this I feel like I have exhausted all possibilities. Any idea how I can solve this? | Yeah. 
I also fought with this initially when I learned patching and mocking, and I know how frustrating it is: you seem to be doing everything right, but it does not work. I sympathise with you! This is actually how mocking of imported stuff works, and once you realise it, it actually makes sense. The problem is that import makes the imported module available in the context of where your import is. Let's assume your code.py module is in the 'my_package' folder. Your code is then available as my_package.code. And once you use from dag_common.connections import get_conn in the code module, the imported get_conn becomes available as my_package.code.get_conn. And in this case you need to patch my_package.code.get_conn, not the original package you imported get_conn from. Once you realise this, patching becomes much easier. | 16 | 38
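The principle — patch the name where it is looked up, not where it is defined — can be shown with a self-contained sketch that fakes the module layout via sys.modules (the module names are stand-ins for the question's real files):

```python
import sys
import types
from unittest import mock

# Fake 'dag_common.connections', providing the real get_conn
connections = types.ModuleType("dag_common.connections")
connections.get_conn = lambda conn_id: "real connection"
dag_common = types.ModuleType("dag_common")
dag_common.connections = connections
sys.modules["dag_common"] = dag_common
sys.modules["dag_common.connections"] = connections

# Fake the code under test, which does `from ... import get_conn`
app_code = types.ModuleType("app_code")
exec(
    "from dag_common.connections import get_conn\n"
    "def select_records(conn_id):\n"
    "    return get_conn(conn_id)\n",
    app_code.__dict__,
)
sys.modules["app_code"] = app_code

# Patching the *source* module has no effect: app_code already
# copied the name into its own namespace at import time
with mock.patch("dag_common.connections.get_conn", return_value="mocked"):
    print(app_code.select_records("orca"))  # real connection

# Patching the name *where it is used* is what works
with mock.patch("app_code.get_conn", return_value="mocked"):
    print(app_code.select_records("orca"))  # mocked
```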
69,277,713 | 2021-9-22 | https://stackoverflow.com/questions/69277713/how-to-change-the-grid-line-color-in-plotly-scatter-plot | I use plotly dash to draw the following scatter charts, and how can I change the grid line color | To customize the grid in plotly, do the following. This will allow you to display the grid on the xy axis, set the line width and line color. fig.update_xaxes(showgrid=True, gridwidth=1, gridcolor='LightPink') fig.update_yaxes(showgrid=True, gridwidth=1, gridcolor='LightPink') | 5 | 10 |
69,276,894 | 2021-9-22 | https://stackoverflow.com/questions/69276894/str-isdigit-behaviour-when-handling-strings | Assuming the following: >>> square = '²' # Superscript Two (Unicode U+00B2) >>> cube = '³' # Superscript Three (Unicode U+00B3) Curiously: >>> square.isdigit() True >>> cube.isdigit() True OK, let's convert those "digits" to integer: >>> int(square) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: invalid literal for int() with base 10: '²' >>> int(cube) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: invalid literal for int() with base 10: '³' Oooops! Could someone please explain what behavior I should expect from the str.isdigit() method when handling strings? | str.isdigit doesn't claim to be related to parsability as an int. It's reporting a simple Unicode property, is it a decimal character or digit of some sort: str.isdigit() Return True if all characters in the string are digits and there is at least one character, False otherwise. Digits include decimal characters and digits that need special handling, such as the compatibility superscript digits. This covers digits which cannot be used to form numbers in base 10, like the Kharosthi numbers. Formally, a digit is a character that has the property value Numeric_Type=Digit or Numeric_Type=Decimal. In short, str.isdigit is thoroughly useless for detecting valid numbers. The correct solution to checking if a given string is a legal integer is to call int on it, and catch the ValueError if it's not a legal integer. Anything else you do will be (badly) reinventing the same tests the actual parsing code in int() performs, so why not let it do the work in the first place? Side-note: You're using the term "utf-8" incorrectly. UTF-8 is a specific way of encoding Unicode, and only applies to raw binary data. 
Python's str is an "idealized" Unicode text type; it has no encoding (under the hood, it's stored encoded as one of ASCII, latin-1, UCS-2, UCS-4, and possibly also UTF-8, but none of that is visible at the Python layer outside of indirect measurements like sys.getsizeof, which only hints at the underlying encoding by letting you see how much memory the string consumes). The characters you're talking about are simple Unicode characters above the ASCII range, they're not specifically UTF-8. | 17 | 21 |
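The difference is easy to see by lining up the three related predicates; isdecimal comes closest to what int() accepts per character, but the robust check is still to just try the parse:

```python
def describe(ch):
    return ch.isdecimal(), ch.isdigit(), ch.isnumeric()

print(describe('5'))   # (True, True, True)    int('5') works
print(describe('²'))   # (False, True, True)   int('²') raises ValueError
print(describe('½'))   # (False, False, True)  int('½') raises ValueError

def is_int(s):
    try:
        int(s)
        return True
    except ValueError:
        return False

print(is_int('²'), is_int('42'))  # False True
```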
69,270,727 | 2021-9-21 | https://stackoverflow.com/questions/69270727/how-to-solve-typeerror-the-json-object-must-be-str-bytes-or-bytearray-not-t | I cannot solve this. How can I fix: TypeError: the JSON object must be str, bytes or bytearray, not TextIOWrapper | You are using loads when you need load. json.load is for file-like objects, and json.loads is for strings. (You could also read the file's contents into a string and then parse that with json.loads, but there is no need to do that here.) | 8 | 22
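A minimal illustration of the load/loads split, using io.StringIO as a stand-in for an open file:

```python
import io
import json

text = '{"a": 1, "b": [2, 3]}'

parsed_from_str = json.loads(text)               # loads: str/bytes/bytearray
parsed_from_file = json.load(io.StringIO(text))  # load: file-like object

print(parsed_from_str == parsed_from_file)  # True
print(parsed_from_str)                      # {'a': 1, 'b': [2, 3]}
```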
69,213,098 | 2021-9-16 | https://stackoverflow.com/questions/69213098/python-aws-sqs-mocking-with-moto | I am trying to mock an AWS SQS with moto, below is my code from myClass import get_msg_from_sqs from moto import mock_sqs #from moto.sqs import mock_sqs @mock_sqs def test_get_all_msg_from_queue(): #from myClass import get_msg_from_sqs conn = boto3.client('sqs', region_name='us-east-1') queue = conn.create_queue(QueueName='Test') os.environ["SQS_URL"] = queue["QueueUrl"] queue.send_message( MessageBody=json.dumps({'a': '1', 'b': '2', 'c': '3'})) #Tried this as well #conn.send_message(QueueUrl=queue["QueueUrl"], MessageBody=json.dumps({'a': '1', 'b': '2', 'c': '3'})) resp = get_msg_from_sqs(queue["QueueUrl"]) assert resp is not None While executing this I am getting the following error > queue.send_message( MessageBody=json.dumps({'a': '1', 'b': '2', 'c': '3'})) E AttributeError: 'dict' object has no attribute 'send_message' If I try another way to send a message in SQS (see the commented-out code #Tried this as well), then at the time of the actual SQS call in my method get_msg_from_sqs, I get the error below: E botocore.exceptions.ClientError: An error occurred (InvalidAddress) when calling the ReceiveMessage operation: The address https://queue.amazonaws.com/ is not valid for this endpoint. I am running it on win10 with PyCharm and the moto version is set to moto = "^2.2.6" My code is given below sqs = boto3.client('sqs') def get_msg_from_queue(queue_url: str) -> dict: return sqs.receive_message(QueueUrl=queue_url, AttributeNames=['All'], MaxNumberOfMessages=1, VisibilityTimeout=3600, WaitTimeSeconds=0) What am I missing over here? | Your queue variable is a dict returned by create_queue: queue = conn.create_queue(QueueName='Test') It is not a queue, and thus you cannot call send_message on it.
To do that, you need to create a queue object: conn = boto3.client('sqs') sqs = boto3.resource('sqs') response = conn.create_queue(QueueName='Test') queue_url = response["QueueUrl"] queue = sqs.Queue(queue_url) queue.send_message() | 6 | 4 |
69,240,815 | 2021-9-19 | https://stackoverflow.com/questions/69240815/i-am-trying-to-importfrom-torchtext-legacy-data-import-field-bucketiterator-it | I am trying to execute the following code for a nlp proj import torchtext from torchtext.legacy.data import Field, BucketIterator, Iterator from torchtext.legacy import data ----> 6 from torchtext.legacy.data import Field, BucketIterator, Iterator 7 from torchtext.legacy import data 8 ModuleNotFoundError: No module named 'torchtext.legacy'. I have tried it on both kaggle notebook and jupyter notebook and found the same error in both. i even tried to install !pip install -qqq deepmatcher==0.1.1 in kaggle to solve the issue but it still gives the same error. is there any solution to this? | Before you import torchtext.legacy, you need to !pip install torchtext==0.10.0. Maybe legacy was removed in version 0.11.0. | 7 | 12 |
69,205,085 | 2021-9-16 | https://stackoverflow.com/questions/69205085/how-to-make-isort-always-produce-multi-line-output-when-there-are-multiple-impor | I'm currently using isort --profile=black --line-length=79 as a linter in my project for python files. This produces the Vertical Hanging Indent (mode 3 in isort's documentation kind of output: from third_party import ( lib1, lib2, lib3, lib4, ) This multiline mode only applies if the line is longer than 79 characters, though. Is there a mode that cause a multiline output as soon as there are two or more imports on the same line, no matter how long the line is? I tried hacking it with isort -m=3 --trailing-comma --line-length=1, but shorter line length will cause multiline output even when there is a single import, which I don't want: from third_party import ( lib1, ) | You should use the --force-grid-wrap 2 flag in the CLI or set in the settings file like pyproject.toml option force_grid_wrap = 2. This would force isort to produce multiline output for 2 or more imports, regardless of line length. More info about this option | 10 | 10 |
69,250,540 | 2021-9-20 | https://stackoverflow.com/questions/69250540/how-to-read-decode-secure-qr-code-on-indian-aadhaar-card-image | I am trying to extract the complete Aadhar number (12 digits) from the image of an Aadhar card (India) I am able to identify the region with QR code. To extract the info - I have been looking into python libraries that read and decode Secure QR codes on Indian Aadhaar cards. These 2 libraries seem particularly useful for this use case: pyaadhaar aadhaar-py I am unable to decode Secure QR code using them on Aadhaar cards. Information on Secure QR code is available here. Please recommend possible resolutions or some other methods to achieve this task Here is my code for decoding secure QR code using these libraries. Python version: 3.8 from pyaadhaar.utils import Qr_img_to_text, isSecureQr from pyaadhaar.deocde import AadhaarSecureQr from pyaadhaar.deocde import AadhaarOldQr qrData = Qr_img_to_text(sys.argv[1]) print(qrData) if len(qrData) == 0: print(" No QR Code Detected !!") else: isSecureQR = (isSecureQr(qrData[0])) if isSecureQR: print("Secure QR code") try: obj = AadhaarSecureQr(qrData[0]) except: print("Try aadhaar-py library") from aadhaar.qr import AadhaarSecureQR integer_scanned_from_qr = 123456 # secure_qr = AadhaarSecureQR(integer_scanned_from_qr) secure_qr = AadhaarSecureQR(int(qrData[0])) decoded_secure_qr_data = secure_qr.extract_data() print(decoded_secure_qr_data) Here are the issues I am facing with these libraries: pyaadhaar: Secure QR code decoding code, tries to convert base10 string to bytes and fails. NOTE: For Old QR Code format of Aadhaar card, pyaadhaar library works well, this issue only occurs for Secure QR code. 
Stacktrace below: File "/home/piyush/libs/py38/lib/python3.8/site-packages/pyaadhaar/deocde.py", line 23, in __init__ bytes_array = base10encodedstring.to_bytes(5000, 'big').lstrip(b'\x00') AttributeError: 'str' object has no attribute 'to_bytes' aadhaar-py: Secure QR decoding fails cause it is unable to validate integer received from QR code. Stacktrace below: Traceback (most recent call last): File "/home/piyush/libs/py38/lib/python3.8/site-packages/aadhaar/qr.py", line 55, in init self.decompressed_byte_array = zlib.decompress(self.byte_array, wbits=16+zlib.MAX_WBITS) zlib.error: Error -3 while decompressing data: incorrect header check During handling of the above exception, another exception occurred: Traceback (most recent call last): File "aadhaarQRCode.py", line 52, in secure_qr = AadhaarSecureQR(integer_scanned_from_qr) File "/home/piyush/libs/py38/lib/python3.8/site-packages/aadhaar/qr.py", line 57, in init raise MalformedIntegerReceived('Decompression failed, please send a valid integer received from QR code') aadhaar.exceptions.MalformedIntegerReceived: Decompression failed, please send a valid integer received from QR code | Thanks for posting the question. I am the author of aadhaar-py, the code raises an exception because the data passed to the lib cannot be parsed. It has to be of a certain type in order for it to be parsable. Please refer the following link for an example: https://uidai.gov.in/te/ecosystem-te/authentication-devices-documents-te/qr-code-reader-te.html If you scan the qr code present on the page and pass the data received to the lib, you'll receive the extracted data. P.S.: The Lib has been revamped with a new API. Be sure to check it out :) https://pypi.org/project/aadhaar-py/ | 8 | 1 |
69,239,403 | 2021-9-19 | https://stackoverflow.com/questions/69239403/type-hinting-parameters-with-a-sentinel-value-as-the-default | I currently use this strategy when I cannot assign default arguments in a function's signature and/or None already has meaning. from typing import Optional DEFAULT = object() # `None` already has meaning. def spam(ham: Optional[list[str]] = DEFAULT): if ham is DEFAULT: ham = ['prosciutto', 'jamon'] if ham is None: print('Eggs?') else: print(str(len(ham)) + ' ham(s).') Error: Failed (exit code: 1) (2607 ms) main.py:7: error: Incompatible default for argument "ham" (default has type "object", argument has type "Optional[List[str]]") Found 1 error in 1 file (checked 1 source file) How do I type-hint ham without getting errors in mypy? or What strategy should I use instead of DEFAULT = object()? | Something I like to do — which is only a slight variation on @Blckknght's answer — is to use a metaclass to give my sentinel class a nicer repr and make it always-falsey. sentinel.py from typing import Literal class SentinelMeta(type): def __repr__(cls) -> str: return f'<{cls.__name__}>' def __bool__(cls) -> Literal[False]: return False class Sentinel(metaclass=SentinelMeta): pass main.py from sentinel import Sentinel class DEFAULT(Sentinel): pass You use it in type hints exactly in the same way @Blckknght suggests: def spam(ham: list[str]|None|type[DEFAULT] = DEFAULT): ... But you have the added advantages that your sentinel value is always falsey and has a nicer repr: >>> DEFAULT <DEFAULT> >>> bool(DEFAULT) False | 12 | 6 |
69,265,924 | 2021-9-21 | https://stackoverflow.com/questions/69265924/cloud-run-flask-api-container-running-shutit-enters-a-sleep-loop | The issue has appeared recently and the previously healthy container now enters a sleep loop when a shutit session is being created. The issue occurs only on Cloud Run and not locally. Minimum reproducible code: requirements.txt Flask==2.0.1 gunicorn==20.1.0 shutit Dockerfile FROM python:3.9 # Allow statements and log messages to immediately appear in the Cloud Run logs ENV PYTHONUNBUFFERED True COPY requirements.txt ./ RUN pip install -r requirements.txt # Copy local code to the container image. ENV APP_HOME /myapp WORKDIR $APP_HOME COPY . ./ CMD exec gunicorn \ --bind :$PORT \ --worker-class "sync" \ --workers 1 \ --threads 1 \ --timeout 0 \ main:app main.py import os import shutit from flask import Flask, request app = Flask(__name__) # just to prove api works @app.route('/ping', methods=['GET']) def ping(): os.system('echo pong') return 'OK' # issue replication @app.route('/healthcheck', methods=['GET']) def healthcheck(): os.system("echo 'healthcheck'") # hangs inside create_session shell = shutit.create_session(echo=True, loglevel='debug') # never shell.send reached shell.send('echo Hello World', echo=True) # never returned return 'OK' if __name__ == '__main__': app.run(host='127.0.0.1', port=8080, debug=True) cloudbuild.yaml steps: - id: "build_container" name: "gcr.io/kaniko-project/executor:latest" args: - --destination=gcr.io/$PROJECT_ID/borked-service-debug:latest - --cache=true - --cache-ttl=99h - id: "configure infrastructure" name: "gcr.io/cloud-builders/gcloud" entrypoint: "bash" args: - "-c" - | set -euxo pipefail REGION="europe-west1" CLOUD_RUN_SERVICE="borked-service-debug" SA_NAME="$${CLOUD_RUN_SERVICE}@${PROJECT_ID}.iam.gserviceaccount.com" gcloud beta run deploy $${CLOUD_RUN_SERVICE} \ --service-account "$${SA_NAME}" \ --image gcr.io/${PROJECT_ID}/$${CLOUD_RUN_SERVICE}:latest \ --allow-unauthenticated \ 
--platform managed \ --concurrency 1 \ --max-instances 10 \ --timeout 1000s \ --cpu 1 \ --memory=1Gi \ --region "$${REGION}" cloud run logs that get looped: Setting up prompt In session: host_child, trying to send: export PS1_ORIGIN_ENV=$PS1 && PS1='OR''IGIN_ENV:rkkfQQ2y# ' && PROMPT_COMMAND='sleep .05||sleep 1' ================================================================================ Sending>>> export PS1_ORIGIN_ENV=$PS1 && PS1='OR''IGIN_ENV:rkkfQQ2y# ' && PROMPT_COMMAND='sleep .05||sleep 1'<<<, expecting>>>['\r\nORIGIN_ENV:rkkfQQ2y# ']<<< Sending in pexpect session (68242035994000): export PS1_ORIGIN_ENV=$PS1 && PS1='OR''IGIN_ENV:rkkfQQ2y# ' && PROMPT_COMMAND='sleep .05||sleep 1' Expecting: ['\r\nORIGIN_ENV:rkkfQQ2y# '] export PS1_ORIGIN_ENV=$PS1 && PS1='OR''IGIN_ENV:rkkfQQ2y# ' && PROMPT_COMMAND='sleep .05||sleep 1' root@localhost:/myapp# export PS1_ORIGIN_ENV=$PS1 && PS1='OR''IGIN_ENV:rkkfQQ2y# ' && PROMPT_COMMAND='sleep .05||sleep 1' Stopped sleep .05 Stopped sleep 1 pexpect: buffer: b'' before: b'cm9vdEBsb2NhbGhvc3Q6L3B1YnN1YiMgIGV4cx' after: b'DQpPUklHSU5fRU5WOnJra2ZRUTJ5IyA=' Resetting default expect to: ORIGIN_ENV:rkkfQQ2y# In session: host_child, trying to send: stty cols 65535 ================================================================================ Sending>>> stty cols 65535<<<, expecting>>>ORIGIN_ENV:rkkfQQ2y# <<< Sending in pexpect session (68242035994000): stty cols 65535 Expecting: ORIGIN_ENV:rkkfQQ2y# ORIGIN_ENV:rkkfQQ2y# stty cols 65535 stty cols 65535 Stopped stty cols 65535 Stopped sleep .05 Stopped sleep 1 Workarounds tried: Different regions: a few European(tier 1 and 2), Asia, US. 
Build with docker instead of kaniko Different CPU and Memory allocated to the container Minimum number of containers 1-5 (to ensure CPU is always allocated to the container) --no-cpu-throttling also made no difference Maximum number of containers 1-30 Different GCP project Different Docker base images (3.5-3.9 + various shas ranging from a year ago to recent ones) | I have reproduced your issue and we have discussed several possibilities; I think the issue is your Cloud Run not being able to process requests and hence preparing to shut down (SIGTERM). I am listing some possibilities for you to look at and analyse. A good reason for your Cloud Run service failing to start is that the server process inside the container is configured to listen on the localhost (127.0.0.1) address. This refers to the loopback network interface, which is not accessible from outside the container, and therefore the Cloud Run health check cannot be performed, causing the service deployment failure. To solve this, configure your application to start the HTTP server to listen on all network interfaces, commonly denoted as 0.0.0.0. While searching for the cloud logs error you are getting, I came across this answer and GitHub link from the shutit library developer, which points to a technique for tracking inputs and outputs in complex container builds in shutit sessions. One good finding from the GitHub link: I think you will have to pass the session_type in shutit.create_session('bash') or shutit.create_session('docker'), which you are not specifying in the main.py file. That can be the reason your shutit session is failing. Also, this issue could be due to some Linux kernel feature used by this shutit library which is not currently supported properly in gVisor. I am not sure how it was executed for you the first time. Most apps will work fine, or at least as well as in regular Docker, but may not provide 100% compatibility.
Cloud Run applications run on the gVisor container sandbox (which supports Linux only currently), which executes Linux kernel system calls made by your application in userspace. gVisor does not implement all system calls (see here). From this Github link: “If your app has such a system call (quite rare), it will not work on Cloud Run. Such an event is logged and you can use strace to determine when the system call was made in your app” If you're running your code on Linux, install and enable strace: sudo apt-get install strace Run your application with strace by prefacing your usual invocation with strace -f where -f means to trace all child threads. For example, if you normally invoke your application with ./main, you can run it with strace by invoking /usr/bin/strace -f ./main From this documentation: “if you feel your issue is caused by a limitation in the Container sandbox. In the Cloud Logging section of the GCP Console (not in the "Logs" tab of the Cloud Run section), you can look for Container Sandbox with a DEBUG severity in the varlog/system logs or use the Log Query: resource.type="cloud_run_revision" logName="projects/PROJECT_ID/logs/run.googleapis.com%2Fvarlog%2Fsystem" For example: Container Sandbox: Unsupported syscall setsockopt(0x3,0x1,0x6,0xc0000753d0,0x4,0x0)” By default, container instances have min-instances turned off, with a setting of 0. We can change this default using the Cloud Console, the gcloud command line, or a YAML file, by specifying a minimum number of container instances to be kept warm and ready to serve requests. You can also have a look at this documentation and GitHub link, which talk about the Cloud Run container runtime behaviour and troubleshooting, for reference. | 6 | 2 |
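On the 127.0.0.1 vs 0.0.0.0 point in the answer above, here is a minimal sketch of the correct binding, using the stdlib http.server purely for illustration (the question's gunicorn --bind :$PORT already does the equivalent; the handler and port fallback here are assumptions):

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class PingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"pong")

def make_server() -> HTTPServer:
    # Bind to all interfaces (0.0.0.0), not the loopback 127.0.0.1, so that
    # Cloud Run's ingress and health checks can reach the container.
    port = int(os.environ.get("PORT", "8080"))
    return HTTPServer(("0.0.0.0", port), PingHandler)
```

Calling make_server().serve_forever() then serves on every interface; binding to 127.0.0.1 instead reproduces the failure mode described above.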
69,224,969 | 2021-9-17 | https://stackoverflow.com/questions/69224969/how-to-avoid-to-start-hundreds-of-threads-when-starting-very-short-actions-at | I use this method to launch a few dozen (less than thousand) of calls of do_it at different timings in the future: import threading timers = [] while True: for i in range(20): t = threading.Timer(i * 0.010, do_it, [i]) # I pass the parameter i to function do_it t.start() timers.append(t) # so that they can be cancelled if needed wait_for_something_else() # this can last from 5 ms to 20 seconds The runtime of each do_it call is very fast (much less than 0.1 ms) and non-blocking. I would like to avoid spawning hundreds of new threads for such a simple task. How could I do this with only one additional thread for all do_it calls? Is there a simple way to do this with Python, without third party library and only standard library? | As I understand it, you want a single worker thread that can process submitted tasks, not in the order they are submitted, but rather in some prioritized order. This seems like a job for the thread-safe queue.PriorityQueue. from dataclasses import dataclass, field from threading import Thread from typing import Any from queue import PriorityQueue @dataclass(order=True) class PrioritizedItem: priority: int item: Any=field(compare=False) def thread_worker(q: PriorityQueue[PrioritizedItem]): while True: do_it(q.get().item) q.task_done() q = PriorityQueue() t = Thread(target=thread_worker, args=(q,)) t.start() while True: for i in range(20): q.put(PrioritizedItem(priority=i * 0.010, item=i)) wait_for_something_else() This code assumes you want to run forever. If not, you can add a timeout to the q.get in thread_worker, and return when the queue.Empty exception is thrown because the timeout expired. Like that you'll be able to join the queue/thread after all the jobs have been processed, and the timeout has expired. 
If you want to wait until some specific time in the future to run the tasks, it gets a bit more complicated. Here's an approach that extends the above approach by sleeping in the worker thread until the specified time has arrived, but be aware that time.sleep is only as accurate as your OS allows it to be. from dataclasses import astuple, dataclass, field from datetime import datetime, timedelta from time import sleep from threading import Thread from typing import Any from queue import PriorityQueue @dataclass(order=True) class TimedItem: when: datetime item: Any=field(compare=False) def thread_worker(q: PriorityQueue[TimedItem]): while True: when, item = astuple(q.get()) sleep_time = (when - datetime.now()).total_seconds() if sleep_time > 0: sleep(sleep_time) do_it(item) q.task_done() q = PriorityQueue() t = Thread(target=thread_worker, args=(q,)) t.start() while True: now = datetime.now() for i in range(20): q.put(TimedItem(when=now + timedelta(seconds=i * 0.010), item=i)) wait_for_something_else() To address this problem using only a single extra thread we have to sleep in that thread, so it's possible that new tasks with higher priority could come in while the worker is sleeping. In that case the worker would process that new high priority task after it's done with the current one. The above code assumes that scenario will not happen, which seems reasonable based on the problem description. If that might happen you can alter the sleep code to repeatedly poll if the task at the front of the priority queue has come due. The disadvantage with a polling approach like that is that it would be more CPU intensive. Also, if you can guarantee that the relative order of the tasks won't change after they've been submitted to the worker, then you can replace the priority queue with a regular queue.Queue to simplify the code somewhat. These do_it tasks can be cancelled by removing them from the queue. 
The above code was tested with the following mock definitions: def do_it(x): print(x) def wait_for_something_else(): sleep(5) An alternative approach that uses no extra threads would be to use asyncio, as pointed out by smcjones. Here's an approach using asyncio that calls do_it at specific times in the future by using loop.call_later: import asyncio def do_it(x): print(x) async def wait_for_something_else(): await asyncio.sleep(5) async def main(): loop = asyncio.get_event_loop() while True: for i in range(20): loop.call_later(i * 0.010, do_it, i) await wait_for_something_else() asyncio.run(main()) These do_it tasks can be cancelled using the handle returned by loop.call_later. This approach will, however, require either switching over your program to use asyncio throughout, or running the asyncio event loop in a separate thread. | 5 | 6 |
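One more standard-library option, not mentioned in the answer above: sched.scheduler also drives many delayed calls from a single worker thread. As with the queue approach, a callback that comes due while another is still running simply waits its turn:

```python
import sched
import threading
import time

results = []

def do_it(i):
    # stands in for the question's fast, non-blocking do_it
    results.append(i)

scheduler = sched.scheduler(time.monotonic, time.sleep)

for i in range(20):
    # enter() schedules do_it(i) after a relative delay, mirroring
    # threading.Timer(i * 0.010, do_it, [i]) from the question.
    scheduler.enter(i * 0.010, 1, do_it, argument=(i,))

# A single worker thread processes every scheduled event in due order.
worker = threading.Thread(target=scheduler.run)
worker.start()
worker.join()
# results is now [0, 1, ..., 19]
```

Pending events can be cancelled with scheduler.cancel(event), where event is the object returned by enter().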
69,263,078 | 2021-9-21 | https://stackoverflow.com/questions/69263078/pandas-dataframe-to-excel-cell-alignment | I noticed that for string in Dataframe will keep left align in Excel and for numerical value will keep right align in Excel. How do we set the desired alignment we wanted when exporting DataFrame to Excel? Example: Center Alignment df = pd.DataFrame({"colname1": ["a","b","c","d"], "colname2": [1,2,3,4]}) with pd.ExcelWriter("test.xlsx") as writer: df.to_excel( writer, index=False, header=False, ) | You can set the styles of the dataframe using the Styler object, which uses the same conventions as CSS. The documentation has a great primer on the different ways of styling your dataframes. For a simple solution to your example, you can set the desired alignment by first creating a function: def align_center(x): return ['text-align: center' for x in x] Then write it to Excel while applying the function you just defined: with pd.ExcelWriter("test.xlsx") as writer: df.style.apply(align_center, axis=0).to_excel( writer, index=False, header=False ) This will center-align the cells in the Excel file. For an exhaustive list of available text alignment options I would suggest the MDN docs. | 9 | 8 |
69,262,697 | 2021-9-21 | https://stackoverflow.com/questions/69262697/divide-group-data-base-on-select-columns-values | df ts_code type close 0 861001.TI 1 648.399 1 861001.TI 20 588.574 2 861001.TI 30 621.926 3 861001.TI 60 760.623 4 861001.TI 90 682.313 ... ... ... ... 8328 885933.TI 5 1083.141 8329 885934.TI 1 951.493 8330 885934.TI 5 1011.346 8331 885935.TI 1 1086.558 8332 885935.TI 5 1028.449 Goal ts_code l5d_close l20d_close …… l90d_close 861001.TI NaN 1.10 0.95 …… …… …… …… I want to groupby ts_code to calculate the close of type(1)/the close of type(N:5,20,30……). Take 861001.TI for example, l5d_close is nan because there is no value when the type is 5. l20d_close equals 648.399/588.574=1.10, l90d_close equals 648.399/682.313=0.95. And the result is rounded. Try df.groupby('ts_code')\ .pipe(lambda x: x[x.type==1].close/x[x.type==10].close) Got: KeyError: 'Column not found: False' The type values is: 1,5,20,30,60,90,180,200 Notice: There is one value of type columns for each ts_code | Use sort_values to make sure type == 1 is the first row per group and extract them with groupby.transform('first'): df = df.sort_values(['ts_code', 'type']) close1 = df.groupby('ts_code')['close'].transform('first') df['close'] = close1 / df['close'] # ts_code type close # 0 861001.TI 1 1.000000 # 1 861001.TI 20 1.101644 # 2 861001.TI 30 1.042566 # 3 861001.TI 60 0.852458 # ... ... ... ... Then pivot the type column into column headers: out = (df.pivot(index='ts_code', columns='type', values='close') .drop(columns=1) .add_prefix('l') .add_suffix('d_close')) # type l5d_close l20d_close l30d_close l60d_close l90d_close # ts_code # 861001.TI NaN 1.101644 1.042566 0.852458 0.950296 # ... ... ... ... ... ... 
To chain together, assign a ratio column before the pivot: (df.assign(ratio=df.groupby('ts_code').close.transform('first').div(df.close)) .pivot(index='ts_code', columns='type', values='ratio') .drop(columns=1) .add_prefix('l') .add_suffix('d_close')) # type l5d_close l20d_close l30d_close l60d_close l90d_close # ts_code # 861001.TI NaN 1.101644 1.042566 0.852458 0.950296 # ... ... ... ... ... ... | 5 | 5 |
69,234,978 | 2021-9-18 | https://stackoverflow.com/questions/69234978/how-to-visualize-gensim-word2vec-embeddings-in-tensorboard-projector | Following gensim word2vec embedding tutorial, I have trained a simple word2vec model: from gensim.test.utils import common_texts from gensim.models import Word2Vec model = Word2Vec(sentences=common_texts, size=100, window=5, min_count=1, workers=4) model.save("/content/word2vec.model") I would like to visualize it using the Embedding Projector in TensorBoard. There is another straightforward tutorial in gensim documentation. I did the following in Colab: !python3 -m gensim.scripts.word2vec2tensor -i /content/word2vec.model -o /content/my_model Traceback (most recent call last): File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/usr/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.7/dist-packages/gensim/scripts/word2vec2tensor.py", line 94, in <module> word2vec2tensor(args.input, args.output, args.binary) File "/usr/local/lib/python3.7/dist-packages/gensim/scripts/word2vec2tensor.py", line 68, in word2vec2tensor model = gensim.models.KeyedVectors.load_word2vec_format(word2vec_model_path, binary=binary) File "/usr/local/lib/python3.7/dist-packages/gensim/models/keyedvectors.py", line 1438, in load_word2vec_format limit=limit, datatype=datatype) File "/usr/local/lib/python3.7/dist-packages/gensim/models/utils_any2vec.py", line 172, in _load_word2vec_format header = utils.to_unicode(fin.readline(), encoding=encoding) File "/usr/local/lib/python3.7/dist-packages/gensim/utils.py", line 355, in any2unicode return unicode(text, encoding, errors=errors) UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte Please note that I did check first this exact same question from 2018 - but the accepted answer no longer works as both in gensim and tensorflow have been updated so I considered it was worth asking 
again in Q4 2021. | Saving the model in the original C word2vec implementation format resolves the issue; use model.wv.save_word2vec_format("/content/word2vec.model") instead of model.save(): from gensim.test.utils import common_texts from gensim.models import Word2Vec model = Word2Vec(sentences=common_texts, size=100, window=5, min_count=1, workers=4) model.wv.save_word2vec_format("/content/word2vec.model") There are two formats for storing word2vec models in gensim: the keyed-vector format from the original word2vec implementation, and a format that additionally stores hidden weights, vocabulary frequencies, and more. Examples and details can be found in the documentation. The script word2vec2tensor.py uses the original format and loads the model with load_word2vec_format: code. | 5 | 1 |
69,212,337 | 2021-9-16 | https://stackoverflow.com/questions/69212337/pandas-using-apply-lambda-with-two-different-operators | This question is very similar to one I posted before with just one change. Instead of doing just the absolute difference for all the columns I also want to find the magnitude difference for the 'Z' column, so if the current Z is 1.1x greater than prev then keep it. (more context to the problem) Pandas using the previous rank values to filter out current row df = pd.DataFrame({ 'rank': [1, 1, 2, 2, 3, 3], 'x': [0, 3, 0, 3, 4, 2], 'y': [0, 4, 0, 4, 5, 5], 'z': [1, 3, 1.2, 3.25, 3, 6], }) print(df) # rank x y z # 0 1 0 0 1.00 # 1 1 3 4 3.00 # 2 2 0 0 1.20 # 3 2 3 4 3.25 # 4 3 4 5 3.00 # 5 3 2 5 6.00 Here's what I want the output to be output = pd.DataFrame({ 'rank': [1, 1, 2, 3], 'x': [0, 3, 0, 2], 'y': [0, 4, 0, 5], 'z': [1, 3, 1.2, 6], }) print(output) # rank x y z # 0 1 0 0 1.0 # 1 1 3 4 3.0 # 2 2 0 0 1.2 # 5 3 2 5 6.00 Basically, what I want to happen is if the previous rank has any rows with x, y (+- 1 both ways) AND z (<1.1z) to remove it. So for the rows rank 1 ANY rows in rank 2 that have any combo of x = (-1-1), y = (-1-1), z= (<1.1) OR x = (2-5), y = (3-5), z= (<3.3) I want it to be removed
not to be dropped) #return pd.Series(False, index=d.index) return pd.Series(False, index=d.index) else: # get previous group (rank-1) d_prev = groups.get_group(rank-1) # get the absolute difference per row with the whole dataset # of the previous group: abs(d_prev-s) # if differences in x and y are within 1 and z < 1.1*x # for at least one row of the previous group # then flag the row to be dropped (True) return d.apply(lambda s: (abs(d_prev-s)[['x', 'y']].le([1,1]).all(1)& (s['z']<1.1*d_prev['x']-DELTA)).any(), axis=1) tests, >>> df = pd.DataFrame({ 'rank': [1, 1, 2, 2, 3, 3], 'x': [0, 3, 0, 3, 4, 2], 'y': [0, 4, 0, 4, 5, 5], 'z': [1, 3, 1.2, 3.25, 3, 6], }) >>> df rank x y z 0 1 0 0 1.00 1 1 3 4 3.00 2 2 0 0 1.20 3 2 3 4 3.25 4 3 4 5 3.00 5 3 2 5 6.00 >>> groups = df.groupby('rank') >>> mask = pd.concat([check_previous_group(rank, d, groups) for rank,d in groups]) >>> df[~mask] rank x y z 0 1 0 0 1.0 1 1 3 4 3.0 2 2 0 0 1.2 5 3 2 5 6.0 >>> df = pd.DataFrame({ 'rank': [1, 1, 2, 2, 3, 3], 'x': [0, 3, 0, 3, 4, 2], 'y': [0, 4, 0, 4, 5, 5], 'z': [1, 3, 1.2, 3.3, 3, 6], }) >>> df rank x y z 0 1 0 0 1.0 1 1 3 4 3.0 2 2 0 0 1.2 3 2 3 4 3.3 4 3 4 5 3.0 5 3 2 5 6.0 >>> groups = df.groupby('rank') >>> mask = pd.concat([check_previous_group(rank, d, groups) for rank,d in groups]) >>> df[~mask] rank x y z 0 1 0 0 1.0 1 1 3 4 3.0 2 2 0 0 1.2 3 2 3 4 3.3 5 3 2 5 6.0 | 6 | 1 |
69,262,618 | 2021-9-21 | https://stackoverflow.com/questions/69262618/why-is-this-code-able-to-use-the-sklearn-function-without-import-sklearn | So I just watched a tutorial that the author didn't need to import sklearn when using predict function of pickled model in anaconda environment (sklearn installed). I have tried to reproduce the minimal version of it in Google Colab. If you have a pickled-sklearn-model, the code below works in Colab (sklearn installed): import pickle model = pickle.load(open("model.pkl", "rb"), encoding="bytes") out = model.predict([[20, 0, 1, 1, 0]]) print(out) I realized that I still need the sklearn package installed. If I uninstall the sklearn, the predict function now is not working: !pip uninstall scikit-learn import pickle model = pickle.load(open("model.pkl", "rb"), encoding="bytes") out = model.predict([[20, 0, 1, 1, 0]]) print(out) the error: WARNING: Skipping scikit-learn as it is not installed. --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-1-dec96951ae29> in <module>() 1 get_ipython().system('pip uninstall scikit-learn') 2 import pickle ----> 3 model = pickle.load(open("model.pkl", "rb"), encoding="bytes") 4 out = model.predict([[20, 0, 1, 1, 0]]) 5 print(out) ModuleNotFoundError: No module named 'sklearn' So, how does it work? as far as I understand pickle doesn't depend on scikit-learn. Does the serialized model do import sklearn? Why can I use predict function without import scikit learn in the first code? | There's a few questions being asked here, so let's go through them one by one: So, how does it work? as far as I understand pickle doesn't depend on scikit-learn. There is nothing particular to scikit-learn going on here. Pickle will exhibit this behaviour for any module. 
Here's an example with Numpy: will@will-desktop ~ $ python Python 3.9.6 (default, Aug 24 2021, 18:12:51) [GCC 9.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import pickle >>> import sys >>> 'numpy' in sys.modules False >>> import numpy >>> 'numpy' in sys.modules True >>> pickle.dumps(numpy.array([1, 2, 3])) b'\x80\x04\x95\xa0\x00\x00\x00\x00\x00\x00\x00\x8c\x15numpy.core.multiarray\x94\x8c\x0c_reconstruct\x94\x93\x94\x8c\x05numpy\x94\x8c\x07ndarray\x94\x93\x94K\x00\x85\x94C\x01b\x94\x87\x94R\x94(K\x01K\x03\x85\x94h\x03\x8c\x05dtype\x94\x93\x94\x8c\x02i8\x94\x89\x88\x87\x94R\x94(K\x03\x8c\x01<\x94NNNJ\xff\xff\xff\xffJ\xff\xff\xff\xffK\x00t\x94b\x89C\x18\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x94t\x94b.' >>> exit() So far what I've done is show that in a fresh Python process 'numpy' is not in sys.modules (the dict of imported modules). Then we import Numpy, and pickle a Numpy array. Then in a new Python process shown below, we see that before we unpickle the array Numpy has not been imported, but after we have, Numpy has been imported. will@will-desktop ~ $ python Python 3.9.6 (default, Aug 24 2021, 18:12:51) [GCC 9.3.0] on linux Type "help", "copyright", "credits" or "license" for more information.
>>> import pickle >>> import sys >>> 'numpy' in sys.modules False >>> pickle.loads(b'\x80\x04\x95\xa0\x00\x00\x00\x00\x00\x00\x00\x8c\x15numpy.core.multiarray\x94\x8c\x0c_reconstruct\x94\x93\x94\x8c\x05numpy\x94\x8c\x07ndarray\x94\x93\x94K\x00\x85\x94C\x01b\x94\x87\x94R\x94(K\x01K\x03\x85\x94h\x03\x8c\x05dtype\x94\x93\x94\x8c\x02i8\x94\x89\x88\x87\x94R\x94(K\x03\x8c\x01<\x94NNNJ\xff\xff\xff\xffJ\xff\xff\xff\xffK\x00t\x94b\x89C\x18\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x94t\x94b.') array([1, 2, 3]) >>> 'numpy' in sys.modules True >>> numpy Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'numpy' is not defined Despite being imported, however, numpy is still not a defined variable name. Imports in Python are global, but an import will only update the namespace of the module that actually did the import. If we want to access numpy we still need to write import numpy, but since Numpy was already imported elsewhere in the process this will not re-run Numpy's module initialization code. Instead it will create a numpy variable in our module's globals dictionary, and make it a reference to the Numpy module object that existed beforehand, and could be accessed through sys.modules['numpy']. So what is Pickle doing here? It embeds the information about what module was used to define whatever it is pickling within the pickle. Then when it unpickles something, it uses that information to import the module such that it can use the unpickle method of the class. If we look at the source code for the Pickle module, we can see that's what's happening: In the _Pickler we see the save method uses the save_global method. This in turn uses the whichmodule function to obtain the module name ('scikit-learn', in your case), which is then saved in the pickle. In the _UnPickler we see the find_class method uses __import__ to import the module using the stored module name.
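The stored module path is easy to observe directly: the qualified name sits in the pickle byte stream as plain bytes, just as numpy.core.multiarray is visible in the dump above (a small stdlib demonstration; collections.OrderedDict stands in for the sklearn model):

```python
import collections
import pickle

data = pickle.dumps(collections.OrderedDict(a=1))

# The class is pickled by reference: the stream embeds the module and
# class name, which the unpickler later hands to find_class/__import__.
assert b"collections" in data
assert b"OrderedDict" in data
```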
The find_class method is used in a few of the load_* methods, such as load_inst, which is what would be used to load an instance of a class, such as your model instance: def load_inst(self): module = self.readline()[:-1].decode("ascii") name = self.readline()[:-1].decode("ascii") klass = self.find_class(module, name) self._instantiate(klass, self.pop_mark()) The documentation for Unpickler.find_class explains: Import module if necessary and return the object called name from it, where the module and name arguments are str objects. The docs also explain how you can restrict this behaviour: [You] may want to control what gets unpickled by customizing Unpickler.find_class(). Unlike its name suggests, Unpickler.find_class() is called whenever a global (i.e., a class or a function) is requested. Thus it is possible to either completely forbid globals or restrict them to a safe subset. Though this is generally only relevant when unpickling untrusted data, which doesn't appear to be the case here. Does the serialized model do import sklearn? The serialized model itself doesn't do anything, strictly speaking. It's all handled by the Pickle module as described above. Why can I use predict function without import scikit learn in the first code? Because sklearn is imported by the Pickle module when it unpickles the data, thereby providing you with a fully realized model object. It's just like if some other module imported sklearn, created the model object, and then passed it into your code as a parameter to a function. As a consequence of all this, in order to unpickle your model you'll need to have sklearn installed - ideally the same version that was used to create the pickle. In general the Pickle module stores the fully qualified path of any required module, so the Python process that pickles the object and the one that unpickles the object must have all [1] required modules exist with the same fully qualified names. 
[1] A caveat to that is that the Pickle module can automatically adjust/fix certain imports for particular modules/classes that have different fully qualified names between Python 2 and 3. From the docs: If fix_imports is true, pickle will try to map the old Python 2 names to the new names used in Python 3. | 7 | 11 |
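For completeness, here is a sketch of the find_class override the docs describe, restricting unpickling to a whitelist (the safe set chosen here is arbitrary; anything else, sklearn included, is refused instead of imported):

```python
import builtins
import io
import pickle

SAFE_BUILTINS = {"range", "complex", "min", "max", "list", "set", "frozenset"}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Only allow a small whitelist of builtins; refuse every other
        # global instead of importing its module.
        if module == "builtins" and name in SAFE_BUILTINS:
            return getattr(builtins, name)
        raise pickle.UnpicklingError(f"global '{module}.{name}' is forbidden")

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

As the answer notes, this matters mainly for untrusted data; for your own model pickles the default behaviour is what you want.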
69,262,518 | 2021-9-21 | https://stackoverflow.com/questions/69262518/problem-with-simpleeval-installationuse-2to3-invalid | We are using poetry to upgrade packages and deploy to our servers but some issue is stopping us from deploying our work continuously to our servers.The code below is the stacktrack where our code stops. $ poetry update Creating virtualenv kpbackend-ad2VTdyQ-py3.9 in /root/.cache/pypoetry/virtualenvs Updating dependencies Resolving dependencies... Writing lock file Package operations: 171 installs, 0 updates, 0 removals • Installing shortuuid (1.0.1) • Installing simpleeval (0.9.10) • Installing starkbank-ecdsa (1.1.1) • Installing starlette (0.13.8) EnvCommandError Command ['/root/.cache/pypoetry/virtualenvs/kpbackend-ad2VTdyQ-py3.9/bin/pip', 'install', '--no-deps', '/root/.cache/pypoetry/artifacts/49/c9/a8/c45627062eb893ac0685ce1146f6b868eea117d5803fc63c56d21326de/simpleeval-0.9.10.tar.gz'] errored with the following return code 1, and output: Processing /root/.cache/pypoetry/artifacts/49/c9/a8/c45627062eb893ac0685ce1146f6b868eea117d5803fc63c56d21326de/simpleeval-0.9.10.tar.gz ERROR: Command errored out with exit status 1: command: /root/.cache/pypoetry/virtualenvs/kpbackend-ad2VTdyQ-py3.9/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-408xzy97/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-408xzy97/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-lfjp6ggy cwd: /tmp/pip-req-build-408xzy97/ Complete output (1 lines): error in simpleeval setup command: use_2to3 is invalid. 
---------------------------------------- WARNING: Discarding file:///root/.cache/pypoetry/artifacts/49/c9/a8/c45627062eb893ac0685ce1146f6b868eea117d5803fc63c56d21326de/simpleeval-0.9.10.tar.gz. Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. at /usr/local/lib/python3.9/site-packages/poetry/utils/env.py:1180 in _run 1176│ output = subprocess.check_output( 1177│ cmd, stderr=subprocess.STDOUT, **kwargs 1178│ ) 1179│ except CalledProcessError as e: → 1180│ raise EnvCommandError(e, input=input_) 1181│ 1182│ return decode(output) 1183│ 1184│ def execute(self, bin, *args, **kwargs): Cleaning up file-based variables ERROR: Job failed: exit code 1 I am suspecting some package update broke it as 3 days ago this step worked fine in the Gitlab CI. | I encountered the same error some time ago. The issue seems to be connected with the latest upgrade of setuptools package (https://setuptools.pypa.io/en/latest/history.html#v58-0-0). The workaround that is working for me is to use setuptools<=57.5.0. | 4 | 8 |
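One way to apply that workaround in CI is to pin setuptools inside the job before Poetry resolves dependencies. A hypothetical .gitlab-ci.yml fragment (the job name and poetry invocation are assumptions based on the question):

```yaml
deploy:
  before_script:
    # setuptools 58.0.0 removed use_2to3 support, which breaks building
    # simpleeval 0.9.10 from sdist; pin below that until a fixed release.
    - pip install "setuptools<=57.5.0"
  script:
    - poetry update
```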
69,190,210 | 2021-9-15 | https://stackoverflow.com/questions/69190210/django-django-rest-how-do-i-save-user-device-to-prevent-tedious-2fa-on-every-log | Hello, I have been working with Django Rest Framework with JWT as the authentication framework, and I successfully made a two-factor authentication login based on email OTP. One thing I want to improve is the login flow: I want to save the user's device so that repeated 2FA (two-factor authentication) can be minimized. Here is the relevant part of the code I use for sending an OTP to the user's email. serializers.py class UserLoginSerializer(serializers.Serializer): email = serializers.EmailField() password = PasswordField() views.py class UserLoginView(generics.CreateWithMessageAPIView): """ Use this end-point to get login for user """ message = _('Please check your email for 6 digit OTP code.') serializer_class = serializers.UserLoginSerializer def perform_create(self, serializer): usecases.UserLoginWithOTPUseCase(self.request, serializer=serializer).execute() usecases.py class UserLoginWithOTPUseCase(CreateUseCase, OTPMixin): def __init__(self, request, serializer): self._request = request super().__init__(serializer) def execute(self): self._factory() def _factory(self): credentials = { 'username': self._data['email'], 'password': self._data['password'] } self._user = authenticate(self._request, **credentials) if self._user is not None: """ Sends email confirmation mail to the user's email :return: None """ code = self._generate_totp( user=self._user, purpose='2FA', interval=180 ) EmailVerificationEmail( context={ 'code': code, 'uuid': self._user.id } ).send(to=[self._user.email]) else: raise PermissionDenied( { 'authentication_error': _('User name or password not matched') } ) I am confused about how I can allow or save a device to prevent repetitive 2FA. | TLDR; At a very high level: tokenize the OTP (exchange the OTP for a JWT).
Explanation A JWT is nothing more than a signed JSON payload with some standardized fields: exp (expiration), nbf (not before), etc. The signature (symmetric or asymmetric) assures integrity, authenticity, and non-repudiation (e.g. the token has been issued by whom we think and has not been altered). Within a JWT payload you can put anything (keeping in mind that the value is not encrypted), including data required for your application logic. You can also exchange different JWTs during different authentication steps. Some examples Deferred 2FA User gives you <username, password>; if valid it receives a type_1_jwt if 2FA is enabled, or else a type_2_jwt User gives you <type_1_jwt, otp>; if both are valid it receives a type_2_jwt Your app accepts only type_2_jwt on all other endpoints Here you have the flexibility to request the OTP after initial user authentication (or even if a suspicious device is detected). If the OTP is required, the user does not have to enter the password again, but also the password is never stored anywhere nor re-submitted in any future request. The security level is kept the same as <username, password, otp> simultaneous authentication. Renew Token User authenticates and receives a renew_jwt (expires after a long period or never) User exchanges the renew_jwt for an auth_jwt (fast expiring) Your app accepts only auth_jwt on all other endpoints This solution allows for great flexibility if you want the authentication to be revoked or altered (altering roles). You can have an auth_jwt that is used within your entire application but expires very fast; then you have a renew_jwt that has already been traded for proper authentication and can be used to automatically request an auth_jwt. This request will be met by the system only if certain criteria are met (for example, authentication has not been revoked). Mixing all together User gives you <username, password, otp>; it receives an auth_jwt (or a renew_jwt) and an otp_jwt.
User gives you <username, password, otp_jwt> Your app accepts only auth_jwt on all other endpoints After the first authentication the user device receives an otp_jwt (with an arbitrarily long duration) that can be stored on the device (cookies, appStorage, secureStorage, ...). This otp_jwt can be used at any future moment (together with username and password) to authenticate again. Conceptually, the otp_jwt token is secure proof that the user has successfully authenticated with an OTP in the past. Assuming the device can be considered reasonably safe, such a strategy does not significantly increase the risk of unauthorized access. But keep in mind that you should always give the user an option to completely log out, e.g. to clear this token too. | 5 | 3 |
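The "Deferred 2FA" exchange described in the answer above can be sketched end-to-end. This is a minimal illustration, not the answer's implementation: an HMAC-signed payload stands in for a real JWT library, and all names here (type_1/type_2, TTLs, SECRET) are assumptions for the sketch.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"  # assumption: a shared HMAC key stands in for real JWT signing

def sign(payload):
    # Encode the payload and append an HMAC signature (a stand-in for a JWT library)
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify(token):
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("bad signature")
    payload = json.loads(base64.urlsafe_b64decode(body))
    if payload["exp"] < time.time():
        raise PermissionError("token expired")
    return payload

def issue(user_id, token_type, ttl_seconds):
    return sign({"sub": user_id, "typ": token_type, "exp": time.time() + ttl_seconds})

def exchange_for_auth(type_1_token, otp_is_valid):
    # Step 2 of "Deferred 2FA": accept only the intermediate token plus a valid OTP
    claims = verify(type_1_token)
    if claims["typ"] != "type_1" or not otp_is_valid:
        raise PermissionError("2FA step failed")
    return issue(claims["sub"], "type_2", ttl_seconds=900)

# Step 1: after the username/password check succeeds, issue a short-lived type_1_jwt
t1 = issue("alice", "type_1", ttl_seconds=180)
# Step 2: after the OTP check succeeds, trade it for the real auth token
t2 = exchange_for_auth(t1, otp_is_valid=True)
print(verify(t2)["typ"])  # type_2
```

The key design point from the answer is visible here: the password is only needed in step 1, and all other endpoints would accept only tokens whose "typ" claim is "type_2".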
69,254,006 | 2021-9-20 | https://stackoverflow.com/questions/69254006/tuple-with-multiple-numbers-of-arbitrary-but-equal-type | Currently, I am checking for tuples with multiple (e.g. three) numbers of arbitrary but equal type in the following form: from typing import Tuple, Union Union[Tuple[int, int, int], Tuple[float, float, float]] I want to make this check more generic, also allowing numpy number types. I.e. I tried to use numbers.Number: from numbers import Number from typing import Tuple Tuple[Number, Number, Number] The above snippet also allows tuples of mixed types as long as everything is a number. I'd like to restrict the tuple to numbers of equal type. How can this be achieved? Technically, this question applies to Python and the type hints specification itself. However, as pointed out in the comments, its handling is implementation specific, i.e. MyPy will not catch every edge case and/or inconsistency. Personally, I am using run-time checks with typeguard for testing and deactivate them entirely in production. | You can use TypeVar with the bound argument. It allows restricting types to subtypes of a given type. In your case, the types should be subtypes of Number: from numbers import Number from typing import TypeVar T = TypeVar('T', bound=Number) Tuple[T, T, T] Why does it work? TypeVar is a variable that allows you to use a particular type several times in type signatures. The simplest example: from typing import TypeVar, Tuple T = TypeVar('T') R = TypeVar('R') def swap(x: T, y: R) -> Tuple[R, T]: return y, x The static type checker will infer that arguments of the function swap should be the same as outputs in reversed order (note that the return type T will be the same as the input type T).
In this answer, we use another kind of restriction with a keyword bound – it checks if the values of the type variable are subtypes of a given type (in our case, Number, which is a base class for all numerical types in Python). More examples: mypy documentation and PEP 484. Working test: from numbers import Number from typing import Tuple, TypeVar import numpy as np from typeguard import typechecked T = TypeVar('T', bound=Number) @typechecked def foo(data: Tuple[T, T, T]): print('expected OK:', data) for data in ( (1, 2, 3), # ok (1.0, 2.0, 3.0), # ok (1, 2.0, 3), # TypeError (1.0, 2.0, 3), # TypeError (1.0, 2.0, np.float64(3.0)), # ok (yes!) (1, 2.0, np.float32(3)), # TypeError (1, 2.0, np.uint8(3)), # TypeError ): try: foo( data ) except TypeError: print('expected TypeError:', data) | 11 | 9 |
69,260,592 | 2021-9-20 | https://stackoverflow.com/questions/69260592/construct-graph-connectivity-matrices-in-coo-format | I have faced the following subtask while working with graph data: I need to construct graph connectivity matrices in COO format for graphs with several fully-connected components from arrays of "border" indices. As an example, given array borders = [0, 2, 5] the resulting COO matrix should be coo_matrix = [[0, 0, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4], [0, 1, 0, 1, 2, 3, 4, 2, 3, 4, 2, 3, 4]]. That is, borders array contains ranges of nodes that should form fully-connected subgraphs (starting index included and ending index excluded). I came up with the following algorithm, but I suspect that the perfomance could be improved: import numpy as np def get_coo(borders): edge_list = [] for s, e in zip(borders, borders[1:]): # create fully-connected subgraph arr = np.arange(s, e) t = np.array(np.meshgrid(arr, arr)).T.reshape(-1, 2) t = t.T edge_list.append(t) edge_list = np.concatenate(edge_list, axis=1) return edge_list I feel there may be a faster solution, maybe using some numpy vectorized operations. Does anyone have any ideas? | Since your goal is a faster solution than what you have, you can explore itertools for solving this efficiently. This approach benchmarks approximately 25 times faster than your current approach as tested on larger border lists. import numpy as np from itertools import product, chain def get_coo(borders): edges = chain(*[product(range(*i),repeat=2) for i in zip(borders, borders[1:])]) return list(edges) output = get_coo(borders) ## NOTE: Remember to can convert to array and Transpose for all approaches below to get Coo format. 
np.array(output).T array([[0, 0, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4], [0, 1, 0, 1, 2, 3, 4, 2, 3, 4, 2, 3, 4]]) Alternate approaches and benchmarks: Note: These have been benchmarked on your current small list as well as on a larger border list as generated by borders = np.arange(300)[np.random.randint(0,2,(300,),dtype=bool)] Disjoint union of complete graphs What you are trying to do is essentially combine disjoint complete graphs. Adjacency matrices for such graphs have complete connections for selective items along its diagonal. You can use networkx to solve these. While slower than your current solution, you will find that working on these graph objects would be much easier and rewarding than using NumPy to represent graphs. Approach 1: Assuming that nodes are in sequence, calculate the number of nodes in each subgraph as i Create a complete matrix filled with 1s of the shape i*i Combine the graphs using nx.disjoint_union_all Fetch the edges of this graph import numpy as np import networkx as nx def get_coo(borders): graphs = [nx.from_numpy_matrix(np.ones((i,i))).to_directed() for i in np.diff(borders)] edges = nx.disjoint_union_all(graphs).edges() return edges %timeit get_coo(borders) #Small- 277 µs ± 33.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) #Large- 300 ms ± 36.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) Approach 2: Iterate over the rolling 2-gram tuples of the borders using zip Create nx.complete_graph using range(start, end) from these tuples Combine the graphs using nx.disjoint_union_all Fetch the edges of this graph import numpy as np import networkx as nx def get_coo(borders): graphs = [nx.complete_graph(range(*i),nx.DiGraph()) for i in zip(borders, borders[1:])] edges = nx.disjoint_union_all(graphs).edges() return edges %timeit get_coo(borders) #Small- 116 µs ± 13.4 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) #Large- 207 ms ± 35.5 ms per loop (mean ± std. dev. 
of 7 runs, 1 loop each) Output is a bit faster than the previous one but lacks the self-loops on the nodes, which would have to be added separately. Using itertools.product Approach 3: Iterate over the rolling 2-gram tuples of the borders using zip Use itertools.product to build a completely connected edge list for each subgraph Use itertools.chain to "append" the two iterators Return them as edges import numpy as np from itertools import product, chain def get_coo(borders): edges = chain(*[product(range(*i),repeat=2) for i in zip(borders, borders[1:])]) return list(edges) %timeit get_coo(borders) #Small- 3.91 µs ± 787 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) #Large- 183 µs ± 21.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) This approach is approximately 25 times faster than your current approach. Your current approach - benchmark def get_coo(borders): edge_list = [] for s, e in zip(borders, borders[1:]): # create fully-connected subgraph arr = np.arange(s, e) t = np.array(np.meshgrid(arr, arr)).T.reshape(-1, 2) t = t.T edge_list.append(t) edge_list = np.concatenate(edge_list, axis=1) return edge_list %timeit get_coo(borders) #Small- 95.1 µs ± 10.8 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) #Large- 3.91 ms ± 962 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) | 5 | 5 |
69,260,530 | 2021-9-20 | https://stackoverflow.com/questions/69260530/panel-data-regression-with-fixed-effects-using-python | I have the following panel stored in df: state district year y constant x1 x2 time 0 01 01001 2009 12 1 0.956007 639673 1 1 01 01001 2010 20 1 0.972175 639673 2 2 01 01001 2011 22 1 0.988343 639673 3 3 01 01002 2009 0 1 0 33746 1 4 01 01002 2010 1 1 0.225071 33746 2 5 01 01002 2011 5 1 0.450142 33746 3 6 01 01003 2009 0 1 0 45196 1 7 01 01003 2010 5 1 0.427477 45196 2 8 01 01003 2011 9 1 0.854955 45196 3 y is the number of protests in each district constant is a column full of ones x1 is the proportion of the district's area covered by a mobile network provider x2 is the population count in each district (note that it is fixed in time) How can I run the following model in Python? Here's what I tried # Transform `x2` to match model df['x2'] = df['x2'].multiply(df['time'], axis=0) # District fixed effects df['delta'] = pd.Categorical(df['district']) # State-time fixed effects df['eta'] = pd.Categorical(df['state'] + df['year'].astype(str)) # Set indexes df.set_index(['district','year']) from linearmodels.panel import PanelOLS m = PanelOLS(dependent=df['y'], exog=df[['constant','x1','x2','delta','eta']]) ValueError: exog does not have full column rank. If you wish to proceed with model estimation irrespective of the numerical accuracy of coefficient estimates, you can set rank_check=False. What am I doing wrong? | I dug around the documentation and the solution turned out to be quite simple. After setting the indexes and turning the fixed effect columns to pandas.Categorical types (see question above): # Import model from linearmodels.panel import PanelOLS # Model m = PanelOLS(dependent=df['y'], exog=df[['constant','x1','x2']], entity_effects=True, time_effects=False, other_effects=df['eta']) m.fit(cov_type='clustered', cluster_entity=True) That is, DO NOT pass your fixed effect columns to exog. 
You should pass them to entity_effects (boolean), time_effects (boolean) or other_effects (pandas.Categorical). | 7 | 9 |
69,274,391 | 2021-9-21 | https://stackoverflow.com/questions/69274391/how-to-convert-tokenized-words-back-to-the-original-ones-after-inference | I'm writing a inference script for already trained NER model, but I have trouble with converting encoded tokens (their ids) into original words. # example input df = pd.DataFrame({'_id': [1], 'body': ['Amazon and Tesla are currently the best picks out there!']}) # calling method that handles inference: ner_model = NER() ner_model.recognize_from_df(df, 'body') # here is only part of larger NER class that handles the inference: def recognize_from_df(self, df: pd.DataFrame, input_col: str): predictions = [] df = df[['_id', input_col]].copy() dataset = Dataset.from_pandas(df) # tokenization, padding, truncation: encoded_dataset = dataset.map(lambda examples: self.bert_tokenizer(examples[input_col], padding='max_length', truncation=True, max_length=512), batched=True) encoded_dataset.set_format(type='torch', columns=['input_ids', 'attention_mask'], device=device) dataloader = torch.utils.data.DataLoader(encoded_dataset, batch_size=32) encoded_dataset_ids = encoded_dataset['_id'] for batch in dataloader: output = self.model(**batch) # decoding predictions and tokens for i in range(batch['input_ids'].shape[0]): tags = [self.unique_labels[label_id] for label_id in output[i]] tokens = [t for t in self.bert_tokenizer.convert_ids_to_tokens(batch['input_ids'][i]) if t != '[PAD]'] ... The results are close to what I need: # tokens: ['[CLS]', 'am', '##az', '##on', 'and', 'te', '##sla', 'are', 'currently', 'the', 'best', 'picks', 'out', 'there', ...] # tags: ['X', 'B-COMPANY', 'X', 'X', 'O', 'B-COMPANY', 'X', 'O', 'O', 'O', 'O', 'O', 'O', 'O', ...] How to combine 'am', '##az', '##on' and 'B-COMPANY', 'X', 'X' into one token/tag? I know that there is a method called convert_tokens_to_string in Tokenizer, but it returns just one big string, which is hard to map to tag. 
Regards | Provided you only want to "merge" company names, one could do that in linear time with pure Python. Skipping the beginning-of-sentence token [CLS] for brevity: tokens = tokens[1:] tags = tags[1:] The function below will merge company tokens and advance the pointer appropriately: def merge_company(tokens, tags): generated_tokens = [] i = 0 while i < len(tags): if tags[i] == "B-COMPANY": company_token = [tokens[i]] for j in range(i + 1, len(tags)): i += 1 if tags[j] != "X": break else: company_token.append(tokens[j][2:]) generated_tokens.append("".join(company_token)) else: generated_tokens.append(tokens[i]) i += 1 return generated_tokens Usage is pretty simple; please note that the tags need their Xs removed as well, though: tokens = merge_company(tokens, tags) tags = [tag for tag in tags if tag != "X"] This would give you: ['amazon', 'and', 'tesla', 'are', 'currently', 'the', 'best', 'picks', 'out', 'there'] ['B-COMPANY', 'O', 'B-COMPANY', 'O', 'O', 'O', 'O', 'O', 'O', 'O'] | 6 | 2 |
69,270,836 | 2021-9-21 | https://stackoverflow.com/questions/69270836/whats-the-point-of-using-object-instance-self | I was checking the code of the toolz library's groupby function in Python and I found this: def groupby(key, seq): """ Group a collection by a key function """ if not callable(key): key = getter(key) d = collections.defaultdict(lambda: [].append) for item in seq: d[key(item)](item) rv = {} for k, v in d.items(): rv[k] = v.__self__ return rv Is there any reason to use rv[k] = v.__self__ instead of rv[k] = v? | This is a somewhat confusing trick to save a small amount of time: We are creating a defaultdict with a factory function that returns a bound append method of a new list instance with [].append. Then we can just do d[key(item)](item) instead of d[key(item)].append(item) like we would if we created a defaultdict that contains lists. If we don't look up append every time, we gain a small amount of time. But now the dict contains bound methods instead of the lists, so we have to get the original list instance back via __self__.
69,276,393 | 2021-9-21 | https://stackoverflow.com/questions/69276393/is-the-sqlalchemy-text-function-exposed-to-sql-injection | I'm learning how to use SQL Alchemy, and I'm trying to re-implement a previously defined API but now using Python. The REST API has the following query parameter: myService/v1/data?range=time:2015-08-01:2015-08-02 So I want to map something like field:FROM:TO to filter a range of results, like a date range, for example. This is what I'm using at this moment: rangeStatement = range.split(':') if(len(rangeStatement)==3): query = query.filter(text('{} BETWEEN "{}" AND "{}"'.format(*rangeStatement))) So, this will produce the following WHERE condition: WHERE time BETWEEN "2015-08-01" AND "2015-08-02" I know SQL Alchemy is a powerful tool that allows creating queries like Query.filter_by(MyClass.temp), but I need the API request to be as open as possible. So, I'm worried that someone could pass something like DROP TABLE in the range parameter and exploit the text function | If queries are constructed using string formatting then sqlalchemy.text will not prevent SQL injection - the "injection" will already be present in the query text. However it's not difficult to build queries dynamically, in this case by using getattr to get a reference to the column. Assuming that you are using the ORM layer with model class Foo and table foos you can do import sqlalchemy as sa ... col, lower, upper = 'time:2015-08-01:2015-08-02'.split(':') # Regardless of style, queries implement a fluent interface, # so they can be built iteratively # Classic/1.x style q1 = session.query(Foo) q1 = q1.filter(getattr(Foo, col).between(lower, upper)) print(q1) or # 2.0 style (available in v1.4+) q2 = sa.select(Foo) q2 = q2.where(getattr(Foo, col).between(lower, upper)) print(q2) The respective outputs are (parameters will be bound at execution time): SELECT foos.id AS foos_id, foos.time AS foos_time FROM foos WHERE foos.time BETWEEN ? AND ? 
and SELECT foos.id, foos.time FROM foos WHERE foos.time BETWEEN :time_1 AND :time_2 SQLAlchemy will delegate quoting of the values to the connector package being used by the engine, so your protection against injection will be as good as that provided by the connector package*. * In general I believe correct quoting should be a good defence against SQL injections; however, I'm not sufficiently expert to confidently state that it's 100% effective. It will be more effective than building queries from strings, though. | 7 | 9 |
69,276,288 | 2021-9-21 | https://stackoverflow.com/questions/69276288/why-is-this-python-code-faster-than-its-equivalent-clojure-code | I've been told, and I believe, that Clojure is faster than Python. Why does this Python code run faster than this seemingly equivalent Clojure code? Is Python doing some optimizations at compile time? def find_fifty(n,memory=1,count=0): if memory < 0.5: return count else: return find_fifty(n,memory*(1 - count/n),count+1) find_fifty(100000) (defn fifty ([n] (fifty n 1 0)) ([n memory count] (if (< memory 0.5) count (recur n (* memory (- 1 (/ count n))) (inc count))))) (fifty 100000) It feels like the time complexity for Clojure is higher than for Python. The Python function can receive an input many times higher than the Clojure function before it has a significant increase in runtime. Update - Clojure Fix (defn fifty ([n] (fifty (float n) 1 0)) ([n memory count] (if (< memory 0.5) count (recur n (* memory (- 1 (/ count n))) (inc count))))) (fifty 10000000) As was answered, Clojure was keeping values as very large rational numbers. Converting it to a float simplifies the operations being done, thus decreasing the runtime. | Clojure's division operator, when applied to integers, does exact rational division. It does not round down to the next lowest integer as Python's does. Your algorithm involves memory becoming a very complex fraction, despite yielding a simple integer. 
I've amended your function to print its intermediate values before finally returning the result: (defn fifty ([n] (fifty n 1 0)) ([n memory count] (if (< memory 0.5) count (do (println n (* memory (- 1 (/ count n))) (inc count)) (fifty n (* memory (- 1 (/ count n))) (inc count)))))) And here is the outcome: (fifty 500) 500 1 1 500 499/500 2 500 124251/125000 3 500 61752747/62500000 4 500 1914335157/1953125000 5 500 189519180543/195312500000 6 500 46811237594121/48828125000000 7 500 23077940133901653/24414062500000000 8 500 2838586636469903319/3051757812500000000 9 500 1393746038506722529629/1525878906250000000000 10 500 68293555886829403951821/76293945312500000000000 11 500 33395548828659578532440469/38146972656250000000000000 12 500 2037128478548234290478868609/2384185791015625000000000000 13 500 992081569052990099463209012583/1192092895507812500000000000000 14 500 241075821279876594169559790057669/298023223876953125000000000000000 15 500 23384354664148029634447299635593893/29802322387695312500000000000000000 16 500 2829506914361911585768123255906861053/3725290298461914062500000000000000000 17 500 1366651839636803295926003532603013888599/1862645149230957031250000000000000000000 18 500 329363093352469594318166851357326347152359/465661287307739257812500000000000000000000 19 500 158423647902537874867038255502873972980284679/232830643653869628906250000000000000000000000 20 500 475270943707613624601114766508621918940854037/727595761418342590332031250000000000000000000 21 500 227654782035946926183933973157629899172669083723/363797880709171295166015625000000000000000000000 22 500 54409492906591315357960219584673545902267911009797/90949470177292823791503906250000000000000000000000 23 500 25953328116444057425747024741889281395381793551673169/45474735088646411895751953125000000000000000000000000 24 500 3088446045856842833663895944284824486050433432649107111/5684341886080801486968994140625000000000000000000000000 25 500 
58680474871280013839614022941411665234958235220333035109/113686837721616029739379882812500000000000000000000000000 26 500 13907272544493363279988523437114564660685101747218929320833/28421709430404007434844970703125000000000000000000000000000 27 27 I hope you can see how expensive it would be to compute these unreasonable fractions for even small inputs. If you want the same logic as the Python version, you could simply replace / with quot. | 6 | 10 |
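The answer's point, that exact rational arithmetic makes the intermediate values explode even though the final result is a small integer, can be reproduced in Python itself with the standard-library fractions module. This is an illustrative sketch that only approximates the Clojure semantics (Clojure's / on integers yields exact ratios, which Fraction mimics):

```python
from fractions import Fraction

def fifty_exact(n):
    # Mirror the Clojure version: division stays an exact rational, never a float
    memory, count = Fraction(1), 0
    while memory >= Fraction(1, 2):
        memory *= 1 - Fraction(count, n)
        count += 1
    # The denominator's size shows how expensive the exact arithmetic became
    return count, memory.denominator.bit_length()

count, denom_bits = fifty_exact(500)
print(count)       # 27, matching the Clojure trace in the answer
print(denom_bits)  # the final fraction's denominator is already hundreds of bits wide
```

This makes the performance gap concrete: the Python original uses cheap fixed-size floats, while the exact-rational version (like the original Clojure code) carries ever-growing numerators and denominators through every step.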
69,272,911 | 2021-9-21 | https://stackoverflow.com/questions/69272911/altair-chart-show-less-lines-in-the-grid | I'm working on a chart using Altair, and I'm trying to figure out how to have less lines in the background grid. Is there a term for that background grid? Here's a chart that looks like mine, that I took from the tutorial: Let's say that I want to have half as many grid lines on the X axis. How could I do that? | Grid lines are drawn at the location of ticks, so to adjust the grid lines you can adjust the ticks. For example: import altair as alt import numpy as np import pandas as pd x = np.arange(100) source = pd.DataFrame({ 'x': x, 'f(x)': np.sin(x / 5) }) alt.Chart(source).mark_line().encode( x=alt.X('x', axis=alt.Axis(tickCount=4)), y='f(x)' ) You can see other tick-related properties in the documentation for alt.Axis. | 5 | 3 |
69,273,242 | 2021-9-21 | https://stackoverflow.com/questions/69273242/dynamically-create-function-with-typehint-in-memory | I'd like to dynamically create a function in memory, with a type hint of the argument. I do have some working code, but it feels extremely hacky and fragile. import typing func_name = 'some_function_name' req_type=int a = None exec(f'''def {func_name}(my_argument:{req_type.__name__}): pass a = {func_name}''') print(a) # <function some_function_name at 0x000001B89F3311F0> print(typing.get_type_hints(a)) # {'my_argument': <class 'int'>} There has got to be a better way to do this. Any suggestions? | You can use FunctionType to create new functions. You can copy a template function and change its type hints and name. You can also change the code of the function object with the built-in compile function. I made an example that copies and changes the name and the type hint of a template function (without changing the code): import types import typing def copy_func(f, func_types, name=None): # pass your code object as the first parameter new_func = types.FunctionType(f.__code__, f.__globals__, name or f.__name__, f.__defaults__, f.__closure__) new_func.__annotations__ = func_types return new_func def template(arg): print('called template func') a = copy_func(template, {'my_argument': int}, "test") a(2) # can call it print("types:", typing.get_type_hints(a)) # types: {'my_argument': <class 'int'>} | 5 | 3 |
69,266,375 | 2021-9-21 | https://stackoverflow.com/questions/69266375/different-access-time-to-a-value-of-a-dictionary-when-mixing-int-and-str-keys | Let's say I have two dictionaries and I know want to measure the time needed to check if a key is in the dictionary. I tried to run this piece of code: from timeit import timeit dct1 = {str(i): 1 for i in range(10**7)} dct2 = {i: 1 for i in range(10**7)} print(timeit('"7" in dct1', setup='from __main__ import dct1', number=10**8)) print(timeit('7 in dct2', setup='from __main__ import dct2', number=10**8)) Here are the results that I get: 2.529034548999334 2.212983401999736 Now, let's say I try to mix integers and strings in both dictionaries, and measure access time again: dct1[7] = 1 dct2["7"] = 1 print(timeit('"7" in dct1', setup='from __main__ import dct1', number=10**8)) print(timeit('7 in dct1', setup='from __main__ import dct1', number=10**8)) print(timeit('7 in dct2', setup='from __main__ import dct2', number=10**8)) print(timeit('"7" in dct2', setup='from __main__ import dct2', number=10**8)) I get something weird: 3.443614432000686 2.6335261530002754 2.1873921409987815 2.272667104998618 The first value is much higher than what I had before (3.44 vs 2.52). However, the third value is basically the same as before (2.18 vs 2.21). Why is this happening? Can you reproduce the same thing or is this only me? Also, I can't understand the big difference between the first and the second value: it looks like it's more difficult to access a string key, but the same thing seems to apply only slightly to the second dictionary. Why? Update You don't even need to actually add a new key. All you need to do to see an increase in complexity is just checking if a key with different type exists!! This is much weirder than I thought. 
Look at the example here: from timeit import timeit dct1 = {str(i): 1 for i in range(10**7)} dct2 = {i: 1 for i in range(10**7)} print(timeit('"7" in dct1', setup='from __main__ import dct1', number=10**8)) # 2.55 print(timeit('7 in dct2', setup='from __main__ import dct2', number=10**8)) # 2.26 7 in dct1 "7" in dct2 print(timeit('"7" in dct1', setup='from __main__ import dct1', number=10**8)) # 3.34 print(timeit('7 in dct2', setup='from __main__ import dct2', number=10**8)) # 2.35 | Let me try to answer my own question. The dict implementation in CPython is optimised for lookups of str keys. Indeed, there are two different functions that are used to perform lookups: lookdict is a generic dictionary lookup function that is used with all types of keys lookdict_unicode is a specialised lookup function used for dictionaries composed of str-only keys Python will use the string-optimised version until a lookup with a non-string key occurs, after which the more general function is used. And it looks like you cannot even reverse the behaviour of a particular dict instance: once it starts using the generic function, you can't go back to using the specialised one! | 9 | 5 |
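The switch described in this answer can be observed with a smaller-scale version of the question's benchmark. Timings are machine- and version-dependent, and newer CPython releases may not show the same gap, so treat this as an illustrative sketch rather than a guaranteed reproduction:

```python
import timeit

# str-only keys: the dict starts on the string-specialised lookup path
d = {str(i): 1 for i in range(10**5)}

before = timeit.timeit('"7" in d', globals={'d': d}, number=10**5)

7 in d  # a single lookup with a non-string key switches the dict to the generic path

after = timeit.timeit('"7" in d', globals={'d': d}, number=10**5)

# On the CPython versions discussed above, `after` tends to be larger than `before`
print(before, after)
```

Note that the non-string key only needs to be *looked up*, not inserted, which matches the "Update" in the question: a failed `7 in d` membership test is enough to flip the dict to the generic lookup function.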
69,263,878 | 2021-9-21 | https://stackoverflow.com/questions/69263878/adding-water-to-stack-of-glasses | I always wanted to know if there is any real-world application of Pascal's triangle other than just coefficients of the binomial expansion. I tried to solve this problem: But what if I am adding K units of water and want to find a glass which has the least water in it? Where the glass will be found as: the c-th glass in the r-th row. And I believe that if we can find this, it will not be difficult for us to find the amount of water in any glass for which {i<r and j<c} Problem: Input: water added - K units, and capacity of each glass - 1 unit Expected output: the c-th glass in the r-th row having the least water in it. I tried to solve the problem by keeping note of the capacity of each row when it starts to overflow, and want to know how to keep going with this method. 1 max = 1, cap = 1 1 1 max = 1, sum(coefficients)=2**1 = 2, cap = 2/1 1 2 1 max = 2, sum = 2**2, cap = 4/2 = 2 units 1 3 3 1 max = 3, sum = 2**3, cap = 8/3 units 1 4 6 4 1 max = 6, sum = 2**4, cap = 16/6 units #Not sure but this is how it seems to me for the rate at which water is being added. 1 1/2 1/2 1/4 2/4 1/4 1/8 3/8 3/8 1/8 1/16 4/16 6/16 4/16 1/16 Should I use a 2-D list and define it as: Δ1, Δ2 = 0, 0 if g(n-1, k)>1 and k <= n-1: Δ1 = g(n-1, k) -1 if g(n-1, k-1)>1 and k-1 <= n-1: Δ2 = g(n-1, k-1) - 1 g(n, k) = Δ1/2 + Δ2/2 g(n,k) = g(n-1, k-1) + g(n-1, k) g = [[0]*(i+1) for i in range(11)] def f(g, K): g[1][1] += 1 K = K-1 d1, d2 = 0, 0 for n in range(2, 10): for k in range(1, n+1): if k ==1: g[n][k] = g[n-1][k]/2 if k == n: g[n][k] = g[n-1][k-1]/2 else: if g[n-1][k-1]>1: d1 = g[n-1][k-1] -1 if g[n-1][k] > 1: d2 = g[n-1][k] -1 g[n][k] = d1/2 + d2/2 return g, K k = int(input()) while k: g, k = f(g, k) for x in g: print(x) I don't know what is missing. 
| For such a small K constraint, simple row-by-row filling is enough (we can store only two rows; here a 2D list is used for simplicity) def fillGlasses(k, row, col): gl = [[k]] level = 1 overflow_occured = True while overflow_occured: # also can stop when at needed row print(gl[level-1]) #before overflow level += 1 overflow_occured = False gl.append([0]*level) for i in range(level - 1): t = gl[level-2][i] - 1 if t > 0: gl[level-1][i] += t/2 gl[level-1][i+1] += t/2 gl[level-2][i] = 1 overflow_occured = True #print(gl) #after all return gl[row-1][col-1] print(fillGlasses(21,8,4)) [21] [10.0, 10.0] [4.5, 9.0, 4.5] [1.75, 5.75, 5.75, 1.75] [0.375, 2.75, 4.75, 2.75, 0.375] [0, 0.875, 2.75, 2.75, 0.875, 0] [0, 0, 0.875, 1.75, 0.875, 0, 0] [0, 0, 0, 0.375, 0.375, 0, 0, 0] 0.375 | 6 | 1 |
69,261,606 | 2021-9-20 | https://stackoverflow.com/questions/69261606/how-can-i-make-a-key-dynamic-in-a-pydantic-model | i have an api entrypoint: @app.get('/dealers_get_users/', response_model = schemas.SellSideUserId, status_code=200) def getdata(db: database.SessionLocal = _Depends(database.get_db)): result = {} i = db.query(models.sellSideUser).all() for dealer_users in i: result[str(dealer_users.user_id)] = { 'user_name' : dealer_users.user_name, 'user_pass' : dealer_users.user_pass, } m = schemas.SellSideUserId(user_id=result) return m the data coming in from the db is just 3 fields: user_id,user_name,user_pass when i call the api, i get this: { "user_id": { "1": { "user_name": "testname", "user_pass": "testpass" } } } ok, cool - i got my data. but i'm still trying to understand how these models work, and am not really grasping how to move in or out of a nested dictionary. for example, i would like it to look like this, instead: { "1": { "user_name": "testname", "user_pass": "testpass" } } but i'm not sure how to pass the 'result' variable into this class - any way i do it, i'm met with an error. my model: class SellSideUserId(_BaseModel): user_id : dict class Config: orm_mode = True am i supposed to build two pydantic models - one being based on another? could use some help with this thanks! | You can easily make a model with dynamic keys using a dict as custom root type: class User(BaseModel): name: str password: str class ProductModel(BaseModel): __root__: Dict[str, User] Moreover, you can constrain dictionary keys using constr сapabilities, such as regex pattern: UserId = constr(regex=r'^\d+$') class ProductModel(BaseModel): __root__: Dict[UserId, User] This will work well for validating returned responses against your model. But generated OpenAPI schema will be rather scanty, since at the moment FastAPI 0.68 and swagger UI does not support yet OpenAPI 3.1, which contains such keyword as patternProperties or propertyNames. | 4 | 10 |
69,240,807 | 2021-9-19 | https://stackoverflow.com/questions/69240807/how-to-change-colors-of-the-tracking-points-and-connector-lines-on-the-output-vi | I am referring to 33 body points and connector lines between them. I'd like to change the colors of those, especially of the white default color of the connector lines. Here's my code, I have created a class module for mediapipe which I can import and use in my other programs import cv2 import mediapipe as mp class poseDetector(): def __init__(self, mode=False, complex=1, smooth_landmarks=True, segmentation=True, smooth_segmentation=True, detectionCon=0.5, trackCon=0.5): self.mode = mode self.complex = complex self.smooth_landmarks = smooth_landmarks self.segmentation = segmentation self.smooth_segmentation = smooth_segmentation self.detectionCon = detectionCon self.trackCon = trackCon self.mpDraw = mp.solutions.drawing_utils self.mpDrawStyle = mp.solutions.drawing_styles self.mpPose = mp.solutions.pose self.pose = self.mpPose.Pose(self.mode, self.complex, self.smooth_landmarks, self.segmentation, self.smooth_segmentation, self.detectionCon, self.trackCon) def findPose(self, img, draw=True): imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) self.results = self.pose.process(imgRGB) if self.results.pose_landmarks: if draw: self.mpDraw.draw_landmarks(img, self.results.pose_landmarks, self.mpPose.POSE_CONNECTIONS) return img def main(): cap = cv2.VideoCapture("..//assets//videos//v4.mp4") detector = poseDetector() while True: success, img = cap.read() img = detector.findPose(img) cv2.imshow("Image", img) cv2.waitKey(1) if __name__ == "__main__": main() | So as per the documentation, this is the code for draw_landmarks mp_drawing.draw_landmarks( image: numpy.ndarray, landmark_list: mediapipe.framework.formats.landmark_pb2.NormalizedLandmarkList, connections: Optional[List[Tuple[int, int]]] = None, landmark_drawing_spec: mediapipe.python.solutions.drawing_utils.DrawingSpec = DrawingSpec(color=(0, 0, 255), thickness=2, 
circle_radius=2), connection_drawing_spec: mediapipe.python.solutions.drawing_utils.DrawingSpec = DrawingSpec(color=(0, 255, 0), thickness=2, circle_radius=2), ) So in your findPose function you need to update only one line of code def findPose(self, img, draw=True): imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) self.results = self.pose.process(imgRGB) if self.results.pose_landmarks: if draw: self.mpDraw.draw_landmarks(img, self.results.pose_landmarks, self.mpPose.POSE_CONNECTIONS, self.mpDraw.DrawingSpec(color=(245,117,66), thickness=2, circle_radius=2), self.mpDraw.DrawingSpec(color=(245,66,230), thickness=2, circle_radius=2)) return img The first self.mpDraw.DrawingSpec argument corresponds to the points of the landmark. The second self.mpDraw.DrawingSpec argument corresponds to the connection between those landmark points. The color is in (B, G, R) format | 4 | 8 |
69,262,878 | 2021-9-21 | https://stackoverflow.com/questions/69262878/what-are-the-differences-between-pickle-dump-load-and-pickle-dumps-loads | I've started to learn about the pickle module used for object serialization and deserialization. I know that pickle.dump is used to store the code as a stream of bytes (serialization), and pickle.load is essentially the opposite, turning a stream of bytes back into a python object. (deserialization). But what are pickle.dumps and pickle.loads, and what are the differences between them and pickle.dump and pickle.load? I've looked at the documentation, but I am having trouble differentiating between the two. From the documentation: pickle.dumps(obj, protocol=None, *, fix_imports=True, buffer_callback=None) ; Return the pickled representation of the object obj as a bytes object, instead of writing it to a file. pickle.loads(data, /, *, fix_imports=True, encoding="ASCII", errors="strict", buffers=None) Return the reconstituted object hierarchy of the pickled representation data of an object. data must be a bytes-like object. | The difference between dump and dumps is that dump writes the pickled object to an open file, and dumps returns the pickled object as bytes. The file must be opened for writing in binary mode. The pickled version of the object is exactly the same with both dump and dumps. So, if you did the following for object obj: with open("pickle1", "wb") as f: pickle.dump(obj, f) with open("pickle2", "wb") as f: f.write(pickle.dumps(obj)) you'd end up with two files with exactly the same contents. The same applies to loading - load "unpickles" from an open (readable) file object, and loads uses a bytes object. | 11 | 22 |
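A quick way to confirm the equivalence described above — an illustrative sketch where an in-memory `io.BytesIO` buffer stands in for a real file opened in binary mode:

```python
import io
import pickle

obj = {"answer": 42, "items": [1, 2, 3]}

# dump writes to a file-like object; dumps returns the same bytes directly
buf = io.BytesIO()
pickle.dump(obj, buf)

assert buf.getvalue() == pickle.dumps(obj)

# loads round-trips the bytes back to an equal object
assert pickle.loads(pickle.dumps(obj)) == obj
```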
69,260,910 | 2021-9-20 | https://stackoverflow.com/questions/69260910/better-help-for-argparse-subcommands | Given the following code snippet: import argparse import sys parser = argparse.ArgumentParser() subparsers = parser.add_subparsers(help="subcommand help") command1 = subparsers.add_parser("foo", description="Run foo subcommand") command2 = subparsers.add_parser("bar", description="Run bar subcommand") opts = parser.parse_args(sys.argv[1:]) When I print help for this I get this: usage: test.py [-h] {foo,bar} ... positional arguments: {foo,bar} subcommand help optional arguments: -h, --help show this help message and exit Is there a way to make it print something like this instead: usage: test.py [-h] {foo,bar} ... subcommands: foo Run foo subcommand bar Run bar subcommand optional arguments: -h, --help show this help message and exit without supplying a custom formatter? If I change the formatter then it also changes everything else about how the help is printed, but in my case I just want to change the way that subcommand help is printed from the parent (sub)command. | You need to set the help parameter, not the description parameter, to get the output you desire: import argparse import sys parser = argparse.ArgumentParser() subparsers = parser.add_subparsers(help="subcommand help") command1 = subparsers.add_parser("foo", help="Run foo subcommand") command2 = subparsers.add_parser("bar", help="Run bar subcommand") opts = parser.parse_args(sys.argv[1:]) Output: usage: test.py [-h] {foo,bar} ... positional arguments: {foo,bar} subcommand help foo Run foo subcommand bar Run bar subcommand optional arguments: -h, --help show this help message and exit The argparse docs have this to say about the help value: The help value is a string containing a brief description of the argument. When a user requests help (usually by using -h or --help at the command line), these help descriptions will be displayed with each argument. 
And this to say about the description value: This argument gives a brief description of what the program does and how it works. In help messages, the description is displayed between the command-line usage string and the help messages for the various arguments. | 5 | 6 |
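The effect of using `help=` on `add_parser` can be checked programmatically via `format_help()`; a small sketch (the `prog` name `test.py` is assumed for illustration):

```python
import argparse

parser = argparse.ArgumentParser(prog="test.py")
subparsers = parser.add_subparsers(help="subcommand help")
# help= is what shows up in the parent parser's subcommand listing
subparsers.add_parser("foo", help="Run foo subcommand")
subparsers.add_parser("bar", help="Run bar subcommand")

help_text = parser.format_help()
assert "Run foo subcommand" in help_text
assert "Run bar subcommand" in help_text
```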
69,254,808 | 2021-9-20 | https://stackoverflow.com/questions/69254808/the-simplest-interface-to-let-subprocess-output-to-both-file-and-stdout-stderr | I want something have similar effect of cmd > >(tee -a {{ out.log }}) 2> >(tee -a {{ err.log }} >&2) in python subporcess without calling tee. Basically write stdout to both stdout and out.log files and write stderr to both stderr and err.log. I knew I could use a loop to handle it. But since I have lots of Popen, subprocess.run calls in my code already and I do not want to rewrite the entire thing I wonder is there any easier interface provided by some package could just allow me to do something like: subprocess.run(["ls", "-l"], stdout=some_magic_file_object(sys.stdout, 'out.log'), stderr=some_magic_file_object(sys.stderr, 'out.log') ) | No simple way as far as I can tell, but here is a way: import os class Tee: def __init__(self, *files, bufsize=1): files = [x.fileno() if hasattr(x, 'fileno') else x for x in files] read_fd, write_fd = os.pipe() pid = os.fork() if pid: os.close(read_fd) self._fileno = write_fd self.child_pid = pid return os.close(write_fd) while buf := os.read(read_fd, bufsize): for f in files: os.write(f, buf) os._exit(0) def __enter__(self): return self def __exit__(self, exc_type, exc_val, exc_tb): self.close() def fileno(self): return self._fileno def close(self): os.close(self._fileno) os.waitpid(self.child_pid, 0) This Tee object takes a list of file objects (i.e. objects that either are integer file descriptors, or have a fileno method). It creates a child process that reads from its own fileno (which is what subprocess.run will write to) and writes that content to all of the files it was provided. There's some lifecycle management needed, as its file descriptor must be closed, and the child process must be waited on afterwards. For that you either have to manage it manually by calling the Tee object's close method, or by using it as a context manager as shown below. 
Usage: import subprocess import sys logfile = open('out.log', 'w') stdout_magic_file_object = Tee(sys.stdout, logfile) stderr_magic_file_object = Tee(sys.stderr, logfile) # Use the file objects with as many subprocess calls as you'd like here subprocess.run(["ls", "-l"], stdout=stdout_magic_file_object, stderr=stderr_magic_file_object) # Close the files after you're done with them. stdout_magic_file_object.close() stderr_magic_file_object.close() logfile.close() A cleaner way would be to use context managers, shown below. It would require more refactoring though, so you may prefer manually closing the files instead. import subprocess import sys with open('out.log', 'w') as logfile: with Tee(sys.stdout, logfile) as stdout, Tee(sys.stderr, logfile) as stderr: subprocess.run(["ls", "-l"], stdout=stdout, stderr=stderr) One issue with this approach is that the child process writes to stdout immediately, and so Python's own output will often get mixed up in it. You can work around this by using Tee on a temp file and the log file, and then printing the content of the temp file (and deleting it) once the Tee context block is exited. Making a subclass of Tee that does this automatically would be straightforward, but using it would be a bit cumbersome since now you need to exit the context block (or otherwise have it run some code) to print out the output of the subprocess. | 5 | 2 |
69,257,426 | 2021-9-20 | https://stackoverflow.com/questions/69257426/meson-finds-python3-binary-fails-to-find-python3-dependency | I'm trying to build a certain repository using meson on Cygwin. This is what happens: $ meson build_dir The Meson build system Version: 0.58.2 Source dir: /home/joeuser/src/meld-3.21.0 Build dir: /home/joeuser/src/meld-3.21.0/build_dir Build type: native build Project name: meld Project version: 3.21.0 Host machine cpu family: x86_64 Host machine cpu: x86_64 Program python3 found: YES (/usr/bin/python3) Found pkg-config: /usr/bin/pkg-config (1.6.3) Run-time dependency python3 found: NO (tried pkgconfig and sysconfig) meson.build:18:0: ERROR: Dependency "python3" not found, tried pkgconfig and sysconfig Why is this happening? And how can I get meson to find the python3 dependency? Note: I've installed python38-pkgconfig in case that matters. | meson will find python3 if you also install the python3-devel package using the Cygwin installer. | 6 | 4 |
69,246,880 | 2021-9-19 | https://stackoverflow.com/questions/69246880/notifications-in-postgresql-with-pythonpsycopg2-does-not-work | I want to be notified when there is a new entry in a specific table "FileInfos" in PostgreSQL 12, so I wrote the following trigger: create trigger trigger1 after insert or update on public."FileInfos" for each row execute procedure notify_id_trigger(); and the following function: create or replace function notify_id_trigger() returns trigger as $$ begin perform pg_notify('new_Id'::text, NEW."Id"::text); return new; end; $$ language plpgsql; to get the notifications I use the python library psycopg2: import psycopg2 from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT import select def dblistener(): connection = psycopg2.connect( host="127.0.0.1", database="DBNAME", user="postgres", password="....") connection.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT) cur = connection.cursor() cur.execute("LISTEN new_Id;") while True: select.select([connection], [], []) connection.poll() while connection.notifies: notify = connection.notifies.pop() print("Got NOTIFY:", notify.pid, notify.channel, notify.payload) if __name__ == '__main__': dblistener() But unfortunately my python code does not work, what did I do wrong? BTW: The database and the table were created with the Entity Framework (C#). | According to NOTIFY syntax, channel is an identifier. That means that new_Id in LISTEN new_Id is automaticaly converted to new_id. Unfortunately, pg_notify('new_Id'::text, new."Id"::text) notifies on channel new_Id. You have two options. Change the channel in the trigger: perform pg_notify('new_id'::text, new."Id"::text); or enclose the channel in double-quotes in LISTEN: LISTEN "new_Id" The use of capital letters in Postgres can cause surprises. | 5 | 5 |
69,247,270 | 2021-9-19 | https://stackoverflow.com/questions/69247270/argument-unpacking-with-custom-getitem-method-never-terminates | Could someone explain what is going on under the hood and why this program does not finish? class A: def __getitem__(self, key): return 1 print(*A()) | This program doesn't finish because the class you defined is iterable, using the old sequence iteration protocol. Basically, __getitem__ is called with integers increasing from 0, ..., n until an IndexError is raised. >>> class A: ... def __getitem__(self, key): ... return 1 ... >>> it = iter(A()) >>> next(it) 1 >>> next(it) 1 >>> next(it) 1 >>> next(it) 1 You are unpacking an infinite iterator, so eventually you'll run out of memory. From the iter docs: Without a second argument, object must be a collection object which supports the iteration protocol (the __iter__() method), or it must support the sequence protocol (the __getitem__() method with integer arguments starting at 0). If it does not support either of those protocols, TypeError is raised. | 4 | 7 |
69,246,724 | 2021-9-19 | https://stackoverflow.com/questions/69246724/python-append-a-list-to-a-list | I'm doing some exercises in Python and I came across a doubt. I have to set a list containing the first three elements of list, with the .append method. The thing is, I get an assertion error, lists don't match. If I print list_first_3 I get "[['cat', 3.14, 'dog']]", so the double square brackets are the problem. But how can I define the list so the output matches? list = ["cat", 3.14, "dog", 81, 6, 41] list_first_3 = [] list_first_3.append(list[:3]) assert list_first_3 == ["cat", 3.14, "dog"] | append can only add a single value. I think what you may be thinking of is the extend method (or the += operator) list1 = ["cat", 3.14, "dog", 81, 6, 41] list_first_3 = [] list_first_3.extend(list1[:3]) assert list_first_3 == ["cat", 3.14, "dog"] or list1 = ["cat", 3.14, "dog", 81, 6, 41] list_first_3 = [] list_first_3 += list1[:3] assert list_first_3 == ["cat", 3.14, "dog"] otherwise you'll need a loop: list1 = ["cat", 3.14, "dog", 81, 6, 41] list_first_3 = [] for value in list1[:3]: list_first_3.append(value) assert list_first_3 == ["cat", 3.14, "dog"] with append but without a loop would be possible using a little map() trickery: list1 = ["cat", 3.14, "dog", 81, 6, 41] list_first_3 = [] any(map(list_first_3.append,list1[:3])) assert list_first_3 == ["cat", 3.14, "dog"] | 6 | 11 |
69,243,145 | 2021-9-19 | https://stackoverflow.com/questions/69243145/error-when-trying-to-produce-a-graph-in-plotly-dash | When trying to produce a figure with the following code @app.callback( [ Output("selected-plot", "figure") ], [ Input("submit-selected-plotting", "n_clicks"), State("table", "data") ], ) def plot(button_clicked, data) fig = go.Scatter(x=data["index"], y=data["result"], mode='lines', name='result') return fig and dbc.Col( [ dcc.Graph(id='selected-plot') ], width=6, ) I get a strange error with the app expecting a different object: dash._grouping.SchemaTypeValidationError: Schema: [<Output selected-plots.figure>] Path: () Expected type: (<class 'tuple'>, <class 'list'>) Received value of type <class 'plotly.graph_objs._scatter.Scatter'>: Scatter({...}) I have tried everything but I can't seem to go around this error. Thanks for any suggestions in advance! | The error is due to the fact that the app expects a figure object, you can fix it by updating the callback as follows: @app.callback( [ Output("selected-plot", "figure") ], [ Input("submit-selected-plotting", "n_clicks"), State("table", "data") ], ) def plot(button_clicked, data) trace = go.Scatter( x=data["index"], y=data["result"], mode='lines', name='result' ) return [go.Figure(data=trace)] | 4 | 12 |
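The difference between the two methods when given a slice can be summarized in a few lines (an illustrative sketch):

```python
first_three = ["cat", 3.14, "dog", 81, 6, 41][:3]

appended = []
appended.append(first_three)   # nests the whole slice as one element

extended = []
extended.extend(first_three)   # adds the elements individually

assert appended == [["cat", 3.14, "dog"]]
assert extended == ["cat", 3.14, "dog"]
```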
69,234,878 | 2021-9-18 | https://stackoverflow.com/questions/69234878/using-shared-dockerfile-for-multiple-dockerfiles | What I have, are multi similar and simple dockerfiles But what I want is to have a single base dockerfile and my dockerfiles pass their variables into it. In my case the only difference between dockerfiles are simply their EXPOSE, so I think it's better to keep a base dockerfile and other dockerfiles only inject that variables into base dockerfile like a template engine A sample dockerfile: FROM golang:1.17 AS builder WORKDIR /app COPY . . RUN go mod download RUN go build -o /bin/app ./cmd/root.go FROM alpine:latest WORKDIR /bin/ COPY --from=builder /bin/app . EXPOSE 8080 LABEL org.opencontainers.image.source="https://github.com/mohammadne/bookman-auth" ENTRYPOINT ["/bin/app"] CMD ["server", "--env=dev"] | as yasen said, it's impossible to have import directive. finally what I have did is as follow: link to github repository create a template text file with EXPOSE ${{ EXPOSED_PORT }}: FROM golang:1.17 AS builder WORKDIR /app COPY . . RUN go mod download && make ent-generate RUN go build -o /bin/app ./cmd/root.go FROM alpine:latest WORKDIR /bin/ COPY --from=builder /bin/app . 
EXPOSE ${{ EXPOSED_PORT }} LABEL org.opencontainers.image.source="https://github.com/mohammadne/bookman-library" ENTRYPOINT ["/bin/app"] CMD ["server", "--env=dev"] and then create a python script #!/usr/bin/python from shutil import copyfile import os class Config: def __init__(self, service, port): self.service = service self.port = port configs = [ Config("auth", "8080"), Config("user", "8081"), Config("library", "8082"), ] pathToDir = "../build" template = f"{pathToDir}/template.txt" for config in configs: outputDir = f"{pathToDir}/{config.service}" os.mkdir(outputDir) fileName = copyfile(template, f"{outputDir}/Dockerfile") with open(fileName, "rt") as file: replacedText = file.read().replace('${{ EXPOSED_PORT }}', config.port) with open(fileName, "wt") as file: file.write(replacedText) then in the python script you can replace your patterns !!! | 4 | 2 |
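An alternative to the manual `replace` call above is the standard library's `string.Template`, which supports `${NAME}` placeholders natively; a minimal in-memory sketch (the template text and port values here are illustrative):

```python
from string import Template

template = Template("FROM alpine:latest\nEXPOSE ${EXPOSED_PORT}\n")

# render one Dockerfile text per service port
dockerfiles = {
    name: template.substitute(EXPOSED_PORT=port)
    for name, port in {"auth": "8080", "user": "8081"}.items()
}

assert "EXPOSE 8080" in dockerfiles["auth"]
assert "EXPOSE 8081" in dockerfiles["user"]
```

Note that the placeholder syntax differs from the `${{ EXPOSED_PORT }}` marker used above; `string.Template` expects `${EXPOSED_PORT}`.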
69,233,701 | 2021-9-18 | https://stackoverflow.com/questions/69233701/finding-the-coordinates-of-pixels-over-a-line-in-an-image | I have an image represented as a 2D array. I would like to get the coordinates of pixels over a line from point 1 to point 2. For example, let's say I have an image with size 5x4 like in the image below. And I have a line from point 1 at coordinates (0, 2) to point 2 at (4, 1). Like the red line on the image below: So here I would like to get the coordinates of the blue pixels as a list like this: [(0, 2), (1, 2), (2, 2), (2, 1), (3, 1), (4, 1)] How can I achieve this? I am using Python and numpy, but actually a solution in any language including pseudo code would be helpful. I can then try to convert it into a numpy solution. | You can do this with scikit-image: from skimage.draw import line # Get coordinates, r=rows, c=cols of your line rr, cc = line(0,2,4,1) print(list(zip(rr,cc))) [(0, 2), (1, 2), (2, 1), (3, 1), (4, 1)] The source code to see the implemented algorithm: https://github.com/scikit-image/scikit-image/blob/main/skimage/draw/_draw.pyx#L44 It is an implementation of the Bresenham's line algorithm | 7 | 4 |
69,230,525 | 2021-9-17 | https://stackoverflow.com/questions/69230525/why-does-poetry-build-raise-moduleorpackagenotfound-exception | I want to use poetry to build and distribute Python source packages, but after poetry init I get an error running poetry build. ModuleOrPackageNotFound No file/folder found for package mdspliter.tree | Reason The reason it can't be found is most likely because the directory hierarchy is incorrect. The released package is not directly the source code folder, there are many things in it that are not needed in the final package such as version control, testing and dependency management. You should put this folder with the same name as the package as a package in that folder. Solution Change the directory hierarchy so that there are packages with the corresponding names in the folder.for example: D:\GitRepository\python_distribution\temp\tree ├──_init__.py ├──tree.py ├──pyproject.toml └──README.rst ↓ D:\GitRepository\python_distribution\temp\tree ├──tree │ ├──__init__.py │ └──tree.py ├──pyproject.toml └──README.rst Specify the folder in pyproject.toml packages = [ { include = "your_folder_as_pack" } ] Variants If the name of the project is mdspliter.tree, then it is not useful at all to include the folder mdspliter.tree, because this naming scheme does not conform to the specification, if you use poetry new mdspliter.tree, you will find that the name of the folder actually should be mdspliter_tree. (in version 1.2, this behavior has been changed to generate multi-layer folders, mdsplitter/tree) | 28 | 33 |
69,232,157 | 2021-9-18 | https://stackoverflow.com/questions/69232157/checking-if-elements-in-an-array-exist-in-a-pandas-dataframe | I have a pandas Dataframe and a pandas Series that looks like below. df0 = pd.DataFrame({'col1':['a','b','c','d'],'col2':['b','c','e','f'],'col3':['d','f','g','a']}) col1 col2 col3 0 a b d 1 b c f 2 c e g 3 d f a df1 = pd.Series(['b','g','g'], index=['col1','col2','col3']) col1 b col2 g col3 g dtype: object As you can see, the columns of df0 and the indices of df1 are the same. For each index of df1, I want to know if the value at that index exists in the corresponding column of df0. So, df1.col1 is b and we need to look for b only in df0.col1 and check if it exists. Desired output: array([True, False, True]) Is there a way to do this without using a loop? Maybe a method native to numpy or pandas? | You can make use of broadcasting: (df0 == df1).any().values It also works with NumPy ndarrays: assert (df0.columns == df1.columns).all() (df0.values == df1.values).any(axis=0) Output: array([ True, False, True]) | 8 | 0 |
69,229,901 | 2021-9-17 | https://stackoverflow.com/questions/69229901/changed-properties-do-not-trigger-signal | For a subclass of QObject I'd like to define a property. On change, a signal should be emitted. According to the documentation, something like this: p = Property(int, _get_p, _set_p, notify=p_changed) should work, but the signal is not emitted by the change. Full example here: from PySide2.QtCore import QObject, Property, Slot, Signal class A(QObject): p_changed = Signal() def __init__(self): super(A, self).__init__() self._p = 0 def _get_p(self): print(f"Getting p (val: {self._p})") return self._p def _set_p(self, v): print(f"Setting new p: {v}") self._p = v p = Property(int, _get_p, _set_p, notify=p_changed) if __name__ == "__main__": import sys from PySide2.QtWidgets import QApplication, QWidget app = QApplication([]) class W(QWidget): def __init__(self, *args): super().__init__(*args) self.a = A() self.a.p_changed.connect(self.test_slot) print(self.a.p) self.a.p = 42 # Expecting "Got notified"! print(self.a.p) # This would print "Got notified": # self.a.p_changed.emit() @Slot() def test_slot(self): print("Got notified") w = W() w.show() sys.exit(app.exec_()) | That you associate a signal to a QProperty does not imply that it will be emitted automatically but that you have to emit it explicitly. def _set_p(self, v): if self._p != v: print(f"Setting new p: {v}") self._p = v self.p_changed.emit() For more information read The Property System. | 4 | 3 |
69,224,622 | 2021-9-17 | https://stackoverflow.com/questions/69224622/get-fastapi-to-handle-requests-in-parallel | Here is my trivial fastapi app: from datetime import datetime import asyncio import uvicorn from fastapi import FastAPI app = FastAPI() @app.get("/delayed") async def get_delayed(): started = datetime.now() print(f"Starting at: {started}") await asyncio.sleep(10) ended = datetime.now() print(f"Ending at: {ended}") return {"started": f"{started}", "ended": f"{ended}"} if __name__ == "__main__": uvicorn.run("fastapitest.main:app", host="0.0.0.0", port=8000, reload=True, workers=2) When I make 2 consecutive calls to it, the code in the function for the second one doesn't start executing until the first request finishes, producing an output like: Starting at: 2021-09-17 14:52:40.317915 Ending at: 2021-09-17 14:52:50.321557 INFO: 127.0.0.1:58539 - "GET /delayed HTTP/1.1" 200 OK Starting at: 2021-09-17 14:52:50.328359 Ending at: 2021-09-17 14:53:00.333032 INFO: 127.0.0.1:58539 - "GET /delayed HTTP/1.1" 200 OK Given that the function is marked async and I am awaiting the sleep, I would expect a different output, like: Starting at: ... Starting at: ... Ending at: ... INFO: 127.0.0.1:58539 - "GET /delayed HTTP/1.1" 200 OK Ending at: ... INFO: 127.0.0.1:58539 - "GET /delayed HTTP/1.1" 200 OK [for the calls I just opened up 2 browser tabs at localhost:8000/delayed ] What am I missing? | It works in parallel as expected - it is just a browser thing: chrome on detecting the same endpoint being requested in different tabs, will wait for the first to be completly resolved to check if the result can be cached. 
If instead you place 3 http requests from different processes in the shell, the results are as expected: content-length: 77 content-type: application/json date: Fri, 17 Sep 2021 19:51:39 GMT server: uvicorn { "ended": "2021-09-17 16:51:49.956629", "started": "2021-09-17 16:51:39.955487" } HTTP/1.1 200 OK content-length: 77 content-type: application/json date: Fri, 17 Sep 2021 19:51:39 GMT server: uvicorn { "ended": "2021-09-17 16:51:49.961173", "started": "2021-09-17 16:51:39.960850" } HTTP/1.1 200 OK content-length: 77 content-type: application/json date: Fri, 17 Sep 2021 19:51:39 GMT server: uvicorn { "ended": "2021-09-17 16:51:49.964156", "started": "2021-09-17 16:51:39.963510" } Adding a random, even if unused, query parameter on the URL for each browser tab will all cancel the trying-to-cache behavior. related question: Chrome stalls when making multiple requests to same resource? | 6 | 6 |
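The overlap can also be reproduced without a browser or HTTP at all; a sketch with bare asyncio (shortened sleeps for illustration) shows both handlers starting before either one finishes, which is what the FastAPI endpoint does under the hood:

```python
import asyncio

async def handler(name, events):
    events.append((name, "start"))
    await asyncio.sleep(0.1)  # stands in for the 10-second sleep
    events.append((name, "end"))

async def main():
    events = []
    await asyncio.gather(handler("req1", events), handler("req2", events))
    return events

events = asyncio.run(main())

# both coroutines reach their await before either sleep resolves
assert events[0][1] == "start"
assert events[1][1] == "start"
```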
69,227,434 | 2021-9-17 | https://stackoverflow.com/questions/69227434/how-to-get-aws-glue-schema-registry-schema-definition-using-boto3 | My goal is to receive csv files in S3, convert them to avro, and validate them against the appropriate schema in AWS. I created a series of schemas in AWS Glue Registry based on the .avsc files I already had: { "namespace": "foo", "type": "record", "name": "bar.baz", "fields": [ { "name": "column1", "type": ["string", "null"] }, { "name": "column2", "type": ["string", "null"] }, { "name": "column3", "type": ["string", "null"] } ] } But once I try and pull the schemas from Glue the API doesn't seem to provide definition details: glue = boto3.client('glue') glue.get_schema( SchemaId={ 'SchemaArn': schema['SchemaArn'] } ) returns: { 'Compatibility': 'BACKWARD', 'CreatedTime': '2021-08-11T21:09:15.312Z', 'DataFormat': 'AVRO', 'LatestSchemaVersion': 2, 'NextSchemaVersion': 3, 'RegistryArn': '[my-registry-arn]', 'RegistryName': '[my-registry-name]', 'ResponseMetadata': { 'HTTPHeaders': { 'connection': 'keep-alive', 'content-length': '854', 'content-type': 'application/x-amz-json-1.1', }, 'HTTPStatusCode': 200, 'RetryAttempts': 0, }, 'SchemaArn': '[my-schema-arn]', 'SchemaCheckpoint': 2, 'SchemaName': '[my-schema-name]', 'SchemaStatus': 'AVAILABLE', 'UpdatedTime': '2021-08-11T21:09:17.312Z', } Is there a way to programmatically retrieve the Glue Schema Registry definitions for a schema? Or am I taking the wrong approach here with what I'm trying to do? | After some more digging I found the somewhat confusingly named get_schema_version() method that I had been overlooking which returns the SchemaDefinition: { 'SchemaVersionId': 'string', 'SchemaDefinition': 'string', 'DataFormat': 'AVRO'|'JSON', 'SchemaArn': 'string', 'VersionNumber': 123, 'Status': 'AVAILABLE'|'PENDING'|'FAILURE'|'DELETING', 'CreatedTime': 'string' } | 5 | 4 |
69,223,702 | 2021-9-17 | https://stackoverflow.com/questions/69223702/python-tkinter-how-can-i-make-ttk-notebook-tabs-change-their-order | Is it possible to do this: with ttk.Notebook widget? | Yes, it is possible. You have to bind B1-Motion to a function, then use notebook.index("@x,y") to get the index of the tab at mouse position. You can then make use of notebook.insert() to insert at a particular position. import tkinter as tk from tkinter import ttk def reorder(event): try: index = notebook.index(f"@{event.x},{event.y}") notebook.insert(index, child=notebook.select()) except tk.TclError: pass root = tk.Tk() root.geometry("500x500") notebook = ttk.Notebook(root) notebook.pack(fill="both", expand=True) notebook.bind("<B1-Motion>", reorder) frame1 = ttk.Frame(notebook) frame2 = ttk.Frame(notebook) frame1.pack(fill='both', expand=True) frame2.pack(fill='both', expand=True) notebook.add(frame1, text='Stackoverflow') notebook.add(frame2, text='Github') root.mainloop() output: | 5 | 12 |
69,222,860 | 2021-9-17 | https://stackoverflow.com/questions/69222860/pip3-command-to-upgrade-all-packages-that-is-careful-about-dependency-conflicts | So far, I have used (via How to upgrade all Python packages with pip) pip3 list --format freeze --outdated | cut -d= -f1 | xargs pip3 install --upgrade-strategy eager --upgrade to upgrade all of my Python pip packages. It has so far worked fine for me - except for once, when I got a sort of a conflict message, unfortunately I didn't keep a copy of it; my guess is, it was something similar to this noted here https://pip.pypa.io/en/stable/user_guide/#fixing-conflicting-dependencies : Due to conflicting dependencies pip cannot install package_coffee and package_tea: - package_coffee depends on package_water<3.0.0,>=2.4.2 - package_tea depends on package_water==2.3.1 Anyways, now I just tried to install voila for my Jupyter installation, and it ended up like this: (notebook) user@server:/home/web/Jupyter$ pip3 install voila ... Installing collected packages: jupyter-client, voila Attempting uninstall: jupyter-client Found existing installation: jupyter-client 7.0.3 Uninstalling jupyter-client-7.0.3: Successfully uninstalled jupyter-client-7.0.3 Successfully installed jupyter-client-6.1.12 voila-0.2.13 In other words: I've had jupyter-client-7.0.3 installed before as latest; but now that I wanted to install voila, due to voila requirements, that latest version got uninstalled, and an earlier version, 6.1.12, compatible with voila, got installed instead. So now if I want to check outdated packages, I get, as expected, jupyter-client listed: (notebook) user@server:/home/web/Jupyter$ pip3 list --format freeze --outdated jupyter-client==6.1.12 ... but then, if I run the full pipe command, pip3 list --format freeze --outdated | cut -d= -f1 | xargs pip3 install --upgrade-strategy eager --upgrade, then it will want to upgrade jupyter-client to 7.0.3, which will then break voila (I guess, I dare not try it). 
So, is there an upgrade command, that would take a situation like this, and upon such a state during upgrade, prevent changes and give me a notification? Say, something like: WARNING: There is an upgrade to jupyter-client=6.1.12 (newest version 7.0.3) - however, installing that package would cause a conflict with the currently installed voila=0.2.13 package; not proceeding with this upgrade. To force this upgrade regardless, use [...] | Upgrading packages in Python is never easy due to overlapping (sub)dependencies. There are some tools out there that try and help you manage them. At my current job we use pip-tools. And in some projects we use poetry, but I'm less happy about its handling. For pip-tools you define your top-level packages in a requirements.in file, which then resolves the sub(sub-sub)dependencies and outputs them into a requirements.txt file. The benefit of this is that you only worry about your main packages. You can still upgrade sub dependencies if so desired. Long story short: blindly updating all your packages will most likely never work out as intended or expected. Either packages ARE upgraded but stop working, or they do work but don't work with another package that was updated because they needed a lower version of that package. My advice would be to start with your main packages and build up from there using one of the tools mentioned. There isn't a silver bullet for this. Dependency hell is a very real thing in Python. | 6 | 8 |
69,220,221 | 2021-9-17 | https://stackoverflow.com/questions/69220221/use-of-torch-stack | t1 = torch.tensor([1,2,3]) t2 = torch.tensor([4,5,6]) t3 = torch.tensor([7,8,9]) torch.stack((t1,t2,t3),dim=1) When implementing torch.stack(), I can't understand how stacking is done for different dim. Here stacking is done for columns but I can't understand the details as to how it is done. It becomes more complicated dealing with 2-D or 3-D tensors. tensor([[1, 4, 7], [2, 5, 8], [3, 6, 9]]) | Imagine you have n tensors. If we stay in 3D, those correspond to volumes, namely rectangular cuboids. Stacking corresponds to combining those n volumes on an additional dimension: here a 4th dimension is added to host the n 3D volumes. This operation is in clear contrast with concatenation, where the volumes would be combined on one of the existing dimensions. So concatenation of three-dimensional tensors would result in a 3D tensor. Here is a possible representation of the stacking operations for limited dimension sizes (up to three-dimensional inputs): Where you choose to perform the stacking defines along which new dimension the stack will take place. In the above examples, the newly created dimension is last, hence the idea of "added dimension" makes more sense. In the following visualization, we observe how tensors can be stacked on different axes. 
This in turn affects the resulting tensor shape. For the 1D case, for instance, it can also happen on the first axis, see below: With code: >>> x_1d = list(torch.empty(3, 2)) # 3 lines >>> torch.stack(x_1d, 0).shape # axis=0 stacking torch.Size([3, 2]) >>> torch.stack(x_1d, 1).shape # axis=1 stacking torch.Size([2, 3]) Similarly for two-dimensional inputs: With code: >>> x_2d = list(torch.empty(3, 2, 2)) # 3 2x2-squares >>> torch.stack(x_2d, 0).shape # axis=0 stacking torch.Size([3, 2, 2]) >>> torch.stack(x_2d, 1).shape # axis=1 stacking torch.Size([2, 3, 2]) >>> torch.stack(x_2d, 2).shape # axis=2 stacking torch.Size([2, 2, 3]) With this state of mind, you can intuitively extend the operation to n-dimensional tensors. | 10 | 17 |
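The axis semantics described in the answer are identical to NumPy's `np.stack`, so the shape behaviour can be sanity-checked without a torch install. This is an illustrative sketch added here, not part of the original answer:

```python
import numpy as np

# three 2x2 "squares", mirroring the x_2d example from the answer
x_2d = [np.empty((2, 2)) for _ in range(3)]

print(np.stack(x_2d, axis=0).shape)  # (3, 2, 2) - new axis first
print(np.stack(x_2d, axis=1).shape)  # (2, 3, 2) - new axis in the middle
print(np.stack(x_2d, axis=2).shape)  # (2, 2, 3) - new axis last
```

The `axis` argument plays exactly the same role as `dim` in `torch.stack`: it says where the new dimension of size n is inserted.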
69,216,791 | 2021-9-17 | https://stackoverflow.com/questions/69216791/creating-an-edge-list-from-a-pandas-dataframe | I'd like to create an edge list with weights as an attribute (counts number of pair occurrences - e.g., how many months have the pair a-b been together in the same group). The dataframe contains a monthly snapshot of people in a particular team (there are no duplicates on the monthly groups) monthyear name jun2020 a jun2020 b jun2020 c jul2020 a jul2020 b jul2020 d The output should look like the following (it's non-directional so a-b pair is the same as b-a): node1 node2 weight a b 2 b c 1 a c 1 a d 1 b d 1 I managed to create a new dataframe with the names combinations using the following: df1 = pd.DataFrame(data=list(combinations(df['name'].unique().tolist(), 2)), columns=['node1', 'node2']) Now I'm not sure how to iterate over this new dataframe to populate the weights. How can this be done? | Assuming that there are no duplicates within each monthyear group, you can get all 2-combinations of names within each group and then group by the node names to obtain the weight. from itertools import combinations def get_combinations(group): return pd.DataFrame([sorted(e) for e in list(combinations(group['name'].values, 2))], columns=['node1', 'node2']) df = df.groupby('monthyear').apply(get_combinations) This will give you an intermediate result: node1 node2 monthyear jul2020 0 a b 1 a d 2 b d jun2020 0 a b 1 a c 2 b c Now, calculate the weight: df = df.groupby(['node1', 'node2']).size().to_frame('weight').reset_index() Final result: node1 node2 weight 0 a b 2 1 a c 1 2 a d 1 3 b c 1 4 b d 1 | 7 | 3 |
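The same pair weights can also be computed with just the standard library before building the final frame. This is a sketch on the sample data from the question, using `collections.Counter` to count each sorted pair once per month:

```python
from collections import Counter, defaultdict
from itertools import combinations

rows = [("jun2020", "a"), ("jun2020", "b"), ("jun2020", "c"),
        ("jul2020", "a"), ("jul2020", "b"), ("jul2020", "d")]

# group names by month, then count each sorted pair once per month
groups = defaultdict(list)
for monthyear, name in rows:
    groups[monthyear].append(name)

weights = Counter()
for names in groups.values():
    weights.update(combinations(sorted(names), 2))

edges = [(n1, n2, w) for (n1, n2), w in sorted(weights.items())]
print(edges)  # [('a', 'b', 2), ('a', 'c', 1), ('a', 'd', 1), ('b', 'c', 1), ('b', 'd', 1)]
```

Sorting each pair before counting is what makes the edge list non-directional (a-b and b-a count as the same edge), matching the requirement in the question.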
69,205,577 | 2021-9-16 | https://stackoverflow.com/questions/69205577/fill-gaps-in-time-series-pandas-dataframe | I have a pandas dataframe with gaps in time series. It looks like the following: Example Input -------------------------------------- Timestamp Close 2021-02-07 09:30:00 124.624 2021-02-07 09:31:00 124.617 2021-02-07 10:04:00 123.946 2021-02-07 16:00:00 123.300 2021-02-09 09:04:00 125.746 2021-02-09 09:05:00 125.646 2021-02-09 15:58:00 125.235 2021-02-09 15:59:00 126.987 2021-02-09 16:00:00 127.124 Desired Output -------------------------------------------- Timestamp Close 2021-02-07 09:30:00 124.624 2021-02-07 09:31:00 124.617 2021-02-07 09:32:00 124.617 2021-02-07 09:33:00 124.617 'Insert a line for each minute up to the next available timestamp with the Close value form the last available timestamp' 2021-02-07 10:03:00 124.617 2021-02-07 10:04:00 123.946 2021-02-07 16:00:00 123.300 'I dont want lines inserted here. As this date is not present in the original dataset (could be a non trading day so I dont want to fill this gap)' 2021-02-09 09:04:00 125.746 2021-02-09 09:05:00 125.646 2021-02-09 15:58:00 125.235 'Fill the gaps here again but only between 09:30 and 16:00 time' 2021-02-09 15:59:00 126.987 2021-02-09 16:00:00 127.124 What I have tried is: '# set the index column' df_process.set_index('Exchange DateTime', inplace=True) '# resample and forward fill the gaps' df_process_out = df_process.resample(rule='1T').ffill() '# filter and return only timestamps between 09:30 and 16:00' df_process_out = df_process_out.between_time(start_time='09:30:00', end_time='16:00:00') However if I do it like this it also resamples and generates new timestamps on dates that are not existent in the original dataframe. In the example above it would also generate timestamps on a minute basis for 2021-02-08 How can I avoid this? Furthermore is there a better way to avoid resampling over the whole time. 
df_process_out = df_process.resample(rule='1T').ffill() This generates timestamps from 00:00 to 24:00 and in the next line of code I have to filter most timestamps out again. Doesn't seem efficient. Any help/guidance would be highly appreciated. Thanks. Edit: As requested, a small sample set df_in: Input data df_out_error: Wrong Output Data df_out_OK: How the output data should look like In the following Colab notebook I prepared a small sample. https://colab.research.google.com/drive/1Fps2obTv1YPDpTzXTo7ivLI5njoI-y4n?usp=sharing Notice that this is only a small subset of the data. I'm trying to clean multiple years of data that is structured and shows missing minute timestamps like this. | You can achieve what you need with a combination of df.groupby() (over dates) and resampling using rule = "1Min". Try this - df_new = (df.assign(date=df.Timestamp.dt.date) #create new col 'date' from the timestamp .set_index('Timestamp') #set timestamp as index .groupby('date') #groupby for each date .apply(lambda x: x.resample('1Min') #apply resampling for 1 minute from start time to end time for that date .ffill()) #ffill values .reset_index('date', drop=True) #drop index 'date' that was created by groupby .drop('date',1) #drop 'date' column created before .reset_index() #reset index to get back original 2 cols ) df_new Explanation 1. Resampling for limited time period only "Furthermore is there a better way to avoid resampling over the whole time. This generates timestamps from 00:00 to 24:00 and in the next line of code I have to filter most timestamps out again. Doesn't seem efficient." As in the above solution, you can resample and then ffill (or any other type of fill) using rule = 1Min. This ensures that you are not resampling from 00:00 to 24:00 but only from the start to end time stamps available in your data. 
To prove, I show this applied to a single date in the data - #filtering for a single day ddd = df[df['date']==df.date.unique()[0]] #applying resampling on that given day ddd.set_index('Timestamp').resample('1Min').ffill() Notice the start (09:30:00) and end (16:00:00) timestamps for the given date. 2. Applying resample over existing dates only "In the example above it would also generate timestamps on a minute basis for 2021-02-08. How can I avoid this?" As in the above solution, you can apply the resampling method over date groups separately. In this case, I apply the method using a lambda function after separating out the date from the timestamps. So the resample happens only for the date that exist in the dataset df_new.Timestamp.dt.date.unique() array([datetime.date(2021, 2, 7), datetime.date(2021, 2, 9)], dtype=object) Notice, that the output only contains the 2 unique dates from the original dataset. | 4 | 7 |
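For intuition, the same forward-fill-within-a-day idea can be written out in plain Python on a tiny sample. This is an illustrative sketch only; the resample-based answer is the practical approach for real data:

```python
from datetime import datetime, timedelta

def fill_day(rows):
    """Forward-fill 1-minute gaps within one day's sorted (timestamp, close) rows."""
    out = [rows[0]]
    for ts, close in rows[1:]:
        t = out[-1][0] + timedelta(minutes=1)
        while t < ts:
            out.append((t, out[-1][1]))  # carry the last known close forward
            t += timedelta(minutes=1)
        out.append((ts, close))
    return out

day = [(datetime(2021, 2, 7, 9, 30), 124.624),
       (datetime(2021, 2, 7, 9, 33), 124.001)]
print(fill_day(day))  # 09:30, 09:31 (filled), 09:32 (filled), 09:33 - four rows
```

Because the function only ever sees one day's rows, it can never invent timestamps on dates that are absent from the input, which is exactly what grouping by date achieves in the pandas solution.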
69,214,628 | 2021-9-16 | https://stackoverflow.com/questions/69214628/invoking-a-constructor-in-a-with-statement | I have the following code: class Test: def __init__(self, name): self.name = name def __enter__(self): print(f'entering {self.name}') def __exit__(self, exctype, excinst, exctb) -> bool: print(f'exiting {self.name}') return True with Test('first') as test: print(f'in {test.name}') test = Test('second') with test: print(f'in {test.name}') Running it produces the following output: entering first exiting first entering second in second exiting second But I expected it to produce: entering first in first exiting first entering second in second exiting second Why isn't the code within my first example called? | The __enter__ method should return the context object. with ... as ... uses the return value of __enter__ to determine what object to give you. Since your __enter__ returns nothing, it implicitly returns None, so test is None. with Test('first') as test: print(f'in {test.name}') test = Test('second') with test: print(f'in {test.name}') So test is none. Then test.name is an error. That error gets raised, so Test('first').__exit__ gets called. __exit__ returns True, which indicates that the error has been handled (essentially, that your __exit__ is acting like an except block), so the code continues after the first with block, since you told Python everything was fine. Consider def __enter__(self): print(f'entering {self.name}') return self You might also consider not returning True from __exit__ unless you truly intend to unconditionally suppress all errors in the block (and fully understand the consequences of suppressing other programmers' errors, as well as KeyboardInterrupt, StopIteration, and various system signals) | 51 | 61 |
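Putting both fixes from the answer together (returning `self` from `__enter__`, and not returning `True` from `__exit__` unless suppression is really wanted), the first block behaves as originally expected. A runnable sketch based on the question's class:

```python
class Test:
    def __init__(self, name):
        self.name = name

    def __enter__(self):
        print(f'entering {self.name}')
        return self  # this is what `with ... as test` binds

    def __exit__(self, exctype, excinst, exctb):
        print(f'exiting {self.name}')
        return False  # let exceptions propagate instead of swallowing them

with Test('first') as test:
    print(f'in {test.name}')
# prints: entering first / in first / exiting first
```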
69,200,881 | 2021-9-15 | https://stackoverflow.com/questions/69200881/how-to-get-python-unittest-to-show-log-messages-only-on-failed-tests | Issue I've been trying to use the unittest --buffer flag to suppress logs for successful tests and show them for failing tests. But it seems to show the log output regardless. Is this a quirk of the logging module? How can I get the log output only on failing tests? Is there a special config on the logger that is required? Other questions and answers I've found have taken a brute force approach to disable all logging during tests. Sample Code import logging import unittest import sys logger = logging.getLogger('abc') logging.basicConfig( format = '%(asctime)s %(module)s %(levelname)s: %(message)s', level = logging.INFO, stream = sys.stdout) class TestABC(unittest.TestCase): def test_abc_pass(self): logger.info('log abc in pass') print('print abc in pass') self.assertTrue(True) def test_abc_fail(self): logger.info('log abc in fail') print('print abc in fail') self.assertTrue(False) Test Output $ python -m unittest --buffer 2021-09-15 17:38:48,462 test INFO: log abc in fail F Stdout: print abc in fail 2021-09-15 17:38:48,463 test INFO: log abc in pass . ====================================================================== FAIL: test_abc_fail (test.TestABC) ---------------------------------------------------------------------- Traceback (most recent call last): File ".../test.py", line 22, in test_abc_fail self.assertTrue(False) AssertionError: False is not true Stdout: print abc in fail ---------------------------------------------------------------------- Ran 2 tests in 3.401s FAILED (failures=1) So the buffer does successfully suppress the output from the print statement in the passing test. But it doesn't suppress the log output. | A Solution for the Sample Code Just before the test runs we need to update the stream on the log handler to point to the buffer unittest has set up for capturing the test output. 
import logging import unittest import sys logger = logging.getLogger('abc') logging.basicConfig( format = '%(asctime)s %(module)s %(levelname)s: %(message)s', level = logging.INFO, stream = sys.stdout) class LoggerRedirector: # Keep a reference to the real streams so we can revert _real_stdout = sys.stdout _real_stderr = sys.stderr @staticmethod def all_loggers(): loggers = [logging.getLogger()] loggers += [logging.getLogger(name) for name in logging.root.manager.loggerDict] return loggers @classmethod def redirect_loggers(cls, fake_stdout=None, fake_stderr=None): if ((not fake_stdout or fake_stdout is cls._real_stdout) and (not fake_stderr or fake_stderr is cls._real_stderr)): return for logger in cls.all_loggers(): for handler in logger.handlers: if hasattr(handler, 'stream'): if handler.stream is cls._real_stdout: handler.setStream(fake_stdout) if handler.stream is cls._real_stderr: handler.setStream(fake_stderr) @classmethod def reset_loggers(cls, fake_stdout=None, fake_stderr=None): if ((not fake_stdout or fake_stdout is cls._real_stdout) and (not fake_stderr or fake_stderr is cls._real_stderr)): return for logger in cls.all_loggers(): for handler in logger.handlers: if hasattr(handler, 'stream'): if handler.stream is fake_stdout: handler.setStream(cls._real_stdout) if handler.stream is fake_stderr: handler.setStream(cls._real_stderr) class TestABC(unittest.TestCase): def setUp(self): # unittest has reassigned sys.stdout and sys.stderr by this point LoggerRedirector.redirect_loggers(fake_stdout=sys.stdout, fake_stderr=sys.stderr) def tearDown(self): LoggerRedirector.reset_loggers(fake_stdout=sys.stdout, fake_stderr=sys.stderr) # unittest will revert sys.stdout and sys.stderr after this def test_abc_pass(self): logger.info('log abc in pass') print('print abc in pass') self.assertTrue(True) def test_abc_fail(self): logger.info('log abc in fail') print('print abc in fail') self.assertTrue(False) The How and Why The issue is a side effect from both how unittest is 
capturing the stdout and stderr for the test and how logging is usually set up. Usually logging is set up very early in the program execution and this means the log handlers will store a reference to sys.stdout and sys.stderr in their instances (code link). However, just before the test runs, unittest creates a io.StringIO() buffer for both streams and reassigns sys.stdout and sys.stderr to the new buffers (code link). So right before the test runs, in order to get unittest to capture the log output, we need to tell the log handlers to point their streams to the buffer that unittest has set up. After the test has finished, the streams are reverted back to normal. However, unittest creates a new buffer for each test so we need to update the log handlers both before and after each test. Since the log handlers are pointed to the buffer that unittest set up, if there was a failed test, then all the logs for that test will be displayed when using the --buffer option. The LoggerRedirector class in the solution above just offers convenience methods to reassign all the handlers that might be pointed to sys.stdout or sys.stderr to the new buffer that unittest has set up and then an easy way to revert them. Since by the time setUp() runs, unittest has already reassigned sys.stdout and sys.stderr we are using these to reference the new buffer unittest has set up. | 5 | 6 |
69,185,877 | 2021-9-15 | https://stackoverflow.com/questions/69185877/should-i-use-python-native-multithread-or-multiple-tasks-in-airflow | I'm refactoring a .NET application to Airflow. This .NET application uses multiple threads to extract and process data from a MongoDB (without multiple threads the process takes ~10hrs, with multiple threads I can reduce this). In each document on MongoDB I have a key value named process. This value is used to control which thread processes the document. I'm going to develop an Airflow DAG to optimize this process. My doubt is about performance and the best way to do this. My application should have multiple tasks (I will control the process variable in the input of the Python method). Or should I use only 1 task and use Python multithreading inside this task? The image below illustrates my doubt. Multi Task X Single Task (Multi Threading) I know that using multiple tasks I'm going to do more DB reads (1 per task). Although, using Python multithreading I know I'll have to do a lot of control processing inside the task method. What is the best, fastest and optimized way to do this? | It really depends on the nature of your processing. Multi-threading in Python can be limiting because of the GIL (Global Interpreter Lock) - there are some operations that require an exclusive lock, and this limits the parallelism it can achieve. Especially if you mix CPU and I/O operations, the effect might be that a lot of time is spent by threads waiting for the lock. But it really depends on what you do - you need to experiment to see if the GIL affects your multithreading. Multiprocessing (which is used by Airflow for the Local Executor) is better because each process runs effectively a separate Python interpreter. So each process has its own GIL - at the expense of resources used (each process uses its own memory, sockets and so on). Each task in Airflow will run in a separate process. However, Airflow offers a bit more - it also offers multi-machine. 
You can run separate workers with X processes on Y machines, effectively running up to X*Y processes at a time. Unfortunately, Airflow is (currently) not well suited to run a dynamic number of parallel tasks of the same type. Specifically, if you would like to split the load into N pieces and run each piece in a separate task - this would only really work if N is constant and does not change over time for the same DAG (for example, if you know you have 10 machines with 4 CPUs, you'd typically want to run 10*4 = 40 tasks at a time, so you'd have to split your job into 40 tasks). And it cannot change dynamically between runs really - you'd have to write your DAG to run 40 parallel tasks every time it runs. Not sure if I helped, but there is no single "best optimised" answer - you need to experiment and check what works best for your case. | 5 | 3 |
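To make the single-task option concrete, here is a minimal sketch of routing documents to workers by the `process` key described in the question. The `handle_chunk` body is a made-up stand-in for the real MongoDB extract/process step (which, being I/O-bound, releases the GIL while waiting on the database):

```python
from concurrent.futures import ThreadPoolExecutor

def handle_chunk(docs):
    # stand-in for the real per-document extract/process work
    return [d["value"] * 2 for d in docs]

def process_all(docs, n_workers=4):
    # route each document to a worker by its `process` key, as in the question
    chunks = [[] for _ in range(n_workers)]
    for d in docs:
        chunks[d["process"] % n_workers].append(d)
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        results = ex.map(handle_chunk, chunks)  # one chunk per worker, in order
    return [item for chunk in results for item in chunk]

docs = [{"process": i % 4, "value": i} for i in range(8)]
print(process_all(docs))
```

Swapping `ThreadPoolExecutor` for `ProcessPoolExecutor` gives the multiprocessing variant discussed above, at the cost of per-process memory and pickling overhead.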
69,205,854 | 2021-9-16 | https://stackoverflow.com/questions/69205854/iterating-over-dictionary-in-python-and-using-each-value | I am trying to iterate over a dictionary that looks like this: account_data = {"a": "44196397", "b": "2545086098", "c": "210623431", "d": "1374059147440820231", "e": "972970759416111104", "f": "1060627757812641792", "g": "1368361032796700674", "h": "910899153772916736", "i": "887748030304329728", "j": "1381341090", "k": "2735504155", "l": "150324112", } The goal is to use each ID to scrape some data, therefore I have a method that takes the corresponding user ID and gets the data from it. At first I had a method for every ID in the dict, but now I want to change it so that I have one method which iterates over the dictionary, takes one ID at a time and makes the API request; when it is finished, the next one is made, and so on. The problem is I can't iterate over the dictionary; I always just access the first one. I am relatively new to Python since I mainly used Java. Maybe a dictionary is the wrong data structure for this task? Any help appreciated. Edit: This is my old code to iterate over the dictionary: def iterate_over_dict(): for key, value in account_data.items(): return value I then continue with using the id in this function: def get_latest_data(): chosen_id = iterate_over_dict() print('id: ', chosen_id) # get my tweets tweets = get_tweets_from_user_id(chosen_id) # get tweet_id of latest tweet tweet_id = tweets.id.values[0] # get tweet_text of latest tweet tweets = tweets.text.values[0] # check if new tweet - if true -> check if contains data = check_for_new_tweet(tweet_id, tweets) if data is not None: print("_________") print('1 ', data) But I always only use the first one. I think in Java it wouldn't be a problem for me since I can just use an index to iterate from 0 to n, but is there something similar for dictionaries? I also want to run the get_latest_data method every time a new ID is chosen from the dict | Use a for loop for iteration. 
my_dict = {'a': 1, 'b': 2, 'c': 3} for key, value in my_dict.items(): print(key + " " + str(value)) for key in my_dict: print(key + " " + str(my_dict[key])) The first one iterates over the items and gives you keys and values. The second one iterates over the keys and then accesses the value from the dictionary using the key. (The variable is named my_dict here rather than dict so that the built-in dict type is not shadowed.) | 22 | 47 |
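Since the asker is coming from Java and wants index-style access: dictionaries preserve insertion order in Python 3.7+, so `enumerate` or a materialised key list gives you the 0-to-n style loop. A small sketch using a shortened version of the question's dict:

```python
account_data = {"a": "44196397", "b": "2545086098", "c": "210623431"}

# index-like iteration, closest to a Java indexed loop
for i, (key, value) in enumerate(account_data.items()):
    print(i, key, value)

# or materialise the keys and index into them directly
keys = list(account_data)
print(keys[0], account_data[keys[0]])  # a 44196397
```

Note that the original `iterate_over_dict` returned inside the loop on the first iteration, which is why only the first ID was ever used; moving the per-ID work inside the loop body (or yielding instead of returning) fixes that.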
69,192,732 | 2021-9-15 | https://stackoverflow.com/questions/69192732/default-python-paths-when-using-vscode-interactive-window | Suppose the Python package mypackage is at a non-standard location on my machine and I am running Python code in the VSCode interactive window. If I type import mypackage it will not be found. This can be remedied by doing sys.path.append("/path/to/mypackage"). However, I would like to set things up so that within a given project each time I open the interactive window a set of paths, like /path/to/mypackage, have already been added to the search path. Is there a way to do this? | You can do this to modify the PYTHONPATH: Add these in the settings.json file to modify the PYTHONPATH in the terminal: "terminal.integrated.env.windows": { "PYTHONPATH": "xxx/site-packages" } Create a .env file under your workspace, and add these settings in it to modify the PYTHONPATH for the extension and debugger: PYTHONPATH=xxx/site-packages You can refer to the documentation to understand the effects of these two configurations. | 6 | 3 |
69,201,761 | 2021-9-16 | https://stackoverflow.com/questions/69201761/python-selenium-chrome-driver-ssl-certificate-verify-failed-unable-to-get-local | When trying to run undetected-chromedriver I was running into the following error: urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate | If you're using macOS go to Macintosh HD > Applications > Python3.9 folder (or whatever version of python you're using) > double click on "Install Certificates.command" file. | 4 | 13 |
69,201,168 | 2021-9-16 | https://stackoverflow.com/questions/69201168/modulenotfounderror-no-module-named-project-when-using-sys-path-append | I'm trying to import models from a folder in the parent directory. I'm using sys.path.append(). My project structure: -Project folder1 file1.py ... folder2 file2.py ... In file1.py file: sys.path.append('../Project') from Project.folder2 import file2 I then get a: ModuleNotFoundError: No module named Project I know there are other ways but this seems like the simplest. I'm not sure if I need to put the absolute path to the Project folder, but I'm hoping not since I'll be running this Project on different computers (diff abs path). | There are 2 errors in your code: The Project directory is not just 1 level up. From the point of view of file1.py, it is actually 2 levels up. See this: $ cd .. (venv) nponcian 1$ tree . └── Project ├── folder1 │ └── file1.py └── folder2 └── file2.py (venv) nponcian 1$ cd Project/folder1/ (venv) nponcian folder1$ ls .. folder1 folder2 (venv) nponcian folder1$ ls ../.. Project Even if the above works, adding a relative path as a string would literally append that string as its raw value. So if you add a print(sys.path), it would display something like this: ['/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload', '../Project', '.'] It literally added '../Project', so even if Python started searching for the target modules in folder2, it still wouldn't find them, because it has no way of knowing what '../Project' is relative to. Solution What you need to add is the absolute path. If your concern is that it might change, that is fine, because we don't need a fixed absolute path. We can get the absolute path through the location of the current file being executed, e.g. file1.py, and then extract the parent directory needed. Thus, this will work regardless of whether the absolute paths change, because the way we are getting it is always relative to file1.py. 
Try this: Project/folder1/file1.py from pathlib import Path import sys sys.path.append(str(Path(__file__).parent.parent.parent)) # 1. <.parent> contains this file1.py 2. <.parent.parent> contains folder1 3. <.parent.parent.parent> contains Project ... the rest of the file | 6 | 6 |
69,198,303 | 2021-9-15 | https://stackoverflow.com/questions/69198303/sets-the-default-value-of-a-parameter-based-on-the-value-of-another-parameter | So I want to create a function that generates consecutive numbers from 'start' to 'end' as many as 'size'. For the iteration, it will be calculated inside the function. But I have problem to set default value of parameter 'end'. Before I explain further, here's the code: # Look at this ------------------------------- # || # \/ def consecutive_generator(size=20, start=0, end=(size+start)): i = start iteration = (end-start)/size arr = [] temp_size = 0 while temp_size < size: arr.append(i) i += iteration temp_size += 1 return arr # with default end, so the 'end' parameter will be 11 c1= consecutive_generator(10, start=1) print(c1) # with end set c2= consecutive_generator(10, end=20) print(c2) As can be seen above (on the default value of the 'end' parameter), what I want to achieve is the 'end' parameter whose default value is 'start' + 'size' parameters (then the iteration will be 1) The output will definitely be an error. So how can i do this? (this is my first time asking on stackoverflow sorry if i made a mistake) (Closed) | This is a pretty standard pattern: def consecutive_generator(size=20, start=0, end=None): if end is None: end = size + start | 6 | 6 |
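Folding the None-default pattern back into the question's function, a complete version might look like this (a list comprehension replaces the while loop, but it produces the same values for the question's two examples):

```python
def consecutive_generator(size=20, start=0, end=None):
    if end is None:
        end = size + start  # default makes the step exactly 1
    step = (end - start) / size
    return [start + i * step for i in range(size)]

print(consecutive_generator(10, start=1))  # [1.0, 2.0, ..., 10.0]
print(consecutive_generator(10, end=20))   # [0.0, 2.0, ..., 18.0]
```

Using None as the sentinel (rather than a mutable or computed default) is the standard idiom, because default expressions are evaluated once at function definition time and cannot refer to other parameters.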
69,193,013 | 2021-9-15 | https://stackoverflow.com/questions/69193013/adding-a-column-with-one-single-categorical-value-to-a-pandas-dataframe | I have a pandas.DataFrame df and would like to add a new column col with one single value "hello". I would like this column to be of dtype category with the single category "hello". I can do the following. df["col"] = "hello" df["col"] = df["col"].astype("category") Do I really need to write df["col"] three times in order to achieve this? After the first line I am worried that the intermediate dataframe df might take up a lot of space before the new column is converted to categorical. (The dataframe is rather large with millions of rows and the value "hello" is actually a much longer string.) Are there any other straightforward, "short and snappy" ways of achieving this while avoiding the above issues? An alternative solution is df["col"] = pd.Categorical(itertools.repeat("hello", len(df))) but it requires itertools and the use of len(df), and I am not sure how memory usage is under the hood. | We can explicitly build the Series of the correct size and type instead of implicitly doing so via __setitem__ then converting: df['col'] = pd.Series('hello', index=df.index, dtype='category') Sample Program: import pandas as pd df = pd.DataFrame({'a': [1, 2, 3]}) df['col'] = pd.Series('hello', index=df.index, dtype='category') print(df) print(df.dtypes) print(df['col'].cat.categories) a col 0 1 hello 1 2 hello 2 3 hello a int64 col category dtype: object Index(['hello'], dtype='object') | 6 | 4 |
69,186,176 | 2021-9-15 | https://stackoverflow.com/questions/69186176/determine-if-subclass-has-a-base-classs-method-implemented-in-python | I have a class that extends a base class. Upon instantiation, I want to check if the subclass has one of the classes implemented from its base, but I'm not sure the best way. hasattr(self, '[method]') returns the method from super if not implemented by the child, so I'm trying to tell the difference. Here is an example: class Base : def __init__ ( self,) : pass def fail (self,) : pass # Now create the subclass w/o .fail class Task ( Base ) : def __init__ ( self, ): print( hasattr( self, 'fail' ) ) # < returns True When Task() is instantiated it prints True because Task inherits .fail from Base. But in this case, I want to know that Task Does Not implement .fail, so I want a False returned somehow. It seems like I'm looking for something like isimplemented( self, 'fail' ). What am I missing? | I'm not sure I understand correctly, but it sounds like you might be looking for Abstract Base Classes. (Documentation here, tutorial here.) If you specify an abstractmethod in a base class that inherits from abc.ABC, then attempting to instantiate a subclass will fail unless that subclass overrides the abstractmethod. from abc import ABC, abstractmethod class Base(ABC): @abstractmethod def fail(self): pass class Task(Base): pass class Task2(Base): def fail(self): pass # this raises an exception # `fail` method has not been overridden in the subclass. t1 = Task() # this succeeds # `fail` method has been overridden in the subclass. t2 = Task2() If you want a check to happen at class definition time rather than instance instantiation time, another option is to write an __init_subclass__ method in your base class, which is called every time you subclass your base class or you subclass a class inheriting from your base class. 
(You don't have to raise an exception in __init_subclass__ — you could just add a fail_overriden boolean attribute to the class, or do anything you like really.) class Base: def fail(self): pass def __init_subclass__(cls, **kwargs): if cls.fail == Base.fail: raise TypeError( 'Subclasses of `Base` must override the `fail` method' ) super().__init_subclass__(**kwargs) # this class definition raises an exception # because `fail` has not been overridden class Task(Base): pass # this class definition works fine. class Task2(Base): def fail(self): pass And if you just want each instance to tell you whether fail was overridden in their subclass, you can do this: class Base: def __init__(self): print(type(self).fail != Base.fail) def fail(self): pass class Task(Base): def __init__(self): super().__init__() class Task2(Base): def __init__(self): super().__init__() def fail(self): pass t1 = Task() # prints "True" t2 = Task2() # prints "False" | 5 | 4 |
69,188,655 | 2021-9-15 | https://stackoverflow.com/questions/69188655/how-to-add-a-row-in-a-special-form | I have a pandas.DataFrame of the form index df df1 0 0 111 1 1 111 2 2 111 3 3 111 4 0 111 5 2 111 6 3 111 7 0 111 8 2 111 9 3 111 10 0 111 11 1 111 12 2 111 13 3 111 14 0 111 15 1 111 16 2 111 17 3 111 18 1 111 19 2 111 20 3 111 I want to create a dataframe in which column df repeats 0,1,2,3. But there is something missing in the data. I'm trying to fill in the blanks with 0 by appending row values. Here is my expected result: index df df1 0 0 111 1 1 111 2 2 111 3 3 111 4 0 111 5 1 0 6 2 111 7 3 111 8 0 111 9 1 0 10 2 111 11 3 111 12 0 111 13 1 111 14 2 111 15 3 111 16 0 111 17 1 111 18 2 111 19 3 111 20 0 0 21 1 111 22 2 111 23 3 111 How can I achieve this? edit: What should I do if my input is as below? index df1 df2 0 0 111 1 1 111 2 2 111 3 3 111 4 0 111 5 3 111 6 1 111 7 2 111 Here is my expected result: index df1 df2 0 0 111 1 1 111 2 2 111 3 3 111 4 0 111 5 1 0 6 2 0 7 3 111 8 0 0 9 1 111 10 2 111 11 3 0 | Using @Mozway's idea, and combining with some helper functions from pyjanitor, the missing values can be made explicit, and later filled. Again, this is just another option : # pip install pyjanitor import pandas as pd import janitor as jn (df.assign(temp = df.df.diff().le(0).cumsum()) .complete('df', 'temp') # helper function .fillna(0) # relevant if you care about the order .sort_values('temp', kind='mergesort') # helper function .select_columns('df*') # or .drop(columns='temp') ) df df1 0 0 111.0 6 1 111.0 12 2 111.0 18 3 111.0 1 0 111.0 7 1 0.0 13 2 111.0 19 3 111.0 2 0 111.0 8 1 0.0 14 2 111.0 20 3 111.0 3 0 111.0 9 1 111.0 15 2 111.0 21 3 111.0 4 0 111.0 10 1 111.0 16 2 111.0 22 3 111.0 5 0 0.0 11 1 111.0 17 2 111.0 23 3 111.0 | 5 | 5 |
69,188,743 | 2021-9-15 | https://stackoverflow.com/questions/69188743/how-to-use-a-service-account-to-authorize-google-sheets |
I am trying to open a private Google Sheet using Python. The end goal here is to read that private sheet's data into a JSON object. I have made sure to create a Google Cloud project, enable the APIs, and create a service account. The service account email has been shared on the sheet and added as an editor. I also created OAuth keys for a desktop application. This is required since the file is private. I know I need to somehow request a token to use for access to the Sheets API, but I am at a loss for how to create a request and utilize the client_secret file generated from the OAuth keys. I figured the Google API would have a function where you can pass this file directly, but I am lost in the documentation. Any insight would be appreciated!

| All you need to do is supply the library with the location of the clientSecret.json file you should have downloaded from the Google Cloud console. This method should build the service for you, and you can then make requests to the API. It will handle all the authorization.

```python
from apiclient.discovery import build
from oauth2client.service_account import ServiceAccountCredentials


def get_service(api_name, api_version, scopes, key_file_location):
    """Get a service that communicates to a Google API.

    Args:
        api_name: The name of the api to connect to.
        api_version: The api version to connect to.
        scopes: A list auth scopes to authorize for the application.
        key_file_location: The path to a valid service account JSON key file.

    Returns:
        A service that is connected to the specified API.
    """
    credentials = ServiceAccountCredentials.from_json_keyfile_name(
        key_file_location, scopes=scopes)

    # Build the service object.
    service = build(api_name, api_version, credentials=credentials)

    return service
```

The best example I know of for service account authentication with Python is the Google Analytics quickstart. If you have any issues altering it for Google Sheets, let me know and I can try to help.

Calling it should be something like this:

```python
def main():
    # Define the auth scopes to request.
    scope = 'https://www.googleapis.com/auth/spreadsheets'
    key_file_location = '<REPLACE_WITH_JSON_FILE>'

    # Authenticate and construct service.
    service = get_service(
        api_name='sheets',
        api_version='v4',
        scopes=[scope],
        key_file_location=key_file_location)

    data = your_method_to_call_sheets(service)
```

How to create clientSecret.json: remember to enable the Google Sheets API under Library. | 6 | 6 |
69,188,132 | 2021-9-15 | https://stackoverflow.com/questions/69188132/how-to-convert-all-float64-columns-to-float32-in-pandas |
Is there a generic way to convert all float64 values in a pandas dataframe to float32 values, without changing uint16 to float32? I don't know the signal names in advance, but I just want to have no float64. Something like: if float64, then convert to float32, else do nothing. The structure of the data is:

```
DF.dtypes
Counter     uint16
p_007      float64
p_006      float64
p_005      float64
p_004      float64
```

| Try this:

```python
df[df.select_dtypes(np.float64).columns] = df.select_dtypes(np.float64).astype(np.float32)
```

| 16 | 16 |
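A quick sketch to confirm the rule only touches the float64 columns and leaves uint16 alone (the column names here just mirror the question):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Counter": np.array([1, 2, 3], dtype=np.uint16),
    "p_007": [0.1, 0.2, 0.3],   # float64 by default
    "p_006": [1.5, 2.5, 3.5],
})

float_cols = df.select_dtypes(np.float64).columns
df[float_cols] = df[float_cols].astype(np.float32)

print(df.dtypes)   # Counter stays uint16, p_* become float32
```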
69,187,685 | 2021-9-15 | https://stackoverflow.com/questions/69187685/getting-attributeerror-module-base64-has-no-attribute-decodestring-error-wh |
Issue description: Getting `AttributeError: module 'base64' has no attribute 'decodestring'` error while running on Python 3.9.6.

Steps to reproduce: Below is a dummy program. While running on Python 3.9.6, I am getting the `AttributeError: module 'base64' has no attribute 'decodestring'` error:

```python
from ldif3 import LDIFParser

parser = LDIFParser(open('dse3.ldif', 'rb'))
for dn, entry in parser.parse():
    if dn == "cn=Schema Compatibility,cn=plugins,cn=config":
        if entry['nsslapd-pluginEnabled'] == ['on']:
            print('Entry record: %s' % dn)
```

Error message:

```
python3.9 1.py
Traceback (most recent call last):
  File "/Users/rasrivas/local_test/1.py", line 4, in <module>
    for dn, entry in parser.parse():
  File "/Users/rasrivas/local_test/venvpy3.9/lib/python3.9/site-packages/ldif3.py", line 384, in parse
    yield self._parse_entry_record(block)
  File "/Users/rasrivas/local_test/venvpy3.9/lib/python3.9/site-packages/ldif3.py", line 357, in _parse_entry_record
    attr_type, attr_value = self._parse_attr(line)
  File "/Users/rasrivas/local_test/venvpy3.9/lib/python3.9/site-packages/ldif3.py", line 315, in _parse_attr
    attr_value = base64.decodestring(line[colon_pos + 2:])
AttributeError: module 'base64' has no attribute 'decodestring'
```

Python version: 3.9.6
Operating system: macOS 11.5.2
python-ldap version: ldif3-3.2.2

| From the docs for Python 3.8, base64.decodestring() is described as a "Deprecated alias of decodebytes()." It looks like the base64.decodestring() function has been deprecated since Python 3.1, and removed in Python 3.9. You will want to use the base64.decodebytes() function instead. | 15 | 32 |
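A minimal sketch of the replacement call:

```python
import base64

encoded = base64.encodebytes(b"hello world")

# decodestring() was removed in Python 3.9; decodebytes() is its replacement
decoded = base64.decodebytes(encoded)
print(decoded)   # b'hello world'
```

For the question itself, the fix belongs in the ldif3 library (or a newer release of it), since that is where the removed function is called.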
69,186,311 | 2021-9-15 | https://stackoverflow.com/questions/69186311/python-input-function-not-working-in-vs-code |
```python
balance = 100
print('Current Balance: ', balance)
while balance > 0:
    print('1. WITHDRAW')
    print('2. DEPOSIT')
    choice = input("Select an option... ")
    if (choice == 1):
        print('1')
    elif (choice == 2):
        print('2')
    else:
        print('test')
```

When I run the code with the Code Runner extension, the code shows in the terminal. However, when it gets to the input function it freezes, as if it's asking me to input some data, but I can't even type a number or letter. This is what the terminal shows:

```
[Running] python -u "c:\Users\bowen\Desktop\CSE 120\PROJECT 3\main.py"
Current Balance:  100
1. WITHDRAW
2. DEPOSIT
Select an option...
```

| Code Runner shows results in OUTPUT and doesn't accept inputs by default. Add `"code-runner.runInTerminal": true` in settings.json; then you can input data. | 9 | 18 |
69,186,179 | 2021-9-15 | https://stackoverflow.com/questions/69186179/2d-alpha-shape-concave-hull-problem-in-python |
I have a large set of 2D points that I've downsampled into a 44x2 numpy array (array defined later). I am trying to find the bounding shape of those points, which is effectively a concave hull. In the 2nd image I've manually marked an approximate bounding shape that I am hoping to get. I have tried using alphashape and the Delaunay triangulation method from here, both methods providing the same answer. Unfortunately, I don't seem to be able to achieve what I need, regardless of the alpha parameters. I've tried some manual settings and alphaoptimize, some examples of which are below. Is there something critical I'm misunderstanding about alphashape? The documentation seems very clear, but obviously I'm missing something.

```python
import numpy as np
import alphashape
from descartes import PolygonPatch
import matplotlib.pyplot as plt

points = np.array(
    [[0.16, 3.98], [-0.48, 3.33], [-0.48, 4.53], [0.1, 3.67], [0.04, 5.67],
     [-7.94, 3.02], [-18.16, 3.07], [-0.15, 5.67], [-0.26, 5.14], [-0.1, 5.11],
     [-0.96, 5.48], [-0.03, 3.86], [-0.12, 3.16], [0.32, 4.64], [-0.1, 4.32],
     [-0.84, 4.28], [-0.56, 3.16], [-6.85, 3.28], [-0.7, 3.24], [-7.2, 3.03],
     [-1.0, 3.28], [-1.1, 3.28], [-2.4, 3.28], [-2.6, 3.28], [-2.9, 3.28],
     [-4.5, 3.28], [-12.3, 3.28], [-14.8, 3.28], [-16.7, 3.28], [-17.8, 3.28],
     [-0, 3.03], [-1, 3.03], [-2.1, 3.03], [-2.8, 3.03], [-3.2, 3.03],
     [-5, 3.03], [-12, 3.03], [-14, 3.03], [-17, 3.03], [-18, 3.03],
     [-0.68, 4.86], [-1.26, 3.66], [-1.71, 3.51], [-9.49, 3.25]])

alpha = 0.1
# note: use a name other than "alphashape" to avoid shadowing the module
alpha_shape = alphashape.alphashape(points, alpha)

fig = plt.figure()
ax = plt.gca()
ax.scatter(points[:, 0], points[:, 1])
ax.add_patch(PolygonPatch(alpha_shape, alpha=0.2))
plt.show()
```

| The plots that you attached are misleading, since the scales on the x-axis and the y-axis are very different. If you set both axes to the same scale, you obtain a very different plot.

Since differences between x-coordinates of points are on the average much larger than differences between y-coordinates, you cannot obtain an alpha shape resembling your desired result. For larger values of alpha, points scattered along the x-axis will not be connected by edges, since the alpha shape will use circles too small to connect these points. For values of alpha small enough that these points get connected, you will obtain the long edges on the right-hand side of the plot.

You can fix this issue by rescaling the y-coordinates of all points, effectively stretching the plot in the vertical direction. For example, multiplying the y-coordinates by 7 and setting alpha = 0.4 gives the desired picture. | 5 | 5 |
69,137,780 | 2021-9-10 | https://stackoverflow.com/questions/69137780/provide-additional-custom-metric-to-lightgbm-for-early-stopping |
I am running a binary classification in LightGBM using the training API and want to stop on a custom metric while still tracking one or more builtin metrics. It's not clear if this is possible, though.

Here we can disable the default binary_logloss metric and only track our custom metric:

```python
import lightgbm as lgb

def my_eval_metric(...):
    ...

d_train = lgb.Dataset(...)
d_validate = lgb.Dataset(...)

params = {
    "objective": "binary",
    "metric": "custom",
}

evals_result = {}

model = lgb.train(
    params,
    d_train,
    valid_sets=[d_validate],
    feval=my_eval_metric,
    early_stopping_rounds=10,
    evals_result=evals_result,
)
```

If instead we let metric be default, we will also track binary_logloss, but we will stop on both metrics instead of just on our custom metric:

```python
params = {
    "objective": "binary",
    # "metric": "custom",
}
```

We can set first_metric_only in the params, but now we will stop only on binary_logloss as, apparently, it's the first metric:

```python
params = {
    "objective": "binary",
    "first_metric_only": True,
}
```

Other things that probably work but seem like a pain:

- It appears in the sklearn API that you can specify a list of evaluation metrics that intersperse callables for custom metrics and strings for builtin metrics; however, I would prefer not to switch to the sklearn API.
- I could reimplement binary_logloss and pass it as a custom evaluation metric in a list with my other custom metric and use first_metric_only; however, it seems like I shouldn't have to do that.

Things that don't work:

- feval=[my_eval_metric, 'binary_logloss'] in the lgb.train call. Complains that a string is not callable.
- metric: [my_eval_metric, 'binary_logloss'] in the params set. Warns Unknown parameter: my_eval_metric and then errors when training starts with ValueError: For early stopping, at least one dataset and eval metric is required for evaluation.

Am I missing something obvious or is this a small hole in the LightGBM API? This is on version 3.2.1. On version 3.0.0, it seems like it's totally impossible to pass multiple custom evaluation metrics in the training API. I'm not sure about the sklearn API there.

| If you are asking "how do I perform early stopping based on a custom evaluation metric function?", that can be achieved by setting parameter metric to the string "None". That will lead LightGBM to skip the default evaluation metric based on the objective function (binary_logloss, in your example) and only perform early stopping on the custom metric function you've provided in feval.

The example below, using lightgbm==3.2.1 and scikit-learn==0.24.1 on Python 3.8.8, reproduces this behavior.

```python
import lightgbm as lgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)

dtrain = lgb.Dataset(
    data=X_train,
    label=y_train
)

dvalid = lgb.Dataset(
    data=X_test,
    label=y_test,
    reference=dtrain
)

def _constant_metric(dy_pred, dy_true):
    """An eval metric that always returns the same value"""
    metric_name = 'constant_metric'
    value = 0.708
    is_higher_better = False
    return metric_name, value, is_higher_better

evals_result = {}
model = lgb.train(
    params={
        "objective": "binary",
        "metric": "None",
        "num_iterations": 100,
        "first_metric_only": True,
        "verbose": 0,
        "num_leaves": 8
    },
    train_set=dtrain,
    valid_sets=[dvalid],
    feval=_constant_metric,
    early_stopping_rounds=5,
    evals_result=evals_result,
)
```

You can see in the logs that the custom metric function I've provided is evaluated against the validation set, and training stops after early_stopping_rounds consecutive rounds without improvement.

```
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000846 seconds.
You can set `force_col_wise=true` to remove the overhead.
[1]	valid_0's constant_metric: 0.708
Training until validation scores don't improve for 5 rounds
[2]	valid_0's constant_metric: 0.708
[3]	valid_0's constant_metric: 0.708
[4]	valid_0's constant_metric: 0.708
[5]	valid_0's constant_metric: 0.708
[6]	valid_0's constant_metric: 0.708
Early stopping, best iteration is:
[1]	valid_0's constant_metric: 0.708
Evaluated only: constant_metric
```

If you are asking "how do I provide a mix of built-in metrics and custom evaluation functions to lgb.train() and get all metrics evaluated, but only use the custom one for early stopping?"... then yes, that is not supported as of lightgbm 3.2.1. | 5 | 14 |
69,183,922 | 2021-9-14 | https://stackoverflow.com/questions/69183922/playwright-auto-scroll-to-bottom-of-infinite-scroll-page |
I am trying to automate the scraping of a site with "infinite scroll" with Python and Playwright. The issue is that Playwright doesn't include, as of yet, a scroll functionality, let alone an infinite auto-scroll functionality. From what I found on the net and my personal testing, I can automate an infinite or finite scroll using the page.evaluate() function and some JavaScript code. For example, this works:

```python
for i in range(20):
    page.evaluate('var div = document.getElementsByClassName("comment-container")[0];div.scrollTop = div.scrollHeight')
    page.wait_for_timeout(500)
```

The problem with this approach is that it will either work by specifying a number of scrolls or by telling it to keep going forever with a while True loop. I need to find a way to tell it to keep scrolling until the final content loads. This is the JavaScript that I am currently trying in page.evaluate():

```javascript
var intervalID = setInterval(function() {
    var scrollingElement = (document.scrollingElement || document.body);
    scrollingElement.scrollTop = scrollingElement.scrollHeight;
    console.log('fail')
}, 1000);

var anotherID = setInterval(function() {
    if ((window.innerHeight + window.scrollY) >= document.body.offsetHeight) {
        clearInterval(intervalID);
    }
}, 1000)
```

This does not work either in my Firefox browser or in the Playwright Firefox browser. It returns immediately and doesn't execute the code in intervals. I would be grateful if someone could tell me how I can, using Playwright, create an auto-scroll function that will detect and stop when it reaches the bottom of a dynamically loading webpage.

| The new Playwright version has a scroll function. It's called mouse.wheel(x, y). In the below code, we'll be attempting to scroll through youtube.com, which has an "infinite scroll":

```python
from playwright.sync_api import Playwright, sync_playwright
import time


def run(playwright: Playwright) -> None:
    browser = playwright.chromium.launch(headless=False)
    context = browser.new_context()

    # Open new page
    page = context.new_page()
    page.goto('https://www.youtube.com/')

    # page.mouse.wheel(horizontally, vertically): positive y scrolls down,
    # negative y scrolls up
    for i in range(5):  # make the range as long as needed
        page.mouse.wheel(0, 15000)
        time.sleep(2)

    time.sleep(15)

    # ---------------------
    context.close()
    browser.close()


with sync_playwright() as playwright:
    run(playwright)
```

| 19 | 17 |
69,140,016 | 2021-9-11 | https://stackoverflow.com/questions/69140016/grayscale-image-different-in-cv2-imshow-and-matplotlib-pyplot-show |
```python
import cv2
import numpy as np
import math
import sys
import matplotlib.pyplot as plt
import utils as ut

imgGray = cv2.imread(imgfile, cv2.IMREAD_GRAYSCALE)

plt.imshow(imgGray, cmap='gray')
plt.show()

cv2.imshow("", imgGray)
cv2.waitKey(0)
cv2.destroyAllWindows()
sys.exit()
```

I thought both of them would be the same. But as you can see, the two pictures have different grayscale; plt.show() seems darker than cv2.imshow(). How do I make the grayscale in plt.show() the same as in cv2.imshow()?

Python: 3.9.6, opencv-python: 4.5.3.56, matplotlib: 3.4.3

| This is the behavior of matplotlib. It finds the minimum and maximum of your picture, makes those black and white, and scales everything in between. This is useful for arbitrary data that may have integer or floating point types, and value ranges between 0.0 and 1.0, or 0 .. 255, or anything else. You can set those limits yourself with vmin and vmax arguments:

```python
plt.imshow(imgGray, cmap='gray', vmin=0, vmax=255)  # if your data range is uint8
```

OpenCV does no such auto-scaling. It has fixed rules. If it's floating point, 0.0 is black and 1.0 is white. If it's uint8, the range is 0 .. 255. To get such auto-ranging in OpenCV, you'll have to scale the data before displaying:

```python
normalized = cv.normalize(
    data, alpha=0.0, beta=1.0,
    norm_type=cv.NORM_MINMAX, dtype=cv.CV_32F)
```

| 4 | 7 |
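The auto-scaling matplotlib applies is equivalent to the min-max normalization in this sketch (the tiny sample image is made up for illustration):

```python
import numpy as np

img = np.array([[50, 100], [150, 200]], dtype=np.uint8)

# what plt.imshow(img, cmap='gray') effectively displays:
# the minimum maps to black, the maximum to white
auto = (img - img.min()) / (img.max() - img.min())

# what cv2.imshow shows for uint8 data: a fixed 0..255 range
fixed = img / 255.0

print(auto.min(), auto.max())   # 0.0 1.0
```

Here 50 becomes pure black and 200 pure white under matplotlib's default, while cv2.imshow would render them as dark and light grays, which is exactly the brightness difference seen in the question.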
69,145,633 | 2021-9-11 | https://stackoverflow.com/questions/69145633/how-to-initialize-a-database-connection-only-once-and-reuse-it-in-run-time-in-py | I am currently working on a huge project, which constantly executes queries. My problem is, that my old code always created a new database connection and cursor, which decreased the speed immensivly. So I thought it's time to make a new database class, which looks like this at the moment: class Database(object): _instance = None def __new__(cls): if cls._instance is None: cls._instance = object.__new__(cls) try: connection = Database._instance.connection = mysql.connector.connect(host="127.0.0.1", user="root", password="", database="db_test") cursor = Database._instance.cursor = connection.cursor() except Exception as error: print("Error: Connection not established {}".format(error)) else: print("Connection established") return cls._instance def __init__(self): self.connection = self._instance.connection self.cursor = self._instance.cursor # Do database stuff here The queries will use the class like so: def foo(): with Database() as cursor: cursor.execute("STATEMENT") I am not absolutly sure, if this creates the connection only once regardless of how often the class is created. Maybe someone knows how to initialize a connection only once and how to make use of it in the class afterwards or maybe knows if my solution is correct. I am thankful for any help! | Explanation The keyword here is clearly class variables. Taking a look in the official documentation, we can see that class variables, other than instance variables, are shared by all class instances regardless of how many class instances exists. Generally speaking, instance variables are for data unique to each instance and class variables are for attributes and methods shared by all instances of the class: So let us asume you have multiple instances of the class. The class itself is defined like below. 
class Dog: kind = "canine" # class variable shared by all instances def __init__(self, name): self.name = name # instance variable unique to each instance In order to better understand the differences between class variables and instance variables, I would like to include a small example here: >>> d = Dog("Fido") >>> e = Dog("Buddy") >>> d.kind # shared by all dogs "canine" >>> e.kind # shared by all dogs "canine" >>> d.name # unique to d "Fido" >>> e.name # unique to e "Buddy" Solution Now that we know that class variables are shared by all instances of the class, we can simply define the connection and cursor like shown below. class Database(object): connection = None def __init__(self): if Database.connection is None: try: Database.connection = mysql.connector.connect(host="127.0.0.1", user="root", password="", database="db_test") except Exception as error: print("Error: Connection not established {}".format(error)) else: print("Connection established") def execute_query(self, sql): cursor = Database.connection.cursor() cursor.execute(sql) As a result, the connection to the database is created once at the beginning and can then be used by every further instance. Note that the cursor is not cached, since it takes essentially no time at all to create a cursor. However, creating a connection is quite expensive, so it is sensible to cache them. | 9 | 18 |
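The same class-variable pattern can be demonstrated with the stdlib sqlite3 module instead of MySQL, which makes it easy to verify that all instances really share one connection (an illustrative sketch; ":memory:" is used only so it runs anywhere):

```python
import sqlite3

class Database:
    connection = None          # class variable shared by all instances

    def __init__(self):
        if Database.connection is None:
            # the connection is created on first instantiation only
            Database.connection = sqlite3.connect(":memory:")

    def execute_query(self, sql):
        cursor = Database.connection.cursor()
        cursor.execute(sql)
        return cursor

a = Database()
b = Database()
print(a.connection is b.connection)   # True: one shared connection
```

Because both instances hold the same connection object, a table created through one instance is immediately visible through the other.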
69,142,306 | 2021-9-11 | https://stackoverflow.com/questions/69142306/auto-format-flake8-linting-errors-in-vscode |
I'm using the flake8 linter for Python and I have many code format issues like "blank line contains whitespace flake8(W293)". I'm trying to auto-fix these linting issues. I have these settings:

```json
"python.linting.enabled": true,
"python.linting.flake8Enabled": true,
"python.linting.lintOnSave": true,
"python.linting.flake8Args": [
    "--ignore=E501",
],
"editor.formatOnSave": true
```

I'm using the default autopep8 formatter but it seems that it does nothing. Nothing happens when I save the file or run the command Format Document. Is there any way to auto-fix these linting errors?

| I would suggest using a formatter, black for instance, to fix the issues detected by your linter. If so, pip install it and add this to your settings.json:

```json
"python.formatting.provider": "black"
```

Then, pressing Alt+Shift+F or Ctrl+S should trigger the formatting of your script. | 14 | 12 |
69,159,247 | 2021-9-13 | https://stackoverflow.com/questions/69159247/camera-calibration-focal-length-value-seems-too-large |
I tried a camera calibration with Python and OpenCV to find the camera matrix. I used the following code from this link: https://automaticaddison.com/how-to-perform-camera-calibration-using-opencv/

```python
import cv2   # Import the OpenCV library to enable computer vision
import numpy as np   # Import the NumPy scientific computing library
import glob   # Used to retrieve files that have a specified pattern

# Path to the image that you want to undistort
distorted_img_filename = r'C:\Users\uid20832\3.jpg'

# Chessboard dimensions
number_of_squares_X = 10   # Number of chessboard squares along the x-axis
number_of_squares_Y = 7    # Number of chessboard squares along the y-axis
nX = number_of_squares_X - 1   # Number of interior corners along x-axis
nY = number_of_squares_Y - 1   # Number of interior corners along y-axis

# Store vectors of 3D points for all chessboard images (world coordinate frame)
object_points = []

# Store vectors of 2D points for all chessboard images (camera coordinate frame)
image_points = []

# Set termination criteria. We stop either when an accuracy is reached or when
# we have finished a certain number of iterations.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# Define real world coordinates for points in the 3D coordinate frame
# Object points are (0,0,0), (1,0,0), (2,0,0) ...., (5,8,0)
object_points_3D = np.zeros((nX * nY, 3), np.float32)

# These are the x and y coordinates
object_points_3D[:, :2] = np.mgrid[0:nY, 0:nX].T.reshape(-1, 2)

def main():
    # Get the file path for images in the current directory
    images = glob.glob(r'C:\Users\Kalibrierung\*.jpg')

    # Go through each chessboard image, one by one
    for image_file in images:

        # Load the image
        image = cv2.imread(image_file)

        # Convert the image to grayscale
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

        # Find the corners on the chessboard
        success, corners = cv2.findChessboardCorners(gray, (nY, nX), None)

        # If the corners are found by the algorithm, draw them
        if success == True:

            # Append object points
            object_points.append(object_points_3D)

            # Find more exact corner pixels
            corners_2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)

            # Append image points
            image_points.append(corners)

            # Draw the corners
            cv2.drawChessboardCorners(image, (nY, nX), corners_2, success)

            # Display the image. Used for testing.
            # cv2.imshow("Image", image)

            # Display the window for a short period. Used for testing.
            # cv2.waitKey(200)

    # Now take a distorted image and undistort it
    distorted_image = cv2.imread(distorted_img_filename)

    # Perform camera calibration to return the camera matrix, distortion
    # coefficients, rotation and translation vectors etc.
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, gray.shape[::-1], None, None)
```

But I think I always get wrong parameters. My focal length is around 1750 in x and y direction from calibration. I think this couldn't be right; it is pretty large. The camera documentation says the focal length is between 4-7 mm. But I am not sure why it is so high from the calibration. Here are some of my photos for the calibration. Maybe something is wrong with them. I moved the chessboard under the camera in different directions, angles and heights. I was also wondering why I don't need the size of the squares in the code. Can someone explain it to me, or did I forget this input somewhere?

| Your misconception is about "focal length". It's an overloaded term.

- "focal length" (unit mm) in the optical part: it describes the distance between the lens plane and image/sensor plane, assuming a focus to infinity
- "focal length" (unit pixels) in the camera matrix: it describes a scale factor for mapping the real world to a picture of a certain resolution

1750 may very well be correct, if you have a high resolution picture (Full HD or something). The calculation goes:

```
f [pixels] = (focal length [mm]) / (pixel pitch [µm / pixel])
```

(take care of the units and prefixes, 1 mm = 1000 µm)

Example: a Pixel 4a phone, which has 1.40 µm pixel pitch and 4.38 mm focal length, has f = ~3128.57 (= fx = fy).

Another example: a Pixel 4a has a diagonal Field of View of approximately 77.7 degrees, and a resolution of 4032 x 3024 pixels, so that's 5040 pixels diagonally. You can calculate:

```
f = (5040 / 2) / tan(~77.7° / 2)
f = ~3128.6 [pixels]
```

And that calculation you can apply to arbitrary cameras for which you know the field of view and picture size. Use horizontal FoV and horizontal resolution if the diagonal resolution is ambiguous. That can happen if the sensor isn't 16:9 but the video you take from it is cropped to 16:9... assuming the crop only crops vertically and leaves the horizontal alone.

Why don't you need the size of the chessboard squares in this code? Because it only calibrates the intrinsic parameters (camera matrix and distortion coefficients). Those don't depend on the distance to the board or any other object in the scene.

If you were to calibrate extrinsic parameters, i.e. the distance of cameras in a stereo setup, then you would need to give the size of the squares. | 4 | 17 |
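The two calculations from the answer, written out as a small script (the Pixel 4a numbers are the ones quoted above):

```python
import math

# route 1: optics
# f [pixels] = focal length [mm] / pixel pitch [µm/pixel], minding the prefixes
focal_length_mm = 4.38
pixel_pitch_um = 1.40
f_from_optics = focal_length_mm * 1000 / pixel_pitch_um   # ~3128.6 pixels

# route 2: field of view and resolution
diag_fov_deg = 77.7
diag_pixels = math.hypot(4032, 3024)                      # 5040.0
f_from_fov = (diag_pixels / 2) / math.tan(math.radians(diag_fov_deg / 2))

print(round(f_from_optics, 1), round(f_from_fov, 1))
```

Both routes land on essentially the same value, which is the consistency check the answer is making.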
69,117,617 | 2021-9-9 | https://stackoverflow.com/questions/69117617/how-to-find-the-lag-between-two-time-series-using-cross-correlation |
Say the two series are:

```python
x = [4,4,4,4,6,8,10,8,6,4,4,4,4,4,4,4,4,4,4,4,4,4,4]
y = [4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,6,8,10,8,6,4,4]
```

Series x clearly lags y by 12 time periods. However, using the following code as suggested in Python cross correlation:

```python
import numpy as np
c = np.correlate(x, y, "full")
lag = np.argmax(c) - c.size/2
```

leads to an incorrect lag of -0.5. What's wrong here?

| If you want to do it the easy way you should simply use scipy correlation_lags. Also, remember to subtract the mean from the inputs.

```python
import numpy as np
from scipy import signal

x = [4,4,4,4,6,8,10,8,6,4,4,4,4,4,4,4,4,4,4,4,4,4,4]
y = [4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,6,8,10,8,6,4,4]

correlation = signal.correlate(x - np.mean(x), y - np.mean(y), mode="full")
lags = signal.correlation_lags(len(x), len(y), mode="full")
lag = lags[np.argmax(abs(correlation))]
```

This gives lag=-12, that is the difference between the index of the first six in x and in y; if you swap inputs it gives +12.

Edit: Why subtract the mean

If the signals have non-zero mean, the terms at the center of the correlation will become larger, because there you have a larger support sample to compute the correlation. Furthermore, for very large data, subtracting the mean makes the calculations more accurate. Here I illustrate what would happen if the mean was not subtracted for this example.

```python
plt.plot(abs(correlation))
plt.plot(abs(signal.correlate(x, y, mode="full")))
plt.plot(abs(signal.correlate(np.ones_like(x)*np.mean(x), np.ones_like(y)*np.mean(y))))
plt.legend(['subtracting mean', 'constant signal', 'keeping the mean'])
```

Notice that the maximum of the curve with the mean subtracted does not coincide with the maximum of the curve that keeps the mean. | 7 | 9 |
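The same result can be reproduced with plain numpy, which also shows why the question's original attempt failed: the mean has to be subtracted, and the lag axis for "full" mode runs from -(len(y)-1) to len(x)-1 rather than being centered at c.size/2. A sketch:

```python
import numpy as np

x = np.array([4,4,4,4,6,8,10,8,6,4,4,4,4,4,4,4,4,4,4,4,4,4,4], dtype=float)
y = np.array([4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,6,8,10,8,6,4,4], dtype=float)

# subtract the mean, otherwise the large constant baseline dominates
c = np.correlate(x - x.mean(), y - y.mean(), mode="full")

# lag axis for "full" mode: -(len(y)-1) .. len(x)-1
lags = np.arange(-(len(y) - 1), len(x))
lag = lags[np.argmax(c)]
print(lag)   # -12
```

As in the scipy version, a lag of -12 is the difference between the index of the first 6 in x (index 4) and in y (index 16).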
69,181,347 | 2021-9-14 | https://stackoverflow.com/questions/69181347/stable-baselines3-log-rewards |
How can I add the rewards to tensorboard logging in Stable Baselines3 using a custom environment? I have this learning code:

```python
model = PPO(
    "MlpPolicy",
    env,
    learning_rate=1e-4,
    policy_kwargs=policy_kwargs,
    verbose=1,
    tensorboard_log="./tensorboard/")
```

| You can access the local variables available to the logger callback using self.locals. Any variables exposed in your custom environment will be accessible via the locals dict. The example below shows how to access a key in a custom dictionary called my_custom_info_dict in vectorized environments.

```python
import gym
import numpy as np
from stable_baselines3 import SAC
from stable_baselines3.common.callbacks import BaseCallback
from stable_baselines3.common.logger import TensorBoardOutputFormat
from stable_baselines3.common.vec_env import SubprocVecEnv


def make_env(env_id, rank):
    """
    See https://colab.research.google.com/github/Stable-Baselines-Team/rl-colab-notebooks/blob/sb3/multiprocessing_rl.ipynb
    for more details on vectorized environments

    Utility function for multiprocessed env.

    :param env_id: (str) the environment ID
    :param rank: (int) index of the subprocess
    :return: (Callable)
    """
    def _init():
        return gym.make(env_id)
    return _init


class SummaryWriterCallback(BaseCallback):
    '''
    Snippet skeleton from Stable baselines3 documentation here:
    https://stable-baselines3.readthedocs.io/en/master/guide/tensorboard.html#directly-accessing-the-summary-writer
    '''

    def _on_training_start(self):
        self._log_freq = 10  # log every 10 calls

        output_formats = self.logger.output_formats

        # Save reference to tensorboard formatter object
        # note: the failure case (no formatter found) is not handled here,
        # should be done with try/except.
        self.tb_formatter = next(formatter for formatter in output_formats
                                 if isinstance(formatter, TensorBoardOutputFormat))

    def _on_step(self) -> bool:
        '''
        Log my_custom_reward every _log_freq(th) call to tensorboard
        for each environment
        '''
        if self.n_calls % self._log_freq == 0:
            rewards = self.locals['my_custom_info_dict']['my_custom_reward']
            for i in range(self.locals['env'].num_envs):
                self.tb_formatter.writer.add_scalar("rewards/env #{}".format(i + 1),
                                                    rewards[i],
                                                    self.n_calls)


if __name__ == "__main__":
    env_id = "CartPole-v1"
    envs = SubprocVecEnv([make_env(env_id, i) for i in range(4)])  # 4 environments
    model = SAC("MlpPolicy", envs, tensorboard_log="/tmp/sac/", verbose=1)
    model.learn(50000, callback=SummaryWriterCallback())
```

| 9 | 9 |
69,143,423 | 2021-9-11 | https://stackoverflow.com/questions/69143423/is-there-a-way-to-write-two-in-statements-in-one |
Is there a short version of:

```python
n = [5, 3, 17]
if 5 in n and 17 in n:
    print("YES")
```

Something like the following, which doesn't seem to work:

```python
if (5 and 17) in n:
    print("YES")
```

Any suggestions?

| You could use something like this instead:

```python
n = [5, 3, 7]
if all(item in n for item in [5, 7]):
    print("YES")
```

| 4 | 3 |
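Another short form, not mentioned in the answer, is a set subset check; {5, 17}.issubset(n) reads as "every element of {5, 17} appears in n":

```python
n = [5, 3, 17]

# issubset accepts any iterable, so the list does not need converting
if {5, 17}.issubset(n):
    print("YES")
```

For large lists this can also be faster than repeated `in` tests, since the membership lookups happen against a set once n itself is converted with set(n).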
69,149,494 | 2021-9-12 | https://stackoverflow.com/questions/69149494/how-to-build-an-aab-using-buildozer-via-docker | I have just seen that support for AAB files have just been introduced in Python for Android (p4a). Considering that, fom August 2021, new apps are required to publish with the Android App Bundle on Google Play, this is a crucial addition for any Python dev working on Android apps. Since I'm currently using Buildozer via Docker, I'd like to know which are the steps to make it generating an .aab instead of (or along to) the traditional .apk For the sake of clarity, here is what I use to run Buildozer from inside a container (using Docker for Windows) to make the .apk file: docker run --interactive --tty --rm --volume "<full_path_to_project_dir>":/home/user/hostcwd kivy/buildozer -v android debug I've seen that there is a temporary workaround, but it involves using Android Studio, that I don't use and would like to avoid using. Moreover, it refers to virtual machine users, but I'm not sure if this applies to Docker users too. | The community has finally completed the AAB support for Buildozer. Although it is still a pending pull request, it is already possible to create the AAB, and I have figured out how to do it using Docker. I have found two very interesting gists that helped me a lot (this one about creating an AAB with Buildozer on Ubuntu, and another one about signing an AAB on the same platform). However, I have run everything on Windows via Docker, so I think it is a good idea to share how I did it. Clone the feat/aab-support branch of the Buildozer repository in your local machine: git clone --single-branch --branch feat/aab-support https://github.com/misl6/buildozer.git Move into the root folder of the project you have just cloned, and build the container: cd buildozer docker build -t buildozer-aab . 
Before using Buildozer to actually build the AAB, we need to generate a new buildozer.spec file, since there are new fields that need to be included for building an AAB. To do that, move into the root folder of your app project, remove or rename any old buildozer.spec file, and run the following command: docker run --interactive --tty --rm --volume "<full_path_to_app_project_dir>":/home/user/hostcwd kivy/buildozer -v init Change the following fields in the newly generated buildozer.spec: android.archs = arm64-v8a, armeabi-v7a android.release_artifact = aab p4a.branch = develop Now we need to create a keystore to sign our AAB. To this end, run the following commands in a WSL shell (I used WSL 2 with Ubuntu on Windows 10): mkdir -p /path/to/keystores/ keytool -genkey -v -keystore /path/to/keystores/<keystore>.keystore -alias <keystore-alias> -keyalg RSA -keysize 2048 -validity 10000 keytool -importkeystore -srckeystore /path/to/keystores/<your-new-key>.keystore -destkeystore /path/to/keystores/<keystore>.keystore -deststoretype pkcs12 The second line will generate a keystore with a validity of 10000 days (which is higher than the minimum of 25 years required by Google). You need to replace <keystore> with the filename you want to use for your keystore, and set a <keystore-alias> (typically the name of your application). You will be asked to add a password. Try to avoid special characters. Now move your keystores folder (the one in /path/to/keystores/) to a folder that is reachable from outside WSL (e.g. you can move it to your desktop). In the following, I will assume that your keystores folder is now in C:\Users\test\Desktop\keystores. We are now finally ready to build the AAB. First, be sure to remove any .buildozer folder within your app root folder. 
Then run the following: docker run --interactive --tty --rm \ --volume "<app-project-folder>":/home/user/hostcwd \ --volume "<app-project-folder>\.buildozer":/home/user/.buildozer \ --volume "C:\Users\test\Desktop\keystores":/home/user/keystores \ -e P4A_RELEASE_KEYSTORE=/home/user/keystores/<keystore>.keystore \ -e P4A_RELEASE_KEYSTORE_PASSWD="<your-password>" \ -e P4A_RELEASE_KEYALIAS_PASSWD="<your-password>" \ -e P4A_RELEASE_KEYALIAS="<keystore-alias>" \ buildozer-aab -v android release | 5 | 2 |
69,170,874 | 2021-9-14 | https://stackoverflow.com/questions/69170874/how-to-plot-a-regression-line-on-a-timeseries-line-plot | I have a question about the value of the slope in degrees which I have calculated below: import pandas as pd import yfinance as yf import matplotlib.pyplot as plt import datetime as dt import numpy as np df = yf.download('aapl', '2015-01-01', '2021-01-01') df.rename(columns = {'Adj Close' : 'Adj_close'}, inplace= True) x1 = pd.Timestamp('2019-01-02') x2 = df.index[-1] y1 = df[df.index == x1].Adj_close[0] y2 = df[df.index == x2].Adj_close[0] slope = (y2 - y1)/ (x2 - x1).days angle = round(np.rad2deg(np.arctan2(y2 - y1, (x2 - x1).days)), 1) fig, ax1 = plt.subplots(figsize= (15, 6)) ax1.grid(True, linestyle= ':') ax1.set_zorder(1) ax1.set_frame_on(False) ax1.plot(df.index, df.Adj_close, c= 'k', lw= 0.8) ax1.plot([x1, x2], [y1, y2], c= 'k') ax1.set_xlim(df.index[0], df.index[-1]) plt.show() It returns the value of the angle of the slope as 7.3 degrees, which doesn't look right looking at the chart: It looks close to 45 degrees. What is wrong here? Here is the line for which I need to calculate the angle: | The implementation in the OP is not the correct way to determine or plot a linear model. As such, the question about determining the angle to plot the line is bypassed, and a more rigorous approach to plotting the regression line is shown. A regression line can be added by converting the datetime dates to ordinal. The model can be calculated with sklearn, or added to the plot with seaborn.regplot, as shown below. 
Plot the full data with pandas.DataFrame.plot Tested in python 3.8.11, pandas 1.3.2, matplotlib 3.4.3, seaborn 0.11.2, sklearn 0.24.2 Imports and Data import yfinance as yf import pandas as pd import seaborn as sns import matplotlib.pyplot as plt import numpy as np from sklearn.linear_model import LinearRegression # download the data df = yf.download('aapl', '2015-01-01', '2021-01-01') # convert the datetime index to ordinal values, which can be used to plot a regression line df.index = df.index.map(pd.Timestamp.toordinal) # display(df.iloc[:5, [4]]) Adj Close Date 735600 24.782110 735603 24.083958 735604 24.086227 735605 24.423975 735606 25.362394 # convert the regression line start date to ordinal x1 = pd.to_datetime('2019-01-02').toordinal() # data slice for the regression line data=df.loc[x1:].reset_index() Plot a Regression Line with seaborn Using seaborn.regplot, no calculations are required to add the regression line to the line plot of the data. Convert the x-axis labels to datetime format. Play around with the xticks and labels if you need the endpoints adjusted. 
# plot the Adj Close data ax1 = df.plot(y='Adj Close', c='k', figsize=(15, 6), grid=True, legend=False, title='Adjusted Close with Regression Line from 2019-01-02') # add a regression line sns.regplot(data=data, x='Date', y='Adj Close', ax=ax1, color='magenta', scatter_kws={'s': 7}, label='Linear Model', scatter=False) ax1.set_xlim(df.index[0], df.index[-1]) # convert the axis back to datetime xticks = ax1.get_xticks() labels = [pd.Timestamp.fromordinal(int(label)).date() for label in xticks] ax1.set_xticks(xticks) ax1.set_xticklabels(labels) ax1.legend() plt.show() Calculate the Linear Model Use sklearn.linear_model.LinearRegression to calculate any desired points from the linear model, and then plot the corresponding line with matplotlib.pyplot.plot. In regard to your other question, Extending the trendline of a stock chart to the right, you would calculate the model over a specified range, and then extend the line by predicting y1 and y2, given x1 and x2. This answer shows how to convert the ordinal axis values back to a date format. # create the model model = LinearRegression() # extract x and y from dataframe data x = data[['Date']] y = data[['Adj Close']] # fit the model model.fit(x, y) # print the slope and intercept if desired print('intercept:', model.intercept_) print('slope:', model.coef_) intercept: [-90078.45713565] slope: [[0.1222514]] # calculate y1, given x1 y1 = model.predict(np.array([[x1]])) print(y1) array([[28.27904095]]) # calculate y2, given the last date in data x2 = data.Date.iloc[-1] y2 = model.predict(np.array([[x2]])) print(y2) array([[117.40030862]]) # this can be added to `ax1` with ax1 = df.plot(y='Adj Close', c='k', figsize=(15, 6), grid=True, legend=False, title='Adjusted Close with Regression Line from 2019-01-02') ax1.plot([x1, x2], [y1[0][0], y2[0][0]], label='Linear Model', c='magenta') ax1.legend() Angle of the Slope This is an artifact of the aspect of the axes, which is not equal for x and y. 
When the aspect is equal, see that the slope is 7.0 deg. x = x2 - x1 y = y2[0][0] - y1[0][0] slope = y / x print(round(slope, 7) == round(model.coef_[0][0], 7)) [out]: True angle = round(np.rad2deg(np.arctan2(y, x)), 1) print(angle) [out]: 7.0 # given the existing plot ax1 = df.plot(y='Adj Close', c='k', figsize=(15, 6), grid=True, legend=False, title='Adjusted Close with Regression Line from 2019-01-02') ax1.plot([x1, x2], [y1[0][0], y2[0][0]], label='Linear Model', c='magenta') # make the aspect equal ax1.set_aspect('equal', adjustable='box') | 4 | 9 |
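The aspect-ratio effect above can also be checked numerically. This is an illustrative sketch (the units-per-inch figures are made up for the example, not measured from the actual chart): the data-space slope is fixed, but the on-screen angle depends on how many data units each axis squeezes into the same physical length.

```python
import numpy as np

slope = np.tan(np.deg2rad(7.3))  # data-space slope corresponding to a 7.3 deg angle

def screen_angle(slope, x_units_per_inch, y_units_per_inch):
    # project a one-data-unit step into screen space, then take the angle there
    dx_screen = 1 / x_units_per_inch
    dy_screen = slope / y_units_per_inch
    return np.rad2deg(np.arctan2(dy_screen, dx_screen))

print(round(screen_angle(slope, 1, 1), 1))    # equal aspect: 7.3
print(round(screen_angle(slope, 60, 10), 1))  # compressed x-axis: visibly steeper
```

Because the x-axis of the chart spans thousands of days while the y-axis spans roughly a hundred dollars, the rendered line looks far steeper than its 7.3-degree data-space angle, which is why the plot suggests something close to 45 degrees.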
69,100,275 | 2021-9-8 | https://stackoverflow.com/questions/69100275/error-while-downloading-the-requirements-using-pip-install-setup-command-use-2 | version pip 21.2.4 python 3.6 The command: pip install -r requirements.txt The content of my requirements.txt: mongoengine==0.19.1 numpy==1.16.2 pylint pandas==1.1.5 fawkes The command is failing with this error ERROR: Command errored out with exit status 1: command: /Users/*/Desktop/ml/*/venv/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/kn/0y92g7x55qs7c42tln4gwhtm0000gp/T/pip-install-soh30mel/mongoengine_89e68f8427244f1bb3215b22f77a619c/setup.py'"'"'; __file__='"'"'/private/var/folders/kn/0y92g7x55qs7c42tln4gwhtm0000gp/T/pip-install-soh30mel/mongoengine_89e68f8427244f1bb3215b22f77a619c/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /private/var/folders/kn/0y92g7x55qs7c42tln4gwhtm0000gp/T/pip-pip-egg-info-97994d6e cwd: /private/var/folders/kn/0y92g7x55qs7c42tln4gwhtm0000gp/T/pip-install-soh30mel/mongoengine_89e68f8427244f1bb3215b22f77a619c/ Complete output (1 lines): error in mongoengine setup command: use_2to3 is invalid. ---------------------------------------- WARNING: Discarding https://*/pypi/packages/mongoengine-0.19.1.tar.gz#md5=68e613009f6466239158821a102ac084 (from https://*/pypi/simple/mongoengine/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. 
ERROR: Could not find a version that satisfies the requirement mongoengine==0.19.1 (from versions: 0.15.0, 0.19.1) ERROR: No matching distribution found for mongoengine==0.19.1 | It looks like setuptools>=58 breaks support for use_2to3: setuptools changelog for v58. So you should pin setuptools to a version below 58 (setuptools<58) or avoid packages that use use_2to3 in their setup parameters. I was having the same problem, pip==19.3.1 | 108 | 205 |
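The version cutoff can be expressed as a tiny check. This is a hypothetical helper for illustration only (it is not part of pip or setuptools); it just encodes the fact noted in the answer that use_2to3 support was dropped in setuptools 58:

```python
# Hypothetical helper: does a given setuptools version still accept use_2to3?
# Support was removed in the 58.0.0 release, so anything below 58 is fine.
def supports_use_2to3(setuptools_version: str) -> bool:
    major = int(setuptools_version.split(".")[0])
    return major < 58

print(supports_use_2to3("57.5.0"))  # True  -> mongoengine 0.19.1 builds
print(supports_use_2to3("58.0.4"))  # False -> "use_2to3 is invalid" error
```

In practice the fix is simply running pip install "setuptools<58" in the build environment before installing the requirements.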
69,097,732 | 2021-9-8 | https://stackoverflow.com/questions/69097732/what-is-the-meaning-of-in-python-grammar | I was going through the Python grammar specification and found the following statement for_stmt: | 'for' star_targets 'in' ~ star_expressions ':' [TYPE_COMMENT] block [else_block] What does ~ mean in this grammar rule? The other symbols used in the grammar (like &, !, |) are already documented but not ~. The notation is a mixture of EBNF and PEG. In particular, & followed by a symbol, token or parenthesized group indicates a positive lookahead (i.e., is required to match but not consumed), while ! indicates a negative lookahead (i.e., is required not to match). We use the | separator to mean PEG’s “ordered choice” (written as / in traditional PEG grammars) | It's documented in PEP 617 under Grammar Expressions: ~ Commit to the current alternative, even if it fails to parse. rule_name: '(' ~ some_rule ')' | some_alt In this example, if a left parenthesis is parsed, then the other alternative won’t be considered, even if some_rule or ‘)’ fail to be parsed. The ~ basically indicates that once you reach it, you're locked into the particular rule and cannot move onto the next rule if the parse fails. PEP 617 mentions earlier that | some_alt can be written in the next line. | 7 | 9 |
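A hand-rolled sketch of the commit semantics for the rule rule_name: '(' ~ some_rule ')' | some_alt. This is an illustrative toy parser, not Python's actual pegen machinery: once '(' is consumed, a failure becomes a hard error instead of a fallback to the second alternative.

```python
# Toy recursive-descent parser illustrating PEG commit (~).
# rule: group: '(' ~ NAME ')' | NAME
def parse(tokens):
    if tokens and tokens[0] == "(":
        # past the ~: committed, so a malformed group never falls back to NAME
        if len(tokens) == 3 and tokens[1].isalpha() and tokens[2] == ")":
            return ("group", tokens[1])
        raise SyntaxError("malformed group")
    if len(tokens) == 1 and tokens[0].isalpha():
        return ("name", tokens[0])
    raise SyntaxError("no alternative matched")

print(parse(["(", "x", ")"]))  # ('group', 'x')
print(parse(["y"]))            # ('name', 'y')
try:
    parse(["(", "x"])          # committed: hard SyntaxError, no fallback
except SyntaxError as exc:
    print("error:", exc)
```

Without the commit, a PEG parser would backtrack on the malformed group and report a confusing failure of the last alternative instead of a precise "malformed group" error.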
69,176,092 | 2021-9-14 | https://stackoverflow.com/questions/69176092/how-to-change-font-size-of-jupyter-notebook-in-vs-code | I am using jupyter notebook to create Python notes (sort of) for a virtual lecture. I like to use vscode instead of jupyter lab. But unfortunately the font size of the markdown output is too small (to see on participants' screen on a virtual call). While using jupyter lab, I used to zoom the whole browser. But I can't do that in vscode, and I can't find any setting that changes the font size for it. The setting to change font size only applies to the vs code editor window. Does anyone have any idea how I can increase the size? P.S.- I know that we could use # on each line to increase its size, but I can't go around putting # before every line as I have to make a rather large document. And you get the idea how small the standard size is from the image. | A new setting is being added to vscode: notebook.markup.fontSize. It should be in the Insiders Build v1.63 soon. See https://github.com/microsoft/vscode/issues/126294#issuecomment-964601412 | 5 | 18 |
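Once the setting lands, it goes in the user or workspace settings.json. A minimal fragment (18 is an arbitrary example size, pick whatever reads well on screen):

```json
{
  "notebook.markup.fontSize": 18
}
```

This only targets rendered markdown cells in notebooks; the editor font size setting mentioned in the question continues to apply to the editor window itself.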
69,131,840 | 2021-9-10 | https://stackoverflow.com/questions/69131840/how-to-invoke-a-cloud-function-from-google-cloud-composer | For a requirement I want to call/invoke a cloud function from inside a cloud composer pipeline, but I can't find much info on it. I tried using the SimpleHTTP Airflow operator but I get this error: [2021-09-10 10:35:46,649] {taskinstance.py:1503} ERROR - Task failed with exception Traceback (most recent call last): File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1158, in _run_raw_task self._prepare_and_execute_task_with_callbacks(context, task) File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1333, in _prepare_and_execute_task_with_callbacks result = self._execute_task(context, task_copy) File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1363, in _execute_task result = task_copy.execute(context=context) File "/home/airflow/gcs/dags/to_gcf.py", line 51, in execute if not self.response_check(response): File "/home/airflow/gcs/dags/to_gcf.py", line 83, in <lambda> response_check=lambda response: False if len(response.json()) == 0 else True, File "/opt/python3.8/lib/python3.8/site-packages/requests/models.py", line 900, in json return complexjson.loads(self.text, **kwargs) File "/opt/python3.8/lib/python3.8/json/__init__.py", line 357, in loads return _default_decoder.decode(s) File "/opt/python3.8/lib/python3.8/json/decoder.py", line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/opt/python3.8/lib/python3.8/json/decoder.py", line 355, in raw_decode raise JSONDecodeError("Expecting value", s, err.value) from None Thanks in advance!! | I faced the same issue as you, but I managed to figure it out by studying the Airflow 2.0 provider packages for Google and using a PythonOperator instead. 
from airflow.providers.google.common.utils import id_token_credentials as id_token_credential_utils import google.auth.transport.requests from google.auth.transport.requests import AuthorizedSession def invoke_cloud_function(): url = "<your_cloud_function_url>" # the url is also the target audience. request = google.auth.transport.requests.Request() # this is a request for obtaining the credentials id_token_credentials = id_token_credential_utils.get_default_id_token_credentials(url, request=request) # If your cloud function url has query parameters, remove them before passing to the audience resp = AuthorizedSession(id_token_credentials).request("GET", url=url) # the authorized session object is used to access the Cloud Function print(resp.status_code) # should return 200 print(resp.content) # the body of the HTTP response Thus, invoke the function as below: task = PythonOperator(task_id="invoke_cf", python_callable=invoke_cloud_function) From my understanding, accessing an authenticated HTTP Cloud Function strictly requires a credential based on ID Tokens. Thus to obtain the required type of credentials, get_default_id_token_credentials() executes the Application Default Credentials (ADC) authorization flow, which is a process that obtains credentials from environment variables, known locations, or the Compute Engine metadata server. Composer should have the associated service account keyfile made available via environment variables (probably GOOGLE_APPLICATION_CREDENTIALS). Once you have the right type of credentials, you can use the AuthorizedSessions object to authenticate your requests to the cloud function. | 6 | 7 |
69,109,980 | 2021-9-8 | https://stackoverflow.com/questions/69109980/unclear-why-groupby-with-single-group-produces-row-dataframe | Here are two groupby operations on a pandas.DataFrame: import pandas d = pandas.DataFrame({"a": [1, 2, 3, 4, 5, 6], "b": [1, 2, 4, 3, -1, 5]}) grp1 = pandas.Series([1, 1, 1, 1, 1, 1]) ans1 = d.groupby(grp1).apply(lambda x: x.a * x.b.iloc[0]) grp2 = pandas.Series([1, 1, 1, 2, 2, 2]) ans2 = d.groupby(grp2).apply(lambda x: x.a * x.b.iloc[0]) print(ans1.reset_index(drop=True)) # a 0 1 2 3 4 5 # 0 1 2 3 4 5 6 print(ans2.reset_index(drop=True)) # 0 1 # 1 2 # 2 3 # 3 12 # 4 15 # 5 18 # Name: a, dtype: int64 I want the output in the format of ans2. If the grouping Series has more than one group (as in grp2), then there is no issue with the output format. However, when the grouping Series has only one group (as in grp1), the output is a DataFrame with a single row. Why is this? How can I ensure that the output will always be like ans2 regardless of the number of groups in the grouping Series? Is there a quicker/better approach than checking if the output is a DataFrame and coercing it into a Series, or checking if the grouping Series has only one group and avoiding groupby in that case? | A simple solution is to return a DataFrame from apply: import pandas d = pandas.DataFrame({"a": [1, 2, 3, 4, 5, 6], "b": [1, 2, 4, 3, -1, 5]}) grp1 = pandas.Series([1, 1, 1, 1, 1, 1]) ans1 = d.groupby(grp1).apply(lambda x: x[['a']] * x.b.iloc[0]) grp2 = pandas.Series([1, 1, 1, 2, 2, 2]) ans2 = d.groupby(grp2).apply(lambda x: x[['a']] * x.b.iloc[0]) print(ans1.reset_index(drop=True)) # a # 0 1 # 1 2 # 2 3 # 3 4 # 4 5 # 5 6 print(ans2.reset_index(drop=True)) # a # 0 1 # 1 2 # 2 3 # 3 12 # 4 15 # 5 18 To understand why, the documentation of the apply function is helpful. When the function given to apply returns a Series, each result is converted to a row and the final output is a DataFrame with one row per group. So the behaviour of grp1 is actually expected. 
This begs the question of why the second case using grp2 returns a Series. I think that is because the two groups return Series with different index values. Thus the results of the two groups are concatenated with multi-level indexing into a single Series (as seen below). d = pandas.DataFrame({"a": [1, 2, 3, 4, 5, 6], "b": [1, 2, 4, 3, -1, 5]}) grp2 = pandas.Series([1, 1, 1, 2, 2, 2]) def func(x): z = x.a * x.b.iloc[0] print(z.index) return z ans2 = d.groupby(grp2).apply(func) # Int64Index([0, 1, 2], dtype='int64') # Int64Index([3, 4, 5], dtype='int64') print(ans2) # 1 0 1 # 1 2 # 2 3 # 2 3 12 # 4 15 # 5 18 # Name: a, dtype: int64 | 9 | 2 |
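As an alternative to changing the apply return type, a groupby.transform-based sketch (my own variant, not taken from the answer above) sidesteps the inconsistency entirely: transform always broadcasts the per-group result back to the original rows, so the output is a Series regardless of how many groups there are.

```python
import pandas as pd

d = pd.DataFrame({"a": [1, 2, 3, 4, 5, 6], "b": [1, 2, 4, 3, -1, 5]})

def scaled(grp):
    # per-group first value of b, broadcast back to every row of the group
    return d.a * d.b.groupby(grp).transform("first")

print(scaled(pd.Series([1, 1, 1, 1, 1, 1])).tolist())  # [1, 2, 3, 4, 5, 6]
print(scaled(pd.Series([1, 1, 1, 2, 2, 2])).tolist())  # [1, 2, 3, 12, 15, 18]
```

With one group, every row is scaled by the first b overall; with two groups, rows 3-5 are scaled by 3 instead, matching ans2 in both cases without any type checks.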