question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
72,621,273 | 2022-6-14 | https://stackoverflow.com/questions/72621273/numba-parallelization-with-prange-is-slower-when-used-more-threads | I tried some simple code to parallelize a loop with numba and prange. But for some reason, when I use more threads it gets slower instead of faster. Why is this happening? (CPU: Ryzen 7 2700X, 8 cores / 16 threads, 3.7 GHz) from numba import njit, prange,set_num_threads,get_num_threads @njit(parallel=True,fastmath=True) def test1(): x=np.empty((10,10)) for i in prange(10): for j in range(10): x[i,j]=i+j Number of threads : 1 897 ns ± 18.3 ns per loop (mean ± std. dev. of 10 runs, 100000 loops each) Number of threads : 2 1.68 µs ± 262 ns per loop (mean ± std. dev. of 10 runs, 100000 loops each) Number of threads : 3 2.4 µs ± 163 ns per loop (mean ± std. dev. of 10 runs, 100000 loops each) Number of threads : 4 4.12 µs ± 294 ns per loop (mean ± std. dev. of 10 runs, 100000 loops each) Number of threads : 5 4.62 µs ± 283 ns per loop (mean ± std. dev. of 10 runs, 100000 loops each) Number of threads : 6 5.01 µs ± 145 ns per loop (mean ± std. dev. of 10 runs, 100000 loops each) Number of threads : 7 5.52 µs ± 194 ns per loop (mean ± std. dev. of 10 runs, 100000 loops each) Number of threads : 8 4.85 µs ± 140 ns per loop (mean ± std. dev. of 10 runs, 100000 loops each) Number of threads : 9 6.47 µs ± 348 ns per loop (mean ± std. dev. of 10 runs, 100000 loops each) Number of threads : 10 6.88 µs ± 120 ns per loop (mean ± std. dev. of 10 runs, 100000 loops each) Number of threads : 11 7.1 µs ± 154 ns per loop (mean ± std. dev. of 10 runs, 100000 loops each) Number of threads : 12 7.47 µs ± 159 ns per loop (mean ± std. dev. of 10 runs, 100000 loops each) Number of threads : 13 7.91 µs ± 160 ns per loop (mean ± std. dev. of 10 runs, 100000 loops each) Number of threads : 14 9.04 µs ± 472 ns per loop (mean ± std. dev. of 10 runs, 100000 loops each) Number of threads : 15 9.74 µs ± 581 ns per loop (mean ± std. dev. of 10 runs, 100000 loops each) Number of threads : 16 11 µs ± 967 ns per loop (mean ± std. dev. of 10 runs, 100000 loops each) | This is totally normal. Numba needs to create threads and distribute the work between them so they can execute the computation in parallel. Numba can use different threading backends. The default is generally OpenMP, and the default OpenMP implementation should be IOMP (the OpenMP runtime of ICC/Clang), which tries to create threads only once. Still, sharing the work between threads is far slower than iterating over 100 values. A modern mainstream processor should be able to execute the 2 nested loops sequentially in less than 0.1-0.2 µs. Numba should also be able to unroll the two loops. The Numba function call overhead is also generally a few hundred nanoseconds. The allocation of the Numpy array should be far slower than the actual loops. Additionally, there are other overheads causing this code to be significantly slower with multiple threads, even if the previous overheads were negligible. For example, false sharing causes the writes to be mostly serialized, and thus slower than if they were all done on 1 single thread (because of a cache-line bouncing effect operating on the LLC on x86-64 platforms). Note that the time to create a thread is generally significantly more than 1 µs because a system call is required. In short: use threads when the work to do is big enough and can be efficiently parallelized. | 6 | 6 |
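A minimal sketch (not from the answer above) illustrating its point: the same kernel on a much larger array, where the per-call threading overhead is amortized by real work. The array size and thread counts are arbitrary assumptions, and the scaling you observe will still be limited by memory bandwidth.

```python
import time
import numpy as np
from numba import njit, prange, set_num_threads

@njit(parallel=True, fastmath=True)
def fill(n):
    # Same arithmetic as the question, but on an n x n array instead of 10 x 10.
    x = np.empty((n, n))
    for i in prange(n):
        for j in range(n):
            x[i, j] = i + j
    return x

fill(10)  # warm-up call so JIT compilation is not included in the timings

for threads in (1, 2, 4, 8):
    set_num_threads(threads)
    start = time.perf_counter()
    fill(4000)
    print(f"{threads} thread(s): {time.perf_counter() - start:.3f} s")
```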
72,580,483 | 2022-6-10 | https://stackoverflow.com/questions/72580483/aws-cdk-reference-existing-image-on-ecr | New to AWS CDK and I'm trying to create a load balanced fargate service with the construct ApplicationLoadBalancedFargateService. I have an existing image on ECR that I would like to reference and use. I've found the ecs.ContainerImage.from_ecr_repository function, which I believe is what I should use in this case. However, this function takes an IRepository as a parameter and I cannot find anything under aws_ecr.IRepository or aws_ecr.Repository to reference a pre-existing image. These constructs all seem to be for making a new repository. Anyone know what I should be using to get the IRepository object for an existing repo? Is this just not typically done this way? Code is below. Thanks in Advance. from aws_cdk import ( # Duration, Stack, # aws_sqs as sqs, ) from constructs import Construct from aws_cdk import (aws_ec2 as ec2, aws_ecs as ecs, aws_ecs_patterns as ecs_patterns, aws_route53,aws_certificatemanager, aws_ecr) class NewStack(Stack): def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None: super().__init__(scope, construct_id, **kwargs) _repo = aws_ecr.Repository(self, 'id1', repository_uri = repo_uri) vpc = ec2.Vpc(self, "applications", max_azs=3) # default is all AZs in region cluster = ecs.Cluster(self, "id2", vpc=vpc) hosted_zone = aws_route53.HostedZone.from_lookup(self, 'id3', domain_name = 'domain' ) certificate = aws_certificatemanager.Certificate.from_certificate_arn(self, id4, 'cert_arn' ) image = ecs.ContainerImage.from_ecr_repository(self, _repo) ecs_patterns.ApplicationLoadBalancedFargateService(self, "id5", cluster=cluster, # Required cpu=512, # Default is 256 desired_count=2, # Default is 1 task_image_options=ecs_patterns.ApplicationLoadBalancedTaskImageOptions( image = image, container_port=8000), memory_limit_mib=2048, # Default is 512 public_load_balancer=True, domain_name = 'domain_name', domain_zone = hosted_zone, certificate = certificate, redirect_http = True) | You are looking for from_repository_attributes() to create an instance of IRepository from an existing ECR repository. | 8 | 4 |
72,620,119 | 2022-6-14 | https://stackoverflow.com/questions/72620119/turn-off-corner-rounding-in-matplotlib-plot-with-thicker-lines | Question: Is it possible to turn off "corner rounding" when using plot in matplotlib? Setup: I am trying to present a complicated nonsmooth function in a presentation. As a default (understandably) matplotlib rounds corners. (This is especially visible when the linewidth is increased.) I need more linewidth so the plot can be seen when the figure is projected. Example: import numpy as np import matplotlib.pyplot as plt x = np.linspace(-1, 1, 1001) y = abs(x) plt.plot(x, y, linewidth=10) plt.show() produces the image: Attempts: Increasing the number of points in x does not resolve this issue. Note that the point x=0 is included in the plot. Summary: The plotted curve above appears rounded at x=0 when the function is not. | You can specify a JoinStyle: plt.plot(x, y, linewidth=10, solid_joinstyle='miter') | 4 | 5 |
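A hedged, self-contained sketch of the accepted answer applied to the question's own example; the commented rcParams line is an assumption on my part for setting the join style globally rather than per line.

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-1, 1, 1001)
y = abs(x)

# Per-line setting from the accepted answer: sharp (mitered) corner at x = 0.
plt.plot(x, y, linewidth=10, solid_joinstyle='miter')

# Assumed global alternative: apply the same join style to every solid line.
# plt.rcParams['lines.solid_joinstyle'] = 'miter'

plt.show()
```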
72,619,143 | 2022-6-14 | https://stackoverflow.com/questions/72619143/unable-to-import-psutil-on-m1-mac-with-miniforge-mach-o-file-but-is-an-incomp | I'm using a miniforge environment on an M1 mac, and unable to import psutil: ImportError: dlopen(/Users/caspsea/miniforge3/lib/python3.9/site-packages/psutil/_psutil_osx.cpython-39-darwin.so, 0x0002): tried: '/Users/caspsea/miniforge3/lib/python3.9/site-packages/psutil/_psutil_osx.cpython-39-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e')), '/usr/local/lib/_psutil_osx.cpython-39-darwin.so' (no such file), '/usr/lib/_psutil_osx.cpython-39-darwin.so' (no such file) I tried uninstalling and reinstalling using pip but that did not work. I'm using python 3.9, OS Monterey 12.2.1 | Have you tried: pip uninstall psutil followed by: pip install --no-binary :all: psutil | 6 | 18 |
72,609,159 | 2022-6-13 | https://stackoverflow.com/questions/72609159/how-do-i-format-a-date-and-also-pad-it-with-spaces | Formatting appears to work differently if the object you're formatting is a date. today = "aaa" print(f'{today:>10}') returns aaa i.e. it has the padding. If I now do this: today = datetime.date.today() print(f'{today:>10}') then the response is >10 Which is obviously not what I want. I've tried various other combinations where I put in the date format as well, but all it does is draw the date out and then also add in '>10' also. How do I format a date with padding? | Python's scheme for formatting via f-strings (and the .format method of strings) allows the inserted data to override how the format specification works, using the __format__ magic method: >>> class Example: ... def __format__(self, template): ... return f'{template} formatting of {self.__class__.__name__} instance' ... >>> f'{Example():test}' 'test formatting of Example instance' datetime.date does this, so that time.strftime is used to do the formatting (after some manipulation, e.g. inserting a proxy time for dates and vice-versa): >>> help(today.__format__) Help on built-in function __format__: __format__(...) method of datetime.date instance Formats self with strftime. This means that codes like %Y etc. can be used, but field width specifiers (like >10) are not supported. The format string >10 doesn't contain any placeholders for any components of the date (or time), so you just get a literal >10 back. Fortunately, it is trivial to work around this. Simply coerce the date to string, and pad the string: >>> f'{str(today):>20}' ' 2022-06-13' Or better yet, use the built-in syntax for such coercion: >>> f'{today!s:>20}' # !s for str(), !r for repr() ' 2022-06-13' If you want to use the strftime formatting as well, do the formatting in two steps: >>> formatted = f'{today:%B %d, %Y}' >>> f'{formatted:>20}' ' June 13, 2022' Note that it does not work to nest format specifiers: >>> # the {{ is interpreted as an escaped literal { >>> f'{{today:%B %d, %Y}:>20}' File "<stdin>", line 1 SyntaxError: f-string: single '}' is not allowed >>> # the inner {} looks like a dict, but %B isn't an identifier >>> f'{ {today:%B %d, %Y}:>20}' File "<fstring>", line 1 ( {today:%B %d, %Y}) ^ SyntaxError: invalid syntax However, f-strings themselves can be nested (this is obviously not very elegant and will not scale well): >>> # instead of trying to format the result from another placeholder, >>> # we reformat an entire separately-formatted string: >>> f'{f"{today:%B %d, %Y}":>20}' ' June 13, 2022' | 4 | 5 |
72,592,670 | 2022-6-12 | https://stackoverflow.com/questions/72592670/why-is-accessing-elements-using-tolist-faster-than-accessing-it-directly-throu | I have a dataframe and I wanted to apply a certain function on a set of columns. Something like: data[["A","B","C","D","E"]].apply(some_func, axis=1) In the some_func function, the first step is extracting out all the column values into separate variables. def some_func(x): a,b,c,d,e = x # or x.tolist() #Some more processing To reproduce, the result, use x = pd.Series([1,2,3,4,5], index=["A","B","C","D","E"]) Now, my question is, why does %%timeit a,b,c,d,e = x.tolist() Output: 538 ns ± 2.82 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) perform better than %%timeit a,b,c,d,e = x Output: 1.61 µs ± 15.5 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) | Let's define two functions and inspect them with dis: from dis import dis from pandas import Series x = Series([1,2,3,4,5], index=["A","B","C","D","E"]) def a(): a, b, c, d, e = x.tolist() def b(): a, b, c, d, e = x dis(a) dis(b) Executing the above will yield: # dis(a) 7 0 LOAD_GLOBAL 0 (x) 2 LOAD_METHOD 1 (tolist) 4 CALL_METHOD 0 6 UNPACK_SEQUENCE 5 8 STORE_FAST 0 (a) 10 STORE_FAST 1 (b) 12 STORE_FAST 2 (c) 14 STORE_FAST 3 (d) 16 STORE_FAST 4 (e) 18 LOAD_CONST 0 (None) 20 RETURN_VALUE # dis(b) 10 0 LOAD_GLOBAL 0 (x) 2 UNPACK_SEQUENCE 5 4 STORE_FAST 0 (a) 6 STORE_FAST 1 (b) 8 STORE_FAST 2 (c) 10 STORE_FAST 3 (d) 12 STORE_FAST 4 (e) 14 LOAD_CONST 0 (None) 16 RETURN_VALUE From the above, it seems that, if anything, function (a) has more instructions. So why is it faster? As explained in this answer, looking at the contents of UNPACK_SEQUENCE, one can see that there are some special-cases, such as when the number of left-hand side variables is equal to the length of the right-hand side object. So, x.tolist() under the hood uses numpy method to create a list from the array data, which allows making use of the optimization for this special case (you can check the deterioration in performance by changing the number of arguments on the left-hand side, e.g. a, *b = range(3), will work, but will be slower than a, b, c = range(3)). When the right-hand side object is not a Python tuple or a list, then Python iterates over the contents of the object, which appears to be less efficient. For practical reasons, if you really want best performance (with the current versions of the modules), you can swap x.tolist() with x._values.tolist(), which should give about 10-15% boost in performance (you're just removing one layer of pandas to numpy call, and doing it directly here). The caveat is that these types of optimizations are sensitive to what's happening in lower-level code, so there is no guarantee that performance gains will be there in future Python/library combinations. | 4 | 3 |
72,586,296 | 2022-6-11 | https://stackoverflow.com/questions/72586296/pytest-invoked-in-code-only-run-tests-in-current-file | I have a python file with some code and simple tests on that code. I would like to invoke pytest from that python file, have it collect only the tests in that file, and run them. For example, foo.py: # ... various code above... def test_foo(): foo = Foo() assert foo() def test_bar(): bar = Bar() assert bar.baz if __name__ == '__main__': import pytest pytest.main() I now would like to run python foo.py and have pytest run the two tests in that file, and those tests only. However, when I run python foo.py, pytest collects ALL tests from all modules that are rooted from the path in which I ran the command. How can I run python foo.py and have the pytest.main() invoked in foo.py only call the tests in foo.py and no others? | According to the pytest documentation, options and arguments can be passed to pytest.main. To run tests in foo.py, this would work: # ... various code above... def test_foo(): foo = Foo() assert foo() def test_bar(): bar = Bar() assert bar.baz if __name__ == '__main__': import pytest pytest.main(["foo.py"]) # consider using below # pytest.main([__file__]) | 6 | 6 |
72,588,287 | 2022-6-11 | https://stackoverflow.com/questions/72588287/django-how-to-set-foreignkey-related-name-in-abstract-model-class | I want to create on Abstract Model class for future inheriting like this: class AbstractModel(models.Model): created_at = models.DateTimeField( auto_now_add=True, blank=True, null=True, ) created_by = models.ForeignKey( settings.AUTH_USER_MODEL, on_delete=models.SET_NULL, related_name='XXX_created_by', blank=True, null=True, ) class Meta: abstract = True Field 'created_at' is working fine, but how to generate related_name in 'created_by' for my child classes to prevent clashing? | As the Be careful with related_name and related_query_name section of the documentation says, you can: To work around this problem, when you are using related_name or related_query_name in an abstract base class (only), part of the value should contain '%(app_label)s' and '%(class)s'. '%(class)s' is replaced by the lowercased name of the child class that the field is used in. '%(app_label)s' is replaced by the lowercased name of the app the child class is contained within. Each installed application name must be unique and the model class names within each app must also be unique, therefore the resulting name will end up being different. You thus can work with: class AbstractModel(models.Model): # … created_by = models.ForeignKey( settings.AUTH_USER_MODEL, on_delete=models.SET_NULL, related_name='%(class)s_created_by', blank=True, null=True, ) class Meta: abstract = True Then the related_name will be foo_created_by if the name of the model that inherits is named foo. Or if the same model name can occur in different apps: class AbstractModel(models.Model): # … created_by = models.ForeignKey( settings.AUTH_USER_MODEL, on_delete=models.SET_NULL, related_name='%(app_label)s_%(class)s_created_by', blank=True, null=True, ) class Meta: abstract = True Then the related_name will be bar_foo_created_by if the name of the model that inherits is named foo in an app named bar. | 6 | 9 |
72,585,750 | 2022-6-11 | https://stackoverflow.com/questions/72585750/django-subquery-sum-with-no-results-returns-none-instead-of-0 | I have a Django Subquery which returns a Sum. If the subquery finds at least one result, it works fine. But, if the subquery finds no records, it returns None which then causes any other calculations using this result (I use it later in my query in an F expression) to also result in None. I am trying to Sum all non-null 'consumed' values - sometimes, they are all null and therefore there are no rows upon which to sum. I would like this to result in a 0 instead of None ... annotate(tapx=Subquery(InvTrx.objects.filter(job=OuterRef('job')).\ filter(consumed__isnull=False).\ filter(inventory__inv_type='APX').\ values('job__job').\ annotate(tot_cons=Sum('consumed', default=0)).\ values('tot_cons') )).\ ... I've tried Coalesce with and without Value(0) annotate(tot_cons=Coalesce(Sum('consumed', default=0)), 0).\ annotate(tot_cons=Coalesce(Sum('consumed', default=0)), Value(0)).\ the value of tapx (which I reuse in F expressions in another part of the query) = None if no rows are returned. If at least one row is returned, this works fine. If no rows are returned, I would like the value of tapx to be 0 instead of None so that the value of fg_total in the following annotation results in a number and not None: annotate(fg_total=F('fg') + F('tapx')) Doing this outside a subquery, I have used "or 0" to force the value to 0 if the result is None - is there a way to do this inside a subquery? I also tried, when I do use the result of the Subquery in an F expression, to use a conditional like so: annotate(fg_total=F('fg') + Case(When(~Exists('tapx'), then=F('tapx')),default=Value(0))) but this results in an error - 'str' object has no attribute 'order_by' (I am not sure what this means in this context since I am not using 'order_by' anywhere?) I am using Django 3.0. I'm stuck. | You should not Coalesce the .annotate(…) in the Subquery, but the result of the Subquery, so: .annotate(tapx=Coalesce( Subquery(InvTrx.objects.filter( job=OuterRef('job'), consumed__isnull=False, inventory__inv_type='APX' ).values('job_id').annotate(tot_cons=Sum('consumed')).values('tot_cons')), Value(0)) ) | 5 | 8 |
72,586,906 | 2022-6-11 | https://stackoverflow.com/questions/72586906/how-to-ensure-only-one-entry-is-true-in-a-django-model | I'm stuck on thinking about implementing a "only one entry might be True for one combination". A Project has n members (Guards) through an intermediate table. every Guard may be member of n Projects only one combination of Guard <-> Project is allowed (unique_together) a MemberShip might be the 'Main' one (is_main) BUT: Only one of the memberships may be Main. Do I oversee something or do I have to implement a custom validation on my own? To complete this, see the given Model: class Project(models.Model): client = models.ForeignKey(Client, on_delete=models.CASCADE) shortname = models.CharField(_('shortname'), max_length=50) description = models.TextField(_('description'), blank=True) members = models.ManyToManyField(Guard, through='ProjectMembership') class Meta: unique_together = ['client', 'shortname'] class ProjectMembership(models.Model): guard = models.ForeignKey(Guard, on_delete=models.CASCADE) project = models.ForeignKey(Project, on_delete=models.CASCADE) is_main = models.BooleanField(_('is main project'), default=False) class Meta: unique_together = ['guard', 'project'] | You can work with a UniqueConstraint [Django-doc] that is filtered: from django.db.models import UniqueConstraint, Q class ProjectMembership(models.Model): guard = models.ForeignKey(Guard, on_delete=models.CASCADE) project = models.ForeignKey(Project, on_delete=models.CASCADE) is_main = models.BooleanField(_('is main project'), default=False) class Meta: constraints = [ UniqueConstraint(fields=('guard', 'project'), name='unique_guard'), UniqueConstraint(fields=('guard',), condition=Q(is_main=True), name='one_main_project_per_guard'), ] Here we thus ensure that if we filter ProjectMembership for is_main=True, that the set of guards is unique, hence a certain Guard can only occur once for is_main, and this thus means that a Guard has at most one Project for which is_main is True. Note: As the documentation on unique_together [Django-doc] says, the unique_together constraint will likely become deprecated. The documentation advises to use the UniqueConstraint [Django-doc] from Django's constraint framework. | 7 | 10 |
72,582,066 | 2022-6-11 | https://stackoverflow.com/questions/72582066/will-fastapi-application-with-only-async-endpoints-encounter-the-gil-problem | If all the fastapi endpoints are defined as async def, then there will only be 1 thread that is running right? (assuming a single uvicorn worker). Just wanted to confirm in such a setup, we will never hit the python's Global Interpreter Lock. If the same was to be done in a flask framework with multiple threads for the single gunicorn worker, then we would be facing the GIL which hinders the true parallelism between threads. So basically, in the above fastapi, the parallelism is limited to 1 since there is only one thread. And to make use of all the cores, we would need to increase the number of workers either using gunicorn or uvicorn. Is my understanding correct? | Your understanding is correct. When using 1 worker with uvicorn, only one process is run. That means, there is only one thread that can take a lock on the interpreter that is running your application. Due to the asynchronous nature of your FastAPI app, it will be able to handle multiple simultaneous requests, but not in parallel. If you want multiple instances of your application run in parallel, you can increase your workers. This will spin up multiple processes (all single threaded as above) and Uvicorn will distribute the requests among them. Note that you cannot have shared global variables across workers. These are separate instances of your FastAPI app and do not communicate with each other. See this answer for more info on that and how to use databases or caches to work around that. | 5 | 7 |
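A hedged sketch of the "increase the number of workers" suggestion from the answer above; "main:app" is a placeholder for wherever the FastAPI instance actually lives, and the worker count of 4 is an assumption to be matched to the available cores.

```python
# Option 1: programmatic start with several worker processes (uvicorn only).
# The app must be passed as an import string when workers > 1.
import uvicorn

if __name__ == "__main__":
    uvicorn.run("main:app", host="0.0.0.0", port=8000, workers=4)

# Option 2 (shell command, shown as a comment): gunicorn managing uvicorn workers.
#   gunicorn -w 4 -k uvicorn.workers.UvicornWorker main:app
```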
72,577,929 | 2022-6-10 | https://stackoverflow.com/questions/72577929/dask-multi-stage-resource-setup-causes-failed-to-serialize-error | Using the exact code from Dask's documentation at https://jobqueue.dask.org/en/latest/examples.html In case the page changes, this is the code: from dask_jobqueue import SLURMCluster from distributed import Client from dask import delayed cluster = SLURMCluster(memory='8g', processes=1, cores=2, extra=['--resources ssdGB=200,GPU=2']) cluster.scale(2) client = Client(cluster) def step_1_w_single_GPU(data): return "Step 1 done for: %s" % data def step_2_w_local_IO(data): return "Step 2 done for: %s" % data stage_1 = [delayed(step_1_w_single_GPU)(i) for i in range(10)] stage_2 = [delayed(step_2_w_local_IO)(s2) for s2 in stage_1] result_stage_2 = client.compute(stage_2, resources={tuple(stage_1): {'GPU': 1}, tuple(stage_2): {'ssdGB': 100}}) This results in an error of such: distributed.protocol.core - CRITICAL - Failed to Serialize Traceback (most recent call last): File "/opt/eagleseven/pyenv/e7cloudv0/lib/python3.8/site-packages/distributed/protocol/core.py", line 76, in dumps frames[0] = msgpack.dumps(msg, default=_encode_default, use_bin_type=True) File "/opt/eagleseven/pyenv/e7cloudv0/lib/python3.8/site-packages/msgpack/__init__.py", line 38, in packb return Packer(**kwargs).pack(o) File "msgpack/_packer.pyx", line 294, in msgpack._cmsgpack.Packer.pack File "msgpack/_packer.pyx", line 300, in msgpack._cmsgpack.Packer.pack File "msgpack/_packer.pyx", line 297, in msgpack._cmsgpack.Packer.pack File "msgpack/_packer.pyx", line 264, in msgpack._cmsgpack.Packer._pack File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack File "msgpack/_packer.pyx", line 264, in msgpack._cmsgpack.Packer._pack File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack File "msgpack/_packer.pyx", line 229, in msgpack._cmsgpack.Packer._pack File "msgpack/_packer.pyx", line 264, in msgpack._cmsgpack.Packer._pack File "msgpack/_packer.pyx", line 291, in msgpack._cmsgpack.Packer._pack TypeError: can not serialize 'Delayed' object distributed.comm.utils - ERROR - can not serialize 'Delayed' object Traceback (most recent call last): File "/opt/eagleseven/pyenv/e7cloudv0/lib/python3.8/site-packages/distributed/comm/utils.py", line 33, in _to_frames return list(protocol.dumps(msg, **kwargs)) File "/opt/eagleseven/pyenv/e7cloudv0/lib/python3.8/site-packages/distributed/protocol/core.py", line 76, in dumps frames[0] = msgpack.dumps(msg, default=_encode_default, use_bin_type=True) File "/opt/eagleseven/pyenv/e7cloudv0/lib/python3.8/site-packages/msgpack/__init__.py", line 38, in packb return Packer(**kwargs).pack(o) File "msgpack/_packer.pyx", line 294, in msgpack._cmsgpack.Packer.pack File "msgpack/_packer.pyx", line 300, in msgpack._cmsgpack.Packer.pack File "msgpack/_packer.pyx", line 297, in msgpack._cmsgpack.Packer.pack File "msgpack/_packer.pyx", line 264, in msgpack._cmsgpack.Packer._pack File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack File "msgpack/_packer.pyx", line 264, in msgpack._cmsgpack.Packer._pack File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack File "msgpack/_packer.pyx", line 229, in msgpack._cmsgpack.Packer._pack 
File "msgpack/_packer.pyx", line 264, in msgpack._cmsgpack.Packer._pack File "msgpack/_packer.pyx", line 291, in msgpack._cmsgpack.Packer._pack TypeError: can not serialize 'Delayed' object distributed.batched - ERROR - Error in batched write Traceback (most recent call last): File "/opt/eagleseven/pyenv/e7cloudv0/lib/python3.8/site-packages/distributed/batched.py", line 94, in _background_send nbytes = yield self.comm.write( File "/opt/eagleseven/pyenv/e7cloudv0/lib/python3.8/site-packages/tornado/gen.py", line 762, in run value = future.result() File "/opt/eagleseven/pyenv/e7cloudv0/lib/python3.8/site-packages/distributed/comm/tcp.py", line 250, in write frames = await to_frames( File "/opt/eagleseven/pyenv/e7cloudv0/lib/python3.8/site-packages/distributed/comm/utils.py", line 50, in to_frames return _to_frames() File "/opt/eagleseven/pyenv/e7cloudv0/lib/python3.8/site-packages/distributed/comm/utils.py", line 33, in _to_frames return list(protocol.dumps(msg, **kwargs)) File "/opt/eagleseven/pyenv/e7cloudv0/lib/python3.8/site-packages/distributed/protocol/core.py", line 76, in dumps frames[0] = msgpack.dumps(msg, default=_encode_default, use_bin_type=True) File "/opt/eagleseven/pyenv/e7cloudv0/lib/python3.8/site-packages/msgpack/__init__.py", line 38, in packb return Packer(**kwargs).pack(o) File "msgpack/_packer.pyx", line 294, in msgpack._cmsgpack.Packer.pack File "msgpack/_packer.pyx", line 300, in msgpack._cmsgpack.Packer.pack File "msgpack/_packer.pyx", line 297, in msgpack._cmsgpack.Packer.pack File "msgpack/_packer.pyx", line 264, in msgpack._cmsgpack.Packer._pack File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack File "msgpack/_packer.pyx", line 264, in msgpack._cmsgpack.Packer._pack File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack File "msgpack/_packer.pyx", line 231, in msgpack._cmsgpack.Packer._pack File "msgpack/_packer.pyx", line 229, in msgpack._cmsgpack.Packer._pack File "msgpack/_packer.pyx", line 264, in msgpack._cmsgpack.Packer._pack File "msgpack/_packer.pyx", line 291, in msgpack._cmsgpack.Packer._pack TypeError: can not serialize 'Delayed' object Python verion: 3.8.10 dask: 2022.2.0 dask-jobqueue: 0.7.3 The problem is self-evident. Setup is just like in the documentation. There is nothing more I can explain, but stackoverflow is saying my details-to-code is too low, so I need to write more stuff to allow this question to be posted. | As noted by @Michael Delgado in the comments, this appears to be a problem with the documentation (raised here). Resources are a dictionary with each key being name of a resource and value representing the amount used by a task. In an answer to a related question, Matt Rocklin, the initial commit author, mentions that this feature (specifying task-level resources) is frequently requested, but not available as of now: https://stackoverflow.com/a/63310721/10693596 One possibility is to use annotation for specific components of the graph, see this answer. | 4 | 5 |
72,580,747 | 2022-6-10 | https://stackoverflow.com/questions/72580747/why-time-zone-is-0456 | The following code print 2022-01-01 05:30:00-04:56. What's -04:56? import pytz, datetime tz = pytz.timezone("America/New_York") t = datetime.datetime(2022, 1,1, 5, 30) u = t.replace(tzinfo=tz) print(u) 2022-01-01 05:30:00-04:56 In jupyter, u has the value of datetime.datetime(2022, 1, 1, 5, 30, tzinfo=<DstTzInfo 'America/New_York' LMT-1 day, 19:04:00 STD>). What's DstTzInfo? And what's 19:04:00 STD? | (Delete previous answer) My bad: I parsed datetime.datetime(2022, 1,1, 5, 30) as 30 May (I'm answering this on 10 June.). We're not in daylight savings time in January. ^ For me, this whole question is yet another reminder that I don't * really * understand time. OP. will you please remove this as an accepted solution? (It will not allow me to delete an accepted solution.) | 5 | 2 |
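The accepted answer above does not explain the -04:56 offset, so here is a hedged sketch of the usual explanation: a pytz timezone object carries the zone's earliest historical entry (New York Local Mean Time, -04:56) until it is attached with localize(), and datetime.replace(tzinfo=...) skips that step.

```python
import datetime
import pytz

tz = pytz.timezone("America/New_York")
naive = datetime.datetime(2022, 1, 1, 5, 30)

print(naive.replace(tzinfo=tz))  # 2022-01-01 05:30:00-04:56  (LMT pitfall)
print(tz.localize(naive))        # 2022-01-01 05:30:00-05:00  (EST, as intended)
```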
72,576,024 | 2022-6-10 | https://stackoverflow.com/questions/72576024/smtplib-smtpauthenticationerror-535-b5-7-8-username-and-password-not-accepte | I'm writing a site on Flask and I came across the problem that when sending emails to email users there is such an error. smtplib.SMTPAuthenticationError: (535, b'5.7.8 Username and Password not accepted. Learn more at\n5.7.8 https://support.google.com/mail/?p=BadCredentials i1-20020ac25221000000b00478f5d3de95sm4732790lfl.120 - gsmtp') I searched Google for a solution to the problem, but it said that you need to disable a feature that has not been supported by GMAIL recently. Maybe someone knows how to solve this problem now? Here is my config connection: app.config['MAIL_SERVER'] = 'smtp.googlemail.com' app.config['MAIL_PORT'] = 587 app.config['MAIL_USE_TLS'] = True app.config['MAIL_USERNAME'] = os.getenv('MAIL_USERNAME') app.config['MAIL_PASSWORD'] = os.getenv('MAIL_PASSWORD') Help me please | Since You can't use the less secure app now as the deadline was 30th May 2022. An alternative way to send Gmail via flask using an App password Before generating the APP password your google account must enable the 2FA. If the 2FA is enabled then you can hop on to the security section in the manage accounts and there you can see the APP password section. For generating the APP password you also can read -> here The below code I tried and it's working I have used python another dependencies pip install Flask-Mail from flask import Flask from flask_mail import Mail, Message app = Flask(__name__) mail= Mail(app) app.config['MAIL_SERVER']='smtp.gmail.com' app.config['MAIL_PORT'] = 465 app.config['MAIL_USERNAME'] = '[email protected]' app.config['MAIL_PASSWORD'] = 'YOUR_APP_PASSWORD' app.config['MAIL_USE_TLS'] = False app.config['MAIL_USE_SSL'] = True mail = Mail(app) @app.route("/") def index(): msg = Message('Hello', sender = '[email protected]', recipients = ['[email protected]']) msg.body = "Hello Flask message sent from Flask-Mail" mail.send(msg) return "Sent" if __name__ == '__main__': app.run(debug = True) | 5 | 11 |
72,576,260 | 2022-6-10 | https://stackoverflow.com/questions/72576260/whats-solution-for-sending-emails-from-python-while-gmail-the-less-secure-app | I recently was using smtp library for sending emails from gmail account but recently it stopped working after research I found out the google can not let you enable the less secure app anymore . So is there any workaround this ? | If you want to continue using Gmail SMTP, you can set it up by setting an app password. An app password works like an alternate password for your account. It can only be used by the applications you share it with, so it’s more secure than sharing your primary password. Here's how you can set it up: https://support.google.com/accounts/answer/185833?hl=en | 4 | 3 |
72,574,761 | 2022-6-10 | https://stackoverflow.com/questions/72574761/what-is-the-correct-way-to-mock-psycopg2-with-pytest | I need to simulate DB connection without actual connection. All answers I found are trying to mock methods in different ways, connect to docker db, connect to actual PostgreSQL running locally. I believe I need mocking variant but I cannot formulate in my head how should I mock. Am I missing something? Am I moving into wrong direction? I use PostgreSQL and psycopg2. Package psycopg2-binary Database connection: import os import psycopg2 from loguru import logger from psycopg2.extensions import parse_dsn def init_currency_history_table(cursor): create_users_table_query = """ CREATE TABLE IF NOT EXISTS history( id BIGINT PRIMARY KEY NOT NULL, event TEXT, creation_date TIMESTAMPTZ DEFAULT NOW() ); """ cursor.execute(create_users_table_query) def load_db(db_url): db = psycopg2.connect(**db_url) db.autocommit = True return db class PostgresqlApi(object): def __init__(self, load=load_db): logger.info(os.environ.get('DATABASE_URL')) db_url = parse_dsn(os.environ.get('DATABASE_URL')) db_url['sslmode'] = 'require' logger.info('HOST: {0}'.format(db_url.get('host'))) self.db = load_db(db_url) self.cursor = self.db.cursor() init_currency_history_table(self.cursor) self.db.commit() def add_event(self, *, event): insert_event_table = """ INSERT INTO history (event) VALUES (%s); """ self.cursor.execute(insert_event_table, (event)) def events(self): select_event_table = """SELECT * FROM event;""" self.cursor.execute(select_event_table) return self.cursor.fetchall() def close(self): self.cursor.close() self.db.close() I use DB for Falcon API. from fastapi import Depends, FastAPI, HTTPException, status from fastapi.security import HTTPBasic, HTTPBasicCredentials from decimal import Decimal, getcontext from db import PostgresqlApi app = FastAPI() security = HTTPBasic() database = None def db_connection(): global database if not database: database = PostgresqlApi() return database def check_basic_auth_creds(credentials: HTTPBasicCredentials = Depends(security)): correct_username = secrets.compare_digest(credentials.username, os.environ.get('APP_USERNAME')) correct_password = secrets.compare_digest(credentials.password, os.environ.get('APP_PASSWORD')) if not (correct_username and correct_password): raise HTTPException( status_code=status.HTTP_401_UNAUTHORIZED, detail="Incorrect username and password", headers={'WWW-Authenticate': 'Basic'} ) return credentials @app.get("/currencies") def read_currencies(credentials: HTTPBasicCredentials = Depends(check_basic_auth_creds)): db = db_connection() return {'get events': 'ok'} I have tried different methods and plugins. Among others arepytest-pgsql, pytest-postgresql. | The solution I landed at is below. Created fake class that has exactly structure of PostgresqlApi. (see implementation below) Created fixture for db_connection method. (see implementation below) Fake class implementation class FakePostgresqlApi(PostgresqlApi): event_list = [] def __init__(self): pass def add_event(self, *, event): self.event_list.append([1, 'magic trick', 1653630607]) def events(self): return self.event_list def close(self): self.event_list.clear() Fixture from unittest.mock import MagicMock @pytest.fixture def mock_db_connection(mocker): mocker.patch('src.main.db_connection', MagicMock(return_value=FakePostgresqlApi())) The test itself was: def test_read_events(mock_db_connection): # Do whatever I need here, in my case call Falcon API test client | 5 | 3 |
72,574,603 | 2022-6-10 | https://stackoverflow.com/questions/72574603/how-to-fix-integrityerror-psycopg2-errors-foreignkeyviolation-update-or-delet | I have two tables created with Flask-SQLAlchemy below - they have a one to one relationship. class Logo(db.Model): __tablename__ = "logo" id = db.Column(db.Integer, primary_key=True) filename = db.Column(db.String(100)) data = db.Column(db.LargeBinary) username = db.Column(db.String(100), db.ForeignKey("users.username")) users = db.relationship("User", backref=backref("logo", uselist=False)) def __init__(self, filename: str, data, username: str): self.filename = filename self.data = data self.username = username def __repr__(self) -> str: return "<Logo (filename='{}', username='{}')>".format( self.filename, self.username ) class User(UserMixin, db.Model): __tablename__ = "users" id = db.Column(db.Integer, primary_key=True) username = db.Column(db.String(100), unique=True) password = db.Column( db.String(200), primary_key=False, unique=False, nullable=False ) is_admin = db.Column(db.Boolean, default=False, nullable=True) def __init__( self, username: str, password: str, is_admin: bool = False, ): self.username = username self.password = self.set_password(password) self.is_admin = is_admin def get_id(self): return self.username def set_password(self, password: str) -> str: return generate_password_hash(password, method="sha256") def check_password(self, password: str): return check_password_hash(self.password, password) def __repr__(self) -> str: return "<User {}>".format(self.username) I would like to update the user table in a case when the user would like to have a new username: user01 = User.query.filter_by(username="user01").first() logo = Logo.query.filter_by(username="user01").first() new_username= "newusertest" user01.username = new_username logo.users = user01 logo.username = new_username db.session.add(user01) db.session.add(logo) db.session.commit() The db.session.commit throws the following error: IntegrityError: (psycopg2.errors.ForeignKeyViolation) update or delete on table "users" violates foreign key constraint "logo_username_fkey" on table "logo" DETAIL: Key (username)=(user01) is still referenced from table "logo". [SQL: UPDATE users SET username=%(username)s WHERE users.id = %(users_id)s] [parameters: {'username': 'newusertest', 'users_id': 2}] (Background on this error at: https://sqlalche.me/e/14/gkpj) The error says the logo table still has the old username but I have updated it and I don't know why that shows up again, I have spent the last 2 hours debugging and trying different stuff but nothing works. | You could temporarily make the foreign key constraint deferrable and make the update in psql. 
Say we have these tables: test# \d parent Table "public.parent" Column │ Type │ Collation │ Nullable │ Default ════════╪═══════════════════╪═══════════╪══════════╪══════════════════════════════ id │ integer │ │ not null │ generated always as identity name │ character varying │ │ │ Indexes: "parent_name_key" UNIQUE CONSTRAINT, btree (name) Referenced by: TABLE "child" CONSTRAINT "child_pname_fkey" FOREIGN KEY (pname) REFERENCES parent(name) test# \d child Table "public.child" Column │ Type │ Collation │ Nullable │ Default ════════╪═══════════════════╪═══════════╪══════════╪══════════════════════════════ id │ integer │ │ not null │ generated always as identity pname │ character varying │ │ │ Foreign-key constraints: "child_pname_fkey" FOREIGN KEY (pname) REFERENCES parent(name) then the statements would be test# alter table child alter constraint child_pname_fkey deferrable; ALTER TABLE SET CONSTRAINTS test# begin; BEGIN test#* set constraints child_pname_fkey deferred; SET CONSTRAINTS test#* update child set pname = 'Alice' where id = 1; UPDATE 1 test#* update parent set name = 'Alice' where id = 1; UPDATE 1 test#* commit; COMMIT test# alter table child alter constraint child_pname_fkey not deferrable; ALTER TABLE test# Deferring the constraint means updates are evaluated at the end of the transaction rather than immediately, so the the point of view of the database the columns are not out of sync. The long term solution is to use users.id as the foreign key, as it is less likely to change. | 4 | 2 |
72,571,235 | 2022-6-10 | https://stackoverflow.com/questions/72571235/can-i-install-node-js-18-on-centos-7-and-do-i-need-python-3-install-too | I'm not sure if node.js 18 supports centos 7 and is it a requirement to install python 3 for node.js 18? | Step 1 - curl --silent --location https://rpm.nodesource.com/setup_18.x | sudo bash - Step 2 - sudo yum -y install nodejs I don't think you need Python 3. Reference - https://computingforgeeks.com/install-node-js-on-centos-rhel-rocky-linux/ | 16 | -6 |
72,558,809 | 2022-6-9 | https://stackoverflow.com/questions/72558809/python-attributeerror-numpy-ndarray-object-has-no-attribute-to | I now have the updated code as follows: # Hyperparameters random_seed = 123 learning_rate = 0.01 num_epochs = 10 batch_size = 128 device = torch.device("cuda:1" if torch.cuda.is_available() else "cpu") for epoch in range(num_epochs): model = resnet34.train() for batch_idx, (features, targets) in enumerate(train_generator): features = features.to(device) targets = targets.to(device) ### FORWARD AND BACK PROP logits = model(features) cost = torch.nn.functional.cross_entropy(logits, targets) optimizer.zero_grad() cost.backward() ### UPDATE MODEL PARAMETERS optimizer.step() ### LOGGING if not batch_idx % 50: print ('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f' %(epoch+1, num_epochs, batch_idx, len(datagen)//batch_size, cost)) model = model.eval() # eval mode to prevent upd. batchnorm params during inference with torch.set_grad_enabled(False): # save memory during inference print('Epoch: %03d/%03d training accuracy: %.2f%%' % ( epoch+1, num_epochs, compute_accuracy(model, train_generator))) When having only one image, the code runs fine. But, when I add another image or more, I get the following: features = features.to(device) targets = targets.to(device) AttributeError: 'numpy.ndarray' object has no attribute 'to' | It would be nice to see your train_generator code for clarity, but it does not seem to be a torch DataLoader. In this case, you should probably convert your arrays to tensors manually. There are several ways to do so: torch.from_numpy(numpy_array) - for numpy arrays; torch.as_tensor(list) - for common lists and tuples; torch.tensor(array) should also work but the above ways will avoid copying the data when possible. | 6 | 7 |
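A hedged sketch of applying the answer inside the question's training loop; the numpy batch below is a stand-in, since the real train_generator is not shown.

```python
import numpy as np
import torch

device = torch.device("cuda:1" if torch.cuda.is_available() else "cpu")

# Stand-in for one (features, targets) batch yielded by train_generator.
features = np.random.rand(128, 3, 224, 224).astype(np.float32)
targets = np.random.randint(0, 10, size=128)

# Convert the numpy arrays to tensors before calling .to(device).
features = torch.as_tensor(features).to(device)
targets = torch.as_tensor(targets).long().to(device)
```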
72,554,445 | 2022-6-9 | https://stackoverflow.com/questions/72554445/pandas-convert-a-series-which-contains-strings-like-10-and-0-10-into-numer | What is the best way to convert a Pandas series that contains strings of the type "10%" and "0.10" into numeric values? I know that if I have a series with just "0.10" type strings I can just do pd.to_numeric. I also know that if I have a series of "10%" type strings I can do str.replace("%","") and then do pd.to_numeric and divide by 100. The issue I have is for a series with a mix of "0.10" and "10%" type strings. How do I best convert this into a series with the correct numeric types. I think I could do it by first making a temporary series with True / False depending on if the string has "%" in it or not and then based on that applying a function. But this seems inefficient. Is there a better way? What I Have Tried for Reference: mixed = pd.Series(["10%","0.10","5.5%","0.02563"]) mixed.str.replace("%","").astype("float")/100 0 0.100000 1 0.001000 2 0.055000 3 0.000256 dtype: float64 # This doesn't work, because even the 0.10 and 0.02563 are divided by 100. | The easiest solution is to select entries using a mask and handle them in bulk: from pandas import Series, to_numeric mixed = Series(["10%", "0.10", "5.5%", "0.02563"]) # make an empty series with similar shape and dtype float converted = Series(index=mixed.index, dtype='float') # use a mask to select specific entries mask = mixed.str.contains("%") converted.loc[mask] = to_numeric(mixed.loc[mask].str.replace("%", "")) / 100 converted.loc[~mask] = to_numeric(mixed.loc[~mask]) print(converted) # 0 0.10000 # 1 0.10000 # 2 0.05500 # 3 0.02563 # dtype: float64 | 7 | 5 |
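Not from the accepted answer: an alternative vectorized sketch that avoids the intermediate masked assignment by dividing only the entries that end in "%".

```python
import numpy as np
import pandas as pd

mixed = pd.Series(["10%", "0.10", "5.5%", "0.02563"])

# Strip any trailing "%", convert to numbers, then divide the "%" entries by 100.
converted = pd.to_numeric(mixed.str.rstrip("%")) / np.where(mixed.str.endswith("%"), 100, 1)
print(converted)
# 0    0.10000
# 1    0.10000
# 2    0.05500
# 3    0.02563
```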
72,553,699 | 2022-6-9 | https://stackoverflow.com/questions/72553699/what-is-the-kotlin-equivalent-of-python-generators | If I have a python generator function, let's say this one: def gen(): x = 0 while (true): yield x x += 1 This function remembers its current state, and every time you call gen(), yields a new value. Essentially, I would like a Kotlin sequence which can remember its state. | def gen(): x = 0 while (true): yield x x += 1 This function remembers its current state, and every time you call gen(), yields a new value. This is incorrect. Every time you call gen() you get a new "generator object" whose state is independent of any other generator object created by this function. You then query the generator object to get the next number. For example: def demo(): numbers = gen() # 'gen()' from your question for _ in range(0, 3): next_number = next(numbers) print(next_number) if __name__ == '__main__' demo() print() demo() Output: 0 1 2 0 1 2 As you can see, the sequence of numbers "starts over" when you call gen() again (though if you kept a reference to the old generator object it would continue from 2, even after calling gen() again). In Kotlin, you can use the kotlin.sequences.iterator function. It creates an Iterator which lazily yields the next value, just like a Python generator object. For example: fun gen() = iterator { var x = 0 while (true) { yield(x) x++ } } fun demo() { val numbers = gen() repeat(3) { val nextNumber = numbers.next() println(nextNumber) } } fun main() { demo() println() demo() } Which will output: 0 1 2 0 1 2 Just like the Python code. Note you can do the essentially the same thing with a Kotlin Sequence, you just have to convert the Sequence into an Iterator if you want to use it like a Python generator object. Though keep in mind that Kotlin sequences are meant more for defining a series of operations and then lazily processing a group of elements in one go (sort of like Java streams, if you're familiar with them). | 5 | 7 |
72,555,346 | 2022-6-9 | https://stackoverflow.com/questions/72555346/calculation-of-sales-with-a-dataframe-takes-too-long | I have a problem. I would like to calculate the turnover for a customer in the last 6 months. The methods work on my dummy record, unfortunately the whole thing does not work on my real record as it is too slow. How can I rewrite this so that it performs faster? Dataframe customerId fromDate sales 0 1 2022-06-01 100 1 1 2022-05-25 20 2 1 2022-05-25 50 3 1 2022-05-20 30 4 1 2021-09-05 40 5 2 2022-06-02 80 6 3 2021-03-01 50 7 3 2021-02-01 20 Code from datetime import datetime from dateutil.relativedelta import relativedelta import pandas as pd def find_last_date(date_: datetime) -> datetime: six_months = date_ + relativedelta(months=-6) return six_months def sum_func(row: pd.DataFrame, df: pd.DataFrame) -> int : return df[ (df["customerId"] == row["customerId"]) & (row["fromDate"] + relativedelta(months=-6)<= df["fromDate"]) & (df["fromDate"] <= row["fromDate"]) ]["sales"].sum() d = { "customerId": [1, 1, 1, 1, 1, 2, 3, 3], "fromDate": [ "2022-06-01", "2022-05-25", "2022-05-25", "2022-05-20", "2021-09-05", "2022-06-02", "2021-03-01", "2021-02-01", ], "sales": [100, 20, 50, 30, 40, 80, 50, 20], } df = pd.DataFrame(data=d) df["fromDate"] = pd.to_datetime(df["fromDate"], errors="coerce") df["last_month"] = df["fromDate"].apply(find_last_date) df["total_sales"]=df[["customerId", "fromDate"]].apply(lambda x: sum_func(x, df), axis=1) print(df) What I want customerId fromDate sales last_month total_sales 0 1 2022-06-01 100 2022-03-01 200 # 100 + 20 + 50 + 30 1 1 2022-05-25 20 2022-02-25 100 # 20 + 50 + 30 2 1 2022-05-25 50 2022-02-25 100 # 50 + 20 + 30 3 1 2022-05-20 30 2022-02-20 30 # 30 4 1 2021-09-05 40 2021-06-05 40 # 40 5 2 2022-06-02 80 2022-03-02 80 # 80 6 3 2021-03-01 50 2020-12-01 70 # 50 + 20 7 3 2021-02-01 20 2020-11-01 20 # 20 print(df['customerId'].value_counts().describe()) count 53979.000 mean 87.404 std 1588.450 min 1.000 25% 2.000 50% 6.000 75% 22.000 max 205284.000 print(df['fromDate'].agg((min, max))) min 2021-02-22 max 2022-03-26 | Using multiprocessing and consider 6 months as 180 days to reduce the memory size and the time computing. Copy the following code to a python file and run it from the console (not from a Jupyter Notebook) import pandas as pd import numpy as np import multiprocessing as mp import time def sum_sales(customer, df): # 1st pass: sum sales of same days, reduce the row numbers df1 = df.groupby('fromDate')['sales'].sum() # Generate all missing dates df1 = df1.reindex(pd.date_range(df1.index.min(), df1.index.max(), freq='D'), fill_value=0) # 2nd pass: use a sliding window of 180 days to sum df1 = df1.rolling(90, min_periods=0).sum().astype(int) # Restore original index for the group df1 = df1.reindex(df['fromDate']).reset_index(drop=True).to_frame().set_index(df.index) return df1 if __name__ == '__main__': # Do not remove this line! 
Mandatory # Setup a minimal reproducible example N = 3_000_000 D = pd.to_datetime('2021-1-1') rng = np.random.default_rng(2022) dti = D + pd.to_timedelta(rng.integers(0, 365*2, N), unit='D') cust = rng.integers(0, 75000, N) sales = rng.integers(1, 200, N) df = pd.DataFrame({'customerId': cust, 'fromDate': dti, 'sales': sales}) # Ensure your dataframe is sorted by fromDate for rolling window df.sort_values(['customerId', 'fromDate'], ignore_index=True) start = time.time() with mp.Pool(mp.cpu_count() - 1) as p: results = p.starmap(sum_sales, df.groupby('customerId')) df['total_sales'] = pd.concat(results) end = time.time() print(f"Elapsed time: {end - start:.2f} seconds") For 3mio records and 75k different customers on 2 years (730 days) [...]$ python mp.py Elapsed time: 24.36 seconds However the number of sales per customer is well balanced than your: >>> df['customerId'].value_counts().describe(percentiles=np.linspace(0, 1, 11) count 75000.000000 mean 40.000000 std 6.349157 min 15.000000 0% 15.000000 10% 32.000000 20% 35.000000 30% 37.000000 40% 38.000000 50% 40.000000 60% 41.000000 70% 43.000000 80% 45.000000 90% 48.000000 # <- check the 90th percentile of your data 100% 73.000000 max 73.000000 # <- max transactions for a single customer Name: customerId, dtype: float64 Because the sales are properly distributed per customer, my sample takes advantage of multiprocessing. In your case, I don't think it will be the case (check the 90th percentile). The check with your dataframe: >>> df customerId fromDate sales total_sales 0 1 2022-06-01 100 200 1 1 2022-05-25 20 100 2 1 2022-05-25 50 100 3 1 2022-05-20 30 30 4 1 2021-09-05 40 40 5 2 2022-06-02 80 80 6 3 2021-03-01 50 70 7 3 2021-02-01 20 20 If you decide to choose to keep a variable moving window of 6 months instead of a fixed moving window of 180 days, the algorithm will me the same. The important point in the code is to reduce the number of rows per customer. In your sample, you can group the sales for a same (customer, date). The customer 1 have 2 rows for 2022-05-25 so you can sum them immediately. IIUC, in your real data, you have a customer with 205284 sales between 2021-02-22 and 2022-03-26 (397 days), so this user has an average of 517 transactions per day (?). If you sum sales of same days, you reduce the number of records from 205284 to 397... | 4 | 1 |
72,550,789 | 2022-6-8 | https://stackoverflow.com/questions/72550789/generate-a-type-from-another-type-and-change-fields-to-optional | I have this type: class SomeResource: id: int name: str And I need this type: class SomeResourceQuery: id: Optional[int] name: Optional[str] But I'd like to avoid having to write it by hand. Is it possible to generate this SomeResourceQuery type from the SomeResource type? Just convert all the types of the fields to optional. (Update: Already optional fields can stay optional - no nested optionals.) I plan to use this SomeResourceQuery in a repository, like this: class SomeResourceRepository: def get_one_or_none(self, query: SomeResourceQuery) -> Optional[SomeResource]: ... Update: Just to show what I'm thinking currently: class SomeResource: id: int name: str # I don't want to write this by hand: # class SomeResourceQuery: # id: Optional[int] # name: Optional[str] # So what can I do here to make all fields that are not already optional, optional? SomeResourceQuery = type("SomeResourceQuery", SomeResource) # What to do here? | You can use the type constructor to construct the new type with the appropriate annotations. def construct_query_class(cls: type) -> type: annotations = {key: typing.Optional[value] for key, value in cls.__annotations__.items()} return dataclasses.dataclass(type(cls.__name__ + 'Query', (), {'__annotations__': annotations})) class SomeResource: id: int name: str SomeResourceQuery = construct_query_class(SomeResource) # type: typing.Any | 4 | 6 |
72,558,859 | 2022-6-9 | https://stackoverflow.com/questions/72558859/transforming-in-pandas-groupby | Index E_Id P_Id Date 121 701 9002 2021 122 701 9001 2019 123 702 9002 2021 124 702 9002 2019 125 703 9001 2021 126 704 9002 2019 127 704 9003 2019 Now I want to create another DataFrame grouped by E_Id. For each P_Id I want to count its number of rows (call it 'x'), and then sum 'x' over every P_Id linked with each E_Id. So: E_Id TotalOfPIds 701 6 Can anyone help me? As an intermediary step, I did this: data['_Pid_Total'] = data.groupby('P_Id')[['P_Id']].transform('count') And then for a single E_Id it works like this: data.loc[data['E_Id'] == '701', ['E_Id', 'P_Id', '_Pid_Total']].groupby(['E_Id', 'P_Id']).first().sum() returning a single integer. However, I want to use this via the transform method, or just do it for the entire DataFrame. | Thank you for an interesting question. My approach is: create df1 with the unique combinations of E_Id and P_Id; create df2 to count the number of occurrences of each P_Id; merge df1 and df2, then use groupby with sum. I modified your sample a bit. # Dataframe: df = pd.DataFrame({'e':[701, 701,701,701, 701, 701,702,702,703,704, 704, 704], 'p':[9002, 9002, 9002, 9002, 9001, 9003, 9002, 9002, 9001, 9002, 9003, 9001]}) # Step 1: Unique combination df1 = df.groupby(['e', 'p']).count().reset_index() # Step 2: Count the number of occurrences of each unique P_id df2 = df.groupby('p').count().reset_index().rename(columns={'e':'val'}) # Step 3: Merge then use groupby and sum df3 = pd.merge(df1, df2, on=['p'], how='left') df3.groupby('e')[['val']].sum().reset_index() | 4 | 2 |
72,552,217 | 2022-6-8 | https://stackoverflow.com/questions/72552217/what-is-the-equivalent-of-connecting-to-google-cloud-storagegcs-like-in-aws-s3 | I want to access google cloud storage as in the code below. # amazon s3 connection import s3fs as fs with fs.open("s3://mybucket/image1.jpg") as f: image = Image.open(f).convert("RGB") # Is there an equivalent code like this GCP side? with cloudstorage.open("gs://my_bucket/image1.jpg") as f: image = Image.open(f).convert("RGB") | You're looking for gcsfs. Both s3fs and gcsfs are part of the fsspec project and have very similar APIs. import gcsfs fs = gcsfs.GCSFileSystem() with fs.open("gs://my_bucket/image1.jpg") as f: image = Image.open(f).convert("RGB") Note that both of these can be accessed from the fsspec interface, as long as you have the underlying drivers installed, e.g.: import fsspec with fsspec.open('s3://my_s3_bucket/image1.jpg') as f: image1 = Image.open(f).convert("RGB") with fsspec.open('gs://my_gs_bucket/image1.jpg') as f: image2 = Image.open(f).convert("RGB") # fsspec handles local paths too! with fsspec.open('/Users/myname/Downloads/image1.jpg') as f: image3 = Image.open(f).convert("RGB") fsspec is the file system handler underlying pandas and other libraries which parse cloud URLs. The reason the following "just works" is because fsspec is providing the cloud URI handling: pd.read_csv("s3://path/to/my/aws.csv") pd.read_csv("gs://path/to/my/google.csv") pd.read_csv("my/local.csv") | 5 | 8 |
72,551,878 | 2022-6-8 | https://stackoverflow.com/questions/72551878/how-to-flatten-a-multi-column-dataframe-into-2-columns | Given the following table: group_a = {'ba':[2.0,9.4,10.8], 'bb':[4.2,7.1,3], 'bc':[8.1,9.5,6.1]} A = pd.DataFrame(group_a, index=['aa','ab','ac']) That looks like this: ba bb bc aa 2.0 4.2 8.1 ab 9.4 7.1 9.5 ac 10.8 3.0 6.1 How can I flatten this table so that it looks like this: Values aa_ba 2.0 aa_bb 4.2 aa_bc 8.1 ab_ba 9.4 ab_bb 7.1 ab_bc 9.5 ac_ba 10.8 ac_bb 3.0 ac_bc 6.1 | You can use stack and rework the index: B = A.stack() B.index = B.index.map('_'.join) out = B.to_frame('Values') output: Values aa_ba 2.0 aa_bb 4.2 aa_bc 8.1 ab_ba 9.4 ab_bb 7.1 ab_bc 9.5 ac_ba 10.8 ac_bb 3.0 ac_bc 6.1 | 3 | 7 |
72,547,116 | 2022-6-8 | https://stackoverflow.com/questions/72547116/modulenotfounderror-no-module-named-discord-slash | I'm trying to install a module called "discord_slash" and when I use it in a Python file it displays the error "ModuleNotFoundError: No module named 'discord_slash'". I've tried uninstalling it and installing it again but it's not working. | You're attempting to use the legacy v3 import of the library, available here. As of 4.0, you should be using import interactions. Based on a comment, I understand you're watching a video from 2021; if you'd like to use a similar version you can use discord-py-slash-command 3.0.3, which is the latest release of that branch. You can install that like this: pip install discord-py-interactions==3.0.3 I'd highly recommend you go ahead and find a newer video guide or read their Quickstart documentation here so you can have the up-to-date library and features. | 4 | 7 |
72,545,775 | 2022-6-8 | https://stackoverflow.com/questions/72545775/django-orm-postgresql-delete-query | I have 30 instances of the Room objects, i.e. 30 rows in the database table. In Python code I have Room.objects.all().delete(). I see that Django ORM translated it into the following PostgreSQL query: DELETE FROM "app_name_room" WHERE "app_name_room"."id" IN ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15, $16, $17, $18, $19, $20, $21, $22, $23, $24, $25, $26, $27, $28, $29, $30). Why doesn't the Django ORM use a more parsimonious DELETE FROM app_name_room query? Is there any way to switch to it and avoid listing all IDs? | Interesting question. It got me thinking so I went a little deeper. The main reason could be that using DELETE FROM app_name_room doesn't take care of CASCADE delete However, answering your question Is there any way to switch to it and avoid listing all IDs? You can do this using the private method _raw_delete. For instance: objects_to_delete = Foo.objects.all() objects_to_delete._raw_delete(objects_to_delete.db) This will execute the following query: DELETE FROM "objects_to_delete" PS: According to the function docstring: Delete objects found from the given queryset in single direct SQL query. No signals are sent and there is no protection for cascades. | 4 | 1 |
72,456,234 | 2022-6-1 | https://stackoverflow.com/questions/72456234/why-does-print-i-e-three-dots-in-a-row-print-blank | I'd like to print three dots in a row (to form an ellipsis), but print() prints blank. print("one moment...") one moment... print("...") print("..") .. print("...abc...") abc... print("\u2026") … What's happening here? Why is "..." parsed in an exceptional way? I am using ipython in PyCharm. | Update 2024-07-31 According to the YouTrack issue (archive) this issue has been fixed and is available in builds: 242.6184, 241.16163, 241.17011.12, 242.10180.17. Original Post Looks like this is a known issue with Pycharm where its interactive console removes the leading three periods from a print statement. Here’s the ticket tracking this issue. A possible workaround for now is defining something like: def iprint(obj): if (s:=str(obj)).startswith("..."): print(" "+s) else: print(s) which looks like: >>> iprint("...ymmv") ...ymmv | 63 | 69 |
72,474,673 | 2022-6-2 | https://stackoverflow.com/questions/72474673/what-is-the-recommended-way-for-retrieving-row-numbers-index-for-polars | I know polars does not support index by design, so df.filter(expr).index isn't an option, another way I can think of is by adding a new column before applying any filters, not sure if this is an optimal way for doing so in polars df.with_columns(pl.Series('index', range(len(df))).filter(expr).index | Use with_row_index(): df = pl.DataFrame([pl.Series("a", [5, 9, 6]), pl.Series("b", [8, 3, 4])]) In [20]: df.with_row_index() Out[20]: shape: (3, 3) ┌────────┬─────┬─────┐ │ index ┆ a ┆ b │ │ --- ┆ --- ┆ --- │ │ u32 ┆ i64 ┆ i64 │ ╞════════╪═════╪═════╡ │ 0 ┆ 5 ┆ 8 │ │ 1 ┆ 9 ┆ 3 │ │ 2 ┆ 6 ┆ 4 │ └────────┴─────┴─────┘ # Start from 1 instead of 0. In [21]: df.with_row_index(offset=1) Out[21]: shape: (3, 3) ┌────────┬─────┬─────┐ │ index ┆ a ┆ b │ │ --- ┆ --- ┆ --- │ │ u32 ┆ i64 ┆ i64 │ ╞════════╪═════╪═════╡ │ 1 ┆ 5 ┆ 8 │ │ 2 ┆ 9 ┆ 3 │ │ 3 ┆ 6 ┆ 4 │ └────────┴─────┴─────┘ # Start from 1 and call column "my_index". In [22]: df.with_row_index(name="my_index", offset=1) Out[22]: shape: (3, 3) ┌──────────┬─────┬─────┐ │ my_index ┆ a ┆ b │ │ --- ┆ --- ┆ --- │ │ u32 ┆ i64 ┆ i64 │ ╞══════════╪═════╪═════╡ │ 1 ┆ 5 ┆ 8 │ │ 2 ┆ 9 ┆ 3 │ │ 3 ┆ 6 ┆ 4 │ └──────────┴─────┴─────┘ | 11 | 16 |
72,482,298 | 2022-6-2 | https://stackoverflow.com/questions/72482298/why-isnt-my-class-initialized-by-def-int-or-def-init-why-do-i-get-a | If your question was closed as a duplicate of this, it is because you had a code sample including something along the lines of either: class Example: def __int__(self, parameter): self.attribute = parameter or: class Example: def _init_(self, parameter): self.attribute = parameter When you subsequently attempt to create an instance of the class, an error occurs: >>> Example("an argument") Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: Example() takes no arguments (In some versions of Python, the error may instead say TypeError: object.__new__() takes no parameters.) Alternately, instances of the class seem to be missing attributes: >>> class Example: ... def __int__(self): # or _init_ ... self.attribute = 'value' >>> Example().attribute Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'Example' object has no attribute 'attribute' You might also wonder: what do these exception messages mean, and how do they relate to the problem? Why didn't a problem occur earlier, for example, with the class definition itself? How else might the problem manifest? How can I guard against this problem in the future? This is an artificial canonical question created specifically to head off two of the most common typographical errors in code written by new Python programmers. While questions caused by a typo are normally closed for that reason, there are some useful things to explain in this case, and having a duplicate target allows for closing questions faster. I have tried to design the question to be easy to search for. See also TypeError: __init__() should return None, not 'int' for the opposite problem - writing __init__ instead of __int__ when trying to make a class that can be converted to integer. | This is because the code has a simple typographical error: the method should instead be named __init__ - note the spelling, and note that there are two underscores on each side. What do the exception messages mean, and how do they relate to the problem? As one might guess, a TypeError is an error that has to do with the type of something. In this case, the meaning is a bit strained: Python also uses this error type for function calls where the arguments (the things you put in between () in order to call a function, class constructor or other callable) cannot be properly assigned to the parameters (the things you put between () when writing a function using the def syntax). In the examples where a TypeError occurs, the class constructor for Example does not take arguments. Why? Because it is using the base object constructor, which does not take arguments. That is just following the normal rules of inheritance: there is no __init__ defined locally, so the one from the superclass - in this case, object - is used. Similarly, an AttributeError is an error that has to do with an attribute of something. This is quite straightforward: the instance of Example doesn't have any .attribute attribute, because the constructor (which, again, comes from object due to the typo) did not set one. Why didn't a problem occur earlier, for example, with the class definition itself? Because the method with a wrongly typed name is still syntactically valid. Only syntax errors (reported as SyntaxError) can be caught before the code runs. 
Python does not assign any special meaning to methods named _init_ (with one underscore on each side), so it does not care what the parameters are. While __int__ is used for converting instances of the class to integer, and shouldn't have any parameters besides self, it is still syntactically valid. Your IDE might be able to warn you about an __int__ method that takes suspicious parameters (i.e., anything besides self). However, a) that doesn't completely solve the problem (see below), and b) the IDE might have helped you get it wrong in the first place (by making a bad autocomplete suggestion). The _init_ typo seems to be much less common nowadays. My guess is that people used to do this after reading example code out of books with poor typesetting. How else might the problem manifest? In the case where an instance is successfully created (but not properly initialized), any kind of problem could potentially happen later (depending on why proper initialization was needed). For example: BOMB_IS_SET = True class DefusalExpert(): def __int__(self): global BOMB_IS_SET BOMB_IS_SET = False def congratulate(self): if BOMB_IS_SET: raise RuntimeError("everything blew up, gg") else: print("hooray!") >>> d = DefusalExpert() >>> d.congratulate() Traceback (most recent call last): ... RuntimeError: everything blew up, gg If you intend for the class to be convertible to integer and also wrote __int__ deliberately, the last one will take precedence: class LoneliestNumber: def __int__(self): return 1 def __int__(self): # was supposed to be __init__ self.two = "can be as bad" >>> int(LoneliestNumber()) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: __int__ returned non-int (type NoneType) How might I guard against the problem in the future? There is no magic bullet. I find it helps a little to have the convention of always putting __init__ (and/or __new__) as the first method in a class, if the class needs one. However, there is no substitute for proofreading, or for training. | 12 | 7 |
72,472,167 | 2022-6-2 | https://stackoverflow.com/questions/72472167/deprecation-warning-the-system-version-of-tk-is-deprecated-m1-mac-in-vs-code | (M1 MBA 2020, MacOS 12.3.1) So inside of Vs Code, when I select my interpreter as Python 3.8.9 from my usr/local/bin Tkinter it runs as I want it to. Here is the running code for reference. The problem arises when I am trying to use the Global Python 3.8.9 interpreter (usr/bin/python3). When the code runs, the application ends up looking like this. Additionally, when I run the code the terminal reads the following: DEPRECATION WARNING: The system version of Tk is deprecated and may be removed in a future release. Please don't rely on it. Set TK_SILENCE_DEPRECATION=1 to suppress this warning. How is it possible for me to fix this error? Or update my global Tkinter version without straying away from Python 3.8.9. Furthermore if any more info is needed I'll be happy to provide, sorry I'm new to this stuff 😅 Packages used in the app: tkinter, Pillow, tkmacosx One last thing, when I get rid of all mentions of the package Tkmacosx, the app appears like this: | If you have Homebrew installed, you can update tk with: brew uninstall tcl-tk --devel brew install tcl-tk Which is the recommended option Then you may need to add export PATH="/usr/local/opt/tcl-tk/bin:$PATH" to your .zshrc file: If you are using a zsh terminal: Use: echo "# For tkinter export PATH=\"/usr/local/opt/tcl-tk/bin:\$PATH\"" >> ~/.zshrc source ~/.zshrc Or if you are using a bash terminal: echo "# For tkinter export PATH=\"/usr/local/opt/tcl-tk/bin:\$PATH\"" >> ~/.bashrc source ~/.bashrc If the steps above didn't work, you may need to install another package (@goker): brew install [email protected] Homebrew Reference Python's offical tk upgrade docs | 10 | 15 |
72,504,576 | 2022-6-5 | https://stackoverflow.com/questions/72504576/why-use-from-module-import-a-as-a-instead-of-just-from-module-import-a | When reading source code of fastapi, this line make me fuzzy: from starlette.testclient import TestClient as TestClient Why not just: from starlette.testclient import TestClient? | From the point of view of executable code, there is absolutely no difference in terms of the Python bytecode being generated by the two different code examples (using Python 3.9): >>> dis.dis('from starlette.testclient import TestClient as TestClient') 1 0 LOAD_CONST 0 (0) 2 LOAD_CONST 1 (('TestClient',)) 4 IMPORT_NAME 0 (starlette.testclient) 6 IMPORT_FROM 1 (TestClient) 8 STORE_NAME 1 (TestClient) 10 POP_TOP 12 LOAD_CONST 2 (None) 14 RETURN_VALUE >>> dis.dis('from starlette.testclient import TestClient') 1 0 LOAD_CONST 0 (0) 2 LOAD_CONST 1 (('TestClient',)) 4 IMPORT_NAME 0 (starlette.testclient) 6 IMPORT_FROM 1 (TestClient) 8 STORE_NAME 1 (TestClient) 10 POP_TOP 12 LOAD_CONST 2 (None) 14 RETURN_VALUE As shown, they are exactly identical. (Related question asked before existence of PEP-0484). However, the comment by Graham501617 noted how modern type hinting validators (such as mypy) accept this particular syntax to denote the re-export of that imported name (the other being the __all__, which thankfully they did end up correctly supporting as that has been a standard syntax to denote symbols to (re-)export since Python 2). Specifically, as per the description of Stub Files in the referenced PEP 0484, quote: Modules and variables imported into the stub are not considered exported from the stub unless the import uses the import ... as ... form or the equivalent from ... import ... as ... form. (UPDATE: To clarify, the intention here is that only names imported using the form X as X will be exported, i.e. the name before and after as must be the same.) Looking at git blame for the relevant file in the packages pointed to this commit (direct link to relevant diff for the file) indicated that this particular type hinting issue was being addressed (as part of this issue) to ensure mypy will treat those imported names as re-export, thus allowing the usage of the --no-implicit-reexport flag (which --strict has likely implicitly enabled). This particular re-export syntax however is very much strict about the "name before and after as must be the same", the syntax referred in the related question (i.e. import foo.bar as bar) can be found in recommendation by certain modern packages (e.g. PyTorch recommends the use of import torch.nn as nn, as discussed in this question), does not in fact allow the bar (or nn) symbol be re-exported from the current module as foo.bar is not the same as bar (likewise torch.nn is not the same as nn, as the whole set of tokens torch.nn is evaluated instead of the just the final identifier after the final .). | 11 | 16 |
72,478,573 | 2022-6-2 | https://stackoverflow.com/questions/72478573/how-to-send-an-email-using-python-after-googles-policy-update-on-not-allowing-j | I am trying to learn how to send an email using python. All the tutorials on the web that I have read explain how to do it using Gmail. But, from 30/05/2022 (despite the fact that everybody is free to do whatever he wants with his account) Google has a new policy that states: To help keep your account secure, starting May 30, 2022, Google will no longer support the use of third-party apps or devices that only ask for your username and password for you. Sign in to your Google account. Source: https://support.google.com/accounts/answer/6010255 And we get: So my question is there any other way to send an email using python, (including email accounts belonging to an other company)? Here is my function to send an email: def send_email_fct(filename, filepath, fromaddr, mdpfrom, toaddr): """" filename: file name to be sent with extension filepath: file path of the file to be sent fromaddr: sender email address mdpfrom: password of sender email address toaddr: receiver email address""" msg = MIMEMultipart() # instance of MIMEMultipart msg['From'] = fromaddr msg['To'] = toaddr msg['Subject'] = "data file" body_email = "Body_of_the_mail" msg.attach(MIMEText(body_email, 'plain')) attachment = open(filepath, 'rb') # open the file to be sent p = MIMEBase('application', 'octet-stream') # instance of MIMEBase p.set_payload(attachment.read()) encoders.encode_base64(p) p.add_header('Content-Disposition', "attachment; filename= %s" % filename) msg.attach(p) # attach the instance 'p' to instance 'msg' s = smtplib.SMTP('smtp.gmail.com', 587) # SMTP s.starttls() s.login(fromaddr, mdpfrom) text = msg.as_string() s.sendmail(from_email_addr, toaddr, text) # sending the email s.quit() # terminating the session And I get this error: smtplib.SMTPAuthenticationError: (535, b'5.7.8 Username and Password not accepted. Learn more at\n5.7.8 https://support.google.com/mail/?p=BadCredentials c12-20020aa7d60c000000b0042be14040c1sm2612116edr.86 - gsmtp') To fix this problem, I think that the only line that need to be change is this one: s = smtplib.SMTP('smtp.gmail.com', 587) If you know by what I can change it or if you see any other error, it will help me a lot! :-) | Here is a more precise answer with all the main steps. I hope it will help other people. Log in into your email account: https://myaccount.google.com Then go to the security part Be sure that you have turn on two steps verification and click on "App password." As of 9th July 2023, this might not be visible, but is reachable directly via myaccount.google.com/apppasswords After select email and the corresponding device It will generate a password that will look like this; it is this password that you have to use in your python script. This password will appear here; in this place, you will have the option to erase it (so it will be no longer be useful to connect to your email account). Hope it will help other people! | 21 | 47 |
72,490,783 | 2022-6-3 | https://stackoverflow.com/questions/72490783/how-do-you-use-patch-as-a-context-manager | I have a class that mocks database functionality which does not subclass Mock or MagicMock because it defines its own __init__() method: class DatabaseMock(): def __init__(self, host=None): self.host = host self.x = {} # other methods that mutate x There is a function I want to test that makes an API call to the real database, so I patched it out: from unittest.mock import patch class TestFunctions(): def test_function(self): with patch("path.to.database.call", DatabaseMock) as mock: result = function_i_am_testing() assert mock.x == result DatabaseMock has a field called x, but in the patch context, mock.x raises an AttributeError. This leads me to believe mock is not really an instance of DatabaseMock(). Also, I tried making x a class-level attribute, which makes x visible, but its state would persist across separate test calls, which I do not want. What is mock, and how can I reference the mocked object instance in the context? | I have figured out the issue. When patch is given a class, it will return a class, not an object instance of that class. So in my example, mock is not a DataBaseMock object instance, but a reference to the class. This is why class-level variables are visible, but not object instance fields. In order to get my desired functionality, I did this: from unittest.mock import patch class TestFunctions(): def test_function(self): with patch("path.to.database.call") as mock: mock.return_value = DataBaseMock() result = function_i_am_testing() assert mock.return_value.x == result Now, mock is a MagicMock object, whose return value is the object I need. | 4 | 2 |
72,461,176 | 2022-6-1 | https://stackoverflow.com/questions/72461176/dictstr-unknown-is-incompatible-with-my-custom-typeddict | Given this custom TypedDict TMyDict: class TMyDict(TypedDict, total=False): prop_a: int prop_b: int This is ok: def get_my_dict_works() -> TMyDict: return { 'prop_a': 0, 'prop_b': 1 } But this don't: def get_my_dict_fail() -> TMyDict: d= { 'prop_a': 0, 'prop_b': 1 } return d Error message is: Expression of type "dict[str, Unknown]" cannot be assigned to return type "TMyDict" "dict[str, Unknown]" is incompatible with "TMyDict" And it works if I add the type annotation when assigning var: def get_my_dict_fix() -> TMyDict: d: TMyDict = { 'prop_a': 0, 'prop_b': 1 } return d Why? | Your type checker determined the type of d before you used it In order to be performant and to keep type inference simple, type checkers typically only scan the code once and try to determine the type of a variable the moment it is assigned. It saw the line d = { 'prop_a': 0, 'prop_b': 1 } and assumed that d: dict[str, int] or maybe d: dict[str, Unknown]. If the type checker tried to accept your code, it would have to jump back and check all before uses of d the moment d is used as TMyDict. This could be very costly and as just adding the annotation fixes the error, it is not supported in most type checkers. Also, the code is cleaner if the user can directly see that d is supposed to be an TMyDict instead of just any dict. | 5 | 3 |
72,465,421 | 2022-6-1 | https://stackoverflow.com/questions/72465421/how-to-use-poetry-with-docker | How do I install poetry in my image? (should I use pip?) Which version of poetry should I use? Do I need a virtual environment? There are many examples and opinions in the wild which offer different solutions. | TL;DR Install poetry with pip, configure virtualenv, install dependencies, run your app. FROM python:3.10 # Configure Poetry ENV POETRY_VERSION=1.2.0 ENV POETRY_HOME=/opt/poetry ENV POETRY_VENV=/opt/poetry-venv ENV POETRY_CACHE_DIR=/opt/.cache # Install poetry separated from system interpreter RUN python3 -m venv $POETRY_VENV \ && $POETRY_VENV/bin/pip install -U pip setuptools \ && $POETRY_VENV/bin/pip install poetry==${POETRY_VERSION} # Add `poetry` to PATH ENV PATH="${PATH}:${POETRY_VENV}/bin" WORKDIR /app # Install dependencies COPY poetry.lock pyproject.toml ./ RUN poetry install # Run your app COPY . /app CMD [ "poetry", "run", "python", "-c", "print('Hello, World!')" ] In Detail Installing Poetry How do I install poetry in my image? (should I use pip?) Install it with pip You should install poetry with pip. but you need to isolate it from the system interpreter and the project's virtual environment. For maximum control in your CI environment, installation with pip is fully supported ... offers the best debugging experience, and leaves you subject to the fewest external tools. ENV POETRY_VERSION=1.2.0 ENV POETRY_VENV=/opt/poetry-venv # Install poetry separated from system interpreter RUN python3 -m venv $POETRY_VENV \ && $POETRY_VENV/bin/pip install -U pip setuptools \ && $POETRY_VENV/bin/pip install poetry==${POETRY_VERSION} # Add `poetry` to PATH ENV PATH="${PATH}:${POETRY_VENV}/bin" Poetry Version Which version of poetry should I use? Specify the latest stable version explicitly in your installation. Forgetting to specify POETRY_VERSION will result in undeterministic builds, as the installer will always install the latest version - which may introduce breaking changes Virtual Environment (virtualenv) Do I need a virtual environment? Yes, and you need to configure it a bit. ENV POETRY_CACHE_DIR=/opt/.cache The reasons for this are somewhat off topic: By default, poetry creates a virtual environment in $HOME/.cache/pypoetry/virtualenvs to isolate the system interpreter from your application. This is the desired behavior for most development scenarios. When using a container, the $HOME variable may be changed by certain runtimes, so creating the virtual environment in an independent directory solves any reproducibility issues that may arise. Bringing It All Together To use poetry in a docker image you need to: Install your desired version of poetry Configure virtual environment location Install your dependencies Use poetry run python ... to run your application A Working Example: This is a minimal flask project managed with poetry. 
You can copy these contents to your machine to test it out (expect for poerty.lock) Project structure python-poetry-docker/ |- Dockerfile |- app.py |- pyproject.toml |- poetry.lock Dockerfile FROM python:3.10 as python-base # https://python-poetry.org/docs#ci-recommendations ENV POETRY_VERSION=1.2.0 ENV POETRY_HOME=/opt/poetry ENV POETRY_VENV=/opt/poetry-venv # Tell Poetry where to place its cache and virtual environment ENV POETRY_CACHE_DIR=/opt/.cache # Create stage for Poetry installation FROM python-base as poetry-base # Creating a virtual environment just for poetry and install it with pip RUN python3 -m venv $POETRY_VENV \ && $POETRY_VENV/bin/pip install -U pip setuptools \ && $POETRY_VENV/bin/pip install poetry==${POETRY_VERSION} # Create a new stage from the base python image FROM python-base as example-app # Copy Poetry to app image COPY --from=poetry-base ${POETRY_VENV} ${POETRY_VENV} # Add Poetry to PATH ENV PATH="${PATH}:${POETRY_VENV}/bin" WORKDIR /app # Copy Dependencies COPY poetry.lock pyproject.toml ./ # [OPTIONAL] Validate the project is properly configured RUN poetry check # Install Dependencies RUN poetry install --no-interaction --no-cache --without dev # Copy Application COPY . /app # Run Application EXPOSE 5000 CMD [ "poetry", "run", "python", "-m", "flask", "run", "--host=0.0.0.0" ] app.py from flask import Flask app = Flask(__name__) @app.route('/') def hello_world(): return 'Hello, Docker!' pyproject.toml [tool.poetry] name = "python-poetry-docker-example" version = "0.1.0" description = "" authors = ["Someone <[email protected]>"] [tool.poetry.dependencies] python = "^3.10" Flask = "^2.1.2" [tool.poetry.dev-dependencies] [build-system] requires = ["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api" poetry.lock [[package]] name = "click" version = "8.1.3" description = "Composable command line interface toolkit" category = "main" optional = false python-versions = ">=3.7" [package.dependencies] ... more lines ommitted Full contents in gist. | 31 | 59 |
72,466,218 | 2022-6-1 | https://stackoverflow.com/questions/72466218/ufeff-is-appearing-while-reading-csv-using-unicodecsv-module | I have the following code import unicodecsv CSV_PARAMS = dict(delimiter=",", quotechar='"', lineterminator='\n') unireader = unicodecsv.reader(open('sample.csv', 'rb'), **CSV_PARAMS) for line in unireader: print(line) and it prints ['\ufeff"003', 'word one"'] ['003,word two'] ['003,word three'] The CSV looks like this "003,word one" "003,word two" "003,word three" I am unable to figure out why the first row has \ufeff (which I believe is a file marker). Moreover, there is " at the beginning of the first row. The CSV file is coming from a client, so I can't dictate to them how to save the file, etc. I'm looking to fix my code so that it can handle the encoding. Note: I have already tried passing encoding='utf8' to CSV_PARAMS and it didn't solve the problem. | encoding='utf-8-sig' will remove the UTF-8-encoded BOM (byte order mark) used as a UTF-8 signature in some files: import unicodecsv with open('sample.csv','rb') as f: r = unicodecsv.reader(f, encoding='utf-8-sig') for line in r: print(line) Output: ['003,word one'] ['003,word two'] ['003,word three'] But why are you using the third-party unicodecsv with Python 3? The built-in csv module handles Unicode correctly: import csv # Note, newline='' is a documented requirement for the csv module # for reading and writing CSV files. with open('sample.csv', encoding='utf-8-sig', newline='') as f: r = csv.reader(f) for line in r: print(line) | 4 | 14 |
72,468,241 | 2022-6-1 | https://stackoverflow.com/questions/72468241/exception-closing-connection-using-sqlalchemy-with-asyncio-and-postgresql | I have an API server using Python 3.7.10. I am using the FastAPI framework with sqlalchemy, asyncio, psycopg2-binary, asyncpg along with postgresql. I am deploying this using aws elasticbeanstalk. The application seems to work fine but everytime my frontend calls an endpoint, it seems like the connection is not closing correctly. Error Jun 1 21:17:33 web: ERROR:sqlalchemy.pool.impl.AsyncAdaptedQueuePool:Exception closing connection <AdaptedConnection <asyncpg.connection.Connection object at 0x7fd8b005cb90>> Jun 1 21:17:33 web: Traceback (most recent call last): Jun 1 21:17:33 web: File "/var/app/venv/staging-LQM1lest/lib64/python3.7/site-packages/sqlalchemy/pool/base.py", line 247, in _close_connection Jun 1 21:17:33 web: self._dialect.do_close(connection) Jun 1 21:17:33 web: File "/var/app/venv/staging-LQM1lest/lib64/python3.7/site-packages/sqlalchemy/engine/default.py", line 688, in do_close Jun 1 21:17:33 web: dbapi_connection.close() Jun 1 21:17:33 web: File "/var/app/venv/staging-LQM1lest/lib64/python3.7/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 749, in close Jun 1 21:17:33 web: self.await_(self._connection.close()) Jun 1 21:17:33 web: File "/var/app/venv/staging-LQM1lest/lib64/python3.7/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 68, in await_only Jun 1 21:17:33 web: return current.driver.switch(awaitable) Jun 1 21:17:33 web: File "/var/app/venv/staging-LQM1lest/lib64/python3.7/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 121, in greenlet_spawn Jun 1 21:17:33 web: value = await result Jun 1 21:17:33 web: File "/var/app/venv/staging-LQM1lest/lib64/python3.7/site-packages/asyncpg/connection.py", line 1334, in close Jun 1 21:17:33 web: await self._protocol.close(timeout) Jun 1 21:17:33 web: File "asyncpg/protocol/protocol.pyx", line 581, in close Jun 1 21:17:33 web: concurrent.futures._base.CancelledError Here is my setup for the engine and session: from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import sessionmaker from app.model.base import CustomBase from app.core.config import SQLALCHEMY_DATABASE_URI engine = create_async_engine(SQLALCHEMY_DATABASE_URI) SessionLocal = sessionmaker( autocommit=False, autoflush=False, class_=AsyncSession, bind=engine, expire_on_commit=False, ) I am using FastAPI's dependency injection to get the session with the following: async def get_db() -> AsyncSession: async with SessionLocal() as session: yield session This error only shows up in my deployment and not my local environment, and seems to only when using sqlalchemy asynchronously with asyncio. Thanks for the help! | My problem was fixed by using NullPool class. from sqlalchemy.pool import NullPool from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import sessionmaker from app.model.base import CustomBase from app.core.config import SQLALCHEMY_DATABASE_URI engine = create_async_engine( SQLALCHEMY_DATABASE_URI, pool_pre_ping=True, poolclass=NullPool ) SessionLocal = sessionmaker( autocommit=False, autoflush=False, class_=AsyncSession, bind=engine, expire_on_commit=False, ) | 5 | 1 |
72,523,108 | 2022-6-6 | https://stackoverflow.com/questions/72523108/decompressing-gzip-chunks-of-response-from-http-client-call | I have the following code that I am using to try to chunk response from a http.client.HTTPSConnection get request to an API (please note that the response is gzip encoded: connection = http.client.HTTPSConnection(api, context = ssl._create_unverified_context()) connection.request('GET', api_url, headers = auth) response = connection.getresponse() while chunk := response.read(20): data = gzip.decompress(chunk) data = json.loads(chunk) print(data) This always gives out an error that it is not a gzipped file (b'\xe5\x9d'). Not sure how I can chunk data and still achieve what I am trying to do here. Basically, I am chunking so that I don't have to load the entire response in memory. Please note I can't use any other libraries like requests, urllib etc. | The most probable reason for that is, the response your received is indeed not a gzipped file. I notice that in your code, you pass a variable called auth. Typically, a server won't send you a compressed response if you don't specify in the request headers that you can accept it. If there is only auth-related keys in your headers like your variable name suggests, you won't receive a gzipped response. First, make sure you have 'Accept-Encoding': 'gzip' in your headers. Going forward, you will face another problem: Basically, I am chunking so that I don't have to load the entire response in memory. gzip.decompress will expect a complete file, so you would need to reconstruct it and load it entirely in memory before doing that, which would undermine the whole point of chunking the response. Trying to decompress a part of a gzip with gzip.decompress will most likely give you an EOFError saying something like Compressed file ended before the end-of-stream marker was reached. I don't know if you can manage that directly with the gzip library, but I know how to do it with zlib. You will also need to convert your chunk to a file-like object, you can do that with io.BytesIO. I see you have very strong constraints on libraries, but zlib and io are part of the python default, so hopefully you have them available. Here is a rework of your code that should help you going on: import http import ssl import gzip import zlib from io import BytesIO # your variables here api = 'your_api_host' api_url = 'your_api_endpoint' auth = {'AuhtKeys': 'auth_values'} # add the gzip header auth['Accept-Encoding'] = 'gzip' # prepare decompressing object decompressor = zlib.decompressobj(16 + zlib.MAX_WBITS) connection = http.client.HTTPSConnection(api, context = ssl._create_unverified_context()) connection.request('GET', api_url, headers = auth) response = connection.getresponse() while chunk := response.read(20): data = decompressor.decompress(BytesIO(chunk).read()) print(data) | 5 | 0 |
72,472,220 | 2022-6-2 | https://stackoverflow.com/questions/72472220/dataclass-inheritance-fields-without-default-values-cannot-appear-after-fields | Context I created two data classes to handle table metadata. TableMetadata apply to any kind of tables, while RestTableMetadata contains information relevant for data extracted from REST apis @dataclass class TableMetadata: """ - entity: business entity represented by the table - origin: path / query / url from which data withdrawn - id: field to be used as ID (unique) - historicity: full, delta - upload: should the table be uploaded """ entity: str origin: str view: str id: str = None historicity: str = "full" upload: bool = True columns: list = field(default_factory=list) @dataclass class RestTableMetadata(TableMetadata): """ - method: HTTP method to be used - payloadpath: portion of the response payload to use to build the dataframe """ method: str payloadpath: str = None Problem Because of inheritance, method (without default values) comes after columns, resulting in the following Pylance error: Fields without default values cannot appear after fields with default values I'm looking for a way to fix it without overriding __init__ (if there is such a way). I also noticed a method called __init_subclass__ (This method is called when a class is subclassed.) that might affect how RestTableMetadata.__init__ and other subclasses is generated. | Here is a working solution for python > 3.10 @dataclass(kw_only=True) class TableMetadata: """ - entity: business entity represented by the table - origin: path / query / url from which data withdrawn - id: field to be used as ID (unique) - historicity: full, delta - upload: should the table be uploaded """ entity: str origin: str view: str id: str = None historicity: str = "full" upload: bool = True columns: list = field(default_factory=list) @dataclass(kw_only=True) class RestTableMetadata(TableMetadata): """ - method: HTTP method to be used - payloadpath: portion of the response payload to use to build the dataframe """ method: str payloadpath: str = None | 13 | 10 |
72,507,137 | 2022-6-5 | https://stackoverflow.com/questions/72507137/get-data-of-a-document-in-firestore-by-giving-a-field-value-belonged-to-that-doc | I'm new to Python. I'm using a Firebase Firestore database for this project. After entering the Admission_No, I want to retrieve all the data in the relevant document, such as name, grade, and phone. I tried to write a program for it, but I failed. Please help me to complete my project. Thank you all. The code might be full of mistakes. Please don't mind it and fix them for me. Sample Data Here is my Code .py import firebase_admin from firebase_admin import credentials from firebase_admin import firestore cred = credentials.Certificate("cred.json") firebase_admin.initialize_app(cred) db = firestore.client() data = { 'Admission_No': input('enter ad_no : ') } query_ad = db.collection(u'Users').where(u"Admission_No", u"==", data["Admission_No"]).get() get_data = db.collection(u'Users').where(u"Name", u"==", data["Name"]).get() if query_ad: print('Exist') print(get_data) else: print("Doesn't Exist") | In this case, I tried many ways. But finally, I realized we should provide the correct document ID to retrieve all fields of data from Firestore. Getting data from Firestore - the method given in the Firestore documentation: doc_ref = db.collection(u'cities').document(u'SF') doc = doc_ref.get() if doc.exists: print(f'Document data: {doc.to_dict()}') else: print(u'No such document!') In my case I tried this code and it succeeded. u_name = input('Enter User Name : ') ad_no = input('Enter Admission Number : ') doc_name = ad_no + " - " + u_name doc_ref = self.db.collection(u'Users').document(doc_name) doc = doc_ref.get() if doc.exists: Full_Name = str(doc.to_dict()['Full_Name']) Admission_No = str(doc.to_dict()['Admission_No']) Grade = str(doc.to_dict()['Grade']) Phone_Number = str(doc.to_dict()['Phone_Number']) print(Full_Name, Admission_No, Grade, Phone_Number) else: print('No such document!') Thank you to everyone who helped me with this issue. (@RJC) | 4 | 2 |
72,522,003 | 2022-6-6 | https://stackoverflow.com/questions/72522003/xgbregressor-with-weights-and-base-margin-out-of-sample-validation-possible | I have an old linear model which I wish to improve using XGBoost. I have the predictions from the old model, which I wish to use as a base margin. Also, due to the nature of what I'm modeling, I need to use weights. My old glm is a poisson regression with formula number_of_defaults/exposure ~ param_1 + param_2 and weights set to exposure (same as denominator in response variable). When training the new XGBoost model on data, I do this: xgb_model = xgb.XGBRegressor(n_estimators=25, max_depth=100, max_leaves=100, learning_rate=0.01, n_jobs=4, eval_metric="poisson-nloglik", nrounds=50) model = xgb_model.fit(X=X_train, y=y_train, sample_weight=_WEIGHT, base_margin=_BASE_MARGIN) , where _WEIGHT and _BASE_MARGIN are the weights and predictions (popped out of X_train). But how do I do cross validation or out of sample analysis when I need to specify weights and base margin? As far as I see I can use sklearn and GridSearchCV, but then I would need to specify weights and base margin in XGBRegressor() (instead of in fit() as above). The equivalent of base_margin in XGBRegressor() is the argument base_score, but there is no argument for weight. Also, I could potentially forget about doing cross-validation, and just use a training and test dataset, and I would then use eval_set argument in XGBRegressor(), but if I did that there is no way of specifying what is weight and what is base margin in the different sets. Any guidance in the right direction is much appreciated! | You can use cross_val_predict with fit_params argument, or GridSearchCV.fit with **fit_params. Here is a working proof of concept import xgboost as xgb from sklearn import datasets from sklearn.model_selection import cross_val_predict, GridSearchCV import numpy as np # Sample dataset diabetes = datasets.load_diabetes() X = diabetes.data[:150] y = diabetes.target[:150] xgb_model = xgb.XGBRegressor(n_estimators=5) fit_params = dict(sample_weight=np.abs(X[:, 0]), base_margin=np.abs(X[:, 1])) # Simple fit xgb_model.fit(X, y, **fit_params) # cross_val_predict y_pred = cross_val_predict(xgb_model, X, y, cv=3, fit_params=fit_params) print(y_pred.shape, y.shape) # grid search grid = GridSearchCV(xgb_model, param_grid={"n_estimators": [5, 10, 15]}) grid.fit(X, y, **fit_params) You can see what happen in the code source: here, here and here. The last link is where fit_params get indexing following cross validation splits. | 6 | 3 |
72,499,414 | 2022-6-4 | https://stackoverflow.com/questions/72499414/i-got-an-error-about-error-cant-find-libdevice-directory-cuda-dir-nvvm-libd | Windows Version: Windows 10 Pro 21H2 19044.1706 GPU: rtx2070 import tensorflow as tf import torch print(torch.__version__) #1.10.1+cu113 print(torch.version.cuda) #11.3 print(tf.__version__) #2.9.1 When I run python .\object_detection\builders\model_builder_tf2_test.py I get a 'Ran 24 tests in 18.279s OK (skipped=1)' result. But when I want to train my model, I use feature_extractor { type: 'faster_rcnn_inception_resnet_v2_keras' } in my pipeline_config, and I run python .\object_detection\model_main_tf2.py --logtostderr --pipeline_config_path=LOCATION_OF_MY_PIPECONFIG --model_dir=LOCATION_OF_MY_MODEL_DIR And then I get the following error. In my system environment variables, 'CUDA_DIR' is defined and can be accessed. | I had the same problem and just fixed it. The library can't find the folder even if you set "CUDA_DIR", because it's not using that variable or any other I tried. This post is helpful in understanding the issue. The only solution I was able to find is just copying the required files. Steps for a quick fix: Find where your CUDA nvvm is installed (for me it is "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6"). Find the working directory for your script (the environment or the directory you are running the script in). Copy the entire nvvm folder into the working directory and your script should work. This is not a great solution, but until someone else posts an answer you can at least run your code. | 6 | 12 |
72,476,094 | 2022-6-2 | https://stackoverflow.com/questions/72476094/pydantic-error-wrappers-validationerror-11-validation-errors-for-for-trip-type | Im getting this error with my pydantic schema, but oddly it is generating the object correctly, and sending it to the SQLAlchemy models, then it suddenly throws error for all elements in the model. response -> id field required (type=value_error.missing) response -> date field required (type=value_error.missing) response -> time field required (type=value_error.missing) response -> price field required (type=value_error.missing) response -> distance field required (type=value_error.missing) response -> origin_id field required (type=value_error.missing) response -> destination_id field required (type=value_error.missing) response -> driver_id field required (type=value_error.missing) response -> passenger_id field required (type=value_error.missing) response -> vehicle_id field required (type=value_error.missing) response -> status field required (type=value_error.missing) i must say that all the fields should have values. And the error trace do not references any part of my code so i dont even know where to debug. Im a noob in SQLAlchemy/pydantic here are some parts of the code class Trip(BaseModel): id: int date: str time: str price: float distance: float origin_id: int destination_id: int driver_id: int passenger_id: int vehicle_id: int status: Status class Config: orm_mode = True class TripDB(Base): __tablename__ = 'trip' __table_args__ = {'extend_existing': True} id = Column(Integer, primary_key=True, index=True) date = Column(DateTime, nullable=False) time = Column(String(64), nullable=False) price = Column(Float, nullable=False) distance = Column(Float, nullable=False) status = Column(String(64), nullable=False) origin_id = Column( Integer, ForeignKey('places.id'), nullable=False) destination_id = Column( Integer, ForeignKey('places.id'), nullable=False) origin = relationship("PlaceDB", foreign_keys=[origin_id]) destination = relationship("PlaceDB", foreign_keys=[destination_id]) driver_id = Column( Integer, ForeignKey('driver.id'), nullable=False) vehicle_id = Column( Integer, ForeignKey('vehicle.id'), nullable=False) passenger_id = Column( Integer, ForeignKey('passenger.id'), nullable=False) def create_trip(trip: Trip, db: Session): origin = db.query(models.PlaceDB).filter(models.PlaceDB.id == trip.origin_id).first() destination = db.query(models.PlaceDB).filter(models.PlaceDB.id == trip.destination_id).first() db_trip = TripDB( id=(trip.id or None), date=trip.date or None, time=trip.time or None, price=trip.price or None, distance=trip.distance or None, origin_id=trip.origin_id or None, destination_id=(trip.destination_id or None), status=trip.status or None, driver_id=trip.driver_id or None, passenger_id=trip.passenger_id or None, vehicle_id=trip.vehicle_id or None, origin=origin, destination=destination) try: db.add(db_trip) db.commit() db.refresh(db_trip) return db_trip except: return "Somethig went wrong" | It seems like a bug on the pydantic model, it happened to me as well, and i was not able to fix it, but indeed if you just skip the type check in the route it works fine | 7 | 3 |
72,503,309 | 2022-6-4 | https://stackoverflow.com/questions/72503309/save-a-bert-model-with-custom-forward-function-and-heads-on-hugginface | I have created my own BertClassifier model, starting from a pretrained and then added my own classification heads composed by different layers. After the fine-tuning, I want to save the model using model.save_pretrained() but when I print it upload it from pretrained i don't see my classifier head. The code is the following. How can I save the all structure on my model and make it full accessible with AutoModel.from_preatrained('folder_path') ? . Thanks! class BertClassifier(PreTrainedModel): """Bert Model for Classification Tasks.""" config_class = AutoConfig def __init__(self,config, freeze_bert=True): #tuning only the head """ @param bert: a BertModel object @param classifier: a torch.nn.Module classifier @param freeze_bert (bool): Set `False` to fine-tune the BERT model """ #super(BertClassifier, self).__init__() super().__init__(config) # Instantiate BERT model # Specify hidden size of BERT, hidden size of our classifier, and number of labels self.D_in = 1024 #hidden size of Bert self.H = 512 self.D_out = 2 # Instantiate the classifier head with some one-layer feed-forward classifier self.classifier = nn.Sequential( nn.Linear(self.D_in, 512), nn.Tanh(), nn.Linear(512, self.D_out), nn.Tanh() ) def forward(self, input_ids, attention_mask): # Feed input to BERT outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask) # Extract the last hidden state of the token `[CLS]` for classification task last_hidden_state_cls = outputs[0][:, 0, :] # Feed input to classifier to compute logits logits = self.classifier(last_hidden_state_cls) return logits configuration=AutoConfig.from_pretrained('Rostlab/prot_bert_bfd') model = BertClassifier(config=configuration,freeze_bert=False) Saving the model after fine-tuning model.save_pretrained('path') Loading the fine-tuned model model = AutoModel.from_pretrained('path') Printing the model after loading shows I have as the last layer the following and missing my 2 linear layer: (output): BertOutput( (dense): Linear(in_features=4096, out_features=1024, bias=True) (LayerNorm): LayerNorm((1024,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.0, inplace=False) (adapters): ModuleDict() (adapter_fusion_layer): ModuleDict() ) ) ) ) (pooler): BertPooler( (dense): Linear(in_features=1024, out_features=1024, bias=True) (activation): Tanh() ) (prefix_tuning): PrefixTuningPool( (prefix_tunings): ModuleDict() ) ) | Maybe something is wrong with the config_class attribute inside your BertClassifier class. According to the documentation you need to create an additional config class which inherits form PretrainedConfig and initialises the model_type attribute with the name of your custom model. The BertClassifier's config_class has to be consistent with your custom config class type. Afterwards you can register your config and model with the following calls: AutoConfig.register('CustomModelName', CustomModelConfigClass) AutoModel.register(CustomModelConfigClass, CustomModelClass) And load your finetuned model with AutoModel.from_pretrained('YourCustomModelName') An incomplete example based on your code could look like this: class BertClassifierConfig(PretrainedConfig): model_type="BertClassifier" class BertClassifier(PreTrainedModel): config_class = BertClassifierConfig # ... 
configuration = BertClassifierConfig() bert_classifier = BertClassifier(configuration) # do your finetuning and save your custom model bert_classifier.save_pretrained("CustomModels/BertClassifier") # register your config and your model AutoConfig.register("BertClassifier", BertClassifierConfig) AutoModel.register(BertClassifierConfig, BertClassifier) # load your model with AutoModel bert_classifier_model = AutoModel.from_pretrained("CustomModels/BertClassifier") Printing the model output should be similiar to this: (pooler): BertPooler( (dense): Linear(in_features=768, out_features=768, bias=True) (activation): Tanh() ) ) (classifier): Sequential( (0): Linear(in_features=1024, out_features=512, bias=True) (1): Tanh() (2): Linear(in_features=512, out_features=2, bias=True) (3): Tanh() (4): Linear(in_features=2, out_features=512, bias=True) (5): Tanh() ) Hope this helps. https://huggingface.co/docs/transformers/custom_models#registering-a-model-with-custom-code-to-the-auto-classes | 5 | 3 |
72,535,079 | 2022-6-7 | https://stackoverflow.com/questions/72535079/python-easiest-way-to-insert-a-dictionary-into-a-database-table | I have a dictionary with keys and values like: my_dict = {'a':33, 'b': 'something', 'c': GETDATE(), 'd': 55} Assume column names in the SQL table are also named like the keys of the dict, i.e. "a,b,c,d". The actual dictionary is 20+ key:value pairs. Code I have used pyodbc.connect to create a cursor which I could use to execute an SQL INSERT statement: for k in my_dict.keys(): cursor.execute( ''' INSERT INTO TABLEabc (%s) VALUES (%s) ''' % (k, my_dict[k]) ) This seems inefficient though because it's a new SQL operation each time. What is the easiest way to insert the values using a loop? How could I write it so that it just makes one insert with all the values? | If you're using pyodbc then this might work: columns = {row.column_name for row in cursor.columns(table='TABLEabc')} safe_dict = {key: val for key, val in my_dict.items() if key in columns} # generate a parameterised query for the keys in our dict query = "INSERT INTO TABLEabc ({columns}) VALUES ({value_placeholders})".format( columns=", ".join(safe_dict.keys()), value_placeholders=", ".join(["?"] * len(safe_dict)), ) cursor.execute(query, list(safe_dict.values())) It is intended to be safe from SQL injection because: we filter for only keys which are actual column names in the db we use pyodbc cursor execute params, so the values will be escaped properly Where it possibly won't work: if any of the column names need to be quoted and escaped, this won't happen automatically so it will fail Quoting/escaping is db-specific so we would have to check the rules for our actual db and apply that to the dict keys that we format into the query. (or find some way to get pyodbc to do that for us, not sure if possible) If you trust your my_dict not to contain malicious code then you can simplify to just: query = "INSERT INTO TABLEabc ({columns}) VALUES ({value_placeholders})".format( columns=", ".join(my_dict.keys()), value_placeholders=", ".join(["?"] * len(my_dict)), ) cursor.execute(query, list(my_dict.values())) | 5 | 3 |
72,534,575 | 2022-6-7 | https://stackoverflow.com/questions/72534575/fastapi-fileresponse-cannot-find-file-in-tempdirectory | I'm trying to write an endpoint that just accepts an image and attempts to convert it into another format, by running a command on the system. Then I return the converted file. It's slow and oh-so-simple, and I don't have to store files anywhere, except temporarily. I'd like all the file-writing to happen in a temporary directory, so it gets cleaned up. The route works fine if the output file is not in the temporary directory. But if I try to put the output file in the temporary directory, the FileResponse can't find it, and requests fail. RuntimeError: File at path /tmp/tmpp5x_p4n9/out.jpg does not exist. Is there something going on related to the asynchronous nature of FastAPI, such that FileResponse can't wait for the subprocess to create the file it's making? Can I make it wait? (Removing async from the route does not help.) @app.post("/heic") async def heic(img: UploadFile): with TemporaryDirectory() as dir: inname = os.path.join(dir, "img.heic") f = open(inname,"wb") f.write(img.file.read()) f.flush() # setting outname in the temp dir fails! # outname = os.path.join(dir, 'out.jpg') outname = os.path.join('out.jpg') cmd = f"oiiotool {f.name} -o {outname}" process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE) process.wait() return FileResponse(outname, headers={'Content-Disposition':'attachment; filename=response.csv'}) Thank you for any insights! | According to the documentation of TemporaryDirectory(): This class securely creates a temporary directory using the same rules as mkdtemp(). The resulting object can be used as a context manager (see Examples). On completion of the context or destruction of the temporary directory object, the newly created temporary directory and all its contents are removed from the filesystem. It seems that the directory and contents are already being released before the FastAPI request is returned. However, you can use dependency injection in FastAPI, so why not inject a temporary directory? First define the dependency: async def get_temp_dir(): dir = TemporaryDirectory() try: yield dir.name finally: del dir And add the dependency to your endpoint: @app.post("/heic") async def heic(imgfile: UploadFile = File(...), dir=Depends(get_temp_dir)): inname = os.path.join(dir, "img.heic") f = open(inname,"wb") f.write(imgfile.file.read()) f.flush() outname = os.path.join(dir, 'out.jpg') cmd = f"oiiotool {f.name} -o {outname}" process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE) process.wait() return FileResponse(inname, headers={'Content-Disposition':'attachment; filename=response.csv'}) I've tested it with returning the incoming file and that works. | 7 | 10 |
72,467,711 | 2022-6-1 | https://stackoverflow.com/questions/72467711/is-memory-supposed-to-be-this-high-during-model-fit-using-a-generator | The tensorflow versions that I can still recreate this behavior are: 2.7.0, 2.7.3, 2.8.0, 2.9.0. Actually, these are all the versions I've tried; I wasn't able to resolve the issue in any version. OS: Ubuntu 20 GPU: RTX 2060 RAM: 16GB I am trying to feed my data to a model using a generator: class DataGen(tf.keras.utils.Sequence): def __init__(self, indices, batch_size): self.X = X self.y = y self.indices = indices self.batch_size = batch_size def __getitem__(self, index): X_batch = self.X[self.indices][ index * self.batch_size : (index + 1) * self.batch_size ] y_batch = self.y[self.indices][ index * self.batch_size : (index + 1) * self.batch_size ] return X_batch, y_batch def __len__(self): return len(self.y[self.indices]) // self.batch_size train_gen = DataGen(train_indices, 32) val_gen = DataGen(val_indices, 32) test_gen = DataGen(test_indices, 32) where X, y is my dataset loaded from a .h5 file using h5py, and train_indices, val_indices, test_indices are the indices for each set that will be used on X and y. I am creating the model and feeding the data using: # setup model base_model = tf.keras.applications.MobileNetV2(input_shape=(128, 128, 3), include_top=False) base_model.trainable = False mobilenet1 = Sequential([ base_model, Flatten(), Dense(27, activation='softmax') ]) mobilenet1.compile(optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.CategoricalCrossentropy(), metrics=['accuracy']) # model training hist_mobilenet = mobilenet1.fit(train_gen, validation_data=val_gen, epochs=1) The memory right before training is 8%, but the moment training starts it begins getting values from 30% up to 60%. Since I am using a generator and loading the data in small parts of 32 observations at a time, it seems odd to me that the memory climbs this high. Also, even when training stops, memory stays above 30%. I checked all global variables but none of them has such a large size. If I start another training session memory starts having even higher usage values and eventually jupyter notebook kernel dies. Is something wrong with my implementation or this is normal? Edit 1: some additional info. Whenever the training stops, memory usage drops a little, but I can decrease it even more by calling garbage collector. However, I cannot bring it back down to 8%, even when I delete the history created by fit the x and y batches' size sum up to 48 bytes; this outrages me! how come loading 48 of data at a time is causing the memory usage to increase that much? Supposedly I am using HDF5 dataset to be able to handle the data without overloading RAM. The next thing that comes to my mind is that fit creates some variables, but it doesn't make sense that it needs so many GBs of memory to store them | How to minimize RAM usage From the very helpful comments and answers of our fellow friends, I came to this conclusion: First, we have to save the data to an HDF5 file, so we would not have to load the whole dataset in memory. import h5py as h5 import gc file = h5.File('data.h5', 'r') X = file['X'] y = file['y'] gc.collect() I am using garbage collector just to be safe. Then, we would not have to pass the data to the generator, as the X and y will be same for training, validation and testing. 
In order to differentiate between the different data, we will use index maps # split data for validation and testing val_split, test_split = 0.2, 0.1 train_indices = np.arange(len(X))[:-int(len(X) * (val_split + test_split))] val_indices = np.arange(len(X))[-int(len(X) * (val_split + test_split)) : -int(len(X) * test_split)] test_indices = np.arange(len(X))[-int(len(X) * test_split):] class DataGen(tf.keras.utils.Sequence): def __init__(self, index_map, batch_size): self.X = X self.y = y self.index_map = index_map self.batch_size = batch_size def __getitem__(self, index): X_batch = self.X[self.index_map[ index * self.batch_size : (index + 1) * self.batch_size ]] y_batch = self.y[self.index_map[ index * self.batch_size : (index + 1) * self.batch_size ]] return X_batch, y_batch def __len__(self): return len(self.index_map) // self.batch_size train_gen = DataGen(train_indices, 32) val_gen = DataGen(val_indices, 32) test_gen = DataGen(test_indices, 32) Last thing to notice is how I implemented the the data fetching inside __getitem__. Correct solution: X_batch = self.X[self.index_map[ index * self.batch_size : (index + 1) * self.batch_size ]] Wrong solution: X_batch = self.X[self.index_map][ index * self.batch_size : (index + 1) * self.batch_size ] same for y Notice the difference? In the wrong solution I am loading the whole dataset (training, validation or testing) in memory! Instead, in the correct solution I am only loading the batch meant to feed in the fit method. With this setup, I managed to raise RAM only to 2.88 GB, which is pretty cool! | 5 | 3 |
72,511,979 | 2022-6-6 | https://stackoverflow.com/questions/72511979/valueerror-install-dbtypes-to-use-this-function | I'm using BigQuery for the first time. client.list_rows(table, max_results = 5).to_dataframe(); Whenever I use to_dataframe() it raises this error: ValueError: Please install the 'db-dtypes' package to use this function. I found this similar problem (almost exactly the same), but I can't understand how to implement their proposed solution. | I was able to replicate your use case as shown below. Easiest solution is to pip install db-dtypes as mentioned by @MattDMo. Or you can specify previous version of google-cloud-bigquery by creating a requirements.txt with below contents: google-cloud-bigquery==2.34.3 And then pip install by using command as shown below: pip install -r /path/to/requirements.txt Output of my sample replication: | 32 | 20 |
72,505,627 | 2022-6-5 | https://stackoverflow.com/questions/72505627/mediapipe-not-running-even-after-installation-macbook-m1 | I am trying to get MediaPipe Pose estimation running on VS code on my MacBook M1. I installed it using pip install mediapipe-silicon and it installed successfully. I am running the generic MediaPipe code without modifications: import cv2 import mediapipe as mp import numpy as np mp_drawing = mp.solutions.drawing_utils mp_drawing_styles = mp.solutions.drawing_styles mp_pose = mp.solutions.pose cap = cv2.VideoCapture(0) with mp_pose.Pose( min_detection_confidence=0.5, min_tracking_confidence=0.5) as pose: while cap.isOpened(): success, image = cap.read() if not success: print("Ignoring empty camera frame.") # If loading a video, use 'break' instead of 'continue'. continue # To improve performance, optionally mark the image as not writeable to # pass by reference. image.flags.writeable = False image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) results = pose.process(image) # Draw the pose annotation on the image. image.flags.writeable = True image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) mp_drawing.draw_landmarks( image, results.pose_landmarks, mp_pose.POSE_CONNECTIONS, landmark_drawing_spec=mp_drawing_styles.get_default_pose_landmarks_style()) # Flip the image horizontally for a selfie-view display. cv2.imshow('MediaPipe Pose', cv2.flip(image, 1)) if cv2.waitKey(5) & 0xFF == 27: break cap.release() When I run this code, the webcam light turns on for 2 seconds and then off. I do not get a pop up window to see the webcam. I also get this on the terminal: objc[90068]: Class CaptureDelegate is implemented in both /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/cv2/cv2.cpython-310-darwin.so (0x10abe6458) and /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/mediapipe/.dylibs/libopencv_videoio.3.4.16.dylib (0x10d4cc860). One of the two will be used. Which one is undefined. objc[90068]: Class CVWindow is implemented in both /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/cv2/cv2.cpython-310-darwin.so (0x10abe64a8) and /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/mediapipe/.dylibs/libopencv_highgui.3.4.16.dylib (0x10c430a68). One of the two will be used. Which one is undefined. objc[90068]: Class CVView is implemented in both /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/cv2/cv2.cpython-310-darwin.so (0x10abe64d0) and /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/mediapipe/.dylibs/libopencv_highgui.3.4.16.dylib (0x10c430a90). One of the two will be used. Which one is undefined. objc[90068]: Class CVSlider is implemented in both /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/cv2/cv2.cpython-310-darwin.so (0x10abe64f8) and /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/mediapipe/.dylibs/libopencv_highgui.3.4.16.dylib (0x10c430ab8). One of the two will be used. Which one is undefined. [libprotobuf ERROR external/com_google_protobuf/src/google/protobuf/text_format.cc:335] Error parsing text-format mediapipe.CalculatorGraphConfig: 22:9: Message type "mediapipe.CalculatorOptions" has no field named "ext". 
Traceback (most recent call last): File "/Users/jana/Documents/GitHub/AS2-MLC-Project/multi_pose.py", line 7, in <module> with mp_pose.Pose( File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/mediapipe/python/solutions/pose.py", line 146, in __init__ super().__init__( File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/mediapipe/python/solution_base.py", line 247, in __init__ self._graph = calculator_graph.CalculatorGraph( RuntimeError: Failed to parse: node { calculator: "ImagePropertiesCalculator" input_stream: "IMAGE_CPU:image" output_stream: "SIZE:image_size" } node { calculator: "PreviousLoopbackCalculator" input_stream: "MAIN:image" input_stream: "LOOP:pose_rect_from_landmarks" output_stream: "PREV_LOOP:prev_pose_rect_from_landmarks" input_stream_info { tag_index: "LOOP" back_edge: true } } node { calculator: "GateCalculator" input_stream: "prev_pose_rect_from_landmarks" output_stream: "gated_prev_pose_rect_from_landmarks" input_side_packet: "ALLOW:use_prev_landmarks" options { ext { allow: true } } } node { calculator: "PacketPresenceCalculator" input_stream: "PACKET:gated_prev_pose_rect_from_landmarks" output_stream: "PRESENCE:prev_pose_rect_from_landmarks_is_present" } node { calculator: "GateCalculator" input_stream: "image" input_stream: "image_size" input_stream: "DISALLOW:prev_pose_rect_from_landmarks_is_present" output_stream: "image_for_pose_detection" output_stream: "image_size_for_pose_detection" options { ext { empty_packets_as_allow: true } } } node { name: "posedetectioncpu__ImageToTensorCalculator" calculator: "ImageToTensorCalculator" input_stream: "IMAGE:image_for_pose_detection" output_stream: "TENSORS:posedetectioncpu__input_tensors" output_stream: "LETTERBOX_PADDING:posedetectioncpu__letterbox_padding" options { ext { output_tensor_width: 224 output_tensor_height: 224 keep_aspect_ratio: true output_tensor_float_range { min: -1 max: 1 } gpu_origin: TOP_LEFT border_mode: BORDER_ZERO } } } node { name: "posedetectioncpu__SsdAnchorsCalculator" calculator: "SsdAnchorsCalculator" output_side_packet: "posedetectioncpu__anchors" options { ext { input_size_width: 224 input_size_height: 224 min_scale: 0.1484375 max_scale: 0.75 anchor_offset_x: 0.5 anchor_offset_y: 0.5 num_layers: 5 strides: 8 strides: 16 strides: 32 strides: 32 strides: 32 aspect_ratios: 1 fixed_anchor_size: true } } } node { name: "poselandmarkbyroicpu__ImagePropertiesCalculator" calculator: "ImagePropertiesCalculator" input_stream: "IMAGE_CPU:image" output_stream: "SIZE:poselandmarkbyroicpu__image_size" } node { name: "posedetectioncpu__inferencecalculator__posedetectioncpu__InferenceCalculator" calculator: "InferenceCalculatorCpu" input_stream: "TENSORS:posedetectioncpu__input_tensors" output_stream: "TENSORS:posedetectioncpu__detection_tensors" options { ext { model_path: "mediapipe/modules/pose_detection/pose_detection.tflite" delegate { xnnpack { } } } } } node { name: "posedetectioncpu__TensorsToDetectionsCalculator" calculator: "TensorsToDetectionsCalculator" input_stream: "TENSORS:posedetectioncpu__detection_tensors" output_stream: "DETECTIONS:posedetectioncpu__unfiltered_detections" input_side_packet: "ANCHORS:posedetectioncpu__anchors" options { ext { num_classes: 1 num_boxes: 2254 num_coords: 12 keypoint_coord_offset: 4 num_keypoints: 4 num_values_per_keypoint: 2 box_coord_offset: 0 x_scale: 224 y_scale: 224 w_scale: 224 h_scale: 224 reverse_output_order: true sigmoid_score: true score_clipping_thresh: 100 min_score_thresh: 0.5 
} } } node { name: "posedetectioncpu__NonMaxSuppressionCalculator" calculator: "NonMaxSuppressionCalculator" input_stream: "posedetectioncpu__unfiltered_detections" output_stream: "posedetectioncpu__filtered_detections" options { ext { min_suppression_threshold: 0.3 overlap_type: INTERSECTION_OVER_UNION algorithm: WEIGHTED } } } node { name: "posedetectioncpu__DetectionLetterboxRemovalCalculator" calculator: "DetectionLetterboxRemovalCalculator" input_stream: "DETECTIONS:posedetectioncpu__filtered_detections" input_stream: "LETTERBOX_PADDING:posedetectioncpu__letterbox_padding" output_stream: "DETECTIONS:pose_detections" } node { calculator: "SplitDetectionVectorCalculator" input_stream: "pose_detections" output_stream: "pose_detection" options { ext { ranges { begin: 0 end: 1 } element_only: true } } } node { name: "posedetectiontoroi__AlignmentPointsRectsCalculator" calculator: "AlignmentPointsRectsCalculator" input_stream: "DETECTION:pose_detection" input_stream: "IMAGE_SIZE:image_size_for_pose_detection" output_stream: "NORM_RECT:posedetectiontoroi__raw_roi" options { ext { rotation_vector_start_keypoint_index: 0 rotation_vector_end_keypoint_index: 1 rotation_vector_target_angle_degrees: 90 } } } node { name: "posedetectiontoroi__RectTransformationCalculator" calculator: "RectTransformationCalculator" input_stream: "NORM_RECT:posedetectiontoroi__raw_roi" input_stream: "IMAGE_SIZE:image_size_for_pose_detection" output_stream: "pose_rect_from_detection" options { ext { scale_x: 1.25 scale_y: 1.25 square_long: true } } } node { calculator: "MergeCalculator" input_stream: "pose_rect_from_detection" input_stream: "gated_prev_pose_rect_from_landmarks" output_stream: "pose_rect" } node { name: "poselandmarkbyroicpu__ImageToTensorCalculator" calculator: "ImageToTensorCalculator" input_stream: "IMAGE:image" input_stream: "NORM_RECT:pose_rect" output_stream: "TENSORS:poselandmarkbyroicpu__input_tensors" output_stream: "LETTERBOX_PADDING:poselandmarkbyroicpu__letterbox_padding" output_stream: "MATRIX:poselandmarkbyroicpu__transformation_matrix" options { ext { output_tensor_width: 256 output_tensor_height: 256 keep_aspect_ratio: true output_tensor_float_range { min: 0 max: 1 } } } } node { name: "poselandmarkbyroicpu__poselandmarksandsegmentationinverseprojection__InverseMatrixCalculator" calculator: "InverseMatrixCalculator" input_stream: "MATRIX:poselandmarkbyroicpu__transformation_matrix" output_stream: "MATRIX:poselandmarkbyroicpu__poselandmarksandsegmentationinverseprojection__inverse_transformation_matrix" } node { name: "poselandmarkbyroicpu__poselandmarkmodelloader__switchcontainer__SwitchDemuxCalculator" calculator: "SwitchDemuxCalculator" input_side_packet: "SELECT:model_complexity" options { ext { select: 1 } } input_stream_handler { input_stream_handler: "ImmediateInputStreamHandler" } } node { name: "poselandmarkbyroicpu__poselandmarkmodelloader__switchcontainer__ConstantSidePacketCalculator_1" calculator: "ConstantSidePacketCalculator" output_side_packet: "PACKET:poselandmarkbyroicpu__poselandmarkmodelloader__switchcontainer__c0__poselandmarkbyroicpu__poselandmarkmodelloader__model_path" options { ext { packet { string_value: "mediapipe/modules/pose_landmark/pose_landmark_lite.tflite" } } } } node { name: "poselandmarkbyroicpu__poselandmarkmodelloader__switchcontainer__ConstantSidePacketCalculator_2" calculator: "ConstantSidePacketCalculator" output_side_packet: 
"PACKET:poselandmarkbyroicpu__poselandmarkmodelloader__switchcontainer__c1__poselandmarkbyroicpu__poselandmarkmodelloader__model_path" options { ext { packet { string_value: "mediapipe/modules/pose_landmark/pose_landmark_full.tflite" } } } } node { name: "poselandmarkbyroicpu__poselandmarkmodelloader__switchcontainer__ConstantSidePacketCalculator_3" calculator: "ConstantSidePacketCalculator" output_side_packet: "PACKET:poselandmarkbyroicpu__poselandmarkmodelloader__switchcontainer__c2__poselandmarkbyroicpu__poselandmarkmodelloader__model_path" options { ext { packet { string_value: "mediapipe/modules/pose_landmark/pose_landmark_heavy.tflite" } } } } node { name: "poselandmarkbyroicpu__poselandmarkmodelloader__switchcontainer__SwitchMuxCalculator" calculator: "SwitchMuxCalculator" input_side_packet: "SELECT:model_complexity" input_side_packet: "C0__PACKET:poselandmarkbyroicpu__poselandmarkmodelloader__switchcontainer__c0__poselandmarkbyroicpu__poselandmarkmodelloader__model_path" input_side_packet: "C1__PACKET:poselandmarkbyroicpu__poselandmarkmodelloader__switchcontainer__c1__poselandmarkbyroicpu__poselandmarkmodelloader__model_path" input_side_packet: "C2__PACKET:poselandmarkbyroicpu__poselandmarkmodelloader__switchcontainer__c2__poselandmarkbyroicpu__poselandmarkmodelloader__model_path" output_side_packet: "PACKET:poselandmarkbyroicpu__poselandmarkmodelloader__model_path" options { ext { select: 1 } } input_stream_handler { input_stream_handler: "ImmediateInputStreamHandler" } } node { name: "poselandmarkbyroicpu__poselandmarkmodelloader__LocalFileContentsCalculator" calculator: "LocalFileContentsCalculator" input_side_packet: "FILE_PATH:poselandmarkbyroicpu__poselandmarkmodelloader__model_path" output_side_packet: "CONTENTS:poselandmarkbyroicpu__poselandmarkmodelloader__model_blob" options { ext { text_mode: false } } } node { name: "poselandmarkbyroicpu__poselandmarkmodelloader__TfLiteModelCalculator" calculator: "TfLiteModelCalculator" input_side_packet: "MODEL_BLOB:poselandmarkbyroicpu__poselandmarkmodelloader__model_blob" output_side_packet: "MODEL:poselandmarkbyroicpu__model" } node { name: "poselandmarkbyroicpu__inferencecalculator__poselandmarkbyroicpu__InferenceCalculator" calculator: "InferenceCalculatorCpu" input_stream: "TENSORS:poselandmarkbyroicpu__input_tensors" output_stream: "TENSORS:poselandmarkbyroicpu__output_tensors" input_side_packet: "MODEL:poselandmarkbyroicpu__model" options { ext { delegate { xnnpack { } } } } } node { name: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__SplitTensorVectorCalculator" calculator: "SplitTensorVectorCalculator" input_stream: "poselandmarkbyroicpu__output_tensors" output_stream: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__landmark_tensor" output_stream: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__pose_flag_tensor" output_stream: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__segmentation_tensor" output_stream: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__heatmap_tensor" output_stream: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__world_landmark_tensor" options { ext { ranges { begin: 0 end: 1 } ranges { begin: 1 end: 2 } ranges { begin: 2 end: 3 } ranges { begin: 3 end: 4 } ranges { begin: 4 end: 5 } } } } node { name: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__TensorsToFloatsCalculator" calculator: "TensorsToFloatsCalculator" input_stream: 
"TENSORS:poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__pose_flag_tensor" output_stream: "FLOAT:poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__pose_presence_score" } node { name: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__ThresholdingCalculator" calculator: "ThresholdingCalculator" input_stream: "FLOAT:poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__pose_presence_score" output_stream: "FLAG:poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__pose_presence" options { ext { threshold: 0.5 } } } node { name: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__GateCalculator_1" calculator: "GateCalculator" input_stream: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__landmark_tensor" input_stream: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__world_landmark_tensor" input_stream: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__segmentation_tensor" input_stream: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__heatmap_tensor" input_stream: "ALLOW:poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__pose_presence" output_stream: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__ensured_landmark_tensor" output_stream: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__ensured_world_landmark_tensor" output_stream: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__ensured_segmentation_tensor" output_stream: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__ensured_heatmap_tensor" } node { name: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__TensorsToLandmarksCalculator_1" calculator: "TensorsToLandmarksCalculator" input_stream: "TENSORS:poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__ensured_landmark_tensor" output_stream: "NORM_LANDMARKS:poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__raw_landmarks" options { ext { num_landmarks: 39 input_image_width: 256 input_image_height: 256 visibility_activation: SIGMOID presence_activation: SIGMOID } } } node { name: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__RefineLandmarksFromHeatmapCalculator" calculator: "RefineLandmarksFromHeatmapCalculator" input_stream: "NORM_LANDMARKS:poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__raw_landmarks" input_stream: "TENSORS:poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__ensured_heatmap_tensor" output_stream: "NORM_LANDMARKS:poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__all_landmarks" options { } } node { name: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__SplitNormalizedLandmarkListCalculator" calculator: "SplitNormalizedLandmarkListCalculator" input_stream: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__all_landmarks" output_stream: "poselandmarkbyroicpu__roi_landmarks" output_stream: "poselandmarkbyroicpu__roi_auxiliary_landmarks" options { ext { ranges { begin: 0 end: 33 } ranges { begin: 33 end: 35 } } } } node { name: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__TensorsToLandmarksCalculator_2" calculator: "TensorsToLandmarksCalculator" input_stream: "TENSORS:poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__ensured_world_landmark_tensor" output_stream: "LANDMARKS:poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__all_world_landmarks" options { ext { num_landmarks: 39 } } } node { name: 
"poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__SplitLandmarkListCalculator" calculator: "SplitLandmarkListCalculator" input_stream: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__all_world_landmarks" output_stream: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__world_landmarks_without_visibility" options { ext { ranges { begin: 0 end: 33 } } } } node { name: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__VisibilityCopyCalculator" calculator: "VisibilityCopyCalculator" input_stream: "NORM_LANDMARKS_FROM:poselandmarkbyroicpu__roi_landmarks" input_stream: "LANDMARKS_TO:poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__world_landmarks_without_visibility" output_stream: "LANDMARKS_TO:poselandmarkbyroicpu__roi_world_landmarks" options { } } node { name: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__GateCalculator_2" calculator: "GateCalculator" input_stream: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__ensured_segmentation_tensor" output_stream: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__enabled_segmentation_tensor" input_side_packet: "ALLOW:enable_segmentation" options { ext { allow: false } } } node { name: "poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__TensorsToSegmentationCalculator" calculator: "TensorsToSegmentationCalculator" input_stream: "TENSORS:poselandmarkbyroicpu__tensorstoposelandmarksandsegmentation__enabled_segmentation_tensor" output_stream: "MASK:poselandmarkbyroicpu__roi_segmentation_mask" options { ext { gpu_origin: TOP_LEFT activation: SIGMOID } } } node { name: "poselandmarkbyroicpu__poselandmarksandsegmentationinverseprojection__LandmarkLetterboxRemovalCalculator_1" calculator: "LandmarkLetterboxRemovalCalculator" input_stream: "LANDMARKS:poselandmarkbyroicpu__roi_landmarks" input_stream: "LETTERBOX_PADDING:poselandmarkbyroicpu__letterbox_padding" output_stream: "LANDMARKS:poselandmarkbyroicpu__poselandmarksandsegmentationinverseprojection__adjusted_landmarks" } node { name: "poselandmarkbyroicpu__poselandmarksandsegmentationinverseprojection__LandmarkLetterboxRemovalCalculator_2" calculator: "LandmarkLetterboxRemovalCalculator" input_stream: "LANDMARKS:poselandmarkbyroicpu__roi_auxiliary_landmarks" input_stream: "LETTERBOX_PADDING:poselandmarkbyroicpu__letterbox_padding" output_stream: "LANDMARKS:poselandmarkbyroicpu__poselandmarksandsegmentationinverseprojection__adjusted_auxiliary_landmarks" } node { name: "poselandmarkbyroicpu__poselandmarksandsegmentationinverseprojection__LandmarkProjectionCalculator_1" calculator: "LandmarkProjectionCalculator" input_stream: "NORM_LANDMARKS:poselandmarkbyroicpu__poselandmarksandsegmentationinverseprojection__adjusted_landmarks" input_stream: "NORM_RECT:pose_rect" output_stream: "NORM_LANDMARKS:unfiltered_pose_landmarks" } node { name: "poselandmarkbyroicpu__poselandmarksandsegmentationinverseprojection__LandmarkProjectionCalculator_2" calculator: "LandmarkProjectionCalculator" input_stream: "NORM_LANDMARKS:poselandmarkbyroicpu__poselandmarksandsegmentationinverseprojection__adjusted_auxiliary_landmarks" input_stream: "NORM_RECT:pose_rect" output_stream: "NORM_LANDMARKS:unfiltered_auxiliary_landmarks" } node { name: "poselandmarkfiltering__LandmarksToDetectionCalculator" calculator: "LandmarksToDetectionCalculator" input_stream: "NORM_LANDMARKS:unfiltered_auxiliary_landmarks" output_stream: "DETECTION:poselandmarkfiltering__aux_detection" } node { name: 
"poselandmarkfiltering__AlignmentPointsRectsCalculator" calculator: "AlignmentPointsRectsCalculator" input_stream: "DETECTION:poselandmarkfiltering__aux_detection" input_stream: "IMAGE_SIZE:image_size" output_stream: "NORM_RECT:poselandmarkfiltering__roi" options { ext { rotation_vector_start_keypoint_index: 0 rotation_vector_end_keypoint_index: 1 rotation_vector_target_angle_degrees: 90 } } } node { name: "poselandmarkfiltering__VisibilitySmoothingCalculator" calculator: "VisibilitySmoothingCalculator" input_stream: "NORM_LANDMARKS:unfiltered_auxiliary_landmarks" output_stream: "NORM_FILTERED_LANDMARKS:poselandmarkfiltering__filtered_aux_visibility" options { ext { low_pass_filter { alpha: 0.1 } } } } node { name: "poselandmarkfiltering__LandmarksSmoothingCalculator" calculator: "LandmarksSmoothingCalculator" input_stream: "NORM_LANDMARKS:poselandmarkfiltering__filtered_aux_visibility" input_stream: "IMAGE_SIZE:image_size" input_stream: "OBJECT_SCALE_ROI:poselandmarkfiltering__roi" output_stream: "NORM_FILTERED_LANDMARKS:auxiliary_landmarks" options { ext { one_euro_filter { min_cutoff: 0.01 beta: 10 derivate_cutoff: 1 } } } } node { name: "poselandmarkstoroi__LandmarksToDetectionCalculator" calculator: "LandmarksToDetectionCalculator" input_stream: "NORM_LANDMARKS:auxiliary_landmarks" output_stream: "DETECTION:poselandmarkstoroi__detection" } node { name: "poselandmarkstoroi__AlignmentPointsRectsCalculator" calculator: "AlignmentPointsRectsCalculator" input_stream: "DETECTION:poselandmarkstoroi__detection" input_stream: "IMAGE_SIZE:image_size" output_stream: "NORM_RECT:poselandmarkstoroi__raw_roi" options { ext { rotation_vector_start_keypoint_index: 0 rotation_vector_end_keypoint_index: 1 rotation_vector_target_angle_degrees: 90 } } } node { name: "poselandmarkstoroi__RectTransformationCalculator" calculator: "RectTransformationCalculator" input_stream: "NORM_RECT:poselandmarkstoroi__raw_roi" input_stream: "IMAGE_SIZE:image_size" output_stream: "pose_rect_from_landmarks" options { ext { scale_x: 1.25 scale_y: 1.25 square_long: true } } } node { name: "poselandmarkbyroicpu__poselandmarksandsegmentationinverseprojection__WorldLandmarkProjectionCalculator" calculator: "WorldLandmarkProjectionCalculator" input_stream: "LANDMARKS:poselandmarkbyroicpu__roi_world_landmarks" input_stream: "NORM_RECT:pose_rect" output_stream: "LANDMARKS:unfiltered_world_landmarks" } node { name: "poselandmarkbyroicpu__poselandmarksandsegmentationinverseprojection__WarpAffineCalculator" calculator: "WarpAffineCalculator" input_stream: "IMAGE:poselandmarkbyroicpu__roi_segmentation_mask" input_stream: "MATRIX:poselandmarkbyroicpu__poselandmarksandsegmentationinverseprojection__inverse_transformation_matrix" input_stream: "OUTPUT_SIZE:poselandmarkbyroicpu__image_size" output_stream: "IMAGE:unfiltered_segmentation_mask" options { ext { border_mode: BORDER_ZERO gpu_origin: TOP_LEFT } } } node { name: "posesegmentationfiltering__PreviousLoopbackCalculator" calculator: "PreviousLoopbackCalculator" input_stream: "MAIN:unfiltered_segmentation_mask" input_stream: "LOOP:filtered_segmentation_mask" output_stream: "PREV_LOOP:posesegmentationfiltering__prev_filtered_segmentation_mask" input_stream_info { tag_index: "LOOP" back_edge: true } } node { name: "posesegmentationfiltering__GateCalculator" calculator: "GateCalculator" input_stream: "posesegmentationfiltering__prev_filtered_segmentation_mask" output_stream: "posesegmentationfiltering__gated_prev_filtered_segmentation_mask" input_side_packet: "ALLOW:smooth_segmentation" 
options { ext { allow: true } } } node { name: "posesegmentationfiltering__SegmentationSmoothingCalculator" calculator: "SegmentationSmoothingCalculator" input_stream: "MASK:unfiltered_segmentation_mask" input_stream: "MASK_PREVIOUS:posesegmentationfiltering__gated_prev_filtered_segmentation_mask" output_stream: "MASK_SMOOTHED:filtered_segmentation_mask" options { } } node { calculator: "FromImageCalculator" input_stream: "IMAGE:filtered_segmentation_mask" output_stream: "IMAGE_CPU:segmentation_mask" } node { name: "poselandmarkfiltering__switchcontainer_1__SwitchDemuxCalculator" calculator: "SwitchDemuxCalculator" input_stream: "NORM_LANDMARKS:unfiltered_pose_landmarks" output_stream: "C0__NORM_LANDMARKS:poselandmarkfiltering__switchcontainer_1__c0__unfiltered_pose_landmarks" output_stream: "C1__NORM_LANDMARKS:poselandmarkfiltering__switchcontainer_1__c1__unfiltered_pose_landmarks" input_side_packet: "ENABLE:smooth_landmarks" options { ext { enable: true } } input_stream_handler { input_stream_handler: "ImmediateInputStreamHandler" } } node { name: "poselandmarkfiltering__switchcontainer_1__VisibilitySmoothingCalculator_1" calculator: "VisibilitySmoothingCalculator" input_stream: "NORM_LANDMARKS:poselandmarkfiltering__switchcontainer_1__c0__unfiltered_pose_landmarks" output_stream: "NORM_FILTERED_LANDMARKS:poselandmarkfiltering__switchcontainer_1__c0__poselandmarkfiltering__filtered_visibility" options { ext { no_filter { } } } } node { name: "poselandmarkfiltering__switchcontainer_1__VisibilitySmoothingCalculator_2" calculator: "VisibilitySmoothingCalculator" input_stream: "NORM_LANDMARKS:poselandmarkfiltering__switchcontainer_1__c1__unfiltered_pose_landmarks" output_stream: "NORM_FILTERED_LANDMARKS:poselandmarkfiltering__switchcontainer_1__c1__poselandmarkfiltering__filtered_visibility" options { ext { low_pass_filter { alpha: 0.1 } } } } ......(could not paste all due to character limit) input_stream_handler { input_stream_handler: "ImmediateInputStreamHandler" } } input_stream: "IMAGE:image" output_stream: "LANDMARKS:pose_landmarks" output_stream: "WORLD_LANDMARKS:pose_world_landmarks" output_stream: "SEGMENTATION_MASK:segmentation_mask" output_stream: "DETECTION:pose_detection" output_stream: "ROI_FROM_LANDMARKS:pose_rect_from_landmarks" output_stream: "ROI_FROM_DETECTION:pose_rect_from_detection" input_side_packet: "SMOOTH_LANDMARKS:smooth_landmarks" input_side_packet: "ENABLE_SEGMENTATION:enable_segmentation" input_side_packet: "SMOOTH_SEGMENTATION:smooth_segmentation" input_side_packet: "MODEL_COMPLEXITY:model_complexity" input_side_packet: "USE_PREV_LANDMARKS:use_prev_landmarks" executor { } type: "PoseLandmarkCpu" Do you have any idea why it is not working properly? | I was facing this exact same issue today and installing mediapipe-silicon and opencv-python in a new 3.9.10 Python environment did the trick. I think it might have something to do with the under-the-hood operation of mediapipe and the compiled version "media pipe-silicon" that prevents it from playing nicely with other Python distros. | 5 | 2 |
72,520,627 | 2022-6-6 | https://stackoverflow.com/questions/72520627/python-big-o-seems-to-return-completely-incorrect-results-what-am-i-doing-wron | I am comparing runtimes of different ways of flattening a list of lists using the big_o module, and for following methods my function does not return the expected results, namely: This one: def itertools_chain_from_iterable(arr): return list(chain.from_iterable(arr)) returns "constant", which can't possibly be true. This one: def merge_extend(self,arr): output = [] for l in arr: output.extend(l) return output returns "cubic" (shouldn't it be quadratic at most?), while... ..this one def merge_w_sum(self,arr): return sum(arr,[]) returns "linear" (I'm quite sure it should be quadratic, see proof here. Furthermore, the list comprehension one def merge_bucket(self,bucket): return [number for n in bucket for number in n] returns "polynomial", which seems terrifying (would expect linear here as well) Code used to calculate the complexities: print('<function name>:', big_o.big_o(<function name>, lambda n:[big_o.datagen.integers(9900,1,9999999) for n in range(50)], n_measures=20)[0]) Output: complexity of itertools_chain_from_iterable: Constant: time = 0.0013 (sec) complexity of merge_w_sum: Linear: time = 0.46 + 6.2E-07*n (sec) complexity of merge_extend: Cubic: time = 0.048 + -2.3E-18*n^3 (sec) complexity of merge_bucket: Polynomial: time = 0.2 * x^-0.019 (sec) What is it that I'm doing (or understanding) wrong? Many thanks in advance for useful tips! | Your lambda ignores its argument n and instead always produce the same constant-size input. Produce input of size n instead. (A note: originally the question had 8 functions and 7 of them were judged "constant time". It was edited to a larger constant and then got other judgements. I guess your computer's speed is somewhat unstable, as the constant input should still lead to constant runtimes and thus "constant time" judgements. In any case, the input-generating function should be fixed like I said.) Given that it's two-dimensional, you could for example produce n lists of size 1, one list of size n, or sqrt(n) lists of size sqrt(n). Presumably n lists of size 1 is what you want if your goal is to demonstrate that sum(arr, []) is bad. 
For example: lambda n: [[i] for i in range(n)] A full program: import big_o from itertools import chain def chained(arr): return list(chain.from_iterable(arr)) def merge_extend(arr): output = [] for l in arr: output.extend(l) return output def merge_w_sum(arr): return sum(arr,[]) def merge_bucket(bucket): return [number for n in bucket for number in n] funcs = [ (chained, 10**5), (merge_extend, 10**5), (merge_w_sum, 10**4), (merge_bucket, 10**5), ] for _ in range(3): for func, max_n in funcs: complexity = big_o.big_o( func, lambda n: [[0]] * n, max_n=max_n, n_timings=10, )[0] print( f'{func.__name__:13}', complexity ) print() Sample results: chained Linear: time = 8.2E-05 + 5.8E-08*n (sec) merge_extend Linear: time = -4.2E-06 + 8.4E-08*n (sec) merge_w_sum Quadratic: time = 0.00013 + 2.4E-09*n^2 (sec) merge_bucket Linear: time = 0.00046 + 8E-08*n (sec) chained Logarithmic: time = -0.0075 + 0.0014*log(n) (sec) merge_extend Linear: time = -2E-05 + 8.5E-08*n (sec) merge_w_sum Quadratic: time = 0.00011 + 2.4E-09*n^2 (sec) merge_bucket Linear: time = -4.2E-05 + 2.6E-07*n (sec) chained Linear: time = -1.8E-05 + 1.6E-07*n (sec) merge_extend Logarithmic: time = -0.01 + 0.0019*log(n) (sec) merge_w_sum Quadratic: time = 8.3E-05 + 2.4E-09*n^2 (sec) merge_bucket Linear: time = 7.1E-05 + 8.3E-08*n (sec) You can see it gets it right most of the time, but still sometimes thinks logarithmic instead of linear. On a more stable system it might work better. Larger max_n values should help, too, but I tried and then big_o crashed with some known internal error. Also note that I used different max_n values for the different functions. It's no good to use the same for all. If you use the same for all, then if it's large, a quadratic time solution takes unbearably long, and if it's small, a linear time solution takes so little time that big_o has trouble differentiating it from logarithmic or linearithmic. There doesn't seem to be a medium value that's good for all. Ideally, big_o would automatically adjust max_n appropriately, but I don't think it supports that. | 4 | 4 |
72,522,021 | 2022-6-6 | https://stackoverflow.com/questions/72522021/is-there-a-way-for-snakemake-to-evaluate-dynamic-snakefile-constructs-like-eval | I would like to have various dynamic "shortcuts" (rule names) in my Snakemake workflow without needing marker files. The method I have in mind is similar to eval in GNU Make, but it doesn't seem like Snakemake can evaluate variable-expanded code in the Snakefile syntax. Is there a way to accomplish this? Here's a simplified example Snakefile. I want to have a rule name corresponding to each output "stage", and right now I have to define them manually. Imagine if I had more "stages" and "steps" and wanted to have a rule that could make all "b", "d", or "z" files if I add those stages. It would be far more concise to dynamically define the rule name than to define every single combination, updated each time I add a new stage. stages = ['a', 'b'] steps = [1, 2] rule all: input: expand('{stage}{step}_file', stage=stages, step=steps) rule: output: touch('{stage}{step}_file') # Can these two be combined so that I don't have to add more # rules for each new "stage" above while retaining the shorthand # rule name corresponding to the stage? rule a: input: expand('a{step}_file', step=steps) rule b: input: expand('b{step}_file', step=steps) | Since Snakefile is a Python file, the following might help to achieve what you are after: for stage in stages: rule: name: f'{stage}' input: expand(f'{stage}{{step}}_file', step=steps) | 7 | 8 |
72,524,450 | 2022-6-6 | https://stackoverflow.com/questions/72524450/replace-none-with-incremental-value-excluding-values-from-skip-list | I am trying to replace all None values in all sublists within a list to incremental numbers starting from 0 but excluding the numbers from the skip list. And there is one more requirement. If first element of a sublist matches to any other sublists' first element, then they all need to have the same value that replaces None in them if present. This is what I could try so far. skip = [1,2] a = [[1, None, 2], [3, 4, 5], [1, None, 7], [8, 9, 10],[11, None, 12]] b = 0 d = {} for i in range(len(a)): if a[i][1]==None: if b in skip: print("found b in skip") b = b + 1 if a[i][1] in d.keys(): a[i][1] = d[a[i][1]] else: a[i][1] = b d[a[i][0]] = b b = b + 1 print(d) print(a) Output: found b in skip {1: 2, 11: 3} [[1, 0, 2], [3, 4, 5], [1, 2, 7], [8, 9, 10], [11, 3, 12]] Expected output: [[1, 0, 2], [3, 4, 5], [1, 0, 7], [8, 9, 10], [11, 3, 12]] | You're looking up the wrong element in the cache in a couple places, and not skipping properly when the skip list contains consecutive elements. Here's the minimal fix with inline comments indicating changed lines: skip = [1,2] a = [[1, None, 2], [3, 4, 5], [1, None, 7], [8, 9, 10],[11, None, 12]] b = 0 d = {} for i in range(len(a)): if a[i][1]==None: while b in skip: # Loop until skipped all elements in skip print("found b in skip") b = b + 1 if a[i][0] in d.keys(): # Check for first, not second element a[i][1] = d[a[i][0]] # Use first element, not second, for lookup else: a[i][1] = b d[a[i][0]] = b # Indent so we only set cached value on cache miss b = b + 1 # Indent so we only increment b on new first element print(d) print(a) Try it online! And here's a more heavily modified version that is somewhat more Pythonic, using names, not indexing (when possible): skip = {1,2} # Use set instead of list; if skip is large, list membership check will be expensive, set will stay cheap a = [[1, None, 2], [3, 4, 5], [1, None, 7], [8, 9, 10],[11, None, 12]] b = 0 d = {} for sublist in a: # Loop over values instead of indexing a over and over, C-style (slow and less readable) first, middle, _ = sublist # Unpack to useful names (reducing risk of misindexing, and making more readable code) # Not unpacking in for loop itself because we need to reassign elements of sublist if middle is None: # Always use is/is not to compare to None while b in skip: b += 1 # Use += to avoid repeating variable name if first in d: # No need for .keys(); "in d" has same effect as "in d.keys()" and avoids creating unnecessary keys view sublist[1] = d[first] # Indexing needed for assignment to modify original list else: sublist[1] = d[first] = b # Can populate cache and reassign middle value all at once b += 1 print(d) print(a) Try it online! Either version gets the expected output from the question. | 4 | 2 |
72,521,182 | 2022-6-6 | https://stackoverflow.com/questions/72521182/how-to-reduce-space-between-columns-in-a-horizontal-legend-python | I would like to reduce space between legend columns (An example is shown in the attached image). So, what I want to do is, [before] (Sym.)A ------ (Sym.)B ------ (Sym.)C ------ (Sym.)D [after] (Sym.)A -- (Sym.)B -- (Sym.)C -- (Sym.)D Is there a way to do it? (e.g., plt.legend(ncol=4, [a hidden parameter??])) Thanks! Image of the current output: | Take a look at the description of columnspacing from documentation. You can try: plt.legend(ncol=4, columnspacing=0.8) | 7 | 13 |
72,520,366 | 2022-6-6 | https://stackoverflow.com/questions/72520366/why-does-functools-partial-not-inherit-name-and-other-meta-data-by-defau | I am wondering why meta data (e.g. __name__, __doc__) for the wrapped method/function by partial is not inherited by default. Instead, functools provides the update_wrapper function. Why it is not done by default is not mentioned anywhere (as far as I could see) e.g. here and many tutorials on functools.partial talk about how to "solve the issue" of a missing __name__. Are there examples where inheriting this information causes problems/confusion? | Think about what that would actually look like: def add(x, y): "Adds two numbers" return x + y add_5 = partial(add, 5) Would it actually make sense for add_5 to have __name__ set to "add" and __doc__ set to "Adds two numbers"? The callable created by partial behaves completely differently from the original function. It wouldn't be appropriate for the new callable to inherit the name and docstring of a function with completely different behavior. | 8 | 16 |
72,496,224 | 2022-6-4 | https://stackoverflow.com/questions/72496224/is-sgd-optimizer-in-pytorch-actually-does-gradient-descent-algorithm | I'm working on trying to compare the converge rate of SGD and GD algorithms for the neural networks. In PyTorch, we often use SGD optimizer as follows. train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=64, shuffle=True) optimizer = torch.optim.SGD(model.parameters(), lr=0.001) for epoch in range(epochs): running_loss = 0 for input_batch, labels_batch in train_dataloader: input = input_batch y_hat = model(input) y = labels_batch L = loss(y_hat, y) optimizer.zero_grad() L.backward() optimizer.step() running_loss += L.item() My understanding about the optimizer here is that the SGD optimizer actually does the Mini-batch Gradient Descent algorithm because we feed the optimizer one batch of data at one time. So, if we set the batch_size parameter as the size of all data, the code actually does Gradient Descent for the neural network. Is my understanding correct? | Your understanding is correct. SGD is just updating weights based on the gradient computed by backpropagation. The flavor of gradient descent that it performs is therefore determined by the data loader. Gradient descent (aka batch gradient descent): Batch size equal to the size of the entire training dataset. Stochastic gradient descent: Batch size equal to one and shuffle=True. Mini-batch gradient descent: Any other batch size and shuffle=True. By far the most common in practical applications. | 8 | 6 |
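As a small illustrative sketch (mine, not the answerer's), only the DataLoader changes between the three flavors; the torch.optim.SGD optimizer and the training loop stay exactly the same:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

dataset = TensorDataset(torch.randn(100, 3), torch.randn(100, 1))  # toy data

# Batch gradient descent: every step sees the whole training set
gd_loader = DataLoader(dataset, batch_size=len(dataset))

# Stochastic gradient descent: one shuffled example per step
sgd_loader = DataLoader(dataset, batch_size=1, shuffle=True)

# Mini-batch gradient descent: the usual middle ground
minibatch_loader = DataLoader(dataset, batch_size=16, shuffle=True)
```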
72,518,160 | 2022-6-6 | https://stackoverflow.com/questions/72518160/detect-strings-containing-only-digits-letters-and-one-or-more-question-marks | I am writing a python regex that matches only string that consists of letters, digits and one or more question marks. For example, regex1: ^[A-Za-z0-9?]+$ returns strings with or without ? I want a regex2 that matches expressions such as ABC123?A, 1AB?CA?, ?2ABCD, ???, 123? but not ABC123, ABC.?1D1, ABC(a)?1d on mysql, I did that and it works: select * from ( select * from norm_prod.skill_patterns where pattern REGEXP '^[A-Za-z0-9?]+$') AS XXX where XXX.pattern not REGEXP '^[A-Za-z0-9]+$' | How about something like this: ^(?=.*\?)[a-zA-Z0-9\?]+$ As you can see here at regex101.com Explanation The (?=.*\?) is a positive lookahead that tells the regex that the start of the match should be followed by 0 or more characters and then a ? - i.e., there should be a ? somewhere in the match. The [a-zA-Z0-9\?]+ matches one-or-more occurrences of the characters given in the character class i.e. a-z, A-Z and digits from 0-9, and the question mark ?. Altogether, the regex first checks if there is a question mark somewhere in the string to be matched. If yes, then it matches the characters mentioned above. If either the ? is not present, or there is some foreign character, then the string is not matched. | 4 | 6 |
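A quick hedged sanity check of the proposed pattern against the examples from the question, using Python's re module (MySQL 8's REGEXP also supports lookaheads, but older MySQL versions may not):

```python
import re

pattern = re.compile(r'^(?=.*\?)[A-Za-z0-9?]+$')

should_match = ["ABC123?A", "1AB?CA?", "?2ABCD", "???", "123?"]
should_not_match = ["ABC123", "ABC.?1D1", "ABC(a)?1d"]

assert all(pattern.match(s) for s in should_match)
assert not any(pattern.match(s) for s in should_not_match)
```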
72,512,921 | 2022-6-6 | https://stackoverflow.com/questions/72512921/python-tqdm-package-miniters-not-working-occasionally | Since I'm looping over permutations I don't want too frequent updates of the progress bar, so I set miniters to be one-tenth of the total length: total_len = len(list(itertools.permutations(range(N), 2))) for row_a, row_b in tqdm(itertools.permutations(range(N), 2), total=total_len , miniters=int(total_len/10), disable=disable): Any idea what's causing this unstable performance of miniters? | I think I found the reason, another argument maxinterval controls the maximum update interval which is set to 10 (seconds) by default, so when the progress bar updates too slowly it'll automatically modify the miniters parameter. Therefore I'll need to specify a larger maxinterval as well. So the code should be total_len = len(list(itertools.permutations(range(N), 2))) # or a larger value than 200 (seconds) for row_a, row_b in tqdm(itertools.permutations(range(N), 2), total=total_len , miniters=int(total_len/10), maxinterval=200, disable=disable): | 4 | 6 |
72,509,839 | 2022-6-5 | https://stackoverflow.com/questions/72509839/how-to-use-jax-vmap-over-zipped-arguments | I have the following example code that works with a regular map def f(x_y): x, y = x_y return x.sum() + y.sum() xs = [jnp.zeros(3) for i in range(4)] ys = [jnp.zeros(2) for i in range(4)] list(map(f, zip(xs, ys))) # returns: [DeviceArray(0., dtype=float32), DeviceArray(0., dtype=float32), DeviceArray(0., dtype=float32), DeviceArray(0., dtype=float32)] How can I use jax.vmap instead? The naive thing is: jax.vmap(f)(zip(xs, ys)) but this gives: ValueError: vmap was requested to map its argument along axis 0, which implies that its rank should be at least 1, but is only 0 (its shape is ()) | For using jax.vmap, you do not need to zip your variables. You can write what you want like below: import jax.numpy as jnp from jax import vmap def f(x_y): x, y = x_y return x.sum() + y.sum() xs = jnp.zeros((4,3)) ys = jnp.zeros((4,2)) vmap(f)((xs, ys)) Output: DeviceArray([0., 0., 0., 0.], dtype=float32) | 4 | 3 |
72,504,678 | 2022-6-5 | https://stackoverflow.com/questions/72504678/can-a-dataclass-field-format-its-value-for-the-repr | I have a Node class holding RGB data in both hex and HSV form. I'll be using this to sort colors in various ways and would prefer the HSV tuple to remain in float form for comparisons instead of converting from a string for every use. Is there a way to specify to the dataclass field that it should format the value in a specific way similar to default values with the default_factory, i.e. a repr_factory? def RGB2HSV(r, g, b): '''Returns HSV values in the range H = [0, 360], S = [0, 100], V = [0, 100]''' r, g, b = r / 255, g / 255, b / 255 maxRGB = max(r, g, b) minRGB = min(r, g, b) delta = maxRGB - minRGB V = maxRGB if V == 0: return 0, 0, V S = delta / V * 100 if S == 0: return 0, S, V * 100 if V == r: H = (g - b) / delta elif V == g: H = 2 + (b - r) / delta else: H = 4 + (r - g) / delta H *= 60 if H < 0: H += 360 return H, S, V * 100 @dataclass class Node: r: int = field(repr=False) g: int = field(repr=False) b: int = field(repr=False) hex: tuple[int, int, int] = field(init=False) hsv: tuple[float, float, float] = field(init=False) def __post_init__(self): self.hex = self.r, self.g, self.b # Generating random r, g, b numbers self.hsv = RGB2HSV(self.hex) # Converts the r, g, b to a tuple of floats While I'm working out the different sorts, I'm printing out the Nodes and seeing 10 unnecessary digits of a float is distracting. As far as I can think of, would I just be better off implementing my own __repr__ for the class instead of relying on the dataclass generated one? The reason I'm looking at the __repr__ value is because it's automatically generated by the dataclass and can make distinguishing between nearly identical colors easier than just looking at the visual output. It'll be easier to find out what to change or do next if I know what the actual numbers a color are. A portion of the end of the output: Node(hex=(238, 0, 0), hsv=(0.0, 100.0, 93.33333333333333)) Node(hex=(238, 17, 0), hsv=(4.285714285714286, 100.0, 93.33333333333333)) Node(hex=(238, 34, 0), hsv=(8.571428571428571, 100.0, 93.33333333333333)) Node(hex=(238, 51, 0), hsv=(12.857142857142858, 100.0, 93.33333333333333)) Node(hex=(255, 0, 0), hsv=(0.0, 100.0, 100.0)) Node(hex=(255, 17, 0), hsv=(4.0, 100.0, 100.0)) Node(hex=(255, 34, 0), hsv=(8.0, 100.0, 100.0)) Node(hex=(255, 51, 0), hsv=(12.0, 100.0, 100.0)) Basically, can a format be specified to a dataclass field, similar to how a function can be specified to default_factory, in order for the generated __repr__ to format the field for me so I don't have to write my own? ... hsv: tuple[float, float, float] = field(init=False, repr_factory=lambda x: "{:.3f"}.format(x) for x in self.hsv) ... Node(hex=(238, 51, 0), hsv=(12.857, 100.000, 93.333)) | The dataclasses library currently does not support formatting fields like that. The code generated in the default __repr__ for each included field is always in the formf'field={self.field!r}'. You will have to write your own __repr__. | 7 | 6 |
72,502,292 | 2022-6-4 | https://stackoverflow.com/questions/72502292/fatal-error-python-h-no-such-file-or-directory-when-compiling-pybind11-example | I am starting out with pybind11, trying to compile the first example. I am using Xubuntu 20.04. My system python is 3.8, but I have install pybind11 only for python 3.10, which is the version that executes when I type python at the command prompt. When I run the compilation command given in the pybind11 docs: c++ -O3 -Wall -shared -std=c++11 -fPIC $(python3.10 -m pybind11 --includes) example.cpp -o example$(python3.10-config --extension-suffix) I get the error message: fatal error: Python.h: No such file or directory 213 | #include <Python.h> I followed the advice give in the accepted answer to fatal error: Python.h: No such file or directory, and ran sudo apt install python3.10-dev but this had no effect, even though Python.h now exists in /usr/include/python3.10. I should perhaps mention that I am not using a virtual environment at this point. EDIT python3.10-config --cflags -I/usr/include/python3.10 -I/usr/include/python3.10 -Wno-unused-result -Wsign-compare -g -fstack-protector-strong -Wformat -Werror=format-security -DNDEBUG -g -fwrapv -O2 -Wall EDIT I changed the command as suggested by 9769953: Pybind11Test$ c++ -O3 -Wall -shared -std=c++11 -fPIC $(python3.10 -m pybind11 python3.10-config --includes) example.cpp -o example$(python3.10-config --extension-suffix) usage: __main__.py [-h] [--includes] [--cmakedir] __main__.py: error: unrecognized arguments: python3.10-config example.cpp:1:10: fatal error: pybind11/pybind11.h: No such file or directory 1 | #include <pybind11/pybind11.h> | ^~~~~~~~~~~~~~~~~~~~~ compilation terminated. Adding python3.10-config gave me an unrecognized argumnents error, but the compilation failed at a different place. #include <pybind11/pybind11.h> is the first line of example.cpp. The file exists at /home/saul/.local/lib/python3.10/site-packages/pybind11/include/pybind11/pybind11.h EDIT Latest attempt Pybind11Test$ c++ -O3 -Wall -shared -std=c++11 -fPIC $(python3.10 -m pybind11 --includes) example.cpp -o example.out In file included from /home/saul/.local/lib/python3.10/site-packages/pybind11/include/pybind11/detail/../cast.h:13, from /home/saul/.local/lib/python3.10/site-packages/pybind11/include/pybind11/detail/../attr.h:13, from /home/saul/.local/lib/python3.10/site-packages/pybind11/include/pybind11/detail/class.h:12, from /home/saul/.local/lib/python3.10/site-packages/pybind11/include/pybind11/pybind11.h:13, from example.cpp:1: /home/saul/.local/lib/python3.10/site-packages/pybind11/include/pybind11/detail/../detail/common.h:213:10: fatal error: Python.h: No such file or directory 213 | #include <Python.h> | ^~~~~~~~~~ compilation terminated. | The problem is that the OS installs the header files for Python 3.10 in /usr/include/python3.10/include. This keeps them separate from the default sytem Python (3.8), in case that development package is also installed (otherwise, one set of files would overwrite the other set of files). But /usr/include/python3.10/include is not in the default search path for includes for the C++ compiler; you have to add it. You could do that manually, but the python-config tool takes (better) care of that normally. Add python3.10-config --includes as a subshell command ($(...)) to the command line. Similar to what pybind does, with $(python -m pybind --includes) to let the compiler find pybind's include files. 
Obviously, the correct python-config variant needs to be used, hence python3.10-config. The shared library files are fewer, and named by having the Python version in their filename, so these can all live safely together in /usr/lib/x86_64-linux-gnu/. You'll find libpython3.8.so and libpython3.10.so next to each other in that directory. So there is no need to add the library search path (that option doesn't even exist for python-config, but it is included in python3.10-config --ldflags). All in all, the command would then look like: c++ -O3 -Wall -shared -std=c++11 -fPIC \ $(python3.10-config --includes) \ $(python3.10 -m pybind11 --includes) \ -o example$(python3.10-config --extension-suffix) \ example.cpp (broken across a few lines for readability.) | 6 | 5 |
72,463,669 | 2022-6-1 | https://stackoverflow.com/questions/72463669/how-to-set-xaxis-ticks-in-plotly | This is my data: z=[[2.021127032949411, 2.405934869144868, 6.005238252515181, 0.43308358365566557, 6.80624302463342, 1.4243920241458794], [6.754588097147502, 17.66441844606136, 17.66863225189955, 4.796490376261003, 4.100119672126023, 2.6461740133188454], [30.522227304933793, 25.244026806049867, 44.77345477001106, 24.58566233495368, 5.070470289616061, 0.9441603389397017], [2.154134557312937, 4.310863690800093, 6.1213216109229505, 3.1274613380516687, 2.5391663573164514, 0.3578307481864878], [19.520038969668185, 10.092300407536902, 1.980581522863168, 2.792253899319521, 0.7083651529637687, 1.7654249187194606]] x=['0.0', '6.784919041781837', '13.569838083563674', '20.354757125345515', '27.139676167127348', '33.92459520890919'] y= [0.0, 1306.1224489795918, 2612.2448979591836, 3918.3673469387754, 5224.489795918367] My code looks like this: fig = go.Figure(data=go.Heatmap( z=z, x=x, y=y, colorscale='Viridis')) fig.update_layout( xaxis = dict( tickmode = 'linear', tick0 = 0, dtick = 20, ), font=dict(size=18, color="black")) plotly.offline.plot(fig) And the result is How can I set nice x axis ticks, for example [0, 5, 10...] with no decimal points? Looks like my attempt above with tick0 and dtick is not helpful. | Well, hello o/ Follow the code below and adapt/update it how you want ;) import plotly.graph_objects as go fig = go.Figure(data=go.Heatmap( z=z, x=x, y=y, colorscale='Viridis')) fig.update_layout( xaxis = dict( tickmode='array', #change 1 tickvals = x, #change 2 ticktext = [0,5,10,15,20,25], #change 3 ), font=dict(size=18, color="black")) fig.show() Result (click to zoom in and check): | 4 | 5 |
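A possible explanation, offered as an assumption rather than something stated in the answer: because x holds strings, Plotly treats the x axis as categorical, so the original tick0/dtick settings do not behave as expected; casting x to floats makes the linear tick mode work directly:

```python
import plotly.graph_objects as go

fig = go.Figure(data=go.Heatmap(
    z=z,
    x=[float(v) for v in x],   # numeric axis instead of categorical strings
    y=y,
    colorscale='Viridis'))

fig.update_layout(
    xaxis=dict(tickmode='linear', tick0=0, dtick=5),
    font=dict(size=18, color="black"))

fig.show()
```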
72,500,067 | 2022-6-4 | https://stackoverflow.com/questions/72500067/how-to-use-python-variable-in-sql-query-in-databricks | I am trying to convert a SQL stored procedure to databricks notebook. In the stored procedure below 2 statements are to be implemented. Here the tables 1 and 2 are delta lake tables in databricks cluster. I want to use a python variable in place of max_date in SQL query. How to do it? %sql DELETE FROM table1 WHERE Date = max_date; INSERT INTO table1 SELECT * FROM table2 WHERE Date = max_date; | If you are going to run it cell by cell then you can use databricks widgets like First cell x=str(datetime.date.today()) dbutils.widgets.text("max_date",x) Second cell %sql select getArgument("max_date") AS max_date will give you max_date 2022-06-04 but as mentioned here it does not work when run all is used and ideal way will be to create separate language based notebook and pass variables using %run Other way is to use spark conf like below First set a value for the conf [Note-the conf name should have .(dot) in it] max_date2=str(datetime.date.today()) spark.conf.set("abc.max_dt2", max_date2) Next try selecting value like below %sql select "${abc.max_dt2}" as max_date It should give same value as above | 5 | 11 |
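If the statements can be run from a Python cell rather than a %sql cell, another common pattern (a sketch under that assumption, not from the accepted answer) is to interpolate the variable into spark.sql calls, since spark is predefined in Databricks notebooks:

```python
import datetime

max_date = str(datetime.date.today())  # or however max_date is actually computed

# Beware of quoting if max_date ever comes from untrusted input
spark.sql(f"DELETE FROM table1 WHERE Date = '{max_date}'")
spark.sql(f"INSERT INTO table1 SELECT * FROM table2 WHERE Date = '{max_date}'")
```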
72,497,046 | 2022-6-4 | https://stackoverflow.com/questions/72497046/skipping-a-certain-range-of-a-list-at-time-in-python | I have an array; I want to pick the first 2 (or a range), skip the next 2, pick the next 2, and continue this until the end of the list list = [2, 4, 6, 7, 9,10, 13, 11, 12,2] results_wanted = [2,4,9,10,12,2] # note how it is skipping 2; 2 is used here as an example Is there a way to achieve this in Python? | Take n elements and skip the next n. l = [2, 4, 6, 7, 9, 10, 13, 11, 12, 2] n = 2 wanted = [x for i in range(0, len(l), n + n) for x in l[i: i + n]] ### Output : [2, 4, 9, 10, 12, 2] | 5 | 4 |
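The same take-n / skip-n idea can also be written with itertools, which some readers find easier to follow than the nested comprehension; a small equivalent sketch:

```python
from itertools import chain

l = [2, 4, 6, 7, 9, 10, 13, 11, 12, 2]
n = 2

# take a chunk of n, then jump ahead by 2*n so the next n are skipped
wanted = list(chain.from_iterable(l[i:i + n] for i in range(0, len(l), 2 * n)))
# [2, 4, 9, 10, 12, 2]
```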
72,491,761 | 2022-6-3 | https://stackoverflow.com/questions/72491761/difference-of-gaussian-variable-results | I am trying to get my head around this one. I am quite new to Python and even more so to image filters and manipulation. I'd like to use OpenCV to apply this function : def gaussianFilter(img, kernel_size=(3,3), standard_dev=1): return cv2.GaussianBlur(img, ksize=kernel_size, sigmaX=standard_dev, sigmaY=standard_dev) Later on in my code I'd want to apply a DoG. According to the definitions I've found, I could have done the following : diff1 = gaussianFilter(img, standard_dev=1) diff2 = gaussianFilter(img, standard_dev=10) res = cv2.subtract(diff2,diff1) Which should be equivalent to using the following (from skimage.filters): def differenceOfGaussianFilter(img,low_sigma=1, high_sigma=10): return difference_of_gaussians(img, low_sigma=low_sigma, high_sigma=high_sigma) Obviously the results aren't equal. What's more, changing the parameters in gaussianFilter() doesn't change the output value much... while changing them in differenceOfGaussianFilter() renders very different pictures. My aim would be to write a simple function for differenceOfGaussianFilter() but which uses the difference between 2 gaussianFilter() calls. Here is an example of the 2 different results (left is gaussianFilter(10) - gaussianFilter(1) // right is differenceOfGaussianFilter(1,10)). Any idea how I could do that, and why my results aren't similar? Thanks :) | Kernel size should be proportional to sigma. You didn't adjust that kernel size in these calls, so your sigma=10 lowpass amounts to a 3x3 box blur... which has a sigma of around 1-2, but isn't a gaussian anymore. Rule of thumb: kernel radius ~= 3*sigma. Then you get +- three sigmas of the gaussian distribution, which is 99.7% of its probability mass. Best not to give an explicit kernel size, but let cv.GaussianBlur calculate it on its own. Pass (0,0) for the size. | 4 | 4 |
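Putting the answer's advice into code (my sketch, not the answerer's): pass (0, 0) as the kernel size so OpenCV derives it from sigma, and subtract the more-blurred image from the less-blurred one, which is the convention skimage.filters.difference_of_gaussians uses. Working in float avoids the clipping cv2.subtract applies to uint8 inputs; exact numerical parity with skimage may still differ slightly because skimage rescales to float internally.

```python
import cv2
import numpy as np

def gaussian_filter(img, sigma):
    # ksize=(0, 0) tells OpenCV to derive the kernel size from sigma
    return cv2.GaussianBlur(img, (0, 0), sigmaX=sigma, sigmaY=sigma)

def difference_of_gaussian_filter(img, low_sigma=1, high_sigma=10):
    img = img.astype(np.float32)
    # less-blurred minus more-blurred, as in skimage's difference_of_gaussians
    return gaussian_filter(img, low_sigma) - gaussian_filter(img, high_sigma)
```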
72,489,775 | 2022-6-3 | https://stackoverflow.com/questions/72489775/python-dict-comprehension-convert-list-of-tuples-of-dictionaries-and-integers-in | I have list of dictionaries and list of integers x = [ { "name": "tom", "job": "judge" }, { "name":"bob", "job": "policeman" } ] y = [1000, 2200] I want to zip them and add y elements to dictionaries as "payroll": y_element The desired output would be: [ { "name": "tom", "job": "judge", "payroll": 1000 }, { "name":"bob", "job": "policeman", "payroll": 2200 } ] I actually achieved that by: z = zip(x, y) for i in z: i[0]["payroll"] = i[1] z = [i[0] for i in z] But I was wondering whether it could be done in dict comprehension in list comprehension. This is what I tried so far: z = [{k: v, "value": o} for d, o in z for k, v in d.items()] Obviously it is wrong because the output is: {'name': 'bob', 'job': 'policeman', 'value': 2} | You can merge the dict with the required data using ** here. [{**d, 'payroll':i} for d, i in zip(x, y)] # [{'name': 'tom', 'job': 'judge', 'payroll': 1000}, # {'name': 'bob', 'job': 'policeman', 'payroll': 2200}] | 4 | 9 |
72,487,738 | 2022-6-3 | https://stackoverflow.com/questions/72487738/insert-array-into-all-places-in-another-array | For two arrays, say, a = np.array([1,2,3,4]) and b = np.array([5,6]), is there a way, if any, to obtain a 2d array of the following form without looping: [[5 6 1 2 3 4] [1 5 6 2 3 4] [1 2 5 6 3 4] [1 2 3 5 6 4] [1 2 3 4 5 6]] i.e. to insert b in all possible places of a. And if loops are unavoidable, how to do it the most computationally efficient way (a can be long, the length of b is irrelevant)? Example of how it can be done using loops is trivial: a = np.array([1,2,3,4]) b = np.array([5,6]) rows = len(a) + 1 cols = len(a) + len(b) res = np.empty([rows, cols]) for i in range(rows): res[i, i:len(b)+i] = b res[i, len(b)+i:] = a[i:] res[i, 0:i] = a[0:i] print(rows.astype(int)) [[5 6 1 2 3 4] [1 5 6 2 3 4] [1 2 5 6 3 4] [1 2 3 5 6 4] [1 2 3 4 5 6]] | Consider using numba acceleration. This happens to be what numba is best at. For your example, it can speed up nearly 6 times: from timeit import timeit import numpy as np from numba import njit a = np.arange(1, 5) b = np.array([5, 6]) def fill(a, b): rows = a.size + 1 cols = a.size + b.size res = np.empty((rows, cols)) for i in range(rows): res[i, i:b.size + i] = b res[i, b.size + i:] = a[i:] res[i, :i] = a[:i] return res if __name__ == '__main__': print('before:', timeit(lambda: fill(a, b))) fill = njit(fill) print('after:', timeit(lambda: fill(a, b))) Output: before: 9.488150399993174 after: 1.6149254000047222 | 4 | 1 |
72,474,765 | 2022-6-2 | https://stackoverflow.com/questions/72474765/make-reusable-iterable-out-of-generator | I want to convert generators to reusable iterables. For example, consider generator: def myrange(n): for i in range(n): yield i It generates one-use iterator: x = myrange(3) print(list(x)) # [0, 1, 2] print(list(x)) # [] I want to add a decorator to the definition of myrange such that it produces reusable iterable (like the usual range) instead of one-use iterator: x = myrange(3) print(list(x)) # [0, 1, 2] print(list(x)) # [0, 1, 2] | This should work: def mk_reusable(f): """ Makes a reusable iterable out of generator by remembering its arguments """ class MyIterable: def __init__(self, *args, **kwargs): self._args = args self._kwargs = kwargs def __iter__(self): yield from f(*self._args, **self._kwargs) return MyIterable # TEST @mk_reusable def myrange(n): for i in range(n): yield i x = myrange(3) assert list(x) == [0, 1, 2] assert list(x) == [0, 1, 2] # can consume it twice | 5 | 1 |
72,485,331 | 2022-6-3 | https://stackoverflow.com/questions/72485331/python-get-child-class-inside-parent-class-static-class-method | The output of: class Dog(): def get_class(): return __class__ class Cat(): def get_class(): return __class__ print(Dog.get_class()) print(Cat.get_class()) is: <class '__main__.Dog'> <class '__main__.Cat'> I want to DRY up my code with a subclass. But the output of: class BaseClass(): def get_class(): return __class__ class Dog(BaseClass): pass class Cat(BaseClass): pass print(Dog.get_class()) print(Cat.get_class()) is <class '__main__.BaseClass'> <class '__main__.BaseClass'> How do I change the code in the second case to obtain the same output as the first case? | you are almost there : class BaseClass: @classmethod def get_class(cls): return cls class Dog(BaseClass): pass class Cat(BaseClass): pass print(Dog.get_class()) print(Cat.get_class()) <class '__main__.Dog'> <class '__main__.Cat'> | 6 | 3 |
72,484,789 | 2022-6-3 | https://stackoverflow.com/questions/72484789/numpy-array-tolist-converts-numpy-datetime64-to-int | I have an array of datetimes that I need to convert to a list of datetimes. My array looks like this: import numpy as np my_array = np.array(['2017-06-28T22:47:51.213500000', '2017-06-28T22:48:37.570900000', '2017-06-28T22:49:46.736800000', '2017-06-28T22:50:41.866800000', '2017-06-28T22:51:17.024100000', '2017-06-28T22:51:24.038300000'], dtype='datetime64[ns]') my_list = my_array.tolist() I need a list of datetime values, but when I do my_array.tolist(), I get a list of numerical time stamps: [1498690071213500000, 1498690117570900000, 1498690186736800000, 1498690241866800000, 1498690277024100000, 1498690284038300000] My question is how do I preserve the datetime format when going from an array to a list, or how do I convert the list of time stamps to a list datetime values? | Explicitly casting the numpy.ndarray as a native Python list will preserve the contents as numpy.datetime64 objects: >>> list(my_array) [numpy.datetime64('2017-06-28T22:47:51.213500000'), numpy.datetime64('2017-06-28T22:48:37.570900000'), numpy.datetime64('2017-06-28T22:49:46.736800000'), numpy.datetime64('2017-06-28T22:50:41.866800000'), numpy.datetime64('2017-06-28T22:51:17.024100000'), numpy.datetime64('2017-06-28T22:51:24.038300000')] However, if you wanted to go back from an integer timestamp to a numpy.datetime64 object, the number given here by numpy.ndarray.tolist is given in nanosecond format, so you could also use a list comprehension like the following: >>> [np.datetime64(x, "ns") for x in my_list] [numpy.datetime64('2017-06-28T22:47:51.213500000'), numpy.datetime64('2017-06-28T22:48:37.570900000'), numpy.datetime64('2017-06-28T22:49:46.736800000'), numpy.datetime64('2017-06-28T22:50:41.866800000'), numpy.datetime64('2017-06-28T22:51:17.024100000'), numpy.datetime64('2017-06-28T22:51:24.038300000')] And if you want the final result as a Python datetime.datetime object instead of a numpy.datetime64 object, you can use a method like this (adjusted as needed for locality): >>> from datetime import datetime >>> list(map(datetime.utcfromtimestamp, my_array.astype(np.uint64) / 1e9)) [datetime.datetime(2017, 6, 28, 22, 47, 51, 213500), datetime.datetime(2017, 6, 28, 22, 48, 37, 570900), datetime.datetime(2017, 6, 28, 22, 49, 46, 736800), datetime.datetime(2017, 6, 28, 22, 50, 41, 866800), datetime.datetime(2017, 6, 28, 22, 51, 17, 24100), datetime.datetime(2017, 6, 28, 22, 51, 24, 38300)] Edit: Warren Weckesser's answer provides a more straightforward approach to go from a numpy.datetime64[ns] array to a list of Python datetime.datetime objects than is described here. | 6 | 4 |
72,483,752 | 2022-6-3 | https://stackoverflow.com/questions/72483752/how-to-limit-a-python-module-to-expose-specific-parts | I just made a module in python but I don't want people to do this: import mymodule mymodule. and this then shows all methods and variables I added to the module. I just want them to see specified ones because I have many additional ones that are only for internal use. | If this is your module mymodule.py: def expose_this(): print('yes please') def but_not_this(): print('definitely not') Importing it directly like this: import mymodule Gives a user access to mymodule.expose_this() and mymodule.but_not_this(), there's no way to change that, although you could call but_not_this something like _but_not_this instead and many editors would at least warn a user that they are not supposed to access that. However, the right way to do it would be to create a package. Put your module in a separate folder (called mymodule), and add an __init__.py with this: from .mymodule import expose_this Now, if someone imports your package using the same import statement as before, they only have access to mymodule.expose_this() import mymodule mymodule.expose_this() # this works mymodule.but_not_this() # this causes an error There's a lot more to say about creating packages and how to add content, but there's good documentation and tutorials for that out there. Note: if someone knows the internal structure of your module, they can still access but_not_this() with this: import mymodule mymodule.mymodule.but_not_this() But they'd have to really want to - making it impossible is hard and there's really no point, since someone will be able to get at your code anyway, if they need to. But if you want to make the intent clear, you could rename mymodule.py to _mymodule.py and prefix the functions you don't want exposed with a _ as well - this helps editors to reinforce your intentions with the user. | 4 | 5 |
72,482,384 | 2022-6-2 | https://stackoverflow.com/questions/72482384/how-to-read-emails-from-gmail | I am trying to connect my Gmail to Python, but it shows me this error. I already checked my password; any idea what it could be? b'[AUTHENTICATIONFAILED] Invalid credentials (Failure)' Traceback (most recent call last): File "/Users/myuser/Documents/migrations/untitled3.py", line 29, in read_email_from_gmail mail.login(FROM_EMAIL,FROM_PWD) File "/Users/myuser/opt/anaconda3/lib/python3.9/imaplib.py", line 612, in login raise self.error(dat[-1]) imaplib.IMAP4.error: b'[AUTHENTICATIONFAILED] Invalid credentials (Failure)' Here is my code. Also, I want to know which port I should use. import smtplib import time import imaplib import email import traceback ORG_EMAIL = "@gmail.com" FROM_EMAIL = "myemail" + ORG_EMAIL FROM_PWD = "mypassword" SMTP_SERVER = "smtp.gmail.com" SMTP_PORT = ?? def read_email_from_gmail(): try: mail = imaplib.IMAP4_SSL(SMTP_SERVER) mail.login(FROM_EMAIL,FROM_PWD) mail.select('inbox') data = mail.search(None, 'ALL') mail_ids = data[1] id_list = mail_ids[0].split() first_email_id = int(id_list[0]) latest_email_id = int(id_list[-1]) for i in range(latest_email_id,first_email_id, -1): data = mail.fetch(str(i), '(RFC822)' ) for response_part in data: arr = response_part[0] if isinstance(arr, tuple): msg = email.message_from_string(str(arr[1],'utf-8')) email_subject = msg['subject'] email_from = msg['from'] print('From : ' + email_from + '\n') print('Subject : ' + email_subject + '\n') except Exception as e: traceback.print_exc() print(str(e)) read_email_from_gmail() My main goal is to be able to get the CSV file from each email, but for now I just want to read messages. | You need to enable 'Less secure apps' in your Gmail account if you're going to check it this way. For that reason, it would be better to use the Gmail API. SMTP port is not set - ensure you are using the correct port (993) | 15 | 2 |
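Since "less secure apps" can no longer be enabled on most accounts (see the next question), a commonly used alternative to the accepted answer is an app password. A minimal sketch, assuming 2-step verification is enabled and an app password has been generated in the Google Account settings; IMAP over SSL on imap.gmail.com uses port 993 (the question's SMTP_PORT is really the IMAP port).

```python
import email
import imaplib

IMAP_SERVER = "imap.gmail.com"
IMAP_PORT = 993

mail = imaplib.IMAP4_SSL(IMAP_SERVER, IMAP_PORT)
mail.login("[email protected]", "app-password-here")  # app password, not the normal account password
mail.select("inbox")

_, data = mail.search(None, "ALL")
for num in data[0].split()[-5:]:              # last five messages
    _, msg_data = mail.fetch(num, "(RFC822)")
    msg = email.message_from_bytes(msg_data[0][1])
    print(msg["from"], "-", msg["subject"])
mail.logout()
```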
72,480,454 | 2022-6-2 | https://stackoverflow.com/questions/72480454/sending-email-with-python-google-disables-less-secure-apps | I am trying to send email using python. My code was working fine before Google disabled 'less secure apps'. My email address and password are both correct. server = smtplib.SMTP_SSL("smtp.gmail.com", 465) serverEmail = "EMAILADDRESS" serverPw = "QWERTY" server.login(serverEmail, serverPw) subject = "Rejection" body = "Hi! You've been unfortunately declined access to our system." message = f'Subject: {subject}\n\n{body}' server.sendmail("EMAILADDRESS", doctorEmail['email'], message) server.quit() I get this error now: smtplib.SMTPAuthenticationError: (535, b'5.7.8 Username and Password not accepted. I get this error when i use server.starttls(): smtplib.SMTPNotSupportedError: STARTTLS extension not supported by server. | This is working for me. You need to generate an app password for this. See https://support.google.com/accounts/answer/185833?hl=en import smtplib as smtp connection = smtp.SMTP_SSL('smtp.gmail.com', 465) email_addr = '[email protected]' email_passwd = 'app_password_generated_in_Google_Account_Settings' connection.login(email_addr, email_passwd) connection.sendmail(from_addr=email_addr, to_addrs='[email protected]', msg="Sent from my IDE. Hehe") connection.close() For some reason, all of my emails are ending up in SPAM folder of the recipient account though. | 8 | 6 |
72,479,262 | 2022-6-2 | https://stackoverflow.com/questions/72479262/shap-python-model-type-not-yet-supported-by-treeexplainer-class-sklearn-ensemb | I tried to use Shap (Tree Explainer) for sklearn.ensemble._stacking.StackingClassifier explainer = shap.TreeExplainer(clf) shap_values = explainer.shap_values(x) shap.initjs() return shap.force_plot(explainer.expected_value[1], shap_values[1], x) But I got an error: Model type not yet supported by TreeExplainer: <class 'sklearn.ensemble._stacking.StackingClassifier'> How can I use shap force_plot for sklearn StackingClassifier? Thank you. | TreeExplainer only works on tree-based models themselves, not on pipelines or metamodels that end with a tree-based model. If you want interpretability in terms of your original features, you will need to use the base Explainer class (or equivalently, the KernelExplainer class). Unfortunately, this will be approximate and more computationally expensive. | 4 | 5 |
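A minimal sketch of the KernelExplainer route the answer points to, reusing `clf` (the fitted StackingClassifier) and `x` from the question. The background summary size and the subsampling are arbitrary choices here, and with recent shap releases the shape of the returned values can differ, so treat this as a pattern rather than the exact call sequence.

```python
import shap

# KernelExplainer is model-agnostic: it only needs a prediction function
background = shap.sample(x, 100)                  # small background sample keeps it tractable
explainer = shap.KernelExplainer(clf.predict_proba, background)

x_small = x[:50]                                  # KernelExplainer is slow, so explain a subset
shap_values = explainer.shap_values(x_small)      # list with one array per class (older shap versions)

shap.initjs()
shap.force_plot(explainer.expected_value[1], shap_values[1], x_small)
```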
72,476,514 | 2022-6-2 | https://stackoverflow.com/questions/72476514/python-aiohttp-returns-a-different-reponse-than-python-requests-i-need-help-und | after the whole evening yesterday and morning today i really an help for understanding why, aiohttp request returns differently than requests request. import requests reqUrl = "https://api-mainnet.magiceden.io/all_collections_with_escrow_data" headersList = { "Accept": "*/*", " User-Agent" : " Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.61 Safari/537.36" } payload = "" response = requests.request("GET", reqUrl, data=payload, headers=headersList) print(response.text) returns whole content {"collections":[{"symbol".... import aiohttp import asyncio headersList = { 'authority': 'api-mainnet.magiceden.io', 'Accept': 'application/json, text/plain, */*', 'accept-language': 'en-US,en;q=0.9', 'origin': 'https://magiceden.io', 'referer': 'https://magiceden.io/', 'sec-fetch-dest': 'empty', 'sec-fetch-mode': 'cors', 'sec-fetch-site': 'same-site', 'sec-gpc': '1', 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.61 Safari/537.36', } payload = "" async def test(): async with aiohttp.ClientSession() as session: get_image_data = await session.get( 'https://api-mainnet.magiceden.io/all_collections_with_escrow_data', headers=headersList) result = get_image_data.text print(result) if __name__ == '__main__': asyncio.run(test()) returns: <bound method ClientResponse.text of <ClientResponse(https://api-mainnet.magiceden.io/all_collections_with_escrow_data) [200 OK]> <CIMultiDictProxy('Date': 'Thu, 02 Jun 2022 12:43:22 GMT', 'Content-Type': 'application/json; charset=utf-8', 'Transfer-Encoding': 'chunked', 'Connection': 'keep-alive', 'X-Powered-By': 'Express', 'X-RateLimit-Limit': '120', 'X-RateLimit-Remaining': '119', 'X-RateLimit-Reset': '1654173863', 'Access-Control-Allow-Origin': 'https://magiceden.io', 'Vary': 'Origin', 'Access-Control-Allow-Credentials': 'true', 'Cache-Control': 'public, max-age=300, s-maxage=300', 'CDN-Cache-Control': 'public, max-age=300, s-maxage=300', 'Etag': 'W/"4f27d7-Cndhwdfejd0aSIGFdSQriuQfbvE"', 'Set-Cookie': 'connect.sid=s%3AcUUsXzow-3-5kuLPJcNNndd5zVxtCIvc.ggQdFm%2FooB%2FpWho%2FqYiVWJQa4vCtQ9VZGRisUqFXigw; Domain=magiceden.io; Path=/; Expires=Thu, 02 Jun 2022 12:53:22 GMT; HttpOnly', 'X-Kong-Upstream-Latency': '242', 'X-Kong-Proxy-Latency': '0', 'Via': 'kong/2.7.2', 'CF-Cache-Status': 'DYNAMIC', 'Expect-CT': 'max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"', 'Set-Cookie': '__cf_bm=i5THtMhPjqGPXy8zFyXSP43DLHQESpOLbWff7n_W6qE-1654173802-0-AQkuHSP7Sv+YRRSr1wmUBDKb5EOjcAiPVXyx7lvqe0NF2wmLHcFK9JFLPVkiuTpMOjp/wpiMpQU377nAimriGP0=; path=/; expires=Thu, 02-Jun-22 13:13:22 GMT; domain=.magiceden.io; HttpOnly; Secure; SameSite=None', 'Server': 'cloudflare', 'CF-RAY': '715046367a9b0219-ZRH', 'Content-Encoding': 'gzip', 'alt-svc': 'h3=":443"; ma=86400, h3-29=":443"; ma=86400')> > can someone help me understanding why? Thanks a lot guys... | The text is actually in a StreamReader object in the content attribute get_image_data = await session.get('https://api-mainnet.magiceden.io/all_collections_with_escrow_data', headers=headersList) stream = get_image_data.content data = await stream.read() print(data) | 4 | 2 |
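Complementary to the StreamReader route above: the reason the aiohttp snippet printed a bound method is that ClientResponse.text is a coroutine method, so it has to be called and awaited. A minimal sketch of the same request with `await resp.text()` (or `resp.json()`), reusing the `headersList` defined in the question:

```python
import asyncio
import aiohttp

async def test():
    async with aiohttp.ClientSession() as session:
        async with session.get(
            "https://api-mainnet.magiceden.io/all_collections_with_escrow_data",
            headers=headersList,
        ) as resp:
            text = await resp.text()    # decoded body as a string
            # data = await resp.json()  # or parse the JSON directly
            print(text[:200])

if __name__ == "__main__":
    asyncio.run(test())
```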
72,472,526 | 2022-6-2 | https://stackoverflow.com/questions/72472526/how-to-filter-out-nan-by-pydantic | How to filter out NaN in pytdantic float validation? from pydantic import BaseModel class MySchema(BaseModel): float_value: float | You can use confloat and set either the higher limit to infinity or the lower limit to minus infinity. As all numeric comparisons with NaN return False, that will make pydantic reject NaN, while leaving all other behaviour identical (including parsing, conversion from int to float, ...). from pydantic import BaseModel, confloat class MySchema(BaseModel): float_value: confloat(ge=-float('inf')) # or: # float_value: confloat(le=float('inf')) Note: you could additionally exclude infinity values by using the gt and lt arguments of confloat instead of ge and le. Testing: m = MySchema(float_value=float('nan')) Output: pydantic.error_wrappers.ValidationError: 1 validation error for MySchema float_value ensure this value is greater than or equal to -inf (type=value_error.number.not_ge; limit_value=-inf) | 6 | 4 |
72,473,515 | 2022-6-2 | https://stackoverflow.com/questions/72473515/why-python-code-generation-from-proto-is-not-generating-classes | I'm currently trying to generate python code from a proto file. My proto file looks like this: syntax = "proto3"; package display; message Hello { uint32 version = 1; uint32 value = 2; int32 id = 3; } I used this protoc command to generate the python code: protoc -I="." --python_out="." test.proto And here is the resulting python file: # -*- coding: utf-8 -*- # Generated by the protocol buffer compiler. DO NOT EDIT! # source: test.proto """Generated protocol buffer code.""" from google.protobuf.internal import builder as _builder from google.protobuf import descriptor as _descriptor from google.protobuf import descriptor_pool as _descriptor_pool from google.protobuf import symbol_database as _symbol_database # @@protoc_insertion_point(imports) _sym_db = _symbol_database.Default() DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\ntest.proto\x12\x07\x64isplay\"3\n\x05Hello\x12\x0f\n\x07version\x18\x01 \x01(\r\x12\r\n\x05value\x18\x02 \x01(\r\x12\n\n\x02id\x18\x03 \x01(\x05\x62\x06proto3') _builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals()) _builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'test_pb2', globals()) if _descriptor._USE_C_DESCRIPTORS == False: DESCRIPTOR._options = None _HELLO._serialized_start=23 _HELLO._serialized_end=74 # @@protoc_insertion_point(module_scope) It doesn't look at all like the documentation from Google on this page. Why isn't the metaclass generated? I'm using Python 3.9 with the latest version of the protobuf package and last version of protoc. | add --grpc_python_out="." to the protoc command. this will generate an additional script with the required classes | 6 | 5 |
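Worth noting alongside the answer above: the builder-based output shown in the question still creates the message classes, it just does so at import time instead of writing an explicit class body into the file. A small hedged sketch showing that the generated module can be used as before (assuming the generated test_pb2.py is importable):

```python
import test_pb2

# Hello is built by the _builder calls when the module is imported
msg = test_pb2.Hello(version=1, value=2, id=3)
data = msg.SerializeToString()
print(test_pb2.Hello.FromString(data))
```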
72,457,638 | 2022-6-1 | https://stackoverflow.com/questions/72457638/why-isnt-mypy-seeing-types-from-typeshed | I'm trying to add more mypy type annotations to my existing codebase. I have a file that uses a lot of bs4. When I run the mypy checker on this file I get the error: error: Skipping analyzing "bs4": module is installed, but missing library stubs or py.typed marker In the mypy docs on "Missing library stubs" it says: Mypy will not try inferring the types of any 3rd party libraries you have installed unless they either have declared themselves to be PEP 561 compliant stub package (e.g. with a py.typed file) or have registered themselves on typeshed, the repository of types for the standard library and some 3rd party libraries. But when I look on typeshed, there is a set of stubs. How do I fix that error and tell it to look at the stub definition from typeshed? BS4 and mypy are both running the latest version. This question asks a similar question about black, but when I test black in my code mypy doesn't complain but it seems that they've solved it in a different way as there's no black stub in typeshed. | Here is the content of mypy typeshed. It doesn't include third-party libraries, limited only to standard library. So types for beautifulsoup are not shipped with mypy (and it would be really odd to do so, because these stubs are updated sometimes more often than mypy itself, plus other typecheckers rely on this typeshed). So you need to install required stubs separately (package home page): pip install types-beautifulsoup4 | 5 | 1 |
72,455,487 | 2022-6-1 | https://stackoverflow.com/questions/72455487/activating-venv-and-conda-environment-at-the-same-time | I am a beginner and was "playing around" with environments a bit. I came across a situation where it seemed that I had two environments activated: I create a directory, create an environment with venv, activate it and then also conda activate a conda environment which I created before. These are the commands: mkdir dummie_directory cd dummie_directory python -m venv . Scripts\activate conda activate old_env After this the beginning of my command line looks like this: (old_env)(dummie_directory) C:\Users\.... Does this mean that both environments are active? Is there any intended use for this or will it most likely lead to some kind of clash/conflict between the installed packages? Thanks | No, it does not mean they are both activated. Only one can have priority in the PATH, which is what I’d consider the simplest definition of what “activated” means, functionally. The indicators in the PS1 string (i.e., the shell’s prompt string) are not robustly managed. The two environment managers are simply unaware of each other, and the string is only manipulated when an activate or deactivate procedure is called. There isn’t any dynamic monitoring that a particular environment is remaining active. I wouldn’t rely on any behavior you observe in this state. It does not have a defined specification and is not intended to be used like this. | 10 | 9 |
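Since PATH priority is what decides which environment "wins" after stacking the two activations, a quick diagnostic sketch like the following can make the situation concrete:

```python
import shutil
import sys

print(sys.executable)           # interpreter running this script
print(shutil.which("python"))   # first `python` found on PATH right now
```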
72,460,444 | 2022-6-1 | https://stackoverflow.com/questions/72460444/fastest-way-to-iterate-through-multiple-2d-numpy-arrays-with-numba | When using numba and accessing elements in multiple 2d numpy arrays, is it better to use the index or to iterate the arrays directly, because I'm finding that a combination of the two is the fastest which seems counterintuitive to me? Or is there another better way to do it? For context, I am trying to speed up the implementation of the raytracing approach in this paper https://iopscience.iop.org/article/10.1088/1361-6560/ac1f38/pdf. I have a function which takes the intensity before propagation and the displacement maps that result from the propagation. The resulting intensity is then the original intensity displaced by the displacement maps pixel by pixel with sub-pixel displacements being proportionately shared between the respective adjacent pixels. On a side note, can this be implemented directly in numpy or in another library, as I've noticed it is similar to opencv's remap function. import numpy as np from numba import njit @njit def raytrace_range(intensity_0, d_y, d_x): """ Args: intensity_0 (2d numpy array): intensity before propagation d_y (2d numpy array): Displacement along y in pixels d_x (2d numpy array): Displacement along x in pixels Returns: intensity_z (2d numpy array): intensity after propagation """ n_y, n_x = intensity_0.shape intensity_z = np.zeros((n_y, n_x), dtype=np.float64) for i in range(n_x): for j in range(n_y): i_ij = intensity_0[i, j] dx_ij=d_x[i,j] dy_ij=d_y[i,j] # Always the same from here down if not dx_ij and not dy_ij: intensity_z[i,j]+=i_ij continue i_new=i j_new=j #Calculating displacement bigger than a pixel if np.abs(dx_ij)>1: x = np.floor(dx_ij) i_new=int(i+x) dx_ij=dx_ij-x if np.abs(dy_ij)>1: y = np.floor(dy_ij) j_new=int(j+y) dy_ij=dy_ij-y # Calculating sub-pixel displacement if 0<=i_new and i_new<n_y and 0<=j_new and j_new<n_x: intensity_z[i_new,j_new]+=i_ij*(1-np.abs(dx_ij))*(1-np.abs(dy_ij)) if i_new<n_y-1 and dx_ij>=0: if j_new<n_y-1 and dy_ij>=0: intensity_z[i_new+1, j_new]+=i_ij*dx_ij*(1-dy_ij) intensity_z[i_new+1, j_new+1]+=i_ij*dx_ij*dy_ij intensity_z[i_new, j_new+1]+=i_ij*(1-dx_ij)*dy_ij if j_new and dy_ij<0: intensity_z[i_new+1,j_new]+=i_ij*dx_ij*(1-np.abs(dy_ij)) intensity_z[i_new+1,j_new-1]+=i_ij*dx_ij*np.abs(dy_ij) intensity_z[i_new,j_new-1]+=i_ij*(1-dx_ij)*np.abs(dy_ij) if i_new and dx_ij<0: if j_new<n_x-1 and dy_ij>=0: intensity_z[i_new-1,j_new]+=i_ij*np.abs(dx_ij)*(1-dy_ij) intensity_z[i_new-1,j_new+1]+=i_ij*np.abs(dx_ij)*dy_ij intensity_z[i_new,j_new+1]+=i_ij*(1-np.abs(dx_ij))*dy_ij if j_new and dy_ij<0: intensity_z[i_new-1,j_new]+=i_ij*np.abs(dx_ij)*(1-np.abs(dy_ij)) intensity_z[i_new-1,j_new-1]+=i_ij*dx_ij*dy_ij intensity_z[i_new,j_new-1]+=i_ij*(1-np.abs(dx_ij))*np.abs(dy_ij) return intensity_z I've tried a few other approaches of which this is the fastest (includes the code from above after the comment # Always the same from here down which I've omitted to keep the question relatively short): @njit def raytrace_enumerate(intensity_0, d_y, d_x): n_y, n_x = intensity_0.shape intensity_z = np.zeros((n_y, n_x), dtype=np.float64) for i, i_i in enumerate(intensity_0): for j, i_ij in enumerate(i_i): dx_ij=d_x[i,j] dy_ij=d_y[i,j] @njit def raytrace_npndenumerate(intensity_0, d_y, d_x): n_y, n_x = intensity_0.shape intensity_z = np.zeros((n_y, n_x), dtype=np.float64) for (i, j), i_ij in np.ndenumerate(intensity_0): dx_ij=d_x[i,j] dy_ij=d_y[i,j] @njit def raytrace_zip(intensity_0, d_y, d_x): n_y, n_x 
= intensity_0.shape intensity_z = np.zeros((n_y, n_x), dtype=np.float64) for i, (i_i, dy_i, dx_i) in enumerate(zip(intensity_0, d_y, d_x)): for j, (i_ij, dy_ij, dx_ij) in enumerate(zip(i_i, dy_i, dx_i)): @njit def raytrace_stack1(idydx): n_y, _, n_x = idydx.shape intensity_z = np.zeros((n_y, n_x), dtype=np.float64) for i, (i_i, dy_i, dx_i) in enumerate(idydx): for j, (i_ij, dy_ij, dx_ij) in enumerate(zip(i_i, dy_i, dx_i)): @njit def raytrace_stack2(idydx): n_y, n_x, _ = idydx.shape intensity_z = np.zeros((n_y, n_x), dtype=np.float64) for i, k in enumerate(idydx): for j, (i_ij, dy_ij, dx_ij) in enumerate(k): Make up some test data and time: import timeit rng = np.random.default_rng() size = (2010, 2000) margin = 10 test_data = np.pad(10000*rng.random(size=size), margin) dx = np.pad(10*(rng.random(size=size)-0.5), margin) dy = np.pad(10*(rng.random(size=size)-0.5), margin) # Check results are the same L = [raytrace_range(test_data, dy, dx), raytrace_enumerate(test_data, dy, dx), raytrace_npndenumerate(test_data, dy, dx), raytrace_zip(test_data, dy, dx), raytrace_stack1(np.stack([test_data, dy, dx], axis=1)), raytrace_stack2(np.stack([test_data, dy, dx], axis=2))] print((np.diff(np.vstack(L).reshape(len(L),-1),axis=0)==0).all()) %timeit raytrace_range(test_data, dy, dx) %timeit raytrace_enumerate(test_data, dy, dx) %timeit raytrace_npndenumerate(test_data, dy, dx) %timeit raytrace_zip(test_data, dy, dx) %timeit raytrace_stack1(np.stack([test_data, dy, dx], axis=1)) #Note this would be the fastest if the arrays were pre-stacked %timeit raytrace_stack2(np.stack([test_data, dy, dx], axis=2)) Output: True 40.4 ms ± 233 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) 37.5 ms ± 117 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) 46.8 ms ± 112 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) 38.6 ms ± 243 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) 42 ms ± 234 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) #Note this would be the fastest if the arrays were pre-stacked 47.4 ms ± 203 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) | In general, the fastest way to iterate over an array is a basic low-level integer iterator. Such a pattern cause the minimum number of transformation in Numba so the compiler should be able to optimize the code pretty well. Functions likes zip and enumerate often add an additional overhead due to indirect code transformations that are not perfectly optimized out. Here is a basic example: import numba as nb @nb.njit('(int_[::1],)') def test(arr): s1 = s2 = 0 for i in range(arr.shape[0]): s1 += i s2 += arr[i] return (s1, s2) arr = np.arange(200_000) test(arr) However, things are more complex when you read/write to multiple arrays simultaneously (which is your case). Indeed, Numpy array can be indexed with negative indices so Numba need to perform bound checking every time. This check is expensive compared to the actual access and it can even break some other optimizations (eg. vectorization). Consequently, Numba has been optimized so to analyse the code and detect cases where bound checking is not needed and prevent adding expensive checks at runtime. This is the case in the above code but not in your raytrace_range function. enumerate and enumerate+zip can help a lot to remove bound checking because Numba can easily prove that the index lies in the bound of the array (theoretically, it could prove this for raytrace_range but the current implementation is unfortunately not smart enough). 
You can mostly solve this problem using assertions. It is not only good for optimization but also to make your code more robust! Moreover, the indexing of multidimensional arrays is sometimes not perfectly optimized by the underlying JIT (LLVM-Lite). There is no reason for them not to be optimized but compiler use heuristics to optimize the code that are far from being perfect (though pretty good in average). You can help by computing views of lines. This generally result in a tiny improvement though. Here is the improved code: @njit def raytrace_range_opt(intensity_0, d_y, d_x): n_y, n_x = intensity_0.shape assert intensity_0.shape == d_y.shape assert intensity_0.shape == d_x.shape intensity_z = np.zeros((n_y, n_x), dtype=np.float64) for i in range(n_x): row_intensity_0 = intensity_0[i, :] row_d_x = d_x[i, :] row_d_y = d_y[i, :] for j in range(n_y): assert j >= 0 # Crazy optimization (see later) i_ij = row_intensity_0[j] dx_ij = row_d_x[j] dy_ij = row_d_y[j] # Always the same from here down if not dx_ij and not dy_ij: row_intensity_0[j] += i_ij continue # Remaining code left unmodified Notes Note that I think the indexing of the function raytrace_enumerate is bogus: It should be for i in range(n_y): for j in range(n_x): instead since the access are done with intensity_0[i, j] and you wrote n_y, n_x = intensity_0.shape. Note that swaping the axis also gives correct results based on your validation function (which is suspicious). The assert j >= 0 instruction alone results in a 8% speed up which is crazy since the integer iterator j is guaranteed to be positive if the n_x is positive which is always the case since it is a shape! This is clearly a missed optimization of Numba that LLVM-Lite cannot optimize (since LLVM-Lite does not know what is a shape and that they are always positive too). This apparent missing assumption in the Numba code causes additional bound checking (of each of the three arrays) that are pretty expensive. Benchmark Here are results on my machine: raytrace_range: 47.8 ms ± 265 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) raytrace_enumerate: 38.9 ms ± 208 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) raytrace_npndenumerate: 54.1 ms ± 363 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) raytrace_zip: 41 ms ± 657 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) raytrace_stack1: 86.7 ms ± 268 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) raytrace_stack2: 84 ms ± 432 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) raytrace_range_opt: 38.6 ms ± 421 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) As you can see raytrace_range_opt is the fastest implementation on my machine. | 7 | 2 |
72,461,109 | 2022-6-1 | https://stackoverflow.com/questions/72461109/sqlalchemy-does-not-insert-default-values | I have workspacestable workspaces_table = Table( "workspaces", metadata_obj, Column("id", UUID(as_uuid=False), primary_key=True, default=uuid.uuid4), Column("name", JSONB(), nullable=False), Column("created_at", TIMESTAMP(timezone=False), default=datetime.datetime.now(), nullable=False), Column("updated_at", TIMESTAMP(timezone=False), default=datetime.datetime.now(), nullable=False), Column("created_by", UUID(as_uuid=False), ForeignKey('users.id'), nullable=False), Column("updated_by", UUID(as_uuid=False), ForeignKey('users.id'), nullable=False), Column("email", Text(), nullable=False) ) In this table columns created_at and updated_at have default value datetime.datetime.now() But when I try to insert row in this table like await conn.execute(text( f""" WITH workspace_create AS ( INSERT INTO workspaces(id, name, created_by, updated_by, email) VALUES (:workspace_id, :workspace_name, :user_id, :user_id, :workspace_email) ), workspace_roles_create AS ( INSERT INTO workspace_roles(id, name, export_data, users, settings, projects, roles, system_name, workspace_id) VALUES {sql_query_workspace_roles_values} ) INSERT INTO m2m_users_to_workspace_or_project_roles(user_id, role_id, role_type, user_status) VALUES(:user_id, :superuser_id, '{RoleTypes.Workspace.name}', '{UserStatuses.Active.name}') """ ), params ) I get following error: null value in column "created_at" of relation "workspaces" violates not-null constraint DETAIL: Failing row contains (dd31dfb6-6d22-4794-b804-631e60b6e063, [{"locale": "ru", "text_value": "ru_team_1"}], null, null, 481b7a55-52b7-48f2-89ea-4ae0673d4ab6, 481b7a55-52b7-48f2-89ea-4ae0673d4ab6, [email protected]). I see that row contains null instead default value in created_at updated_at columns. How can I insert default values automatically? | Column(…, default=…) is a client-side default value that is used by SQLAlchemy Core (and SQLAlchemy ORM) when you do something like workspaces_table.insert(). Note that if SQLAlchemy creates the table then that column does not have a server-side DEFAULT: workspaces_table = Table( "workspaces", MetaData(), Column("id", Integer(), primary_key=True), Column("created_at", DateTime(), default=datetime.now()), ) engine.echo = False workspaces_table.drop(engine, checkfirst=True) engine.echo = True workspaces_table.create(engine) """ DDL emitted: CREATE TABLE workspaces ( id SERIAL NOT NULL, created_at TIMESTAMP WITHOUT TIME ZONE, PRIMARY KEY (id) ) """ Column(…, server_default=…) is what specifies the server-side DEFAULT which will be available to plain text INSERT statements like the one in your question: workspaces_table = Table( "workspaces", MetaData(), Column("id", Integer(), primary_key=True), Column("created_at", DateTime(), server_default=text("CURRENT_TIMESTAMP")), ) engine.echo = False workspaces_table.drop(engine, checkfirst=True) engine.echo = True workspaces_table.create(engine) """ DDL emitted: CREATE TABLE workspaces ( id SERIAL NOT NULL, created_at TIMESTAMP WITHOUT TIME ZONE DEFAULT CURRENT_TIMESTAMP, PRIMARY KEY (id) ) """ Note that changing the Table() definition from default= to server_default= will not update an existing table; you'll need to use ALTER TABLE for that. | 5 | 3 |
72,461,032 | 2022-6-1 | https://stackoverflow.com/questions/72461032/plot-a-route-in-a-map | I'm using python and I need to plot a route in a map. I have a dataframe that looks like this: latitude longitude 41.393095 -8.703483 41.393095 -8.703483 41.393095 -8.703483 41.392483 -8.703088 40.942170 -8.540572 40.942188 -8.540567 41.187624 -8.568143 41.321009 -8.711874 41.345618 -8.547567 The order of the dataframe represents the order of the route and I would like to plot it based on latitude and longitude. But i only find ways to plot it based on osm node IDs. Does anyone know a way of plotting this route with the exact geographic coordinates? Thanks! | Using This tutorial, I managed to plot the points on a map: # Create a bounding box to determine the size of the required map BBox = (df.longitude.min()-0.1, df.longitude.max()+0.1, df.latitude.min()-0.1, df.latitude.max()+0.1) # Downloaded using this tutorial: https://medium.com/@abuqassim115/thanks-for-your-response-frank-fb869824ede2 map_img = plt.imread('map.png') # Plotting the points on the graph fig, ax = plt.subplots(figsize=(8, 7)) ax.plot(df.longitude, df.latitude, 'xb-') # Setting limits for the plot ax.set_xlim(BBox[0], BBox[1]) ax.set_ylim(BBox[2], BBox[3]) # Showing the image behind the points ax.imshow(map_img, zorder=0, extent=BBox, aspect='equal') plt.show() | 4 | 7 |
72,400,478 | 2022-5-27 | https://stackoverflow.com/questions/72400478/with-python-oracledb-what-does-dpy-4027-no-configuration-directory-to-search-f | With the python-oracledb driver the code: import oracledb cs = "MYDB" c = oracledb.connect(user='cj', password=mypw, dsn=cs) gives the error: oracledb.exceptions.DatabaseError: DPY-4027: no configuration directory to search for tnsnames.ora The same error also occurs in a second case: import oracledb cs = "MYDB = (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=orclpdb1)))" c = oracledb.connect(user='cj', password=mypw, dsn=cs) and with this: import oracledb cs = "MYDB = (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=orclpdb1)))" cp = oracledb.ConnectParams() cp.parse_connect_string(cs) What does this mean? | This error means you used a connection string that python-oracledb took to be some kind of alias it needed to look up in a tnsnames.ora file, but it didn't know where to find that file. Database connection strings in python-oracledb can be one of: An Oracle Easy Connect string like myhost:1521/orclpdb1 An Oracle Net Connect Descriptor string like (DESCRIPTION=(ADDRESS=(...)) A Net Service Name alias mapping to a connect descriptor. These connect descriptors are commonly stored in a 'tnsnames.ora' file on the machine where you are running Python. They may also be accessed from an LDAP server. See the user documentation on connection strings If the connection string is an alias, or is not recognized as an Easy Connect string or Connect Descriptor, then you must have a tnsnames.ora configuration file that maps the alias to a connect descriptor. The tnsnames.ora file might look like: MYDB = (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=orclpdb1))) Case 1: import oracledb cs = "MYDB" c = oracledb.connect(user='cj', password=mypw, dsn=cs) To use this connection string, you need to tell python-oracledb where to find the tnsnames.ora file that contains the mapping from the alias MYDB to the connect descriptor that really tells Oracle where the DB is located. (See this answer for why). If you have a file /opt/myconfigdir/tnsnames.ora, then in python-oracledb's default 'Thin' mode you can do this: import oracledb cs = "MYDB" c = oracledb.connect(user='cj', password=mypw, dsn=cs, config_dir='/opt/myconfigdir') Note that even if ORACLE_HOME is set, Thin mode will not automatically read $ORACLE_HOME/network/admin/tnsnames.ora. You must explicitly tell python-oracledb (in Thin mode) where to read the file from. In the Thick mode (which is the mode when the app calls init_oracle_client()), if the tnsnames.ora file is not put in a default location, then you can tell python-oracledb where to find it like: import oracledb oracledb.init_oracle_client(config_dir='/opt/myconfigdir') cs = "MYDB" c = oracledb.connect(user='cj', password=mypw, dsn=cs) In both modes you can alternatively set the environment variable TNS_ADMIN to the directory containing the file, and then run Python. See the configuration file link above for more information. Case 2: import oracledb cs = "MYDB = (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=orclpdb1)))" c = oracledb.connect(user='cj', password=mypw, dsn=cs) This is a pure "typo". 
What was passed as the connection string is a string containing both a net service name alias and the connect descriptor, which is the syntax used in tnsnames.ora configuration files, not in applications themselves. Python-oracledb didn't understand this syntax and assumed you were trying to pass a net service name alias. It needed to look this up in a tnsnames.ora file, but failed to locate such a file. A solution is to pass only the connect descriptor component without the MYDB = part. For example like: import oracledb cs = "(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=orclpdb1)))" c = oracledb.connect(user='cj', password=mypw, dsn=cs) Or you can put the original, whole string in a tnsnames.ora file and then call: import oracledb cs = "MYDB" c = oracledb.connect(user='cj', password=mypw, dsn=cs) See the example above for where to locate the file. Another alternative is to use the Easy Connect syntax: import oracledb cs = "localhost:1521/orclpdb1" c = oracledb.connect(user='cj', password=mypw, dsn=cs) | 4 | 7 |
72,373,545 | 2022-5-25 | https://stackoverflow.com/questions/72373545/what-is-the-difference-between-torch-nn-functional-grid-sample-and-torch-nn-func | Let's say I have an image I want to downsample to half its resolution via either grid_sample or interpolate from the torch.nn.functional library. I select mode ='bilinear' for both cases. For grid_sample, I'd do the following: dh = torch.linspace(-1,1, h/2) dw = torch.linspace(-1,1, w/2) meshx, meshy = torch.meshgrid((dh,dw)) grid = torch.stack((meshy,meshx),2) grid = grid.unsqueeze(0) #add batch dim x_downsampled = torch.nn.functional.grid_sample(img, grid, mode='bilinear') For interpolate, I'd do: x_downsampled = torch.nn.functional.interpolate(img, size=(h/2,w/2), mode='bilinear') What do the two methods do differently? Which one is better for my example? | Interpolate is limited to scaling or resizing the input. Grid_sample is more flexible and can perform any form of warping of the input grid. The warping grid can specify the position where each element in the input will end up in the output. In simple terms, interpolate does not provide the ability to change the ordering of elements in the input grid (an element to the right of another element will still be to the right after interpolation). Grid_sample is capable of changing the order of the elements, and any arbitrary order can be achieved based on the warping grid that is passed to the function. A simple 2D illustration to show grid_sample in action: input = [[10, 20], [30, 40]] grid (contains coordinates) = [[(1,0),(0,0)], [(1,0),(0,0)]] output will be: [[20, 10], [30, 40]] For your example, while both can work, interpolate would be the preferred way to go. | 4 | 1 |
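A small runnable sketch contrasting the two calls; shapes follow the question's (N, C, H, W) convention, the tensor values and the final flip are purely illustrative, and align_corners is set explicitly since both functions require a choice there.

```python
import torch
import torch.nn.functional as F

img = torch.arange(16.0).reshape(1, 1, 4, 4)   # N, C, H, W

# interpolate: a pure resize, pixels can never change order
down_interp = F.interpolate(img, size=(2, 2), mode="bilinear", align_corners=True)

# grid_sample: the grid gives, for every output pixel, the (x, y) location in
# the input (in [-1, 1]) to sample from, so arbitrary warps are possible
ys = torch.linspace(-1, 1, 2)
xs = torch.linspace(-1, 1, 2)
grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
grid = torch.stack((grid_x, grid_y), dim=-1).unsqueeze(0)   # N, H_out, W_out, 2
down_sample = F.grid_sample(img, grid, mode="bilinear", align_corners=True)

# reversing the x coordinates mirrors the image horizontally –
# something interpolate cannot express
mirrored = F.grid_sample(img, grid.flip(dims=[2]), mode="bilinear", align_corners=True)

print(down_interp.squeeze())   # evenly spaced resize
print(down_sample.squeeze())   # the same resize expressed as a sampling grid
print(mirrored.squeeze())      # left/right flipped version
```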
72,434,896 | 2022-5-30 | https://stackoverflow.com/questions/72434896/jupyter-kernel-doesnt-use-poetry-environment | I'm trying to install a Jupyter kernel for my poetry environment, but it seems like the kernel gets my base conda environment. Here's what I'm trying: poetry env list >ENV_NAME-HASH-py3.9 (Activated) poetry run which python >/Users/myusername/Library/Caches/pypoetry/virtualenvs/ENV_NAME-HASH-py3.9/bin/python poetry run ipython kernel install --name=ENV_NAME >Installed kernelspec ENV_NAME in /Users/myusername/Library/Jupyter/kernels/ENV_NAME Then if I open a Jupyter with this kernel I don't get the libraries that should be installed. Checking the Python version I get: !which python /Users/myusername/opt/anaconda3/bin/python Any help appreaciated! | Bonjour, I come tardily but I think I have your solution. If you just want to use your currently configured Poetry environment in VSCode with your .ipynb notebooks, you can do this for example: Create a new project with Poetry: poetry new test_ipynb cd test_ipynb Use a specific Python version with your environment: poetry env use 3.12 Add ipykernel to your environment: poetry add ipykernel Check your environment's executable path: poetry env info Example output: /home/guillaume/.cache/pypoetry/virtualenvs/test-ipynb-pH8Jwd_U-py3.12/bin/python Open your project in VSCode: code . Then, in VSCode: Install Jupyter extension Need to reload window after Ctrl + Shift + p Reload Window In the top right, click on the Select Kernel. Choose "Python Environments". And select the environment you got from the env info command. This should allow you to use your Poetry-managed virtual environment directly within VSCode for Jupyter notebooks. Additionally, if you want to use Jupyter or JupyterLab, just follow the steps above and add: poetry add jupyter poetry run jupyter notebook # For JupyterLab poetry add jupyterlab poetry run jupyter lab This ensures that your development environment is consistent across different tools and platforms, leveraging Poetry's dependency management and virtual environment capabilities directly within VSCode for Jupyter notebooks. | 11 | 6 |
72,439,540 | 2022-5-30 | https://stackoverflow.com/questions/72439540/change-the-number-of-resources-during-simulation-in-simpy | I am currently working on a model which simulates a delivery process throughout an operating day in simpy. The deliveries are conducted by riders which are delivering one package at a time. The packages need to be delivered within a certain time window. In addition, the system deals with fluctuating demand throughout the day. Therefore, I would like to adapt the number of employees available to the system matching the fluctuating demand on an hourly basis. I modelled the riders as a resource with a certain capacity. Is there any possibility to adjust the capacity of the resource during a simulation run or are there other ways to model the riders with my system? I already looked for possible solution within the simpy documentation, examples or other posts. However, I was not successful yet. Thus, for any advice or possible solutions I would be very grateful! Thank you in advance. | Use a store instead of a resource. Resources has a fix number of resources. Stores works a bit like a bit more like a queue backed with a list with a optional max capacity. To reduce then number in the store, just stop putting the object back into the store. So is a example I wrapped a store with a class to manage the number of riders. """ Simple demo of a pool of riders delivering packages where the number of riders can change over time Idel riders are keep in a store that is wrapped in a class to manage the number of riders Programmer Michael R. Gibbs """ import simpy import random class Rider(): """ quick class to track the riders that deliver packages """ # tracks the next id to be assigned to a rider next_id = 1 def __init__(self): self.id = Rider.next_id Rider.next_id += 1 class Pack(): """ quick class to track the packages """ # tracks the next id to be assigned to a pack next_id = 1 def __init__(self): self.id = Pack.next_id Pack.next_id += 1 class RiderPool(): """ Pool of riders where the number of riders can be changed """ def __init__(self, env, start_riders=10): self.env = env # tracks the number of riders we need self.target_cnt = start_riders # tracks the number of riders we have self.curr_cnt = start_riders # the store idle riders self.riders = simpy.Store(env) # stores do not start with objects like resource pools do. 
# need to add riders yourself as part of set up self.riders.items = [Rider() for _ in range(start_riders)] def add_rider(self): """ Add a rider to the pool """ self.target_cnt += 1 if self.curr_cnt < self.target_cnt: # need to add a rider to the pool to get to the target rider = Rider() self.riders.put(rider) self.curr_cnt += 1 print(f'{env.now:0.2f} rider {rider.id} added') else: # already have enough riders, # must have tried to reduce the rider pool while all riders were busy # In effect we are cancelling a previous remove rider call print(f'{env.now:0.2f} keeping rider scheduled to be removed instead of adding') def remove_rider(self): """ Remove a rider from the pool If all the riders are busy, the actual removal of a rider will happen when a that rider finishes it current task and is tried to be put/returned back into the pool """ self.target_cnt -= 1 if self.curr_cnt > self.target_cnt: if len(self.riders.items) > 0: # we have a idle rider that we can remove now rider = yield self.riders.get() self.curr_cnt -= 1 print(f'{env.now:0.2f} rider {rider.id} removed from store') else: # wait for a rider to be put back to the pool pass def get(self): """ Get a rider from the pool returns a get request that can be yield to, not a rider """ rider_req = self.riders.get() return rider_req def put(self, rider): """ put a rider pack into the pool """ if self.curr_cnt <= self.target_cnt: # still need the rider self.riders.put(rider) else: # have tool many riders, do not add back to pool self.curr_cnt -= 1 print(f'{env.now:0.2f} rider {rider.id} removed on return to pool') def gen_packs(env, riders): """ generates the arrival of packages to be delivered by riders """ while True: yield env.timeout(random.randint(1,4)) pack = Pack() env.process(ship_pack(env, pack, riders)) def ship_pack(env, pack, riders): """ The process of a rider delivering a packages """ print(f'{env.now:0.2f} pack {pack.id} getting rider') rider = yield riders.get() print(f'{env.now:0.2f} pack {pack.id} has rider {rider.id}') # trip time yield env.timeout(random.randint(5,22)) riders.put(rider) print(f'{env.now:0.2f} pack {pack.id} delivered') def rider_sched(env, riders): """ Changes the number of riders in rider pool over time """ yield env.timeout(30) # time to remove a few riders print(f'{env.now:0.2f} -- reducing riders') print(f'{env.now:0.2f} -- request queue len {len(riders.riders.get_queue)}') print(f'{env.now:0.2f} -- rider store len {len(riders.riders.items)}') for _ in range(5): env.process(riders.remove_rider()) yield env.timeout(60) # time to add back some riders print(f'{env.now:0.2f} -- adding riders ') for _ in range(2): riders.add_rider() # run the model env = simpy.Environment() riders = RiderPool(env, 10) env.process(gen_packs(env, riders)) env.process(rider_sched(env, riders)) env.run(100) print(f'{env.now:0.2f} -- end rider count {riders.target_cnt}') | 4 | 5 |
72,436,467 | 2022-5-30 | https://stackoverflow.com/questions/72436467/how-to-provide-type-hinting-to-userdict | I want to define a UserDict that reads values from JSON and stores a position for a given key. The JSON file looks like this: { "pages": [ { "areas": [ { "name": "My_Name", "x": 179.95495495495493, "y": 117.92792792792793, "height": 15.315315315315303, "width": 125.58558558558553 }, ... ] } ] } I would like to indicate to type linters (e.g. MyPy) that this dictionary as a key being a string and the values being a Position. My current code is the following: import json from collections import UserDict from dataclasses import dataclass, field from pathlib import Path from typing import Dict, List, Optional, Union from typing_extensions import Literal JsonPosition = Dict[str, Union[str, float]] JsonPage = Optional[Dict[Literal["areas"], List[JsonPosition]]] @dataclass class Position: """Information for a position""" name: str x: float y: float width: float height: float @classmethod def from_json(cls, dict_values: JsonPosition): return cls(**dict_values) # type: ignore # dynamic typing class Page(UserDict): """Information about positions on a page""" @classmethod def from_json(cls, page: JsonPage): """Get positions from JSON Dictionary""" if page is None: return cls() return cls({cast(str, p["name"]): Position.from_json(p) for p in page["areas"]}) JSON = Path("my_positions.json").read_text() positions = json.loads(JSON) page_1 = Page.from_json(positions["pages"][0]) I would like MyPy (or Pylance or whatever type hinter I use), to automatically recognize page_1["My_Name"] as being a Position. What could I change? | Actually, you can directly provide the type to UserDict with square brackets ([...]) like you would with a Dict: class Page(UserDict[str, Position]): ... For Python 3.6 or earlier, this will not work. For Python >=3.7 and <3.9, you need the following to subscript collections.UserDict and put it in a separate block specific to type checking (with constant TYPE_CHECKING): from __future__ import annotations from collections import UserDict from typing import TYPE_CHECKING if TYPE_CHECKING: TypedUserDict = UserDict[str, Position] else: TypedUserDict = UserDict class Page(TypedUserDict): ... For Python 3.9+, no additional import or trick is necessary. | 3 | 8 |
72,449,482 | 2022-5-31 | https://stackoverflow.com/questions/72449482/f-string-representation-different-than-str | I had always thought that f-strings invoked the __str__ method. That is, f'{x}' was always the same as str(x). However, with this class class Thing(enum.IntEnum): A = 0 f'{Thing.A}' is '0' while str(Thing.A) is 'Thing.A'. This example doesn't work if I use enum.Enum as the base class. What functionality do f-strings invoke? | From "Formatted string literals" in the Python reference: f-strings invoke the "format() protocol", meaning that the __format__ magic method is called instead of __str__. class Foo: def __repr__(self): return "Foo()" def __str__(self): return "A wild Foo" def __format__(self, format_spec): if not format_spec: return "A formatted Foo" return f"A formatted Foo, but also {format_spec}!" >>> foo = Foo() >>> repr(foo) 'Foo()' >>> str(foo) 'A wild Foo' >>> format(foo) 'A formatted Foo' >>> f"{foo}" 'A formatted Foo' >>> format(foo, "Bar") 'A formatted Foo, but also Bar!' >>> f"{foo:Bar}" 'A formatted Foo, but also Bar!' If you don't want __format__ to be called, you can specify !s (for str), !r (for repr) or !a (for ascii) after the expression: >>> foo = Foo() >>> f"{foo}" 'A formatted Foo' >>> f"{foo!s}" 'A wild Foo' >>> f"{foo!r}" 'Foo()' This is occasionally useful with strings: >>> key = 'something\n nasty!' >>> error_message = f"Key not found: {key!r}" >>> error_message "Key not found: 'something\\n nasty!'" | 74 | 105 |
72,454,359 | 2022-5-31 | https://stackoverflow.com/questions/72454359/python-uniswap-subgraph-constant-product-formula | I'm trying to calculate the price impact on trades and am getting strange results. I am using uniswap v2 subgraph to get current data for WETH/USDC. def loadUni2(): query = """ { pairs ( first: 10 orderBy: volumeUSD orderDirection:desc ){ id reserve0 token0Price token0 { id symbol decimals } token1Price reserve1 token1{ id symbol decimals } } } """ I then save the results of this query into individual variables and do the same math for the "constant product formula" that uniswap says it uses for its pools pair = pairs[0] #sort dataframe by lowest price low = pair.sort_values(by='token0Price', ascending=True) quoteReserve = low['reserve0'].values[0] #USDC Tokens in pair verified by checking info.uniswap.org baseReserve = low['reserve1'].values[0] #WETH tokens in pair verified by checking info.uniswap.org token0Price = low['token0Price'].values[0] token1Price = low['token1Price'].values[0] #Buy Low amount = 1 # purchase amount in USD constant = quoteReserve * baseReserve newReserve = (quoteReserve + amount) output = constant / newReserve wethPurchaseAmount = baseReserve - output pricePerToken = amount / wethPurchaseAmount if (newReserve * output) == constant: check = True print(f'Token0Price before trade : {token0Price}') print(f'Token1Price before trade: {token1Price}') print(f'Quote Reserves: {quoteReserve}') print(f'Base Reserves: {baseReserve}') print(f'Constant: {constant}') print(f'New Reserve: {newReserve}') print(f'Output: {output}') print(f'WETH Purchased Amount: {wethPurchaseAmount}') print(f'Price paid Per Token : {pricePerToken}') print(check) Since my amount is only $1 the price paid per token should match the token0Price. But my results look like: Token0Price : 1942.4506384054528 Token1Price: 0.0005148135969215 Quote Reserves: 121784650.548786 Base Reserves: 105869.64875708237 Constant: 12893298177603.992 New Reserve: 121784651.548786 Output: 105869.64788776389 WETH Purchased Amount: 0.0008693184790899977 Price Per Token: 1150.3264040203076 True I'm either missing something or somehow my math is wrong? Any suggestions/ideas would be greatly appreciated. Here's the link for where I found an example of the Constant Product Formula Also, the only imports I have are 'requests' and 'pandas' Running it in a google collab notebook. I apologize in advance if this is hard to read, I'm completely new to this. | you are not including the 0.3% percent fee which affects the amount of return value. initially we have "x" and "y" amount so constant product is k = x * y There is an inverse relationship between x and y because increase in one will lead to decrease on other this is our graph for this equation If you add some "x" token, the amount of "y" token will decrease, but "k" is constant in everwhere on the graph k = (x+dx) * (y-dy) xy = (x+dx) * (y-dy) If you multiply right side parantheses: xy = xy - xdy + ydx -dydx xy cancels out 0=ydx - xdy - dydx I am looking for dy so xdy + dydx = ydx dy(x+dx)=ydx leave dy alone dy = ydx / (x+dx) So far we have been swapping "x" token to receive "y" token and we are calculating how much "y" tokens will be decreased from the pool. When we swap "x" token we also have to pay 0.3% fee which is 0.3% "x" token. 
"dx" here amount that we are sending to swap, since 0.3% of this amount will be taken, we will be actually swapping (1 - 0.3%) dx (1 - (0.3/100)) dx (1 - (0.003))dx 0.997 dx this is total amount of "x" token that we are actually swapping. Our final formula will be dy = y *(0.997dx) / (x + 0.997dx) this is how you should calculate the weth purchase amount. | 4 | 2 |
72,394,322 | 2022-5-26 | https://stackoverflow.com/questions/72394322/how-to-use-sync-to-async-in-django-template | I am trying to make the Django tutorial codes polls into async with uvicorn async view. ORM query works with async view by wrapping in sync_to_async() as such. question = await sync_to_async(Question.objects.get, thread_sensitive=True)(pk=question_id) But I have no idea how to apply sync_to_async or thread inside Django templates. This code fails saying 'You cannot call this from an async context - use a thread or sync_to_async.' Or any other way to work around this? {% for choice in question.choice_set.all %} I use Python 3.10, Django 4.0.4 and uvicorn 0.17.6 | I have found several solutions for that: It's not safe. In your settings.py add: import os os.environ["DJANGO_ALLOW_ASYNC_UNSAFE"] = "true" Or get your Question object in separate function: @sync_to_async def get_question(question_id): return get_object_or_404(Question, pk=question_id) question = await get_question(pk=question_id) Or without splitting (works with Django 4.1+): question = await Question.objects.aget(id=question_id) | 5 | 1 |
72,401,377 | 2022-5-27 | https://stackoverflow.com/questions/72401377/error-could-not-build-wheels-for-pandas-which-is-required-to-install-pyproject | I'm trying to install pandas via pip install pandas on my laptop. Environment: Window 11 Pro Python 3.10.4 Pip version 22.0.4 Compatibility: Officially Python 3.8, 3.9 and 3.10. You must have pip>=19.3 to install from PyPI. C:\Users\PC>pip install pandas WARNING: Ignoring invalid distribution -ywin32 (c:\users\pc\appdata\local\programs\python\python310-32\lib\site-packages) WARNING: Ignoring invalid distribution -ywin32 (c:\users\pc\appdata\local\programs\python\python310-32\lib\site-packages) Collecting pandas Using cached pandas-1.4.2.tar.gz (4.9 MB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Requirement already satisfied: numpy>=1.21.0 in c:\users\pc\appdata\local\programs\python\python310-32\lib\site-packages (from pandas) (1.22.4) Requirement already satisfied: python-dateutil>=2.8.1 in c:\users\pc\appdata\local\programs\python\python310-32\lib\site-packages (from pandas) (2.8.2) Collecting pytz>=2020.1 Using cached pytz-2022.1-py2.py3-none-any.whl (503 kB) Requirement already satisfied: six>=1.5 in c:\users\pc\appdata\local\programs\python\python310-32\lib\site-packages (from python-dateutil>=2.8.1->pandas) (1.16.0) Building wheels for collected packages: pandas Building wheel for pandas (pyproject.toml) ... error error: subprocess-exited-with-error × Building wheel for pandas (pyproject.toml) did not run successfully. │ exit code: 1 ╰─> [2010 lines of output] C:\Users\PC\AppData\Local\Temp\pip-build-env-q3kdt5nb\overlay\Lib\site-packages\setuptools\config\setupcfg.py:459: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead. warnings.warn(msg, warning_class) ... note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for pandas Failed to build pandas ERROR: Could not build wheels for pandas, which is required to install pyproject.toml-based projects What I have tried: updated pip to 22.1.1 installed wheel 0.37.1 uninstalled and installed pip uninstalled and installed python 3.10.4 Error still reproducible with pandas 1.5.1 Thanks to @AKX which has pointed up that there is no and may will no 32-bit version of pandas in the future. See the discussion on GitHub. | Pandas doesn't require Anaconda to work, but based on python310-32 in your output, you're using a 32-bit build of Python. Pandas evidently does not ship 32-bit wheels for Python 3.10 (they do have win32 wheels for Python 3.8 and Python 3.9 though). (There could be alternate sources for pre-built 32-bit wheels, such as Gohlke's site.) In other words, on that platform you would need to install Pandas from source, which will likely be a rather difficult undertaking, and can't be done directly within pip anyway (as you noticed via error: metadata-generation-failed). If your system is capable of running 64-bit Python, you should switch to it. | 20 | 10 |
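A quick standard-library-only check (no third-party packages assumed) to confirm whether the interpreter in use is a 32-bit or 64-bit build, which is the root cause diagnosed in the answer above:

```python
import platform
import struct
import sys

print(sys.version)                    # full interpreter version string
print(platform.architecture()[0])     # '32bit' or '64bit'
print(struct.calcsize("P") * 8)       # pointer size in bits: 32 or 64
```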
72,381,940 | 2022-5-25 | https://stackoverflow.com/questions/72381940/permanently-change-the-default-python3-version-in-linux-ubuntu-on-windows | I'm using WSL2 with Ubuntu on Windows 11 v2004.2022.10 and I have both Python 3.8 and 3.9 installed. I want to make the 3.9 version the default, and I'm happy to remove Python 3.8 altogether if necessary. If I type python --version in Ubuntu, I get Python 3.8.10. I tried the following: sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.9 1 sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.8 0 and if I type sudo update-alternatives --config python I see There are 2 choices for the alternative python (providing /usr/bin/python). Selection Path Priority Status ------------------------------------------------------------ * 0 /usr/bin/python3.9 1 auto mode 1 /usr/bin/python3.8 0 manual mode 2 /usr/bin/python3.9 1 manual mode However, if I type again python3 --version, it still says Python 3.8.10 I then tried sudo update-alternatives --remove python /usr/bin/python3.8 and now sudo update-alternatives --config python tells me that There is only one alternative in link group python (providing /usr/bin/python): /usr/bin/python3.9 Nothing to configure. And yet, python3 --version still says Python 3.8.10 I also tried sudo update-alternatives --set python /usr/bin/python3.9 and that didn't work either. This works: alias python='/usr/bin/python3.9': now python3 --version is Python 3.9.5 - but only temporarily, as upon closing and reopening Ubuntu it reverts to Python 3.8.10. I then tried creating a permanent alias by adding that same line to my .bashrc script (I followed these steps), and the same thing happened. I'm new to all of this, so please be patient. How can I change the default Python 3.8 to the 3.9 version, and/or remove Python 3.8 altogether? I tried deleting the python3.8 directory but that didn't work. Perhaps it's because I still have python3.8-config, which I didn't manage to delete? Thanks! | You can always use sudo update-alternatives --config python3 and then select your python3 version. That should solve your issue without needing to do weird configs. | 4 | 2 |
72,442,087 | 2022-5-31 | https://stackoverflow.com/questions/72442087/cant-install-proj-8-0-0-for-cartopy-linux | I am trying to install Cartopy on Ubuntu and need to install proj v8.0.0 binaries for Cartopy. However when I try to apt-get install proj-bin I can only get proj v6.3.1. How do I install the latest (or at least v8.0.0) proj for cartopy? | I'm answering my own question here partly to help others with this problem, and partly as an archive for myself so I know how to fix this issue if I come across it again. I spent quite a while trying to figure it out, and wrote detailed instructions, so see below: Installing cartopy is a huge pain, and I've found using conda to be a very bad idea (it has bricked itself and python along with it multiple times for me) THIS INSTALLATION IS FOR LINUX. Step 0. Update apt: apt update Step 1. Install GEOS: Run the following command to install GEOS: apt-get install libgeos-dev In case that doesn't do it, install all files with this: apt-get install libgeos-dev libgeos++-dev libgeos-3.8.0 libgeos-c1v5 libgeos-doc Step 2. Install proj dependencies: Install cmake: apt install cmake Install sqlite3: apt install sqlite3 Install curl devlopment package: apt install curl && apt-get install libcurl4-openssl-dev Step 3. Install Proj Trying apt-get just in case it works: Unfortunately, cartopy requires proj v8.0.0 as a minimum, but if you install proj using apt you can only install proj v6.3.1 Just for reference in case anything changes, this is the command to install proj from apt: apt-get install proj-bin I'm fairly sure this is all you need, but in case it's not, this command will install the remaining proj files: apt-get install proj-bin libproj-dev proj-data To remove the above installation, run: apt-get remove proj-bin or: apt-get remove proj-bin libproj-dev proj-data Building Proj from source So if the above commands don't work (it's not working as of 2022/4/8), then follow the below instructions to install proj from source: Go to your install folder and download proj-9.0.0 (or any version with proj-x.x.x.tar.gz): wget https://download.osgeo.org/proj/proj-9.0.0.tar.gz Extract the tar.gz file: tar -xf proj-9.0.0.tar.gz cd into the folder: cd proj-9.0.0 Make a build folder and cd into it: mkdir build && cd build Run (this may take a while): cmake .. cmake --build . cmake --build . --target install Run to make sure everything installed correctly: ctest The test command failed on one test for me (19 - nkg), but otherwise was fine. You should find the required files in the ./bin directory Finally: Move binaries to the /bin directory: cp ./bin/* /bin As per Justino, you may also need to move the libraries: cp ./lib/* /lib Now after all this, you can finally install cartopy with pip: pip install cartopy After doing this, my cartopy still wasn't working. I went home to work on this next week, came back, and all of a sudden it was working so maybe try restarting | 5 | 7 |
72,407,887 | 2022-5-27 | https://stackoverflow.com/questions/72407887/python-opencv-error-unsupported-combination-of-source-format | I am trying to run this code: import cv2 import numpy as np src = np.array([[10, 20, 40, 50], [50, 20, 50, 20], [10, 10, 30, 60], [20, 40, 60, 70]]) dst1 = cv2.blur(src, ksize=(3, 3), borderType = cv2.BORDER_CONSTANT) print(dst1) dst2 = cv2.GaussianBlur(src, ksize=(3, 3), sigmaX=0, borderType = cv2.BORDER_CONSTANT) And I got an error: dst2 = cv2.GaussianBlur(src, ksize=(3, 3), sigmaX=0, borderType = cv2.BORDER_CONSTANT) cv2.error: OpenCV(4.5.5) /Users/runner/work/opencv-python/opencv-python/opencv/modules/imgproc/src/filter.simd.hpp:3045: error: (-213:The function/feature is not implemented) Unsupported combination of source format (=4), and buffer format (=5) in function 'getLinearRowFilter' | if you construct an np.array like that, it's (default) format is np.int32, which is not supported. rather make it: src = np.array([[10, 20, 40, 50], [50, 20, 50, 20], [10, 10, 30, 60], [20, 40, 60, 70]], np.uint8) # <-- correct type !!! | 5 | 3 |
72,450,281 | 2022-5-31 | https://stackoverflow.com/questions/72450281/python-3-10-deprecation-warning-for-distutils | DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives from distutils.dir_util import copy_tree Is it something that I should worry about? Which import should I use to replace the from distutils.dir_util import copy_tree line? | Change from: from distutils.dir_util import copy_tree copy_tree(loc_source, loc_destination) To: import shutil shutil.copytree(loc_source, loc_destination) Or, to replace existing files and folders: import shutil shutil.copytree(loc_source, loc_destination, dirs_exist_ok=True) Documentation: https://docs.python.org/3/library/shutil.html#shutil.copytree | 4 | 9 |
72,374,296 | 2022-5-25 | https://stackoverflow.com/questions/72374296/how-do-i-properly-change-package-name-built-with-poetry | I built a package using poetry package manager but I regret naming it because it sounds a bit childish. Besides, because poetry's default behavior is to force change the project's name to lower case (SuperPackage --> superpackage), it is difficult to import the package inside/outside the package's main directory. Here's an example directory structure: SuperPackage/ - superpackage/ - mysubpackage/ - __init__.py - utils.py - foo.py - tests/ - __init__.py - test_superpackage.py - poetry.lock - pyproject.toml - README.md - README.rst - .gitignore Because of this structure, from SuperPackage.mysubpackage import utils # Works outside SuperPackage/ directory. Doesn't work inside SuperPackage/. from superpackage.mysubpackage import utils # Works inside SuperPackage/. Doesn't work outside SuperPackage/ directory. Now, I want to change SuperPackage to nicepackage. How do I achieve this? I can't google it maybe because it's very uncommon or it's too obvious. Should I just change "name" field in pyproject.toml file? However, I'm not sure if it's okay (and recommended) to change "name" field directly. [tool.poetry] name = "SuperPackage" version = "0.1.0" description = "" authors = ["John-Doe <[email protected]>"] [tool.poetry.dependencies] python = "^3.8" pandas = "^1.3.4" matplotlib = "^3.4.3" beautifulsoup4 = "^4.10.0" | Change the name field in pyproject.toml, run poetry update (not sure that's needed but do it just in case?) and change the directories names. Note that the name of the virtual environment on your computer will stay the same, but that probably isn't a problem as it only is a local ting. | 11 | 12 |
72,437,583 | 2022-5-30 | https://stackoverflow.com/questions/72437583/difference-between-forward-and-train-step-in-pytorch-lightning | I have a transfer learning Resnet set up in Pytorch Lightning. the structure is borrowed from this wandb tutorial https://wandb.ai/wandb/wandb-lightning/reports/Image-Classification-using-PyTorch-Lightning--VmlldzoyODk1NzY and from looking at the documentation https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html I am confused about the difference between the def forward () and the def training_step() methods. Initially in the PL documentation, the model is not called in the training step, only in forward. But forward is also not called in the training step. I have been running the model on data and the outputs look sensible (I have an image callback and I can see that the model is learning, and getting a good accuracy result at the end). But I am worried that given the forward method is not being called, the model is somehow not being implemented? Model code is: class TransferLearning(pl.LightningModule): "Works for Resnet at the moment" def __init__(self, model, learning_rate, optimiser = 'Adam', weights = [ 1/2288 , 1/1500], av_type = 'macro' ): super().__init__() self.class_weights = torch.FloatTensor(weights) self.optimiser = optimiser self.thresh = 0.5 self.save_hyperparameters() self.learning_rate = learning_rate #add metrics for tracking self.accuracy = Accuracy() self.loss= nn.CrossEntropyLoss() self.recall = Recall(num_classes=2, threshold=self.thresh, average = av_type) self.prec = Precision( num_classes=2, average = av_type ) self.jacq_ind = JaccardIndex(num_classes=2) # init model backbone = model num_filters = backbone.fc.in_features layers = list(backbone.children())[:-1] self.feature_extractor = nn.Sequential(*layers) # use the pretrained model to classify damage 2 classes num_target_classes = 2 self.classifier = nn.Linear(num_filters, num_target_classes) def forward(self, x): self.feature_extractor.eval() with torch.no_grad(): representations = self.feature_extractor(x).flatten(1) x = self.classifier(representations) return x def training_step(self, batch, batch_idx): x, y = batch logits = self(x) loss = self.loss(logits, y) # training metrics preds = torch.argmax(logits, dim=1) acc = self.accuracy(preds, y) recall = self.recall(preds, y) precision = self.prec(preds, y) jac = self.jacq_ind(preds, y) self.log('train_loss', loss, on_step=True, on_epoch=True, logger=True) self.log('train_acc', acc, on_step=True, on_epoch=True, logger=True) self.log('train_recall', recall, on_step=True, on_epoch=True, logger=True) self.log('train_precision', precision, on_step=True, on_epoch=True, logger=True) self.log('train_jacc', jac, on_step=True, on_epoch=True, logger=True) return loss def validation_step(self, batch, batch_idx): x, y = batch logits = self(x) loss = self.loss(logits, y) # validation metrics preds = torch.argmax(logits, dim=1) acc = self.accuracy(preds, y) recall = self.recall(preds, y) precision = self.prec(preds, y) jac = self.jacq_ind(preds, y) self.log('val_loss', loss, prog_bar=True) self.log('val_acc', acc, prog_bar=True) self.log('val_recall', recall, prog_bar=True) self.log('val_precision', precision, prog_bar=True) self.log('val_jacc', jac, prog_bar=True) return loss def test_step(self, batch, batch_idx): x, y = batch logits = self(x) loss = self.loss(logits, y) # validation metrics preds = torch.argmax(logits, dim=1) acc = self.accuracy(preds, y) recall = self.recall(preds, y) precision = self.prec(preds, 
y) jac = self.jacq_ind(preds, y) self.log('test_loss', loss, prog_bar=True) self.log('test_acc', acc, prog_bar=True) self.log('test_recall', recall, prog_bar=True) self.log('test_precision', precision, prog_bar=True) self.log('test_jacc', jac, prog_bar=True) return loss def configure_optimizers(self,): print('Optimise with {}'.format(self.optimiser) ) # optimizer = self.optimiser_dict[self.optimiser](self.parameters(), lr=self.learning_rate) # Support Adam, SGD, RMSPRop and Adagrad as optimizers. if self.optimiser == "Adam": optimiser = optim.AdamW(self.parameters(), lr = self.learning_rate) elif self.optimiser == "SGD": optimiser = optim.SGD(self.parameters(), lr = self.learning_rate) elif self.optimiser == "Adagrad": optimiser = optim.Adagrad(self.parameters(), lr = self.learning_rate) elif self.optimiser == "RMSProp": optimiser = optim.RMSprop(self.parameters(), lr = self.learning_rate) else: assert False, f"Unknown optimizer: \"{self.optimiser}\"" return optimiser | I am confused about the difference between the def forward () and the def training_step() methods. Quoting from the docs: "In Lightning we suggest separating training from inference. The training_step defines the full training loop. We encourage users to use the forward to define inference actions." So forward() defines your prediction/inference actions. It doesn't even need to be part of your training_step in which you would define you whole training loop. Nonetheless you can choose to have it in your training_step if you want it that way. An example where forward() isn't part of the training_step: def forward(self, x): # in lightning, forward defines the prediction/inference actions embedding = self.encoder(x) return embedding def training_step(self, batch, batch_idx): # training_step defined the train loop. # in this case it is independent of forward x, y = batch x = x.view(x.size(0), -1) z = self.encoder(x) x_hat = self.decoder(z) loss = F.mse_loss(x_hat, x) # Logging to TensorBoard by default self.log("train_loss", loss) return loss the model is not called in the training step, only in forward. But forward is also not called in the training step The fact that forward() is not called in your train_step is because self(x) does it for you. You can alternatively call forward() explicitly instead of using call(x). I am worried that given the forward method is not being called, the model is somehow not being implemented? As long as you see your metrics logged with self.log move in the right direction you will know that you model gets called correctly and its learning. | 4 | 9 |
72,425,408 | 2022-5-29 | https://stackoverflow.com/questions/72425408/interrupt-not-prevent-from-starting-screensaver | I am trying to programmatically interrupt the screensaver by moving the cursor like this: win32api.SetCursorPos((random.choice(range(100)),random.choice(range(100)))) And it fails with the message: pywintypes.error: (0, 'SetCursorPos', 'No error message is available') This error only occurs if the screensaver is actively running. The reason for this request is that the computer is ONLY used for inputting data through a bluetooth device (via a Python program). When the BT device sends data to the computer the screensaver is not interrupted (which means I cannot see the data the BT device sent). Thus, when the Python program receives data from the BT device it is also supposed to interrupt the screensaver. I have seen several solution on how to prevent the screensaver from starting (which are not suitable solutions in my case), but none on how to interrupt a running screensaver. How can I do this, using Windows 10 and Python 3.10? | The Windows operating system has a hierarchy of objects. At the top of the hierarchy is the "Window Station". Just below that is the "Desktop" (not to be confused with the desktop folder, or even the desktop window showing the icons of that folder). You can read more about this concept in the documentation. I mention this because ordinarily only one Desktop can receive and process user input at any given time. And, when a screen saver is activated by Windows due to a timeout, Windows creates a new Desktop to run the screen saver. This means any application associated with any other Desktop, including your Python script, will be unable to send input to the new Desktop without some extra work. The nature of that work depends on a few factors. Assuming the simplest case, a screen saver that's created without the "On resume, display logon screen", and no other Window Station has been created by a remote connection or local user login, then you can ask Windows for the active Desktop, attach the Python script to that Desktop, move the mouse, and revert back to the previous Desktop so the rest of the script works as expected. Thankfully, the code to do this is easier than the explanation: import win32con, win32api, win32service import random # Get a handle to the current active Desktop hdesk = win32service.OpenInputDesktop(0, False, win32con.MAXIMUM_ALLOWED); # Get a handle to the Desktop this process is associated with hdeskOld = win32service.GetThreadDesktop(win32api.GetCurrentThreadId()) # Set this process to handle messages and input on the active Desktop hdesk.SetThreadDesktop() # Move the mouse some random amount, most Screen Savers will react to this, # close the window, which in turn causes Windows to destroy this Desktop # Also, move the mouse a few times to avoid the edge case of moving # it randomly to the location it was already at. for _ in range(4): win32api.SetCursorPos((random.randint(0, 100), random.randint(0, 100))) # Revert back to the old desktop association so the rest of this script works hdeskOld.SetThreadDesktop() However, if the screen saver is running on a separate Window Station because "On resume, display logon screen" is selected, or another user is connected either via the physical Console or has connected remotely, then connecting to and attaching to the active Desktop will require elevation of the Python script, and even then, depending on other factors, it may require special permissions. 
And while this might help your specific case, I will add the the core issue in the general case is perhaps more properly defined as asking "how do I notify the user of the state of something, without the screen saver blocking that notification?". The answer to that question isn't "cause the screen saver to end", but rather "Use something like SetThreadExecutionState() with ES_DISPLAY_REQUIRED to keep the screen saver from running. And show a full-screen top-most window that shows the current status, and when you want to alert the user, flash an eye-catching graphic and/or play a sound to get their attention". Here's what that looks like, using tkinter to show the window: from datetime import datetime, timedelta import ctypes import tkinter as tk # Constants for calling SetThreadExecutionState ES_CONTINUOUS = 0x80000000 ES_SYSTEM_REQUIRED = 0x00000001 ES_DISPLAY_REQUIRED= 0x00000002 # Example work, show nothing, but when the timer hits, "alert" the user ALERT_AT = datetime.utcnow() + timedelta(minutes=2) def timer(root): # Called every second until we alert the user # TODO: This is just alerting the user after a set time goes by, # you could perform a custom check here, to see if the user # should be alerted based off other conditions. if datetime.utcnow() >= ALERT_AT: # Just alert the user root.configure(bg='red') else: # Nothing to do, check again in a bit root.after(1000, timer, root) # Create a full screen window root = tk.Tk() # Simple way to dismiss the window root.bind("<Escape>", lambda e: e.widget.destroy()) root.wm_attributes("-fullscreen", 1) root.wm_attributes("-topmost", 1) root.configure(bg='black') root.config(cursor="none") root.after(1000, timer, root) # Disable the screen saver while the main window is shown ctypes.windll.kernel32.SetThreadExecutionState(ES_CONTINUOUS | ES_DISPLAY_REQUIRED) root.mainloop() # All done, let the screen saver run again ctypes.windll.kernel32.SetThreadExecutionState(ES_CONTINUOUS) While more work, doing this will solve issues around the secure desktop with "On resume, display logon screen" set, and also prevent the system from going to sleep if it's configured to do so. It just generally allows the application to more clearly communicate its intention. | 43 | 52 |
72,427,869 | 2022-5-29 | https://stackoverflow.com/questions/72427869/pytorch-cuda-version-is-always-10-2 | I've installed a handful of PyTorch versions (CUDA 11.7 nightly, CUDA 11.6 nightly, 11.3), but every time, torch.version.cuda returns 10.2. I'd like to run PyTorch on CUDA 11.7. My graphics card has CUDA capability sm_86. [me@legion imagen-test]$ sudo pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113 ... [me@legion imagen-test]$ python >>> import torch >>> print(torch.version.cuda) 10.2 When I actually try to use PyTorch, I get an error saying the PyTorch version I have installed doesn't support the newer version of CUDA my graphics card requires. >>> torch.Tensor([1,2,3]).cuda() ... NVIDIA GeForce RTX 3060 Laptop GPU with CUDA capability sm_86 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70. ... RuntimeError: CUDA error: no kernel image is available for execution on the device CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. I'm completely stumped, and unsure where to go from here. I'd appreciate any help. | You've probably installed PyTorch with CUDA 10.2 among your different installed versions. This may be taking priority over the versions of PyTorch. To fix this, simply uninstall all versions of PyTorch with pip uninstall torch -y and reinstall PyTorch with CUDA 11.7. Source: https://discuss.pytorch.org/t/cuda-version-is-always-10-2/152876 | 4 | 2 |
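As a sanity check after the uninstall/reinstall described above, a short snippet like the following can confirm which PyTorch build is actually being imported; the `+cu117` version string in the comment is an example, and `get_arch_list()` assumes a reasonably recent PyTorch release:

```python
import sys
import torch

print(sys.executable)                # which interpreter/environment is active
print(torch.__version__)             # e.g. '1.13.0+cu117' for a CUDA 11.7 wheel
print(torch.version.cuda)            # CUDA version this build was compiled against
print(torch.cuda.is_available())     # True once driver and build match
print(torch.cuda.get_arch_list())    # should include 'sm_86' for an RTX 30xx GPU
```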