Dataset columns (with observed ranges):
question_id: int64 (59.5M to 79.6M)
creation_date: string date (2020-01-01 00:00:00 to 2025-05-14 00:00:00)
link: string (length 60 to 163)
question: string (length 53 to 28.9k)
accepted_answer: string (length 26 to 29.3k)
question_vote: int64 (1 to 410)
answer_vote: int64 (-9 to 482)
72,764,417
2022-6-26
https://stackoverflow.com/questions/72764417/looping-through-rows-of-pandas-when-the-equation-also-changes
I need to ignore timestamps and loop through rows this way. import pandas as pd import numpy as np time = ['11:50', '12:50', '13:50'] data_1 = {'time': time, 'n1': [1, 5, 8], 'n2': [2, 6 ,7], 'n3': [3, 7 ,6], 'n4': [4, 8, 5], } df1 = pd.DataFrame(data = data_1) df1 I am trying to multiply: row 1 * (10^0) row 2 * (10^1) row 3 * (10^2) ... row n * (10^(n-1)) Before: time n1 n2 n3 n4 0 11:50 1 2 3 4 1 12:50 5 6 7 8 2 13:50 8 7 6 5 Expected result: time n1 n2 n3 n4 0 11:50 1 2 3 4 1 12:50 50 60 70 80 2 13:50 800 700 600 500
You can use mul on index axis: df1.iloc[:, 1:] = df1.iloc[:, 1:].mul(10**df1.index, axis=0) print(df1) # Output time n1 n2 n3 n4 0 11:50 1 2 3 4 1 12:50 50 60 70 80 2 13:50 800 700 600 500 You can replace df1.index by np.arange(len(df1)) if your index is not a RangeIndex.
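For instance, a minimal sketch of the non-RangeIndex variant mentioned in the last sentence, reusing the example frame from the question but with string labels as the index:

import numpy as np
import pandas as pd

df1 = pd.DataFrame({'time': ['11:50', '12:50', '13:50'],
                    'n1': [1, 5, 8], 'n2': [2, 6, 7],
                    'n3': [3, 7, 6], 'n4': [4, 8, 5]},
                   index=['a', 'b', 'c'])  # deliberately not a RangeIndex
# use the row position instead of the index label as the exponent
df1.iloc[:, 1:] = df1.iloc[:, 1:].mul(10 ** np.arange(len(df1)), axis=0)
print(df1)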
4
5
72,762,251
2022-6-26
https://stackoverflow.com/questions/72762251/create-a-pytorch-tensor-of-sequences-which-excludes-specified-value
I have a 1d PyTorch tensor containing integers between 0 and n-1. Now I need to create a 2d PyTorch tensor with n-1 columns, where each row is a sequence from 0 to n-1 excluding the value in the first tensor. How can I achieve this efficiently? Ex: n = 3 a = torch.Tensor([0, 1, 2, 1, 2, 0]) # desired output b = [ [1, 2], [0, 2], [0, 1], [0, 2], [0, 1], [1, 2] ] Typically, the a.numel() >> n. Detailed Explanation: The first element of a is 0, hence it has to map to the sequence [0, 1, 2] excluding 0, which is [1, 2]. Similarly, the second element of a is 1, hence it has to map to [0, 2] and so on. PS: I actually have an additional batch dimension, which I've excluded here for simplicity. Hence, I need the solution to be easily extendable to one additional dimension.
We can construct a tensor with the desired sequences and index with tensor a. import torch n = 3 a = torch.Tensor([0, 1, 2, 1, 2, 0]) # using torch.tensor is recommended def exclude_gather(a, n): sequences = torch.nonzero(torch.arange(n) != torch.arange(n)[:,None], as_tuple=True)[1].reshape(-1, n-1) return sequences[a.long()] exclude_gather(a, n) Output tensor([[1, 2], [0, 2], [0, 1], [0, 2], [0, 1], [1, 2]]) We can add a batch dimension with functorch.vmap from functorch import vmap n = 4 b = torch.Tensor([[0, 1, 2, 1, 3, 0],[0, 3, 1, 0, 2, 1]]) vmap(exclude_gather, in_dims=(0, None))(b, n) Output tensor([[[1, 2, 3], [0, 2, 3], [0, 1, 3], [0, 2, 3], [0, 1, 2], [1, 2, 3]], [[1, 2, 3], [0, 1, 2], [0, 2, 3], [1, 2, 3], [0, 1, 3], [0, 2, 3]]])
4
1
72,738,301
2022-6-24
https://stackoverflow.com/questions/72738301/best-way-to-read-aws-credentials-file
In my python code I need to extract AWS credentials AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID which are stored in the plain text file as described here: https://docs.aws.amazon.com/sdkref/latest/guide/file-format.html I know the name of the file: AWS_SHARED_CREDENTIALS_FILE and the name of profile: AWS_PROFILE. My current approach is to read and parse this file in python by myself to get AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID. But I hope there is already standard way to get it using boto3 or some other library. Please suggest.
Would something like this work for you, or am I misunderstanding the question? Basically start a session for the appropriate profile (or the default, I guess), and then query those values from the credentials object: session = boto3.Session(profile_name=<...your-profile...>) credentials = session.get_credentials() print("AWS_ACCESS_KEY_ID = {}".format(credentials.access_key)) print("AWS_SECRET_ACCESS_KEY = {}".format(credentials.secret_key)) print("AWS_SESSION_TOKEN = {}".format(credentials.token))
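If the profile name should come from the AWS_PROFILE environment variable mentioned in the question, a small sketch (the fallback to "default" is an assumption):

import os
import boto3

profile = os.environ.get("AWS_PROFILE", "default")  # assumed fallback profile name
session = boto3.Session(profile_name=profile)
credentials = session.get_credentials()
print(credentials.access_key, credentials.secret_key)

Note that boto3.Session() with no arguments should already honor AWS_PROFILE and AWS_SHARED_CREDENTIALS_FILE from the environment.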
7
8
72,754,426
2022-6-25
https://stackoverflow.com/questions/72754426/gui-freezing-in-python-pyside6
I am developing a software as part of my work, as soon as I make a call to the API to fetch data the GUI freezes, at first I understood the problem and transferred the functions to threads, the problem is that once I used the join() function the app froze again. What I would like is to wait at the same point in the function until the threads end and continue from the same point in the function, is there any way to do this in Python? threads = [] def call_api(self, query, index, return_dict): thread = threading.Thread( target=self.get_data, args=(query, index, return_dict)) self.threads.append(thread) thread.start() def get_all_tickets(self, platform): if platform == 'All': self.call_api(query1, 0, return_dict) self.call_api(query2, 1, return_dict) for thread in self.threads: thread.join() # The app freezes here # Is there a way to wait at this point asynchronously until the processes are complete and continue from that point without the GUI freezing?
One possible option would be to use the finished signal QThread emits, which you can connect to a slot that contains the remaining logic from your get_all_tickets method. threads = [] def call_api(self, query, index, return_dict): thread = QThread() worker = Worker(query, index, return_dict) worker.moveToThread(thread) thread.started.connect(worker.run) worker.finished.connect(thread.quit) thread.finished.connect(self.continue_getting_all_tickets) self.threads.append(thread) thread.start() def get_all_tickets(self, platform): if platform == 'All': self.call_api(query1, 0, return_dict) self.call_api(query2, 1, return_dict) def continue_getting_all_tickets(self): # this will be called once for each and every thread created # if you only want it to run once all the threads have completed # you could do something like this: if all([thread.isFinished() for thread in self.threads]): # ... do something The worker class could look something like this. class Worker(QObject): finished = Signal() def __init__(self, query, index, return_dict): super().__init__() self.query = query self.index = index self.return_dict = return_dict def run(self): # this is where you would put `get_data` code # once it is finished it should emit the finished signal self.finished.emit() Hopefully this will help you find the right direction.
4
3
72,754,331
2022-6-25
https://stackoverflow.com/questions/72754331/webdriver-object-has-no-attribute-find-element-by-link-text-selenium-scrip
This is a weird issue I have ran into and I can't find any solution for this across the internet. I was using selenium in google colab to scrape a website and my code was working completely fine. I woke up the next day and ran the code again without changing a single line and don't know how/why my code starting giving me this error, AttributeError: 'WebDriver' object has no attribute 'find_element_by_link_text'. Same for find_element_by_class_name and id etc. I then rechecked a previously working script just to confirm and that gave me the same error too. I am confused about what happened suddenly and the scripts started giving me these errors. How do I solve this? What am I doing wrong here? !pip install selenium !apt-get update !apt install chromium-chromedriver from selenium import webdriver chrome_options = webdriver.ChromeOptions() chrome_options.add_argument('--headless') chrome_options.add_argument('--no-sandbox') chrome_options.add_argument('--disable-dev-shm-usage') driver = webdriver.Chrome('chromedriver',options=chrome_options) driver.get("https://petrowiki.spe.org/PetroWiki") driver.title #this line is returning the correct title value, code is able to access the url peh = driver.find_element_by_link_text('Pet. Eng. Handbook') peh.click()
Selenium just removed that method in version 4.3.0. See the CHANGES: https://github.com/SeleniumHQ/selenium/blob/a4995e2c096239b42c373f26498a6c9bb4f2b3e7/py/CHANGES Selenium 4.3.0 * Deprecated find_element_by_* and find_elements_by_* are now removed (#10712) * Deprecated Opera support has been removed (#10630) * Fully upgraded from python 2x to 3.7 syntax and features (#10647) * Added a devtools version fallback mechanism to look for an older version when mismatch occurs (#10749) * Better support for co-operative multi inheritance by utilising super() throughout * Improved type hints throughout You now need to use: driver.find_element("link text", "Pet. Eng. Handbook") For improved reliability, you should consider using WebDriverWait in combination with element_to_be_clickable.
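A sketch of that suggestion, using the By constants together with an explicit wait (the URL and link text are taken from the question):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver.get("https://petrowiki.spe.org/PetroWiki")
# wait up to 10 seconds for the link to become clickable, then click it
peh = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.LINK_TEXT, "Pet. Eng. Handbook"))
)
peh.click()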
8
25
72,753,000
2022-6-25
https://stackoverflow.com/questions/72753000/what-is-the-most-suitable-efficient-pandas-data-type-for-ids
Probably obvious, but just wanted to confirm: What would be the best suited data type for a numerical ID in pandas? Let's say that I have a sequential numerical ID type user_id, which would be better: an int64 type (that would seem to be the most obvious choice given the numerical representation of the field) a category type (which might make more sense, given that the ID is not to be used for actual numerical operations, but rather as a unique identifier) Same question for characters-based IDs, would it be better to use an object or a category type? I would be tempted to use the category data type (thinking there might be performance benefits, as I would imagine these categories are somehow optimised/hashed/indexed for performance), but I was wondering whether this data type is more suited for a more limited subset of distinct values than the possible 100's of thousands unique user_ids I might have in my dataset. Thanks!
Operating on object-typed dataframes/arrays is slow because Pandas needs to operate on each item using the inefficient CPython interpreter. This causes a high overhead due to reference counting, internal pointer indirections, type checks, internal function calls, etc. Pandas often uses Numpy internally, which can be much faster when the types are native ones like int64, int32, float64, etc. In that case, Numpy can execute optimized native code that is not slowed down by the CPython overheads and that can even benefit from hardware SIMD units (depending on the target function used). While Numpy supports bounded strings, Pandas does not use them but slow CPython string objects instead. Strings are inherently slow, even in native code, because of their generally variable size, which is often unpredictable (this strongly impacts the processor, which needs to predict branches in order to be fast; see this post about branch prediction). In practice, unicode characters make strings even slower (they make the use of SIMD instructions very difficult and branches even harder to predict). Categoricals are basically integers associated with a mapping table (of unique values). Categorical columns can theoretically be faster for some computations because the table is already computed. However, the initial computation of the table can be expensive. Additionally, the table is not always used efficiently where it could be, sometimes resulting in a surprisingly slower execution compared to integers. Not to mention the table can be big when all the values are different. Integers are the least expensive type. Smaller integers can often be faster. Indeed, SIMD vectors have a fixed size (e.g. the AVX2 SIMD instruction set of x86-64 processors can compute 32 int8 values at once compared to only 4 int64 values). Furthermore, smaller items cause the whole column to take less memory, reducing the required memory throughput, which improves the performance of memory-bound code (starting with dataframe copies, which are pretty frequent in Pandas). However, this is not always faster because smaller types can sometimes cause type conversions, adding an additional overhead (though this overhead can be mitigated using lower-level optimizations). Thus, if you are working on a huge dataframe, please consider using small integer types. Otherwise, int64 is certainly a very good option.
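As a rough way to compare the options for a concrete ID column, one can check the memory footprint directly; a minimal sketch (the data is random and the printed sizes depend on it):

import numpy as np
import pandas as pd

ids = pd.Series(np.random.randint(0, 500_000, size=1_000_000))
print(ids.astype("int64").memory_usage(deep=True))
print(ids.astype("int32").memory_usage(deep=True))
print(ids.astype("category").memory_usage(deep=True))          # big table when IDs are mostly unique
print(ids.astype(str).astype("object").memory_usage(deep=True))  # CPython string objects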
7
6
72,743,499
2022-6-24
https://stackoverflow.com/questions/72743499/regex-alternative-to-expression-to-define-word-sequence
I am currently trying to match specific sentences based on the words they contain and their order. I am doing this mostly with the lookahead assertion based on this structure: [^>.\]]*(?="The desired Words)[^<.\]]* So I am, for example, looking for sentences that talk about a vacation in the Maledives. To match the sentence: I´ll book a vacation to the Maledives. I could look for sentences that contain the word vacation and afterwards maledives. The expression [^>.\]]*(?=([Vv]acation.*[Mm]aledives))[^<.\]]* using .* causes problems because the word "Maledives" can also appear in later sentences (Example Wrong) My solution was to use the expression ([,'`´() \\]*\w+){0,X}\s* instead of .*, to indicate that "Maledives" has to follow "vacation" within the same sentence and with a maximum of X words between them, changing to this structure: [^>.\]]*(?=([Vv]acation([,'`´() \\]*\w+){0,X}\s*[Mm]aledives))[^<.\]]* (Example Correct) Unfortunately this expression is quite computationally intensive and leads to catastrophic backtracking if the range {0,X} is set too high. Do you have any other suggestions for how to look for sentences containing specific words in order?
You can try something like this: (?:^|\.)\s*([^.]*[Vv]acation[^.]*[Mm]aledives[^.]*(?:\.|$)) See the demo: https://regex101.com/r/97UvCH/1 There is no need for a lookahead here; it will only slow things down.
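A small usage sketch in Python (the sample text is made up for illustration):

import re

pattern = r"(?:^|\.)\s*([^.]*[Vv]acation[^.]*[Mm]aledives[^.]*(?:\.|$))"
text = ("I'll book a vacation to the Maledives. "
        "The weather is nice. Another sentence about the Maledives.")
# only the first sentence is captured, because [^.]* cannot cross a sentence boundary
print(re.findall(pattern, text))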
4
2
72,741,663
2022-6-24
https://stackoverflow.com/questions/72741663/argument-parser-from-a-pydantic-model
How do I create an argument parser (argparse.ArgumentParser) from a Pydantic model? I have a Pydantic model: from pydantic import BaseModel, Field class MyItem(BaseModel): name: str age: int color: str = Field(default="red", description="Color of the item") And I want to create an instance of MyItem using command line: python myscript.py --name Jack --age 10 --color blue This should yield to: item = MyItem(name="Jack", age=10, color="blue") ... # Process the item I would not like to hard-code the command-line arguments and I would like to create the command-line arguments dynamically from the Pydantic model.
I found an answer myself. Just: create an argument parser, turn the fields of the model as arguments of the parser, parse the command-line arguments, turn the arguments as dict and pass them to the model and process the instance of the model import argparse from pydantic import BaseModel, Field class MyItem(BaseModel): name: str age: int color: str = Field(default="red", description="Color of the item") def add_model(parser, model): "Add Pydantic model to an ArgumentParser" fields = model.__fields__ for name, field in fields.items(): parser.add_argument( f"--{name}", dest=name, type=field.type_, default=field.default, help=field.field_info.description, ) # 1. Create and parse command line arguments parser = argparse.ArgumentParser() # 2. Turn the fields of the model as arguments of the parser add_model(parser, MyItem) # 3. Parse the command-line arguments args = parser.parse_args() # 4. Turn the arguments as dict and pass them to the model item = MyItem(**vars(args)) # 5. Do whatever print(repr(item)) ... You may also add subparsers if you wish to add more functionality to the parser: https://docs.python.org/3/library/argparse.html#argparse.ArgumentParser.add_subparsers
11
14
72,738,553
2022-6-24
https://stackoverflow.com/questions/72738553/how-can-i-run-an-ffmpeg-command-in-a-python-script
I want to overlay a transparent video on top of an image using ffmpeg and python. I am able to do this successfully through the terminal, but I cannot get ffmpeg commands to work in python. The following command produces the result that I want in the terminal when I am in the directory with the files: ffmpeg -i head1.png -i hdmiSpitting.mov -filter_complex "[0:v][1:v] overlay=0:0" -pix_fmt yuv420p -c:a copy output3.mov In python, my code is simple: import os import subprocess command = "ffmpeg -i head1.png -i hdmiSpitting.mov -filter_complex \"[0:v][1:v] overlay=0:0\" -pix_fmt yuv420p -c:a copy output3.mov" subprocess.call(command,shell=True) The code runs and there is no indication of an error, but no output is produced. What am I missing here?
On Windows a single command line with spaces should work, but on Linux we have to pass the arguments as a list. We can build the command as a list: command = ['ffmpeg', '-i', 'head1.png', '-i', 'hdmiSpitting.mov', '-filter_complex', '[0:v][1:v]overlay=0:0', '-pix_fmt', 'yuv420p', '-c:a', 'copy', 'output3.mov'] We may also use shlex.split: import shlex command = shlex.split('ffmpeg -i head1.png -i hdmiSpitting.mov -filter_complex "[0:v][1:v] overlay=0:0" -pix_fmt yuv420p -c:a copy output3.mov') Adding the -y argument: If the output file output3.mov already exists, FFmpeg prints a message: File 'output3.mov' already exists. Overwrite? [y/N] and waits for the user to press y. In some development environments we can't see the message. Add -y to overwrite the output if it already exists (without asking): command = shlex.split('ffmpeg -y -i head1.png -i hdmiSpitting.mov -filter_complex "[0:v][1:v] overlay=0:0" -pix_fmt yuv420p -c:a copy output3.mov') Path issues: There are cases when the ffmpeg executable is not in the execution path, so using the full path may be necessary. Example for Windows (assuming ffmpeg.exe is in c:\FFmpeg\bin): command = shlex.split('c:\\FFmpeg\\bin\\ffmpeg.exe -y -i head1.png -i hdmiSpitting.mov -filter_complex "[0:v][1:v] overlay=0:0" -pix_fmt yuv420p -c:a copy output3.mov') On Linux, the default path is /usr/bin/ffmpeg. Using shell=True is not recommended and is considered "unsafe". For details see Security Considerations. The default is False, so we may use subprocess.call(command). Note: subprocess.run is supposed to replace subprocess.call. See this post for details. Creating a log file by adding the -report argument: In some development environments we can't see FFmpeg's messages, which are printed to the console (written to stderr). Adding the -report argument creates a log file with a name like ffmpeg-20220624-114156.log. The log file may tell us what went wrong when we can't see the console. Example: import subprocess import shlex subprocess.run(shlex.split('ffmpeg -y -i head1.png -i hdmiSpitting.mov -filter_complex "[0:v][1:v] overlay=0:0" -pix_fmt yuv420p -c:a copy output3.mov -report'))
4
10
72,737,853
2022-6-24
https://stackoverflow.com/questions/72737853/get-values-after-idxmin1
I apply idxmin() to get the index of the minimum absolute value of two columns: rep["OffsetFrom"] = rep[["OffsetDates", "OffsetDays"]].abs().idxmin(axis=1).dropna() OffsetDates OffsetDays OffsetFrom 0 0.0 0.0 OffsetDates 1 1.0 1.0 OffsetDates 2 4.0 -3.0 OffsetDays 3 4.0 -3.0 OffsetDays 4 6.0 -1.0 OffsetDays ... ... ... ... 1165 0.0 0.0 OffsetDates 1166 0.0 0.0 OffsetDates 1167 0.0 0.0 OffsetDates 1168 0.0 0.0 OffsetDates 1169 0.0 0.0 OffsetDates 1170 rows × 3 columns How do I get a column of the actual values after that? PS. This one wouldn't work, since I need to keep the original signs: rep[["OffsetFrom", "Offset"]] = rep[["OffsetDates", "OffsetDays"]].abs().agg(['idxmin','min'], axis=1) Edit: Eventually, I've simply done it with apply: def get_abs_min_keep_sign(x): return min(x.min(), x.max(), key=abs) rep["Offset"] = rep[["OffsetDates", "OffsetDays"]].apply(get_abs_min_keep_sign, axis=1) However, there must be a more elegant approach out there.
The first idea is to use a lookup by the Offset column: rep["Offset"] = rep[["OffsetDates", "OffsetDays"]].abs().idxmin(axis=1) idx, cols = pd.factorize(rep['Offset']) rep["OffsetVal"] = rep.reindex(cols, axis=1).to_numpy()[np.arange(len(rep)), idx] Another numpy solution is to use numpy.argmin to get the positions of the minimum absolute values; indexing the array of column names by these positions gives Offset, and for the values the same positions are used to index rep[["OffsetDates", "OffsetDays"]] converted to a 2d array: cols = ["OffsetDates", "OffsetDays"] pos = np.argmin(rep[cols].abs().to_numpy(), axis=1) rep["Offset2"] = np.array(cols)[pos] rep["OffsetVal2"] = rep[cols].to_numpy()[np.arange(len(rep.index)),pos] print (rep) OffsetDates OffsetDays Offset OffsetVal Offset2 OffsetVal2 0 0.0 0.0 OffsetDates 0.0 OffsetDates 0.0 1 1.0 1.0 OffsetDates 1.0 OffsetDates 1.0 2 4.0 -3.0 OffsetDays -3.0 OffsetDays -3.0 3 4.0 -3.0 OffsetDays -3.0 OffsetDays -3.0 4 6.0 -1.0 OffsetDays -1.0 OffsetDays -1.0 Your function can also be simplified: rep["OffsetVal"] = rep[["OffsetDates","OffsetDays"]].apply(lambda x: min(x,key=abs),axis=1)
4
1
72,736,760
2022-6-23
https://stackoverflow.com/questions/72736760/making-abstract-property-in-python-3-results-in-attributeerror
How do you make an abstract property in python? import abc class MyClass(abc.ABC): @abc.abstractmethod @property def foo(self): pass results in the error AttributeError: attribute '__isabstractmethod__' of 'property' objects is not writable
It turns out that order matters when it comes to python decorators. @abc.abstractmethod @property is not the same as @property @abc.abstractmethod The correct way to create an abstract property is: import abc class MyClass(abc.ABC): @property @abc.abstractmethod def foo(self): pass
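A quick sketch of the behaviour this buys you (the concrete subclass name here is just illustrative):

import abc

class MyClass(abc.ABC):
    @property
    @abc.abstractmethod
    def foo(self):
        ...

class Concrete(MyClass):
    @property
    def foo(self):
        return 42

# MyClass()          # raises TypeError: can't instantiate abstract class MyClass
print(Concrete().foo)  # 42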
28
37
72,736,116
2022-6-23
https://stackoverflow.com/questions/72736116/could-not-interpret-value-in-scatterplot
I have a dataset with 2 features with the name pos_x and pos_y and I need to scatter plot the clustered data done with DBScan. Here is what I have tried for it: dataset = pd.read_csv(r'/Users/file_name.csv') Data = dataset[["pos_x","pos_y"]].to_numpy() dbscan=DBSCAN() clusters =dbscan.fit(Data) p = sns.scatterplot(data=Data, x="pos_x", y="pos_y", hue=clusters.labels_, legend="full", palette="deep") sns.move_legend(p, "upper right", bbox_to_anchor=(1.17, 1.2), title='Clusters') plt.show() however I get the following error for it. I appreciate if anyone can help me with it. Because as I know for the parameter x and y in scatter plot I should write the name of the features. ValueError: Could not interpret value `pos_x` for parameter `x`
I think the error is caused by this part of the code: Data = dataset[["pos_x","pos_y"]].to_numpy() When you convert the dataframe to numpy, seaborn cannot access the columns as it should. Try this: dataset = pd.read_csv(r'/Users/file_name.csv') Data = dataset[["pos_x","pos_y"]] dbscan = DBSCAN() clusters = dbscan.fit(Data.to_numpy()) p = sns.scatterplot(data=Data, x="pos_x", y="pos_y", hue=clusters.labels_, legend="full", palette="deep") sns.move_legend(p, "upper right", bbox_to_anchor=(1.17, 1.2), title='Clusters') plt.show()
4
3
72,723,928
2022-6-23
https://stackoverflow.com/questions/72723928/how-to-combine-several-images-to-one-image-in-a-grid-structure-in-python
I have several images (PNG format) that I want to combine them into one image file in a grid structure (in such a way that I can set the No. of images shown in every row). Also, I want to add small empty space between images. For example, assume that there are 7 images. And I want to set the No. of images shown in every row as 3. The general structure of the combined image will be: Please let me know if you know a good way to do that (preferably using PIL/Pillow or matplotlib libraries). Thanks.
You can pass to the combine_images function number of expected columns, space between images in pixels and the list of images: from PIL import Image def combine_images(columns, space, images): rows = len(images) // columns if len(images) % columns: rows += 1 width_max = max([Image.open(image).width for image in images]) height_max = max([Image.open(image).height for image in images]) background_width = width_max*columns + (space*columns)-space background_height = height_max*rows + (space*rows)-space background = Image.new('RGBA', (background_width, background_height), (255, 255, 255, 255)) x = 0 y = 0 for i, image in enumerate(images): img = Image.open(image) x_offset = int((width_max-img.width)/2) y_offset = int((height_max-img.height)/2) background.paste(img, (x+x_offset, y+y_offset)) x += width_max + space if (i+1) % columns == 0: y += height_max + space x = 0 background.save('image.png') combine_images(columns=3, space=20, images=['apple_PNG12507.png', 'banana_PNG838.png', 'blackberry_PNG45.png', 'cherry_PNG635.png', 'pear_PNG3466.png', 'plum_PNG8670.png', 'strawberry_PNG2595.png']) Result for 7 images and 3 columns: Result for 6 images and 2 columns:
7
7
72,710,695
2022-6-22
https://stackoverflow.com/questions/72710695/controlling-context-manager-in-a-meta-class
I would like to know if it's possible to control the context automatically in a metaclass and decorator. I have written a decorator function that creates the stub from the grpc insecure channel: def grpc_factory(grpc_server_address: str): print("grpc_factory") def grpc_connect(func): print("grpc_connect") def grpc_connect_wrapper(*args, **kwargs): with grpc.insecure_channel(grpc_server_address) as channel: stub = AnalyserStub(channel) return func(*args, stub=stub, **kwargs) return grpc_connect_wrapper return grpc_connect I have then created a metaclass that uses the context manager with every method that starts with grpc_ and then injects the stub into the methods kwargs: class Client(type): @classmethod def __prepare__(metacls, name, bases, **kwargs): return super().__prepare__(name, bases, **kwargs) def __new__(cls, name, bases, attrs, **kwargs): if "grpc_server_address" not in kwargs: raise ValueError("""grpc_server_address is required on client class, see below example\n class MyClient(AnalyserClient, metaclass=Client, grpc_server_address='localhost:50051')""") for key, value in attrs.items(): if callable(value) and key.startswith("grpc_"): attrs[key] = grpc_factory(kwargs["grpc_server_address"])(value) return super().__new__(cls, name, bases, attrs) From this, I'd like to create all of the methods from the proto file not implemented errors: class AnalyserClient(metaclass=Client, grpc_server_address="localhost:50051"): def grpc_analyse(self, *args, **kwargs): raise NotImplementedError("grpc_analyse is not implemented") With a final use case of the class below with the stub placed into the methods args: class AnalyserClient(AC, metaclass=Client, grpc_server_address="localhost:50051"): def grpc_analyse(self, text, stub) -> str: print("Analysing text: {}".format(text)) print("Stub is ", stub) stub.AnalyseSentiment(text) return "Analysed" I am getting this error which I assume means the channel is no longer open but I'm not sure how this could be done better to ensure all users have a simple interface with safety around using the services defined in the proto file. grpc_factory grpc_connect grpc_factory grpc_connect Inside grpc_connect_wrapper Created channel Analysing text: Hello World Stub is <grpc_implementation.protos.analyse_pb2_grpc.AnalyserStub object at 0x7f29d7726670> ERROR:grpc._common:Exception serializing message! Traceback (most recent call last): File "/python/venv/lib/python3.8/site-packages/grpc/_common.py", line 86, in _transform return transformer(message) TypeError: descriptor 'SerializeToString' for 'google.protobuf.pyext._message.CMessage' objects doesn't apply to a 'str' object Traceback (most recent call last): File "run_client.py", line 27, in <module> client.grpc_analyse("Hello World") File "/python/grpc_implementation/client/client.py", line 15, in grpc_connect_wrapper return func(*args, stub=stub, **kwargs) File "run_client.py", line 11, in grpc_analyse stub.AnalyseSentiment(text) File "/python/venv/lib/python3.8/site-packages/grpc/_channel.py", line 944, in __call__ state, call, = self._blocking(request, timeout, metadata, credentials, File "/python/venv/lib/python3.8/site-packages/grpc/_channel.py", line 924, in _blocking raise rendezvous # pylint: disable-msg=raising-bad-type grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: status = StatusCode.INTERNAL details = "Exception serializing request!" 
debug_error_string = "None" The proto file is: syntax = "proto3"; package analyse; option go_package = "./grpc_implementation"; service Analyser { rpc AnalyseSentiment(SentimentRequest) returns (SentimentResponse) {} } message SentimentRequest { string text = 1; } message SentimentResponse { string sentiment = 1; } Below is the class I am trying to emulate after the metaclass has added decorator. class AnalyserClientTrad: def __init__(self, host: str = "localhost:50051"): self.host = host def grpc_analyse(self, text: str): with grpc.insecure_channel(self.host) as channel: stub = AnalyserStub(channel) response = stub.AnalyseSentiment(SentimentRequest(text=text)) return response.sentiment client = AnalyserClientTrad() print(client.grpc_analyse("Hello, world!")) I have further tested this through adding the decorator traditionally which also works: def grpc_factory(grpc_server_address: str): def grpc_connect(func): def grpc_connect_wrapper(*args, **kwargs): with grpc.insecure_channel(grpc_server_address) as channel: stub = AnalyserStub(channel) return func(*args, stub=stub, **kwargs) return grpc_connect_wrapper return grpc_connect class AnalyserClientTradWithDecs: @grpc_factory("localhost:50051") def grpc_analyse(self, text: str, stub: AnalyserStub): response = stub.AnalyseSentiment(SentimentRequest(text=text)) return response.sentiment def run_client_with_decorator(): client = AnalyserClientTradWithDecs() print(client.grpc_analyse("Hello, world!")) Any help would be appreciated.
The problem is in this part of the code: you are not passing the expected proto object, but a string instead. class AnalyserClient(AC, metaclass=Client, grpc_server_address="localhost:50051"): def grpc_analyse(self, text, stub) -> str: print("Analysing text: {}".format(text)) print("Stub is ", stub) stub.AnalyseSentiment(text) #--> Error, use a proto object here. return "Analysed" The correct way would be to replace the line stub.AnalyseSentiment(text) with stub.AnalyseSentiment(SentimentRequest(text=text))
6
2
72,722,768
2022-6-22
https://stackoverflow.com/questions/72722768/why-does-super-not-do-the-same-as-super-init
I have been wondering this for a while now, and I hope this isn't a stupid question with an obvious answer I'm not realizing: Why can't I just call the __init__ method of super() like super()()? I have to call the method like this instead: super().__init__() Here is an example that gets a TypeError: 'super' object is not callable error when I run it (specifically on line 6 that comes from line 3 in __init__): class My_Int(int): def __init__(self,value,extr_data=None): super()(value) self.extr_data=extr_data x=My_Int(3) Doesn't super() get the inherited class int making super()(value) the same as int(value)? Furthermore, why can't I use the len function with super() when inheriting from the class list? Doesn't len() do the same as __len__()? class My_List(list): def some_method1(self): print(len(super())) def some_method2(self): print(super().__len__()) x=My_List((1,2,3,4)) x.some_method2() x.some_method1() This example prints 4 and then an error as expected. Here is the output exactly: 4 Traceback (most recent call last): File "/home/user/test.py", line 11, in <module> x.some_method1() File "/home/user/test.py", line 3, in some_method1 print(len(super())) TypeError: object of type 'super' has no len() Notice I called some_method2 before calling some_method1 (sorry for the confusion). Am I missing something obvious here? P.S. Thanks for all the help!
super() objects can't intercept most special method calls, because those calls bypass the instance and look up the method on the type directly, and super doesn't implement all the special methods when many of them won't apply for any given usage. This case gets weirder: super()() would try to look up a __call__ method on the super type itself, and pass it the super instance. They don't do this because it's ambiguous, and not particularly explicit. Does super()() mean invoke the super class's __init__? Its __call__? What if we're in a __new__ method: do you invoke __new__, __init__, or both? Does this mean all super uses must implicitly know which method they're called in (even more magical than knowing the class they were defined in and the self passed when constructed with zero arguments)? Rather than deal with all this, and to avoid implementing all the special methods on super just so it can delegate them if they exist on the instance in question, they required you to explicitly specify the special method you intend to call.
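Applied to the class from the question, a minimal sketch of the explicit call (since int is immutable, the value is actually consumed by __new__, so __init__ forwards no arguments here):

class My_Int(int):
    def __init__(self, value, extr_data=None):
        super().__init__()          # explicit special-method call instead of super()()
        self.extr_data = extr_data  # the int value itself was already set by int.__new__

x = My_Int(3)
print(x + 1, x.extr_data)  # 4 None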
4
6
72,714,112
2022-6-22
https://stackoverflow.com/questions/72714112/what-is-the-difference-between-engine-begin-and-engine-connect
I'll go straight to my questions: Why would one rather use engine.connect() instead of engine.begin(), if the second is more reliable? Then, why is it still on the tutorial page of SQLAlchemy and everywhere on Stack Overflow? Performance? Why does engine.connect() work so inconsistently? Is the problem within the autocommit? My background to this is that I just resolved an issue. Normal SQL queries like SELECT, CREATE TABLE and DELETE would work flawlessly when using engine.connect(). Though, using MERGE would work very inconsistently. Sometimes committing, sometimes blocking other queries, sometimes nothing. Here it is recommended to use engine.begin() for MERGE queries. So I substituted the following code: with engine.connect() as connection: connection.execute('MERGE Table1 USING Table2 ON .....') by with engine.begin() as connection: connection.execute('MERGE Table1 USING Table2 ON .....') and now everything works perfectly, including the SELECT, CREATE TABLE and DELETE queries. In the SQLAlchemy docs it says the second option uses transactions with a transaction commit, but the scope of with engine.connect() does an autocommit as well. Sorry, I am a complete newbie to SQL.
the scope of with engine.connect() does an autocommit as well No, it doesn't. That's the most striking difference between with engine.connect() and with engine.begin() with engine.connect() as conn: # do stuff # on exit, the transaction is automatically rolled back with engine.begin() as conn: # do stuff # on exit, the transaction is automatically committed if no errors occurred As mentioned in the tutorial, engine.connect() is used with the "[explicitly] commit as you go" style of code, while engine.begin() represents the "begin once" style. Transactions are used in both cases. However, engine.begin() begins the transaction immediately, while engine.connect() waits until a statement is executed before beginning the transaction. That permits us to alter the characteristics of the transaction that will eventually be started. A common use of this engine.connect() feature is to use non-default transaction isolation: # default isolation level with engine.connect() as conn: print(conn.get_isolation_level()) # REPEATABLE READ # using another isolation level with engine.connect().execution_options( isolation_level="SERIALIZABLE" ) as conn: print(conn.get_isolation_level()) # SERIALIZABLE
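For completeness, a small sketch of the two styles side by side (this assumes the SQLAlchemy 1.4+/2.0 "future" API, where Connection.commit() exists; the table and statements are made up):

from sqlalchemy import text

# "commit as you go": nothing is committed unless you say so
with engine.connect() as conn:
    conn.execute(text("INSERT INTO t (x) VALUES (1)"))
    conn.commit()   # without this line the INSERT is rolled back on exit

# "begin once": the whole block is one transaction, committed on success
with engine.begin() as conn:
    conn.execute(text("INSERT INTO t (x) VALUES (2)"))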
8
16
72,715,121
2022-6-22
https://stackoverflow.com/questions/72715121/how-to-restart-a-python-script
In a program I am writing in python I need to completely restart the program if a variable becomes true. After looking for a while I found this command: while True: if reboot == True: os.execv(sys.argv[0], sys.argv) When executed it returns the error [Errno 8] Exec format error. I searched for further documentation on os.execv, but didn't find anything relevant, so my question is if anyone knows what I did wrong or knows a better way to restart a script (by restarting I mean completely re-running the script, as if it had been opened for the first time, so with all variables unassigned and no threads running).
There are multiple ways to achieve the same thing. Start by modifying the program to exit whenever the flag turns True. Then there are various options, each one with its advantages and disadvantages. Wrap it using a bash script. The script should handle exits and restart your program. A really basic version could be: #!/bin/bash while : do python program.py sleep 1 done Start the program as a sub-process of another program. Start by wrapping your program's code to a function. Then your __main__ could look like this: def program(): ### Here is the code of your program ... while True: from multiprocessing import Process process = Process(target=program) process.start() process.join() print("Restarting...") This code is relatively basic, and it requires error handling to be implemented. Use a process manager There are a lot of tools available that can monitor the process, run multiple processes in parallel and automatically restart stopped processes. It's worth having a look at PM2 or similar. IMHO the third option (process manager) looks like the safest approach. The other approaches will have edge cases and require implementation from your side to handle edge cases.
4
7
72,644,693
2022-6-16
https://stackoverflow.com/questions/72644693/new-union-shorthand-giving-unsupported-operand-types-for-str-and-type
Before 3.10, I was using Union to create union parameter annotations: from typing import Union class Vector: def __mul__(self, other: Union["Vector", float]): pass Now, when I use the new union shorthand syntax: class Vector: def __mul__(self, other: "Vector" | float): pass I get the error: TypeError: unsupported operand type(s) for |: 'str' and 'type' Is this not supported?
The fact that it's being used as a type hint doesn't really matter; fundamentally the expression "Vector" | float is a type error because strings don't support the | operator, they don't implement __or__. To get this passing, you have three options: Defer evaluation (see PEP 563): from __future__ import annotations class Vector: def __mul__(self, other: Vector | float): ... Make the whole type a string (effectively the same as deferring evaluation): class Vector: def __mul__(self, other: "Vector | float"): ... Keep using the Union: from typing import Union class Vector: def __mul__(self, other: Union["Vector", float]): ... You can see further discussion on this bug, resolved by documenting the behaviour explicitly: Note: The | operand cannot be used at runtime to define unions where one or more members is a forward reference. For example, int | "Foo", where "Foo" is a reference to a class not yet defined, will fail at runtime. For unions which include forward references, present the whole expression as a string, e.g. "int | Foo".
36
41
72,633,453
2022-6-15
https://stackoverflow.com/questions/72633453/how-can-i-style-a-django-form-with-css
I tried looking for the answer earlier but couldn't seem to figure out a few things. I'm creating my form in a forms.py file, so it's a python file. Here is my forms.py file: class UploadForm(ModelForm): name = forms.TextInput(attrs={'class': 'myfieldclass'}) details = forms.TextInput() littype = forms.TextInput() image = forms.ImageField() class Meta: model = info fields = ["name", "details", "littype", "image"] Here is my views.py function for it, if it helps find the solution: def uploadform(request): if request.method == 'POST': form = UploadForm(request.POST, request.FILES) print(request.FILES) if form.is_valid(): form.save() redirect(home) return render(request, 'uploadform.html', {'form': UploadForm}) To style it I thought I could do something like this, which I found in another question: class MyForm(forms.Form): myfield = forms.CharField(widget=forms.TextInput(attrs={'class': 'myfieldclass'})) Except I have no idea how to link a css page to that python file. This is what I tried writing, but I think it doesn't work because it's meant for HTML and it's in a python file: <link type="text/css" rel="stylesheet" href="templates/form.css"> And so I'm just not sure how to style my form. Thanks for any answers!
Within your template you have to just import the css file in the head tag, but do ensure you load static first. html file: {% load static %} <!doctype html> <html lang="en"> <head> # import the css file here <link rel="stylesheet" href="{% static 'path to css file' %}"> </head> ... </html> Within the css file: # Styling the class .myfieldclass{ width: 30% !important; height: 100px !important; ... } But please note that you don't have to import a css file since you can add the style tag in the head tag as well. For example: <!doctype html> <html lang="en"> <head> <style type="text/css"> .myfieldclass{ width: 30% !important; height: 100px !important; ... } </style> </head> ... </html> You could apply css from the python file itself as well. Approach from the python file. # Custom way to set css styling on a form field in python code def field_style(): styles_string = ' ' # List of what you want to add to style the field styles_list = [ 'width: 30% !important;', 'height: 100px !important;', ] # Converting the list to a string styles_string = styles_string.join(styles_list) # or # styles_string = ' '.join(styles_list) return styles_string class MyForm(forms.Form): myfield = forms.CharField(widget=forms.TextInput(attrs={'class': 'myfieldclass', 'style': field_style()})) # 'style': field_style() .... field_style() will return in this case 'width: 30% !important; height: 100px !important;' Any of those should work, but I recommend styling from the html or css file instead.
3
5
72,633,461
2022-6-15
https://stackoverflow.com/questions/72633461/sample-from-each-group-in-polars-dataframe
I'm looking for a function along the lines of df.group_by('column').agg(sample(10)) so that I can take ten or so randomly-selected elements from each group. This is specifically so I can read in a LazyFrame and work with a small sample of each group as opposed to the entire dataframe. Update: One approximate solution is: df = lf.group_by('column').agg( pl.all().sample(.001) ) df = df.explode(df.columns[1:]) Update 2 That approximate solution is just the same as sampling the whole dataframe and doing a groupby after. No good.
Let start with some dummy data: n = 100 seed = 0 df = pl.DataFrame({ "groups": (pl.int_range(n, eager=True) % 5).shuffle(seed=seed), "values": pl.int_range(n, eager=True).shuffle(seed=seed) }) shape: (100, 2) ┌────────┬────────┐ │ groups ┆ values │ │ --- ┆ --- │ │ i64 ┆ i64 │ ╞════════╪════════╡ │ 0 ┆ 55 │ │ 0 ┆ 40 │ │ 2 ┆ 57 │ │ 4 ┆ 99 │ │ 4 ┆ 4 │ │ … ┆ … │ │ 0 ┆ 90 │ │ 2 ┆ 87 │ │ 1 ┆ 96 │ │ 3 ┆ 43 │ │ 4 ┆ 44 │ └────────┴────────┘ This gives us 100 / 5, is 5 groups of 20 elements. Let's verify that: df.group_by("groups").agg(pl.len()) shape: (5, 2) ┌────────┬─────┐ │ groups ┆ len │ │ --- ┆ --- │ │ i64 ┆ u32 │ ╞════════╪═════╡ │ 0 ┆ 20 │ │ 4 ┆ 20 │ │ 2 ┆ 20 │ │ 3 ┆ 20 │ │ 1 ┆ 20 │ └────────┴─────┘ Sample our data Now we are going to use a window function to take a sample of our data. df.filter( pl.int_range(pl.len()).shuffle().over("groups") < 10 ) shape: (50, 2) ┌────────┬────────┐ │ groups ┆ values │ │ --- ┆ --- │ │ i64 ┆ i64 │ ╞════════╪════════╡ │ 0 ┆ 55 │ │ 2 ┆ 57 │ │ 4 ┆ 99 │ │ 4 ┆ 4 │ │ 1 ┆ 81 │ │ … ┆ … │ │ 2 ┆ 22 │ │ 1 ┆ 76 │ │ 3 ┆ 98 │ │ 0 ┆ 90 │ │ 4 ┆ 44 │ └────────┴────────┘ For every group in over("group") the pl.int_range(pl.len()) expression creates an index row. We then shuffle that range so that we take a sample and not a slice. Then we only want to take the index values that are lower than 10. This creates a boolean mask that we can pass to the filter method.
14
13
72,707,357
2022-6-21
https://stackoverflow.com/questions/72707357/buildx-failed-with-error-cache-export-feature-is-currently-not-supported-for-d
I have been trying to setup a CI pipeline through Github Actions to docker-hub. I have written the following .yml file part of .github\workflow and getting the error as indicated below in Build and Push step of the job. I have tried to find it on Internet but I could not able to find it. name: Build and Deploy Code on: [push, pull_request] jobs: job1: # This will tell the job to run on ubuntu machine runs-on: ubuntu-latest env: DB_USER: postgres DB_PASSWORD: root123 DB_HOST: postgres DB_PORT: 5432 DATABASE: postgres SECRET_KEY: 09d25e094faa6ca2556c818166b7a9563b93f7099f6f0f4caa6cf63b88e8d3e7 ALGORITHM: HS256 ACCESS_TOKEN_EXPIRE_DAYS: 300 REDIS_HOST: redis REDIS_PORT: 6379 services: postgres: image: postgres env: POSTGRES_PASSWORD: root123 options: >- --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5 ports: - 5432:5432 redis: image: redis #Docker hub image options: >- #Set health checks to wait until redis has started --health-cmd "redis-cli ping" --health-interval 10s --health-timeout 5s --health-retries 5 ports: - 6379:6379 steps: - name: running git repo uses: actions/checkout@v3 # from marketplace - name: install python version 3.9 uses: actions/setup-python@v2 - name: update pip run: python -m pip install --upgrade pip - name: Install all dependencies run: pip install -r backend/requirements.txt - name: login to docker-hub uses: docker/login-action@v1 with: username: pankeshpatel password: Access-token - name: Build and push id: docker_build uses: docker/build-push-action@v2 with: context: ./ file: ./Dockerfile builder: ${{ steps.buildx.outputs.name }} push: true tags: pankeshpatel/bmw:latest cache-from: type=local, src=/tmp/.buildx-cache cache-to: type=local,dest=/tmp/.buildx-cache - name: Image digest run: echo ${{ steps.docker_build.outputs.diget }} My dockerfile is # Use an existing image as a base FROM python:3.9.7 WORKDIR /usr/src/app COPY requirements.txt ./ RUN pip install --no-cache-dir -r requirements.txt COPY . . CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"] The following is the error, when I build on github actions /usr/bin/docker buildx build --cache-from type=local, src=/tmp/.buildx-cache --cache-to type=local,dest=/tmp/.buildx-cache --file ./Dockerfile --iidfile /tmp/docker-build-push-XZhTaT/iidfile --tag pankeshpatel/bmw:latest --metadata-file /tmp/docker-build-push-XZhTaT/metadata-file --push ./ error: cache export feature is currently not supported for docker driver. Please switch to a different driver (eg. "docker buildx create --use") Error: buildx failed with: error: cache export feature is currently not supported for docker driver. Please switch to a different driver (eg. "docker buildx create --use")
You'll need something like this in your yml workflow file: - name: Set up Docker Buildx uses: docker/setup-buildx-action@v3 You can see a complete example at Configuring your GitHub Action builder
10
21
72,636,788
2022-6-15
https://stackoverflow.com/questions/72636788/in-python-how-can-i-add-a-patch-decorator-that-does-not-add-a-mock-input-argume
In python, how can I add a patch that does not add a mock input argument? I want to add a patch on all methods in a class like so: @patch('django.utils.timezone.now', return_value=datetime.datetime(2022, 4, 22, tzinfo=timezone.utc)) class TestmanyMethods(unittest.TestCase): # my test methods here But when I do that, a mock must be added to all test methods. How can I patch the function without adding the mock input to all test methods?
One can do this by using the new keyword arg in the patch like so: @patch('django.utils.timezone.now', new=Mock(return_value=datetime.datetime(2022, 4, 22, tzinfo=timezone.utc))) class TestmanyMethods(unittest.TestCase): # my test methods here With that, the mock input is not added to the class's test methods
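For contrast, a small sketch of what happens without new= (the test method name is illustrative): the created MagicMock is injected into every test method, which is exactly what the question wants to avoid:

import datetime
import unittest
from unittest.mock import patch

@patch('django.utils.timezone.now',
       return_value=datetime.datetime(2022, 4, 22, tzinfo=datetime.timezone.utc))
class TestmanyMethods(unittest.TestCase):
    def test_something(self, mock_now):  # the extra mock argument is required here
        self.assertEqual(mock_now().year, 2022)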
7
7
72,668,275
2022-6-18
https://stackoverflow.com/questions/72668275/how-to-upload-an-image-file-to-github-using-pygithub
I want to upload an image file to my Github Repository using Pygithub. from github import Github g=Github("My Git Token") repo=g.get_repo("My Repo") content=repo.get_contents("") f=open("1.png") img=f.read() repo.create_file("1.png","commit",img) But I am getting the following error: File "c:\Users\mjjha\Documents\Checkrow\tempCodeRunnerFile.py", line 10, in <module> img=f.read() File "C:\Program Files\Python310\lib\encodings\cp1252.py", line 23, in decode return codecs.charmap_decode(input,self.errors,decoding_table)[0] UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 119: character maps to <undefined> This method is working fine for text files. But I am not able to upload image files to my repository. When I use open-CV to read the image file I get the following error: assert isinstance(content, (str, bytes)) AssertionError My code while using cv2 is: from github import Github import cv2 g=Github("") repo=g.get_repo("") content=repo.get_contents("") f=cv2.imread("1.png") img=f repo.create_file("1.png","commit",img) I think createFile() only takes string as an argument and thus these errors are showing up. Is there any way to upload the image file to Github using Pygithub (or any library)?
Just tested my solution and it's working. Explanation: The AssertionError assert isinstance(content, (str, bytes)) tells us that it only takes strings and bytes. So we just have to convert our image to bytes. #1 Converting the image to a bytearray file_path = "1.png" with open(file_path, "rb") as image: f = image.read() image_data = bytearray(f) #2 Pushing the image to the Github repo by converting the data to bytes repo.create_file("1.png","commit", bytes(image_data)) # Complete code from github import Github g=Github("Git Token") repo=g.get_repo("Repo") file_path = "Image.png" message = "Commit Message" branch = "master" with open(file_path, "rb") as image: f = image.read() image_data = bytearray(f) def push_image(path,commit_message,content,branch,update=False): if update: contents = repo.get_contents(path, ref=branch) repo.update_file(contents.path, commit_message, content, sha=contents.sha, branch=branch) else: repo.create_file(path, commit_message, content, branch=branch) push_image(file_path,message, bytes(image_data), branch, update=False) My references for this answer: Converting Images to ByteArray; Programmatically Update File PyGithub (StringFile Not Image)
4
8
72,660,874
2022-6-17
https://stackoverflow.com/questions/72660874/how-to-print-one-log-line-per-every-10-epochs-when-training-models-with-tensorfl
When I fit the model with: model.fit(X, y, epochs=40, batch_size=32, validation_split=0.2, verbose=2) it prints one log line for each epoch as: Epoch 1/100 0s - loss: 0.2506 - acc: 0.5750 - val_loss: 0.2501 - val_acc: 0.3750 Epoch 2/100 0s - loss: 0.2487 - acc: 0.6250 - val_loss: 0.2498 - val_acc: 0.6250 Epoch 3/100 0s - loss: 0.2495 - acc: 0.5750 - val_loss: 0.2496 - val_acc: 0.6250 ..... How can I print the log line per very 10 epochs as follows? Epoch 10/100 0s - loss: 0.2506 - acc: 0.5750 - val_loss: 0.2501 - val_acc: 0.3750 Epoch 20/100 0s - loss: 0.2487 - acc: 0.6250 - val_loss: 0.2498 - val_acc: 0.6250 Epoch 30/100 0s - loss: 0.2495 - acc: 0.5750 - val_loss: 0.2496 - val_acc: 0.6250 .....
This callback will create a log text file and write to it what you want: from contextlib import redirect_stdout from tensorflow.keras.callbacks import Callback log_path = "text_file_name.txt" # it will be created automatically class print_training_on_text_every_10_epochs_Callback(Callback): def __init__(self, logpath): self.logpath = logpath def on_epoch_end(self, epoch, logs=None): with open(self.logpath, "a") as writefile: # put log_path here with redirect_stdout(writefile): if (int(epoch) % 10) == 0: print( f"Epoch: {epoch:>3}" + f" | Loss: {logs['loss']:.4e}" + f" | Accuracy: {logs['accuracy']:.4e}" + f" | Validation loss: {logs['val_loss']:.4e}" + f" | Validation accuracy: {logs['val_accuracy']:.4e}" ) writefile.write("\n") my_callbacks = [ print_training_on_text_every_10_epochs_Callback(logpath=log_path), ] You call it like this: model.fit( training_dataset, epochs=60, verbose=0, # Must be zero, else it would still print a log line per epoch. validation_data=validation_dataset, callbacks=my_callbacks, ) The text file is updated only every 10 epochs. This is what I get in the text file: Epoch: 0 | Loss: 5.3454e+00 | Valid loss: 4.2420e-01 Epoch: 10 | Loss: 3.1342e-02 | Valid loss: 3.4554e-02 Epoch: 20 | Loss: 1.6330e-02 | Valid loss: 2.2512e-02 The first epoch is numbered 0, the second 1, and so on.
4
2
72,642,843
2022-6-16
https://stackoverflow.com/questions/72642843/using-the-in-operator-in-python-3-10-match-case
Is there a "in" operator in python 3.10 Match Case like with if else statements if "\n" in message: the in operator doesn't work in match case match message: case "\n" in message: This doesn't work. How to have something like the "in" operator in Match-Case.
For completeness, it can be done indirectly using guards (as used in the other answer). match message: case message if "\n" in message: ... some code... case message if "foo" in message: ... some other code... Note using the name message in each case statement is not required (this does have the side effect of binding the name x to a value): match message: case x if "\n" in x: ... some code... case x if "foo" in x: ... some other code... Or you can also use wildcards (which skips the name binding): match message: case _ if "\n" in message: ... some code... case _ if "foo" in message: ... some other code... But you're probably better off just using an if statement since using match-case is more code and has worse readability for this situation if "\n" in message: ... some code ... elif "foo" in message: ... some code ... else: ... some code ...
5
8
72,651,555
2022-6-16
https://stackoverflow.com/questions/72651555/attributeerror-module-jinja2-ext-has-no-attribute-autoescape-while-trying-t
I am new to Flask and Babel and I have just started a project which will contain several languages. After I have generated the babel.cfg file, when I attempt to extract it with the command pybabel extract -F babel.cfg -o messages.pot ., I get the AttributeError: module 'jinja2.ext' has no attribute 'autoescape' error. What can be the reason for this error and how can I fix it? Thank you
With Jinja2 3.1, WithExtension and AutoEscapeExtension are built in now, so you don't need these extensions anymore. Delete the extensions line from the babel.cfg file: [python: **.py] [jinja2: **/templates/**.html] ;extensions=jinja2.ext.autoescape,jinja2.ext.with_ https://jinja.palletsprojects.com/en/3.1.x/changes/#version-3-0-0
13
24
72,646,458
2022-6-16
https://stackoverflow.com/questions/72646458/how-to-authenticate-google-services-in-google-colaboratory-without-user-interact
I use the following to access bouth Google Drive and Google Sheets in my Google Colab Notebook: # Mount Google Drive from google.colab import drive drive.mount('/content/drive') # Google Sheets from google.colab import auth auth.authenticate_user() import gspread from google.auth import default creds, _ = default() gc = gspread.authorize(creds) from gspread_dataframe import get_as_dataframe, set_with_dataframe My problem with this is that it asks for the user to confirm the authentication in a pop-up and I want to use colabctl (https://github.com/bitnom/colabctl) to automate the execution of some scripts, so I need to execute the authentication part without user interaction.
As a possible solution, I may suggest using Service Account. Go to GCP Console Select your current project (or create a new one): Once the project is selected, go to IAM and Admin -> Service accounts -> + Create service account: Next, you'll see a create service account prompt, which consists of 3 form steps. Fill in step 1 with any data you want. Skip steps 2 and 3. You'll be redirected to the new service account page. Note an Email field there. You may share any Google Drive content for that email address (same as you do for users' personal email addresses). Now you need to create a service account key, so you could use it to authenticate your script (select JSON key type): After step 6, you should get your JSON key file downloaded. Put it into your Google Colab files. You may do something like that to initialize your google client (gc) : import gspread gc = gspread.service_account(filename='/content/credentials.json') Important: gspread version should be >=3.6.0 to support service_account() method. Note: See my example to list all shared Google Sheets using gspread:
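The note above presumably referred to something like the following; a minimal sketch of listing the spreadsheets shared with the service account and opening one (the sheet name is made up):

import gspread
from gspread_dataframe import get_as_dataframe

gc = gspread.service_account(filename='/content/credentials.json')

# list every spreadsheet the service account can see
for spreadsheet in gc.openall():
    print(spreadsheet.title)

# open one shared sheet by name and read it into a DataFrame
ws = gc.open("My shared sheet").sheet1
df = get_as_dataframe(ws)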
4
4
72,686,010
2022-6-20
https://stackoverflow.com/questions/72686010/is-preexec-fn-ever-safe-in-multi-threaded-programs-under-what-circumstances
I understand that using subprocess.Popen(..., preexec_fn=func) makes Popen thread-unsafe, and might deadlock the child process if used within multi-threaded programs: Warning: The preexec_fn parameter is not safe to use in the presence of threads in your application. The child process could deadlock before exec is called. If you must use it, keep it trivial! Minimize the number of libraries you call into. Are there any circumstances under which it is actually safe to use it within a multi-threaded environment? E.g. would passing a C-compiled extension function, one that does not acquire any interpreter locks by itself, be safe? I looked through the relevant interpreter code and am unable to find any trivially occurring deadlocks. Could passing a simple, pure-Python function such as lambda: os.nice(20) ever make the child process deadlock? Note: most of the obvious deadlocks are avoided via a call to PyOS_AfterFork_Child() (PyOS_AfterFork() in earlier versions of Python). Note 2: for the sake of making the question answerable, lets assume we are running on a recent version of Glibc.
The following explanation is for POSIX only. This issue of executing code after forking and before execing in a multi-threaded process is not Python specific. In the child, do not call any library functions after calling fork() and before calling exec(). One of the library functions might use a lock that was held in the parent at the time of the fork(). Besides the usual concerns such as locking shared data, a library should be well behaved with respect to forking a child process when only the thread that called fork() is running. The problem is that the sole thread in the child process might try to grab a lock held by a thread not duplicated in the child. For example, assume that T1 is in the middle of printing something and holds a lock for printf(), when T2 forks a new process. In the child process, if the sole thread (T2) calls printf(), T2 promptly deadlocks. https://docs.oracle.com/cd/E19120-01/open.solaris/816-5137/gen-1/index.html The fork( ) system call creates an exact duplicate of the address space from which it is called, resulting in two address spaces executing the same code. Suppose that one of the other threads (any thread other than the one doing the fork( )) has the job of deducting money from your checking account. POSIX defined the behavior of fork( ) in the presence of threads to propagate only the forking thread. If the other thread has a mutex locked, the mutex will be locked in the child process, but the lock owner will not exist to unlock it. Therefore, the resource protected by the lock will be permanently unavailable. The fact that there may be mutexes outstanding only becomes a problem if your code attempts to lock a mutex that could be locked by another thread at the time of the fork( ). This means that you cannot call outside of your own code between the call to fork( ) and the call to exec( ). Note that a call to malloc( ), for example, is a call outside of the currently executing application program and may have a mutex outstanding. If your code calls some of your own code that does not make any calls outside of your code and does not lock any mutexes that could possibly be locked in another thread, then your code is safe. http://www.doublersolutions.com/docs/dce/osfdocs/htmls/develop/appdev/Appde193.htm When duplicating the parent process, the fork subroutine also duplicates all the synchronization variables, including their state. Thus, for example, mutexes may be held by threads that no longer exist in the child process and any associated resource may be inconsistent. https://www.ibm.com/docs/en/aix/7.2?topic=programming-process-duplication-termination https://pubs.opengroup.org/onlinepubs/000095399/functions/fork.html https://lwn.net/Articles/674660/ https://softwareengineering.stackexchange.com/questions/384505/why-would-cpython-logging-use-a-lock-for-each-handler-rather-than-one-lock-per-l The code below is stolen from https://blog.actorsfit.com/a?ID=00001-993928f7-96b0-42dd-8903-18ae712467f3 The preexec_fn in subprocess.Popen is similar to target in multiprocessing.Process, and interrupting multiprocessing.Process clearly shows in the stacktrace the lock-acquiring code (self.lock.acquire()). time.sleep in emit mostly results in deadlock.
import sys import time import logging import threading import multiprocessing import subprocess class MyHandler(logging.StreamHandler): def emit(self, record): time.sleep(0.1) super().emit(record) logger = logging.getLogger() logger.setLevel(logging.DEBUG) handler = MyHandler() formatter = logging.Formatter('%(asctime)s %(process)d %(thread)d %(message)s') handler.setFormatter(formatter) logger.addHandler(handler) def thread_fn(): logger.info('thread') logger.info('thread') def process_fn(): logger.info('child') logger.info('child') t1 = threading.Thread(target=thread_fn) t1.start() p1 = multiprocessing.Process(target=process_fn) p1.start() # subprocess.Popen(args=['echo from shell'], shell=True, preexec_fn=process_fn) Normal output; 2022-06-30 03:28:28,162 30093 140582533375744 thread 2022-06-30 03:28:28,164 30100 140582559856448 child 2022-06-30 03:28:28,263 30093 140582533375744 thread 2022-06-30 03:28:28,266 30100 140582559856448 child os.register_at_fork (pthread_atfork) was added in Python 3.7. This is the mechanism CPython uses to avoid deadlocks. https://github.com/google/python-atfork For demo, I delete it before importing logging. import os del os.register_at_fork # same code as above Deadlock output; 2022-06-30 03:30:10,090 30374 140014242154240 thread 2022-06-30 03:30:10,191 30374 140014242154240 thread ^CError in atexit._run_exitfuncs: Traceback (most recent call last): File "/usr/lib/python3.8/multiprocessing/popen_fork.py", line 27, in poll Process Process-1: pid, sts = os.waitpid(self.pid, flag) KeyboardInterrupt Traceback (most recent call last): File "/usr/lib/python3.8/multiprocessing/process.py", line 313, in _bootstrap self.run() File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/home/niz/src/python/tor-sec/tor_sec/new_fork_trhead.py", line 29, in process_fn logger.info('child') File "/usr/lib/python3.8/logging/__init__.py", line 1434, in info self._log(INFO, msg, args, **kwargs) File "/usr/lib/python3.8/logging/__init__.py", line 1577, in _log self.handle(record) File "/usr/lib/python3.8/logging/__init__.py", line 1587, in handle self.callHandlers(record) File "/usr/lib/python3.8/logging/__init__.py", line 1649, in callHandlers hdlr.handle(record) File "/usr/lib/python3.8/logging/__init__.py", line 948, in handle self.acquire() File "/usr/lib/python3.8/logging/__init__.py", line 899, in acquire self.lock.acquire() KeyboardInterrupt logging module at GH
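Not part of the original answer, but for the os.nice(20) example from the question a workaround that avoids running any Python between fork() and exec() is to let the nice utility set the priority in the child; a rough sketch (the child command is illustrative):

import subprocess

cmd = ['python3', 'worker.py']  # illustrative child command
# prefixing with 'nice -n 19' lowers the child's priority without a preexec_fn,
# so no Python code runs between fork() and exec()
proc = subprocess.Popen(['nice', '-n', '19'] + cmd)
proc.wait()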
4
2
72,656,861
2022-6-17
https://stackoverflow.com/questions/72656861/how-to-add-hatches-to-boxplots-with-sns-boxplot-or-sns-catplot
I need to add hatches to a categorical box plot. What I have is this: What I need is something like this (with the median lines): And what I have tried is this code: exercise = sns.load_dataset("exercise") g = sns.catplot(x="time", y="pulse", hue="kind", data=exercise, kind="box") bars = g.axes[0][0].patches hatches=['//','..','xx','//','..','xx','//','..','xx'] for pat,bar in zip(hatches,bars): bar.set_hatch(pat) That only generates the first figure. The idea for lines 3-6 comes from this question. But the idea to get axes[0][0] in line 3 comes from this question. Because FacetGrids don't have attributes like patches or containers, it makes it harder to adapt the answers about hatches in individual plots to categorical plots, so I couldn't figure it out. Other reviewed questions that don't work: Face pattern for boxes in boxplots
Iterate through each subplot / FacetGrid with for ax in g.axes.flat:. ax.patches contains matplotlib.patches.Rectangle and matplotlib.patches.PathPatch, so the correct ones must be used. Caveat: all hues must appear for each group in each Facet, otherwise the patches and hatches will not match. In this case, manual or conditional code will probably be required to correctly determine h, so zip(patches, h) works. Tested in python 3.10, pandas 1.4.2, matplotlib 3.5.1, seaborn 0.11.2 import matplotlib as mpl import seaborn as sns # load test data exercise = sns.load_dataset("exercise") # plot g = sns.catplot(x="time", y="pulse", hue="kind", data=exercise, col='diet', kind="box") # hatches must equal the number of hues (3 in this case) hatches = ['//', '..', 'xx'] # iterate through each subplot / Facet for ax in g.axes.flat: # select the correct patches patches = [patch for patch in ax.patches if type(patch) == mpl.patches.PathPatch] # the number of patches should be evenly divisible by the number of hatches h = hatches * (len(patches) // len(hatches)) # iterate through the patches for each subplot for patch, hatch in zip(patches, h): patch.set_hatch(hatch) fc = patch.get_facecolor() patch.set_edgecolor(fc) patch.set_facecolor('none') Add the following, to change the legend. for lp, hatch in zip(g.legend.get_patches(), hatches): lp.set_hatch(hatch) fc = lp.get_facecolor() lp.set_edgecolor(fc) lp.set_facecolor('none') If only using the axes-level sns.boxplot, there's no need to iterate through multiple axes. ax = sns.boxplot(x="time", y="pulse", hue="kind", data=exercise) # select the correct patches patches = [patch for patch in ax.patches if type(patch) == mpl.patches.PathPatch] # the number of patches should be evenly divisible by the number of hatches h = hatches * (len(patches) // len(hatches)) # iterate through the patches for each subplot for patch, hatch in zip(patches, h): patch.set_hatch(hatch) fc = patch.get_facecolor() patch.set_edgecolor(fc) patch.set_facecolor('none') l = ax.legend() for lp, hatch in zip(l.get_patches(), hatches): lp.set_hatch(hatch) fc = lp.get_facecolor() lp.set_edgecolor(fc) lp.set_facecolor('none') To keep the facecolor of the box plots: Remove patch.set_facecolor('none') Set the edgecolor as 'k' (black) instead of fc, patch.set_edgecolor('k'). Applies to the sns.catplot code too. ax = sns.boxplot(x="time", y="pulse", hue="kind", data=exercise) # select the correct patches patches = [patch for patch in ax.patches if type(patch) == mpl.patches.PathPatch] # the number of patches should be evenly divisible by the number of hatches h = hatches * (len(patches) // len(hatches)) # iterate through the patches for each subplot for patch, hatch in zip(patches, h): patch.set_hatch(hatch) patch.set_edgecolor('k') l = ax.legend() for lp, hatch in zip(l.get_patches(), hatches): lp.set_hatch(hatch) lp.set_edgecolor('k')
5
7
72,694,757
2022-6-21
https://stackoverflow.com/questions/72694757/cryptographydeprecationwarning-python-3-6-is-no-longer-supported-by-the-python
I upgraded my system from python 2 to python 3, and now when I run my code: from cryptography.hazmat.backends import default_backend I am getting this error /usr/local/lib/python3.6/site-packages/paramiko/transport.py:33: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release. How to resolve it?
I ran into this issue today and did some digging. Since the os is not provided I think you could consider one of the options below: Upgrade your python version. This might be the best option since python 3.6 reached its EOL. You might be using SSH in your code. Consider installing an older version of paramiko. You can suppress the warning with these lines of code before importing Paramiko: import warnings warnings.filterwarnings(action='ignore',module='.*paramiko.*') or If you want to be more selective about suppressing JUST that particular deprecation warning: import warnings from cryptography.utils import CryptographyDeprecationWarning warnings.filterwarnings("ignore", category=CryptographyDeprecationWarning) source for option 3 Check if you installed it using pip when you needed to use conda.
9
5
72,663,716
2022-6-17
https://stackoverflow.com/questions/72663716/how-to-efficiently-convert-npy-to-xarray-zarr
I have a 37 GB .npy file that I would like to convert to Zarr store so that I can include coordinate labels. I have code that does this in theory, but I keep running out of memory. I want to use Dask in-between to facilitate doing this in chunks, but I still keep running out of memory. The data is "thickness maps" for people's femoral cartilage. Each map is a 310x310 float array, and there are 47789 of these maps. So the data shape is (47789, 310, 310). Step 1: Load the npy file as a memmapped Dask array. fem_dask = dask.array.from_array(np.load('/Volumes/T7/cartilagenpy20220602/femoral.npy', mmap_mode='r'), chunks=(300, -1, -1)) Step 2: Make an xarray DataArray over the Dask array, with the desired coordinates. I have several coordinates for the 'map' dimension that come from metadata (a pandas dataframe). fem_xr = xr.DataArray(fem_dask, dims=['map','x','y'], coords={'patient_id': ('map', metadata['patient_id']), 'side': ('map', metadata['side'].astype(np.string_)), 'timepoint': ('map', metadata['timepoint']) }) Step 3: Write to Zarr. fem_ds = fem_xr.to_dataset(name='femoral') # Zarr requires Dataset, not DataArray res = fem_ds.to_zarr('/Volumes/T7/femoral.zarr', encoding={'femoral': {'dtype': 'float32'}}, compute=False) res.visualize() See task graph below if desired When I call res.compute(), RAM use quickly climbs out of control. The other python processes, which I think are the Dask workers, seem to be inactive: But a bit later, they are active -- see that one of those Python processes now has 20 gb RAM and another has 36 gb: Which we can also confirm from the Dask dashboard: Eventually all the workers get killed and the task errors out. How can I do this in an efficient way that correctly uses Dask, xarray, and Zarr, without running out of RAM (or melting the laptop)?
using threads If the dask workers can share threads, your code should just work. If you don't initialize a dask Cluster explicitly, dask.Array will create one with default args, which use processes. This results in the behavior you're seeing. To solve this, explicitly create a cluster using threads: # use threads, not processes cluster = dask.distributed.LocalCluster(processes=False) client = dask.distributed.Client(cluster) arr = np.load('myarr.npy', mmap_mode='r') da = dda.from_array(arr).rechunk(chunks=(100, 310, 310)) da.to_zarr('myarr.zarr', mode='w') using processes or distributed workers If you're using a cluster which cannot share threads, such as a JobQueue, KubernetesCluster, etc., you can use the following to read the npy file, assuming it's on a networked filesystem or is in some way available to all workers. Here's a workflow that creates an empty array from the memory map, then maps the read job using dask.array.map_blocks. The key is the use of the block_info optional keyword, which gives information about the location of the block within the array, which we can use to slice new mmap array objects using dask workers: def load_npy_chunk(da, fp, block_info=None, mmap_mode='r'): """Load a slice of the .npy array, making use of the block_info kwarg""" np_mmap = np.load(fp, mmap_mode=mmap_mode) array_location = block_info[0]['array-location'] dim_slicer = tuple(list(map(lambda x: slice(*x), array_location))) return np_mmap[dim_slicer] def dask_read_npy(fp, chunks=None, mmap_mode='r'): """Read metadata by opening the mmap, then send the read job to workers""" np_mmap = np.load(fp, mmap_mode=mmap_mode) da = dda.empty_like(np_mmap, chunks=chunks) return da.map_blocks(load_npy_chunk, fp=fp, mmap_mode=mmap_mode, meta=da) This works for me on a demo of the same size (you could add the xarray.DataArray creation/formatting step at the end, but the dask ops work fine and worker memory stays below 1GB for me): import numpy as np, dask.array as dda, xarray as xr, pandas as pd, dask.distributed ### insert/import above functions here # save a large numpy array np.save('myarr.npy', np.empty(shape=(47789, 310, 310), dtype=np.float32)) cluster = dask.distributed.LocalCluster() client = dask.distributed.Client(cluster) da = dask_read_npy('myarr.npy', chunks=(300, -1, -1), mmap_mode='r') da.to_zarr('myarr.zarr', mode='w')
4
4
72,707,064
2022-6-21
https://stackoverflow.com/questions/72707064/vs-code-does-not-find-own-python-module-in-workspace
I am working on a python package with VS Code with the following layout of the opened workspace folder in VS Code workspace | + tests | | - test1.py | | + other_tests | | | - test2.py | + mymodule | | ... What I want is to be able to import the package mymodule in test1.py and test2.py with import mymodule When I do this, I always get the error no module named 'mymodule'. I know I can load the path by inserting the following into all test files: import sys sys.path.append('<path_to>/workspace/') Then mymodule is correctly loaded. (I know I can use pathlib.Path(__file__).parents[n] for some n to get <path_to>, but this process is really cumbersome if you have many test files in different folders... and it is just ugly.) Is there any way to tell VS Code to always include the workspace in the path? I tried a .env file with PYTHONPATH=<path_to>/workspace/ and I also tried to add the following to launch.json: "env": { "PYTHONPATH": "${workspaceFolder}" } Neither did the trick. (I am also not sure if PYTHONPATH is at all the right solution to my problem, I just tried it ^^). In short: I would like to tell VS Code that it should always add the workspace to the search path whenever a Python module is imported, regardless of the location of the importing file, so I do not have to call sys.path.append('<path_to>/workspace/') in every single main file.
When the python interpreter is importing a package, it looks for the package in the following locations: the directory containing the input script (or the current directory). PYTHONPATH (a list of directory names, with the same syntax as the shell variable PATH). the installation-dependent default. So if your package and script file are in the same directory, it will be easily found. But obviously you chose a different directory. Then you have to use the methods mentioned in your article to specify the path to the interpreter, such as modifying the PYTHONPATH environment variable or using the sys.path.append() method. Here's a suggestion for another approach. Put your package where python can find it, say under the lib folder. This way you don't need to use the sys.path.append() method at the beginning of each file. I am using a virtual environment UPDATE: You can use virtual environments. it won't make a difference. If you are worried about some factors and do not want to use it, there is no problem. Just find the lib folder in your current environment and put your package in it. for example ( As far as my machine is concerned ) : The lib folder corresponding to the interpreter of this environment is in: C:\Users\Admin\AppData\Local\Programs\Python\Python310\Lib The lib folder corresponding to the interpreter of this environment is in: C:\Users\Admin\anaconda3\Lib PS : It can also be placed in the site-packages folder one level below the lib folder.
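As a quick way to find the lib/site-packages folder the answer talks about, running something like the following with the interpreter VS Code has selected prints the candidate locations (getsitepackages() may be missing on some older virtualenv builds):

import site
import sys

print(sys.executable)               # which interpreter is actually running
print(site.getsitepackages())       # system/venv site-packages directories
print(site.getusersitepackages())   # per-user site-packages directory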
7
3
72,688,032
2022-6-20
https://stackoverflow.com/questions/72688032/hash-of-integers-in-python
I understand that hash of an immutable object is an integer representation of that object which is unique within the process's lifetime. Hash of an integer object is the same as the value held by the integer. For example, >>> int(1000).__hash__() 1000 But when the integer grows big enough, above principle breaks after a certain threshold as it seems. Its value seems to be rolled with in some limit. >>> int(10000000000000000).__hash__() 10000000000000000 >>> int(100000000000000000).__hash__() 100000000000000000 >>> int(1000000000000000000).__hash__() 1000000000000000000 >>> int(10000000000000000000).__hash__() 776627963145224196 Two questions: What is the limit? What is the integer-space that hash table covers? How is the hash value calculated for an integer exceeding the above limit? System information: Linux lap-0179 5.13.0-44-generic #49~20.04.1-Ubuntu SMP Wed May 18 18:44:28 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux Python interpreter: Python 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0] on linux
While this is machine and implementation dependent, for CPython on 64-bit machines, the hash() for a non-negative integer n is computed as n % k with k = (2 ** 61 - 1) (= 2305843009213693951) hence values between 0 and k - 1 are left as is. This is empirically evidenced here: k = 2 ** 61 - 1 for i in range(k - 2, k + 2): print(i, hash(i), i % k) # 2305843009213693949 2305843009213693949 # 2305843009213693950 2305843009213693950 # 2305843009213693951 0 # 2305843009213693952 1 For the complete ruleset, see the documentation.
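For completeness, and not covered above: negative integers follow the same rule with the sign negated, except that a result of -1 is replaced by -2 because -1 is reserved as an error marker inside CPython. A quick check:

k = 2 ** 61 - 1
print(hash(-12345), -(12345 % k))  # both print -12345
print(hash(-1))                    # prints -2, not -1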
4
2
72,697,369
2022-6-21
https://stackoverflow.com/questions/72697369/real-time-data-plotting-from-a-high-throughput-source
I want to plot Real time in a way that updates fast. The data I have: arrives via serial port at 62.5 Hz data corresponds to 32 sensors (so plot 32 lines vs time). 32points *62.5Hz = 2000 points/sec The problem with my current plotting loop is that it runs slower than 62.5[Hz], meaning I miss some data coming in from serial port. I am looking for any solution to this problem that allows for: All data from serial port to be saved. Plots the data (even skipping a few points/using averages/eliminating old points and only keeping the most recent) Here is my code, I am using random data to simulate the serial port data. import numpy as np import time import matplotlib.pyplot as plt #extra plot debugging hz_ = [] #list of speed time_=[] #list for time vs Hz plot #store all data generated store_data = np.zeros((1, 33)) #only data to plot to_plot = np.zeros((1, 33)) #color each line colours = [f"C{i}" for i in range (1,33)] fig,ax = plt.subplots(1,1, figsize=(10,8)) ax.set_xlabel('time(s)') ax.set_ylabel('y') ax.set_ylim([0, 300]) ax.set_xlim([0, 200]) start_time = time.time() for i in range (100): loop_time = time.time() #make data with col0=time and col[1:11] = y values data = np.random.randint(1,255,(1,32)).astype(float) #simulated data, usually comes in at 62.5 [Hz] data = np.insert(data, 0, time.time()-start_time).reshape(1,33) #adding time for first column store_data = np.append(store_data, data , axis=0) to_plot = store_data[-100:,] for i in range(1, to_plot.shape[1]): ax.plot(to_plot[:,0], to_plot[:,i],c = colours[i-1], marker=(5, 2), linewidth=0, label=i) #ax.lines = ax.lines[-33:] #This soluition speeds it up, to clear old code. fig.canvas.draw() fig.canvas.flush_events() Hz = 1/(time.time()-loop_time) #for time vs Hz plot hz_.append(Hz) time_.append( time.time()-start_time) print(1/(time.time()-loop_time), "Hz - frequncy program loops at") #extra fig showing how speed drops off vs time fig,ax = plt.subplots(1,1, figsize=(10,8)) fig.suptitle('Decreasingn Speed vs Time', fontsize=20) ax.set_xlabel('time(s)') ax.set_ylabel('Hz') ax.plot(time_, hz_) fig.show() I also tried while using ax.lines = ax.lines[-33:] to remove older points, and this speed up the plotting, but still slower than the speed i aquire data. Any library/solution to make sure I collect all data and plot the general trendlines (so even not all points) is ok. Maybe something that runs acquiring data and plotting in parallel?
You could try to have two separate processes: one for acquiring and storing the data one for plotting the data Below there are two basic scripts to get the idea. You first run gen.py which starts to generate numbers and save them in a file. Then, in the same directory, you can run plot.py which will read the last part of the file and will update the a Matplotlib plot. Here is the gen.py script to generate data: #!/usr/bin/env python3 import time import random LIMIT_TIME = 100 # s DATA_FILENAME = "data.txt" def gen_data(filename, limit_time): start_time = time.time() elapsed_time = time.time() - start_time with open(filename, "w") as f: while elapsed_time < limit_time: f.write(f"{time.time():30.12f} {random.random():30.12f}\n") # produces 64 bytes f.flush() elapsed = time.time() - start_time gen_data(DATA_FILENAME, LIMIT_TIME) and here is the plot.py script to plot the data (reworked from this one): #!/usr/bin/env python3 import io import time import matplotlib.pyplot as plt import matplotlib as mpl import matplotlib.animation BUFFER_LEN = 64 DATA_FILENAME = "data.txt" PLOT_LIMIT = 20 ANIM_FILENAME = "video.gif" fig, ax = plt.subplots(1, 1, figsize=(10,8)) ax.set_title("Plot of random numbers from `gen.py`") ax.set_xlabel("time / s") ax.set_ylabel("random number / #") ax.set_ylim([0, 1]) def get_data(filename, buffer_len, delay=0.0): with open(filename, "r") as f: f.seek(0, io.SEEK_END) data = f.read(buffer_len) if delay: time.sleep(delay) return data def animate(i, xs, ys, limit=PLOT_LIMIT, verbose=False): # grab the data try: data = get_data(DATA_FILENAME, BUFFER_LEN) if verbose: print(data) x, y = map(float, data.split()) if x > xs[-1]: # Add x and y to lists xs.append(x) ys.append(y) # Limit x and y lists to 10 items xs = xs[-limit:] ys = ys[-limit:] else: print(f"W: {time.time()} :: STALE!") except ValueError: print(f"W: {time.time()} :: EXCEPTION!") else: # Draw x and y lists ax.clear() ax.set_ylim([0, 1]) ax.plot(xs, ys) # save video (only to attach here) #anim = mpl.animation.FuncAnimation(fig, animate, fargs=([time.time()], [None]), interval=1, frames=3 * PLOT_LIMIT, repeat=False) #anim.save(ANIM_FILENAME, writer='imagemagick', fps=10) #print(f"I: Saved to `{ANIM_FILENAME}`") # show interactively anim = mpl.animation.FuncAnimation(fig, animate, fargs=([time.time()], [None]), interval=1) plt.show() plt.close() Note that I have also included and commented out the portion of code that I used to generate the animated GIF above. I believe this should be enough to get you going.
4
4
72,691,325
2022-6-20
https://stackoverflow.com/questions/72691325/how-to-add-n-to-a-variable-value-to-submit-it-as-an-input-of-remote-process-v
I am working with Paramiko on Linux and I would like to know if I can send a variable to the shell. I want to enter the "enable mode" of a Cisco router, but I don't want to hard-code the password in the script. I am using getpass, but when I run the script it fails at the enable command: I get "bad secrets", and I know that happens when the password is wrong. I know I have to add \n at the end of the command sent to the shell, but how can I add it to a variable? #!/bin/python3 import paramiko import time import getpass ssh_client = paramiko.SSHClient() ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy()) password = getpass.getpass('Enter password: ') router = {'hostname': '192.168.142.132', 'port': '22', 'username':'user', 'password': password} print(f"Connecting to {router['hostname']} ") ssh_client.connect(**router, look_for_keys=False, allow_agent=False) shell = ssh_client.invoke_shell() shell.send('enable\n') passwordE = getpass.getpass('Enter the enable password:') shell.send('passwordE')
Use + operator. shell.send(passwordE + '\n') Or you can simply call send twice: shell.send(passwordE) shell.send('\n') Obligatory warning: Do not use AutoAddPolicy this way – You are losing a protection against MITM attacks by doing so. For a correct solution, see Paramiko "Unknown Server".
4
2
72,703,563
2022-6-21
https://stackoverflow.com/questions/72703563/read-a-pandas-dataframe-into-r
I have used reticulate package to source python code in R. source_python("data_loading.py") df = my_data() str(df) 'data.frame': 268 obs. of 13 variables: $ DKF: num 1.352 1.283 1.246 0.73 0.784 ... $ GDT: num NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... $ GSB: num 1.427 1.366 1.162 0.785 0.742 ... $ HKZ: num 1.355 1.495 1.211 0.807 0.927 ... $ SLG: num 1.549 1.542 1.228 0.632 0.716 ... $ SRL: num 2.059 1.379 1.751 0.845 1.205 ... $ UAB: num 1.555 1.51 1.007 0.904 0.447 ... $ UKE: num 1.269 1.449 1.122 0.97 0.858 ... $ UKF: num 1.325 1.483 1.172 0.972 0.852 ... $ UOR: num NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... $ UTH: num 1.336 1.499 1.149 0.974 0.875 ... $ WH1: num 1.48 1.466 1.191 0.762 0.657 ... $ WH3: num 1.434 1.385 1.154 0.781 0.661 ... - attr(*, "pandas.index")=MultiIndex([(2000, 1), (2000, 2), (2000, 3), (2000, 4), (2000, 5), (2000, 6), (2000, 7), (2000, 8), (2000, 9), (2000, 10), ... (2021, 7), (2021, 8), (2021, 9), (2021, 10), (2021, 11), (2021, 12), (2022, 1), (2022, 2), (2022, 3), (2022, 4)], names=['year', 'month'], length=268) I don't know how to access the indexes (pandas.index) in the R dataframe object ! I would like to be able to plot the data by grouping by year and month. Data in python environment look like : df = {'DKF': {(2000, 1): 1.3517226474709378, (2000, 2): 1.2830496825194315, (2000, 3): 1.2455233121831182, (2000, 4): 0.7299107400424064, (2000, 5): 0.7835659509257843, (2000, 6): 0.9378139543942825, (2000, 7): 0.6590645941762302, (2000, 8): 0.6744555183919568, (2000, 9): 1.0028705091347772, (2000, 10): 0.9667005317835341, (2000, 11): 1.0678912258546405, (2000, 12): 0.9112284147742452, (2001, 1): 0.8881925577638216, (2001, 2): 1.0094984492846548, (2001, 3): 0.9234818764810746, (2001, 4): 0.8256952570446592, (2001, 5): 0.7823148556967823, (2001, 6): 0.58656187528325, (2001, 7): 0.5986398301438705, (2001, 8): 0.9152977641105378, (2001, 9): 0.7957461362888425, (2001, 10): 1.2289233852664525, (2001, 11): 1.215903707002575, (2001, 12): 0.9411119504485526, (2002, 1): 1.253743089309995, (2002, 2): 1.3771820959703505, (2002, 3): 1.1458998591562888, (2002, 4): 0.780148229684933, (2002, 5): 0.8745953536826678, (2002, 6): 1.14604519185711, (2002, 7): 0.9925386650483022, (2002, 8): 0.8395810391153705, (2002, 9): 0.6922482171262826, (2002, 10): 1.0927541758198784, (2002, 11): 0.9355095991032771, (2002, 12): 1.0512662479817905, (2003, 1): 1.2138933498346607, (2003, 2): 0.47180505624826446, (2003, 3): 0.8997961889216146, (2003, 4): 1.1346551380638876}, 'GDT': {(2000, 1): nan, (2000, 2): nan, (2000, 3): nan, (2000, 4): nan, (2000, 5): nan, (2000, 6): nan, (2000, 7): nan, (2000, 8): nan, (2000, 9): nan, (2000, 10): nan, (2000, 11): nan, (2000, 12): nan, (2001, 1): nan, (2001, 2): nan, (2001, 3): nan, (2001, 4): nan, (2001, 5): nan, (2001, 6): nan, (2001, 7): nan, (2001, 8): nan, (2001, 9): nan, (2001, 10): nan, (2001, 11): nan, (2001, 12): nan, (2002, 1): 1.256807526788879, (2002, 2): 1.3045873333521896, (2002, 3): 1.0922871367435592, (2002, 4): 0.7311444028087076, (2002, 5): 0.9018008043140232, (2002, 6): 0.957461410038454, (2002, 7): 0.8197060450786827, (2002, 8): 0.47577545253187387, (2002, 9): 0.8128263280942774, (2002, 10): 1.1168799254163093, (2002, 11): 0.9324831606889018, (2002, 12): 1.3286891013558542, (2003, 1): 1.2538435349279176, (2003, 2): 0.6461925212429451, (2003, 3): 0.8513379444813536, (2003, 4): 1.0889167190287157}, 'GSB': {(2000, 1): 1.427328551728886, (2000, 2): 1.3664731294718808, (2000, 3): 1.1615089781365158, (2000, 4): 0.7851300114287154, (2000, 5): 
0.7422343232255912, (2000, 6): 1.031580853170691, (2000, 7): 0.7242493688325659, (2000, 8): 0.5766162898393684, (2000, 9): 1.0332126497783654, (2000, 10): 1.1857205460845148, (2000, 11): 1.366502815093195, (2000, 12): 1.157381956346292, (2001, 1): 0.9902549635833164, (2001, 2): 0.9724394327156028, (2001, 3): 1.048815013697255, (2001, 4): 0.8273478915528274, (2001, 5): 0.7430593714405965, (2001, 6): 0.7536324648475042, (2001, 7): 0.6349134921464644, (2001, 8): 0.7826830215295701, (2001, 9): 1.0233964167323912, (2001, 10): 1.412918707213485, (2001, 11): 1.260580991090124, (2001, 12): 0.9576845313512137, (2002, 1): 1.2652569266159417, (2002, 2): 1.3065244634866244, (2002, 3): 1.0856890909647214, (2002, 4): 0.7318101398049849, (2002, 5): 0.9215182854866718, (2002, 6): 0.9534957346958998, (2002, 7): 0.8107925205416322, (2002, 8): 0.4734280417496323, (2002, 9): 0.7966062482404612, (2002, 10): 1.1170454368296987, (2002, 11): 0.9322063873084222, (2002, 12): 1.32238691545603, (2003, 1): 1.2531210790686047, (2003, 2): 0.6576300158329207, (2003, 3): 0.8389175666051795, (2003, 4): 1.0881232197385742}, 'HKZ': {(2000, 1): 1.3554525362480663, (2000, 2): 1.4951631344657137, (2000, 3): 1.2105681636598593, (2000, 4): 0.8072930444495694, (2000, 5): 0.926977008767427, (2000, 6): 0.8022017476928317, (2000, 7): 0.69652277417082, (2000, 8): 0.456583016223536, (2000, 9): 0.8919032946330773, (2000, 10): 1.349532169190594, (2000, 11): 1.5861420749020783, (2000, 12): 1.4914960612718604, (2001, 1): 1.1734639773151045, (2001, 2): 0.9458568945260551, (2001, 3): 1.0691427569966114, (2001, 4): 1.1488059357249147, (2001, 5): 1.0912434760028036, (2001, 6): 0.7592430266062555, (2001, 7): 0.7202558342474065, (2001, 8): 0.7608615824541107, (2001, 9): 1.1328043102045258, (2001, 10): 1.4500451856634633, (2001, 11): 1.0657649898420833, (2001, 12): 1.1213623436385283, (2002, 1): 1.2927673511490292, (2002, 2): 1.5212939742846794, (2002, 3): 0.9924934326785696, (2002, 4): 0.9566700271565793, (2002, 5): 1.0323873124589016, (2002, 6): 0.7476240350020518, (2002, 7): 0.9040117274509223, (2002, 8): 0.39706056802306583, (2002, 9): 0.6625794086698624, (2002, 10): 1.1662353900999265, (2002, 11): 0.9919567773693077, (2002, 12): 1.1606766246913955, (2003, 1): 1.4024781315822306, (2003, 2): 0.8526080709173501, (2003, 3): 0.825425230187763, (2003, 4): 1.0372282355745681}, 'SLG': {(2000, 1): 1.5487504409690902, (2000, 2): 1.5415838472843555, (2000, 3): 1.2282126578372121, (2000, 4): 0.632444048068446, (2000, 5): 0.7158862148925723, (2000, 6): 0.8454897104320542, (2000, 7): 0.4551865318931529, (2000, 8): 0.5826158644517344, (2000, 9): 0.9977976362941181, (2000, 10): 1.138989723113055, (2000, 11): 1.0930164859430511, (2000, 12): 0.8898335222651087, (2001, 1): 0.8842004763468283, (2001, 2): 1.0275628718138945, (2001, 3): 0.8746407345516105, (2001, 4): 0.7131039098650879, (2001, 5): 0.6837954767731654, (2001, 6): 0.4790985831081878, (2001, 7): 0.4988440876977696, (2001, 8): 0.8223209745844613, (2001, 9): 0.7417117336660771, (2001, 10): 1.3284354806383687, (2001, 11): 1.3409613738776682, (2001, 12): 0.9324083887988991, (2002, 1): 1.4379027571945358, (2002, 2): 1.7433696218646755, (2002, 3): 1.2750778360525403, (2002, 4): 0.5842975310863218, (2002, 5): 0.7145673671783507, (2002, 6): 1.1216530700131497, (2002, 7): 0.7555671720037355, (2002, 8): 0.6588768243045798, (2002, 9): 0.584884861123621, (2002, 10): 1.1539708804408706, (2002, 11): 0.9312319138226935, (2002, 12): 1.0305200250483402, (2003, 1): 1.3855526535671203, (2003, 2): 0.4802953659778654, 
(2003, 3): 0.8327866009478592, (2003, 4): 1.0896804098036148}, 'SRL': {(2000, 1): 2.0594032935476854, (2000, 2): 1.3791095262556443, (2000, 3): 1.7505922447857718, (2000, 4): 0.8446108545729051, (2000, 5): 1.2051921061573831, (2000, 6): 1.010183601066476, (2000, 7): 0.6508079260726014, (2000, 8): 0.7407540065635136, (2000, 9): 0.8294641525482448, (2000, 10): 1.136002398465397, (2000, 11): 0.7332513652958302, (2000, 12): 0.7179854954011051, (2001, 1): 1.2405938678639528, (2001, 2): 1.3443045113742507, (2001, 3): 0.6103847170941851, (2001, 4): 0.7889614738491562, (2001, 5): 0.8836293099223432, (2001, 6): 0.6398611202963761, (2001, 7): 0.9101469931017445, (2001, 8): 0.7110037038066164, (2001, 9): 0.6944426634848774, (2001, 10): 0.743163322298458, (2001, 11): 1.641550980464115, (2001, 12): 1.3732158510392996, (2002, 1): 1.5435232330958792, (2002, 2): 1.2671850986946984, (2002, 3): 1.0342041708473233, (2002, 4): 0.7413588349552598, (2002, 5): 0.6662992151308426, (2002, 6): 0.42994912796486, (2002, 7): 0.4614513838546502, (2002, 8): 0.33740401239093476, (2002, 9): 1.0786288336533776, (2002, 10): 0.7201737273806963, (2002, 11): 0.7295008767212589, (2002, 12): 1.1650001638656944, (2003, 1): 1.3133849508688604, (2003, 2): 1.2843906275766657, (2003, 3): 1.7162898530793824, (2003, 4): 0.8887775424337104}, 'UAB': {(2000, 1): 1.554890436351146, (2000, 2): 1.5095900904911665, (2000, 3): 1.0071870623725239, (2000, 4): 0.90356286432306, (2000, 5): 0.4465354831707406, (2000, 6): 0.7661408912717084, (2000, 7): 0.5077752357230882, (2000, 8): 0.3953245353667669, (2000, 9): 0.9392679381098866, (2000, 10): 1.3380420085685858, (2000, 11): 1.0678858961974096, (2000, 12): 1.4888333841570116, (2001, 1): 0.9310731584491987, (2001, 2): 1.23831178634754, (2001, 3): 1.0869269288933887, (2001, 4): 1.0308784345088216, (2001, 5): 0.35816971016021937, (2001, 6): 0.7424053638684773, (2001, 7): 0.5807096459444877, (2001, 8): 0.59610372872468, (2001, 9): 1.041214912761702, (2001, 10): 1.5274150222159848, (2001, 11): 1.0127948462369876, (2001, 12): 1.2298052365051528, (2002, 1): 1.3749320279589967, (2002, 2): 1.1748243056243677, (2002, 3): 1.2195563075988665, (2002, 4): 0.8655470797340363, (2002, 5): 0.9279975373039556, (2002, 6): 1.010587765336004, (2002, 7): 0.3876746609412122, (2002, 8): 0.6273582470414101, (2002, 9): 0.5008568258056315, (2002, 10): 1.237209343982918, (2002, 11): 1.4064113075563864, (2002, 12): 1.1152126487756142, (2003, 1): 1.4878686046714453, (2003, 2): 1.2739834005782047, (2003, 3): 0.9862452377520529, (2003, 4): 0.6976324874080646}, 'UKE': {(2000, 1): 1.2689103271741642, (2000, 2): 1.448850876836714, (2000, 3): 1.1215747148313229, (2000, 4): 0.9697809286187101, (2000, 5): 0.8580554214102996, (2000, 6): 0.7055802545680286, (2000, 7): 0.569177769555276, (2000, 8): 0.5093118817682774, (2000, 9): 0.9217522030201674, (2000, 10): 1.2829829181484778, (2000, 11): 1.37989309039233, (2000, 12): 1.4692246015519577, (2001, 1): 1.2514806457756777, (2001, 2): 0.9622779992784523, (2001, 3): 1.1523632901686611, (2001, 4): 1.170899452345576, (2001, 5): 0.8924892052905243, (2001, 6): 0.6389294687842351, (2001, 7): 0.6533905745080162, (2001, 8): 0.718973320926109, (2001, 9): 1.0480403777450924, (2001, 10): 1.4336148798358987, (2001, 11): 1.0187400631716952, (2001, 12): 1.2025621345600492, (2002, 1): 1.41115713291571, (2002, 2): 1.7419895139140342, (2002, 3): 0.9845770676939698, (2002, 4): 1.0031594716517345, (2002, 5): 1.1751386096650493, (2002, 6): 0.6471700368073762, (2002, 7): 0.6470519297198025, (2002, 8): 
0.32799519360372364, (2002, 9): 0.6818357988453819, (2002, 10): 1.0383455732558626, (2002, 11): 1.1494409536769357, (2002, 12): 1.3192795655449814, (2003, 1): 1.6602002396523663, (2003, 2): 0.9006465583294068, (2003, 3): 0.9067321375346832, (2003, 4): 0.9961612972679239}, 'UKF': {(2000, 1): 1.3249433826076369, (2000, 2): 1.4834390048790316, (2000, 3): 1.1720744005021664, (2000, 4): 0.9721227613412933, (2000, 5): 0.8515448592728304, (2000, 6): 0.6193634003508302, (2000, 7): 0.5370454750545013, (2000, 8): 0.43042095662362834, (2000, 9): 0.8263760497346081, (2000, 10): 1.3281730350770156, (2000, 11): 1.4011761926628954, (2000, 12): 1.6555818191216964, (2001, 1): 1.3357608146005617, (2001, 2): 1.034198046367072, (2001, 3): 1.121260727670918, (2001, 4): 1.126124763713561, (2001, 5): 0.8756813654921048, (2001, 6): 0.5645819016595045, (2001, 7): 0.6105116367301959, (2001, 8): 0.6219140873691195, (2001, 9): 1.051983907255014, (2001, 10): 1.39389121228373, (2001, 11): 1.0481969547160257, (2001, 12): 1.2687238324230548, (2002, 1): 1.5254562990321106, (2002, 2): 2.051792727712291, (2002, 3): 1.0025016594272602, (2002, 4): 1.0659149467240578, (2002, 5): 1.1244210818495683, (2002, 6): 0.5315380292328106, (2002, 7): 0.5749383603397371, (2002, 8): 0.2885568506939066, (2002, 9): 0.6685273721263465, (2002, 10): 1.0808780812122165, (2002, 11): 1.0930127690879337, (2002, 12): 1.3422979556904777, (2003, 1): 1.7777334775059637, (2003, 2): 0.8524602861172142, (2003, 3): 0.9132216671993684, (2003, 4): 1.0135032567751208}, 'UOR': {(2000, 1): nan, (2000, 2): nan, (2000, 3): nan, (2000, 4): nan, (2000, 5): nan, (2000, 6): nan, (2000, 7): nan, (2000, 8): nan, (2000, 9): nan, (2000, 10): nan, (2000, 11): nan, (2000, 12): nan, (2001, 1): nan, (2001, 2): nan, (2001, 3): nan, (2001, 4): nan, (2001, 5): nan, (2001, 6): nan, (2001, 7): nan, (2001, 8): nan, (2001, 9): nan, (2001, 10): nan, (2001, 11): nan, (2001, 12): nan, (2002, 1): 1.568196798031884, (2002, 2): 1.6355130591248717, (2002, 3): 1.1190957069138054, (2002, 4): 0.8733079573257021, (2002, 5): 1.1104032676426743, (2002, 6): 1.0014213082673353, (2002, 7): 0.46462779835652446, (2002, 8): 0.43754519044744683, (2002, 9): 0.34292019892534936, (2002, 10): 1.1210636145981827, (2002, 11): 1.332485106490388, (2002, 12): 1.2229630481992118, (2003, 1): 1.508984812821176, (2003, 2): 1.1887845201515987, (2003, 3): 0.8611108497973553, (2003, 4): 0.9059291273938993}, 'UTH': {(2000, 1): 1.3359812952143335, (2000, 2): 1.4988411220343303, (2000, 3): 1.1487175960792586, (2000, 4): 0.9738685779965163, (2000, 5): 0.8745674941392821, (2000, 6): 0.6443392124290698, (2000, 7): 0.5668678025838506, (2000, 8): 0.436425063880557, (2000, 9): 0.8719463874510108, (2000, 10): 1.3778434744682042, (2000, 11): 1.500864165719369, (2000, 12): 1.5967169053821837, (2001, 1): 1.3473679073576996, (2001, 2): 1.0010888072264865, (2001, 3): 1.1149274936810654, (2001, 4): 1.1446113967153915, (2001, 5): 0.9704882177138088, (2001, 6): 0.5855553430495062, (2001, 7): 0.6431702098251845, (2001, 8): 0.6673872030730853, (2001, 9): 1.0184462652370758, (2001, 10): 1.421874247940507, (2001, 11): 1.0718512105705393, (2001, 12): 1.2613274250630966, (2002, 1): 1.4613909819098134, (2002, 2): 1.9547228219671733, (2002, 3): 0.9969535824706017, (2002, 4): 1.0390713222466403, (2002, 5): 1.1405910444641447, (2002, 6): 0.5816438797339953, (2002, 7): 0.6358023493332612, (2002, 8): 0.28488584160071145, (2002, 9): 0.6864525632252447, (2002, 10): 1.0811011952357756, (2002, 11): 1.1342053981023652, (2002, 12): 1.3175258026284367, 
(2003, 1): 1.7155125770519015, (2003, 2): 0.8655302690735893, (2003, 3): 0.9004624256328815, (2003, 4): 1.0362839133127566}, 'WH1': {(2000, 1): 1.479678096243991, (2000, 2): 1.4655890964853133, (2000, 3): 1.1907311408382424, (2000, 4): 0.7624104192520295, (2000, 5): 0.6570086687440684, (2000, 6): 1.0220573194773563, (2000, 7): 0.7128320531734508, (2000, 8): 0.5808600771066151, (2000, 9): 1.0547152354298281, (2000, 10): 1.225085170050707, (2000, 11): 1.4736262369880635, (2000, 12): 1.1945189658959103, (2001, 1): 1.0201317967474317, (2001, 2): 1.012860123046342, (2001, 3): 0.9979750061138811, (2001, 4): 0.7657471321151421, (2001, 5): 0.6834537983986184, (2001, 6): 0.7291585577119047, (2001, 7): 0.546398452720382, (2001, 8): 0.7511935928668964, (2001, 9): 0.9255563657580868, (2001, 10): 1.504270396795507, (2001, 11): 1.3057348102285011, (2001, 12): 0.95159456191817, (2002, 1): 1.343115697136973, (2002, 2): 1.458869255883179, (2002, 3): 1.1351981097654034, (2002, 4): 0.6741782716236064, (2002, 5): 0.8381647865673924, (2002, 6): 0.943335749690483, (2002, 7): 0.7771730299735837, (2002, 8): 0.44638095117997273, (2002, 9): 0.8030759725471737, (2002, 10): 1.119470100075558, (2002, 11): 0.8945063067825321, (2002, 12): 1.259814754626267, (2003, 1): 1.346474717311829, (2003, 2): 0.5614396933726116, (2003, 3): 0.9001633009379663, (2003, 4): 1.073275299867811}, 'WH3': {(2000, 1): 1.4338343574503027, (2000, 2): 1.384953627743628, (2000, 3): 1.1535912277947202, (2000, 4): 0.7809206933722733, (2000, 5): 0.6605269426785618, (2000, 6): 1.0027970510744195, (2000, 7): 0.745909465377024, (2000, 8): 0.6482779349782384, (2000, 9): 1.072484454331807, (2000, 10): 1.184206133353829, (2000, 11): 1.37061783182705, (2000, 12): 1.1447962269107579, (2001, 1): 1.0056652795344019, (2001, 2): 1.0245671990470344, (2001, 3): 1.0022481060568582, (2001, 4): 0.7777726845228051, (2001, 5): 0.7053399171788303, (2001, 6): 0.7832286207773255, (2001, 7): 0.5623766087315936, (2001, 8): 0.7462191387020645, (2001, 9): 0.9638196538715932, (2001, 10): 1.4669122980607163, (2001, 11): 1.2973547468521627, (2001, 12): 0.9414453931278712, (2002, 1): 1.2898967228235176, (2002, 2): 1.3607244135464809, (2002, 3): 1.1432591220278236, (2002, 4): 0.7083203488460438, (2002, 5): 0.8652599449264562, (2002, 6): 0.9549555936338141, (2002, 7): 0.7822433450003142, (2002, 8): 0.4705431907982273, (2002, 9): 0.850332652057204, (2002, 10): 1.1519949214772387, (2002, 11): 0.9248217294538993, (2002, 12): 1.2727311065935878, (2003, 1): 1.2870399021910714, (2003, 2): 0.5604986980213141, (2003, 3): 0.8779326785220639, (2003, 4): 1.06803082532777}}
if you have data already in R you can run: library(reticulate) d <- py_to_r(py_eval('my_data().reset_index()')) d which should give you: year month DKF GDT GSB HKZ SLG SRL UAB UKE 1 2000 1 1.3517226 NaN 1.4273286 1.3554525 1.5487504 2.0594033 1.5548904 1.2689103 2 2000 2 1.2830497 NaN 1.3664731 1.4951631 1.5415838 1.3791095 1.5095901 1.4488509 3 2000 3 1.2455233 NaN 1.1615090 1.2105682 1.2282127 1.7505922 1.0071871 1.1215747 4 2000 4 0.7299107 NaN 0.7851300 0.8072930 0.6324440 0.8446109 0.9035629 0.9697809 5 2000 5 0.7835660 NaN 0.7422343 0.9269770 0.7158862 1.2051921 0.4465355 0.8580554 if you are dealing with time series data: xts::xts(d[-(1:2)], zoo::as.yearmon(paste(d[,1], d[,2]), '%Y %m')) DKF GDT GSB HKZ SLG SRL UAB UKE UKF Jan 2000 1.3517226 NaN 1.4273286 1.3554525 1.5487504 2.0594033 1.5548904 1.2689103 1.3249434 Feb 2000 1.2830497 NaN 1.3664731 1.4951631 1.5415838 1.3791095 1.5095901 1.4488509 1.4834390 Mar 2000 1.2455233 NaN 1.1615090 1.2105682 1.2282127 1.7505922 1.0071871 1.1215747 1.1720744 Apr 2000 0.7299107 NaN 0.7851300 0.8072930 0.6324440 0.8446109 0.9035629 0.9697809 0.9721228 May 2000 0.7835660 NaN 0.7422343 0.9269770 0.7158862 1.2051921 0.4465355 0.8580554 0.8515449 Jun 2000 0.9378140 NaN 1.0315809 0.8022017 0.8454897 1.0101836 0.7661409 0.7055803 0.6193634 Jul 2000 0.6590646 NaN 0.7242494 0.6965228 0.4551865 0.6508079 0.5077752 0.5691778 0.5370455 Aug 2000 0.6744555 NaN 0.5766163 0.4565830 0.5826159 0.7407540 0.3953245 0.5093119 0.4304210 And as ftable format: a <- ftable(xtabs(value~., pivot_longer(d, -c(Year, month))), row.vars = 1:2) replace(a, a==0, NaN) name DKF GDT GSB HKZ SLG SRL UAB UKE UKF UOR UTH WH1 WH3 Year month 2000 1 1.3517226 NaN 1.4273286 1.3554525 1.5487504 2.0594033 1.5548904 1.2689103 1.3249434 NaN 1.3359813 1.4796781 1.4338344 2 1.2830497 NaN 1.3664731 1.4951631 1.5415838 1.3791095 1.5095901 1.4488509 1.4834390 NaN 1.4988411 1.4655891 1.3849536 3 1.2455233 NaN 1.1615090 1.2105682 1.2282127 1.7505922 1.0071871 1.1215747 1.1720744 NaN 1.1487176 1.1907311 1.1535912 4 0.7299107 NaN 0.7851300 0.8072930 0.6324440 0.8446109 0.9035629 0.9697809 0.9721228 NaN 0.9738686 0.7624104 0.7809207 5 0.7835660 NaN 0.7422343 0.9269770 0.7158862 1.2051921 0.4465355 0.8580554 0.8515449 NaN 0.8745675 0.6570087 0.6605269 6 0.9378140 NaN 1.0315809 0.8022017 0.8454897 1.0101836 0.7661409 0.7055803 0.6193634 NaN 0.6443392 1.0220573 1.0027971 7 0.6590646 NaN 0.7242494 0.6965228 0.4551865 0.6508079 0.5077752 0.5691778 0.5370455 NaN 0.5668678 0.7128321 0.7459095 8 0.6744555 NaN 0.5766163 0.4565830 0.5826159 0.7407540 0.3953245 0.5093119 0.4304210 NaN 0.4364251 0.5808601 0.6482779 9 1.0028705 NaN 1.0332126 0.8919033 0.9977976 0.8294642 0.9392679 0.9217522 0.8263760 NaN 0.8719464 1.0547152 1.0724845 10 0.9667005 NaN 1.1857205 1.3495322 1.1389897 1.1360024 1.3380420 1.2829829 1.3281730 NaN 1.3778435 1.2250852 1.1842061 11 1.0678912 NaN 1.3665028 1.5861421 1.0930165 0.7332514 1.0678859 1.3798931 1.4011762 NaN 1.5008642 1.4736262 1.3706178 12 0.9112284 NaN 1.1573820 1.4914961 0.8898335 0.7179855 1.4888334 1.4692246 1.6555818 NaN 1.5967169 1.1945190 1.1447962 2001 1 0.8881926 NaN 0.9902550 1.1734640 0.8842005 1.2405939 0.9310732 1.2514806 1.3357608 NaN 1.3473679 1.0201318 1.0056653 2 1.0094984 NaN 0.9724394 0.9458569 1.0275629 1.3443045 1.2383118 0.9622780 1.0341980 NaN 1.0010888 1.0128601 1.0245672 3 0.9234819 NaN 1.0488150 1.0691428 0.8746407 0.6103847 1.0869269 1.1523633 1.1212607 NaN 1.1149275 0.9979750 1.0022481 4 0.8256953 NaN 0.8273479 1.1488059 0.7131039 0.7889615 1.0308784 
1.1708995 1.1261248 NaN 1.1446114 0.7657471 0.7777727 5 0.7823149 NaN 0.7430594 1.0912435 0.6837955 0.8836293 0.3581697 0.8924892 0.8756814 NaN 0.9704882 0.6834538 0.7053399 6 0.5865619 NaN 0.7536325 0.7592430 0.4790986 0.6398611 0.7424054 0.6389295 0.5645819 NaN 0.5855553 0.7291586 0.7832286 7 0.5986398 NaN 0.6349135 0.7202558 0.4988441 0.9101470 0.5807096 0.6533906 0.6105116 NaN 0.6431702 0.5463985 0.5623766 8 0.9152978 NaN 0.7826830 0.7608616 0.8223210 0.7110037 0.5961037 0.7189733 0.6219141 NaN 0.6673872 0.7511936 0.7462191 9 0.7957461 NaN 1.0233964 1.1328043 0.7417117 0.6944427 1.0412149 1.0480404 1.0519839 NaN 1.0184463 0.9255564 0.9638197 10 1.2289234 NaN 1.4129187 1.4500452 1.3284355 0.7431633 1.5274150 1.4336149 1.3938912 NaN 1.4218742 1.5042704 1.4669123 11 1.2159037 NaN 1.2605810 1.0657650 1.3409614 1.6415510 1.0127948 1.0187401 1.0481970 NaN 1.0718512 1.3057348 1.2973547 12 0.9411120 NaN 0.9576845 1.1213623 0.9324084 1.3732159 1.2298052 1.2025621 1.2687238 NaN 1.2613274 0.9515946 0.9414454
3
2
72,702,555
2022-6-21
https://stackoverflow.com/questions/72702555/how-do-i-fill-between-two-lineplots-in-seaborn
I have a graph that plots Bayesian mean point data in a line with credible intervals around the mean value. I'm trying to fill between to two credibility lines with a translucent color so the mean line really pops through. I've tried the following: plt.fill_between(b.get_data(), c.data_get(), color='blue', alpha = .5) I'm pulling this data out of an arviz inference set. Here is a toy dataset. import numpy as np import seaborn as sns import matplotlib.pyplot as plt mean = np.array([861.98525 , 705.23875 , 640.14575 , 658.727625, 728.23775 , 792.4645 , 803.045375, 763.425875, 721.785375, 713.182375, 740.543375, 781.466875]) confidence1 = np.array([788. , 607. , 493. , 443.975, 435.975, 412. , 366.975, 295. , 243. , 207. , 181. , 161. ]) confidence2 = np.array([ 938. , 811. , 815. , 935.025, 1150.025, 1391.05 , 1556.05 , 1624.05 , 1689.025, 1829. , 2078.125, 2390.025]) date = list(df_sat_test['t'].unique()) fig = plt.figure(figsize=(15,4)) a=sns.lineplot(x =date, y = mean, label = 'Posterior Predictive') b=sns.lineplot(x =date, y = confidence1, label = 'Confidence', color = 'r') c=sns.lineplot(x =date, y = confidence2, label = 'Confidence', color = 'r') # plt.fill_between(b.get_data(), c.data_get(), color='blue', alpha = .5) sns.scatterplot(x =df_sat_test['t'], y = np.array(test_ppc.observed_data.obs), label = 'True Value') plt.legend()
Once you have the Axes returned by the last lineplot call, its get_lines() gives you the data of all three plotted lines; you can then pass the x-data of the first line together with the y-data of the second and third lines to fill_between to get the intended result. fig = plt.figure(figsize=(15,4)) a=sns.lineplot(x=date, y=mean, label = 'Posterior Predictive') b=sns.lineplot(x=date, y=confidence1, label='Confidence', color='r') c=sns.lineplot(x=date, y=confidence2, label='Confidence', color='r') line = c.get_lines() plt.fill_between(line[0].get_xdata(), line[1].get_ydata(), line[2].get_ydata(), color='blue', alpha=.5) #sns.scatterplot(x =df_sat_test['t'], y = np.array(test_ppc.observed_data.obs), label = 'True Value') plt.legend() plt.show()
3
8
72,687,875
2022-6-20
https://stackoverflow.com/questions/72687875/how-to-detect-which-key-was-pressed-on-keyboard-mouse-using-tkinter-python
I'm using tkinter to make a Python app and I need to let the user choose which key they will use to do some specific action. Then I want to make a button such that, when the user clicks it, the next key they press, whether on the keyboard or the mouse, is detected and then bound to that specific action. How can I get the key pressed by the user?
To expand on @darthmorf's answer in order to also detect mouse button events, you'll need to add a separate event binding for mouse buttons with either the '<Button>' event, which will fire on any mouse button press, or '<Button-1>' (or 2 or 3), which will fire when that specific button is pressed (where '1' is the left mouse button, '2' is the middle, and '3' is the right...though on macOS the middle and right buttons may be swapped). import tkinter as tk root = tk.Tk() def on_event(event): text = event.char if event.num == '??' else event.num label = tk.Label(root, text=text) label.place(x=50, y=50) root.bind('<Key>', on_event) root.bind('<Button>', on_event) root.mainloop()
3
3
72,699,442
2022-6-21
https://stackoverflow.com/questions/72699442/hangman-game-that-doesnt-show-any-errors-however-terminal-shows-nothing
import random from Words import words import string def get_valid_word(words): word = random.choice(words) while '-' or ' ' in word: word = random.choice(words) return word.upper() def hangman() -> object: word = get_valid_word(words) word_letter = set(word) alphabet = set(string.ascii_uppercase) used_letter = set() while len(word) > 0: print("You have used these letter", ' '.join(used_letter)) word_list = [letter if letter in used_letter else '-' for letter in word] print("Current word: ", ' '.join(word_list)) user_letter = input("Guess a letter: ") if user_letter in alphabet - used_letter: used_letter.add(user_letter) if user_letter in word_letter: word_letter.remove(user_letter) elif user_letter in used_letter: print("You have already used this letter: ") else: print("Invalid input") if __name__ == '__main__': hangman() I can't bring code to execute function hangman(). Terminal only shows the following output: /Users/nighttwinkle/PycharmProjects/pythonProject/venv/bin/python /Users/nighttwinkle/PycharmProjects/pythonProject/main.py
The issue appears to be with the line while '-' or ' ' in word: the syntax is slightly off, meaning that this condition always evaluates to True, so the code enters an infinite loop at this point. Python interprets the above as while ('-') or (' ' in word):, and thus the string '-' is one half of the or statement; since any non-empty string evaluates to True, the while condition effectively becomes while True:. Try this instead: while '-' in word or ' ' in word:
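A side note beyond this fix: the game loop in the posted code checks len(word) > 0, which never changes, so even with the corrected get_valid_word the game cannot end; the check has to be on the set of unguessed letters. A small runnable sketch of the difference:

word = "TEST"
word_letter = set(word)   # letters the player still has to find
used_letter = set()

for guess in "TES":       # stand-in for the player's input() calls
    used_letter.add(guess)
    word_letter.discard(guess)

# len(word) > 0 is still True here, so the posted loop would never stop;
# len(word_letter) > 0 is the condition that actually ends the game
print(len(word) > 0, len(word_letter) > 0)   # True False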
3
2
72,699,262
2022-6-21
https://stackoverflow.com/questions/72699262/getting-cell-value-by-row-name-and-column-name-from-dataframe
Let's say I have the following data frame name age favorite_color grade 0 Willard Morris 20 blue 88 1 Al Jennings 19 blue 92 2 Omar Mullins 22 yellow 95 3 Spencer McDaniel 21 green 70 And I'm trying to get the grade for Omar, which is "95". It can be easily obtained using ddf = df.loc[[2], ['grade']] print(ddf) However, I want to use his name "Omar" instead of the raw index "2". Is it possible? I tried the following syntax but it didn't work: ddf = df.loc[['Omar Mullins'], ['grade']]
Try this: ddf = df[df['name'] == 'Omar Mullins']['grade'] to output the grade values. Instead: ddf = df[df['name'] == 'Omar Mullins'] will output the full row.
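If a plain scalar (95) is wanted rather than a one-row result, a common follow-up is to combine the boolean mask with .loc and take the first match; a small self-contained example using the question's data:

import pandas as pd

df = pd.DataFrame({'name': ['Willard Morris', 'Al Jennings', 'Omar Mullins', 'Spencer McDaniel'],
                   'age': [20, 19, 22, 21],
                   'favorite_color': ['blue', 'blue', 'yellow', 'green'],
                   'grade': [88, 92, 95, 70]})

# .loc with a boolean mask selects the matching row(s); .iloc[0] turns the
# one-element Series into the plain value 95
grade = df.loc[df['name'] == 'Omar Mullins', 'grade'].iloc[0]
print(grade)  # 95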
3
6
72,692,790
2022-6-20
https://stackoverflow.com/questions/72692790/numpy-determine-the-corresponding-float-datatype-from-a-complex-datatype
I'm using two numpy arrays of floating point numbers Rs = np.linspace(*realBounds, realResolution, dtype=fdtype) Is = np.linspace(*imagBounds, imagResolution, dtype=fdtype) to make a grid Zs of complex numbers with shape (realResolution, imagResolution). I'd like to only specify the datatype of Zs and use this to determine the float datatype fdtype. As I understand it from this page, complex datatypes are always represented as two floats with the same datatype, hence why Rs and Is both have dtype=fdtype. If the datatype of Zs is specified as np.csingle then I'd like fdtype to be np.single, if it's specified as np.dtype("complex128") then I'd like fdtype to be np.dtype("float64"), and so on. Is there a nice way to do this? Edit: I'm equally interested in the inverse function which sends, for example, np.single to np.csingle.
I don't think there is such a function in numpy but you can easily roll your own using num, which uniquely identifies each of the built-in types: def get_corresonding_dtype(dt): return {11: np.csingle, 12: np.cdouble, 13: np.clongdouble, 14: np.single, 15: np.double, 16: np.longdouble}[np.dtype(dt).num] The dict was obtained from {np.dtype(dt).num: dt for dt in (np.single, np.double, np.longdouble, np.csingle, np.cdouble, np.clongdouble)} which returns {11: numpy.float32, 12: numpy.float64, 13: numpy.longdouble, 14: numpy.complex64, 15: numpy.complex128, 16: numpy.clongdouble} on win-amd64 (sysconfig.get_platform()). The names may vary, e.g. on linux-x86_64 it shows numpy.complex256 instead of numpy.clongdouble, but thanks to num you don't need to pay attention to it. So for instance for all of np.double, np.float_, np.float64, float, 'float64' etc., get_corresonding_dtype will return numpy.complex128. If you like you can also add np.half as 23: np.csingle to the dict, as there's no complex half/float16 type in numpy.
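A shorter alternative, not from the answer above, relies on the fact that the .real view of a complex array has the matching float dtype, and that np.result_type can promote a float dtype to its complex counterpart; a sketch:

import numpy as np

def complex_to_float(cdtype):
    # .real of a (here empty) complex array exposes the corresponding float dtype
    return np.empty(0, dtype=cdtype).real.dtype

def float_to_complex(fdtype):
    # promoting with the smallest complex dtype yields the matching complex dtype
    return np.result_type(fdtype, np.complex64)

print(complex_to_float(np.csingle))            # float32
print(float_to_complex(np.dtype('float64')))   # complex128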
4
2
72,693,302
2022-6-20
https://stackoverflow.com/questions/72693302/merge-dataframes-and-extract-only-the-rows-of-the-dataframe-that-does-not-exist
I am trying to merge two dataframes and create a new dataframe containing only the rows from the first dataframe that do not exist in the second one. For example: The dataframes that I have as input: The dataframe that I want to have as output: Do you know if there is a way to do that? If you could help me, I would be more than thankful!! Thanks, Eleni
Creating some data, we have two dataframes: import pandas as pd import numpy as np rng = np.random.default_rng(seed=5) df1 = pd.DataFrame(data=rng.integers(0, 5, size=(5, 2)), columns=list("ab")) df2 = pd.DataFrame(data=rng.integers(0, 5, size=(5, 2)), columns=list("ab")) # df1 a b 0 3 4 1 0 4 2 2 2 3 3 1 4 4 0 # df2 a b 0 1 1 1 2 2 2 0 0 3 0 0 4 0 4 We can use pandas.merge to combine equal rows, and we can use its indicator=True feature to mark those rows that are only from the left (and right, when applicable). Since we only need those that are unique to the left, we can merge using how="left" to be more efficient. dfm = pd.merge(df1, df2, on=list(df1.columns), how="left", indicator=True) # dfm a b _merge 0 3 4 left_only 1 0 4 both 2 2 2 both 3 3 1 left_only 4 4 0 left_only Great, so then the final result is using the merge but only keeping those that have an indicator of left_only: (dfm.loc[dfm._merge == 'left_only'] .drop(columns=['_merge'])) a b 0 3 4 3 3 1 4 4 0 If you want to deduplicate by a subset of the columns, that is also possible. In that case I would do the merge like this, repeating the subset so that we don't get other columns in duplicate versions from the left and right side. pd.merge(df1, df2[subset], on=subset, how="left", indicator=True)
4
3
72,677,648
2022-6-19
https://stackoverflow.com/questions/72677648/how-to-iterate-through-list-infinitely-with-1-offset-each-loop
I want to infinitely iterate through the list from 0 to the end, but in the next loop I want to start at 1 to the end plus 0, and the next loop would start at 2 to the end plus 0, 1, up to the last item where it would start again at 0 and go to the end. Here is my code: a = [ 0, 1, 2 ] offset = 0 rotate = 0 while True: print(a[rotate]) offset += 1 rotate += 1 if offset >= len(a): offset = 0 rotate += 1 if rotate >= len(a): rotate = 0 This is the solution I came up with so far. It's far from perfect. The result that I want is: 0, 1, 2 # first iteration 1, 2, 0 # second iteration 2, 0, 1 # third iteration 0, 1, 2 # fourth iteration and so on.
You can use a deque which has a built-in and efficient rotate function (~O(1)): >>> d = deque([0,1,2]) >>> for _ in range(10): ... print(*d) ... d.rotate(-1) # negative -> rotate to the left ... 0 1 2 1 2 0 2 0 1 0 1 2 1 2 0 2 0 1 0 1 2 1 2 0 2 0 1 0 1 2
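If you would rather avoid the extra import, the same output can be produced with plain modular index arithmetic; a minimal sketch of that idea:
a = [0, 1, 2]
n = len(a)
start = 0
for _ in range(10):  # use `while True:` for a truly infinite loop
    print(*(a[(start + k) % n] for k in range(n)))
    start = (start + 1) % n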
28
49
72,686,762
2022-6-20
https://stackoverflow.com/questions/72686762/how-to-apply-filter-on-django-manytomanyfield-so-that-multiple-value-of-the-fiel
class Publication(models.Model): title = models.CharField(max_length=30) class Article(models.Model): headline = models.CharField(max_length=100) publications = models.ManyToManyField(Publication) p1 = Publication.objects.create(title='The Python Journal') p2 = Publication.objects.create(title='Science News') p3 = Publication.objects.create(title='Science Weekly') I want to filter articles published in both p1 and p3. They might or might not be published in other publications, but they must be published in both p1 and p3. I have tried: Article.objects.filter(Q(publications=p1) & Q(publications=p3)) but it returns an empty queryset, which is not correct
If you chain multiple calls to filter(), they behave as if they were connected with an AND: articles = Article.objects.filter(publications=p1).filter(publications=p3).distinct()
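For illustration, a minimal sketch using the models from the question (the headlines are made up): each filter() call on a many-to-many relation adds its own join, whereas Q(publications=p1) & Q(publications=p3) is applied to a single join row, which can never equal both publications at once, hence the empty queryset.
a1 = Article.objects.create(headline='in both')
a1.publications.add(p1, p3)
a2 = Article.objects.create(headline='only p1')
a2.publications.add(p1)

qs = Article.objects.filter(publications=p1).filter(publications=p3).distinct()
print(qs.count())  # 1 -> only a1 is returned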
5
4
72,683,540
2022-6-20
https://stackoverflow.com/questions/72683540/count-unique-words-with-collections-and-dataframe
I have a problem, I want to count the unique words from a dataframe, but unfortunately it only counts the first sentences. text 0 hello is a unique sentences 1 hello this is a test 2 does this works import pandas as pd d = { "text": ["hello is a unique sentences", "hello this is a test", "does this works"], } df = pd.DataFrame(data=d) from collections import Counter # Count unique words def counter_word(text_col): print(len(text_col.values)) count = Counter() for i, text in enumerate(text_col.values): print(i) for word in text.split(): count[word] += 1 return count counter = counter_word(df['text']) len(counter)
I think it is simpler to join the values with a space, then split into words and count: counter = Counter((' '.join(df['text'])).split()) print (counter) Counter({'hello': 2, 'is': 2, 'a': 2, 'this': 2, 'unique': 1, 'sentences': 1, 'test': 1, 'does': 1, 'works': 1})
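An alternative that stays inside pandas, in case a Series of counts is more convenient than a Counter; a small sketch using the same df:
counts = df['text'].str.split().explode().value_counts()
print(counts)
# hello 2, is 2, a 2, this 2, and 1 for each remaining word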
3
2
72,680,516
2022-6-19
https://stackoverflow.com/questions/72680516/having-problem-with-drawing-lines-thickness-in-pygame
Here is the board I drew using pygame: https://i.sstatic.net/Hne6A.png I'm facing a bug with the thickness of the last two lines as I marked them on the image. I believe there is something wrong with the if statement in my code but I can't quite figure it out, it just won't take effect. here is the code that has drawn the board above: import pygame, sys pygame.init() width, height = 750, 750 rows, cols = 9, 9 BLACK = (0,0,0) screen = pygame.display.set_mode((width, height)) screen.fill((255, 255, 255, 255)) running = True while running: for event in pygame.event.get(): if event.type == pygame.QUIT: running = False sys.exit() surf = pygame.Surface((600, 600)) surf.fill((255,255,255)) padding = surf.get_width()/9 for i in range(rows+1): if i % 3 == 0: thick = 4 else: thick = 1 pygame.draw.line(surf, BLACK, (0, i*padding), (width, i*padding), width=thick) pygame.draw.line(surf, BLACK, (i*padding, 0), (i*padding, height), width=thick) surf_center = ( (width-surf.get_width())/2, (height-surf.get_height())/2 ) screen.blit(surf, surf_center) pygame.display.update() pygame.display.flip()
To draw a thick line, half the thickness is applied to both sides of the line. The simplest solution is to make the target surface a little larger and add a small offset to the coordinates of the lines. For performance reasons, I also recommend creating the surface before the application loop and continuously blit it in the application loop: import pygame, sys pygame.init() width, height = 750, 750 rows, cols = 9, 9 BLACK = (0,0,0) screen = pygame.display.set_mode((width, height)) surf_size = 600 surf = pygame.Surface((surf_size+4, surf_size+4)) surf.fill((255,255,255)) padding = surf_size/9 for i in range(rows+1): thick = 4 if i % 3 == 0 else 1 offset = i * padding + 1 pygame.draw.line(surf, BLACK, (0, offset), (width, offset), width=thick) pygame.draw.line(surf, BLACK, (offset, 0), (offset, height), width=thick) running = True while running: for event in pygame.event.get(): if event.type == pygame.QUIT: running = False screen.fill((255, 255, 255, 255)) screen.blit(surf, surf.get_rect(center = screen.get_rect().center)) pygame.display.update() pygame.display.flip() pygame.quit() sys.exit()
4
2
72,636,013
2022-6-15
https://stackoverflow.com/questions/72636013/efficiently-find-the-indices-of-shared-values-with-repeats-between-two-large-a
Problem description Let's take this simple array set # 0,1,2,3,4,5 a = np.array([1,1,3,4,6]) b = np.array([6,6,1,3]) From these two arrays I want to get the indices of all possible matches. So for number 1 we get 0,2 and 1,2, with the complete output looking like: 0,2 # 1 1,2 # 1 2,3 # 3 4,0 # 6 4,1 # 6 Note that the arrays are (not yet) sorted neither do they only contain unique elements - two conditions often assumed in other answers (see bottom). The above example is very small, however, I have to apply this to ~40K element arrays. Tried approaches 1.Python loop approach indx = [] for i, aval in enumerate(a): for j, bval in enumerate(b): if aval == bval: indx.append([i,j]) # [[0, 2], [1, 2], [2, 3], [4, 0], [4, 1]] 2.Python dict approach adict = defaultdict(list) bdict = defaultdict(list) for i, aval in enumerate(a): adict[aval].append(i) for j, bval in enumerate(b): bdict[bval].append(j) for val, a_positions in adict.items(): for b_position in bdict[val]: for a_position in a_positions: print(a_position, b_position) 3.Numpy where print(np.where(a.reshape(-1,1) == b)) 4. Polars dataframes Converting it to a dataframe and then using Polars import polars as pl a = pl.DataFrame( {'x': a, 'apos':list(range(len(a)))} ) b = pl.DataFrame( {'x': b, 'apos':list(range(len(b)))} ) a.join(b, how='inner', on='x') "Big data" On "big" data using Polars seems the fastest now with around 0.02 secs. I'm suprised that creating DataFrames first and then joining them is faster than any other approach I could come up with so curious if there is any other way to beat it :) a = np.random.randint(0,1000, 40000) b = np.random.randint(0,1000, 40000) Using the above data: python loop: 218s python dict: 0.03s numpy.where: 4.5s polars: 0.02s How related questions didn't solve this Return common element indices between two numpy arrays, only returns the indexes of matchesin one of the arrays, not both Find indices of common values in two arrays, returns the matching indices of A with B and B with A, but not the paired indices (see example) Very surprised a DataFrame library is currently the fastest, so curious to see if there are other approaches to beat this speed :) Everything is fine, cython, numba, pythran etc.
NOTE: this post is now superseded by the faster alternative sort-based solution. The dict based approach is an algorithmically efficient solution compared to others (I guess Polars should use a similar approach). However, the overhead of CPython make it a bit slow. You can speed it up a bit using Numba. Here is an implementation: import numba as nb import numpy as np from numba.typed.typeddict import Dict from numba.typed.typedlist import ListType from numba.typed.typedlist import List IntList = ListType(nb.int32) @nb.njit('(int32[:], int32[:])') def numba_dict_based_compute(a, b): adict = Dict.empty(nb.int32, IntList) bdict = Dict.empty(nb.int32, IntList) for i, val in enumerate(a): if val in adict: adict[val].append(i) else: adict[val] = List([nb.int32(i)]) for i, val in enumerate(b): if val in bdict: bdict[val].append(i) else: bdict[val] = List([nb.int32(i)]) count = 0 for val, a_positions in adict.items(): if val not in bdict: continue b_positions = bdict[val] count += len(a_positions) * len(b_positions) result = np.empty((count, 2), dtype=np.int32) cur = 0 for val, a_positions in adict.items(): if val not in bdict: continue for b_position in bdict[val]: for a_position in a_positions: result[cur, 0] = a_position result[cur, 1] = b_position cur += 1 return result result = numba_dict_based_compute(a.astype(np.int32), b.astype(np.int32)) Note that computing in-place the value is a bit faster than storing them (and pre-compute the size of the array). However, if nothing is done in the loop, Numba can completely optimize it out and the benchmark would be biased. Alternatively, printing values is so slow that is would also biased the benchmark. Note also that the implementation assumes the numbers are 32-bit ones. A 64 bit implementation can be trivially implemented by replacing 32-bit types by 64-bit ones though it decreases the performance. This solution is about twice faster on my machine though it is a bit verbose and not very easy to read. The performance of the operation is mainly bounded by the speed of dictionary lookups. This implementation is a bit faster than the one of polars on my machine. Here are timings: Naive python loop: >100_000 ms Numpy where: 3_451 ms Python dict: 24.7 ms Polars: 12.3 ms This implementation: 11.3 ms (takes 13.2 ms on 64-bit values)
4
3
72,676,882
2022-6-19
https://stackoverflow.com/questions/72676882/force-an-abstract-class-attribute-to-be-implemented-by-concrete-class
Considering this abstract class and a class implementing it: from abc import ABC class FooBase(ABC): foo: str bar: str baz: int def __init__(self): self.bar = "bar" self.baz = "baz" class Foo(FooBase): foo: str = "hello" The idea here is that a Foo class that implements FooBase would be required to specify the value of the foo attribute, but the other attributes (bar and baz) would not need to be overwritten, as they're already handle by a method provided by the abstract class. From a MyPy type-checking perspective, is it possible to force Foo to declare the attribute foo and raise a type-checking error otherwise? EDIT: The rationale is that FooBase is part of a library, and the client code should be prevented from implementing it without specifying a value for foo. For bar and baz however, these are entirely managed by the library and the client doesn't care about them.
This is a partial answer. You can use class FooBase(ABC): @property @classmethod @abstractmethod def foo(cls) -> str: ... class Foo(FooBase): foo = "hi" def go(f: FooBase) -> str: return f.foo It's only partial because you'll only get a mypy error if you try to instantiate Foo without an initialized foo, like class Foo(FooBase): ... Foo() # error: Cannot instantiate abstract class "Foo" with abstract attribute "foo" This is the same behaviour as when you have a simple @abstractmethod. Only when instantiating it is the error raised. This is expected because Foo might not be intended as a concrete class, and may itself be subclassed. You can mitigate this somewhat by stating it is a concrete class with typing.final. The following will raise an error on the class itself. @final class Foo(FooBase): # error: Final class __main__.Foo has abstract attributes "foo" ...
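If a runtime check is acceptable as a complement to the static one above, __init_subclass__ can reject subclasses that forget the attribute at class-definition time; a sketch, assuming every direct subclass is concrete and must set foo:
from abc import ABC

class FooBase(ABC):
    foo: str  # annotation only, no value

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        if not hasattr(cls, "foo"):
            raise TypeError(f"{cls.__name__} must define the class attribute 'foo'")

class Foo(FooBase):
    foo = "hi"   # fine

class Bad(FooBase):  # raises TypeError as soon as the class is defined
    pass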
6
2
72,675,162
2022-6-19
https://stackoverflow.com/questions/72675162/pydantic-model-copied-when-passing-it-to-another-model
Pydantic copies a model when passing it to the constructor of another model. This fails: from pydantic import BaseModel class Child(BaseModel): pass class Parent(BaseModel): child: Child child = Child() parent = Parent(child=child) assert parent.child is child # Fails It seems child is copied when passing it to the parent's constructor and therefore the identities of child and parent.child are not the same. I would like to have them to be the same as I need to modify child's attributes later and the changes should be seen in parent.child. How do I make Pydantic not copy the child?
I found the answer myself. Seems this was an issue but it was fixed in a PR by creating a config option copy_on_model_validation. If this option is set to False for the child, then the child is not copied in the construction. This does not copy the child: from pydantic import BaseModel class Child(BaseModel): class Config: copy_on_model_validation = False class Parent(BaseModel): child: Child child = Child() parent = Parent(child=child) assert parent.child is child # Passes
5
8
72,668,076
2022-6-18
https://stackoverflow.com/questions/72668076/why-does-splitlines-not-give-the-expected-result-for-triple-dots-in-jupyter
I believe the following code s = ''' ... .o. ... ''' print(s.splitlines()) should print ['', '...', '.o.', '...'] Indeed, this is the case when Python is executed normally (example run on Wandbox is here). But the reality is ruthless (as usual); Google Colaboratory prints a result without "triple dots": I also tried the same code with a locally installed Jupyter (Python 3.7.13, Jupyter notebook 6.4.12, IPython 7.34.0) and it gave me the same result as Google Colaboratory. Does anyone know what causes this deletion of the triple dots?
Google Colab interprets ... as part of the prompt. You can change the prompt to some other string and the result will be as you expect: import sys sys.ps2 = '<<<' # default value is ... s = ''' ... .o. ... ''' print(s.splitlines()) ['', '...', '.o.', '...'] EDIT: As @user2357112 pointed out in the comments and in their answer, changing the prompt does not actually affect this. Here it seemed to work because adding more lines to the beginning of the cell makes the IPython interpreter think they are no longer part of the prompt. You can change your string to '\n...\n.o.\n...' as a workaround.
16
11
72,668,970
2022-6-18
https://stackoverflow.com/questions/72668970/drf-yasg-swagger-auto-schema-not-showing-the-required-parameters-for-post-reque
I am using django-yasg to create an api documentation. But no parameters are showing in the documentation to create post request. Following are my codes: After that in swagger api, no parameters are showing for post request to create the event model.py class Events(models.Model): user = models.ForeignKey(User, on_delete=models.CASCADE) category = models.ForeignKey(EventCategory, on_delete=models.CASCADE) name = models.CharField(max_length=250, null=True) posted_at = models.DateTimeField(null=True, auto_now=True) location = models.CharField(max_length=100, null=True) banner = models.ImageField(default='avatar.jpg', upload_to='Banner_Images') start_date = models.DateField( auto_now=False, auto_now_add=False, blank=True, null=True) end_date = models.DateField( auto_now=False, auto_now_add=False, blank=True, null=True) description = models.TextField(max_length=2000, null=True) completed = models.BooleanField(default=False) def __str__(self): return f'{self.name}' class Meta: verbose_name_plural = "Events" serializers.py class EventSerializer(serializers.ModelSerializer): user = UserSerializer(read_only=True, many=False) category = EventCategorySerializer(read_only=True, many=False) class Meta: model = Events fields = '__all__' views.py @api_view(['POST']) @permission_classes([IsAuthenticated]) @user_is_organization @swagger_auto_schema( request_body=EventSerializer ) def registerEvent(request): """ To Register an events, you must be an organization """ data = request.data print("==================") print(data) print("==================") try: Event = Events.objects.create( user = request.user, category=EventCategory.objects.get(category=data['category']), name=data['name'], location=data['location'], start_date=data['start_date'], end_date=data['end_date'], description=data['description'], completed=data['completed'], ) serializer = EventSerializer(Event, many=False) Event = Events.objects.get(id=serializer.data['id']) Event.banner = request.FILES.get('banner') Event.save() serializer = EventSerializer(Event, many=False) return Response(serializer.data) except: message = {'detail': 'Event with this content already exists'} return Response(message, status=status.HTTP_400_BAD_REQUEST)
I got it running with following changes in views.py @swagger_auto_schema( methods=['post'], request_body=openapi.Schema( type=openapi.TYPE_OBJECT, required=['category','name', 'location', 'start_date', 'end_date', 'description', 'completed', 'banner'], properties={ 'category':openapi.Schema(type=openapi.TYPE_STRING), 'name':openapi.Schema(type=openapi.TYPE_STRING), 'location':openapi.Schema(type=openapi.TYPE_STRING), 'start_date':openapi.Schema(type=openapi.TYPE_STRING, default="yyyy-mm-dd"), 'end_date':openapi.Schema(type=openapi.TYPE_STRING, default='yyyy-mm-dd'), 'description':openapi.Schema(type=openapi.TYPE_STRING), 'completed':openapi.Schema(type=openapi.TYPE_BOOLEAN, default=False), 'banner': openapi.Schema(type=openapi.TYPE_FILE), }, ), operation_description='Create an events' ) @api_view(['POST']) @permission_classes([IsAuthenticated]) @user_is_organization def registerEvent(request): """ To Register an events, you must be an organization """ data = request.data print("==================") print(data) print(type(data)) category=EventCategory.objects.get(category=data['category']), print(category) print(type(data["start_date"])) print("==================") try: Event = Events.objects.create( user = request.user, category=EventCategory.objects.get(category=data['category']), name=data['name'], location=data['location'], start_date=data['start_date'], end_date=data['end_date'], description=data['description'], completed=data['completed'], ) print("****************************") serializer = EventSerializer(Event, many=False) Event = Events.objects.get(id=serializer.data['id']) Event.banner = request.FILES.get('banner') Event.save() serializer = EventSerializer(Event, many=False) return Response(serializer.data) except ValidationError as e: return Response({"ValidationError" : e}, status = status.HTTP_400_BAD_REQUEST) except Exception as e: message = {'error': e} return Response(message, status=status.HTTP_400_BAD_REQUEST)
3
4
72,670,733
2022-6-18
https://stackoverflow.com/questions/72670733/insert-empty-row-after-every-nth-row-in-pandas-dataframe
I have a dataframe: pd.DataFrame(columns=['a','b'],data=[[3,4], [5,5],[9,3],[1,2],[9,9],[6,5],[6,5],[6,5],[6,5], [6,5],[6,5],[6,5],[6,5],[6,5],[6,5],[6,5],[6,5]]) I want to insert two empty rows after every third row so the resulting output looks like that: a b 0 3.0 4.0 1 5.0 5.0 2 9.0 3.0 3 NaN NaN 4 NaN NaN 5 1.0 2.0 6 9.0 9.0 7 6.0 5.0 8 NaN NaN 9 NaN NaN 10 6.0 5.0 11 6.0 5.0 12 6.0 5.0 13 NaN NaN 14 NaN NaN 15 6.0 5.0 16 6.0 5.0 17 6.0 5.0 18 NaN NaN 19 NaN NaN 20 6.0 5.0 21 6.0 5.0 22 6.0 5.0 23 NaN NaN 24 NaN NaN 25 6.0 5.0 26 6.0 5.0 I tried a number of things but didn't get any closer to the desired output.
The following should scale well with the size of the DataFrame since it doesn't iterate over the rows and doesn't create intermediate DataFrames. import pandas as pd df = pd.DataFrame(columns=['a','b'],data=[[3,4], [5,5],[9,3],[1,2],[9,9],[6,5],[6,5],[6,5],[6,5], [6,5],[6,5],[6,5],[6,5],[6,5],[6,5],[6,5],[6,5]]) def add_empty_rows(df, n_empty, period): """ adds 'n_empty' empty rows every 'period' rows to 'df'. Returns a new DataFrame. """ # to make sure that the DataFrame index is a RangeIndex(start=0, stop=len(df)) # and that the original df object is not mutated. df = df.reset_index(drop=True) # length of the new DataFrame containing the NaN rows len_new_index = len(df) + n_empty*(len(df) // period) # index of the new DataFrame new_index = pd.RangeIndex(len_new_index) # add an offset (= number of NaN rows up to that row) # to the current df.index to align with new_index. df.index += n_empty * (df.index .to_series() .groupby(df.index // period) .ngroup()) # reindex by aligning df.index with new_index. # Values of new_index not present in df.index are filled with NaN. new_df = df.reindex(new_index) return new_df Tests: # original df >>> df a b 0 3 4 1 5 5 2 9 3 3 1 2 4 9 9 5 6 5 6 6 5 7 6 5 8 6 5 9 6 5 10 6 5 11 6 5 12 6 5 13 6 5 14 6 5 15 6 5 16 6 5 # add 2 empty rows every 3 rows >>> add_empty_rows(df, 2, 3) a b 0 3.0 4.0 1 5.0 5.0 2 9.0 3.0 3 NaN NaN 4 NaN NaN 5 1.0 2.0 6 9.0 9.0 7 6.0 5.0 8 NaN NaN 9 NaN NaN 10 6.0 5.0 11 6.0 5.0 12 6.0 5.0 13 NaN NaN 14 NaN NaN 15 6.0 5.0 16 6.0 5.0 17 6.0 5.0 18 NaN NaN 19 NaN NaN 20 6.0 5.0 21 6.0 5.0 22 6.0 5.0 23 NaN NaN 24 NaN NaN 25 6.0 5.0 26 6.0 5.0 # add 5 empty rows every 4 rows >>> add_empty_rows(df, 5, 4) a b 0 3.0 4.0 1 5.0 5.0 2 9.0 3.0 3 1.0 2.0 4 NaN NaN 5 NaN NaN 6 NaN NaN 7 NaN NaN 8 NaN NaN 9 9.0 9.0 10 6.0 5.0 11 6.0 5.0 12 6.0 5.0 13 NaN NaN 14 NaN NaN 15 NaN NaN 16 NaN NaN 17 NaN NaN 18 6.0 5.0 19 6.0 5.0 20 6.0 5.0 21 6.0 5.0 22 NaN NaN 23 NaN NaN 24 NaN NaN 25 NaN NaN 26 NaN NaN 27 6.0 5.0 28 6.0 5.0 29 6.0 5.0 30 6.0 5.0 31 NaN NaN 32 NaN NaN 33 NaN NaN 34 NaN NaN 35 NaN NaN 36 6.0 5.0
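A shorter, though slower, alternative that produces the same ordering by looping over the groups and concatenating once at the end; a sketch with the same df (note the check that skips the NaN block after a trailing partial group):
import numpy as np
import pandas as pd

n_empty, period = 2, 3
parts = []
for _, g in df.groupby(df.index // period):
    parts.append(g)
    if len(g) == period:  # only append NaN rows after a full group
        parts.append(pd.DataFrame(np.nan, index=range(n_empty), columns=df.columns))
out = pd.concat(parts, ignore_index=True)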
4
6
72,671,820
2022-6-18
https://stackoverflow.com/questions/72671820/copied-list-passed-to-generator-reflects-changes-made-to-original
In answering this question, I stumbled across some unexpected behavior: from typing import List, Iterable class Name: def __init__(self, name: str): self.name = name def generator(lst: List[Name]) -> Iterable[str]: lst_copy = lst.copy() for obj in lst_copy: yield obj.name When modifying the list that is passed to the generator, even though a copy is made, changes to the original list are still reflected: lst = [Name("Tom"), Name("Tommy")] gen = generator(lst) lst[0] = Name("Andrea") for name in gen: print(name) Output: Andrea Tommy Simply returning a generator expression works as expected: def generator(lst: List[Name]) -> Iterable[str]: return (obj.name for obj in lst.copy()) Output: Tom Tommy Why doesn't the lst.copy() in the first generator function work as expected?
I think the behavior is best understood with the addition of some extra print statements: def generator(lst: List[Name]) -> Iterable[str]: print("Creating list copy...") lst_copy = lst.copy() print("Created list copy!") for obj in lst_copy: yield obj.name lst = [Name("Tom"), Name("Tommy")] print("Starting assignment...") gen = generator(lst) print("Assignment complete!") print("Modifying list...") lst[0] = Name("Andrea") print("Modification complete!") for name in gen: print(name) Notice that the copy does not happen at assignment time -- it happens after the list is modified! Starting assignment... Assignment complete! Modifying list... Modification complete! Creating list copy... Created list copy! Andrea Tommy Nothing in the generator's body is executed until the for loop attempts to extract an element. Since this extraction attempt occurs after the list is mutated, the mutation is reflected in the results from the generator.
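If the goal is to keep a generator function but take the snapshot at call time, a common pattern is to do the copy in a plain wrapper and delegate to an inner generator; a small sketch:
def generator(lst: List[Name]) -> Iterable[str]:
    snapshot = lst.copy()       # runs immediately, when generator() is called

    def _names():
        for obj in snapshot:    # runs lazily, when the result is iterated
            yield obj.name

    return _names()
This behaves like the generator-expression version in the question: the copy happens on the call, so later mutations of lst are not reflected.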
4
2
72,671,197
2022-6-18
https://stackoverflow.com/questions/72671197/how-to-make-a-select-field-using-django-model-forms-using-set-values
I recently switched from using simple forms in Django to model forms. I am now trying to use a select field in my form that has a fixed set of options (e.g. Europe, North America, South America, ...). I thought I would just add a select input type to the form fields, but it shows up as just a regular text input. The select field is supposed to be "continent". Does anyone know how to do this? class TripForm(forms.ModelForm): # city = forms.TextInput() # country = forms.TimeField() # description = forms.Textarea() # photo = forms.ImageField() class Meta: model = Trip fields = ('city', 'country', 'continent', 'description', 'photo') widget = { 'city': forms.TextInput(attrs={'class': 'form-control'}), 'country': forms.TextInput(attrs ={'class': 'form-control'}), 'continent': forms.SelectMultiple(attrs= {'class':'form-control'}), 'description': forms.TextInput(attrs = {'class': 'form-control'}), # 'photo': forms.ImageField(attrs = {'class': 'form-control'}), } (screenshot of the rendered form omitted) This is the form; continent should show as a select.
Here you can use ModelChoiceField provided by Django forms. Define a Country model containing all the countries, and use that as the queryset here. It will also be dynamic, so it can be changed later. from django import forms from .models import Trip, Country class TripForm(forms.ModelForm): class Meta: model = Trip fields = ['name', 'country', 'start_date', 'end_date'] country = forms.ModelChoiceField( queryset=Country.objects.all(), to_field_name='name', required=True, widget=forms.Select(attrs={'class': 'form-control'}) )
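If the set of continents really is fixed, as the question suggests, a plain ChoiceField avoids the extra table; also note that the Meta attribute is spelled widgets (plural), so the singular widget dict in the question is silently ignored, which is why none of those widget overrides were applied. A sketch with made-up choice values:
CONTINENT_CHOICES = [
    ('EU', 'Europe'),
    ('NA', 'North America'),
    ('SA', 'South America'),
    # ... remaining continents
]

class TripForm(forms.ModelForm):
    continent = forms.ChoiceField(
        choices=CONTINENT_CHOICES,
        widget=forms.Select(attrs={'class': 'form-control'}),
    )

    class Meta:
        model = Trip
        fields = ('city', 'country', 'continent', 'description', 'photo')
        widgets = {
            'city': forms.TextInput(attrs={'class': 'form-control'}),
            'country': forms.TextInput(attrs={'class': 'form-control'}),
            'description': forms.TextInput(attrs={'class': 'form-control'}),
        }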
4
6
72,669,730
2022-6-18
https://stackoverflow.com/questions/72669730/how-to-easily-and-cheaply-run-a-simply-python-script-every-5-mins-for-a-whole-we
I want to run a very simple python script (or any language) which collects data from an api at regular 5 minute intervals and saves the data. I need this process to run for a whole week - day and night. I can't keep my laptop on all week so I guess I will need this to run on some kind of server. What is the cheapest and easiest way for me to do something like this? This is an incredibly simple script so I don't want to spend ages setting up a complicated server which is ultimately overkill for a task like this. Pseudocode: # Every 5 minutes for a whole week: data = call_api(url) write_csv(data)
You're going to need a cloud server, as it's the cheapest option if you can't set up a system at home. I'll second what furas said about pythonanywhere.com. I use their $5-a-month plan to run simple tasks like this. You can cancel anytime, so if it's just one week then you pay $5 for your account and that's it. There is no messing around with setup either: just upload your main.py and schedule it as a task in the dashboard.
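Whichever host you pick, the script itself can stay tiny; a sketch with a placeholder URL, assuming the requests package is available:
import csv
import time
import requests

URL = "https://example.com/api"        # placeholder endpoint
end = time.time() + 7 * 24 * 60 * 60   # run for one week

while time.time() < end:
    data = requests.get(URL, timeout=30).json()
    with open("data.csv", "a", newline="") as f:
        csv.writer(f).writerow([time.strftime("%Y-%m-%d %H:%M:%S"), data])
    time.sleep(5 * 60)                 # every 5 minutes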
3
4
72,631,305
2022-6-15
https://stackoverflow.com/questions/72631305/advance-a-interpolation
Note; No special knowledge of Pykrige is needed to answer the question, as I already mention examples in the question! Hi I would like to use Universal Kriging in my code. For this I have data that is structured as follows: Latitude Longitude Altitude H2 O18 Date Year month dates a_diffO O18a 0 45.320000 -75.670000 114.0 -77.500000 -11.110000 2004-09-15 2004.0 9.0 2004-09-15 -0.228 -10.882000 1 48.828100 9.200000 314.0 -31.350000 -4.880000 2004-09-15 2004.0 9.0 2004-09-15 -0.628 -4.252000 2 51.930000 -10.250000 9.0 -18.800000 -3.160000 2004-09-15 2004.0 9.0 2004-09-15 -0.018 -3.142000 3 48.248611 16.356389 198.0 -45.000000 -6.920000 2004-09-15 2004.0 9.0 2004-09-15 -0.396 -6.524000 4 50.338100 7.600000 85.0 -19.200000 -3.190000 2004-09-15 2004.0 9.0 2004-09-15 -0.170 -3.020000 You can find my data here:https://wetransfer.com/downloads/9c02e4fc1c2da765d5ee9137e6d7df4920220618071144/8f450e I want to interpolate the data (Latitude, Longitude, Altitude and O18) with Universal Kriging and use the height as a drift function. So far I have programmed this here but I am not getting anywhere, e.g. I don't know how to effectively use the height as a drift function and the information from the Pykrige documentation is of limited help: from traceback import print_tb from typing_extensions import Self import numpy as np from pykrige.uk import UniversalKriging from pykrige.kriging_tools import write_asc_grid import pykrige.kriging_tools as kt import matplotlib.pyplot as plt from mpl_toolkits.basemap import Basemap from matplotlib.patches import Path, PathPatch import pandas as pd from osgeo import gdal def load_data(): df = pd.read_csv(r"File") return(df) def get_data(df): return { "lons": df['Longitude'], "lats": df['Latitude'], "values": df['O18'], } def extend_data(data): return { "lons": np.concatenate([np.array([lon-360 for lon in data["lons"]]), data["lons"], np.array([lon+360 for lon in data["lons"]])]), "lats": np.concatenate([data["lats"], data["lats"], data["lats"]]), "values": np.concatenate([data["values"], data["values"], data["values"]]), } def generate_grid(data, basemap, delta=1): grid = { 'lon': np.arange(-180, 180, delta), 'lat': np.arange(np.amin(data["lats"]), np.amax(data["lats"]), delta) } grid["x"], grid["y"] = np.meshgrid(grid["lon"], grid["lat"]) grid["x"], grid["y"] = basemap(grid["x"], grid["y"]) return grid def interpolate(data, grid): uk = UniversalKriging( data["lons"], data["lats"], data["values"], variogram_model='exponential', verbose=True, drift_terms=["functional"], functional_drift=[func], ) return uk.execute("grid", grid["lon"], grid["lat"]) def prepare_map_plot(): figure, axes = plt.subplots(figsize=(10,10)) basemap = Basemap(projection='robin', lon_0=0, lat_0=0, resolution='h',area_thresh=1000,ax=axes) return figure, axes, basemap def plot_mesh_data(interpolation, grid, basemap): colormesh = basemap.contourf(grid["x"], grid["y"], interpolation,32, cmap='RdBu_r') #plot the data on the map. plt.cm.RdYlBu_r color_bar = basemap.colorbar(colormesh,location='bottom',pad="10%") df = load_data() base_data = get_data(df) figure, axes, basemap = prepare_map_plot() grid = generate_grid(base_data, basemap, 90) extended_data = extend_data(base_data) interpolation, interpolation_error = interpolate(extended_data, grid) plot_mesh_data(interpolation, grid,basemap) plt.show() I now only use universal kriging and create these images: I get the expected error: ValueError: Must specify location(s) and strength(s) of point drift terms. 
I just know that I have to create a grid with the height, but I don't know how and I don't know how to make the drift dependent on the altitude. The altitude formula is: where 100 m represents 100 m hight difference. The interesting thing is that there is this website with examples: however, I am too inexperienced in coding to understand the examples and to transfer them to my example: https://python.hotexamples.com/examples/core/-/calc_cR/python-calc_cr-function-examples.html conclusion: I don't know how to define the ["external_drift"] to do what I want (this is based on me being so inexperienced in coding in general). I've been trying to solve these problems for 3 weeks now, but I'm really getting nowhere.
From the documentation of pykrige.uk.UniversalKriging (https://geostat-framework.readthedocs.io/projects/pykrige/en/stable/generated/pykrige.uk.UniversalKriging.html#pykrige.uk.UniversalKriging): drift_terms (list of strings, optional) – List of drift terms to include in universal kriging. Supported drift terms are currently ‘regional_linear’, ‘point_log’, ‘external_Z’, ‘specified’, and ‘functional’. In your code you specified drift_terms = ["external_drift"] which is not supported. I'm sorry I don't have specialized knowledge in this model so I cannot help you much further. But it's very likely these are the parameters that you need to specify: external_drift (array_like, optional) – Gridded data used for the external Z scalar drift term. Must be shape (M, N), where M is in the y-direction and N is in the x-direction. Grid spacing does not need to be constant. If grid spacing is not constant, must specify the grid cell sizes. If the problem involves anisotropy, the external drift values are extracted based on the pre-adjusted coordinates (i.e., the original coordinate system). external_drift_x (array_like, optional) – X-coordinates for gridded external Z-scalar data. Must be shape (M,) or (M, 1), where M is the number of grid cells in the x-direction. The coordinate is treated as the center of the cell. external_drift_y (array_like, optional) – Y-coordinates for gridded external Z-scalar data. Must be shape (N,) or (N, 1), where N is the number of grid cells in the y-direction. The coordinate is treated as the center of the cell.
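An untested sketch of how those three parameters might fit together for an altitude drift, built only from the docstring quoted above and the grid already defined in the question; dem, dem_lon and dem_lat are placeholders for a gridded elevation model you would have to supply:
uk = UniversalKriging(
    data["lons"], data["lats"], data["values"],
    variogram_model="exponential",
    drift_terms=["external_Z"],
    external_drift=dem,        # 2D array of altitudes, shape (len(dem_lat), len(dem_lon))
    external_drift_x=dem_lon,  # 1D longitudes of the altitude grid
    external_drift_y=dem_lat,  # 1D latitudes of the altitude grid
)
z, ss = uk.execute("grid", grid["lon"], grid["lat"])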
4
1
72,663,092
2022-6-17
https://stackoverflow.com/questions/72663092/getting-numpy-linalg-svd-and-numpy-matrix-multiplication-to-use-multithreadng
I have a script that uses a lot of numpy and numpy.linalg functions and after some reaserch it tourned out that supposedly they automaticaly use multithreading. Altought that, my htop display always shows just one thread being used to run my script. I am new to multithreading and I don´t quite now how to set up it correctly. I am mostly making use of numpy.linalg.svd Here is the output of numpy.show_config() openblas64__info: libraries = ['openblas64_', 'openblas64_'] library_dirs = ['/usr/local/lib'] language = c define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None)] runtime_library_dirs = ['/usr/local/lib'] blas_ilp64_opt_info: libraries = ['openblas64_', 'openblas64_'] library_dirs = ['/usr/local/lib'] language = c define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None)] runtime_library_dirs = ['/usr/local/lib'] openblas64__lapack_info: libraries = ['openblas64_', 'openblas64_'] library_dirs = ['/usr/local/lib'] language = c define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None), ('HAVE_LAPACKE', None)] runtime_library_dirs = ['/usr/local/lib'] lapack_ilp64_opt_info: libraries = ['openblas64_', 'openblas64_'] library_dirs = ['/usr/local/lib'] language = c define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None), ('HAVE_LAPACKE', None)] runtime_library_dirs = ['/usr/local/lib'] Supported SIMD extensions in this NumPy install: baseline = SSE,SSE2,SSE3 found = SSSE3,SSE41,POPCNT,SSE42,AVX,F16C,FMA3,AVX2 not found = AVX512F,AVX512CD,AVX512_KNL,AVX512_KNM,AVX512_SKX,AVX512_CLX,AVX512_CNL,AVX512_ICL MRE import numpy as np import tensorly as ty tensor = np.random.rand(32,32,32,32) unfolding = ty.unfold(tensor,0) unfolding = unfolding @ unfolding.transpose() U,S,_ = np.linalg.svd(unfolding) Update As suggested in the accepted answer, rebuilding numpy with MKL solved the issue.
The main issue is that the size of the matrices is too small for threads to be really worth it on all platforms. Indeed, OpenBLAS uses OpenMP to create threads regarding the size of the matrix. Threads are generally created once but the creation can take from dozens of microseconds to dozens of milliseconds regarding the machine (typically hundreds of microseconds on a regular PC). The bigger the number of cores on the machine the bigger the number of threads to create and so the bigger the overhead. When the OpenMP thread pool is reused, there are still overheads to pay mainly due to the distribution of the work and synchronizations between threads though the overheads are generally significantly smaller (typically an order of magnitude smaller). That being said, OpenBLAS makes clearly sub-optimal choices when the output matrix is tiny compared to the input ones (which is your case). Indeed, OpenBLAS can hardly know the parallel overhead before running the target kernel so it has to make a choice: set a threshold typically based on the size of the input matrix so to define when the kernel will be executed sequentially or with multiple threads. This is critical for very small kernel to still be fast as well as huge ones to remain competitive with other BLAS implementations. The thing is this threshold is not perfectly chosen. It looks like OpenBLAS only look the size of the output matrix which is clearly sub-optimal for "thin" matrices like in your code (eg. 50x1000000 @ 1000000x50). An empirical analysis show that the threshold is arbitrary set to 100x100 in your case: beyond this threshold, OpenBLAS use multiple threads but not otherwise. The thing is threads are already useful for significantly smaller matrices in your case on most platforms (eg. for 64x64x64x64 tensors). This threshold is tuned by compile-time definitions like GEMM_MULTITHREAD_THRESHOLD which is used in gemm.c (or gemv.c. Note that in the code, the k dimension matters but this is not what benchmarks show on my machine (possibly due to an older version of OpenBLAS being used). You can rebuild OpenBLAS with a smaller threshold (like 1 instead of 4). An alternative solution is to use another BLAS implementation like BLIS or the Intel MKL that should use different threshold (possibly better ones). A last solution is to implement a specific implementation to efficiently compute the matrices of your code (possibly using Numba or Cython) but BLAS implementations are heavily optimized so it is often hard to actually write a faster code (unless you are very familiar with low-level optimizations, compilers, and modern processor architectures).
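To check which BLAS a given NumPy actually loaded and to experiment with the thread count without rebuilding, the threadpoolctl package (assumed to be installed) and the usual environment variables are handy; a small sketch, keeping in mind that this only caps the number of threads and cannot force OpenBLAS below its internal size threshold:
# set OPENBLAS_NUM_THREADS (or MKL_NUM_THREADS for MKL) before starting Python
from threadpoolctl import threadpool_info, threadpool_limits
import numpy as np

print(threadpool_info())  # shows the BLAS implementation and its num_threads

a = np.random.rand(32, 32, 32, 32).reshape(32, -1)
with threadpool_limits(limits=4, user_api="blas"):
    u, s, vt = np.linalg.svd(a @ a.T)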
6
5
72,649,475
2022-6-16
https://stackoverflow.com/questions/72649475/specify-python-version-on-cloud-run-buildpack
I am deploying a web app on Cloud Run using the automated Cloud Build "Buildpack" option (as explained here), hence not having to create a Dockerfile. I would like to deploy the app using Python 3.8.12 and buildpacks. How can I specify that?
There's no need to specify it. The builder attempts to autodetect the language of your source code, and Python 3.7+ is one of the supported languages. You may try this Python buildpacks sample. You can also find more in-depth information about buildpacks in this presentation. Update: As an alternative, you can manually specify which runtime version to use, thereby skipping the auto-detection step. As described in this answer, you can specify it in project.toml in the following way: [[build.env]] name = "GOOGLE_RUNTIME_VERSION" value = "3.8.6" This would be the same as what you'd put in the .python-version file, but would take precedence over it.
3
0
72,660,954
2022-6-17
https://stackoverflow.com/questions/72660954/coverage-no-source-for-code-with-pytest
I am trying to measure code coverage by my pytest tests. I tried following the quick start guide of coverage (https://coverage.readthedocs.io/en/6.4.1/) When I run my test with the following command, everything seems fine coverage run -m pytest tests/ ===================================== test session starts ====================================== platform linux -- Python 3.10.4, pytest-7.1.2, pluggy-1.0.0 rootdir: /home/arnaud/Documents/Github/gotcha collected 4 items tests/preprocessing/test_preprocessing.py .... [100%] ====================================== 4 passed in 0.30s ======================================= However, when I try to access the report with either of those commands, coverage report coverage html I get the following message: No source for code: '<project_directory>/config-3.py'. I did not find an appropriate solution to this problem so far
It is possible to ignore errors using the command coverage html -i which solved my issue
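If you prefer not to pass -i on every run, the same setting can live in coverage's configuration file (coverage.py reads .coveragerc, setup.cfg or pyproject.toml); a minimal .coveragerc sketch:
[report]
ignore_errors = True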
6
3
72,657,415
2022-6-17
https://stackoverflow.com/questions/72657415/fix-futurewarning-related-to-the-pandas-append-function
I am getting the following FutureWarning in my Python code: FutureWarning: The frame.append method is deprecated and will be removed from pandas in a future version. Use pandas.concat instead. Right now I was using the append function, in various parts of my code, to add rows to an existing DataFrame. Example 1: init_hour = pd.to_datetime('00:00:00') orig_hour = init_hour+timedelta(days=1) while init_hour < orig_hour: row = {'Hours': init_hour.time()} df = df.append(row, ignore_index = True) init_hour = init_hour + timedelta(minutes=60) Example 2: row2 = {'date': tmp_date, 'false_negatives': fn, 'total': total} df2 = df2.append(row2, ignore_index = True) How could I solve this in a simple way without modifying much of the code before the sections above?
Use pd.concat instead of append. Preferably on a bunch of rows at the same time. Also, use date_range and timedelta_range whenever possible. Example 1: # First transformation init_hour = pd.to_datetime('00:00:00') orig_hour = init_hour+timedelta(days=1) rows = [] while init_hour < orig_hour: rows.append({'Hours': init_hour.time()}) init_hour = init_hour + timedelta(minutes=60) df = pd.concat([df, pd.DataFrame(rows)], axis=0, ignore_index=True) # Second transformation - just construct it without loop hours = pd.Series(pd.date_range("00:00:00", periods=24, freq="H").time, name="Hours") # Then insert/concat hours into your dataframe. Example 2 Don't see the context so it's harder to know what's appropriate. Either of these two # Alternative 1 row2 = {'date': tmp_date, 'false_negatives': fn, 'total': total} row2df = pd.DataFrame.from_records([row2]) df2 = pd.concat([df2, row2df], ignore_index=True, axis=0) # how to handle index depends on context # Alternative 2 # assuming integer monotonic index - assign a new row with loc new_index = len(df2.index) df2.loc[new_index] = row2 Note: there's a reason append is deprecated. Using it row by row leads to very slow code. So it's important that we do more than just a local translation of the code to not waste computer time when the analysis runs. Pandas authors are trying to get us to understand and appreciate concatenating or building dataframes using more efficient means, i.e. not row-by-row. Combining many rows into one DataFrame and concatenating bigger dataframes together is the way to go.
4
4
72,657,327
2022-6-17
https://stackoverflow.com/questions/72657327/stdin-is-not-a-tty-when-populating-postgres-database
I get the message stdin is not a tty when I run the command below in my terminal. psql -U postgres kdc < kdc.psql kdc is the database and kdc.psql is the psql file with commands to populate the database. I am in the directory that holds the psql file.
I am not sure what causes that message (it does not happen here), but you should be able to avoid it using the -f option: psql -U postgres -d kdc -f kdc.psql
3
5
72,654,704
2022-6-17
https://stackoverflow.com/questions/72654704/filenotfounderror-op-type-not-registered-regexsplitwithoffsets-while-loading
I am trying to load a Keras model on aws server with the following command import tensorflow_hub as hub from keras.models import load_model model = load_model(model_path, custom_objects={'KerasLayer': hub.KerasLayer}) but its giving an error FileNotFoundError: Op type not registered 'RegexSplitWithOffsets' in binary running on ip-10-0-xx-xxx.us-east-2.compute.internal. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed. You may be trying to load on a different device from the computational device. Consider setting the `experimental_io_device` option in `tf.saved_model.LoadOptions` to the io_device such as '/job:localhost'. The exact same code was working the other day but this time it's giving an error. I double-checked if the model file is still there on the specified path. Also, the file is not modified so there is no doubt of the file being corrupt or anything. I guess the error has to do something with aws, but I don't how to resolve it. I tried finding any solution on the web but it wasn't helpful.
Importing tensorflow_text resolves this issue.
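tensorflow_text registers its custom ops, including RegexSplitWithOffsets, as a side effect of being imported, which is why the bare import is enough; a sketch of the loading code with that import added (when installing, pin tensorflow-text to the same minor version as tensorflow):
import tensorflow_text  # noqa: F401  (needed only for its op registrations)
import tensorflow_hub as hub
from keras.models import load_model

model = load_model(model_path, custom_objects={'KerasLayer': hub.KerasLayer})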
4
8
72,648,685
2022-6-16
https://stackoverflow.com/questions/72648685/how-to-test-if-one-graph-is-a-subgraph-of-another-in-networkx
I am new to building directed graphs with networkx and I'm trying to work out how to compare two graphs. More specifically, how to tell if a smaller graph is a subgraph (unsure of the exact terminology) of a larger graph As an example, assume I have the following directed graph: I would like to be able to check whether a series of smaller graphs are sub-graphs of this initial graph. Returning a True value if they are (graph B), and False if they are not (graph C): Graph B = Sub-graph of Graph A Graph C != Sub-graph of Graph A Example Code import networkx A = nx.DiGraph() A.add_edges_from([('A','B'),('B','C'),('C','A')]) nx.draw_networkx(A) B = nx.DiGraph() B.add_edges_from([('A','B')]) nx.draw_networkx(B) C = nx.DiGraph() C.add_edges_from([('A','B'),('A','C')]) nx.draw_networkx(C) I've had a look through the documentation and cannot seem to find what I need. An alternative I have been considering is to represent the nodes as a sequence of strings, and then searching for each substring in the main graphs string sequence - however, I can't imagine this is the most effecient/effective/stable way to solve the problem.
You are looking for a subgraph isomorphisms. nx.isomorphism.DiGraphMatcher(A, B).subgraph_is_isomorphic() # True nx.isomorphism.DiGraphMatcher(A, C).subgraph_is_isomorphic() # False Note that the operation can be slow for large graphs, as the problem is NP-complete.
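If you also need to know which nodes of A the smaller graph was matched onto, the matcher exposes the mapping and an iterator over all matches; a short sketch (API as in networkx 2.x):
GM = nx.isomorphism.DiGraphMatcher(A, B)
if GM.subgraph_is_isomorphic():
    print(GM.mapping)  # nodes of A mapped to the matched nodes of B
for mapping in GM.subgraph_isomorphisms_iter():  # iterate over every match
    print(mapping)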
4
4
72,648,520
2022-6-16
https://stackoverflow.com/questions/72648520/how-to-remove-duplicate-values-from-list-of-dicts-and-keep-original-order
I have a list of dictionaries like this : time_array_final = [{'day': 15, 'month': 5},{'day': 29, 'month': 5}, {'day': 10, 'month': 6}, {'day': 10, 'month': 6}, {'day': 10, 'month': 6}, {'day': 10, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 14, 'month': 6},{'day': 15, 'month': 6}, {'day': 15, 'month': 6}, {'day': 15, 'month': 6}] I want to remove the duplicate dictionaries from this list. Here is what I tried: import ast final = [ast.literal_eval(el1) for el1 in set([str(el2) for el2 in time_array_final])] eventually it's working but there is issue I want to retain this data in its original order but the order is modified in my output. Is there a way to remove duplicates and maintain the order from the original list? Note: expected output should be unique and in case of repeating it should pick one record from repeating elements as the code doing above for example in this case output should be [{'day': 15, 'month': 5},{'day': 29, 'month': 5},{'day': 10, 'month': 6}, {'day': 12, 'month': 6}, {'day': 14, 'month': 6},{'day': 15, 'month': 6}]
You can create a dictionary where the key is the string representation of the items in your list, and the value is the actual item. time_array_final = [{'day': 15, 'month': 5},{'day': 29, 'month': 5}, {'day': 10, 'month': 6}, {'day': 10, 'month': 6}, {'day': 10, 'month': 6}, {'day': 10, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 12, 'month': 6}, {'day': 14, 'month': 6},{'day': 15, 'month': 6}, {'day': 15, 'month': 6}, {'day': 15, 'month': 6}] dedupe_dict = {str(item): item for item in time_array_final} Upon encountering a duplicate item, the dict comprehension will overwrite the previous item with the duplicate one, but that doesn't make any material difference because both items are identical. Since python 3.6, dictionaries keep insertion order, so dict.values() should give you the output you need. deduped_list = list(dedupe_dict.values()) Which gives: [{'day': 15, 'month': 5}, {'day': 29, 'month': 5}, {'day': 10, 'month': 6}, {'day': 12, 'month': 6}, {'day': 14, 'month': 6}, {'day': 15, 'month': 6}] As noted by @Copperfield in their comments on another answer, str(dict) is not the most reliable way of stringifying dicts for comparison, because the order of keys matters. d1 = {'day': 1, 'month': 2} d2 = {'month': 2, 'day': 1} d1 == d2 # True str(d1) == str(d2) # False To get around this, you could create a frozenset of the dict.items(), and use that as your key (provided all the values in your dict are hashable) like so: dedupe_dict = {frozenset(d.items()): d for d in time_array_final}
3
3
72,645,862
2022-6-16
https://stackoverflow.com/questions/72645862/numpy-array-slicing-to-return-sliced-array-and-corresponding-array-indices
I'm trying to generate two numpy arrays from one. One which is a slice slice of an original array, and another which represents the indexes which can be used to look up the values produced. The best way I can explain this is by example: import numpy as np original = np.array([ [5, 3, 7, 3, 2], [8, 4, 22, 6, 4], ]) sliced_array = original[:,::3] indices_of_slice = None # here, what command to use? for val, idx in zip(np.nditer(sliced_array), np.nditer(indices_of_slice)): # I want indices_of_slice to behave the following way: assert val == original[idx], "Error. This implementation is not correct. " Ultimately what I'm aiming for is an array which I can iterate through with nditer, and a corresponding array indices_of_slices, which returns the original lookup indices (i,j,...). Then, the value of the sliced array should be equal to value of the original array in index (i,j,...). Main question is: Can I return both the new sliced array as well as the indices of the values when slicing a numpy array? Hopefully all is clear! Edit: Here are the expected printouts of the two arrays: # print(sliced_array) # >>> [[5 3] # >>> [8 6]] # expected result of # print(indices_of_slice) # >>> [[(0 0) (0 3)] # >>> [(1 0) (1 3)]]
You can use numpy's slice np.s_[] with a tiny bit of gymnastics to get the indices you are looking for: slc = np.s_[:, ::3] shape = original.shape ix = np.unravel_index(np.arange(np.prod(shape)).reshape(shape)[slc], shape) >>> ix (array([[0, 0], [1, 1]]), array([[0, 3], [0, 3]])) >>> original[ix] array([[5, 3], [8, 6]]) >>> original[slc] array([[5, 3], [8, 6]]) Note that this works with slices that have some reverse direction: slc = np.s_[:, ::-2] # ... (as above) >>> ix (array([[0, 0, 0], [1, 1, 1]]), array([[4, 2, 0], [4, 2, 0]])) >>> np.array_equal(original[ix], original[slc]) True
3
2
72,649,220
2022-6-16
https://stackoverflow.com/questions/72649220/precise-type-annotating-array-numpy-ndarray-of-matplotlib-axes-from-plt-subplo
I wanted to have no errors while using VSCode Pylance type checker. How to type the axs correctly in the following code: import matplotlib.pyplot as plt fig, axs = plt.subplots(2, 2) In the image below, you can see that Pylance on VSCode is detecting an error.
It turns out that strongly typing the axs variable is not straightforward at all and requires to understant well how to type np.ndarray. See this question and this question for more details. The simplest and most powerful solution is to wrap numpy.ndarray with ' characters, in order to avoid the infamous TypeError: 'numpy._DTypeMeta' object is not subscriptable when Python tries to interpret the [] in the expression. An example: import matplotlib.pyplot as plt import numpy as np import numpy.typing as npt import seaborn as sns from typing import cast, Type, Sequence import typing sns.set() # Some example data to display x = np.linspace(0, 2 * np.pi, 400) y = np.sin(x ** 2) fig, axs = plt.subplots( 2, 2, figsize=(12, 10) # set graph size ) # typechecking operation NDArrayOfAxes: typing.TypeAlias = 'np.ndarray[Sequence[Sequence[plt.Axes]], np.dtype[np.object_]]' axs = cast(np.ndarray, axs) axs[0, 0].plot(x, y) axs[0, 0].set_title("main") axs[1, 0].plot(x, y**2) axs[1, 0].set_title("shares x with main") axs[1, 0].sharex(axs[0, 0]) axs[0, 1].plot(x + 1, y + 1) axs[0, 1].set_title("unrelated") axs[1, 1].plot(x + 2, y + 2) axs[1, 1].set_title("also unrelated") fig.tight_layout() Which is well detected by Pylance and runs correctly:
4
1
72,646,846
2022-6-16
https://stackoverflow.com/questions/72646846/remove-duplicates-from-a-list-and-remove-elements-at-same-index-in-another-list
I have two lists, one with words and another one with word types, like this: ['fish', 'Robert', 'dog', 'ball', 'cat', 'dog', 'Robert'] ['animal', 'person', 'animal', 'object', 'animal', 'animal', 'person'] I need to remove duplicates from the first list and to remove the type at the same index as each removed word. In the end I need something like this: ['fish', 'Robert', 'dog', 'ball', 'cat'] ['animal', 'person', 'animal', 'object', 'animal'] How can I do that?
just do a loop and use the zip function Script # Input Vars a = ['fish', 'Robert', 'dog', 'ball', 'cat', 'dog', 'Robert'] b = ['animal', 'person', 'animal', 'object', 'animal', 'animal', 'person'] # Output Vars c = [] d = [] # Process for e, f in zip(a, b): if(e not in c): c.append(e) d.append(f) # Print Output print(c) print(d) Output ['fish', 'Robert', 'dog', 'ball', 'cat'] ['animal', 'person', 'animal', 'object', 'animal']
3
1
72,630,488
2022-6-15
https://stackoverflow.com/questions/72630488/valueerror-mutable-default-class-dict-for-field-headers-is-not-allowed-use
I am trying to get used with new python's features (dataclasses). I am trying to initialize variables and I get error: raise ValueError(f'mutable default {type(f.default)} for field ' ValueError: mutable default <class 'dict'> for field headers is not allowed: use default_factory My code: @dataclass class Application(): __config = ConfigParser() __config.read('mydb.ini') __host: str = __config.get('db','dbhost') __user: str = __config.get('db','dbuser') __password: str = __config.get('db','dbpw') __database: str = __config.get('db','database') url: str = "https://xxxx.domain.com/" headers: str = {'X-ApiKeys':'accessKey=xxxxxxx;secretKey=xxxxx','Content-Type': 'application/json'} def main(self): print(self.__host,self.__user,self.__password, self.__database) app = Application() if __name__=="__main__": app.main() What's the proper way to initialize dictionaries?
Dataclasses have a few useful tools for defining complex fields. The one that you need is called field. It has the argument default_factory, which needs to receive a callable, and that is where lambda comes to the rescue. Using this, code that will work looks like this (just the dict part): from dataclasses import field from typing import Dict @dataclass class Application(): ... headers: Dict[str, str] = field( default_factory=lambda: {'X-ApiKeys':'accessKey=xxxxxxx;secretKey=xxxxx','Content-Type': 'application/json'} )
5
10
72,629,578
2022-6-15
https://stackoverflow.com/questions/72629578/python-how-to-make-poetry-include-a-package-module-thats-not-on-a-subpath
I have, in a single repo, two Python projects that both depend on a shared utility package. My goal is to package each of the two projects in a software distribution package (i.e. a .tzr.gz file) I am currently getting this done using setuptools and setup.py files and having a hard time of it. I would much rather use Poetry to manage and package each of the two projects separately. Please consider this "minimal repro" of my problem: repo project1/ __init__.py main_module.py pyproject.toml project2/ __init__.py main_module.py pyproject.toml util/ __init__.py util_module.py I tried to get Poetry to include the util package when building project1 by modifying its project.toml this way: [tool.poetry] name = "project1" version = "0.1.0" description = "" authors = [""] packages = [ { include = "../util/*.py" } ] [tool.poetry.dependencies] python = "^3.9" [tool.poetry.dev-dependencies] pytest = "^5.2" [build-system] requires = ["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api" When I run poetry build I get this error: Building project1 (0.1.0) - Building sdist ValueError 'C:\\repo\\util\\__init__.py' is not in the subpath of 'C:\\repo\\project1' OR one path is relative and the other is absolute. at ~\.pyenv\pyenv-win\versions\3.9.6\lib\pathlib.py:929 in relative_to 925│ n = len(to_abs_parts) 926│ cf = self._flavour.casefold_parts 927│ if (root or drv) if n == 0 else cf(abs_parts[:n]) != cf(to_abs_parts): 928│ formatted = self._format_parsed_parts(to_drv, to_root, to_parts) → 929│ raise ValueError("{!r} is not in the subpath of {!r}" 930│ " OR one path is relative and the other is absolute." 931│ .format(str(self), str(formatted))) 932│ return self._from_parsed_parts('', root if n == 1 else '', 933│ abs_parts[n:]) Doesn't poetry support my use-case? If not, what am I missing? Alternatively, please suggest another approach to package my two projects separately, but both packages must include the shared util package.
This seems to be a known issue with Poetry; see https://github.com/python-poetry/poetry/issues/5621.
7
5
72,637,057
2022-6-15
https://stackoverflow.com/questions/72637057/typeerror-get-takes-1-positional-argument-but-2-were-given
I am building an Inventory Web App and I am facing this error. I am trying to get the username from the token so I can show a custom hello message and indicate who the user is logged in as. Here is my views.py, which receives the token of the logged-in user from localStorage: class UserDetails(APIView): def post(self, request): serializer = UserAccountTokenSendSerializer(data=request.data) global token if serializer.is_valid(): token = serializer.validated_data['token'] return Response(serializer.data) def get(self): user_id = Token.objects.get(key=token).user_id details = User.objects.get(id=user_id) serializer = UserDetails(details, many=False) return Response(serializer.data) class UserDetails(serializers.ModelSerializer): class Meta: model = User fields = ( 'username', ) Here is my urls.py: urlpatterns = [ path('get-user-token/', views.UserDetails.as_view()), path('get-user-details/', views.UserDetails.as_view()), ] And this is the error that I get: File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch response = handler(request, *args, **kwargs) TypeError: get() takes 1 positional argument but 2 were given [15/Jun/2022 19:34:59] "GET /api/v1/get-user-details/ HTTP/1.1" 500 82239
The .get(…) method [Django-doc] takes a request parameter as well, even if you do not use it, so: class UserDetails(APIView): # … def get(self, request): # … pass I would strongly advise, not to make token a global variable. The same webserver can be used for another user that thus has not authenticated (properly). Take a look at the Authentication section of the Django REST framework documentation for more information on how to authenticate and check credentials. Note: In Django, class-based views (CBV) often have a …View suffix, to avoid a clash with the model names. Therefore you might consider renaming the view class to UserDetailsView, instead of UserDetails.
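To make the "check credentials instead of a global token" point concrete, a minimal sketch using DRF's built-in token authentication, following the renaming suggested above (the serializer shape is assumed, not taken from the original code):

from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework.authentication import TokenAuthentication
from rest_framework.permissions import IsAuthenticated

class UserDetailsView(APIView):
    # DRF resolves the user from the "Authorization: Token <key>" header,
    # so no global variable is needed.
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]

    def get(self, request):
        return Response({'username': request.user.username})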
4
4
72,633,449
2022-6-15
https://stackoverflow.com/questions/72633449/add-checkbox-in-dataframe
I would like to add a column to a dataframe in Streamlit and have this column be a checkbox, so that later I can list the rows whose checkboxes were marked. Is this possible?
You can use the package streamlit-aggrid. Code """ pip install streamlit-aggrid """ import streamlit as st import pandas as pd from st_aggrid import AgGrid, GridUpdateMode from st_aggrid.grid_options_builder import GridOptionsBuilder data = { 'country': ['Japan', 'China', 'Thailand', 'France', 'Belgium', 'South Korea'], 'capital': ['Tokyo', 'Beijing', 'Bangkok', 'Paris', 'Brussels', 'Seoul'] } df = pd.DataFrame(data) gd = GridOptionsBuilder.from_dataframe(df) gd.configure_selection(selection_mode='multiple', use_checkbox=True) gridoptions = gd.build() grid_table = AgGrid(df, height=250, gridOptions=gridoptions, update_mode=GridUpdateMode.SELECTION_CHANGED) st.write('## Selected') selected_row = grid_table["selected_rows"] st.dataframe(selected_row) Output
3
7
72,631,150
2022-6-15
https://stackoverflow.com/questions/72631150/joblib-load-error-no-module-named-scipy-sparse-csr
Python version: 3.7 (I have to use this version) OS: Linux Cloud Platform: Azure Resource: Azure function with python Goal: Load a model created with skit-learn version 1.0.2 with the following dependencies installed: numpy: 1.17.3 joblib: 1.1.0 scipy: 1.7.3 I am using joblib to load a skit-learn model that I trained (By the way I created the model locally in my machine with python 3.9). However, I am getting the following error: Traceback (most recent call last): File \"/home/site/wwwroot/sortierung/__init__.py\", line 51, in main prediction_file_path) File \"/home/site/wwwroot/shared_code/custom_functions_prediction.py\", line 255, in predict result.update(classify_mail(m,s,X, stop_words, model_folder_path)) File \"/home/site/wwwroot/shared_code/custom_functions_prediction.py\", line 105, in classify_mail model = load(modelFilePath) File \"/home/site/wwwroot/.python_packages/lib/site-packages/joblib/numpy_pickle.py\", line 587, in load obj = _unpickle(fobj, filename, mmap_mode) File \"/home/site/wwwroot/.python_packages/lib/site-packages/joblib/numpy_pickle.py\", line 506, in _unpickle obj = unpickler.load() File \"/usr/local/lib/python3.7/pickle.py\", line 1088, in load dispatch[key[0]](self) File \"/usr/local/lib/python3.7/pickle.py\", line 1385, in load_stack_global self.append(self.find_class(module, name)) File \"/usr/local/lib/python3.7/pickle.py\", line 1426, in find_class __import__(module, level=0)\nModuleNotFoundError: No module named 'scipy.sparse._csr' I checked in the scipy folder installed and I could not find this module. How could I solve this issue?. Tks in advance
To resolve this ModuleNotFoundError: No module named 'scipy.sparse._csr' error, try the following: The error occurred because you created the model in Python 3.9 but are running it on Python 3.7. You can either recreate the model in Python 3.7 or upgrade the Azure Python function app to Python 3.9. To change the Python version to 3.9, according to the documentation: You can update the linuxFxVersion setting in the function app with the az functionapp config set command. az functionapp config set --name <FUNCTION_APP> \ --resource-group <RESOURCE_GROUP> \ --linux-fx-version "python|3.9" References: No module named 'scipy.sparse._csr' and How to change python version of azure function
3
6
72,628,845
2022-6-15
https://stackoverflow.com/questions/72628845/pycharm-terminal-use-git-bash
Goal: use Git Bash as Terminal in PyCharm. How can I have a normal Bash with Git integrated Terminal in PyCharm? File path for Git Bash: C:\Users\me\AppData\Local\Programs\Git\git-bash.exe --cd-to-home. I apply Git Bash in PyCharm Settings: However, when I click New Session (new Terminal +), it launches as a Window:
You are pointing at git-bash.exe, which is wrong. You should point at bash.exe which is located inside the bin folder. So your Shell path should be: "C:\Users\<username>\AppData\Local\Programs\Git\bin\bash.exe" --login using "C:\Users\<username>\AppData\Local\Programs\Git\bin\sh.exe" --login would also work. Also works without --login. Make sure the " are there.
3
6
72,624,883
2022-6-15
https://stackoverflow.com/questions/72624883/how-do-i-dump-pythons-logging-configuration
How do I dump the current configuration of the Python logging module? For example, if I use a module that configures logging for me, how can I see what it has done?
There does not appear to be a documented way to do so, but we can get hints by looking at how the logging module is implemented. All Loggers belong to a tree, with the root Logger instance at logging.root. The Logger instances do not track their own children but instead have a shared Manager that can be used to get a list of all loggers: >>> print(logging.root.manager.loggerDict) { 'rosgraph': <logging.PlaceHolder object at 0xffffa2851710>, 'rosgraph.network': <logging.Logger object at 0xffffa28517d0>, 'rosout': <rosgraph.roslogging.RospyLogger object at 0xffffa2526290>, 'rospy': <rosgraph.roslogging.RospyLogger object at 0xffffa2594250>, ... } Each Logger instance has handlers and filters attributes which can help understand the behavior of the logger.
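Building on that, a minimal sketch that uses only the attributes mentioned above to walk the logger tree and print each logger's level, propagation flag, and handlers:

import logging

def dump_logging_config():
    # include the root logger plus everything registered in the manager
    loggers = {'root': logging.root, **logging.root.manager.loggerDict}
    for name, logger in loggers.items():
        if isinstance(logger, logging.PlaceHolder):
            print(f"{name}: <placeholder>")
            continue
        level = logging.getLevelName(logger.level)
        print(f"{name}: level={level}, propagate={logger.propagate}")
        for handler in logger.handlers:
            print(f"  handler={handler!r}, level={logging.getLevelName(handler.level)}")

dump_logging_config()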
4
3
72,607,940
2022-6-13
https://stackoverflow.com/questions/72607940/how-to-use-unboundedpreceding-unboundedfollowing-and-currentrow-in-rowsbetween
I am a little confused about the method pyspark.sql.Window.rowsBetween that accepts Window.unboundedPreceding, Window.unboundedFollowing, and Window.currentRow objects as start and end arguments. Could you please explain how the function works and how to use Window objects correctly, with some examples? Thank you!
Rows between/Range between as the name suggests help with limiting the number of rows considered inside a window. Let us take a simple example. Starting with data: dfw = ( spark .createDataFrame( [ ("abc", 1, 100), ("abc", 2, 200), ("abc", 3, 300), ("abc", 4, 200), ("abc", 5, 100), ], "name string,id int,price int", ) ) # output +----+---+-----+ |name| id|price| +----+---+-----+ | abc| 1| 100| | abc| 2| 200| | abc| 3| 300| | abc| 4| 200| | abc| 5| 100| +----+---+-----+ Now over this data let's try to find of running max i.e max for each row: ( dfw .withColumn( "rm", F.max("price").over(Window.partitionBy("name").orderBy("id")) ) .show() ) #output +----+---+-----+---+ |name| id|price| rm| +----+---+-----+---+ | abc| 1| 100|100| | abc| 2| 200|200| | abc| 3| 300|300| | abc| 4| 200|300| | abc| 5| 100|300| +----+---+-----+---+ So as expected it looked at each price from top to bottom one by one and populated the max value it got this behaviour is known as start = Window.unboundedPreceding to end = Window.currentRow Now changing rows between values to start = Window.unboundedPreceding to end = Window.unboundedFollowing we will get as below: ( dfw .withColumn( "rm", F.max("price").over( Window .partitionBy("name") .orderBy("id") .rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing) ) ) .show() ) #output +----+---+-----+---+ |name| id|price| rm| +----+---+-----+---+ | abc| 1| 100|300| | abc| 2| 200|300| | abc| 3| 300|300| | abc| 4| 200|300| | abc| 5| 100|300| +----+---+-----+---+ Now as you can see in the same window it's looking downwards in all values for a max instead of limiting it to the current row. Now third will be start = Window.currentRow and end = Window.unboundedFollowing ( dfw .withColumn( "rm", F.max("price").over( Window .partitionBy("name") .orderBy("id") .rowsBetween(Window.currentRow, Window.unboundedFollowing) ) ) .show() ) #output +----+---+-----+---+ |name| id|price| rm| +----+---+-----+---+ | abc| 1| 100|300| | abc| 2| 200|300| | abc| 3| 300|300| | abc| 4| 200|200| | abc| 5| 100|100| +----+---+-----+---+ Now it's looking down only for a max starting its row from the current one. Also, it's not limited to just these 3 to use as is you can even start = Window.currentRow-1 and end = Window.currentRow+1 so instead of looking for all values above or below it will only look at 1 row above and 1 row below. like this: ( dfw .withColumn( "rm", F.max("price").over( Window .partitionBy("name") .orderBy("id") .rowsBetween(Window.currentRow-1, Window.currentRow+1) ) ) .show() ) # output +----+---+-----+---+ |name| id|price| rm| +----+---+-----+---+ | abc| 1| 100|200| | abc| 2| 200|300| | abc| 3| 300|300| | abc| 4| 200|300| | abc| 5| 100|200| +----+---+-----+---+ So you can imagine it a window inside the window which works around the current row it's processing.
8
22
72,604,922
2022-6-13
https://stackoverflow.com/questions/72604922/how-to-convert-python-dataclass-to-dictionary-of-string-literal
Given a dataclass like below: class MessageHeader(BaseModel): message_id: uuid.UUID def dict(self, **kwargs): return json.loads(self.json()) I would like to get a dictionary of string literal when I call dict on MessageHeader The desired outcome of dictionary is like below: {'message_id': '383b0bfc-743e-4738-8361-27e6a0753b5a'} I want to avoid using 3rd party library like pydantic & I do not want to use json.loads(self.json()) as there are extra round trips Is there any better way to convert a dataclass to a dictionary with string literal like above?
You can use dataclasses.asdict: from dataclasses import dataclass, asdict class MessageHeader(BaseModel): message_id: uuid.UUID def dict(self): return {k: str(v) for k, v in asdict(self).items()} If you're sure that your class only has string values, you can skip the dictionary comprehension entirely: class MessageHeader(BaseModel): message_id: uuid.UUID dict = asdict
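A quick usage sketch of the first variant, assuming the class accepts message_id in its constructor as in the question:

import uuid

header = MessageHeader(message_id=uuid.uuid4())
print(header.dict())
# e.g. {'message_id': '383b0bfc-743e-4738-8361-27e6a0753b5a'}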
77
120
72,564,558
2022-6-9
https://stackoverflow.com/questions/72564558/django-celery-error-while-adding-tasks-to-rabbitmq-message-queue-attributeerro
I have setup celery, rabbitmq and django web server on digitalocean. RabbitMQ runs on another server where my Django app is not running. When I am trying to add the tasks to the queue using delay I am getting an error AttributeError: 'ChannelPromise' object has no attribute 'value' From django shell I am adding the task to my message queue. python3 manage.py shell Python 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. (InteractiveConsole) >>> from app1.tasks import add >>> add.delay(5, 6) But getting error Traceback (most recent call last): File "/etc/myprojectenv/lib/python3.8/site-packages/kombu/utils/functional.py", line 30, in __call__ return self.__value__ AttributeError: 'ChannelPromise' object has no attribute '__value__' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/etc/myprojectenv/lib/python3.8/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors yield File "/etc/myprojectenv/lib/python3.8/site-packages/kombu/connection.py", line 433, in _ensure_connection return retry_over_time( File "/etc/myprojectenv/lib/python3.8/site-packages/kombu/utils/functional.py", line 312, in retry_over_time return fun(*args, **kwargs) File "/etc/myprojectenv/lib/python3.8/site-packages/kombu/connection.py", line 877, in _connection_factory self._connection = self._establish_connection() File "/etc/myprojectenv/lib/python3.8/site-packages/kombu/connection.py", line 812, in _establish_connection conn = self.transport.establish_connection() File "/etc/myprojectenv/lib/python3.8/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection conn.connect() File "/etc/myprojectenv/lib/python3.8/site-packages/amqp/connection.py", line 323, in connect self.transport.connect() File "/etc/myprojectenv/lib/python3.8/site-packages/amqp/transport.py", line 129, in connect self._connect(self.host, self.port, self.connect_timeout) File "/etc/myprojectenv/lib/python3.8/site-packages/amqp/transport.py", line 184, in _connect self.sock.connect(sa) ConnectionRefusedError: [Errno 111] Connection refused Started celery as : celery -A myproject worker -l info which gives me User information: uid=0 euid=0 gid=0 egid=0 warnings.warn(SecurityWarning(ROOT_DISCOURAGED.format( -------------- celery@ubuntu-s-1vcpu-1gb-blr1-01 v5.2.7 (dawn-chorus) --- ***** ----- -- ******* ---- Linux-5.4.0-107-generic-x86_64-with-glibc2.29 2022-06-09 17:24:14 - *** --- * --- - ** ---------- [config] - ** ---------- .> app: myproject:0x7fd64fa5d970 - ** ---------- .> transport: amqp://himanshu:**@IPADDRESS2:5672/vhostcheck - ** ---------- .> results: - *** --- * --- .> concurrency: 1 (prefork) -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker) --- ***** ----- -------------- [queues] .> celery exchange=celery(direct) key=celery [tasks] . app1.tasks.add [2022-06-09 17:24:14,309: INFO/MainProcess] Connected to amqp://himanshu:**@IPADDRESS:5672/vhostcheck [2022-06-09 17:24:14,313: INFO/MainProcess] mingle: searching for neighbors [2022-06-09 17:24:15,333: INFO/MainProcess] mingle: all alone [2022-06-09 17:24:15,349: WARNING/MainProcess] /etc/myprojectenv/lib/python3.8/site-packages/kombu/pidbox.py:70: UserWarning: A node named celery@ubuntu-s-1vcpu-1gb-blr1-01 is already using this process mailbox! 
[2022-06-09 17:24:15,352: WARNING/MainProcess] /etc/myprojectenv/lib/python3.8/site-packages/celery/fixups/django.py:203: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments! warnings.warn('''Using settings.DEBUG leads to a memory [2022-06-09 17:24:15,352: INFO/MainProcess] celery@ubuntu-s-1vcpu-1gb-blr1-01 ready. Inside app1 project : tasks.py from __future__ import absolute_import, unicode_literals from celery import shared_task @shared_task def add(x, y): return x + y settings.py INSTALLED_APPS = [ ... 'app1', 'django_celery_results', ] CELERY_RESULT_BACKEND = 'django-db' CELERY_CACHE_BACKEND = 'django-cache' CELERY_BROKER_URL = 'amqp://himanshu:password@IPADDRESS:5672/vhostcheck' CELERY_ACCEPT_CONTENT = ['application/json'] CELERY_TASK_SERIALIZER = 'json' CELERY_RESULT_SERIALIZER = 'json' CELERY_TIMEZONE = 'Europe/Amsterdam'
To ensure the app is loaded when Django starts, we need to import the Celery app inside myproject/__init__.py: sudo nano myproject/__init__.py # This will make sure the app is always imported when # Django starts so that shared_task will use this app. from .celery import app as celery_app __all__ = ['celery_app']
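For completeness, a sketch of the myproject/celery.py module that the import above refers to, following the standard Celery-with-Django layout (project and app names assumed to match the question):

# myproject/celery.py
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

app = Celery('myproject')
# read CELERY_* settings from Django's settings.py
app.config_from_object('django.conf:settings', namespace='CELERY')
# auto-discover tasks.py modules in installed apps (e.g. app1.tasks)
app.autodiscover_tasks()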
6
9
72,551,035
2022-6-8
https://stackoverflow.com/questions/72551035/why-i-cant-access-my-environment-variable-from-env-file-in-vscode-python
What am I trying to do? I am trying to access my environment variable from the .env file and print its value in the terminal. What is the issue? When I run the script in the terminal it keeps printing None. Some more info: I am using Windows 10. The version of python-dotenv is 0.20.0, and I installed it using python -m pip install python-dotenv The code .env file - export PRIVATE_KEY = 0xc3c4e4fe27d8e6b06710e713878e4488c034ce346a578fdfa78bb3d335130eec The Python file - from dotenv import load_dotenv import os load_dotenv() print(os.getenv("PRIVATE_KEY"))
I solved my issue, but not by removing the "export" in the .env file (I tried with and without it; both gave the same results in the terminal). I had to specify the full path of the .env file in load_dotenv(). Apparently I had to specify it even though many code samples I saw in forums didn't need to, and I wonder why. Here is the new code... The Python code - The .env file - by the way, it's not a real private key. At some point before I found the solution I ran the Python script in the terminal and got a random private key (sadly I don't have it pictured). It might have been a key I set in the past, but then I checked whether I had other .env files in the folder and I had none. I also didn't have any environment variables in System Properties > Advanced > Environment Variables on Windows, so where does dotenv get the key values from by default? Also, after that I reopened VS Code and tried again, but I got None again...
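A minimal sketch of what such an explicit-path call typically looks like (file layout assumed: the .env sits next to the script):

import os
from pathlib import Path
from dotenv import load_dotenv

# point python-dotenv at the .env file explicitly instead of relying on auto-discovery
env_path = Path(__file__).resolve().parent / ".env"
load_dotenv(dotenv_path=env_path)

print(os.getenv("PRIVATE_KEY"))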
10
8
72,610,552
2022-6-14
https://stackoverflow.com/questions/72610552/most-replayed-data-of-youtube-video-via-api
Is there any way to extract the "Most Replayed" (aka Video Activity Graph) Data from a YouTube video via API? What I'm referring to:
One more time YouTube Data API v3 doesn't provide a basic feature. I recommend you to try out my open-source YouTube operational API. Indeed by fetching https://yt.lemnoslife.com/videos?part=mostReplayed&id=VIDEO_ID, you will get the most replayed graph values you are looking for in item["mostReplayed"]. With the video id XiCrniLQGYc you would get: { "kind": "youtube#videoListResponse", "etag": "NotImplemented", "items": [ { "kind": "youtube#video", "etag": "NotImplemented", "id": "XiCrniLQGYc", "mostReplayed": { "markers": [ { "startMillis": 0, "intensityScoreNormalized": 1 }, { "startMillis": 2580, "intensityScoreNormalized": 0.7083409245967562 }, { "startMillis": 5160, "intensityScoreNormalized": 0.6381007317793738 }, ... { "startMillis": 255420, "intensityScoreNormalized": 0.012864077773078256 } ], "timedMarkerDecorations": [ { "visibleTimeRangeStartMillis": 0, "visibleTimeRangeEndMillis": 10320 } ] } } ] }
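A small sketch of fetching that endpoint from Python, based only on the request and response shape shown above:

import requests

video_id = "XiCrniLQGYc"
resp = requests.get(
    "https://yt.lemnoslife.com/videos",
    params={"part": "mostReplayed", "id": video_id},
    timeout=30,
)
resp.raise_for_status()

markers = resp.json()["items"][0]["mostReplayed"]["markers"]
for m in markers[:5]:
    print(m["startMillis"], m["intensityScoreNormalized"])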
16
21
72,560,837
2022-6-9
https://stackoverflow.com/questions/72560837/custom-fastapi-query-parameter-validation
Is there any way to have custom validation logic in a FastAPI query parameter? example I have a FastAPI app with a bunch of request handlers taking Path components as query parameters. For example: def _raise_if_non_relative_path(path: Path): if path.is_absolute(): raise HTTPException( status_code=409, detail=f"Absolute paths are not allowed, {path} is absolute." ) @app.get("/new",) def new_file(where: Path): _raise_if_non_relative_path(where) # do save a file return Response(status_code=requests.codes.ok) @app.get("/new",) def delete_file(where: Path): _raise_if_non_relative_path(where) # do save a file return Response(status_code=requests.codes.ok) I was wondering if there is way to ensure that the handler is not even called when the given file path is absolute. Now I have to repeat myself with _raise_if_non_relative_path everywhere. what I tried fastapi.Query: This only allows very basic validation (string length and regex). I could define a absolute path regex in this example. But a regex solution is really not generic, I want to validate with a custom function. Subclass pathlib.Path with validation logic in __init__: This doesn't work, the type given in the type signature is ignored, and the object in my handler is a regular pathlib.PosixPath. Use @app.middleware: this can work but seems overkill since not all my request handlers deal with Path objects. class RelativePath(pydantic.Basemodel): I.e. define a class with single path field, which I can validate however I want. Unfortunately, this does not work for query parameters. If I do this, the request handler insists on having a json content body. Or at least that is what the swagger docs say.
This is the kind of validation that the Depends dependency management function is well-suited for. It allows you to define dependencies for given view functions, and add logic to validate (and lazily create) those dependencies. This gives a set of composable dependencies that can be re-used in those views where they are required. You can create a relative_where_query dependency, and then depend on that to perform any required validation: from fastapi import Depends, FastAPI, Response, Query from fastapi.exceptions import HTTPException from pathlib import Path import requests app = FastAPI() def relative_where_query(where: Path = Query(...)): if where.is_absolute(): raise HTTPException( status_code=409, detail=f"Absolute paths are not allowed, {where} is absolute." ) return where @app.get("/new") def new_file(where: Path = Depends(relative_where_query)): return Response(status_code=requests.codes.ok) This gives small, easily readable (and understandable) view functions, while the dependency ("I need a relative path from the where query parameter") has been moved to its own definition. You can then re-use this dependency in every view function that require a relative path from the where query parameter (and you can further decompose and recompose these dependencies further if necessary). Update 2023-05-01: If you want to generalize this handling to decouple the field name from the validation function, you can do that by generating a dynamic dependency. This examples creates a dynamic base model with Pydantic to do it - but there's surely other ways. Since we now raise the error inside a Pydantic validator, we add an exception handler to give a proper error response if the path isn't valid. The magic itself happens inside path_validator - this function takes the name given to it, uses it as a field name in a create_model call - which dynamically creates a Pydantic model from its parameters. This model is then used as a dependency in the controller. The name given as an argument to the function will be the actual parameter name - both when being called (i.e. in the URL in this case) and when the documentation is displayed. The path name in the controller function itself is only relevant inside the function, and isn't used in the documentation or as the query parameter (the field name in the model is used for that). from fastapi import Depends, FastAPI, Response, Query, Request from fastapi.exceptions import HTTPException from fastapi.responses import JSONResponse from pathlib import Path from pydantic import create_model, validator import requests app = FastAPI() class PathNotAbsoluteError(Exception): pass @app.exception_handler(PathNotAbsoluteError) async def value_error_exception_handler(request: Request, exc: PathNotAbsoluteError): return JSONResponse( status_code=400, content={"message": str(exc)}, ) def path_is_absolute(cls, value): if not value.is_absolute(): raise PathNotAbsoluteError("Given path is not absolute") return value def path_validator(param_name): validators = { 'pathname_validator': validator(param_name)(path_is_absolute) } return create_model( "AbsolutePathQuery", **{param_name: (Path, Query(...))}, __validators__=validators, ) @app.get("/path") def new_file(path: Path = Depends(path_validator("where"))): return Response(status_code=requests.codes.ok) You can now reuse this in any location you want by adding a dependency on a given parameter name: def new_file(path: Path = Depends(path_validator("my_path"))): ... 
def old_file(path: Path = Depends(path_validator("where"))): ...
7
8
72,587,334
2022-6-11
https://stackoverflow.com/questions/72587334/how-to-await-a-list-of-tasks-in-python
In .Net C#, there is a function Task.WhenAll that can take a list of tasks to await them. What should I use in python? I am trying to do the same with this: tasks = ... # list of coroutines for task in tasks: await task
After adding the coroutines to a list, you should use asyncio.gather, which takes them as an argument list and executes them asynchronously. Alternatively, you can use asyncio.create_task, which takes a coroutine and schedules it as a concurrent task on the event loop. import asyncio async def coro(i): await asyncio.sleep(i//2) async def main(): tasks = [] for i in range(5): tasks.append(coro(i)) await asyncio.gather(*tasks) if __name__ == "__main__": loop = asyncio.get_event_loop() loop.run_until_complete(main()) loop.close()
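On recent Python versions the same thing is usually written with asyncio.run as the entry point; a minimal sketch:

import asyncio

async def coro(i):
    await asyncio.sleep(i // 2)

async def main():
    # closest analogue of Task.WhenAll: await all of them together
    await asyncio.gather(*(coro(i) for i in range(5)))

if __name__ == "__main__":
    asyncio.run(main())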
4
8
72,620,996
2022-6-14
https://stackoverflow.com/questions/72620996/apple-m1-symbol-not-found-cfrelease-while-running-python-app
I wanna run my app without any problem, but I got this attached error. Could someone help or point me in the right direction regarding why this is happening? Traceback (most recent call last): File "/Users/enchant3dmango/Documents/GitHub/nexus/automation-api/app/main.py", line 4, in <module> from configurations import config # noqa # pylint: disable=unused-import File "/Users/enchant3dmango/Documents/GitHub/nexus/automation-api/app/configurations/config.py", line 7, in <module> from google.cloud import secretmanager File "/Users/enchant3dmango/opt/miniconda3/envs/nexus/lib/python3.9/site-packages/google/cloud/secretmanager.py", line 20, in <module> from google.cloud.secretmanager_v1 import SecretManagerServiceClient File "/Users/enchant3dmango/opt/miniconda3/envs/nexus/lib/python3.9/site-packages/google/cloud/secretmanager_v1/__init__.py", line 24, in <module> from google.cloud.secretmanager_v1.gapic import secret_manager_service_client File "/Users/enchant3dmango/opt/miniconda3/envs/nexus/lib/python3.9/site-packages/google/cloud/secretmanager_v1/gapic/secret_manager_service_client.py", line 25, in <module> import google.api_core.gapic_v1.client_info File "/Users/enchant3dmango/opt/miniconda3/envs/nexus/lib/python3.9/site-packages/google/api_core/gapic_v1/__init__.py", line 18, in <module> from google.api_core.gapic_v1 import config File "/Users/enchant3dmango/opt/miniconda3/envs/nexus/lib/python3.9/site-packages/google/api_core/gapic_v1/config.py", line 23, in <module> import grpc File "/Users/enchant3dmango/opt/miniconda3/envs/nexus/lib/python3.9/site-packages/grpc/__init__.py", line 22, in <module> from grpc import _compression File "/Users/enchant3dmango/opt/miniconda3/envs/nexus/lib/python3.9/site-packages/grpc/_compression.py", line 15, in <module> from grpc._cython import cygrpc ImportError: dlopen(/Users/enchant3dmango/opt/miniconda3/envs/nexus/lib/python3.9/site-packages/grpc/_cython/cygrpc.cpython-39-darwin.so, 2): Symbol not found: _CFRelease Referenced from: /Users/enchant3dmango/opt/miniconda3/envs/nexus/lib/python3.9/site-packages/grpc/_cython/cygrpc.cpython-39-darwin.so Expected in: flat namespace in /Users/enchant3dmango/opt/miniconda3/envs/nexus/lib/python3.9/site-packages/grpc/_cython/cygrpc.cpython-39-darwin.so I'm running this in Apple M1. I already upgraded pip and setuptools before installing all the requirements in my virtual environment using conda. Here is my python, pip, and setuptools version: python 3.9.12 pip 21.2.4 setuptools 62.4.0
I was able to get around the problem by rebuilding grpcio from source like this: pip uninstall grpcio export GRPC_PYTHON_LDFLAGS=" -framework CoreFoundation" pip install grpcio --no-binary :all:
17
14
72,583,781
2022-6-11
https://stackoverflow.com/questions/72583781/im-getting-an-import-error-does-anyone-know-the-solution
hi I'm getting an import error, does anyone know the solution? ImportError: Bindings generation error. Submodule name should always start with a parent module name. Parent name: cv2.cv2. Submodule name: cv2
If you are using opencv-python version 4.6.0.66, try downgrading to version 4.5.5.64. In PyCharm you can do that by going to File -> Settings -> Python Interpreter -> double-click the opencv-python entry -> check the "Specify version" box, then choose the older version. Downgrading opencv also makes auto-completion work again.
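If you prefer the command line over the PyCharm dialog, the equivalent pip commands would be roughly:

pip uninstall opencv-python
pip install opencv-python==4.5.5.64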
3
15
72,605,385
2022-6-13
https://stackoverflow.com/questions/72605385/how-to-install-local-pip-package-from-mounted-volume-in-docker-compose
I am working on a project which has dependency on another project. In live environment, the dependency is published and installed simply using pip install. On my local environment, I'd like to be able to install the local dependency instead, using the pip install -e command. The structure is as follow: - Home --- Project1 ----- docker-compose ----- Dockerfile --- relaton-py In this structure, Project1 has dependency on relaton-py, thus I'd like to "install" this dependency using the local relaton-py. My docker-compose file looks like: volumes: - .:/code - /Users/myuser/Dev/Projects/relaton-py:/relaton-py while the Dockerfile looks like: COPY requirements.txt /code/requirements.txt RUN ["pip", "install", "-e", "relaton-py"] WORKDIR /code RUN ["pip", "install", "-r", "requirements.txt"] # Copy the rest of the codebase COPY . /code When trying to spin un the environment, I get the following error: => ERROR [local/web-precheck:latest 11/19] RUN ["pip", "install", "-e", "relaton-py"] 1.2s ------ > [local/web-precheck:latest 11/19] RUN ["pip", "install", "-e", "relaton-py"]: #27 0.685 ERROR: relaton-py is not a valid editable requirement. It should either be a path to a local project or a VCS URL (beginning with bzr+http, bzr+https, bzr+ssh, bzr+sftp, bzr+ftp, bzr+lp, bzr+file, git+http, git+https, git+ssh, git+git, git+file, hg+file, hg+http, hg+https, hg+ssh, hg+static-http, svn+ssh, svn+http, svn+https, svn+svn, svn+file). #27 0.889 WARNING: You are using pip version 22.0.4; however, version 22.1.2 is available. #27 0.889 You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command. ------ failed to solve: rpc error: code = Unknown desc = executor failed running [pip install -e relaton-py]: exit code: 1 However, if I try not to install this local dependency in the Dockerfile, the entire environment spins up and I am able to access the container and install the package manually with the same command pip install -e relaton-py. Has anyone had to deal with this sort of thing already? Any idea on how to make Dockerfile recognise the files in the mounted volumes?
For simpler packages, it may be enough to just mount it directly into the site-packages. In your example, you might want to try: putting relaton-py (just the package in its source form, e.g. the Git repository) one directory above your Compose file, and adding this line to docker-compose.yml under volumes: - ../relaton-py/relaton:/usr/local/lib/python3.10/site-packages/relaton:ro Assuming you use Python 3.10, this should override whatever gets installed from PyPI during Docker image build with the directory on your host system, mounted in read-only mode to prevent accidental changes from within the container (but any source code changes you make on your main system, of course, will be propagated immediately). If your package includes native modules and requires compilation for particular OS/architecture, there may be extra steps involved in this.
4
2
72,584,282
2022-6-11
https://stackoverflow.com/questions/72584282/django-caddy-csrf-protection-issues
I deployed a Django 4 app with Daphne (ASGI) in a docker container. I use Caddy as a reverse proxy in front. It works, except I can't fill in any form because the CSRF protection kicks in. So no admin login, for example. I can currently access the admin interface in two ways: Directly through docker, via a SSH tunelled port Through Caddy, which is then forwarding to the Docker container. Option 1 works. I can log into the admin interface just as if I was running the development server locally. All is working as expected. However, option 2 (caddy reverse proxy) doesn't work. I can access Django and load pages, but any form submission will be blocked because the CSRF protection kicks in. CSRF verification failed. Request aborted. Reason given for failure: Origin checking failed - https://<mydomain.com> does not match any trusted origins. My Caddyfile contains this: <mydomain.com> { reverse_proxy localhost:8088 } localhost:8088 is the port exposed by my docker container. In an effort to eliminate potential issues, I've set the following to false in my config file: SECURE_SSL_REDIRECT (causes a redirect loop, probably related to the reverse proxying) SESSION_COOKIE_SECURE (I'd rather have it set to True, but I don't know at this point) CSRF_COOKIE_SECURE (same remark) The only Django-Caddy examples I could find online are outdated and refer to older versions of Caddy and/or Django. Django is deployed on ASGI with Daphne. I've seens posts suggesting to change CSRF_TRUSTED_ORIGINS, but it doesn't seem right that I would have to add a host that is already in the ALLOWED_HOSTS list. That also wouldn't explain why it works directly on the docker container, unless localhost is a special case for CSRF. Versions: Caddy: 2.5.1 Django: 4.0.5 Daphne: 3.0.2 Python: 3.10.5 Any idea what goes wrong, and how I should go about debugging such issues?
Finally found out what was happening. I first wanted to know the exact HTTP request that was sent from caddy to django: sudo tcpdump -i lo -A -n port 8088 This confirmed that: the Origin and Referer headers were set properly the csrftoken cookie was sent properly Once that was known, I could dig in the code from django. Specifically, this function in the CSRF middleware. In conclusion: Caddy forwards the http request to django unencrypted (so HTTP-non-S between caddy and django). Django considers that request non-secure The CSRF protection expects the Origin header sent by the browser to be http:// because the request is not secure. In my case, it is https:// because my browser is talking to Caddy over https Because the Origin header does not match what the CSRF middleware expects, the request is rejected It's actually a simple fix. Since we know that Caddy will always ignore X-Forwarded-Proto from the browser and sets it itself, we can add SECURE_PROXY_SSL_HEADER to the settings.py in django: SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') And voilà! Now I can also set these to true: SECURE_SSL_REDIRECT SESSION_COOKIE_SECURE CSRF_COOKIE_SECURE EDIT Here's the Caddyfile, as per requested: service.mywebsite.com { reverse_proxy localhost:8088 }
5
11
72,550,211
2022-6-8
https://stackoverflow.com/questions/72550211/valueerror-at-least-one-stride-in-the-given-numpy-array-is-negative-and-tensor
I am writing the code for Autonomous Driving using RL. I am using a stable baseline3 and an open ai gym environment. I was running the following code in the jupyter notebook and it is giving me the following error: # Testing our model episodes = 5 # test the environment 5 times for episodes in range(1,episodes+1): # looping through each episodes bs = env.reset() # observation space # Taking the obs and passing it through our model # tells that which kind of the action is best for our work done = False score = 0 while not done: env.render() action, _ = model.predict(obs) # now using model here # returns model action and next state # take that action to get the best reward # for observation space we get the box environment # rather than getting random action we are using model.predict(obs) on our obs for an curr env to gen the action inorder to get best possible reward obs, reward, done, info = env.step(action) # gies state, reward whose value is 1 # reward is 1 for every step including the termination step score += reward print('Episode:{},Score:{}'.format(episodes,score))''' env.close() Error The link for the code that I have written is given below: https://drive.google.com/file/d/1JBVmPLn-N1GCl_Rgb6-qGMpJyWvBaR1N/view?usp=sharing The version of python I am using is Python 3.8.13 in Anaconda Environment. I am using Pytorch CPU version and the OS is Windows 10. Please help me out in solving this question.
Using .copy() for numpy arrays should help (because PyTorch tensors can't handle negative strides): action, _ = model.predict(obs.copy()) I haven't managed to run your notebook quickly because of dependencies problems, but I had the same error with AI2THOR simulator, and adding .copy() has helped. Maybe someone with more technical knowledge about numpy, torch or AI2THOR will explain why the error occurs in more detail.
4
12
72,544,983
2022-6-8
https://stackoverflow.com/questions/72544983/how-can-i-bulk-upload-json-records-to-aws-opensearch-index-using-a-python-client
I have a sufficiently large dataset that I would like to bulk index the JSON objects in AWS OpenSearch. I cannot see how to achieve this using any of: boto3, awswrangler, opensearch-py, elasticsearch, elasticsearch-py. Is there a way to do this without using a python request (PUT/POST) directly? Note that this is not for: ElasticSearch, AWS ElasticSearch. Many thanks!
I finally found a way to do it using opensearch-py, as follows. First establish the client, # First fetch credentials from environment defaults # If you can get this far you probably know how to tailor them # For your particular situation. Otherwise SO is a safe bet :) import boto3 credentials = boto3.Session().get_credentials() region='eu-west-2' # for example auth = AWSV4SignerAuth(credentials, region) # Now set up the AWS 'Signer' from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth auth = AWSV4SignerAuth(credentials, region) # And finally the OpenSearch client host=f"...{region}.es.amazonaws.com" # fill in your hostname (minus the https://) here client = OpenSearch( hosts = [{'host': host, 'port': 443}], http_auth = auth, use_ssl = True, verify_certs = True, connection_class = RequestsHttpConnection ) Phew! Let's create the data now: # Spot the deliberate mistake(s) :D document1 = { "title": "Moneyball", "director": "Bennett Miller", "year": "2011" } document2 = { "title": "Apollo 13", "director": "Richie Cunningham", "year": "1994" } data = [document1, document2] TIP! Create the index if you need to - my_index = 'my_index' try: response = client.indices.create(my_index) print('\nCreating index:') print(response) except Exception as e: # If, for example, my_index already exists, do not much! print(e) This is where things go a bit nutty. I hadn't realised that every single bulk action needs an, er, action e.g. "index", "search" etc. - so let's define that now action={ "index": { "_index": my_index } } You can read all about the bulk REST API, there. The next quirk is that the OpenSearch bulk API requires Newline Delimited JSON (see https://www.ndjson.org), which is basically JSON serialized as strings and separated by newlines. Someone wrote on SO that this "bizarre" API looked like one designed by a data scientist - far from taking offence, I think that rocks. (I agree ndjson is weird though.) Hideously, now let's build up the full JSON string, combining the data and actions. A helper fn is at hand! def payload_constructor(data,action): # "All my own work" action_string = json.dumps(action) + "\n" payload_string="" for datum in data: payload_string += action_string this_line = json.dumps(datum) + "\n" payload_string += this_line return payload_string OK so now we can finally invoke the bulk API. I suppose you could mix in all sorts of actions (out of scope here) - go for it! response=client.bulk(body=payload_constructor(data,action),index=my_index) That's probably the most boring punchline ever but there you have it. You can also just get (geddit) .bulk() to just use index= and set the action to: action={"index": {}} Hey presto! Now, choose your poison - the other solution looks crazily shorter and neater. PS The well-hidden opensearch-py documentation on this are located here.
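As a postscript: the client library also ships a helpers module that builds the ndjson for you, so (assuming the same client, index, and data as above) the manual payload_constructor can usually be replaced by something like:

from opensearchpy import helpers

actions = [{"_index": my_index, "_source": doc} for doc in data]

# helpers.bulk serializes the actions to ndjson and calls the bulk endpoint for you
success_count, errors = helpers.bulk(client, actions, raise_on_error=False)
print(success_count, errors)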
7
17
72,621,731
2022-6-14
https://stackoverflow.com/questions/72621731/is-there-any-graceful-way-to-interrupt-a-python-concurrent-future-result-call
The only mechanism I can find for handling a keyboard interrupt is to poll. Without the while loop below, the signal processing never happens and the process hangs forever. Is there any graceful mechanism for allowing a keyboard interrupt to function when given a concurrent future object? Putting polling loops all over my code base seems to defeat the purpose of using futures at all. More info: Waiting on the future in the main thread in Windows blocks all signal handling, even if it's fully cancellable and even if it has not "started" yet. The word "exiting" doesn't even print. So 'cancellability' is only part (the easy part) of the issue. In my real code, I obtain futures via executors (run coro threadsafe, in this case), this was just a simplified example import concurrent.futures import signal import time import sys fut = concurrent.futures.Future() def handler(signum, frame): print("exiting") fut.cancel() signal.signal(signal.SIGINT, orig) sys.exit() orig = signal.signal(signal.SIGINT, handler) # a time sleep is fully interruptible with a signal... but a future isnt # time.sleep(100) while True: try: fut.result(.03) except concurrent.futures.TimeoutError: pass
OK, I wrote a solution to this based on digging in cypython source and some bug reports - but it's not pretty. If you want to be able to interrupt a future, especially on Windows, the following seems to work: @contextlib.contextmanager def interrupt_futures(futures): # pragma: no cover """Allows a list of futures to be interrupted. If an interrupt happens, they will all have their exceptions set to KeyboardInterrupt """ # this has to be manually tested for now, because the tests interfere with the test runner def do_interr(*_): for ent in futures: try: ent.set_exception(KeyboardInterrupt) except: # if the future is already resolved or cancelled, ignore it pass return 1 if sys.platform == "win32": from ctypes import wintypes # pylint: disable=import-outside-toplevel kernel32 = ctypes.WinDLL("kernel32", use_last_error=True) CTRL_C_EVENT = 0 CTRL_BREAK_EVENT = 1 HANDLER_ROUTINE = ctypes.WINFUNCTYPE(wintypes.BOOL, wintypes.DWORD) @HANDLER_ROUTINE def handler(ctrl): if ctrl == CTRL_C_EVENT: handled = do_interr() elif ctrl == CTRL_BREAK_EVENT: handled = do_interr() else: handled = False # If not handled, call the next handler. return handled if not kernel32.SetConsoleCtrlHandler(handler, True): raise ctypes.WinError(ctypes.get_last_error()) was = signal.signal(signal.SIGINT, do_interr) yield signal.signal(signal.SIGINT, was) # restore default handler kernel32.SetConsoleCtrlHandler(handler, False) else: was = signal.signal(signal.SIGINT, do_interr) yield signal.signal(signal.SIGINT, was) This allows you to do this: with interrupt_futures([fut]): fut.result() For the duration of that call, interrupt signals will be intercepted and will result in the future raising a KeyboardInterrupt to the caller requesting the result - instead of simply ignoring all interrupts.
5
2
72,593,814
2022-6-12
https://stackoverflow.com/questions/72593814/cannot-import-name-soft-unicode-from-markupsafe-in-google-colab
I'm trying to install pycaret==3.0.0 in google colab, But I'm having a problem, the library requires Jinja2 to be installed which I did, but then It finally throws off another error. ImportError Traceback (most recent call last) <ipython-input-26-4f8843d24b3a> in <module>() ----> 1 import jinja2 2 from pycaret.regression import * 3 frames /usr/local/lib/python3.7/dist-packages/jinja2/filters.py in <module>() 11 from markupsafe import escape 12 from markupsafe import Markup ---> 13 from markupsafe import soft_unicode 14 15 from ._compat import abc ImportError: cannot import name 'soft_unicode' from 'markupsafe' (/root/.local/lib/python3.7/site-packages/markupsafe/__init__.py)
This is caused by an upgrade in MarkupSafe 2.1.0, where soft_unicode was removed. Try using: pip install markupsafe==2.0.1
12
22
72,581,774
2022-6-11
https://stackoverflow.com/questions/72581774/how-to-use-python-3-6-in-google-colab
I want to work with Python 3.6 due to a file compatibility issue. Can anyone help me out with how I can use Python 3.6 in Google Colab? Apart from that, I want to use tensorflow 2.0 and opencv 4.0.0.21, which I have already installed in Colab; I am only stuck on Python 3.6. Any help would be appreciated.
You can use the command below and then select the alternative version to change the python version. !sudo update-alternatives --config python3 Output: There are 2 choices for the alternative python3 (providing /usr/bin/python3). Selection Path Priority Status ------------------------------------------------------------ * 0 /usr/bin/python3.7 2 auto mode 1 /usr/bin/python3.6 1 manual mode 2 /usr/bin/python3.7 2 manual mode Press <enter> to keep the current choice[*], or type selection number: 1 update-alternatives: using /usr/bin/python3.6 to provide /usr/bin/python3 (python3) in manual mode To check the python version use the following code. python --version # 3.6.9
4
2
72,541,971
2022-6-8
https://stackoverflow.com/questions/72541971/what-is-the-best-solution-for-cron-schedule-environmental-variables-in-a-docker
I have a Python script that will run on certain intervals as the cron schedule will invoke the Python script inside a Docker container. I want the cron schedule expression to be set through an environment variable, like this: CRON_SCHEDULE="*/5 * * * *" So the user can freely choose how often the script will run. On the other hand, I have a hard time making a bash script that will read this environment file and replace the existing crontab using sed as I need to escape any possible character. That brings me to the point where I'm wondering if there is any better solution for running a Python script on a schedule while also having easy configuration of the running schedule?
You should use the solution described in this answer https://stackoverflow.com/a/70897876/3669093 Whenever the environment variable is changed the container is restarted and the schedule is updated. If you need more guidance, shoot your questions.
5
2
72,593,019
2022-6-12
https://stackoverflow.com/questions/72593019/how-can-i-fire-and-forget-a-task-without-blocking-main-thread
What I have in mind is a very generic BackgroundTask class that can be used within webservers or standalone scripts, to schedule away tasks that don't need to be blocking. I don't want to use any task queues (celery, rabbitmq, etc.) here because the tasks I'm thinking of are too small and fast to run. Just want to get them done as out of the way as possible. Would that be an async approach? Throwing them onto another process? First solution I came up with that works: # Need ParamSpec to get correct type hints in BackgroundTask init P = ParamSpec("P") class BackgroundTask(metaclass=ThreadSafeSingleton): """Easy way to create a background task that is not dependent on any webserver internals. Usage: async def sleep(t): time.sleep(t) BackgroundTask(sleep, 10) <- Creates async task and executes it separately (nonblocking, works with coroutines) BackgroundTask(time.sleep, 9) <- Creates async task and executes it separately (nonblocking, works with normal functions) """ background_tasks = set() lock = threading.Lock() def __init__(self, func: typing.Callable[P, typing.Any], *args: P.args, **kwargs: P.kwargs) -> None: """Uses singleton instance of BackgroundTask to add a task to the async execution queue. Args: func (typing.Callable[P, typing.Any]): _description_ """ self.func = func self.args = args self.kwargs = kwargs self.is_async = asyncio.iscoroutinefunction(func) async def __call__(self) -> None: if self.is_async: with self.lock: task = asyncio.create_task(self.func(*self.args, **self.kwargs)) self.background_tasks.add(task) print(len(self.background_tasks)) task.add_done_callback(self.background_tasks.discard) # TODO: Create sync task (this will follow a similar pattern) async def create_background_task(func: typing.Callable[P, typing.Any], *args: P.args, **kwargs: P.kwargs) -> None: b = BackgroundTask(func, *args, **kwargs) await b() # Usage: async def sleep(t): time.sleep(t) await create_background_task(sleep, 5) I think I missed the point by doing this though. If I ran this code along with some other async code, then yes, I would get a performance benefit since blocking operations aren't blocking the main thread anymore. I'm thinking I maybe need something more like a separate process to handle such background tasks without blocking the main thread at all (the above async code will still be run on the main thread). Does it make sense to have a separate thread that handles background jobs? Like a simple job queue but very lightweight and does not require additional infrastructure? Or does it make sense to create a solution like the one above? I've seen that Starlette does something like this (https://github.com/encode/starlette/blob/decc5279335f105837987505e3e477463a996f3e/starlette/background.py#L15) but they await the background tasks AFTER a response is returned. This makes their solution dependent on a web server design (i.e. doing things after response is sent is OK). I'm wondering if we can build something more generic where you can run background tasks in scripts or webservers alike, without sacrificing performance. Not that familiar with async/concurrency features, so don't really know how to compare these solutions. Seems like an interesting problem! Here is what I came up with trying to perform the tasks on another process: class BackgroundTask(metaclass=ThreadSafeSingleton): """Easy way to create a background task that is not dependent on any webserver internals. 
Usage: async def sleep(t): time.sleep(t) BackgroundTask(sleep, 10) <- Creates async task and executes it separately (nonblocking, works with coroutines) BackgroundTask(time.sleep, 9) <- Creates async task and executes it separately (nonblocking, works with normal functions) BackgroundTask(es.transport.close) <- Probably most common use in our codebase """ background_tasks = set() executor = concurrent.futures.ProcessPoolExecutor(max_workers=2) lock = threading.Lock() def __init__(self, func: typing.Callable[P, typing.Any], *args: P.args, **kwargs: P.kwargs) -> None: """Uses singleton instance of BackgroundTask to add a task to the async execution queue. Args: func (typing.Callable[P, typing.Any]): _description_ """ self.func = func self.args = args self.kwargs = kwargs self.is_async = asyncio.iscoroutinefunction(func) async def __call__(self) -> None: if self.is_async: with self.lock: loop = asyncio.get_running_loop() with self.executor as pool: result = await loop.run_in_executor( pool, functools.partial(self.func, *self.args, **self.kwargs))
Your questions are so abstract that I'll try to give common answers to all of them. How can I "fire and forget" a task without blocking main thread? It depends on what you mean by saying forget. If you are not planning to access that task after running, you can run it in a parallel process. If the main application should be able to access a background task, then you should have an event-driven architecture. In that case, the things previously called tasks will be services or microservices. I don't want to use any task queues (celery, rabbitmq, etc.) here because the tasks I'm thinking of are too small and fast to run. Just want to get them done as out of the way as possible. Would that be an async approach? Throwing them onto another process? If it contains loops or other CPU-bound operations, then right to use a subprocess. If the task makes a request (async), reads files, logs to stdout, or other I/O bound operations, then it is right to use coroutines or threads. Does it make sense to have a separate thread that handles background jobs? Like a simple job queue but very lightweight and does not require additional infrastructure? We can't just use a thread as it can be blocked by another task that uses CPU-bound operations. Instead, we can run a background process and use pipes, queues, and events to communicate between processes. Unfortunately, we cannot provide complex objects between processes, but we can provide basic data structures to handle status changes of the tasks running in the background. Regarding the Starlette and the BackgroundTask Starlette is a lightweight ASGI framework/toolkit, which is ideal for building async web services in Python. (README description) It is based on concurrency. So even this is not a generic solution for all kinds of tasks. NOTE: Concurrency differs from parallelism. I'm wondering if we can build something more generic where you can run background tasks in scripts or webservers alike, without sacrificing performance. The above-mentioned solution suggests use a background process. Still, it will depend on the application design as you must do things (emit an event, add an indicator to the queue, etc.) that are needed for communication and synchronization of running processes (tasks). There is no generic tool for that, but there are situation-dependent solutions. Situation 1 - The tasks are asynchronous functions Suppose we have a request function that should call an API without blocking the work of other tasks. Also, we have a sleep function that should not block anything. import asyncio import aiohttp async def request(url): async with aiohttp.ClientSession() as session: async with session.get(url) as response: try: return await response.json() except aiohttp.ContentTypeError: return await response.read() async def sleep(t): await asyncio.sleep(t) async def main(): background_task_1 = asyncio.create_task(request("https://google.com/")) background_task_2 = asyncio.create_task(sleep(5)) ... # here we can do even CPU-bound operations result1 = await background_task_1 ... # use the 'result1', etc. await background_task_2 if __name__ == "__main__": loop = asyncio.get_event_loop() loop.run_until_complete(main()) loop.close() In this situation, we use asyncio.create_task to run a coroutine concurrently (like in the background). Sure we could run it in a subprocess, but there is no reason for that as it would use more resources without improving the performance. 
Situation 2 - The tasks are synchronous functions (I/O bound) Unlike the first situation where the functions were already asynchronous, in this situation, those are synchronous but not CPU-bound (I/O bound). This gives an ability to run them in threads or make them asynchronous (using asyncio.to_thread) and run concurrently. import time import asyncio import requests def asynchronous(func): """ This decorator converts a synchronous function to an asynchronous Usage: @asynchronous def sleep(t): time.sleep(t) async def main(): await sleep(5) """ async def wrapper(*args, **kwargs): await asyncio.to_thread(func, *args, **kwargs) return wrapper @asynchronous def request(url): with requests.Session() as session: response = session.get(url) try: return response.json() except requests.JSONDecodeError: return response.text @asynchronous def sleep(t): time.sleep(t) async def main(): background_task_1 = asyncio.create_task(request("https://google.com/")) background_task_2 = asyncio.create_task(sleep(5)) ... Here we used a decorator to convert a synchronous (I/O bound) function to an asynchronous one and use them like in the first situation. Situation 3 - The tasks are synchronous functions (CPU-bound) To run CPU-bound tasks parallelly in the background we have to use multiprocessing. And for ensuring the task is done we use the join method. import time import multiprocessing def task(): for i in range(10): time.sleep(0.3) def main(): background_task = multiprocessing.Process(target=task) background_task.start() ... # do the rest stuff that does not depend on the background task background_task.join() # wait until the background task is done ... # do stuff that depends on the background task if __name__ == "__main__": main() Suppose the main application depends on the parts of the background task. In this case, we need an event-driven design as the join cannot be called multiple times. import multiprocessing event = multiprocessing.Event() def task(): ... # synchronous operations event.set() # notify the main function that the first part of the task is done ... # synchronous operations event.set() # notify the main function that the second part of the task is also done ... # synchronous operations def main(): background_task = multiprocessing.Process(target=task) background_task.start() ... # do the rest stuff that does not depend on the background task event.wait() # wait until the first part of the background task is done ... # do stuff that depends on the first part of the background task event.wait() # wait until the second part of the background task is done ... # do stuff that depends on the second part of the background task background_task.join() # wait until the background task is finally done ... # do stuff that depends on the whole background task if __name__ == "__main__": main() As you already noticed with events we can just provide binary information and those are not effective if the processes are more than two (It will be impossible to know where the event was emitted from). So we use pipes, queues, and manager to provide non-binary information between the processes.
4
7
72,567,630
2022-6-9
https://stackoverflow.com/questions/72567630/different-approaches-for-applying-svm-in-keras
I want to build a multi-class classification model using Keras. My data contains 7 features and 4 labels. Using Keras, I have seen two ways to apply the Support Vector Machine (SVM) algorithm.

First: A Quasi-SVM in Keras
By using the (RandomFourierFeatures layer) presented here I have built the following model:

def create_keras_model():
    initializer = tf.keras.initializers.GlorotNormal()
    return tf.keras.models.Sequential([
        layers.Input(shape=(7,)),
        RandomFourierFeatures(output_dim=4822, kernel_initializer=initializer),
        layers.Dense(units=4, activation='softmax'),
    ])

Second: Using the last layer in the network as described here as follows:

def create_keras_model():
    return tf.keras.models.Sequential([
        tf.keras.layers.Input(shape=(7,)),
        tf.keras.layers.Dense(64),
        tf.keras.layers.Dense(4, kernel_regularizer=l2(0.01)),
        tf.keras.layers.Softmax()
    ])

note: CategoricalHinge() was used as the loss function.

My question is: are these approaches appropriate, and can they be described as applying an SVM model, or are they just an approximation of the model architecture? In short, can I say this is applying an SVM model?
You can check the two models on your data like below. I checked them on the mnist dataset and got the results below:

- Less overfitting with the second approach
- Faster training time with the first approach
- Fewer trainable params with the first approach
- Accuracy of the two approaches about the same

from keras.utils.layer_utils import count_params
import matplotlib.pyplot as plt
import tensorflow as tf
import seaborn as sns
import pandas as pd
import time


def create_model(approach):
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(784,)))
    if approach == 'Quasi_SVM':
        model.add(tf.keras.layers.experimental.RandomFourierFeatures(
            output_dim=4096, scale=10.0, kernel_initializer="gaussian"))
        model.add(tf.keras.layers.Dense(10))
    if approach == 'kernel_regularizer':
        model.add(tf.keras.layers.Dense(128, activation='relu'))
        model.add(tf.keras.layers.Dense(64, activation='relu'))
        model.add(tf.keras.layers.Dense(32, activation='relu'))
        model.add(tf.keras.layers.Dense(16, activation='relu'))
        model.add(tf.keras.layers.Dense(10,
                                        kernel_regularizer=tf.keras.regularizers.l2(0.01),
                                        activation='softmax'))
    model.compile(
        optimizer='adam',
        loss='hinge',
        metrics=['accuracy'],
    )
    return model


(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255
x_test = x_test.reshape(-1, 784).astype("float32") / 255
y_train = tf.keras.utils.to_categorical(y_train)
y_test = tf.keras.utils.to_categorical(y_test)

for approach in ['Quasi_SVM', 'kernel_regularizer']:
    model = create_model(approach)
    start = time.time()
    history = model.fit(x_train, y_train, epochs=30, batch_size=128, validation_split=0.2)
    print(f'Training time {approach} : {time.time() - start} sec')
    print(f'Trainable params {approach} : {count_params(model.trainable_weights)}')
    print(f'Accuracy on x_test {approach} : {model.evaluate(x_test, y_test, verbose=0)[1]}')

    df = pd.DataFrame(history.history).rename_axis('epoch').reset_index().melt(id_vars=['epoch'])
    fig, axes = plt.subplots(1, 2, figsize=(18, 6))
    for ax, mtr in zip(axes.flat, ['loss', 'accuracy']):
        ax.set_title(f'{approach} {mtr.title()} Plot')
        dfTmp = df[df['variable'].str.contains(mtr)]
        sns.lineplot(data=dfTmp, x='epoch', y='value', hue='variable', ax=ax)
    fig.tight_layout()
    plt.show()

Output: (benchmark on colab)

Training time Quasi_SVM : 43.78484082221985 sec
Trainable params Quasi_SVM : 40970
Accuracy on x_test Quasi_SVM : 0.9729999899864197

Training time kernel_regularizer : 45.47012114524841 sec
Trainable params kernel_regularizer : 111514
Accuracy on x_test kernel_regularizer : 0.972100019454956
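For reference, it may also help to compare these Keras approximations against a classical kernel SVM trained directly with scikit-learn. The snippet below is only a sketch: the synthetic X and y stand in for the asker's actual 7-feature, 4-label data.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# synthetic stand-in for the real 7-feature, 4-class dataset
X, y = make_classification(n_samples=5000, n_features=7, n_informative=5,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# RBF-kernel SVM as a classical baseline
svm = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=1.0))
svm.fit(X_train, y_train)
print('Kernel SVM accuracy:', svm.score(X_test, y_test))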
4
1
72,596,436
2022-6-12
https://stackoverflow.com/questions/72596436/how-to-perform-approximate-structural-pattern-matching-for-floats-and-complex
I've read about and understand floating point round-off issues such as:

>>> sum([0.1] * 10) == 1.0
False

>>> 1.1 + 2.2 == 3.3
False

>>> sin(radians(45)) == sqrt(2) / 2
False

I also know how to work around these issues with math.isclose() and cmath.isclose(). The question is how to apply those workarounds to Python's match/case statement. I would like this to work:

match 1.1 + 2.2:
    case 3.3:
        print('hit!')  # currently, this doesn't match
The key to the solution is to build a wrapper that overrides the __eq__ method and replaces it with an approximate match:

from cmath import isclose


class Approximately(complex):

    def __new__(cls, x, /, **kwargs):
        result = complex.__new__(cls, x)
        result.kwargs = kwargs
        return result

    def __eq__(self, other):
        try:
            return isclose(self, other, **self.kwargs)
        except TypeError:
            return NotImplemented

It creates approximate equality tests for both float values and complex values:

>>> Approximately(1.1 + 2.2) == 3.3
True
>>> Approximately(1.1 + 2.2, abs_tol=0.2) == 3.4
True
>>> Approximately(1.1j + 2.2j) == 0.0 + 3.3j
True

Here is how to use it in a match/case statement:

for x in [sum([0.1] * 10), 1.1 + 2.2, sin(radians(45))]:
    match Approximately(x):
        case 1.0:
            print(x, 'sums to about 1.0')
        case 3.3:
            print(x, 'sums to about 3.3')
        case 0.7071067811865475:
            print(x, 'is close to sqrt(2) / 2')
        case _:
            print('Mismatch')

This outputs:

0.9999999999999999 sums to about 1.0
3.3000000000000003 sums to about 3.3
0.7071067811865475 is close to sqrt(2) / 2
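A lighter-weight alternative, when there are only a few tolerant cases, is a capture pattern with a guard instead of a wrapper class. This is just a sketch of that style using math.isclose:

from math import isclose, radians, sin, sqrt

for x in [sum([0.1] * 10), 1.1 + 2.2, sin(radians(45))]:
    match x:
        case v if isclose(v, 1.0):
            print(x, 'sums to about 1.0')
        case v if isclose(v, 3.3):
            print(x, 'sums to about 3.3')
        case v if isclose(v, sqrt(2) / 2):
            print(x, 'is close to sqrt(2) / 2')
        case _:
            print('Mismatch')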
55
59
72,610,665
2022-6-14
https://stackoverflow.com/questions/72610665/in-the-latest-version-of-pytorch-what-is-best-practice-to-get-all-tensors-to-us
In pytorch, if I do something like

import torch

x = torch.randn(3)
y = x + 5

all tensors correspond to the "cpu" device by default. Is there some way to make it so that, by default, all tensors are on another device (e.g. "cuda:0")?

I know I can always be careful to add .cuda() or specify cuda whenever creating a tensor, but it would be great if I could just change the default device directly at the beginning of the program and be done with it, so that torch.randn(3) comes from the desired device without having to specify it every time.

Or would that be a bad thing to do for some reason? E.g. is there any reason I wouldn't want every tensor/operation to be done on cuda by default?
Pytorch has an optional function, set_default_tensor_type, to change the default type of tensor. Applying the default type in the main script:

>>> import torch
>>>
>>> if __name__ == '__main__':
...     cuda = torch.cuda.is_available()
...     if cuda:
...         torch.set_default_tensor_type('torch.cuda.FloatTensor')
...     a = torch.randn(3, 3)
...     print(a.device)
...
cuda:0

Or would that be a bad thing to do for some reason? E.g. is there any reason I wouldn't want every tensor/operation to be done on cuda by default?

I couldn't find any reference or document to answer this question. However, in my opinion, it's to avoid memory fragmentation in GPU memory. I'm not an expert, but the data in memory should be arranged in an efficient way; if not, the redundant space will cause OOM. That's why, by default, Tensorflow will take all of your GPU's memory no matter how many parameters your model has.

You can improve the space and speed just by setting the tensor shapes to multiples of 8, as noted in the amp documents:

In practice, higher performance is achieved when A and B dimensions are multiples of 8.

In conclusion, I think it's better to control the device of a tensor manually instead of setting the gpu as default.
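As a side note for newer releases: PyTorch 2.0 added torch.set_default_device, which changes the device used by factory functions such as torch.randn without also forcing a dtype. A minimal sketch, assuming a CUDA device is available:

import torch

if torch.cuda.is_available():
    torch.set_default_device('cuda')  # PyTorch >= 2.0

x = torch.randn(3)  # created on the CUDA device by default
y = x + 5
print(y.device)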
4
2
72,614,335
2022-6-14
https://stackoverflow.com/questions/72614335/how-to-share-initialize-and-close-aiohttp-clientsession-between-django-async-v
Django supports async views since version 3.1, so it's great for non-blocking calls to e.g. external HTTP APIs (using, for example, aiohttp).

I often see the following code sample, which I think is conceptually wrong (although it works perfectly fine):

import aiohttp
from django.http import HttpRequest, HttpResponse


async def view_bad_example1(request: HttpRequest):
    async with aiohttp.ClientSession() as session:
        async with session.get("https://example.com/") as example_response:
            response_text = await example_response.text()
    return HttpResponse(response_text[:42], content_type="text/plain")

This code creates a ClientSession for each incoming request, which is inefficient. aiohttp cannot then use e.g. connection pooling.

Don’t create a session per request. Most likely you need a session per application which performs all requests altogether.
Source: https://docs.aiohttp.org/en/stable/client_quickstart.html#make-a-request

The same applies to httpx:

On the other hand, a Client instance uses HTTP connection pooling. This means that when you make several requests to the same host, the Client will reuse the underlying TCP connection, instead of recreating one for every single request.
Source: https://www.python-httpx.org/advanced/#why-use-a-client

Is there any way to globally instantiate aiohttp.ClientSession in Django so that this instance can be shared across multiple requests? Don't forget that ClientSession must be created in a running eventloop (Why is creating a ClientSession outside of an event loop dangerous?), so we can't instantiate it e.g. in Django settings or as a module-level variable.

The closest I got is this code. However, I think this code is ugly and doesn't address e.g. closing the session.

CLIENT_SESSSION = None


async def view_bad_example2(request: HttpRequest):
    global CLIENT_SESSSION
    if not CLIENT_SESSSION:
        CLIENT_SESSSION = aiohttp.ClientSession()

    example_response = await CLIENT_SESSSION.get("https://example.com/")
    response_text = await example_response.text()
    return HttpResponse(response_text[:42], content_type="text/plain")

Basically I'm looking for the equivalent of Events from FastAPI that can be used to create/close some resource in an async context.

By the way here is a performance comparison using k6 between the two views:

view_bad_example1: avg=1.32s min=900.86ms med=1.14s max=2.22s p(90)=2s p(95)=2.1s
view_bad_example2: avg=930.82ms min=528.28ms med=814.31ms max=1.66s p(90)=1.41s p(95)=1.52s
Django doesn't implement the ASGI Lifespan protocol. Ref: https://github.com/django/django/pull/13636

Starlette does. FastAPI directly uses Starlette's implementation of event handlers.

Here's how you can achieve that with Django:

Implement the ASGI Lifespan protocol in a subclass of Django's ASGIHandler.

import django
from django.core.asgi import ASGIHandler


class MyASGIHandler(ASGIHandler):
    def __init__(self):
        super().__init__()
        self.on_shutdown = []

    async def __call__(self, scope, receive, send):
        if scope['type'] == 'lifespan':
            while True:
                message = await receive()
                if message['type'] == 'lifespan.startup':
                    # Do some startup here!
                    await send({'type': 'lifespan.startup.complete'})
                elif message['type'] == 'lifespan.shutdown':
                    # Do some shutdown here!
                    await self.shutdown()
                    await send({'type': 'lifespan.shutdown.complete'})
                    return
        await super().__call__(scope, receive, send)

    async def shutdown(self):
        for handler in self.on_shutdown:
            if asyncio.iscoroutinefunction(handler):
                await handler()
            else:
                handler()


def my_get_asgi_application():
    django.setup(set_prefix=False)
    return MyASGIHandler()

Replace the application in asgi.py.

# application = get_asgi_application()
application = my_get_asgi_application()

Implement a helper get_client_session to share the instance:

import asyncio
import aiohttp

from .asgi import application

CLIENT_SESSSION = None

_lock = asyncio.Lock()


async def get_client_session():
    global CLIENT_SESSSION
    async with _lock:
        if not CLIENT_SESSSION:
            CLIENT_SESSSION = aiohttp.ClientSession()
            application.on_shutdown.append(CLIENT_SESSSION.close)
    return CLIENT_SESSSION

Usage:

async def view(request: HttpRequest):
    session = await get_client_session()
    example_response = await session.get("https://example.com/")
    response_text = await example_response.text()
    return HttpResponse(response_text[:42], content_type="text/plain")
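The same lifespan hook can be reused for other clients too. For example, here is a sketch of an equivalent helper for httpx (mentioned in the question), assuming the same MyASGIHandler/application from above:

import asyncio

import httpx

from .asgi import application

HTTPX_CLIENT = None

_httpx_lock = asyncio.Lock()


async def get_httpx_client():
    global HTTPX_CLIENT
    async with _httpx_lock:
        if HTTPX_CLIENT is None:
            HTTPX_CLIENT = httpx.AsyncClient()
            # aclose() is a coroutine function, which the shutdown handler above awaits
            application.on_shutdown.append(HTTPX_CLIENT.aclose)
    return HTTPX_CLIENT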
4
6
72,561,628
2022-6-9
https://stackoverflow.com/questions/72561628/why-such-a-big-pickle-of-a-sklearn-decision-tree-30k-times-bigger
Why can pickling a sklearn decision tree generate a pickle thousands of times bigger (in terms of memory) than the original estimator?

I ran into this issue at work where a random forest estimator (with 100 decision trees) over a dataset with around 1_000_000 samples and 7 features generated a pickle bigger than 2GB.

I was able to track down the issue to the pickling of a single decision tree and I was able to replicate the issue with a generated dataset as below.

For memory estimations I used the pympler library. The sklearn version used is 1.0.1.

# here using a regressor tree but I would expect the same issue to be present with a classification tree
import pickle
from sklearn.tree import DecisionTreeRegressor
from sklearn.datasets import make_friedman1  # using a dataset generation function from sklearn
from pympler import asizeof


# function that creates the dataset and trains the estimator
def make_example(n_samples: int):
    X, y = make_friedman1(n_samples=n_samples, n_features=7, noise=1.0, random_state=49)
    estimator = DecisionTreeRegressor(max_depth=50, max_features='auto', min_samples_split=5)
    estimator.fit(X, y)
    return X, y, estimator


# utilities to compute and compare the size of an object and its pickled version
def readable_size(size_in_bytes: int, suffix='B') -> str:
    num = size_in_bytes
    for unit in ['', 'k', 'M', 'G', 'T', 'P', 'E', 'Z']:
        if abs(num) < 1024.0:
            return "%3.1f %s%s" % (num, unit, suffix)
        num /= 1024.0
    return "%.1f%s%s" % (num, 'Yi', suffix)


def print_size(obj, skip_detail=False):
    obj_size = asizeof.asized(obj).size
    print(readable_size(obj_size))
    return obj_size


def compare_with_pickle(obj):
    size_obj = print_size(obj)
    size_pickle = print_size(pickle.dumps(obj))
    print(f"Ratio pickle/obj: {(size_pickle / size_obj):.2f}")


_, _, model100K = make_example(100_000)
compare_with_pickle(model100K)
_, _, model1M = make_example(1_000_000)
compare_with_pickle(model1M)

output:

1.7 kB
4.9 MB
Ratio pickle/obj: 2876.22
1.7 kB
49.3 MB
Ratio pickle/obj: 28982.84
As pointed out by @pygeek's answer and subsequent comments, the wrong assumption of the question is that the pickle is increasing the size of the object substantially. Instead, the issue lies with pympler.asizeof, which is not giving the correct estimate of the tree object.

Indeed the DecisionTreeRegressor object has a tree_ attribute that has a number of arrays of length tree_.node_count. Using help(sklearn.tree._tree.Tree) we can see that there are 8 such arrays (values, children_left, children_right, feature, impurity, threshold, n_node_samples, weighted_n_node_samples) and the underlying type of every array (except possibly the values array, see note below) should be an underlying 64 bit integer or 64 bit float (the underlying Tree object is a cython object), so a better estimate of the size of a DecisionTree is estimator.tree_.node_count*8*8.

Computing this estimate for the models above:

def print_tree_estimate(tree):
    print(f"A tree with max_depth {tree.max_depth} can have up to {2**(tree.max_depth -1)} nodes")
    print(f"This tree has node_count {tree.node_count} and a size estimate is {readable_size(tree.node_count*8*8)}")


print_tree_estimate(model100K.tree_)
print()
print_tree_estimate(model1M.tree_)

gives as output:

A tree with max_depth 37 can have up to 68719476736 nodes
This tree has node_count 80159 and a size estimate is 4.9 MB

A tree with max_depth 46 can have up to 35184372088832 nodes
This tree has node_count 807881 and a size estimate is 49.3 MB

and indeed these estimates are in line with the sizes of the pickle objects.

Further note that the only way to be sure to bound the size of a DecisionTree is to bound max_depth, since a binary tree can have a maximum number of nodes that is bounded by 2**(max_depth - 1), but the specific tree realizations above have a number of nodes well below this theoretical bound.

note: the above estimate is valid for this decision tree regressor which has a single output and no classes. estimator.tree_.values is an array of shape [node_count, n_outputs, max_n_classes], so for n_outputs > 1 and/or max_n_classes > 1 the size estimate would need to take those into account and the correct estimate would be estimator.tree_.node_count*8*(7 + n_outputs*max_n_classes)
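As a practical follow-on, the pickle can be made smaller by limiting tree growth (which directly bounds node_count), and the serialized bytes also tend to compress well. A rough, self-contained sketch of both ideas; the exact sizes will vary with the data and parameters:

import gzip
import pickle

from sklearn.datasets import make_friedman1
from sklearn.tree import DecisionTreeRegressor

X, y = make_friedman1(n_samples=100_000, n_features=7, noise=1.0, random_state=49)

# bounding max_depth (or raising min_samples_leaf) directly bounds node_count
shallow = DecisionTreeRegressor(max_depth=10, min_samples_split=5).fit(X, y)

raw = pickle.dumps(shallow)
print("pickle size:", len(raw), "bytes")
print("gzipped size:", len(gzip.compress(raw)), "bytes")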
7
1
72,598,852
2022-6-13
https://stackoverflow.com/questions/72598852/getcacheentry-failed-cache-service-responded-with-503
I am trying to check the lint in a GitHub Action. My GitHub Action steps are as below:

lint:
  name: Lint
  runs-on: ubuntu-latest
  steps:
    - name: Checkout
      uses: actions/checkout@v3
    - name: Set up Python
      uses: actions/setup-python@v4
      with:
        python-version-file: '.python-version'
        cache: 'pip'
        cache-dependency-path: 'requirements.txt'

Error screenshot attached below. Could you please help me fix this?
lint:
  name: Lint
  runs-on: ubuntu-latest
  steps:
    - name: Checkout
      uses: actions/checkout@v3
    - name: Set up Python
      uses: actions/setup-python@v4
      with:
        python-version-file: '.python-version'
    - name: Cache dependencies
      uses: actions/cache@v3
      with:
        path: ~/.cache/pip
        key: ${{ runner.os }}-pip-${{ hashFiles('requirements.txt') }}
        restore-keys: |
          ${{ runner.os }}-pip-
          ${{ runner.os }}-

I faced the same problem. It happens when the cache service is not responding (internal server errors and the like). You can use actions/cache@v3 instead of the automatic caching provided by actions/setup-python via cache: 'pip', because actions/cache does the same thing but only emits a warning on a cache service error.
26
2
72,618,062
2022-6-14
https://stackoverflow.com/questions/72618062/pypdf-does-not-read-the-pdf-text-line-by-line
I was using PyPDF to read text from a pdf file. However, PyPDF does not read the text in the pdf line by line. It reads it in some haphazard manner, putting a new line somewhere even when one is not present in the pdf.

import PyPDF2

pdf_path = r'C:\Users\PDFExample\Desktop\Temp\sample.pdf'
pdfFileObj = open(pdf_path, 'rb')
pdfReader = PyPDF2.PdfFileReader(pdfFileObj)
page_nos = pdfReader.numPages

for i in range(page_nos):
    # Creating a page object
    pageObj = pdfReader.getPage(i)
    # Printing Page Number
    print("Page No: ", i)
    # Extracting text from page
    # And splitting it into chunks of lines
    text = pageObj.extractText().split(" ")
    # Finally the lines are stored into list
    # For iterating over list a loop is used
    for i in range(len(text)):
        # Printing the line
        # Lines are separated using "\n"
        print(text[i], end="\n\n")
    print()

This gives me content as:

Our Ref : 21 1 8 88 1 11 5 Name: S ky Blue Ref 1 : 1 2 - 34 - 56789 - 2021/2 Ref 2: F2021004 444 Amount: $ 1 00 . 11 ...

Whereas expected was:

Our Ref :2118881115
Name: Sky Blue
Ref 1 :12-34-56789-2021/2
Ref 2:F2021004444
Amount: $100.11
Total Paid:$0.00
Balance: $100.11
Date of A/C: 01/08/2021
Date Received: 10/12/2021
Last Paid:
Amt Last Paid:
A/C Status: CLOSED
Collector : Sunny Jane

Here is the link to the pdf file: https://pdfhost.io/v/eCiktZR2d_sample2
I tried a different package called pdfplumber. It was able to read the pdf line by line in exactly the way I wanted.

1. Install the package pdfplumber

pip install pdfplumber

2. Get the text and store it in some container

import pdfplumber

pdf_text = None

with pdfplumber.open(pdf_path) as pdf:
    first_page = pdf.pages[0]
    pdf_text = first_page.extract_text()
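If, as in the original snippet, every page is needed rather than just the first one, the same idea extends to a loop over pdf.pages. A small sketch, assuming pdf_path points at the same file:

import pdfplumber

with pdfplumber.open(pdf_path) as pdf:
    for page_no, page in enumerate(pdf.pages):
        print("Page No:", page_no)
        # extract_text() can return None for pages without text
        text = page.extract_text() or ""
        for line in text.split("\n"):
            print(line)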
4
5
72,622,309
2022-6-14
https://stackoverflow.com/questions/72622309/calculating-the-averages-of-elements-in-one-array-based-on-data-in-another-array
I need to average the Y values corresponding to the values in the X array...

X=np.array([ 1, 1, 2, 2, 2, 2, 3, 3 ... ])
Y=np.array([ 10, 30, 15, 10, 16, 10, 15, 20 ... ])

In other words, the equivalents of the 1 values in the X array are 10 and 30 in the Y array, and the average of this is 20, the equivalents of the 2 values are 15, 10, 16, and 10, and their average is 12.75, and so on...

How can I calculate these average values?
import numpy as np

X = np.array([ 1, 1, 2, 2, 2, 2, 3, 3])
Y = np.array([ 10, 30, 15, 10, 16, 10, 15, 20])

# Only unique values
unique_vals = np.unique(X)

# Loop over every unique value
for val in unique_vals:
    # Find the matching indexes in X
    idx = np.where(X == val)
    # Mean of Y at the found indexes
    aver = np.mean(Y[idx])
    print(f"Average for {val}: {aver}")

Result:

Average for 1: 20.0
Average for 2: 12.75
Average for 3: 17.5
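For larger arrays, the same result can be computed without a Python-level loop by combining np.unique with np.bincount. A sketch of that vectorized variant:

import numpy as np

X = np.array([1, 1, 2, 2, 2, 2, 3, 3])
Y = np.array([10, 30, 15, 10, 16, 10, 15, 20])

# Map every element of X to the index of its unique value
unique_vals, inverse = np.unique(X, return_inverse=True)

# Per-group sum of Y divided by the per-group counts
sums = np.bincount(inverse, weights=Y)
counts = np.bincount(inverse)
averages = sums / counts

for val, avg in zip(unique_vals, averages):
    print(f"Average for {val}: {avg}")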
3
2