Dataset schema (value ranges observed across the rows below):
question_id (int64): 59.5M to 79.7M
creation_date (string date): 2020-01-01 00:00:00 to 2025-06-13 00:00:00
link (string): 60 to 163 characters
question (string): 53 to 28.9k characters
accepted_answer (string): 26 to 29.3k characters
question_vote (int64): 1 to 410
answer_vote (int64): -9 to 482
78,446,008
2024-5-8
https://stackoverflow.com/questions/78446008/altair-boxplot-border-to-be-set-black-and-median-line-red
I want to set a black border color to the boxplot of a altair graph, i try to add stroke parameter to black on the encoding chanel but this overrided my red median line to black. This is the code I am trying: def plot_hourly_boxplot_altair(data, column, session_state=None): # Convert 'fecha' column to datetime format data['fecha'] = pd.to_datetime(data['fecha']) # Filter out rows where the specified column has NaN values data = data.dropna(subset=[column]) if session_state and not session_state.zero_values: # Erase 0 values from the data data = data[data[column] != 0] # filter the data to just get the date range selected data = range_selector(data, min_date=session_state.min_date, max_date=session_state.max_date) # filter the data to just get the days of the week selected if session_state.days: data = data[data['fecha'].dt.dayofweek.isin(session_state.days)] if data.empty: print(f"No valid data for column '{column}'.") return None # Create a boxplot using Altair with x axis as the hour of the day on 24 h format and # y axis as the demand that is on the data[column] data['fecha'] = data['fecha'].dt.strftime('%Y-%m-%dT%H:%M:%S') boxplot = alt.Chart(data).mark_boxplot(size = 23,median={'color': 'red'}).encode( x=alt.X('hours(fecha):N', title='Hora', axis=alt.Axis(format='%H'), sort='ascending'), y=alt.Y(f'{column}:Q', title='Demanda [kW]'), stroke = alt.value('black'), # Set thke color of the boxplot strokeWidth=alt.value(1), # Set the width of the boxplot # color=alt.value('#4C72B0'), # Set the color of the boxplot color=alt.value('#2d667a'), # Set the color of the bars opacity=alt.value(1), # Set the opacity of the bars tooltip=[alt.Tooltip('hours(fecha):N', title='Hora')] # Customize the tooltip ) chart = (boxplot).properties( width=600, # Set the width of the chart height=600, # Set the height of the chart title=(f'Boxplot de demanda de potencia {column}') # Remove date from title ).configure_axis( labelFontSize=12, # Set the font size of axis labels titleFontSize=14, # Set the font size of axis titles grid=True, # color of labels of x-axis and y-axis is black labelColor='black', # x-axis and y-axis titles are bold titleFontWeight='bold', # color of x-axis and y-axis titles is black titleColor='black', gridColor='#4C72B0', # Set the color of grid lines gridOpacity=0.2 # Set the opacity of grid lines ).configure_view( strokeWidth=0, # Remove the border of the chart fill='#FFFFFF' # Set background color to white ) return chart # Enable zooming and panning and this is my result: I tryed conditional stroke with this code: stroke=alt.condition( alt.datum._argmax == 'q3', # condition for the stroke color (for the box part) alt.value('black'), # color for the stroke alt.value('red') # color for the median line ), but got median and border red as seen here: how can i achieve my objective? i.e a red median line and black border. I also saw this note on the altair documentation Note: The stroke encoding has higher precedence than color, thus may override the color encoding if conflicting encodings are specified. is there any way to achieve this?
You can set the properties of the box components inside mark_boxplot as mentioned here in the docs, rather than via the encoding: import altair as alt from vega_datasets import data source = data.cars() alt.Chart(source).mark_boxplot( color='lightblue', box={'stroke': 'black'}, # Could have used MarkConfig instead median=alt.MarkConfig(stroke='red'), # Could have used a dict instead ).encode( alt.X("Miles_per_Gallon:Q").scale(zero=False), alt.Y("Origin:N"), ) The advantage of using MarkConfig instead of a dict is that you can view all the available parameter names in the help popup.
3
3
78,448,761
2024-5-8
https://stackoverflow.com/questions/78448761/how-can-i-observe-the-intermediate-process-of-cv2-erode
I've been observing the results when I apply cv2.erode() with different kernel values. In the code below, it is (3, 3), but it is changed to various ways such as (1, 3) or (5, 1). The reason for this observation is to understand kernel. I understand it in theory. And through practice, I can see what kind of results I get. But I want to go a little deeper. I would like to see what happens every time the pixel targeted by kernel changes. It's okay if you have thousands of images stored. How can I observe the intermediate process of cv2.erode()? Am I asking too much? image = cv2.imread(file_path, cv2.IMREAD_GRAYSCALE) _, thresholded_image = cv2.threshold(image, 127, 255, cv2.THRESH_BINARY) inverted_image = cv2.bitwise_not(thresholded_image) kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)) eroded_image = cv2.erode(inverted_image, kernel, iterations=5) cv2.imshow('image', eroded_image) cv2.waitKey(0) cv2.destroyAllWindows()
When you call cv2.erode from Python, it eventually comes down to a single C++ API call to cv::erode. As you can see in the documentation, this API does not support inspecting intermediate results of the process, so they are not available from the Python wrapper either. The only way to achieve what you want with OpenCV itself is to download its C++ source code (it is open source), change it to expose intermediate results (e.g. by adding callbacks or additional output images), compile it into a library, and wrap it for Python. Keep in mind, however, that doing so is far from trivial.
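If recompiling OpenCV is not an option, one alternative (not part of the answer above) is to emulate a single erosion pass yourself in NumPy and dump snapshots as pixels are processed. This is only a minimal sketch, not OpenCV's internal implementation; the file name "input.png" and the snapshot interval are arbitrary placeholders, and the border handling is a simplification that may differ slightly from cv2.erode's default:

import cv2
import numpy as np

image = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # hypothetical path
_, thresholded = cv2.threshold(image, 127, 255, cv2.THRESH_BINARY)
inverted = cv2.bitwise_not(thresholded)

kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)).astype(bool)
kh, kw = kernel.shape
pad_y, pad_x = kh // 2, kw // 2

# Read from a padded copy of the input and write into a separate output:
# erosion is defined on the original image, not on already-updated pixels.
padded = cv2.copyMakeBorder(inverted, pad_y, pad_y, pad_x, pad_x, cv2.BORDER_REPLICATE)
result = inverted.copy()

snapshot_every = 5000  # save an image every N processed pixels (arbitrary choice)
count = 0
for y in range(inverted.shape[0]):
    for x in range(inverted.shape[1]):
        window = padded[y:y + kh, x:x + kw]
        result[y, x] = window[kernel].min()  # erosion = minimum under the kernel
        count += 1
        if count % snapshot_every == 0:
            cv2.imwrite(f"erode_step_{count:08d}.png", result)

cv2.imwrite("erode_one_iteration.png", result)

This covers one iteration; feeding the result back in and repeating the loop approximates iterations=5.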
3
3
78,447,053
2024-5-8
https://stackoverflow.com/questions/78447053/create-multiple-columns-from-a-single-column-and-group-by-pandas
work = pd.DataFrame({"JOB": ['JOB01', 'JOB01', 'JOB02', 'JOB02', 'JOB03', 'JOB03'],
                     "STATUS": ['ON_ORDER', 'ACTIVE', 'TO_BE_ALLOCATED', 'ON_ORDER', 'ACTIVE', 'TO_BE_ALLOCATED'],
                     "PART": ['PART01', 'PART02', 'PART03', 'PART04', 'PART05', 'PART06']})
How can I use Pandas to group by JOB, split STATUS into columns based on its values, and concatenate the PART field per JOB?
Desired output:
JOB   | ON_ORDER | ACTIVE | TO_BE_ALLOCATED | PART_CON
JOB01 | True     | True   | False           | Part01\nPart02
JOB02 | True     | False  | True            | Part03\nPart04
JOB03 | False    | True   | True            | Part05\nPart06
Try: x = df.groupby("JOB")["PART"].agg(", ".join).rename("PART_CON") y = pd.crosstab(df["JOB"], df["STATUS"]).astype(bool) print(pd.concat([y, x], axis=1).reset_index()) Prints: JOB ACTIVE ON_ORDER TO_BE_ALLOCATED PART_CON 0 JOB01 True True False PART01, PART02 1 JOB02 False True True PART03, PART04 2 JOB03 True False True PART05, PART06
5
5
78,445,577
2024-5-8
https://stackoverflow.com/questions/78445577/polars-select-multiple-element-wise-products
Suppose I have the following dataframe: the_df = pl.DataFrame({'x1': [1,1,1], 'x2': [2,2,2], 'y1': [1,1,1], 'y2': [2,2,2]}) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ x1 ┆ x2 ┆ y1 ┆ y2 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════β•ͺ═════║ β”‚ 1 ┆ 2 ┆ 1 ┆ 2 β”‚ β”‚ 1 ┆ 2 ┆ 1 ┆ 2 β”‚ β”‚ 1 ┆ 2 ┆ 1 ┆ 2 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ And and two lists, xs = ['x1', 'x2'], ys = ['y1', 'y2']. Is there a good way to add the products between x1/y1 and x2/y2 using .select()? So the result should look like the following. Specifically, I want to use the lists rather than writing out z1=x1*y1, z2=x2*y2 (the real data has more terms I want to multiply). β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ x1 ┆ x2 ┆ y1 ┆ y2 ┆ z1 ┆ z2 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════β•ͺ═════β•ͺ═════β•ͺ═════║ β”‚ 1 ┆ 2 ┆ 1 ┆ 2 ┆ 1 ┆ 4 β”‚ β”‚ 1 ┆ 2 ┆ 1 ┆ 2 ┆ 1 ┆ 4 β”‚ β”‚ 1 ┆ 2 ┆ 1 ┆ 2 ┆ 1 ┆ 4 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
you can do something like this: zs = ['z1','z2'] df.with_columns( (pl.col(xc) * pl.col(yc)).alias(zc) for xc, yc, zc in zip(xs, ys, zs) ) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ x1 ┆ x2 ┆ y1 ┆ y2 ┆ z1 ┆ z2 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════β•ͺ═════β•ͺ═════β•ͺ═════║ β”‚ 1 ┆ 2 ┆ 1 ┆ 2 ┆ 1 ┆ 4 β”‚ β”‚ 1 ┆ 2 ┆ 1 ┆ 2 ┆ 1 ┆ 4 β”‚ β”‚ 1 ┆ 2 ┆ 1 ┆ 2 ┆ 1 ┆ 4 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
4
4
78,446,065
2024-5-8
https://stackoverflow.com/questions/78446065/socket-gaierror-errno-2-name-or-service-not-known-firebase-x-raspberry-pi
I am using a Python program to click a picture on my Raspberry Pi 3B+ when motion is detected and send this image to firebase storage. import RPi.GPIO as GPIO import gpiozero import datetime import picamera import time import os import pyrebase firebase_config = { "apiKey": "...", "authDomain": "x.firebaseapp.com", "databaseURL": "https://x.firebaseio.com", "projectId": "...", "storageBucket": "x.appspot.com", "messagingSenderId": "...", "appId": "..." } firebase = pyrebase.initialize_app(firebase_config) storage = firebase.storage() # Camera config camera = picamera.PiCamera() # Motion sensor pir = gpiozero.MotionSensor(4) print("Waiting for motion") pir.wait_for_motion() print(f"Motion detected") filename = datetime.datetime.now().strftime("%d%m%y%H%M%S")+".jpg" print(filename) camera.capture(filename) print(f"{filename} saved") storage.child(filename).put(filename) print("Image sent to firebase") os.remove(name) sleep(5) The picture gets clicked and saved on Pi but does not get sent to Firebase storage due to the following error: Waiting for motion Motion detected 080524135451.jpg 080524135451.jpg saved Traceback (most recent call last): File "/usr/local/lib/python3.9/dist-packages/requests/packages/urllib3/connection.py", line 141, in _new_conn conn = connection.create_connection( File "/usr/local/lib/python3.9/dist-packages/requests/packages/urllib3/util/connection.py", line 75, in create_connection for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): File "/usr/lib/python3.9/socket.py", line 953, in getaddrinfo for res in _socket.getaddrinfo(host, port, family, type, proto, flags): socket.gaierror: [Errno -2] Name or service not known During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/raspi/Desktop/firebase.py", line 42, in <module> storage.child(filename).put(filename) File "/usr/local/lib/python3.9/dist-packages/pyrebase/pyrebase.py", line 405, in put request_object = self.requests.post(request_ref, data=file_object) File "/usr/local/lib/python3.9/dist-packages/requests/sessions.py", line 522, in post return self.request('POST', url, data=data, json=json, **kwargs) File "/usr/local/lib/python3.9/dist-packages/requests/sessions.py", line 475, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python3.9/dist-packages/requests/sessions.py", line 596, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python3.9/dist-packages/requests/adapters.py", line 487, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPSConnectionPool(host='firebasestorage.googleapis.com', port=443): Max retries exceeded with url: /v0/b/devilberry0.appspot.com/o?name=080524135451.jpg (Caused by NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x72df5268>: Failed to establish a new connection: [Errno -2] Name or service not known')) I tried the same task on my local computer using an existing photo and it worked => the account settings are fine.
Since you have been able to successfully transfer data using your local computer, this is purely a Pi and/or network issue. The error "socket. gaierror: [Errno -2] Name or service not known" typically points to a DNS resolution issue. This error indicates that the hostname used in your application cannot be resolved to an IP address. With that said: Recheck your internet connection. Ensure that if you're on restricted network (like office/school wifi), your Raspberry Pi is registered to it. Hope this helps!
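As a quick way to confirm this diagnosis on the Pi itself, you could run a small resolution check before attempting the upload. This is a minimal sketch; the hostname is the one shown in the traceback:

import socket

host = "firebasestorage.googleapis.com"
try:
    infos = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
    print(f"{host} resolves to:", sorted({info[4][0] for info in infos}))
except socket.gaierror as exc:
    print(f"DNS lookup for {host} failed: {exc}")
    print("Check the Pi's network connection, /etc/resolv.conf, or try another DNS server such as 8.8.8.8.")

If this lookup fails while pinging 8.8.8.8 works, the problem is DNS configuration rather than basic connectivity.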
2
2
78,445,852
2024-5-8
https://stackoverflow.com/questions/78445852/django-annotation-convert-time-difference-to-whole-number-or-decimal
I am trying to sum the hours a user spent per day. I want to convert this into a whole number or decimal representing hours (e.g. 2, 6.5), to easily plot it on my chart, but the result of the code below is in HH:mm:ss format. Can anyone help me with this?
day = Clocking.objects.filter(clockout__isnull=False, user=nid).annotate(date=TruncDate('clockin')) \
    .values('date').annotate(total=Sum(F('clockout') - F('clockin'))).values('total')
Here is my models.py:
class Clocking(models.Model):
    user = models.ForeignKey('User', models.DO_NOTHING)
    clockin = models.DateTimeField()
    clockout = models.DateTimeField(null=True)
Based on Func, using the SQL functions TIMEDIFF and then TIME_TO_SEC will return the total in seconds:
day = Clocking.objects.filter(clockout__isnull=False, user=nid).annotate(date=TruncDate('clockin')) \
    .values('date').annotate(
        total=Sum(
            Func(
                Func(F('clockout'), F('clockin'), function='TIMEDIFF'),
                function='TIME_TO_SEC')
        )).values('total')
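To get decimal hours for the chart, one option (not part of the answer above, just a minimal sketch) is to divide the summed seconds by 3600 on the Python side, assuming day is the queryset built above and each row is a dict like {'total': <seconds>}:

hours = [round(row["total"] / 3600, 2) for row in day if row["total"] is not None]
# e.g. [2.0, 6.5, ...] -- ready to plot

Note that TIMEDIFF and TIME_TO_SEC are MySQL/MariaDB functions, so a different conversion would be needed on another database backend.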
3
2
78,436,539
2024-5-6
https://stackoverflow.com/questions/78436539/superimpose-plot-with-background-image-chart
I am trying to use an existing graph as a background for new data that I want to plot on top of the graph. I have been able to do so when using a graph with all information contained within the axes and using the extent parameter of plt.imshow because then I just have to scale the image. I would like to scale and shift the background graph. Replotting the background is not an option in the real use case. Here is what I tried so far : Generation of a background graph (reproducible example) import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.plot([0, 5, 10], [8, 5, 12]) ax.set_xlim(0, 20) ax.set_ylim(0, 15) ax.set_title('Background graph') fig.show() fig.savefig('bg_graph.png') Use plt.imshow() to add the background graph and then superimpose my data. bg_img = plt.imread('bg_graph.png') fig, ax = plt.subplots() ax.imshow(bg_img, extent=[0,50,0,50]) ax.scatter([4.9, 5.2], [7, 4.9]) fig.show() fig.savefig('result.png') I have made a mockup of the expected result using Excel : Is there a method to stretch a new graph onto existing axis (from an image) in order to plot new pieces of data ? I assume that the coordinates of the axis in the image are known or can be guessed through trial an error. One way to rephrase this is to say that I would like to stretch the new plot to the image and not the other way around.
We can follow this answer to a related question and adapt it to your needs (see code comments for explanations): import matplotlib.pyplot as plt bg_img = plt.imread('stackoverflow/bg.png') # TODO: Adjust as necessary bg_width, bg_xlim, bg_ylim = 6, (0, 20), (0, 15) # Create a figure with the same aspect ratio and scale as the image. # This provides the axes in which we will plot our new data figsize = (bg_width, bg_width * bg_img.shape[0] / bg_img.shape[1]) fig, axes = plt.subplots(nrows=1, ncols=1, figsize=figsize) axes.patch.set_alpha(0.0) # Make new figure's area transparent axes.set_xlim(*bg_xlim) # Adjust limits to background's limits axes.set_ylim(*bg_ylim) axes.scatter([4.9, 5.2], [7, 4.9], color='red') # Plot our new data # Optionally, turn off axes, as we already have them from # the background and they will not match perfectly: plt.axis('off') background_ax = plt.axes([0, 0, 1, 1]) # Create dummy subplot for background background_ax.set_zorder(-1) # Set background subplot behind the other background_ax.imshow(bg_img, aspect='auto') # Show background image plt.axis('off') # Turn off axes that surround the background For me, using the background image that you shared and loading it as bg.png results in the following plot: What if adjusting the whitespace is necessary? Luckily, the layout of the whitespace in your background image seems to match Matplotlib's defaults. If that was not the case, however, we could use subplots_adjust() on the foreground plot, together with a bit of trial and error, to make the axes of the foreground plot and background image align as perfectly as possible. In this case, I would initially leave the axes of the foreground plot turned on (and thus comment out the first plt.axis('off') in the code above) to make adjustments easier. To demonstrate this, I created a version of your background image with additional green padding (called bg_padded.png in the code below), which looks as follows: I then adjusted the code from above as follows: import matplotlib.pyplot as plt bg_img = plt.imread('stackoverflow/bg_padded.png') # TODO: Adjust as necessary bg_width, bg_xlim, bg_ylim = 7.5, (0, 20), (0, 15) # Create a figure with the same aspect ratio and scale as the image. # This provides the axes in which we will plot our new data figsize = (bg_width, bg_width * bg_img.shape[0] / bg_img.shape[1]) fig, axes = plt.subplots(nrows=1, ncols=1, figsize=figsize) # Adjust padding of foreground plot to padding of background image plt.subplots_adjust(left=.2, right=.82, top=.805, bottom=0.19) axes.patch.set_alpha(0.0) # Make new figure's area transparent axes.set_xlim(*bg_xlim) # Adjust limits to background's limits axes.set_ylim(*bg_ylim) axes.scatter([4.9, 5.2], [7, 4.9], color='red') # Plot our new data # Optionally, turn off axes, as we already have them from # the background and they will not match perfectly: # plt.axis('off') background_ax = plt.axes([0, 0, 1, 1]) # Create dummy subplot for background background_ax.set_zorder(-1) # Set background subplot behind the other background_ax.imshow(bg_img, aspect='auto') # Show background image plt.axis('off') # Turn off axes that surround the background Changes are: I loaded bg_padded.png rather than bg.png (obviously); I changed bg_width to 7.5 to account for the increased size of the background image and, with it, for the relative decrease in size (e.g. of the fonts) in the foreground plot; I added the line plt.subplots_adjust(left=.2, right=.82, top=.805, bottom=0.19) to adjust for the padding. 
This time, I also left the first plt.axis('off') commented out, as mentioned above, to see and to show how well the axes of the background image and the foreground plot match. The result looks as follows:
2
3
78,423,306
2024-5-3
https://stackoverflow.com/questions/78423306/how-to-asynchronously-run-matplolib-server-side-with-a-timeout-the-process-hang
I'm trying to reproduce the ChatGPT code interpreter feature where a LLM create figures on demand by executing to code. Unfortunately Matplotlib hangs 20% of time, I have not managed to understand why. I would like the implementation: to be non-blocking for the rest of the server to have a timeout in case the code is too long to execute I made a first implementation: import asyncio import psutil TIMEOUT = 5 async def exec_python(code: str) -> str: """Execute Python code. Args: code (str): Python code to execute. Returns: dict: A dictionary containing the stdout and the stderr from executing the code. """ code = preprocess_code(code) stdout = "" stderr = "" try: stdout, stderr = await run_with_timeout(code, TIMEOUT) except asyncio.TimeoutError: stderr = "Execution timed out." return {"stdout": stdout, "stderr": stderr} async def run_with_timeout(code: str, timeout: int) -> str: proc = await run(code) try: stdout, stderr = await asyncio.wait_for(proc.communicate(), timeout=timeout) return stdout.decode().strip(), stderr.decode().strip() except asyncio.TimeoutError: kill_process(proc.pid) raise async def run(code: str): return await asyncio.create_subprocess_exec( "python", "-c", code, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE, ) def kill_process(pid: int): try: parent = psutil.Process(pid) for child in parent.children(recursive=True): child.kill() parent.kill() print(f"Killing Process {pid} (timed out)") except psutil.NoSuchProcess: print("Process already killed.") PLT_OVERRIDE_PREFIX = """ import matplotlib matplotlib.use('Agg') # non-interactive backend import asyncio import matplotlib.pyplot as plt import io import base64 def custom_show(): buf = io.BytesIO() plt.gcf().savefig(buf, format='png') buf.seek(0) image_base64 = base64.b64encode(buf.getvalue()).decode('utf-8') print('[BASE_64_IMG]', image_base64) buf.close() # Close the buffer plt.close('all') matplotlib.pyplot.figure() # Create a new figure matplotlib.pyplot.close('all') # Close it to ensure the state is clean matplotlib.pyplot.cla() # Clear the current axes matplotlib.pyplot.clf() # Clear the current figure matplotlib.pyplot.close() # Close the current figure plt.show = custom_show """ def preprocess_code(code: str) -> str: override_prefix = "" code_lines = code.strip().split("\n") if not code_lines: return code # Return original code if it's empty if "import matplotlib.pyplot as plt" in code: override_prefix = PLT_OVERRIDE_PREFIX + "\n" code_lines = [ line for line in code_lines if line != "import matplotlib.pyplot as plt" ] last_line = code_lines[-1] # Check if the last line is already a print statement if last_line.strip().startswith("print"): return "\n".join(code_lines) try: compile(last_line, "<string>", "eval") # If it's a valid expression, wrap it with print code_lines[-1] = f"print({last_line})" except SyntaxError: # If it's not an expression, check if it's an assignment if "=" in last_line: variable_name = last_line.split("=")[0].strip() code_lines.append(f"print({variable_name})") return override_prefix + "\n".join(code_lines) I have already tried without success: ipython rather than python using threads rather than process saving the image on disk rather than on buffer What extremely weird is that I cannot reproduce the bug using the code above. And yet I see the error frequently in prod and on my machine.
I finally found the source of the bug, and it was NOT in the code interpreter (I should have expected that, since I could not reproduce the bug in a simplified setting). It turns out GPT does not always respect the signature of the tools. For instance, instead of returning a dict {"code": "..."} it will sometimes return the code directly as a string. I improved the parsing to handle that case:
try:
    parsed_args = json.loads(function_args)
    function_args = parsed_args
except json.JSONDecodeError:
    function_args = {
        next(iter(inspect.signature(function_to_call).parameters)): function_args
    }
4
0
78,436,536
2024-5-6
https://stackoverflow.com/questions/78436536/how-do-i-prevent-virtual-environments-having-access-to-system-packages
noob here. I have a dockerised jupyter lab instance where I have a bunch of packages installed for the root user, and now I want to add an additional kernel that has no packages at all (yet). Here's how I've tried to do that: # SYSTEM SETUP FROM python:3.11.5-bookworm ADD requirements/pip /requirements/pip RUN pip install -r /requirements/pip RUN mkdir /venv/ \ && cd /venv/ \ && python -m venv blank \ && /venv/blank/bin/pip install ipykernel \ && /venv/blank/bin/python -m ipykernel install --user --name=blank-env I expect the root python environment to have all the packages listed in my requirements/pip file and my blank-env environment to be empty. However, when I run jupyter lab in the docker container and select the blank-env kernel, it appears to have access to all of the packages from my requirements file. What am I doing wrong here?
Check the kernel configuration: make sure that when you install ipykernel into blank-env, you are using that virtual environment's interpreter (or have activated it) rather than the system Python.
FROM python:3.11.5-bookworm
ADD requirements/pip /requirements/pip
RUN pip install -r /requirements/pip
RUN python -m venv /venv/blank
RUN /venv/blank/bin/pip install ipykernel
RUN /venv/blank/bin/python -m ipykernel install --user --name=blank-env
Then restart the Jupyter Lab server and verify the kernel environment from within a notebook:
import sys
sys.executable
This will show you the path to the Python executable that the kernel is using. If it is pointing to the global Python environment, there might be an issue with how the kernel was installed or activated.
3
0
78,439,730
2024-5-7
https://stackoverflow.com/questions/78439730/openssl-3-0s-legacy-provider-failed-to-load
When I start Anaconda Prompt (Anaconda3), the following error message appears and I cannot figure out how to resolve it:
"Error while loading conda entry point: conda-content-trust (OpenSSL 3.0's legacy provider failed to load. This is a fatal error by default, but cryptography supports running without legacy algorithms by setting the environment variable CRYPTOGRAPHY_OPENSSL_NO_LEGACY. If you did not expect this error, you have likely made a mistake with your OpenSSL configuration.)"
I have tried many solutions proposed on Stack Overflow and elsewhere, and none of them resolve the issue.
Run conda install cryptography.
If you are using Linux or macOS, add this line to your .bashrc or .zshrc file:
export CRYPTOGRAPHY_OPENSSL_NO_LEGACY=1
and then run source ~/.bashrc (or source ~/.zshrc).
If you are using Windows, open the environment variables dialog, and under "System variables" click "New...", add CRYPTOGRAPHY_OPENSSL_NO_LEGACY as the variable name and 1 as the value, then click "OK" to save the changes.
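If you prefer the command line on Windows, the same variable can be set without the GUI. This is a small sketch using standard Windows commands: set affects only the current Anaconda Prompt session, while setx persists it for new sessions.

set CRYPTOGRAPHY_OPENSSL_NO_LEGACY=1
setx CRYPTOGRAPHY_OPENSSL_NO_LEGACY 1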
3
3
78,430,267
2024-5-4
https://stackoverflow.com/questions/78430267/why-is-pandas-str-functions-slower-than-apply
I'm processing a large dataset, particularly these two columns: Genotype Iteration 10010110011011101101011000010011111011111000000111001001101111111101101111001011 0 00011100001011010000000110010010100101101011001010101110110111000101000110000000 0 00100100100100101000100101100110100101110000100111000000011001011001101111000011 0 10001010101100000101110001011111000110101100101010111100110011011101010011111110 0 11010101010010001110100110110001001010101001111000111011110110101101010100011110 0 I want to create a new column that tells me how many 1s the column Genotype has. I've tried two methods: With built-in str module %%timeit total_df['Count_1'] = total_df['Genotype'].str.count('1') 10.9 s Β± 183 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) With apply(): %%timeit total_df['Count_1'] = total_df['Genotype'].apply(lambda x: x.count('1')) 2.63 s Β± 13.8 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) A significant improvement is obtained using the second approach but, as far as I knew, the apply() method was slower than using pandas built-in methods. What am I missing? I don't know if this is useful but pd.__version__ is 2.0.3.
The answer is that the pandas .str operations are not truly vectorized, as @mozway and @roganjosh mentioned: for object-dtype string columns they still loop over the Python string objects internally and add extra overhead (e.g. NaN handling), which is why .apply() with str.count comes out faster here.
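A minimal benchmark sketch to reproduce the comparison on synthetic data; the row count and random bit strings are stand-ins for the real dataset:

import timeit

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# 100k random 80-character bit strings as a stand-in for the Genotype column
genotypes = ["".join(map(str, rng.integers(0, 2, 80))) for _ in range(100_000)]
df = pd.DataFrame({"Genotype": genotypes})

t_str = timeit.timeit(lambda: df["Genotype"].str.count("1"), number=5)
t_apply = timeit.timeit(lambda: df["Genotype"].apply(lambda s: s.count("1")), number=5)
t_listcomp = timeit.timeit(
    lambda: pd.Series([s.count("1") for s in df["Genotype"]], index=df.index), number=5
)

print(f".str.count         : {t_str:.2f}s")
print(f".apply             : {t_apply:.2f}s")
print(f"list comprehension : {t_listcomp:.2f}s")

On object-dtype string columns, a plain list comprehension or .apply is often among the fastest options, which is consistent with what you observed.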
3
1
78,444,253
2024-5-7
https://stackoverflow.com/questions/78444253/autoencoders-and-polar-coordinates
Can an autoencoder learn the transformation into polar coordinates? If a set of 2D data lies approximately on a circle, there is a lower-dimensional manifold, parameterized by the angle, that describes the data 'best'. I tried various versions without success. import matplotlib.pyplot as plt import numpy as np import pandas as pd import tensorflow as tf from tensorflow.keras import layers, losses from tensorflow.keras.models import Model from tensorflow.keras.optimizers import Adam class DeepAutoEncoder(Model): def __init__(self, dim_data: int, num_hidden: int, num_comp: int, activation: str = 'linear'): super(DeepAutoEncoder, self).__init__() self.encoder = tf.keras.Sequential([ layers.Dense(num_hidden, activation=activation), layers.Dense(num_comp, activation=activation), ]) self.decoder = tf.keras.Sequential([ layers.Dense(num_hidden, activation=activation), layers.Dense(dim_data, activation='linear') ]) def call(self, x): encoded = self.encoder(x) decoded = self.decoder(encoded) return decoded # Data num_obs = 1000 np.random.seed(1238) e = np.random.randn(num_obs, 1) t = np.linspace(0, 2*np.pi, num_obs) x = 1 * np.cos(t) y = np.sin(t) + 0.2*e[:, 0] X = np.column_stack((x, y)) num_comp = 1 activations = ['linear', 'sigmoid'] ae = {a: None for a in activations} for act in activations: ae[act] = DeepAutoEncoder(dim_data=2, num_comp=num_comp, num_hidden=3, activation=act) ae[act].compile(optimizer=Adam(learning_rate=0.01), loss='mse') ae[act].build(input_shape=(None, 2)) ae[act].summary() history = ae[act].fit(X, X, epochs=200, batch_size=32, shuffle=True) ae[act].summary() plt.plot(history.history["loss"], label=act) plt.legend() f, axs = plt.subplots(2, 2) for i, a in enumerate(activations): axs[0, i].plot(x, y, '.', c='k') z = ae[a].encoder(X) # x_ae = ae[a].decoder(ae[a].encoder(X)) x_ae = ae[a](X) axs[0, i].plot(x_ae[:, 0], x_ae[:, 1], '.') # axs[0, i].plot(x_pca[:, 0], x_pca[:, 1], '.', c='C3') axs[1, i].plot(z) axs[0, i].axis('equal') axs[0, i].set(title=a) The reconstructed data looks like: I assume that the reason is that the transformation sigmoid(W * z + b) is far away from a non-linear matrix [[cos(theta) sin(theta)], [-sin(theta) sin(theta)]] required to map the latent variable back into the original space. Any thoughts would be great! Thank you very much.
Can an autoencoder learn the transformation into polar coordinates? If a set of 2D data lies approximately on a circle, there is a lower-dimensional manifold, parameterized by the angle, that describes the data 'best'. Neural nets can be trained to learn arbitrary transformations, including analytical ones like mappings between coordinate systems. In the present case the net will encode the (x, y) coordinates using an arbitrary encoding that will correlate strongly with the angle (a 'pseudo-angle', if you like). The neural net in your question is trying to encode 3 key variables into a 1D space (the single encoder unit): the sign of x, the sign of y, and their relative sizes. These three pieces of information are all required for determining the correct quadrant. I think the main reason the net is not learning is because its capacity is too little for capturing arctan2 complexity. Suppose we limit the data to x > 0; in this case the only thing the net needs to encode is the sign of y and its relative size to x. In this case your net works fine, as it just needs to learn arctan: The figure below illustrates how the learnt encoding carries information about the angle, allowing the net to uniquely determine the location along the circle. Notice how there is a unique value of the encoding for each point along the circumference. There's a linear relationship between the inputs and their reconstructions, indicating that the net has learnt to reproduce them. As soon as you allow x to be both positive or negative, it needs to learn the more complex arctan2 function. The net fails and instead just captures arctan; your results show that y is being encoded correctly, but for any y it can't determine which side of the plane x should be, resulting in an average x of 0 with a correct projection of the points onto y. The figure on the left illustrates what is happening. The encoding is correct if you trace it from +90 to -90, but then it repeats. In other words, it is capturing the correct angles in the right-hand plane, but they are duplicated for the left-hand plane. Both positive and negative x correspond to the same encoding, leading to x averaging out to 0. The second figure shows how, for any x, it basically predicts 0, whilst it learns the correct positioning for y in the third figure. I made the following changes, all of which I found were important for improving performance using this dataset and the given model: Make the encoder deeper (and to a lesser extent, the decoder as well) Use tanh activations rather than ReLU or sigmoid, consistent with the data's range Use a small batch size, giving the net more steps for exploring the loss space I've tried to keep the architecture close to the original in order to demonstrate the point of network depth. The results below are with an easier dataset (less noise), showing the model's performance after 75 epochs: Model comprises 135 trainable parameters [epoch 1/75] trn loss: 0.291 [rmse: 0.540] | val loss: 0.284 [rmse: 0.533] [epoch 5/75] trn loss: 0.252 [rmse: 0.502] | val loss: 0.248 [rmse: 0.498] ... [epoch 70/75] trn loss: 0.005 [rmse: 0.072] | val loss: 0.005 [rmse: 0.074] [epoch 75/75] trn loss: 0.009 [rmse: 0.095] | val loss: 0.005 [rmse: 0.070] The net has learnt a unique encoding for each point on the circle. There's a discontinuity near y=0 that I haven't looked into. The recons generally track the inputs. 
In going from modelling half of the plane (arctan) to the full plane (arctan2), the model size increases from about 27 to 135 parameters (5x), and the depth increases from 1 layer to 9. Whilst arctan is a single equation, arctan2 is discontinuous below x < 0, meaning it is defined by 3 equations rather than 1, and that's aside from the 2/3 other points where special cases apply. It seems like the depth grows exponentially with the additional complexity, suggesting that the model needs higher-level encodings rather than merely more detailed ones. There could be more efficient architectures that are more expressive with lesser depth, in which case we wouldn't need as many layers, but this example sticks to a simple stacked arrangement. PyTorch example code. import numpy as np from matplotlib import pyplot as plt #Data for testing as per OP num_obs = 1000 np.random.seed(1238) e = np.random.randn(num_obs) #Shuffle it in advance t = np.linspace(0, 2 * np.pi, num_obs)[np.random.permutation(num_obs)] x = np.cos(t) y = np.sin(t) + 0.2 * e / 10 data = np.column_stack((x, y)) # data = data[x > 0, :] #Limit to RH plane # #Split the data (just train & validation for this demo) # n_train = int(0.7 * len(data)) data_trn = data[:n_train] data_val = data[n_train:] f, (ax_trn, ax_val) = plt.subplots( ncols=2, figsize=(5.2, 3), sharex=True, sharey=True, layout='tight' ) for ax, arr in [[ax_trn, data_trn], [ax_val, data_val]]: ax.scatter(arr[:, 0], arr[:, 1], marker='.', s=2, color='dodgerblue') ax_trn.set_title('train') ax_trn.set(xlabel='x', ylabel='y') ax_trn.spines[['top', 'right']].set_visible(False) ax_val.set_title('validation') ax_val.set_xlabel('x') ax_val.spines[['top', 'right', 'left']].set_visible(False) ax_val.tick_params(axis='y', left=False) # # Prepare data for training # import torch from torch import nn #Data to float tensors X_trn = torch.tensor(data_trn).float() X_val = torch.tensor(data_val).float() # #Define the model # torch.manual_seed(1000) n_features = X_trn.shape[1] hidden_size = 3 latent_size = 1 activation_layer = nn.Tanh() encoder = nn.Sequential( nn.Linear(n_features, hidden_size), activation_layer, nn.Linear(hidden_size, hidden_size), activation_layer, nn.Linear(hidden_size, hidden_size), activation_layer, nn.Linear(hidden_size, hidden_size), activation_layer, nn.Linear(hidden_size, hidden_size), activation_layer, nn.Linear(hidden_size, hidden_size), activation_layer, nn.Linear(hidden_size, hidden_size), activation_layer, nn.Linear(hidden_size, hidden_size), activation_layer, nn.Linear(hidden_size, latent_size), ) decoder = nn.Sequential( activation_layer, nn.Linear(latent_size, hidden_size), activation_layer, nn.Linear(hidden_size, hidden_size), activation_layer, nn.Linear(hidden_size, hidden_size), activation_layer, nn.Linear(hidden_size, n_features), ) autoencoder = nn.Sequential(encoder, decoder) print( 'Model comprises', sum([p.numel() for p in autoencoder.parameters() if p.requires_grad]), 'trainable parameters' ) optimiser = torch.optim.NAdam(autoencoder.parameters()) loss_fn = nn.MSELoss() # # Training loop # metrics_dict = dict(epoch=[], trn_loss=[], val_loss=[]) for epoch in range(n_epochs := 75): autoencoder.train() train_shuffled = X_trn[torch.randperm(len(X_trn))] for sample in train_shuffled: recon = autoencoder(sample).ravel() loss = loss_fn(recon, sample) optimiser.zero_grad() loss.backward() optimiser.step() #/end of epoch if not (epoch == 0 or (epoch + 1) % 5 == 0): continue autoencoder.eval() with torch.no_grad(): trn_recon = autoencoder(X_trn) val_recon = 
autoencoder(X_val) trn_encodings = encoder(X_trn) val_encodings = encoder(X_val) trn_loss = loss_fn(trn_recon, X_trn) val_loss = loss_fn(val_recon, X_val) print( f'[epoch {epoch + 1:>2d}/{n_epochs:>2d}]', f'trn loss: {trn_loss:>5.3f} [rmse: {trn_loss**0.5:>5.3f}] |', f'val loss: {val_loss:>5.3f} [rmse: {val_loss**0.5:>5.3f}]' ) metrics_dict['epoch'].append(epoch + 1) metrics_dict['trn_loss'].append(trn_loss) metrics_dict['val_loss'].append(val_loss) #Overlay results for ax, recon in [[ax_trn, trn_recon], [ax_val, val_recon]]: ax.scatter(recon[:, 0], recon[:, 1], color='crimson', marker='.', s=2) #Legend ax_trn.scatter([], [], color='dodgerblue', marker='.', s=5, label='data') ax_trn.scatter([], [], color='crimson', marker='.', s=5, label='recon') f.legend(framealpha=1, scatterpoints=5, loc='upper left', labelspacing=0.05) #View learning curve f, ax = plt.subplots(figsize=(6, 2)) for key in ['trn_loss', 'val_loss']: ax.plot( metrics_dict['epoch'][0:], metrics_dict[key][0:], marker='o', linestyle='-' if 'trn' in key else '--', label=key[:3] ) ax.set(xlabel='epoch', ylabel='MSE loss') ax.legend(framealpha=0, loc='upper right') ax.spines[['top', 'right']].set_visible(False) #View encodings f, axs = plt.subplots(ncols=4, figsize=(10, 3), layout='tight') cmap = 'seismic' ax = axs[0] im = ax.scatter(X_trn[:, 0], X_trn[:, 1], c=trn_encodings, cmap=cmap, marker='.') ax.set(xlabel='x', ylabel='y', title='inputs & learnt encoding') ax = axs[1] ax.scatter(X_trn[:, 0], trn_recon[:, 0], c=trn_encodings, cmap=cmap, marker='.') ax.set(xlabel='x', ylabel='recon_x', title='x recon') ax = axs[2] ax.scatter(X_trn[:, 1], trn_recon[:, 1], c=trn_encodings, cmap=cmap, marker='.') ax.set(xlabel='y', ylabel='recon_y', title='y recon') ax = axs[3] ax.scatter(trn_recon[:, 0], trn_recon[:, 1], c=trn_encodings, cmap=cmap, marker='.') ax.set(xlabel='x_recon', ylabel='y_recon', title='recon') [ax.set(xlim=[-1.5, 1.5], ylim=[-1.5, 1.5]) for ax in axs] f.colorbar(im, label='encoder output', ax=axs[3])
3
2
78,418,808
2024-5-2
https://stackoverflow.com/questions/78418808/how-to-write-a-test-if-the-argument-defines-a-type
This is not a pydantic question, but to explain why I am asking: pydantic.TypeAdapter() accepts (among many others) all the following type definitions as its argument and can create a working validator for them:
int
int|str
list
list[str|int]
typing.Union[int,str]
typing.Literal[10,20,30]
Example:
>>> validator = pydantic.TypeAdapter(list[str|int]).validate_python
>>> validator([10,20,"stop"])
[10, 20, 'stop']
>>> validator([10,20,None])
(traceback deleted)
pydantic_core._pydantic_core.ValidationError: ...
I want to test whether an argument is such a type definition. How do I write such a test? I started with isinstance(arg, type) for simple types like int or list, then added isinstance(arg, types.GenericAlias) for list[str] etc., then realized this does not recognize int|str (which itself behaves differently from typing.Union[int,str]). Literal[] is not recognized either... I'm probably on the wrong track.
I think you can use typing.get_origin combined with isinstance(x, type) like you already mentioned: import typing def defines_type(x): return isinstance(x, type) or typing.get_origin(x) is not None Testing it out we see our defines_type function returns True for all these: int int|str list list[str|int] typing.Union[int,str] typing.Literal[10,20,30] type(None) Point # here Point = collections.namedtuple('Point', ['x', 'y']) Vector # here Vector = list[float] VectorExplicit # VectorExplicit: typing.TypeAlias = list[float] typing.Optional[int] collections.OrderedDict and False for all these 1 'a' [10,20,'stop'] None [10,20,None] Point(10, -2) # here Point = collections.namedtuple('Point', ['x', 'y']) typing.Union # Interesting case! not a valid type or type annotation (I assume this behaviour is what you want) {'a', 'b'}
4
2
78,421,110
2024-5-2
https://stackoverflow.com/questions/78421110/how-to-draw-a-polygon-spanning-the-pole-with-cartopy
I am trying to draw polygons on a map at arbitrary locations, including places that span the pole and the dateline. Conceptually, consider drawing instrument footprints for orbital measurements, where the corners of the footprints are known in lat/long (or cartesian). I have been able to get mid-latitude polygons to draw, but shapes spanning the pole come up blank. Here is some code to show where I have gotten: from matplotlib.patches import Polygon import matplotlib.pyplot as plt import cartopy.crs as ccrs # https://stackoverflow.com/questions/73689586/ geodetic = ccrs.Geodetic() # Define a polygon over the pole (long, lat) points_north = [ (0, 75), (270, 75), (180, 75), (90, 75), ] # Mid-latitude polygon (long, lat) points_mid = [ (10, 70), (30, 20), (60, 20), (80, 70), ] # Create a PlateCarree projection proj0 = ccrs.PlateCarree() proj0._threshold /= 5. # https://stackoverflow.com/questions/59020032/ ax = plt.axes( projection = proj0 ) # Add the polygons to the plot ax.add_patch( Polygon( points_north, alpha = 0.2, transform = geodetic ) ) ax.add_patch( Polygon( points_mid, alpha = 0.2, transform = geodetic ) ) # Some window dressing longlocs = list( range( -180, 181, 30 ) ) latlocs = [ -75, -60, -30, 0, 30, 60, 75 ] ax.gridlines( xlocs = longlocs, ylocs = latlocs, draw_labels = True, crs = proj0 ) ax.set_global() ax.set_xlabel('Longitude') ax.set_ylabel('Latitude') # Show the plot plt.show() This produces the following map: The mid-latitude polygon draws as expected, but the north pole square does not. I would expect that polygon to be smeared across the entire north with scalloped edges, similar to what is shown in the rotated pole boxes example. I have played around with RotatedPole but I really haven't come close to a general solution that lets me plot arbitrary footprints.
I think that the following is a general solution that is sufficient for my purposes: from matplotlib.patches import Polygon import matplotlib.pyplot as plt import cartopy.crs as ccrs # Test patches are boxes with regular angular width qSz = 40/2 poly = [ ( -qSz, qSz ), ( qSz, qSz ), ( qSz, -qSz ), ( -qSz, -qSz ) ] fig = plt.figure() ax = plt.axes( projection=ccrs.PlateCarree() ) # Patch locations are defined by their centers centerLats = range(-75, 90, 15 ) centerLons = range( 30, 360, 30 ) for centerLat, centerLon in zip( centerLats, centerLons ): rotated_pole = ccrs.RotatedPole( pole_latitude = centerLat - 90, pole_longitude = centerLon ) ax.add_patch( Polygon( poly, transform = rotated_pole, alpha = 0.3 ) ) ax.gridlines( draw_labels = True, crs = ccrs.PlateCarree() ) ax.set_global() plt.show() This results in the following plot: The gotcha is that the patches now have to be defined in the rotated frame, so for each of my patches (which I have in cartesian coordinates, including the center) I'll need to compute the corners in that rotated frame, which means that they'll all be small angles bracketing (0,0) like the example. But at least the rotated pole handles the pole and dateline when rotated back to PlateCarree.
2
1
78,438,668
2024-5-6
https://stackoverflow.com/questions/78438668/generating-random-passphrases-from-sets-of-strings-with-secrets-random-is-not-ve
I have a requirement for a basic passphrase generator that uses set lists of words, one for each position in the passphrase.
def generate(self) -> str:
    passphrase = "%s-%s-%s-%s-%s%s%s" % (
        self.choose_word(self.verbs),
        self.choose_word(self.determiners),
        self.choose_word(self.adjectives),
        self.choose_word(self.nouns),
        self.choose_word(self.numbers),
        self.choose_word(self.numbers),
        self.choose_word(self.numbers),
    )
The lists chosen from contain 100 adjectives, 9 determiners, 217 nouns, 67 verbs, and the digits 1-9, and the choices are made using secrets.choice:
def choose_word(cls, word_list: List[str]) -> str:
    return secrets.choice(word_list)
In theory I thought this would give me a shade under 13 billion unique passwords. However, I have written a test case that generates 10,000 passphrases and checks that they are all unique through a membership check on a sequence of the generated passphrases:
def test_can_generate_10000_passphrases_without_collision(passphrase: Passphrase):
    generated_passphrases = []
    for i in range(10000):
        generated_passphrase = passphrase.generate()
        assert generated_passphrase is not None and len(generated_passphrase) > 10
        assert generated_passphrase not in generated_passphrases
        generated_passphrases.append(generated_passphrase)
    assert len(generated_passphrases) == 10000
However, the test does not reflect the probability of duplicates I expected. Using pytest-repeat I set this test up to run 10,000 times, and it failed (generated duplicate passphrases) 24 times in 4145 runs before I killed the process. In each case the output of the test is truncated, but it shows that a chosen passphrase was found 'in' the set of generated phrases, and the phrase is different each time. I don't really have a specific question here; it just seems like I'm misunderstanding something. Either my probability calculations are wrong and I need to add more words to the lists, or something about the membership check is doing a looser match than I expected. I switched from random.choices to secrets.choice, tried re-instantiating the password generator class between runs, tried adding checks that the password was non-empty (because empty strings always match), and also tried running in and out of a Docker container, thinking something might be messing up the randomness.
Nature of the Problem Let's start by asserting that there is a 1-to-1 mapping between k-dimensional list indices for lists of length β„“1,...,β„“k and integers in the range [0,...,Ξ (β„“1,...,β„“k)-1], where Ξ  denotes the product of the set of values. The following python class shows an implementation of the math which does that mapping. class Mapping: def __init__(self, dimensions): self.dimensions = dimensions self.n_dimensions = len(dimensions) self.cumulative = dimensions.copy() self.cumulative.append(1) for i in range(1, self.n_dimensions): idx = self.n_dimensions - i self.cumulative[idx] *= self.cumulative[idx+1] self.capacity = self.cumulative.pop(0) self.capacity *= self.cumulative[0] def indices_to_int(self, indices): result = 0 for i in range(self.n_dimensions): result += indices[i] * self.cumulative[i] return result def int_to_indices(self, index): indices = [None] * self.n_dimensions for i in range(self.n_dimensions): indices[i] = index // self.cumulative[i] index -= indices[i] * self.cumulative[i] return indices Using this with a small set of indices shows the 1-to-1 mapping pretty clearly. This code: list_lengths = [2,3,4] map = Mapping(list_lengths) int_set = range(map.capacity) for number in int_set: index_list = map.int_to_indices(number) print(number, index_list, map.indices_to_int(index_list)) produces the results given below. The output consists of an integer in the left column, which maps to the list of index values in the center. That list is subsequently mapped back to the original integer to demonstrate the 1-to-1 nature of the mapping. 0 [0, 0, 0] 0 1 [0, 0, 1] 1 2 [0, 0, 2] 2 3 [0, 0, 3] 3 4 [0, 1, 0] 4 5 [0, 1, 1] 5 6 [0, 1, 2] 6 7 [0, 1, 3] 7 8 [0, 2, 0] 8 9 [0, 2, 1] 9 10 [0, 2, 2] 10 11 [0, 2, 3] 11 12 [1, 0, 0] 12 13 [1, 0, 1] 13 14 [1, 0, 2] 14 15 [1, 0, 3] 15 16 [1, 1, 0] 16 17 [1, 1, 1] 17 18 [1, 1, 2] 18 19 [1, 1, 3] 19 20 [1, 2, 0] 20 21 [1, 2, 1] 21 22 [1, 2, 2] 22 23 [1, 2, 3] 23 This means that your problem of trying to generate unique combinations of elements from a set of lists is equivalent to generating a set of unique integer values in a well-defined range. With list lengths of [2,3,4], that's range(2*3*4). For your actual problem, the applicable range is range(100*9*217*67*10*10*10), i.e., range(13_085_100_000). You might think that if you take a sample of size 10_000 from a pool of 13 billion integers, the chance of getting any duplicates would be close to zero. The birthday problem, also known as the birthday paradox, tells us otherwise. When you sample values independently, each additional sample has the task of avoiding all of the values that have already been sampled. The probabilities of avoidance start off high, but they are less than 1, diminish as the quantity of values sampled increases, and get multiplied (due to independent sampling), so the probability of all of them simultaneously missing each other diminishes surprisingly rapidly towards zero. It turns out that with 365 days in a year (ignoring leap years), the probability of getting one or more duplicate birthdays exceeds 1/2 by the time you introduce the 23rd person to the group. By the time you get to 57 people, the probability of having two or more who share birthdays exceeds 0.99. The pool size of 13_085_100_000 is much larger than 365, but the logic works the same way. The following program shows that with a sample size of 10_000 integer values, the analytically derived probability of having one or more duplicate values is about 0.00381. 
We would expect 38 out of 10_000 such experiments to have at least one duplicate. With a sample size of 135_000 the probability of duplicates exceeds 0.5. import fractions pool_size = fractions.Fraction(13_085_100_000) # use big-integer rational arithmetic samp_size = 10_000 p_no_duplicates = fractions.Fraction(1) # no duplicates when one item in the room... numerator = fractions.Fraction(pool_size + 1) # ...so start with the second item. for i in range(2, samp_size): # i is number of items currently in the room p_no_duplicates *= fractions.Fraction(numerator - i) / pool_size print("For " + str(samp_size) + " items and a pool size of " + str(pool_size) + ", P(Duplicate) = " + str(1.0 - p_no_duplicates)) # For 10000 items and a pool size of 13085100000, P(Duplicate) = 0.0038127078842439266 One Possible Solution A possible solution is to use python's random.sample(), which does sampling without replacement. That means that once a value is selected it gets removed from the pool of remaining candidates. (That changes the probabilities of all the remaining values in the pool, so this is not independent sampling.) No duplicates will occur until the entire pool has been sampled. Using random.sample() will guarantee unique integers which can then be mapped into the unique set of indices from which to construct your set of passphrases. The following illustrates this using the [2,3,4] indices from the earlier example: import random # list_lengths = [100,9,217,67,10,10,10] list_lengths = [2,3,4] map = Mapping(list_lengths) int_set = random.sample(range(map.capacity), map.capacity) for number in int_set: index_list = map.int_to_indices(number) # Use index_list to generate a unique password # from the lists of component elements. This will produce all sets of indices uniquely, but in a random order. Toggle the comments on the list_lengths assignment lines and set the sample size to 10_000 to obtain a randomized subsample of unique indices in index_list.
2
2
78,424,396
2024-5-3
https://stackoverflow.com/questions/78424396/plotting-a-combined-heatmap-and-clustermap-problems-with-adding-two-colorbars
I have two omics datasets which I woud like to compare. For this I plot one as a clustermap, and extract the order from it and plot the exact same genes as a heatmap, so that I can make a direct comparison between the two datasets. However, I would like to show the colorbars for both, and I cannot seem to get it to work. the colorbar for the clustermap (left) is where I want it and the size I want it, however it is not showing y-ticks for the range of my data the colorbar for the heatmap (right) is not where I want it, too large, and also not showing tickmarks. I would like it to be exactly like the first colorbar but on the right side instead. I used this to combine the two plots, and this to adjust the colorbar of the clustermap, but this does not work as a function for the heatmap. I was playing around with cbar_kws but I don't think this allows me to do what I want, though I don't fully understand the potential there I think. Using python in VScode; import matplotlib.gridspec #clustermap from averaged protein data! g = sns.clustermap(hmprot, figsize=(8,12), col_cluster=False, yticklabels=True, cmap = 'viridis') labels = [t.get_text() for t in g.ax_heatmap.yaxis.get_majorticklabels()] g.gs.update(left=0.05, right=0.49) #create new gridspec for the right part gs2 = matplotlib.gridspec.GridSpec(1,1) hmrna = hmrna.reindex(index=labels) # create axes within this new gridspec ax2 = g.fig.add_subplot(gs2[0]) # get position of heatmap heatmap_bbox = g.ax_heatmap.get_position() ax2.set_position([0.5, heatmap_bbox.y0, .35, heatmap_bbox.height]) # plot heatmap in the new axes sns.heatmap(hmrna, ax=ax2, cmap = 'viridis', cbar=True, yticklabels=True, cbar_kws= dict(use_gridspec=False,location = 'right', shrink= 0.5)) #change font size for the genes on the y-axis g.tick_params(axis='y', labelsize=6, labelright = False, right=False) g.tick_params(axis='x', labelbottom = True, bottom=False) ax2.tick_params(axis='y', labelsize=8, labelright=True, left=False, labelleft=False, labelrotation = 0) ax2.tick_params(axis='x', labelbottom = True, bottom=False) #then adjusting the labels for each axis individually # g.set_xlabel(' ') -> Doesn't work ax2.set_xlabel(' ') ax2.set_title('RNAseq', weight="bold") # ax2.set_title('Proteomics(C)', weight="bold") -> Doesn't work x0, _y0, _w, _h = g.cbar_pos g.ax_cbar.set_position([0.01, 0.04, 0.02, 0.1]) g.ax_cbar.set_title('Z-score') g.ax_cbar.tick_params(axis='x', length=10) title = "Clustermap of Protein (left) & RNA (right) " + str(GO_term) plt.suptitle(title, weight="bold", fontsize=20, y=0.85) fig.tight_layout() plt.show() I have also tried the suggestion mentioned here, with the following code; cbar_2_ax = fig.add_axes([0.95, 0.04, 0.02, 0.1]) cbar_2 = mp.colorbar(ax2, cax=cbar_2_ax) Then I get an error :'Axes' object has no attribute 'get_array' I'm not sure how to proceed now, I'm probably not using the right functions to change this but I have not been able to find something that does work. In addition, I managed to 'configure' the heatmap with its own title, and removed the 'group' label on the x-axis. I have not been able to figure out how to do the same for the clustermap, to give it it's own title, and remove this 'group' label. The same functions for the heatmap do not work here. A dummy version of my code that replicates the issues for me. How do I fix the right colorbar?? 
Some dummy version of my code that replicates the issue: df = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD')) df2 = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD')) #clustermap from df1 g = sns.clustermap(df, figsize=(12,18), col_cluster=False, yticklabels=True, cmap = 'viridis') g.gs.update(left=0.05, right=0.49) #create new gridspec for the right part gs2 = matplotlib.gridspec.GridSpec(1,1) # create axes within this new gridspec ax2 = g.fig.add_subplot(gs2[0]) # get position of heatmap heatmap_bbox = g.ax_heatmap.get_position() ax2.set_position([0.5, heatmap_bbox.y0, .35, heatmap_bbox.height]) # plot boxplot in the new axes sns.heatmap(df2, ax=ax2, cmap = 'viridis', cbar=True, yticklabels=True, cbar_kws= dict(use_gridspec=False,location = 'right', shrink= 0.5) ) g.tick_params(axis='y', labelsize=6, labelright = False, right=False) g.tick_params(axis='x', labelbottom = True, bottom=False) ax2.tick_params(axis='y', labelsize=8, labelright=True, left=False, labelleft=False, labelrotation = 0) ax2.tick_params(axis='x', labelbottom = True, bottom=False) ax2.set_title('title', weight="bold") # Set a custom title x0, _y0, _w, _h = g.cbar_pos g.ax_cbar.set_position([0.01, 0.04, 0.02, 0.1]) g.ax_cbar.set_title('Z-score') g.ax_cbar.tick_params(axis='x', length=10) title = "Clustermap (left) & heatmap (right) " plt.suptitle(title, weight="bold", fontsize=20, y=0.85) fig.tight_layout() plt.show() Below is my output figure from this code. I've highlighted the issues I'm hoping to fix.
Below you find some example code to adapt the layout and add the colorbars. Some remarks: sns.clustermap has a parameter cbar_pos to directly set the position of the colobar sns.clustermap also has a parameter dendrogram_ratio which define the space for the row and column dendrograms (the column dendrograms aren't used in this example, so the spacing can be set smaller) g.tick_params changes the ticks of all the subplots. This probably causes the missing ticks of the colorbars. To only change the ticks of the clustermap's heatmap, you can use g.ax_heatmap.tick_params() g.ax_heatmap.set_title() sets a title for the clustermap g.ax_heatmap.set_xlabel('') removes the label (apparently labeled 'group' in the original image) of the clustermap tight_layout() doesn't work when axes are created with fixed coordinates Something that is rather unclear, is that the clustermap changes the order of the rows (it places "similar" rows closer together). But the heatmap and its row labels at the right still seem to use their original order. To reorder the rows of the heatmap to the same order as the clustermap, the y-tick-labels of the clustermap can be extracted. Then, df2.reindex(...) reorders the rows of the heatmap. As the tick labels are strings, the example code below supposes the index of the dataframe are strings. import matplotlib.pyplot as plt import seaborn as sns import pandas as pd import numpy as np index = [f'r{i:02}' for i in range(50)] df = pd.DataFrame(np.random.randint(0, 100, size=(50, 4)), columns=list('ABCD'), index=index) df2 = pd.DataFrame(np.random.randint(0, 100, size=(50, 4)), columns=list('ABCD'), index=index) # clustermap from df1 g = sns.clustermap(df, figsize=(12, 18), col_cluster=False, yticklabels=True, cmap='viridis', dendrogram_ratio=(0.12, 0.04), # space for the left and top dendograms cbar_pos=[0.02, 0.04, 0.02, 0.1]) g.ax_cbar.set_title('Z-score') g.ax_heatmap.set_xlabel('') # remove possible xlabel g.ax_heatmap.set_title('clustermap title', weight="bold", fontsize=16) # Set a custom title # extract the order of the y tick labels of the clustermap (before removing the ticks) new_index = [t.get_text() for t in g.ax_heatmap.get_yticklabels()] # remove right ticks and tick labels of the clustermap g.ax_heatmap.tick_params(axis='y', right=False, labelright=False) g.ax_heatmap.tick_params(axis='x', labelbottom=True, bottom=False) # get position of heatmap heatmap_bbox = g.ax_heatmap.get_position() # make space for the right heatmap by reducing the size of the clustermap's heatmap g.ax_heatmap.set_position([heatmap_bbox.x0, heatmap_bbox.y0, 0.49 - heatmap_bbox.x0, heatmap_bbox.height]) ax2 = plt.axes([0.50, heatmap_bbox.y0, 0.38, heatmap_bbox.height]) cbar_2_ax = plt.axes([0.94, 0.04, 0.02, 0.1]) # plot heatmap in the new axes, reordering the rows similar as in the clustermap sns.heatmap(df2.reindex(new_index), cmap='viridis', cbar=True, yticklabels=True, ax=ax2, cbar_ax=cbar_2_ax) ax2.tick_params(axis='y', labelsize=8, labelright=True, left=False, labelleft=False, labelrotation=0) ax2.tick_params(axis='x', labelbottom=True, bottom=False) ax2.set_title('heatmap title', weight="bold", fontsize=16) # Set a custom title cbar_2_ax.set_title('Z-score') # title = "Clustermap (left) & heatmap (right)" # plt.suptitle(title, weight="bold", fontsize=20) plt.show()
2
1
78,443,779
2024-5-7
https://stackoverflow.com/questions/78443779/how-to-check-if-a-lazyframe-is-empty
Polars dataframes have an is_empty attribute: import polars as pl df = pl.DataFrame() df.is_empty() # True df = pl.DataFrame({"a": [], "b": [], "c": []}) df.is_empty() # True This is not the case for Polars lazyframes, so I devised the following helper function: def is_empty(data: pl.LazyFrame) -> bool: return ( data.width == 0 # No columns or data.null_count().collect().sum_horizontal()[0] == 0 # Columns exist, but are empty ) other = pl.LazyFrame() other.pipe(is_empty) # True other = pl.LazyFrame({"a": [], "b": [], "c": []}) other.pipe(is_empty) # True Is there a better way to do this? By better, I mean either without collecting or less memory-intensive if collecting can not be avoided.
As explained in the comments, "A LazyFrame doesn't have length. It is a promise on future computations. If we would do those computations implicitly, we would trigger a lot of work silently. IMO when the length is needed, you should materialize into a DataFrame and cache that DataFrame so that that work isn't done twice". So, calling collect is inevitable, but one can limit the cost by collecting only the first row (if any) with Polars limit, as suggested by @Timeless: import polars as pl df = pl.LazyFrame() df.limit(1).collect().is_empty() # True df= pl.LazyFrame({"a": [], "b": [], "c": []}) df.limit(1).collect().is_empty() # True df = pl.LazyFrame({col: range(100_000_000) for col in ("a", "b", "c")}) df.limit(1).collect().is_empty() # False, no memory cost
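If this check is needed in several places, the limit(1) idea above can be wrapped in a small helper (a minimal sketch; the function name is my own):

import polars as pl

def lazy_is_empty(lf: pl.LazyFrame) -> bool:
    # Materialize at most one row; cheap even when the full frame is huge
    return lf.limit(1).collect().is_empty()

lazy_is_empty(pl.LazyFrame())                  # True
lazy_is_empty(pl.LazyFrame({"a": [1, 2, 3]}))  # False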
5
6
78,445,033
2024-5-7
https://stackoverflow.com/questions/78445033/implement-frequency-encoding-in-polars
I want to replace the categories with their occurrence frequency. My dataframe is lazy and currently I cannot do it without 2 passes over the entire data and then one pass over a column to get the length of the dataframe. Here is how I am doing it: Input: df = pl.DataFrame({"a": [1, 8, 3], "b": [4, 5, None], "c": ["foo", "bar", "bar"]}).lazy() print(df.collect()) output: shape: (3, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ c β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ══════β•ͺ═════║ β”‚ 1 ┆ 4 ┆ foo β”‚ β”‚ 8 ┆ 5 ┆ bar β”‚ β”‚ 3 ┆ null ┆ bar β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ Required output: shape: (3, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ c β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ══════β•ͺ════════════════════║ β”‚ 1 ┆ 4 ┆ 0.3333333333333333 β”‚ β”‚ 8 ┆ 5 ┆ 0.6666666666666666 β”‚ β”‚ 3 ┆ null ┆ 0.6666666666666666 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ transformation code: l = df.select("c").collect().shape[0] rep = df.group_by("c").len().collect().with_columns(pl.col("len")/l).lazy() df_out = df.with_context(rep.select(pl.all().name.prefix("context_"))).with_columns(pl.col("c").replace(pl.col("context_c"), pl.col("context_len"))).collect() print(df_out) output: shape: (3, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ c β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ══════β•ͺ════════════════════║ β”‚ 1 ┆ 4 ┆ 0.3333333333333333 β”‚ β”‚ 8 ┆ 5 ┆ 0.6666666666666666 β”‚ β”‚ 3 ┆ null ┆ 0.6666666666666666 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ As you can see I am collecting the data 2 times full and there is one collect over a single column. Can I do better?
pl.len() will evaluate to the "column length". You can also use it in a group context (agg/over) as a way to count the values. df.with_columns(pl.len().over("c") / pl.len()).collect() shape: (3, 3) ┌─────┬──────┬──────────┐ │ a ┆ b ┆ c │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ f64 │ ╞═════╪══════╪══════════╡ │ 1 ┆ 4 ┆ 0.333333 │ │ 8 ┆ 5 ┆ 0.666667 │ │ 3 ┆ null ┆ 0.666667 │ └─────┴──────┴──────────┘ By grouping by the values, their "frequency count" is the group length. >>> df.group_by("c").len() shape: (2, 2) ┌─────┬─────┐ │ c ┆ len │ │ --- ┆ --- │ │ cat ┆ u32 │ ╞═════╪═════╡ │ foo ┆ 1 │ │ bar ┆ 2 │ └─────┴─────┘
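If several categorical columns need the same treatment, the expression can be generated per column while the query stays lazy (a sketch; cat_cols is an assumed list of the string columns to encode):

cat_cols = ["c"]
out = df.with_columns(
    (pl.len().over(col) / pl.len()).alias(col) for col in cat_cols
).collect()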
2
2
78,444,101
2024-5-7
https://stackoverflow.com/questions/78444101/footnotes-causing-errant-match-using-regex-in-python
I'm parsing text in Python using regex that typically looks like some version of this: JT Meta Platforms, Inc. - Class A Common Stock (META) [ST]S (partial) 02/08/2024 03/05/2024 $1,001 - $15,000 F S: New S O: Morgan Stanley - Select UMA Account # 1 JT Microsoft Corporation - Common Stock (MSFT) [ST]S (partial) 02/08/2024 03/05/2024 $1,001 - $15,000 F S: New S O: Morgan Stanley - Select UMA Account # 1 JT Microsoft Corporation - Common Stock (MSFT) [OP]P 02/13/2024 03/05/2024 $500,001 - $1,000,000 F S: New S O: Morgan Stanley - Portfolio Management Active Assets Account D: Call options; Strike price $170; Expires 01/17 /2025 C: Ref: 044Q34N6 I've set up a regex to extract 'ticker' (eg, MSFT, META) that looks like this: r"\(([A-Z]+\.?[A-Z]*?)\)" This pulls capitalized adjacent chars (eg, IBM, T) sitting within parens, and also allows there to be an optional period (happens occasionally eg, BRK.B) for certain situations. The below example is matching starting with the '(BM)' characters, which are included in a footnote for a previous transaction, and not valid tickers. The logic to identify these would be that they're preceded by the 'F S:' footnote designation, where there can be slight deviations in spaces but will include always include 'FS:'. How to exclude these errant situations using regex in Python? Alibaba Group Holding Limited American Depositary Shares each representing eight Ordinary share (BABA) [ST]S 01/19/2024 01/22/2024 $1,001 - $15,000 F S: New S O: Iron Gate GA Brokerage Portfolio > IGGA IRA (BM) Alphabet Inc. - Class C Capital Stock (GOOG) [ST]S 01/19/2024 01/22/2024 $1,001 - $15,000 F S: New S O: Iron Gate GA Brokerage Portfolio > IGGA ROTH IRA (JM) Alphabet Inc. - Class C Capital Stock (GOOG) [ST]S 01/19/2024 01/22/2024 $15,001 - $50,000 F S: New S O: Iron Gate GA Brokerage Portfolio > IGGA IRA (BM) Amazon.com, Inc. (AMZN) [ST] S 01/19/2024 01/22/2024 $15,001 - $50,000 F S: New S O: Iron Gate GA Brokerage Portfolio > IGGA IRA (BM) Amazon.com, Inc. (AMZN) [ST] S 01/19/2024 01/22/2024 $1,001 - $15,000 F S: New S O: Iron Gate GA Brokerage Portfolio > IGGA ROTH IRA (JM) Amazon.com, Inc. (AMZN) [ST] S 01/19/2024 01/22/2024 $1,001 - $15,000 F S: New S O: Iron Gate GA Brokerage Portfolio > IGGA ROTH IRA (BM)Filing ID #20024354
Since you expect only single matches per line, you can use ^(?![A-Z] [A-Z]:).*\(([A-Z]+\.?[A-Z]*)\) See the regex demo. Details: ^ - start of string (?![A-Z] [A-Z]:) - exclude the string that starts with an uppercase letter + space + uppercase letter + : .* - any zero or more chars other than line break chars as many as possible \( - a ( char ([A-Z]+\.?[A-Z]*) - Group 1: one or more uppercase letters, an optional dot and then zero or more uppercase letters \) - a ) char.
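In Python this can be applied to the whole filing text with the MULTILINE flag so that ^ anchors at the start of every line (a sketch; filing_text is assumed to hold the raw text from the question):

import re

ticker_re = re.compile(r"^(?![A-Z] [A-Z]:).*\(([A-Z]+\.?[A-Z]*)\)", re.MULTILINE)
tickers = ticker_re.findall(filing_text)  # one ticker per transaction line; footnote lines are skipped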
2
1
78,440,198
2024-5-7
https://stackoverflow.com/questions/78440198/restarting-the-python-interpreter-using-python-c-api
I have a C++/Qt application where I am running a Python interpreter session within the process of the main application. I have built a 'python console' as a QPlainTextEdit widget, which handles the input and output for the interpreter using the Python C API. The point of all this is that Python will have direct access to data that I'm holding inside the main application. All is well, except that I want to be able to end the current interpreter session and restart it without exiting the main application. Currently, I'm trying the obvious way: Py_Initialize(); // Run the main session... Py_FinalizeEx(); // Restart the session Py_Initialize(); From reading other posts and Python's documentation, there are possibly issues with reloading certain modules after finalizing the interpreter due to memory leaks. This seems to be true in my case: re-importing certain modules (like 'numpy') but not others (like 'sys') will trigger an exception and fail. Is there a good workaround strategy to restart the interpreter without these issues? For instance, if I spawn a sub-interpreter, will finalizing it and restarting a new sub-interpreter run into the same problems I'm having? The alternative I was trying to avoid is to run Python out-of-process, which I think would allow me to restart by basically killing the process and starting a new one. Any strategy advice would be much appreciated.
From the Py_FinalizeEx documentation: Bugs and caveats: The destruction of modules and objects in modules is done in random order; this may cause destructors (__del__() methods) to fail when they depend on other objects (even functions) or modules. Dynamically loaded extension modules loaded by Python are not unloaded. Small amounts of memory allocated by the Python interpreter may not be freed (if you find a leak, please report it). Memory tied up in circular references between objects is not freed. Some memory allocated by extension modules may not be freed. Some extensions may not work properly if their initialization routine is called more than once; this can happen if an application calls Py_Initialize() and Py_FinalizeEx() more than once. Specifically note that: Some extensions may not work properly if their initialization routine is called more than once; this can happen if an application calls Py_Initialize() and Py_FinalizeEx() more than once. (emphasis is mine) It is clear that the design does not support finalization and reinitialization as you attempt. It is therefore recommended that you initialize the Python engine once (when your process starts, or the first time you need it), and only finalize it upon exit (or at least once you are sure you will not need it anymore). Even if you find some workaround that works in a specific scenario, I do not think it is a good idea given the information above. Note: If you have some requirement that mandates creating a new Python environment for each session or operation, the best approach is to run Python as an external process.
3
1
78,425,424
2024-5-3
https://stackoverflow.com/questions/78425424/how-can-you-specify-python-runtime-version-in-vercel
I am trying to deploy a simple FastAPI app to vercel for the first time. Vercel.json is exactly below. { "devCommand": "uvicorn main:app --host 0.0.0.0 --port 3000", "builds": [ { "src": "api/index.py", "use": "@vercel/python", "config": { "maxLambdaSize": "15mb", "runtime": "python3.9" } } ], "routes": [ { "src": "/(.*)", "dest": "api/index.py" } ] } I have specified runtime as python3.9, but this doesn't reflect actual runtime which is still python3.12 (default).This ends up causing internal error. How can I configure runtime version correctly? I also read the official docs which says builds property shouldn't be used. So I tried to rewrite like below. { "devCommand": "uvicorn main:app --host 0.0.0.0 --port 3000", "functions": { "api/index.py": { "runtime": "[email protected]" } }, "routes": [ { "src": "/(.*)", "dest": "api/index.py" } ] } This didn't work as well. Maybe I shouldn't use vercel for python project?(little information in the internet)
I've scrolled through vercel docs but wasn't been able find any references that python version could be specified in builds or functions section of Vercel.json. in 'builds' you specify npm package @vercel/python. I guess this is more like interface for node.js to run python3 scripts. in 'functions' you kind off specify serverless functions. I'm not sure that this is what you need. Ref: https://vercel.com/docs/projects/project-configuration Solution: However as it is listed in Vercel documentation python version could be defined in the Pipfile. Ref: https://vercel.com/docs/functions/runtimes/python Note Vercel only supports python 3.12 (default) and python 3.9 (requires legacy image i.e. use Node.js 16 or 18.) Vercel Manuals: This links might be useful for setting up your first python project on vercel: https://github.com/vercel/examples/tree/main/python/hello-world https://github.com/vercel/examples/tree/main/python/flask3 https://github.com/vercel/examples/tree/main/python What worked for me First create your http handler or flask app or else: ## my_app.py from http.server import BaseHTTPRequestHandler, HTTPServer import sys class GETHandler(BaseHTTPRequestHandler): def do_GET(self): self.send_response(200) self.send_header('Content-type','text/plain') self.end_headers() self.wfile.write('Hello, world!\n'.encode('utf-8')) python_version = f"{sys.version_info[0]}.{sys.version_info[1]}.{sys.version_info[2]}" self.wfile.write(f'Python version {python_version}'.encode('utf-8')) # variable required by Vercel handler = GETHandler Next create a Pipfile. This bash commands will automatically create a Pipfile. ## run in bash pip install pipenv pipenv install Automatically generated pipfile should look smth like this: ## Pipfile [[source]] url = "https://pypi.org/simple" verify_ssl = true name = "pypi" [packages] pipenv = "~=2023.12" [dev-packages] [requires] python_version = "3.9" Next create package.json. You need to set node.js version to 18.x for running your scripts in python 3.9. ## package.json { "engines": { "node": "18.x" } } Third, you need to define routes that will trigger executing python server: ## vercel.json { "builds": [ { "src": "*.py", "use": "@vercel/python" } ], "redirects": [ { "source": "/", "destination": "/my_app.py" } ] } Finally let's set up Deployment Settings. In my project settings I don't add any run or build commands: However for python 3.9 check that your node.js version is 18.x (same page, i.e. in the Deployment Settings). That's it. After the deployment visit the generated route link and it will automatically redirect you to /my_app.py. You can see that python 3.9 was used to generate the page:
3
2
78,442,162
2024-5-7
https://stackoverflow.com/questions/78442162/how-to-construct-a-networkx-graph-from-a-dictionary-with-format-node-neighbor
I have the following dictionary that contains node-neighbor-weight pairs: graph = { "A": {"B": 3, "C": 3}, "B": {"A": 3, "D": 3.5, "E": 2.8}, "C": {"A": 3, "E": 2.8, "F": 3.5}, "D": {"B": 3.5, "E": 3.1, "G": 10}, "E": {"B": 2.8, "C": 2.8, "D": 3.1, "G": 7}, "F": {"G": 2.5, "C": 3.5}, "G": {"F": 2.5, "E": 7, "D": 10}, } The visual representation of the graph is this: How do I go from that dictionary to a networkx graph object: import networkx as nx G = nx.Graph() # YOUR SUGGESTION HERE #
You need to add the dictionary as edges to the graph import networkx as nx from matplotlib import pyplot as plt graph = { "A": {"B": 3, "C": 3}, "B": {"A": 3, "D": 3.5, "E": 2.8}, "C": {"A": 3, "E": 2.8, "F": 3.5}, "D": {"B": 3.5, "E": 3.1, "G": 10}, "E": {"B": 2.8, "C": 2.8, "D": 3.1, "G": 7}, "F": {"G": 2.5, "C": 3.5}, "G": {"F": 2.5, "E": 7, "D": 10}, } G = nx.Graph() for node, neighbors in graph.items(): for neighbor, weight in neighbors.items(): G.add_edge(node, neighbor, weight=weight) pos = nx.spring_layout(G) nx.draw(G, pos) labels = nx.get_edge_attributes(G, 'weight') nx.draw_networkx_edge_labels(G, pos, edge_labels=labels) nx.draw_networkx_labels(G, pos) plt.show()
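As a possible alternative, the scalar weights can be turned into attribute dicts first and the whole graph built in one call (a sketch reusing the graph dict above; it assumes an undirected graph, so the symmetric entries are merged into single edges):

dod = {u: {v: {"weight": w} for v, w in nbrs.items()} for u, nbrs in graph.items()}
G = nx.from_dict_of_dicts(dod)  # same nodes and weighted edges as the loop above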
2
1
78,426,073
2024-5-3
https://stackoverflow.com/questions/78426073/best-way-to-avoid-a-loop
I have 2 dataframes of number x and y of same length and an input number a. I would like to find the fastest way to calculate a third list z such as : z[0] = a z[i] = z[i-1]*(1+x[i]) + y[i] without using a loop like that : a = 213 x = pd.DataFrame({'RandomNumber': np.random.rand(200)}) y = pd.DataFrame({'RandomNumber': np.random.rand(200)}) z = pd.Series(index=x.index, dtype=float) z[0] = a for i in range(1,len(x.index)): z[i] = z[i-1]*(1+x.iloc[i]) + y.iloc[i]
You can't really vectorize this recurrence; the expanded expression quickly becomes too complex. For instance, z[i+1] expressed as a function of z[i-1] is equal to: z[i-1]*(1+x[i])*(1+x[i+1]) + y[i]*(1+x[i+1]) + y[i+1], and this gets worse with each step. As suggested in the comments, if speed is a concern, you could use numba: from numba import jit @jit(nopython=True) def f(a, x, y): out = [a] for i in range(1, len(x)): out.append(out[i-1] * (1 + x[i]) + y[i]) return out out = pd.Series(f(213, x['RandomNumber'].to_numpy(), y['RandomNumber'].to_numpy()), index=x.index) Output (using np.random.seed(0) and with 5 rows): 0 213.000000 1 365.772922 2 587.139217 3 908.025165 4 1293.097825 dtype: float64 Timings (200 rows): # python loop 59.8 ms ± 908 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) # numba 151 µs ± 968 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
3
1
78,438,776
2024-5-6
https://stackoverflow.com/questions/78438776/linear-regression-on-groupby-pandas-dataframe
Currently I have my code set up like this: def lregression(data, X, y): X = df['sales'].values.reshape(-1, 1) y = df['target'] model = LinearRegression() result = model.fit(X, y) return model.score(X, y) Then, I'm trying to apply this model per brand: df.groupby('brand').apply(lregression, X, y) But the result just gets applied to the full dataset: Brand A 0.734 Brand B 0.734 Brand C 0.734 Am I missing something here? I want the model to run separately for each group, but instead I'm apparently getting the model applied to the full dataset and then having the overall score returned for each group. Thanks!
DATAFRAME A minimal reproducible example is always nice to have, I'll provide it here: np.random.seed(42) data = { 'brand': np.random.choice(['Brand A', 'Brand B', 'Brand C'], size=300), 'sales': np.random.randint(100, 1000, size=300), 'target': np.random.randint(100, 1000, size=300) } df = pd.DataFrame(data) FUNCTION To me it's not clear whether you want to return the score (namely R^2) or the coef of the single regressions, in both cases the function changes only slightly: Score def lregression(group): X = group['sales'].values.reshape(-1, 1) y = group['target'] model = LinearRegression() result = model.fit(X, y) return result.score(X, y) Coefficients def lregression(group): X = group['sales'].values.reshape(-1, 1) y = group['target'] model = LinearRegression() result = model.fit(X, y) return result.coef_ Then the final step (coef_ scenario): >>> df.groupby('brand').apply(lregression) brand Brand A [0.20322970187699263] Brand B [0.09134770152569331] Brand C [0.043343302335992005] dtype: object Which works as expected
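If both numbers are wanted at once, the function can return a Series so that the groupby result becomes one tidy row per brand (a sketch under the same assumptions and imports as above; the function name is my own):

def lregression_stats(group):
    X = group['sales'].values.reshape(-1, 1)
    y = group['target']
    model = LinearRegression().fit(X, y)
    # one row per brand: R^2 and the fitted slope
    return pd.Series({'r2': model.score(X, y), 'slope': model.coef_[0]})

df.groupby('brand').apply(lregression_stats)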
2
1
78,440,430
2024-5-7
https://stackoverflow.com/questions/78440430/sorting-a-polars-liststruct-by-struct-value
How do I use polars.Expr.list.sort to sort a list of structs by one of the struct values, i.e. df = pl.DataFrame([{"id": 1, "data": [{"key": "A", "value": 2}, {"key": "B", "value": 1}]}]) I want to sort data by the value field, i.e. the result should be df = pl.DataFrame([{"id": 1, "data": [{"key": "B", "value": 1}, {"key": "A", "value": 2}]}]) df.with_columns(pl.col("b").list.sort()) Doesn't work and list.sort doesn't accept arguments?
Indeed, pl.Expr.list.sort does not take any arguments on the comparison function used for the sorting. This seems like a reasonable feature request for polars' Github page. Until this is implemented, I can think of 2 possible approaches. Polars seems to sort the list primarily based on the first struct field. Hence, reordering the struct fields before sorting influences the sorting behaviour. ( df .with_columns( pl.col("data") .list.eval( pl.struct( pl.element().struct.field("value"), pl.element().struct.field("key"), ) ) .list.sort() ) ) shape: (1, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ data β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ list[struct[2]] β”‚ β•žβ•β•β•β•β•β•ͺ════════════════════║ β”‚ 1 ┆ [{1,"B"}, {2,"A"}] β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Another more general approach is to explode the list column into multiple rows. Next, the frame can be sorted by the "value" field of the struct column "data" within each group defined by the column "id". This method offers a bit more flexibility as the sorting can be done based on multiple struct fields (to handle matches) and with mixed values for the descending parameter of pl.DataFrame.sort. ( df .explode("data") .sort("id", pl.col("data").struct.field("value")) .group_by("id") .agg("data") ) shape: (1, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ data β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ list[struct[2]] β”‚ β•žβ•β•β•β•β•β•ͺ════════════════════║ β”‚ 1 ┆ [{"B",1}, {"A",2}] β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
3
5
78,440,228
2024-5-7
https://stackoverflow.com/questions/78440228/understanding-the-role-of-shuffle-in-np-random-generator-choice
From the documentation for numpy's random.Generator.choice function, one of the arguments is shuffle, which defaults to True. The documentation states: shuffle bool, optional Whether the sample is shuffled when sampling without replacement. Default is True, False provides a speedup. There isn't enough information for me to figure out what this means. I don't understand why we would shuffle if it's already appropriately random, and I don't understand why I would be given the option to not shuffle if that yields a biased sample. If I set shuffle to False am I still getting a random (independent) sample? I'd love to also understand why I would ever want the default setting of True.
You are still getting a random choice regardless of your selection for shuffle. If you select shuffle=False, however, the ordering of the output is not independent of the ordering of the input. This is easiest to see when the number of items chosen equals the total number of items: import numpy as np rng = np.random.default_rng() x = np.arange(10) rng.choice(x, 10, replace=False, shuffle=False) # array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) rng.choice(x, 10, replace=False, shuffle=True) # array([8, 1, 3, 9, 6, 5, 0, 7, 4, 2]) If you reduce the number of items chosen and use shuffle=False, you can confirm that which item(s) end up missing is distributed as expected. import numpy as np import matplotlib.pyplot as plt rng = np.random.default_rng() x = np.arange(10) set_x = set(x) missing = [] for i in range(10000): # By default, all `p` are equal, so which item is # missing should be uniformly distributed y = rng.choice(x, 9, replace=False, shuffle=False) set_y = set(y) missing.append(set_x.difference(set_y).pop()) plt.hist(missing) But you'll see that items that appeared earlier in x tend to appear earlier in the output and vice-versa. That is, the input and output orders are correlated. from scipy import stats x = np.arange(10) correlations = [] for i in range(10000): y = rng.choice(x, 9, replace=False, shuffle=False) correlations.append(stats.spearmanr(np.arange(9), y).statistic) plt.hist(correlations) If that is ok for your application, feel free to set shuffle=False for a speedup. %timeit rng.choice(10000, 5000, replace=False, shuffle=True) # 187 µs ± 26.9 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) %timeit rng.choice(10000, 5000, replace=False, shuffle=False) # 146 µs ± 18.4 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) The more items that are to be chosen, the more pronounced the speedup. %timeit rng.choice(10000, 1, replace=False, shuffle=True) # 17.6 µs ± 3.64 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) %timeit rng.choice(10000, 1, replace=False, shuffle=False) # 16.5 µs ± 2.47 µs per loop (mean ± std. dev. of 7 runs, 100000 loops each) vs %timeit rng.choice(10000, 9999, replace=False, shuffle=True) # 214 µs ± 32.7 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) %timeit rng.choice(10000, 9999, replace=False, shuffle=False) # 124 µs ± 27.5 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
3
2
78,426,546
2024-5-3
https://stackoverflow.com/questions/78426546/where-did-sys-modules-go
>>> import sys >>> del sys.modules['sys'] >>> import sys >>> sys.modules Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: module 'sys' has no attribute 'modules' Why does re-imported sys module not have some attributes anymore? I am using Python 3.12.3 and it happens in macOS, Linux, and Windows. It happens in both the REPL and in a .py script. It does not happen in Python 3.11.
This is pretty obviously something you shouldn't do, naturally liable to break things. It happens to break things this particular way on the Python implementation you tried, but Python doesn't promise what will happen. Most of what I am about to say is implementation details. The sys module cannot be initialized like normal built-in modules, as it's responsible for so much core functionality. Instead, on interpreter startup, Python creates the sys module with the special function _PySys_Create. This function is responsible for (part of the job of) correctly setting up the sys module, including the sys.modules attribute: if (PyDict_SetItemString(sysdict, "modules", modules) < 0) { goto error; } When you do del sys.modules['sys'], the import system loses track of the sys module. When you try to import it again, Python tries to create an entirely new sys module, and it does so as if sys were an ordinary built-in module. It goes through the procedure for initializing ordinary built-in modules. This procedure leaves the new sys module in an inconsistent, improperly initialized state, as sys was never designed to be initialized this way. There is support for reloading sys, although I believe the dev team is thinking of taking this support out - the use cases are very obscure, and the only one I can think of off the top of my head is obsolete. Part of the reinitialization ends up hitting a code path intended for reloading sys, which updates its __dict__ from a copy created early in the original initialization of sys, right before sys.modules is set: interp->sysdict_copy = PyDict_Copy(sysdict); if (interp->sysdict_copy == NULL) { goto error; } if (PyDict_SetItemString(sysdict, "modules", modules) < 0) { goto error; } This copy is handled differently on earlier Python versions, hence the version-related behavior differences.
8
9
78,434,935
2024-5-6
https://stackoverflow.com/questions/78434935/in-python-how-can-should-decorators-be-used-to-implement-function-polymorphism
Supposing we have a class as follows: class PersonalChef(): def cook(): print("cooking something...") And we want what it cooks to be a function of the time of day, we could do something like this: class PersonalChef(): def cook(time_of_day): ## a few ways to do this, but this is quite concise: meal = {'morning':'breakfast', 'midday':'lunch', 'evening':'dinner'}[time_of_day] print("Cooking", meal) PersonalChef().cook('morning') >>> Cooking breakfast A potentially nice syntax form for this would be using decorators. With some under-the-hood machinery buried inside at_time, it ought to be possible to get it to work like this: class PersonalChef(): @at_time('morning') def cook(): print("Cooking breakfast") @at_time('midday') def cook(): print("Cooking lunch") @at_time('evening') def cook(): print("Cooking dinner") PersonalChef().cook('morning') >>> Cooking breakfast The reason this could be a nice syntax form is shown by how it then shows up in subclasses: class PersonalChefUK(PersonalChef): @at_time('evening') def cook(): print("Cooking supper") The code written at the sub-class level is extremely minimal and doesn't require any awareness of the base-class implementation/data structures and doesn't require any calls to super() to pass-through the other scenarios. So it could be nice in a situation where there are a large number of people writing derived-classes for whom we want to pack and hide complexity away in the base class and make it hard for them to break the functionality. However, I've tried a few different ways of implementing this and gotten stuck. I'm quite new to decorators, though, so probably missing something important. Any suggestions/comments?
A simpler approach would be to create a decorator factory that stores the decorated function in a dict that maps the function name and time of day to the function object, and returns a wrapper function that calls the stored function in the first class in the MRO that has it defined for the given time of day: def at_time(time_of_day, _actions={}): def decorator(func): def wrapper(self, time_of_day): for cls in type(self).__mro__: if func := _actions.get((f'{cls.__qualname__}.{name}', time_of_day)): return func(self) raise ValueError(f'No {name} found in the {time_of_day} time') name = func.__name__ _actions[func.__qualname__, time_of_day] = func return wrapper return decorator so that: class PersonalChef(): @at_time('morning') def cook(self): print("Cooking breakfast") @at_time('evening') def cook(self): print("Cooking dinner") class PersonalChefUK(PersonalChef): @at_time('evening') def cook(self): print("Cooking supper") PersonalChef().cook('morning') PersonalChef().cook('evening') PersonalChefUK().cook('morning') PersonalChefUK().cook('evening') outputs: Cooking breakfast Cooking dinner Cooking breakfast Cooking supper Demo: https://ideone.com/9T94sq
3
1
78,439,363
2024-5-6
https://stackoverflow.com/questions/78439363/how-does-the-javascript-pendant-of-a-python-class-implementation-look-like
Am studying the difference between those two languages and i was wondering why i can't access the variables in javascript classes without initiating an instance but i can do that in python here is an example of what am talking about: PYTHON CLASS class Car: all =[] def __init__(self, name): self.name = name Car.all.append(self) def get_car_name(self): return self.name bmw = Car("BMW") mercedez = Car("MERCEDEZ") print(Car.all) Running this code returns a list of all cars (which are the instance that i have created) [<main.Car object at 0x0000022D667664E0>, <main.Car object at 0x0000022D66766510>] JAVASCRIPT Class class Car { all = []; constructor(name, miles) { this.name = name; this.miles = miles; this.all.push(this); } } let ford = new Car("ford", 324); let tesla = new Car("tesla", 3433); console.log(Car.all); if i used this code the console.log will return undefined in javascript if i want to get the value of all i have to use an instance like this console.log(ford.all); this will return only the instance of ford that was created [ Car { all: [Circular *1], name: 'ford', miles: 324 } ] but otherwise in python this if i printed out an instance all it will return this print(bmw.all) [<__main__.Car object at 0x00000199CAF764E0>, <__main__.Car object at 0x00000199CAF76510>] it returns the two instance that is created even if i called the all of one instance
You need to declare it static, otherwise it's an instance property: class Car { static all = []; constructor(name, miles) { this.name = name; this.miles = miles; Car.all.push(this); } } let ford = new Car("ford", 324); let tesla = new Car("tesla", 3433); console.log(Car.all); In Python, variables declared in the class body (outside methods) are class attributes shared by all instances, so no static keyword is needed.
2
3
78,428,358
2024-5-4
https://stackoverflow.com/questions/78428358/create-multipolygon-objects-from-struct-type-and-listlistlistf64-columns-u
I have downloaded the NYC Taxi Zones dataset (downloaded from SODA Api and saved as json file - Not GeoJson or Shapefile). The dataset is rather small, thus I am using the whole information included. For the convenience of the post I am presenting the first 2 rows of the dataset: original (with struct type value of the_geom). dataset after unpacking the struct type with unnest() command in polars. --updated with write_ndjson() command The original dataset: Below the dataset after applying unnest() and selecting some columns and the first 2 rows. You may import the data using polars with the following command import polars as pl poc = pl.read_json("./data.json")) I am interested in the multipolygons. I am actually trying to re-calculate the shape_area by using the multipolygon and wkt (Well-Known-Text representation) - method used by shapely module. What I have done so far is to use the column coordinates and transform it to a MultiPolygon() object - readable by the Shapely module. def flatten_list_of_lists(data): return [subitem3 for subitem1 in data for subitem2 in subitem1 for subitem3 in subitem2] The function takes as input a list[list[list[list[f64]]]] object and transforms to a list[list[f64]] object. flattened_lists = [flatten_list_of_lists(row) for row in poc["coordinates"].to_list()] print(flattened_lists) [[[-74.18445299999996, 40.694995999999904], [-74.18448899999999, 40.69509499999987], [-74.18449799999996, 40.69518499999987], [-74.18438099999997, 40.69587799999989], [-74.18428199999994, 40.6962109999999], [-74.18402099999997, 40.69707499999989]... Then I use the function below that applies string concatenation and basically: Transforms the list[list[f64]] object to String. Adds the keyword MultiPolygon in front of the string. Replaces '[' and ']' with '('., ')' respectively. def polygon_to_wkt(polygon): # Convert each coordinate pair to a string formatted as "lon lat" coordinates_str = ", ".join(f"{lon} {lat}" for lon, lat in polygon) # Format into a WKT Multipolygon string (each polygon as a single polygon multipolygon) return f"""MULTIPOLYGON ((({coordinates_str})))""" formatted_wkt = [polygon_to_wkt(polygon) for polygon in flattened_lists] poc = poc.with_columns(pl.Series("WKT", formatted_wkt)) Finally, I use the method wkt.loads("MultiPolygon ((()))").area to compute the shape area of the Multipolygon object def convert_to_shapely_area(wkt_str): try: return wkt.loads(wkt_str).area except Exception as e: print("Error converting to WKT:", e) return None poc = poc.with_columns( pl.col("WKT").map_elements(convert_to_shapely_area).alias("shapely_geometry") ) print(poc.head()) Even though for the first shape the WKT correctly returns the area of the object, while for the second MultiPolygon the methods returns the following error: IllegalArgumentException: Points of LinearRing do not form a closed linestring What I have noticed between the two rows, is that the multipolygon of Newark Airport is a continues object of list[list[f64]]] coordinates. Whereas, the Jamaika Bay has multiple sublists [list[list[f64]]] elements (check column coordinates to verify this). Also the screenshot below verify this statement. Thus, is there any way to unify the multipolygons of Jamaica Bay into one single GEOmetric object before applying WKT? P.S: Many solutions on GitHub use the shape file, but I would like to regularly re-download the NYC zones automatically with the SODA API. 
To download the raw .json file from the SODA API (ignore logger_object; let's pretend it's print()): import requests params = { "$limit": geospatial_batch_size, #i.e. 50_000 "$$app_token": config.get("api-settings", "app_token") } response = requests.get(api_url, params=params) if response.status_code == 200: data = response.json() if not data: logger_object.info("No data found, please check the API connection.") sys.exit() with open("{0}/nyc_zone_districts_data.json".format(geospatial_data_storage), "w") as f: json.dump(data, f, indent=4) else: logger_object.error("API request failed.") logger_object.error("Error: {0}".format(response.status_code)) logger_object.error(response.text)
I have eventually found a solution for this problem. As already described I am flattening the list of coordinates (a.k.a Points). A point in geospatial data is a (x,y) coordinate. Thus, a MultiPolygon is a combination of multiple Points. def flatten_list_of_lists(data) -> list: return [subitem2 for subitem1 in data for subitem2 in subitem1] , this abovementioned function is important because the object data used as input argument is of type [list[list[list[f64]]]] and a MultiPolygon has a specific cardinality level per point. Then I transform its flattened list to a MultiPolygon object using shapely from shapely.geometry import Polygon, MultiPolygon, Point def transform_polygons_to_multipolygons(flat_list:list) -> list: return [ MultiPolygon( [Polygon(coord) for coord in polygon]).wkt for polygon in flat_list] You will notice that after creating each MultiPolygon object I save it to WKT format (thus as a string). Finally, I compute the area of the multipolygon from shapely import wkt def compute_geo_areas(multipolygons:MultiPolygon) -> float: return wkt.loads(multipolygons).area The final code flattened_lists:list = [flatten_list_of_lists(row) for row in df_geo["coordinates"].to_list()] multipolygons:list = transform_polygons_to_multipolygons(flattened_lists) df_geo = df_geo.with_columns(pl.Series("multipolygons", multipolygons)) \ .with_columns(polygon_area=pl.col("multipolygons").map_elements(compute_geo_areas)) \ .drop("coordinates") , where df_geo is the Polars DataFrame loaded from the JSON file attached on the SO question.
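As a possible simplification of the same idea, the WKT round-trip can be skipped and the area read straight from the shapely objects (a sketch reusing flattened_lists from above; it assumes the ring coordinates form valid polygons):

multipolys = [MultiPolygon([Polygon(coords) for coords in polys]) for polys in flattened_lists]
areas = [mp.area for mp in multipolys]  # area of each row's multipolygon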
3
0
78,438,578
2024-5-6
https://stackoverflow.com/questions/78438578/replace-dots-commas-in-string-so-that-it-can-be-cast-to-float
I'm trying to convert a str column from str to float (it's intended to be a float). However, the string have commas and dots and I'm not being able to correctly replace the values: import polars as pl df = pl.DataFrame({"numbers": ["1.004,00", "2.005,00", "3.006,00"]}) df = df.with_column( df["numbers"].str.replace(".", "").str.replace(",", ".").cast(pl.Float64) ) print(df) I'm getting: ComputeError: conversion from str to f64 failed in column 'numbers' for 3 out of 3 values: [".004.00", ".005.00", ".006.00"] I Also tried just removing the "." with nothing using: df = df.with_columns(df["numbers"].str.replace(".", "")) print(df) But I'm getting the values without the first number.
By default, str.replace treats the pattern as a regular expression, so the dot matches any character (here it matches the first character of each string). Just escape it: import polars as pl df = pl.DataFrame({ "numbers": ["1.004,00", "2.005,00", "3.006,00"] }) df = ( df .with_columns( pl.col('numbers').str.replace(r"\.", "").str.replace(",", ".").cast(pl.Float64) ) ) print(df) Output: shape: (3, 1) ┌─────────┐ │ numbers │ │ --- │ │ f64 │ ╞═════════╡ │ 1004.0 │ │ 2005.0 │ │ 3006.0 │ └─────────┘
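An alternative sketch is to tell Polars the pattern is a plain string rather than a regex, and to use replace_all in case a value ever contains more than one thousands separator (this assumes a Polars version where the literal parameter is available):

df = df.with_columns(
    pl.col('numbers')
    .str.replace_all('.', '', literal=True)  # drop every literal dot
    .str.replace(',', '.')                   # decimal comma -> decimal point
    .cast(pl.Float64)
)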
3
3
78,434,965
2024-5-6
https://stackoverflow.com/questions/78434965/numpy-index-argsorted-with-integer-array-while-retaining-sorting-order
I have an array x and the result of argsorting it i. I need to sort x (and y, irrelevant here) according to i hundreds of times. It is therefore not doable to sort the data twice, everything needs to be achieved via the initial sorting i. If I take x[i] it returns the sorted x as expected. However, I now only want to use certain rows of x via n. So x[n] returns the values of x as expected. My problem is that I need to sort these x[n] via i (and will have to do the same for y[n]. # Example data x = np.array([14, 15, 9, 6, 19, 18, 4, 11, 10, 0]) i = np.argsort(x) n = np.array([2, 5, 7, 8]) #x[n] -> array([ 9, 18, 11, 10]) Desired output: index_sort(x, n, i) = array([ 9, 10, 11, 18]) Some simple (failed) attempts: x[n][i] -> Indexing error, as x is now too small. x[i[n]] -> array([ 6, 11, 15, 18]), Is sorted, but contains the wrong data x[i][n] -> Same For more context: I'm creating a specific type of decision tree model. For each layer of the tree I need the above operation a different n. Sorting becomes prohibitively expensive and even checking for set membership via np.isin might be too slow as well already. My intuition (though perhaps wrong) says it should be possible to achieve this via indexing only, without ever having to sort or check for set membership. For all these layers x and i remain the same, but a different n is used each time.
In [263]: x = np.array([14, 15, 9, 6, 19, 18, 4, 11, 10, 0]) ...: i = np.argsort(x) ...: n = np.array([2, 5, 7, 8]) i and n do different, and unrelated indexing operations. Both make copies (not views), which don't retain any information on the original x: In [264]: x[i] Out[264]: array([ 0, 4, 6, 9, 10, 11, 14, 15, 18, 19]) In [265]: x[n] Out[265]: array([ 9, 18, 11, 10]) Let's try working with a boolean mask: In [266]: m = np.zeros_like(x, dtype=bool) In [267]: m[n] = True; m Out[267]: array([False, False, True, False, False, True, False, True, True, False]) It selects elements from x same as n (though it won't handle duplicates the same): In [268]: x[m] Out[268]: array([ 9, 18, 11, 10]) Now try applying the sort to m: In [269]: mi = m[i]; m Out[269]: array([False, False, True, False, False, True, False, True, True, False]) It does select the desired elements from the sorted x[i]: In [270]: x[i][mi] Out[270]: array([ 9, 10, 11, 18]) We could also convert that boolean mask back to indices: In [272]: ni = np.nonzero(mi)[0]; ni Out[272]: array([3, 4, 5, 8], dtype=int64) In [273]: x[i][ni] Out[273]: array([ 9, 10, 11, 18])
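The whole recipe fits in a small helper that can be called with a different n each time while x and i stay fixed (a sketch; the name mirrors the index_sort placeholder in the question):

def index_sort(x, n, i):
    m = np.zeros(len(x), dtype=bool)  # boolean mask marking the wanted rows
    m[n] = True
    return x[i][m[i]]                 # reorder the mask by i, then select from the sorted data

index_sort(x, n, i)  # array([ 9, 10, 11, 18])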
2
1
78,437,755
2024-5-6
https://stackoverflow.com/questions/78437755/plotly-how-to-change-the-tick-text-of-the-colorbar-in-a-heatmap
I am using Plotly package for my visualizations. This MWE creates a 10x10 matrix and plots it in a Heatmap. import numpy as np import pandas as pd import plotly.graph_objects as go vals = np.random.rand(10,10)*5 vals = np.around(vals) tick_text = ['A','B','C','D','E'] df = pd.DataFrame(vals) fig = go.Figure(go.Heatmap( z=df.values, x = df.index, y = df.columns )) fig.update_layout( height=500, width=500 ) fig.update_layout( coloraxis_colorbar=dict( tickmode = 'array', tickvals = [i for i in range(5)], ticktext = tick_text, ) ) fig.show() This is the output I am trying to change the text displayed next to the colorbar. In this case I want to see A,B,C,D,E instead of 1,2,3,4,5. I followed the documentation and as far as i understood i have to change coloraxis_colorbar but this doesn't seem to work.
You also need to specify the coloraxis the trace should refer to when building the heatmap. coloraxis - Sets a reference to a shared color axis. References to these shared color axes are "coloraxis", "coloraxis2", "coloraxis3", etc. Settings for these shared color axes are set in the layout, under layout.coloraxis, layout.coloraxis2, etc. Just add this line, and your code should work as expected : fig = go.Figure(go.Heatmap( z=df.values, x = df.index, y = df.columns, coloraxis='coloraxis' # <- here ))
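Putting it together with the layout settings from the question (only the trace changes; the coloraxis_colorbar block stays as it was):

fig = go.Figure(go.Heatmap(
    z=df.values,
    x=df.index,
    y=df.columns,
    coloraxis='coloraxis'  # refer to the shared color axis configured in the layout
))
fig.update_layout(
    coloraxis_colorbar=dict(
        tickmode='array',
        tickvals=[i for i in range(5)],
        ticktext=tick_text,
    )
)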
2
1
78,420,645
2024-5-2
https://stackoverflow.com/questions/78420645/worker-failed-to-index-functions-with-azure-functions-fastapi-local-using-asgi
Running into an error that has started after a reboot. I am testing using Azure Functions in conjunction with FastAPI based on: https://dev.to/manukanne/azure-functions-and-fastapi-14b6 Code was operating and test API call worked as expected. After a reboot of the machine and restarting VSCode I am now running into an issue when attempting to run locally. I worked through SO: 76842742 which is slightly different setup but similar type of error being seen. Virtual environment is running as expected and requirements.txt showing installed. Have run with verbose flag as well but no additional error messages were provided. Current error: Worker failed to index functions Result: Failure Exception: ValueError: Could not find top level function app instances in function_app.py. My function.json has the scriptFile: init.py which I thought would need to be function_app.py but the error indicates it is looking for the function_app.py regardless. I created a copy of my function_app.py and named it "__init __.py" as a potential fix but that produced the same error code. At this point I am at a loss as to why the index function is not being found. Any thoughts around a potential solution? Found Python version 3.11.0 (py) Core Tools Version: 4.0.5611 VS Code: 1.88.1 function_app.py import azure.functions as func from fastapi import FastAPI, Request from fastapi.responses import JSONResponse import logging app = FastAPI() @app.exception_handler(Exception) async def handle_exception(request: Request, exc: Exception): return JSONResponse( status_code=400, content={"message": str(exc)}, ) @app.get("/") async def home(): return { "info": "Try the API path for success" } @app.get("/v1/test/{test}") async def get_test( test: str,): return { "test": test, } def main(req: func.HttpRequest, context: func.Context) -> func.HttpResponse: return func.AsgiMiddleware(app).handle_async(req, context) function.json { "scriptFile": "__init__.py", "bindings": [ { "authLevel": "function", "type": "httpTrigger", "direction": "in", "name": "req", "methods": [ "get", "post" ], "route": "/{*route}" }, { "type": "http", "direction": "out", "name": "$return" } ] } ADDITIONAL ADD host.json { "$schema": "https://raw.githubusercontent.com/Azure/azure-functions-host/dev/schemas/json/host.json", "version": "2.0", "logging": { "applicationInsights": { "samplingSettings": { "isEnabled": true, "excludedTypes": "Request" } } }, "extensionBundle": { "id": "Microsoft.Azure.Functions.ExtensionBundle", "version": "[4.*, 5.0.0)" }, "extensions": { "http": { "routePrefix": "" } } }
Based on comments by JarroVGIT, I refactored the function_app.py code to: import azure.functions as func from fastapi import FastAPI, Request, Response import logging fast_app = FastAPI() @fast_app.exception_handler(Exception) async def handle_exception(request: Request, exc: Exception): return Response( status_code=400, content={"message": str(exc)}, ) @fast_app.get("/") async def home(): return { "info": "Try the API path for success" } @fast_app.get("/v1/test/{test}") async def get_test( test: str,): return { "test": test, } app = func.AsgiFunctionApp(app=fast_app, http_auth_level=func.AuthLevel.ANONYMOUS,) Removing the def main and adding the app call. This has resolved the issue and allows the azure function to start normally.
2
2
78,437,508
2024-5-6
https://stackoverflow.com/questions/78437508/verifying-constructor-calling-another-constructor
I want to verify Foo() calls Bar() without actually calling Bar(). And then I want to verify that obj is assigned with whatever Bar() returns. I tried the following: class Bar: def __init__(self, a): print(a) class Foo: def __init__(self): self.obj = Bar(1) ### import pytest from unittest.mock import Mock, patch from mod import Foo, Bar @pytest.fixture # With stdlib def mock_bar(): with patch('mod.Bar') as mock: yield mock def test_foo(mock_bar): result = Foo() mock_bar.assert_called_once_with(1) assert result.obj == mock_bar But it would fail and say that: E AssertionError: assert <MagicMock na...='5265642864'> == <MagicMock na...='5265421696'> E Full diff: E - <MagicMock name='Bar' id='5265421696'> E ? ^ ^^ E + <MagicMock name='Bar()' id='5265642864'> E ? ++ + ^ ^
This line: assert result.obj == mock_bar Should be: assert result.obj == mock_bar.return_value It is the result of calling Bar which gets assigned in self.obj = Bar(1), not Bar itself.
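So the test body would read (same fixture and imports as in the question):

def test_foo(mock_bar):
    result = Foo()
    mock_bar.assert_called_once_with(1)
    # Bar(1) inside Foo.__init__ returns mock_bar.return_value
    assert result.obj == mock_bar.return_value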
2
2
78,434,960
2024-5-6
https://stackoverflow.com/questions/78434960/how-to-mock-property-with-side-effects-based-on-self-using-pytest
I've tried the code bellow, using new_callable=PropertyMock to mock a property call, and autospec=True to be able to access self in the side effect function: from unittest.mock import PropertyMock def some_property_mock(self): if self.__some_member == "some_value" return "some_different_value" else: return "some_other_value" mocker.patch.object( SomeClass, "some_property", new_callable=PropertyMock, autospec=True, side_effect=some_property_mock) It throws the following exception: ValueError: Cannot use 'autospec' and 'new_callable' together Is there any alternative to achieve the expected behavior? Edit: I have tried the solution provided in this post https://stackoverflow.com/a/77940234/7217960 but it doesn't seem to work with PropertyMock. Printing result gives MyMock name='my_property()' id='136687332325264' instead of 2 as expected. from unittest import mock class MyClass(object): def __int__(self): self.my_attribute = 10 @property def my_property(self): return self.my_attribute + 1 def unit_under_test(): inst = MyClass() return inst.my_property class MyMock(mock.PropertyMock): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) print("MyMock __init__ called.") with mock.patch.object(mock, 'MagicMock', MyMock): with mock.patch.object(MyClass, 'my_property', autospec=True, side_effect=lambda self: 2) as spy: result = unit_under_test() assert result == 2 assert spy.call_count == 1
Since the property some_property is patched with MagicMock, which is then patched with MyMock, and you want to set the return value of the property callable when it's called, you should do so on the mock object returned from patching MagicMock with MyMock instead: with mock.patch.object(mock, 'MagicMock', MyMock) as property_mock: with mock.patch.object(MyClass, 'my_property', autospec=True) as spy: property_mock.side_effect = lambda self: 2 # or property_mock.return_value = 2 result = unit_under_test() assert result == 2 assert spy.call_count == 1 Demo: https://ideone.com/rKo8mO
4
2
78,436,247
2024-5-6
https://stackoverflow.com/questions/78436247/tkinter-control-the-location-of-colorchooser
I have a program where I want to use the colorchooser dialog from tkinter. My problem is that the color chooser dialog is always opening on the top left of the root window. For example with the following code I get it as shown in the picture. import tkinter as tk from tkinter import ttk from tkinter.colorchooser import askcolor class App(): def __init__(self, master): self.master = master self.master.geometry('400x200') self.button = ttk.Button(self.master, text='Select a Color', command=self.change_color) self.button.pack(expand=True) def change_color(self): colors = askcolor(title="Tkinter Color Chooser") root.configure(bg=colors[1]) root = tk.Tk() app = App(root) app.master.mainloop() Is there a possibility to adjust the initial location of the dialog? For example that it is always orientated relatively to the button which opens the dialog?
It seems it must be at this position relative to parent. So a working trick with a fake hidden parent : import tkinter as tk from tkinter import ttk from tkinter.colorchooser import askcolor class App: def __init__(self, master): self.master = master self.master.geometry('400x200') self.button = ttk.Button(self.master, text='Select a Color', command=self.change_color) self.button.pack(expand=True) self.toplevel = tk.Toplevel(self.master) self.toplevel.withdraw() def change_color(self): x, y = self.button.winfo_rootx(), self.button.winfo_rooty() self.toplevel.geometry(f'+{x}+{y}') colors = askcolor(title="Tkinter Color Chooser", parent=self.toplevel) root.configure(bg=colors[1]) root = tk.Tk() app = App(root) app.master.mainloop()
5
6
78,436,275
2024-5-6
https://stackoverflow.com/questions/78436275/convert-raw-bytes-image-data-received-from-python-into-c-qt-qicon-to-be-displa
I am creating small GUI system and I would like to get an image in the form of raw bytes from Python code and then create QImage/QIcon using those raw bytes. For C++/Python interaction I am using Boost Python. On Python code side I printed the raw bytes: b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00@\x00\x00\x00@\x08\x02\x00\x00\x00%\x0b\xe6\x89\x00\x00\x00\x03sBIT\x08\x08\x08\xdb\xe1O\xe0\x00\x00\x00\tpHYs\x00\x00\x0e\xc4\x00\x00\x0e\xc4\x01\x95+\x0e\x1b\x00\x00\x06\xabIDATh\x81\xed\x9aOh\x13O\x14\xc7wv\x93\xdd\xfcQ1\xa9\x8d.... I send them as string from python to the C++ code like: data.rawBytes = str(thumbnail._raw_bytes) On C++ side, I extract these bytes in the form of string as: std::string rawBytes = boost::python::extract<std::string>(obj.attr("rawBytes")); The above rawBytes received on C++ side is same as the python print above. Now in the UI code, I try to use these raw bytes to create QIcon like: std::string rawbytes = data.rawBytes; QByteArray arr(); arr.append(rawbytes.c_str(), rawbytes.length()); bool flag = pixmap.loadFromData(arr, "PNG"); QStandardItem* item = new QStandardItem(name); item->setIcon(QIcon(pixmap)); The icon doesn't get displayed in the UI and moreover, the 'flag' returned from pixmap.loadFromData is false which means there is some problem in conversion of the raw bytes. Can someone point out if there needs to be some kind of conversion from python to c++ code to properly render this image on the UI?
Assuming on the Python side you have data.rawBytes = thumbnail._raw_bytes without the str wrapper, and a C++ variable obj referring to data, you can use the Python buffer protocol like so: Py_buffer view = {0}; int ret = PyObject_GetBuffer(obj.attr("rawBytes").ptr(), &view, PyBUF_SIMPLE); if (ret == -1) { // handle error } QByteArray arr = QByteArray::fromRawData(reinterpret_cast<const char *>(view.buf), view.len); bool flag = pixmap.loadFromData(arr, "PNG"); PyBuffer_Release(&view); You can of course skip the copy into a QByteArray and read directly from the Python buffer: pixmap.loadFromData(reinterpret_cast<uchar *>(view.buf), view.len, "PNG");
2
2
78,436,093
2024-5-6
https://stackoverflow.com/questions/78436093/inspect-asm-gives-no-output
I have this simple MWE: from numba import njit @njit def add(a, b): return a + b # Now let's inspect the assembly code for the 'add()' function. for k, v in add.inspect_asm().items(): print(k) when I run it I get no output. What is the right way to inspect the assembly?
You need to first compile the function to populate .inspect_asm(), either by calling it or by specifying the signature. E.g.: from numba import njit @njit def add(a, b): return a + b # first call add() to compile it add(1, 2) print(add.inspect_asm()) Prints: {(int64, int64): '\t.text\n\t.file\t"<string>"\n\t.globl\t_ZN8__main__3addB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dExx\n\t.p2align\t4, ... OR: from numba import njit # specify the signature first: @njit("int64(int64, int64)") def add(a, b): return a + b print(add.inspect_asm())
2
2
78,435,402
2024-5-6
https://stackoverflow.com/questions/78435402/compute-number-of-hours-the-user-spent-per-day
I have Clocking table in database. I wanted to count the users' time spent per day. For example 2024-03-21, user 1 spend 6.8 hours, the next day he spends n number of hours and so on (['6.8', 'n', ... 'n']) user date timein timeout 1 2024-03-21 10:42 AM 12:00 PM 1 2024-03-21 01:10 PM 06:00 PM 1 2024-03-22 01:00 PM 05:47 PM ... ... ... ... This is my models.py class Clocking(models.Model): user = models.ForeignKey('User', models.DO_NOTHING) timein = models.CharField(max_length=15, blank=True, null=True) timeout = models.CharField(max_length=15, blank=True, null=True) date = models.DateField(null=False) I wanted to get the list of hours spent per day, which would be really useful for me in plotting charts.
Please don't use a CharField to keep track. Imagine that a person enters now, then how will that work? It means you make the number of possible values a lot larger, and also increases disk space usage, altough that last one is probably not that much of an issue. There are essentially two options: working with TimeFields: from django.conf import settings class Clocking(models.Model): user = models.ForeignKey(settings.AUTH_USER_MODEL, models.DO_NOTHING) date = models.DateField() timein = models.TimeField() timeout = models.TimeField(null=True) or work with two DateTimeFields: from django.conf import settings class Clocking(models.Model): user = models.ForeignKey(settings.AUTH_USER_MODEL, models.DO_NOTHING) timein = models.DateTimeField() timeout = models.DateTimeField(null=True) The second has three advantages: (1) it will also easily register time correctly on dates where the daylight saving time changes. Although these typically change on a sunday around 2:00 AM so to minimize the amount of problematic bookkeeping, it is still possible that this might happen, for example if the person is working in a different timezone at that time; (2) it also allows to have multiple Clockings on the same date easily, without extra bookkeeping; and (3) finally it also means we can have a Clocking that runs over midnight. Indeed, imagine a person clocking in at 9:00 PM and clocking out at 01:00 AM the next day, then with the old model this will be problematic, it would require first checking if the end time is earlier than the begin time. With this we can generate statistics for the date when the item started with: # second model from django.db.models import F, Sum from django.db.models.functions import TruncDate Clocking.objects.filter(timeout__isnull=False).values( date=TruncDate('timein') ).annotate(total=Sum(F('timeout') - F('timein'))).order_by('date') Note: Specifying null=False [Django-doc] is not necessary: fields are by default not NULLable. Note: It is normally better to make use of the settings.AUTH_USER_MODEL [Django-doc] to refer to the user model, than to use the User model [Django-doc] directly. For more information you can see the referencing the User model section of the documentation.
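To get from those per-day totals to a plain list of hours for charting, the queryset can be iterated and each duration converted (a sketch; it assumes the second model, that the total annotation above comes back as a timedelta, and the stats name is my own):

stats = (
    Clocking.objects.filter(timeout__isnull=False)
    .values(date=TruncDate('timein'))
    .annotate(total=Sum(F('timeout') - F('timein')))
    .order_by('date')
)
hours_per_day = [row['total'].total_seconds() / 3600 for row in stats]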
2
3
78,434,296
2024-5-6
https://stackoverflow.com/questions/78434296/polars-dataframe-how-do-i-drop-alternate-rows-by-group
I have a sorted dataframe with a column that represents a group. How do I filter it to remove all the alternate rows by group? The dataframe length is guaranteed to be an even number, if it matters.

Sample Input:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ group_col ┆ value_col β”‚
β”‚ ---       ┆ ---       β”‚
β”‚ i64       ┆ i64       β”‚
β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════║
β”‚ 1         ┆ 10        β”‚
β”‚ 1         ┆ 20        β”‚
β”‚ 1         ┆ 30        β”‚
β”‚ 1         ┆ 40        β”‚
β”‚ 2         ┆ 50        β”‚
β”‚ 2         ┆ 60        β”‚
β”‚ 3         ┆ 70        β”‚
β”‚ 3         ┆ 80        β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Output:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ group_col ┆ value_col β”‚
β”‚ ---       ┆ ---       β”‚
β”‚ i64       ┆ i64       β”‚
β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════║
β”‚ 1         ┆ 10        β”‚
β”‚ 1         ┆ 30        β”‚
β”‚ 2         ┆ 50        β”‚
β”‚ 3         ┆ 70        β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

df = pl.DataFrame({
    'group_col': [1, 1, 1, 1, 2, 2, 3, 3],
    'value_col': [10, 20, 30, 40, 50, 60, 70, 80]
})

So I would like to retain the odd-numbered rows for every group_col. Polars version is 0.19.
.gather_every() exists. In order to use it with .over() - you can change the default mapping_strategy= to explode:

df.select(
    pl.all().gather_every(2).over("group_col", mapping_strategy="explode")
)

shape: (4, 2)
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ group_col ┆ value_col β”‚
β”‚ ---       ┆ ---       β”‚
β”‚ i64       ┆ i64       β”‚
β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════║
β”‚ 1         ┆ 10        β”‚
β”‚ 1         ┆ 30        β”‚
β”‚ 2         ┆ 50        β”‚
β”‚ 3         ┆ 70        β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

It's essentially the same as:

(df.group_by("group_col", maintain_order=True)
   .agg(pl.all().gather_every(2))
   .explode(pl.exclude("group_col"))
)

For the simplified case, i.e. a guaranteed even number of group rows - you could use the frame-level method.

df.gather_every(2)

shape: (4, 2)
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ group_col ┆ value_col β”‚
β”‚ ---       ┆ ---       β”‚
β”‚ i64       ┆ i64       β”‚
β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════║
β”‚ 1         ┆ 10        β”‚
β”‚ 1         ┆ 30        β”‚
β”‚ 2         ┆ 50        β”‚
β”‚ 3         ┆ 70        β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
4
2
78,434,979
2024-5-6
https://stackoverflow.com/questions/78434979/how-to-fit-axis-and-radius-of-3d-cylinder
Once I get some 3-D point coordinates, what algorithm do I use to fit an optimal cylinder and get the direction vector and radius of the central axis? My previous idea was to divide the cylinder into layers: as the number of layers increases, the figure formed by the points gets closer to the cylinder, but in that case I couldn't get an exact radius of the cylinder. (The central axis is obtained by fitting the center of a circle through each layer.)
Here is an MCVE to regress the axis defined by p0 and p1 (vectors) and the radius R (scalar). First we create a dummy dataset:

import numpy as np
import matplotlib.pyplot as plt
from scipy import optimize
from scipy.spatial.transform import Rotation

def cylinder(n=60, m=20, r=2, h=5):
    t = np.linspace(0, m * 2 * np.pi, m * n)
    z = np.linspace(0, h, m * n)
    x = r * np.cos(t)
    y = r * np.sin(t)
    return np.stack([x, y, z]).T

X = cylinder()
rot = Rotation.from_rotvec(np.array([-1, 2, 0.5]))
x0 = np.array([1., 2., 0.])
X = rot.apply(X)
X = X + x0

This creates a generic use case including an origin shift. Now it is sufficient to write down the geometric equation (see equation 10) as residuals and minimize it by least squares:

def residuals(p, xyz):
    return np.power(np.linalg.norm(np.cross((xyz - p[0:3]), (xyz - p[3:6])), axis=1) / np.linalg.norm((p[3:6] - p[0:3])), 2) - p[6] ** 2

p, res = optimize.leastsq(residuals, x0=[0., 0., 0., 1., 1., 1., 0.], args=(X,), full_output=False)

Which in this case returns:

# array([ -1.8283916 ,  -1.65918186,   3.29901757,  # p0
#         20.31455462,  26.98786514, -22.52837088,  # p1
#          2.        ])                             # R

Graphically it leads to:
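If you then want the quantities the question asks for explicitly, the axis direction is the normalized difference of the two fitted points and the radius is the magnitude of the last parameter (it only enters the residual squared). A small sketch, reusing the fitted vector p from above:

import numpy as np

p0_fit, p1_fit = p[0:3], p[3:6]
axis_direction = (p1_fit - p0_fit) / np.linalg.norm(p1_fit - p0_fit)  # unit vector of the central axis
radius = abs(p[6])                                                    # ~2 for this dummy dataset
print(axis_direction, radius)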
3
2
78,435,356
2024-5-6
https://stackoverflow.com/questions/78435356/python-hexbin-plot-with-2d-function
I'm trying to display a two dimensional function on a hexagonal grid with pyplot.hexbin but it only produces a line and ignores the rest of the function. How do I solve this?

def func(x,y):
    return x*2+y*2

x = np.linspace((-0.5)/4*5, (0.5)/4*5, int(2e3))
y = np.linspace(-0.5, 0.5, int(2e3))

plt.hexbin(x, y, func(x,y), gridsize=(8,4), cmap='gnuplot')
plt.show()
The func(x,y) function returns a scalar value for each (x, y) pair, but hexbin expects x and y to be arrays of the same length representing coordinates. Pass a separate array Z of scalar values, which will be the colour of each hex. Try:

import numpy as np
import matplotlib.pyplot as plt

def func(x,y):
    return x*2+y*2

x = np.linspace((-0.5) / 4 * 5, (0.5) / 4 * 5, int(2e3))
y = np.linspace(-0.5, 0.5, int(2e3))

X, Y = np.meshgrid(x, y)
# Evaluate the function for each combination of x and y
Z = func(X, Y)

plt.hexbin(X.flatten(),    # Flatten X and Y to 1D arrays
           Y.flatten(),
           C=Z.flatten(),  # Flatten Z to a 1D array for color values
           gridsize=(8, 4),
           cmap='gnuplot')
plt.show()

You can also add a colorbar before plt.show():

plt.colorbar(label='Function Value')
2
1
78,435,157
2024-5-6
https://stackoverflow.com/questions/78435157/confusing-conversion-of-types-in-pandas-dataframe
Suppose I have a list of list of numbers that happen to be encoded as strings. import pandas as pd pylist = [['1', '43'], ['2', '42'], ['3', '41'], ['4', '40'], ['5', '39']] Now I want a dataframe where these numbers are integers. I can see from pandas documentation that I can force a data type via dtype, but when I run the following: pyframe_1 = pd.DataFrame(pylist,dtype=int) I get the following warning: FutureWarning: Could not cast to int32, falling back to object. This behavior is deprecated. In a future version, when a dtype is passed to 'DataFrame', either all columns will be cast to that dtype, or a TypeError will be raised. and by inspection via dtypes: pytypes_1 = pyframe_1.dtypes.to_list() # dtype[object_] of numpy module my columns are np.object types. But I can cast my columns to integer via two ways: First one is column by column: pyframe_2 = pd.DataFrame(pylist) pyframe_2[0] = pyframe_2[0].astype(int) pyframe_2[1] = pyframe_1[1].astype(int) Second one is on the entire dataframe in an one-liner: pyframe_3 = pd.DataFrame(pylist).astype(int) Both give me a dataframe of integer columns from a list of list of strings. My question is why does the first case, where I explicitly use dtype when creating a dataframe raise a warning (or error) with no conversion for the types? Why even have it as an option in the first place? EDIT: Pandas version I'm running is 1.4.1. EDIT: As per suggestions of @mozway one workaround is using pyframe_1 = pd.DataFrame(pylist,dtype='Int32') Which does convert to integer. I mean, to me it's kinda unnatural using a string (which Int32 is) to force a cast instead of using much more intuitive int. Inspecting dtypes from the method, I get different integer types. Casting with dtype='Int32' at instantiation level gets me Int32Dtype object of pandas.core.arrays.integer module. (Upon closer inspection it has an attribute of numpy_dtype which is dtype[int32] object of numpy module). Casting with .astype(int) gives me dtype[int32] object of numpy module. So there's not much difference, I guess? IDK.
I would consider it a bug. During instantiation, the input goes though several checks (_homogenize / sanitize_array / _try_cast). I believe an intermediate float dtype is created which triggers the error (on pandas 2.2): ValueError: Trying to coerce float values to integers A workaround would be to use: pd.DataFrame(pylist, dtype='Int32') 0 1 0 1 43 1 2 42 2 3 41 3 4 40 4 5 39
3
3
78,434,699
2024-5-6
https://stackoverflow.com/questions/78434699/why-does-filtering-based-on-a-condition-results-in-an-empty-dataframe-in-pandas
I'm working with a DataFrame in Python using pandas, and I'm trying to apply multiple conditions to filter rows based on temperature values from multiple columns. However, after applying my conditions and using dropna(), I end up with zero rows even though I expect some data to meet these conditions. The goal is compare with Ambient temp+40 C and if the value is more than this, replace it with NaN. Otherwise, keep the original value. Here's a sample of my DataFrame and the conditions I'm applying: data = { 'Datetime': ['2022-08-04 15:06:00', '2022-08-04 15:07:00', '2022-08-04 15:08:00', '2022-08-04 15:09:00', '2022-08-04 15:10:00'], 'Temp1': [53.4, 54.3, 53.7, 54.3, 55.4], 'Temp2': [57.8, 57.0, 87.0, 57.2, 57.5], 'Temp3': [59.0, 58.8, 58.7, 59.1, 59.7], 'Temp4': [46.7, 47.1, 80, 46.9, 47.3], 'Temp5': [52.8, 53.1, 53.0, 53.1, 53.4], 'Temp6': [50.1, 69, 50.3, 50.3, 50.6], 'AmbientTemp': [29.0, 28.8, 28.6, 28.7, 28.9] } df1 = pd.DataFrame(data) df1['Datetime'] = pd.to_datetime(df1['Datetime']) df1.set_index('Datetime', inplace=True) Code: temp_cols = ['Temp1', 'Temp2', 'Temp3', 'Temp4', 'Temp5', 'Temp6'] ambient_col = 'AmbientTemp' condition = (df1[temp_cols].lt(df1[ambient_col] + 40, axis=0)) filtered_df = df1[condition].dropna() print(filtered_df.shape) Response: (0, 99) Problem: Despite expecting valid data that meets the conditions, the resulting DataFrame is empty after applying the filter and dropping NaN values. What could be causing this issue, and how can I correct it?
Use DataFrame.where:

condition = (df1[temp_cols].lt(df1[ambient_col] + 40, axis=0))
df1[temp_cols] = df1[temp_cols].where(condition)

If you need a new DataFrame, add DataFrame.reindex:

temp_cols = ['Temp1', 'Temp2', 'Temp3', 'Temp4', 'Temp5', 'Temp6']
ambient_col = 'AmbientTemp'

condition = (df1[temp_cols].lt(df1[ambient_col] + 40, axis=0))
filtered_df = df1.where(condition.reindex(df1.columns, fill_value=True, axis=1))
print(filtered_df)

                     Temp1  Temp2  Temp3  Temp4  Temp5  Temp6  AmbientTemp
Datetime
2022-08-04 15:06:00   53.4   57.8   59.0   46.7   52.8   50.1         29.0
2022-08-04 15:07:00   54.3   57.0   58.8   47.1   53.1    NaN         28.8
2022-08-04 15:08:00   53.7    NaN   58.7    NaN   53.0   50.3         28.6
2022-08-04 15:09:00   54.3   57.2   59.1   46.9   53.1   50.3         28.7
2022-08-04 15:10:00   55.4   57.5   59.7   47.3   53.4   50.6         28.9

How it works:

# mask
print(condition)

                     Temp1  Temp2  Temp3  Temp4  Temp5  Temp6
Datetime
2022-08-04 15:06:00   True   True   True   True   True   True
2022-08-04 15:07:00   True   True   True   True   True  False
2022-08-04 15:08:00   True  False   True  False   True   True
2022-08-04 15:09:00   True   True   True   True   True   True
2022-08-04 15:10:00   True   True   True   True   True   True

# add True values in the mask for missing columns - here `AmbientTemp`
print(condition.reindex(df1.columns, fill_value=True, axis=1))

                     Temp1  Temp2  Temp3  Temp4  Temp5  Temp6  AmbientTemp
Datetime
2022-08-04 15:06:00   True   True   True   True   True   True         True
2022-08-04 15:07:00   True   True   True   True   True  False         True
2022-08-04 15:08:00   True  False   True  False   True   True         True
2022-08-04 15:09:00   True   True   True   True   True   True         True
2022-08-04 15:10:00   True   True   True   True   True   True         True
2
3
78,433,658
2024-5-5
https://stackoverflow.com/questions/78433658/what-is-this-einsum-operation-doing-e-np-einsumij-kl-il-a-b
given A = np.array([[1,2],[3,4],[5,6],[7,8]]) B = np.array([[9,10,11],[12,13,14]]) matrix multiplication would be if I did C = np.einsum('ij,jk->ik', A, B) but if I don't multiply j in the input instead using k ... E = np.einsum('ij,kl->il', A, B) what am I effectively summing across? Is there a way to think about this intuitively? I'm confused because the dimensions end up the same as if it were matrix multiplication. I've tried playing around with different numbers in the matrices A and B to get a feel for it but I'm wondering if someone can break it down for me so I can understand what is going on in this example.
This sums A over axis=1, B over axis=0 and computes the outer product:

np.einsum('ij,kl->il', A, B)

np.outer(A.sum(1), B.sum(0))

array([[ 63,  69,  75],
       [147, 161, 175],
       [231, 253, 275],
       [315, 345, 375]])

You could check the individual einsum results:

# sum over 1
np.einsum('ij->i', A)
# array([ 3,  7, 11, 15])

# sum over 0
np.einsum('kl->l', B)
# array([21, 23, 25])

# outer product
np.einsum('i,l->il', A.sum(1), B.sum(0))
# array([[ 63,  69,  75],
#        [147, 161, 175],
#        [231, 253, 275],
#        [315, 345, 375]])
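A quick sanity check that the two expressions really agree element-wise, assuming A and B from the question:

import numpy as np

A = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
B = np.array([[9, 10, 11], [12, 13, 14]])

E = np.einsum('ij,kl->il', A, B)
print(np.allclose(E, np.outer(A.sum(1), B.sum(0))))  # True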
3
2
78,430,312
2024-5-4
https://stackoverflow.com/questions/78430312/how-to-get-pyparsing-to-match-1-day-or-2-days-but-fail-1-days-and-2-day
I'm trying to match a sentence fragment of the form "after 3 days" or "after 1 month". I want to be particular with the single and plural forms, so "1 day" is valid but "1 days" is not. I have the following code which is nearly there but the first two entries in the failure tests don't fail. Any suggestions please that use syntactical notation as I'd like, if possible, to avoid a set_parse_action() that checks the numeric value against the unit's plurality. from pyparsing import * units = Keyword('days') ^ Keyword('months') unit = Keyword('day') ^ Keyword('month') single = Literal('1') + unit multi = Word(nums) + units after = Keyword('after') + ( single ^ multi ) a = after.run_tests(''' after 1 day after 2 days after 1 month after 2 months ''') print('=============') b = after.run_tests(''' after 1 days after 2 day after 1day after 2days ''', failure_tests = True) print('Success tests', 'passed' if a[0] else 'failed') print('Failure tests', 'passed' if b[0] else 'failed')
Only the case after 1 days passes when it should fail; the other three cases fail as expected. The issue is that the check multi = Word(nums) + units uses nums, which includes 1, so even if your singular variant does not match, this one will. I looked up how nums is defined; apparently it is nums = '0123456789' (see here). Consequently, you can remove the 1. This works for me:

...
multi_nums = '023456789'  # nums excluding 1

single = Literal('1') + unit
multi = Word(multi_nums) + units
...

EDIT: The above fails for multi-digit numbers containing a 1, see comments. Fixed version as per comments:

single = Literal('1') + unit
multi = ~Keyword('1') + Word(nums) + units
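For completeness, a small self-contained sketch of the grammar with that fix applied (pyparsing 3.x naming assumed), so the failure case from the question is rejected while multi-digit numbers still parse:

from pyparsing import Keyword, Literal, Word, nums

units = Keyword('days') ^ Keyword('months')
unit = Keyword('day') ^ Keyword('month')

single = Literal('1') + unit
multi = ~Keyword('1') + Word(nums) + units  # a bare "1" may not precede a plural unit

after = Keyword('after') + (single ^ multi)

print(after.parse_string('after 1 day'))    # ['after', '1', 'day']
print(after.parse_string('after 12 days'))  # ['after', '12', 'days']
try:
    after.parse_string('after 1 days')
except Exception as exc:
    print('rejected:', exc)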
2
1
78,433,277
2024-5-5
https://stackoverflow.com/questions/78433277/how-to-deal-with-a-python-project-dependency-that-must-exist-in-the-env-but-ca
I don't have a lot of experience with python project organization, but I've been trying to follow modern best practices using a pyproject.toml and structure as in this video (minus the ci bits). This is ultimately a geoprocessing tool for ArcGIS Pro, though I've tried to abstract the logic of the tool from the 'backend' calls to ArcGIS Pro so that I could create other backends in the future, for example for QGIS or geopandas and rasterio. However, the arcpy module that allows one to interface with ArcGIS Pro is not a module you can just install into your environment from pypi. It simply "exists" in the conda environment provided by the install of ArcGIS Pro. I've been writing and running scripts by activating this conda env. I don't think it would work to list arcpy as a dependency in pyproject.toml since it can't be installed. How do you deal with this situation? What are best practices? Note: The arcpy package might be installable via conda, according to this doc from esri. At the very least, I can clone the default env provided by ArcGIS Pro and modify that (to install dev deps). As stated, I'm quite new to python project management (and it's rather confusing), but if I listed my deps in pyproject.toml, generated a requirements.txt, then installed them into a conda dev env from requirements.txt, might that work? pip can see arcpy in the conda env: PS> pip list Package Version --------------------------------- ----------------- anyio 3.5.0 appdirs 1.4.4 arcgis 2.2.0.1 ...
How do you deal with this situation? What are best practices? So you should list arcpy in your pyproject.toml or requirements. It simply "exists" in the conda environment provided by the install of ArcGIS Pro Great. So if you list the dependency, users will know that you require the ArcGIS Pro to work. I've tried to abstract the logic of the tool from the 'backend' calls to ArcGIS Pro so that I could create other backends in the future, for example for QGIS or geopandas and rasterio. Great, so you are making two packages. There is "backend" which implements some interface on top of ArcGIS Pro. That interface is then registered in what you would call "frontend" that does not depend on ArcGIS Pro. These two packages have separate requirements. To give a real life example, pyscada project has https://github.com/pyscada/PyScada/blob/main/setup.py#L33 and then has plugins with separate https://github.com/pyscada/PyScada-BACnet/blob/main/setup.py#L33 dependencies. if I listed my deps in pyproject.toml, generated a requirements.txt, then installed them into a conda dev env from requirements.txt, might that work? Yes. You either list the dependencies in pyproject.toml like https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#a-full-example or write requirements.txt, which you can then load dynamically from pyproject.toml like How to reference a requirements.txt in the pyproject.toml of a setuptools project? .
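If the "frontend" package must still import cleanly in environments without ArcGIS Pro, a guarded import is one common pattern for selecting the backend at runtime. A minimal sketch; the module path mytool.backends and the function name are purely hypothetical:

try:
    import arcpy  # only present inside the ArcGIS Pro conda environment
    HAS_ARCPY = True
except ImportError:
    HAS_ARCPY = False

def get_backend(name: str = "auto"):
    """Pick a geoprocessing backend, falling back when arcpy is unavailable."""
    if name in ("auto", "arcgis") and HAS_ARCPY:
        from mytool.backends import arcgis_backend    # hypothetical module
        return arcgis_backend
    from mytool.backends import geopandas_backend     # hypothetical module
    return geopandas_backend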
3
1
78,428,161
2024-5-4
https://stackoverflow.com/questions/78428161/how-to-create-two-frames-top-and-bottom-that-have-the-same-height-using-tkint
In Tkinter Python, I want to create two frames and place one on top of the other, and then insert a matplotlib plot inside each of these two frames. That, I managed to do. The problem is that, when created, these two frames always seem to have different heights (the bottom one is always smaller than the top one). My goal is to get the two frames to have the same height. Here's a simplified version of the code that produces the said undesirable result:

import tkinter as tk
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
import matplotlib.pyplot as plt
import numpy as np

# Root
root = tk.Tk()
root.state('zoomed')

# Upper frame
canv = tk.Frame(root)
canv.pack()

plt.rcParams["axes.prop_cycle"] = plt.cycler(color=["#4C2A85", "#BE96FF", "#957DAD", "#5E366E", "#A98CCC"])
plt.style.use('ggplot')

L = [i for i in range(10)]
fig, ax = plt.subplots()
l = ax.fill_between(L, L)
ax.set_title("Upper plot")
canvas = FigureCanvasTkAgg(fig, canv)
canvas.draw()
canvas.get_tk_widget().pack()

# Lower frame
canv2 = tk.Frame(root)
canv2.pack()

fig2, ax2 = plt.subplots()
l2 = ax2.fill_between(L, L)
ax2.set_title("Lower plot")
canvas2 = FigureCanvasTkAgg(fig2, canv2)
canvas2.draw()
canvas2.get_tk_widget().pack()

root.mainloop()

Thanks to anyone willing to help.
Different idea: create one FigureCanvasTkAgg but two axes, fig, (ax1, ax2) = plt.subplots(2), and matplotlib will control the size.

import tkinter as tk
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
import matplotlib.pyplot as plt
import numpy as np

root = tk.Tk()
#root.state('zoomed')

plt.rcParams["axes.prop_cycle"] = plt.cycler(color=["#4C2A85", "#BE96FF", "#957DAD", "#5E366E", "#A98CCC"])
plt.style.use('ggplot')

L = [i for i in range(10)]

fig, (ax1, ax2) = plt.subplots(2)

canvas = FigureCanvasTkAgg(fig, root)
canvas.draw()
canvas.get_tk_widget().pack(fill='both', expand=True)

l1 = ax1.fill_between(L, L)
ax1.set_title("Upper plot")

l2 = ax2.fill_between(L, L)
ax2.set_title("Lower plot")

fig.tight_layout()  # suggestion from @Zakaria comment

root.mainloop()
2
1
78,430,938
2024-5-5
https://stackoverflow.com/questions/78430938/polars-how-to-partition-a-big-dataframe-and-save-each-one-in-parallel
I have a big Polars dataframe with a lot of groups. Now, I want to partition the dataframe by group and save all sub-dataframes. I can easily do this as follows: for d in df.partition_by(["group1", "group2"]): d.write_csv(f"~/{d[0, 'group1']}_{d[0, 'group2']}.csv") However, the approach above is sequential and slow when the df is very large and has a whole lot of partitions. Is there any Polars native way to parallelize it (the code section above)? If not, how can I do it in a Python native way instead?
As far as I am aware, Polars does not currently offer a way to parallelize .write_*() / .sink_*() calls. For Parquet files, this would be similar to writing a "Hive Partitioned Dataset": https://github.com/pola-rs/polars/issues/11500 https://github.com/pola-rs/polars/issues/15441 # CSV Hive partitioning Multiprocessing In my own testing - a ProcessPool has performed slightly faster than a ThreadPool for this task. With Polars + multiprocessing - you need to use get_context("spawn") See: https://docs.pola.rs/user-guide/misc/multiprocessing/ import polars as pl from multiprocessing import get_context # from multiprocessing.dummy import Pool # <- threadpool def write_csv(args): group, df = args df.write_csv(f"{group}.csv") if __name__ == "__main__": df = pl.DataFrame({ "group": [1, 2, 1], "val": ["a", "b", "c"] }) with get_context("spawn").Pool() as pool: groups = df.group_by("group") results = pool.map(write_csv, groups) # this is lazy # we must consume `results` - use empty for loop for result in results: ... (note: You can use group_by instead of partition_by) Pool() methods There's a lot of discussion out there about the different Pool methods to use: .map .imap .imap_unordered In my simple testing, .map produced the fastest results - with similar memory usage. But depending on your data size + system specs, you may need to investigate the others: https://stackoverflow.com/a/26521507
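If Parquet output is acceptable instead of many CSVs, a hive-partitioned dataset can be written in one call through pyarrow, which handles the per-group folders for you. A sketch assuming pyarrow is installed (newer pyarrow releases may prefer pyarrow.dataset.write_dataset with a partitioning scheme):

import polars as pl
import pyarrow.parquet as pq

df = pl.DataFrame({
    "group": [1, 2, 1],
    "val": ["a", "b", "c"]
})

pq.write_to_dataset(
    df.to_arrow(),             # convert the polars frame to a pyarrow Table
    root_path="out_dataset",   # creates out_dataset/group=1/..., out_dataset/group=2/...
    partition_cols=["group"],  # one folder per group value
)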
2
2
78,431,472
2024-5-5
https://stackoverflow.com/questions/78431472/how-to-extract-text-from-an-element-using-bs4
I am scraping Airbnb (Link to the following page), and one of the things I want to get is since when is the host hosting, as shown in the picture below (marked with red pen): image example The code I am currently using to solve this is: account_active_since = soup.find('li', class_='l7n4lsf atm_9s_1o8liyq_keqd55 dir dir-ltr').getText() but with this code I get the following output: 3 guestsStudio1 bed1 bath The HTML tag for it is: <div class="s1l7gi0l atm_c8_km0zk7 atm_g3_18khvle atm_fr_1m9t47k atm_7l_1esdqks dir dir-ltr"><ol class="lgx66tx atm_gi_idpfg4 atm_l8_idpfg4 dir dir-ltr"><li class="l7n4lsf atm_9s_1o8liyq_keqd55 dir dir-ltr">3 years hosting</li></ol></div> HTML What am I doing wrong? Thank you. Here is the whole code. The part of the code that needs fixing is in line 77 (in the for loop): from bs4 import BeautifulSoup from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys from selenium.common.exceptions import NoSuchElementException from selenium.common.exceptions import StaleElementReferenceException import time import re URL = "https://www.airbnb.com/s/San-Francisco--California--United-States/homes?tab_id=home_tab&refinement_paths%5B%5D=%2Fhomes&flexible_trip_lengths%5B%5D=one_week&monthly_start_date=2024-06-01&monthly_length=3&monthly_end_date=2024-09-01&price_filter_input_type=0&channel=EXPLORE&query=San%20Francisco%2C%20California%2C%20United%20States&place_id=ChIJIQBpAG2ahYAR_6128GcTUEo&date_picker_type=calendar&checkin=2024-05-08&checkout=2024-05-22&source=structured_search_input_header&search_type=autocomplete_click" driver = webdriver.Chrome() driver.get(URL) driver.maximize_window() time.sleep(4) try: accept_cookies = driver.find_element(By.XPATH, '//*[@id="react-application"]/div/div/div[1]/div/div[6]/section/div[2]/div[2]/button') accept_cookies.click() except NoSuchElementException: print("No 'Accept cookies' found.") elements = driver.find_elements(By.XPATH, "//div[@id='site-content']//div[@data-testid='card-container']//div[@class = ' dir dir-ltr']//a[1]") while True: try: for element in elements: try: element.click() except StaleElementReferenceException: print("No more elements to click.") break # Switch to the newly opened tab driver.switch_to.window(driver.window_handles[-1]) time.sleep(4) page_source = driver.page_source soup = BeautifulSoup(page_source, "html.parser") try: close_pop_up = driver.find_element(By.XPATH, '/html/body/div[9]/div/div/section/div/div/div[2]/div/div[1]/button') close_pop_up.click() except NoSuchElementException: print("No pop up element found.") try: apartment_name = soup.find('h1', class_='hpipapi').getText() except AttributeError: apartment_name = "Name not specified" try: short_description = soup.find('h2', class_='hpipapi').getText() except AttributeError: short_description = "Description not specifed" try: rooms_bathrooms = soup.find('div', class_='o1kjrihn').getText() except AttributeError: rooms_bathrooms = "Utilities not specified" try: price_per_night = soup.find('span', class_='_1y74zjx').getText() except AttributeError: price_per_night = "Price not specified" try: host_name = soup.find('div', class_='cm0tib6').find('div', class_='t1pxe1a4').getText() except (AttributeError, IndexError): host_name = "Host name not specified" # This variable needs to be fixed try: account_active_since = soup.find('li', class_='l7n4lsf atm_9s_1o8liyq_keqd55 dir dir-ltr').getText() except AttributeError: account_active_since = "Active since not specified" try: 
guest_favourite_stars = soup.find('div', class_='a8jhwcl').getText() guest_favourite_stars = re.search(r'\d+\.\d+', guest_favourite_stars).group() except AttributeError: guest_favourite_stars = "Not guest favourite" try: guest_favourite_reviews = soup.find('div', class_='r16onr0j').getText() except AttributeError: guest_favourite_reviews = "Not guest favourite" print("Apartment Name:", apartment_name) print("Short Description:", short_description) print("Rooms and Bathrooms:", rooms_bathrooms) print("Price per Night:", price_per_night) print("Host Name:", host_name) print("Account Active Since:", account_active_since) print("Guest favourite stars:", guest_favourite_stars) print("Guest favourite reviews:", guest_favourite_reviews) time.sleep(1) driver.execute_script(f"window.scrollTo(0, 100);") driver.save_screenshot(f'screenshots/{apartment_name.replace(" ", "_")}.png') # Close the current tab driver.close() time.sleep(1) # Switch back to the main tab driver.switch_to.window(driver.window_handles[0]) except NoSuchElementException: print("No more elements to click. Heading to the next page.") try: next_page = driver.find_element(By.XPATH, '//*[@id="site-content"]/div/div[3]/div/div/div/nav/div/a[5]') next_page.click() time.sleep(2) except NoSuchElementException: print("No more pages to click.") break elements = driver.find_elements(By.XPATH, "//div[@id='site-content']//div[@data-testid='card-container']//div[@class = ' dir dir-ltr']//a[1]") driver.quit()
Here is one way of getting that information (no BeautifulSoup needed): from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC chrome_options = Options() chrome_options.add_argument("--no-sandbox") chrome_options.add_argument('disable-notifications') with webdriver.Chrome(options=chrome_options) as browser: wait = WebDriverWait(browser, 5) url = 'https://www.airbnb.com/rooms/46181703?check_in=2024-05-08&check_out=2024-05-22&source_impression_id=p3_1714848136_vsUX6ezsi0Z7N%2FSG&previous_page_section_name=1000&federated_search_id=efebfc50-8682-44ac-acee-e5d374cb3da4' browser.get(url) hosting_since = wait.until(EC.presence_of_element_located((By.XPATH, '//div[@data-section-id="HOST_OVERVIEW_DEFAULT"]//li[contains(text(), "hosting")]'))).text print(hosting_since) Result in terminal: 3 years hosting Selenium documentation can be found here.
2
1
78,429,932
2024-5-4
https://stackoverflow.com/questions/78429932/langchain-ollama-and-llama-3-prompt-and-response
Currently, I am getting back multiple responses, or the model doesn't know when to end a response, and it seems to repeat the system prompt in the response(?). I simply want to get a single response back. My setup is very simple, so I imagine I am missing implementation details, but what can I do to only return the single response? from langchain_community.llms import Ollama llm = Ollama(model="llama3") def get_model_response(user_prompt, system_prompt): prompt = f""" <|begin_of_text|> <|start_header_id|>system<|end_header_id|> { system_prompt } <|eot_id|> <|start_header_id|>user<|end_header_id|> { user_prompt } <|eot_id|> <|start_header_id|>assistant<|end_header_id|> """ response = llm.invoke(prompt) return response
Using a PromptTemplate from Langchain, and setting a stop token for the model, I was able to get a single correct response.

from langchain_community.llms import Ollama
from langchain import PromptTemplate  # Added

llm = Ollama(model="llama3", stop=["<|eot_id|>"])  # Added stop token

def get_model_response(user_prompt, system_prompt):
    # NOTE: No f string and no whitespace in curly braces
    template = """
        <|begin_of_text|>
        <|start_header_id|>system<|end_header_id|>
        {system_prompt}
        <|eot_id|>
        <|start_header_id|>user<|end_header_id|>
        {user_prompt}
        <|eot_id|>
        <|start_header_id|>assistant<|end_header_id|>
        """

    # Added prompt template
    prompt = PromptTemplate(
        input_variables=["system_prompt", "user_prompt"],
        template=template
    )

    # Modified invoking the model
    response = llm(prompt.format(system_prompt=system_prompt, user_prompt=user_prompt))

    return response
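For instance, calling the function above might look like the following; the prompt strings are just placeholders:

answer = get_model_response(
    user_prompt="Why is the sky blue?",
    system_prompt="You are a concise assistant.",
)
print(answer)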
2
8
78,429,220
2024-5-4
https://stackoverflow.com/questions/78429220/how-can-i-fit-my-position-vs-light-intensity-data-into-a-diffraction-pattern-cur
I am trying to fit my position vs. light intensity data into a function of the form: Intensity pattern of single slit diffraction The data is from a single slit diffraction experiment, the slit with was 0.08 mm = 0.00008 m, wavelength was 650 nm = 0.00000065 m. The distance between the slit and the light sensor was 70 cm = 0.7 m. In the experiment, light sensor was placed in front of the slit with a distance 70 cm (lets call the point where slit stands the point S and lets call the point where light sensor placed the point O), before taking data, the location of the light sensor is changed along the axis that is perpendicular to SO line, then the data is taken along this axis. I tried to fit my data into intensity of single slit diffraction equation in python but since it had to work with numbers that are too small it couldn't be optimized, then I tried with Matlab, it also doesn't work and I don't know why. Matlab curve fitting Note that sin(theta) is x/sqrt(x^2+L^2) where L=0.7 and b is the horizontal shifting of x values. import numpy as np import matplotlib.pyplot as plt from scipy.optimize import curve_fit def intensityFunction(x, a, b): numerator = np.sin(np.pi * (x - b) * 0.00008 / (np.sqrt(0.49 + x ** 2 - 2 * b * x + b ** 2) * 0.00000065)) denominator = np.pi * (x - b) * 0.00008 / (np.sqrt(0.49 + x ** 2 - 2 * b * x + b ** 2) * 0.00000065) return a* (numerator/denominator)**2 positions = np.array(positionArray) lightIntensity=np.array([LightIntensityArray]) popt, pcov = curve_fit(intensityFunction, positions, lightIntensity, p0=[1,0]) a_opt, b_opt = popt plt.scatter(positions, lightIntensity, label="Data", s=4) plt.plot(positions, intensityFunction(positions, a_opt, b_opt), color='red', label='Fit: a={:.2f}, b={:.2f}'.format(a_opt, b_opt)) plt.xlabel('Position (m)') plt.ylabel('Light Intensity (% max)') plt.legend() plt.show() Position (m): [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0001108, 0.0001108, 0.0001663, 0.0003325, 0.0004433, 0.000665, 0.0007758, 0.0008867, 0.0008867, 0.0009421, 0.0009975, 0.001, 0.001, 0.002, 0.002, 0.002, 0.002, 0.002, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.004, 0.004, 0.004, 0.004, 0.004, 0.004, 0.004, 0.005, 0.005, 0.005, 0.005, 0.006, 0.006, 0.007, 0.007, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.01, 0.01, 0.01, 0.011, 0.011, 0.012, 0.012, 0.013, 0.013, 0.013, 0.014, 0.014, 0.014, 0.014, 0.014, 0.014, 0.015, 0.015, 0.015, 0.015, 0.016, 0.016, 0.017, 0.017, 0.017, 0.018, 0.018, 0.018, 0.018, 0.018, 0.018, 0.018, 0.018, 0.018, 0.019, 0.019, 0.019, 0.02, 0.02, 0.02, 0.021, 0.021, 0.022, 0.022, 0.023, 0.023, 0.023, 0.023, 0.023, 0.023, 0.024, 0.024, 0.025, 0.025, 0.025, 0.025, 0.026, 0.026, 0.026, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.028, 0.028, 0.028, 0.029, 0.03, 0.031, 0.032, 0.032, 0.032, 0.032, 0.032, 0.032, 0.032, 0.032, 0.033, 0.033, 0.033, 0.034, 0.034, 0.034, 0.034, 0.034, 0.034, 0.034, 0.034, 0.034, 0.035, 0.035, 0.035, 0.035, 0.036, 0.037, 0.037, 0.037, 0.037, 0.038, 0.038, 0.039, 0.039, 0.04, 0.04, 0.041, 0.041, 0.042, 0.042, 0.043, 0.043, 0.044, 0.044, 0.044, 0.045, 0.045, 0.045, 0.045, 0.045, 0.046, 0.046, 0.047, 0.047, 0.048, 0.048, 0.048, 0.048, 0.049, 0.049, 0.049, 0.049, 0.049, 0.049, 0.049, 0.05, 0.05, 0.05, 0.051, 0.051, 0.051, 0.052, 0.052, 0.052, 
0.053, 0.053, 0.053, 0.053, 0.053, 0.053, 0.054, 0.054, 0.054, 0.054, 0.054, 0.054, 0.054, 0.054, 0.055, 0.056, 0.056, 0.056, 0.056, 0.057, 0.057, 0.057, 0.057, 0.058, 0.059, 0.059, 0.059, 0.06, 0.06, 0.061, 0.061, 0.061, 0.061, 0.061, 0.062, 0.062, 0.062, 0.062, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.064, 0.064, 0.064, 0.065, 0.066, 0.066, 0.066, 0.067, 0.067, 0.067, 0.068, 0.068, 0.068, 0.069, 0.07, 0.07, 0.071, 0.071, 0.071, 0.072, 0.072, 0.073, 0.073, 0.073, 0.073, 0.074, 0.074, 0.074, 0.074, 0.074, 0.074, 0.074, 0.074, 0.074, 0.074, 0.074, 0.074, 0.074, 0.075, 0.075, 0.075, 0.076, 0.076, 0.076, 0.076, 0.076, 0.076, 0.076, 0.076, 0.076, 0.076, 0.076, 0.076, 0.077, 0.077, 0.077, 0.077, 0.077, 0.077, 0.077, 0.077, 0.077, 0.077, 0.077, 0.078, 0.078, 0.079, 0.079, 0.079, 0.08, 0.081, 0.082, 0.082, 0.083, 0.084, 0.085, 0.086, 0.086, 0.086, 0.087, 0.087, 0.087, 0.087, 0.087, 0.087, 0.088, 0.088, 0.088, 0.089, 0.09, 0.09, 0.091, 0.091, 0.092, 0.092, 0.092, 0.092, 0.092, 0.092, 0.092, 0.092, 0.093, 0.093, 0.094, 0.094, 0.094, 0.095, 0.095, 0.095, 0.096, 0.096, 0.096, 0.097, 0.097, 0.097, 0.098, 0.098, 0.098, 0.099, 0.099, 0.099, 0.1, 0.1, 0.1, 0.101, 0.101, 0.101, 0.101, 0.101, 0.102, 0.102, 0.102, 0.102, 0.102, 0.103, 0.103, 0.103, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104] Light Intensity (%max): [0.061, 0.061, 0.061, 0.061, 0.061, 0.061, 0.073, 0.061, 0.067, 0.061, 0.067, 0.067, 0.061, 0.079, 0.055, 0.061, 0.055, 0.061, 0.061, 0.067, 0.079, 0.067, 0.061, 0.061, 0.067, 0.067, 0.061, 0.073, 0.049, 0.055, 0.067, 0.073, 0.061, 0.061, 0.061, 0.061, 0.061, 0.061, 0.073, 0.061, 0.061, 0.061, 0.061, 0.049, 0.055, 0.067, 0.049, 0.055, 0.055, 0.043, 0.067, 0.061, 0.067, 0.067, 0.061, 0.061, 0.085, 0.067, 0.055, 0.061, 0.061, 0.055, 0.055, 0.049, 0.061, 0.055, 0.043, 0.061, 0.067, 0.061, 0.079, 0.043, 0.061, 0.049, 0.061, 0.055, 0.061, 0.061, 0.061, 0.055, 0.067, 0.073, 0.061, 0.061, 0.061, 0.055, 0.055, 0.061, 0.067, 0.055, 0.043, 0.055, 0.049, 0.061, 0.055, 0.073, 0.061, 0.055, 0.061, 0.061, 0.055, 0.055, 0.073, 0.049, 0.061, 0.055, 0.073, 0.055, 0.073, 0.043, 0.061, 0.055, 0.049, 0.073, 0.061, 0.061, 0.061, 0.055, 0.049, 0.061, 0.073, 0.067, 0.061, 0.061, 0.049, 0.067, 0.049, 0.067, 0.055, 0.061, 0.061, 0.061, 0.067, 0.055, 0.061, 0.061, 0.061, 0.061, 0.073, 0.061, 0.061, 0.061, 0.049, 0.079, 0.061, 0.079, 0.079, 0.067, 0.079, 0.079, 0.055, 0.049, 0.067, 0.085, 0.061, 0.067, 0.092, 0.067, 0.073, 0.079, 0.061, 0.049, 0.061, 0.073, 0.067, 0.061, 0.067, 0.067, 0.073, 0.079, 0.085, 0.085, 0.073, 0.085, 0.098, 0.067, 0.073, 0.098, 0.079, 0.067, 0.073, 0.085, 0.073, 0.073, 0.067, 0.067, 0.079, 0.049, 0.073, 0.055, 0.055, 0.049, 0.067, 0.067, 0.061, 0.079, 0.079, 0.085, 0.098, 0.098, 0.098, 0.098, 0.116, 0.067, 0.061, 0.055, 0.085, 0.067, 0.085, 0.104, 0.146, 0.146, 0.146, 0.171, 0.159, 
0.159, 0.165, 0.153, 0.146, 0.098, 0.092, 0.079, 0.073, 0.079, 0.085, 0.098, 0.098, 0.104, 0.098, 0.11, 0.14, 0.183, 0.195, 0.244, 0.287, 0.342, 0.372, 0.391, 0.403, 0.391, 0.342, 0.293, 0.244, 0.238, 0.22, 0.238, 0.195, 0.195, 0.195, 0.195, 0.183, 0.177, 0.183, 0.165, 0.244, 0.488, 0.678, 0.708, 1.007, 1.428, 1.679, 1.99, 2.539, 3.272, 4.254, 5.176, 5.689, 5.963, 6.116, 6.299, 6.25, 6.11, 6.079, 6.012, 5.756, 5.438, 5.078, 4.651, 4.193, 3.864, 3.662, 3.565, 3.424, 3.272, 3.174, 3.082, 3.076, 3.125, 3.198, 3.369, 3.412, 3.418, 3.369, 3.29, 3.174, 3.131, 3.107, 3.095, 3.088, 3.088, 3.082, 3.064, 3.04, 3.052, 3.003, 2.948, 2.966, 2.93, 2.905, 2.863, 2.832, 2.783, 2.655, 2.344, 1.758, 1.27, 0.885, 0.482, 0.244, 0.159, 0.159, 0.165, 0.214, 0.25, 0.293, 0.354, 0.391, 0.385, 0.342, 0.269, 0.208, 0.189, 0.14, 0.098, 0.098, 0.098, 0.098, 0.134, 0.159, 0.146, 0.159, 0.153, 0.171, 0.177, 0.165, 0.195, 0.195, 0.183, 0.189, 0.171, 0.195, 0.195, 0.195, 0.195, 0.195, 0.195, 0.195, 0.195, 0.189, 0.195, 0.177, 0.177, 0.165, 0.195, 0.159, 0.153, 0.153, 0.146, 0.146, 0.14, 0.153, 0.134, 0.122, 0.128, 0.116, 0.11, 0.098, 0.098, 0.073, 0.079, 0.073, 0.092, 0.079, 0.104, 0.14, 0.14, 0.098, 0.098, 0.092, 0.061, 0.079, 0.073, 0.073, 0.085, 0.085, 0.104, 0.098, 0.098, 0.104, 0.098, 0.098, 0.098, 0.079, 0.079, 0.061, 0.061, 0.067, 0.061, 0.061, 0.055, 0.055, 0.055, 0.067, 0.085, 0.079, 0.073, 0.085, 0.079, 0.079, 0.092, 0.104, 0.073, 0.073, 0.073, 0.085, 0.055, 0.067, 0.055, 0.067, 0.061, 0.055, 0.067, 0.055, 0.085, 0.085, 0.073, 0.079, 0.073, 0.092, 0.073, 0.073, 0.079, 0.079, 0.079, 0.067, 0.085, 0.073, 0.055, 0.061, 0.055, 0.061, 0.061, 0.055, 0.061, 0.055, 0.073, 0.067, 0.055, 0.061, 0.067, 0.061, 0.055, 0.055, 0.061, 0.055, 0.067, 0.061, 0.055, 0.067, 0.061, 0.073, 0.061, 0.061, 0.061, 0.067, 0.049, 0.067, 0.043, 0.061, 0.055, 0.067, 0.067, 0.055, 0.061, 0.067, 0.061, 0.061, 0.055, 0.079, 0.061, 0.055, 0.061, 0.055, 0.061, 0.067, 0.067, 0.055, 0.055, 0.055, 0.061, 0.055, 0.055, 0.055, 0.055, 0.067, 0.061, 0.055, 0.061, 0.067, 0.055, 0.061, 0.061, 0.055, 0.067, 0.055, 0.061, 0.055, 0.055, 0.067, 0.061, 0.067, 0.061, 0.061, 0.067, 0.073, 0.067, 0.061, 0.055, 0.055, 0.055, 0.061]
Physically a diffraction pattern is similar to a Lorentzian curve. You can fit it as follows, by changing your p0. import numpy as np import matplotlib.pyplot as plt from scipy.optimize import curve_fit def intensityFunction(x, a, b): numerator = np.sin(np.pi * (x - b) * 0.00008 / (np.sqrt(0.49 + x ** 2 - 2 * b * x + b ** 2) * 0.00000065)) denominator = np.pi * (x - b) * 0.00008 / (np.sqrt(0.49 + x ** 2 - 2 * b * x + b ** 2) * 0.00000065) return a* (numerator/denominator)**2 positionArray = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0001108, 0.0001108, 0.0001663, 0.0003325, 0.0004433, 0.000665, 0.0007758, 0.0008867, 0.0008867, 0.0009421, 0.0009975, 0.001, 0.001, 0.002, 0.002, 0.002, 0.002, 0.002, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.004, 0.004, 0.004, 0.004, 0.004, 0.004, 0.004, 0.005, 0.005, 0.005, 0.005, 0.006, 0.006, 0.007, 0.007, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.01, 0.01, 0.01, 0.011, 0.011, 0.012, 0.012, 0.013, 0.013, 0.013, 0.014, 0.014, 0.014, 0.014, 0.014, 0.014, 0.015, 0.015, 0.015, 0.015, 0.016, 0.016, 0.017, 0.017, 0.017, 0.018, 0.018, 0.018, 0.018, 0.018, 0.018, 0.018, 0.018, 0.018, 0.019, 0.019, 0.019, 0.02, 0.02, 0.02, 0.021, 0.021, 0.022, 0.022, 0.023, 0.023, 0.023, 0.023, 0.023, 0.023, 0.024, 0.024, 0.025, 0.025, 0.025, 0.025, 0.026, 0.026, 0.026, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.028, 0.028, 0.028, 0.029, 0.03, 0.031, 0.032, 0.032, 0.032, 0.032, 0.032, 0.032, 0.032, 0.032, 0.033, 0.033, 0.033, 0.034, 0.034, 0.034, 0.034, 0.034, 0.034, 0.034, 0.034, 0.034, 0.035, 0.035, 0.035, 0.035, 0.036, 0.037, 0.037, 0.037, 0.037, 0.038, 0.038, 0.039, 0.039, 0.04, 0.04, 0.041, 0.041, 0.042, 0.042, 0.043, 0.043, 0.044, 0.044, 0.044, 0.045, 0.045, 0.045, 0.045, 0.045, 0.046, 0.046, 0.047, 0.047, 0.048, 0.048, 0.048, 0.048, 0.049, 0.049, 0.049, 0.049, 0.049, 0.049, 0.049, 0.05, 0.05, 0.05, 0.051, 0.051, 0.051, 0.052, 0.052, 0.052, 0.053, 0.053, 0.053, 0.053, 0.053, 0.053, 0.054, 0.054, 0.054, 0.054, 0.054, 0.054, 0.054, 0.054, 0.055, 0.056, 0.056, 0.056, 0.056, 0.057, 0.057, 0.057, 0.057, 0.058, 0.059, 0.059, 0.059, 0.06, 0.06, 0.061, 0.061, 0.061, 0.061, 0.061, 0.062, 0.062, 0.062, 0.062, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.064, 0.064, 0.064, 0.065, 0.066, 0.066, 0.066, 0.067, 0.067, 0.067, 0.068, 0.068, 0.068, 0.069, 0.07, 0.07, 0.071, 0.071, 0.071, 0.072, 0.072, 0.073, 0.073, 0.073, 0.073, 0.074, 0.074, 0.074, 0.074, 0.074, 0.074, 0.074, 0.074, 0.074, 0.074, 0.074, 0.074, 0.074, 0.075, 0.075, 0.075, 0.076, 0.076, 0.076, 0.076, 0.076, 0.076, 0.076, 0.076, 0.076, 0.076, 0.076, 0.076, 0.077, 0.077, 0.077, 0.077, 0.077, 0.077, 0.077, 0.077, 0.077, 0.077, 0.077, 0.078, 0.078, 0.079, 0.079, 0.079, 0.08, 0.081, 0.082, 0.082, 0.083, 0.084, 0.085, 0.086, 0.086, 0.086, 0.087, 0.087, 0.087, 0.087, 0.087, 0.087, 0.088, 0.088, 0.088, 0.089, 0.09, 0.09, 0.091, 0.091, 0.092, 0.092, 0.092, 0.092, 0.092, 0.092, 0.092, 0.092, 0.093, 0.093, 0.094, 0.094, 0.094, 0.095, 0.095, 0.095, 0.096, 0.096, 0.096, 0.097, 0.097, 0.097, 0.098, 0.098, 0.098, 0.099, 0.099, 0.099, 0.1, 0.1, 0.1, 0.101, 0.101, 0.101, 
0.101, 0.101, 0.102, 0.102, 0.102, 0.102, 0.102, 0.103, 0.103, 0.103, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104] LightIntensityArray = [0.061, 0.061, 0.061, 0.061, 0.061, 0.061, 0.073, 0.061, 0.067, 0.061, 0.067, 0.067, 0.061, 0.079, 0.055, 0.061, 0.055, 0.061, 0.061, 0.067, 0.079, 0.067, 0.061, 0.061, 0.067, 0.067, 0.061, 0.073, 0.049, 0.055, 0.067, 0.073, 0.061, 0.061, 0.061, 0.061, 0.061, 0.061, 0.073, 0.061, 0.061, 0.061, 0.061, 0.049, 0.055, 0.067, 0.049, 0.055, 0.055, 0.043, 0.067, 0.061, 0.067, 0.067, 0.061, 0.061, 0.085, 0.067, 0.055, 0.061, 0.061, 0.055, 0.055, 0.049, 0.061, 0.055, 0.043, 0.061, 0.067, 0.061, 0.079, 0.043, 0.061, 0.049, 0.061, 0.055, 0.061, 0.061, 0.061, 0.055, 0.067, 0.073, 0.061, 0.061, 0.061, 0.055, 0.055, 0.061, 0.067, 0.055, 0.043, 0.055, 0.049, 0.061, 0.055, 0.073, 0.061, 0.055, 0.061, 0.061, 0.055, 0.055, 0.073, 0.049, 0.061, 0.055, 0.073, 0.055, 0.073, 0.043, 0.061, 0.055, 0.049, 0.073, 0.061, 0.061, 0.061, 0.055, 0.049, 0.061, 0.073, 0.067, 0.061, 0.061, 0.049, 0.067, 0.049, 0.067, 0.055, 0.061, 0.061, 0.061, 0.067, 0.055, 0.061, 0.061, 0.061, 0.061, 0.073, 0.061, 0.061, 0.061, 0.049, 0.079, 0.061, 0.079, 0.079, 0.067, 0.079, 0.079, 0.055, 0.049, 0.067, 0.085, 0.061, 0.067, 0.092, 0.067, 0.073, 0.079, 0.061, 0.049, 0.061, 0.073, 0.067, 0.061, 0.067, 0.067, 0.073, 0.079, 0.085, 0.085, 0.073, 0.085, 0.098, 0.067, 0.073, 0.098, 0.079, 0.067, 0.073, 0.085, 0.073, 0.073, 0.067, 0.067, 0.079, 0.049, 0.073, 0.055, 0.055, 0.049, 0.067, 0.067, 0.061, 0.079, 0.079, 0.085, 0.098, 0.098, 0.098, 0.098, 0.116, 0.067, 0.061, 0.055, 0.085, 0.067, 0.085, 0.104, 0.146, 0.146, 0.146, 0.171, 0.159, 0.159, 0.165, 0.153, 0.146, 0.098, 0.092, 0.079, 0.073, 0.079, 0.085, 0.098, 0.098, 0.104, 0.098, 0.11, 0.14, 0.183, 0.195, 0.244, 0.287, 0.342, 0.372, 0.391, 0.403, 0.391, 0.342, 0.293, 0.244, 0.238, 0.22, 0.238, 0.195, 0.195, 0.195, 0.195, 0.183, 0.177, 0.183, 0.165, 0.244, 0.488, 0.678, 0.708, 1.007, 1.428, 1.679, 1.99, 2.539, 3.272, 4.254, 5.176, 5.689, 5.963, 6.116, 6.299, 6.25, 6.11, 6.079, 6.012, 5.756, 5.438, 5.078, 4.651, 4.193, 3.864, 3.662, 3.565, 3.424, 3.272, 3.174, 3.082, 3.076, 3.125, 3.198, 3.369, 3.412, 3.418, 3.369, 3.29, 3.174, 3.131, 3.107, 3.095, 3.088, 3.088, 3.082, 3.064, 3.04, 3.052, 3.003, 2.948, 2.966, 2.93, 2.905, 2.863, 2.832, 2.783, 2.655, 2.344, 1.758, 1.27, 0.885, 0.482, 0.244, 0.159, 0.159, 0.165, 0.214, 0.25, 0.293, 0.354, 0.391, 0.385, 0.342, 0.269, 0.208, 0.189, 0.14, 0.098, 0.098, 0.098, 0.098, 0.134, 0.159, 0.146, 0.159, 0.153, 0.171, 0.177, 0.165, 0.195, 0.195, 0.183, 0.189, 0.171, 0.195, 0.195, 0.195, 0.195, 0.195, 0.195, 0.195, 0.195, 0.189, 0.195, 0.177, 0.177, 0.165, 0.195, 0.159, 0.153, 0.153, 0.146, 0.146, 0.14, 0.153, 0.134, 0.122, 0.128, 0.116, 0.11, 0.098, 0.098, 0.073, 0.079, 0.073, 0.092, 0.079, 0.104, 0.14, 0.14, 0.098, 0.098, 0.092, 0.061, 0.079, 0.073, 0.073, 0.085, 0.085, 0.104, 0.098, 0.098, 0.104, 0.098, 0.098, 0.098, 0.079, 0.079, 0.061, 0.061, 0.067, 0.061, 0.061, 0.055, 0.055, 0.055, 0.067, 0.085, 0.079, 
0.073, 0.085, 0.079, 0.079, 0.092, 0.104, 0.073, 0.073, 0.073, 0.085, 0.055, 0.067, 0.055, 0.067, 0.061, 0.055, 0.067, 0.055, 0.085, 0.085, 0.073, 0.079, 0.073, 0.092, 0.073, 0.073, 0.079, 0.079, 0.079, 0.067, 0.085, 0.073, 0.055, 0.061, 0.055, 0.061, 0.061, 0.055, 0.061, 0.055, 0.073, 0.067, 0.055, 0.061, 0.067, 0.061, 0.055, 0.055, 0.061, 0.055, 0.067, 0.061, 0.055, 0.067, 0.061, 0.073, 0.061, 0.061, 0.061, 0.067, 0.049, 0.067, 0.043, 0.061, 0.055, 0.067, 0.067, 0.055, 0.061, 0.067, 0.061, 0.061, 0.055, 0.079, 0.061, 0.055, 0.061, 0.055, 0.061, 0.067, 0.067, 0.055, 0.055, 0.055, 0.061, 0.055, 0.055, 0.055, 0.055, 0.067, 0.061, 0.055, 0.061, 0.067, 0.055, 0.061, 0.061, 0.055, 0.067, 0.055, 0.061, 0.055, 0.055, 0.067, 0.061, 0.067, 0.061, 0.061, 0.067, 0.073, 0.067, 0.061, 0.055, 0.055, 0.055, 0.061] positions = np.array(positionArray) lightIntensity=np.array(LightIntensityArray) x0_loren = np.sum(np.multiply(positions,lightIntensity)) / np.sum(lightIntensity) k_loren = 1 / (np.pi*np.max(lightIntensity)) p = [k_loren, x0_loren] popt, pcov = curve_fit(intensityFunction, positions, lightIntensity, p0=p) a_opt, b_opt = popt plt.scatter(positions, lightIntensity, label="Data", s=4) plt.plot(positions, intensityFunction(positions, a_opt, b_opt), color='red', label='Fit: a={:.2f}, b={:.2f}'.format(a_opt, b_opt)) plt.xlabel('Position (m)') plt.ylabel('Light Intensity (% max)') plt.legend() plt.show() Output: Compared to Lorentzian, gaussian is not a good fit. It will miss the side bumps of the diffraction patterns. See the following fitting result. import numpy as np import matplotlib.pyplot as plt from scipy.optimize import curve_fit def intensityFunction(x,a,b,sigma): return a*np.exp(-(x-b)**2/(2*sigma**2)) positionArray = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0001108, 0.0001108, 0.0001663, 0.0003325, 0.0004433, 0.000665, 0.0007758, 0.0008867, 0.0008867, 0.0009421, 0.0009975, 0.001, 0.001, 0.002, 0.002, 0.002, 0.002, 0.002, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.004, 0.004, 0.004, 0.004, 0.004, 0.004, 0.004, 0.005, 0.005, 0.005, 0.005, 0.006, 0.006, 0.007, 0.007, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.008, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.009, 0.01, 0.01, 0.01, 0.011, 0.011, 0.012, 0.012, 0.013, 0.013, 0.013, 0.014, 0.014, 0.014, 0.014, 0.014, 0.014, 0.015, 0.015, 0.015, 0.015, 0.016, 0.016, 0.017, 0.017, 0.017, 0.018, 0.018, 0.018, 0.018, 0.018, 0.018, 0.018, 0.018, 0.018, 0.019, 0.019, 0.019, 0.02, 0.02, 0.02, 0.021, 0.021, 0.022, 0.022, 0.023, 0.023, 0.023, 0.023, 0.023, 0.023, 0.024, 0.024, 0.025, 0.025, 0.025, 0.025, 0.026, 0.026, 0.026, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.027, 0.028, 0.028, 0.028, 0.029, 0.03, 0.031, 0.032, 0.032, 0.032, 0.032, 0.032, 0.032, 0.032, 0.032, 0.033, 0.033, 0.033, 0.034, 0.034, 0.034, 0.034, 0.034, 0.034, 0.034, 0.034, 0.034, 0.035, 0.035, 0.035, 0.035, 0.036, 0.037, 0.037, 0.037, 0.037, 0.038, 0.038, 0.039, 0.039, 0.04, 0.04, 0.041, 0.041, 0.042, 0.042, 0.043, 0.043, 0.044, 0.044, 0.044, 0.045, 0.045, 0.045, 0.045, 0.045, 0.046, 0.046, 0.047, 0.047, 0.048, 0.048, 0.048, 0.048, 0.049, 0.049, 0.049, 0.049, 0.049, 0.049, 0.049, 0.05, 0.05, 0.05, 0.051, 0.051, 0.051, 0.052, 0.052, 0.052, 0.053, 0.053, 0.053, 0.053, 0.053, 0.053, 0.054, 0.054, 0.054, 0.054, 0.054, 0.054, 0.054, 0.054, 0.055, 0.056, 0.056, 0.056, 0.056, 
0.057, 0.057, 0.057, 0.057, 0.058, 0.059, 0.059, 0.059, 0.06, 0.06, 0.061, 0.061, 0.061, 0.061, 0.061, 0.062, 0.062, 0.062, 0.062, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.063, 0.064, 0.064, 0.064, 0.065, 0.066, 0.066, 0.066, 0.067, 0.067, 0.067, 0.068, 0.068, 0.068, 0.069, 0.07, 0.07, 0.071, 0.071, 0.071, 0.072, 0.072, 0.073, 0.073, 0.073, 0.073, 0.074, 0.074, 0.074, 0.074, 0.074, 0.074, 0.074, 0.074, 0.074, 0.074, 0.074, 0.074, 0.074, 0.075, 0.075, 0.075, 0.076, 0.076, 0.076, 0.076, 0.076, 0.076, 0.076, 0.076, 0.076, 0.076, 0.076, 0.076, 0.077, 0.077, 0.077, 0.077, 0.077, 0.077, 0.077, 0.077, 0.077, 0.077, 0.077, 0.078, 0.078, 0.079, 0.079, 0.079, 0.08, 0.081, 0.082, 0.082, 0.083, 0.084, 0.085, 0.086, 0.086, 0.086, 0.087, 0.087, 0.087, 0.087, 0.087, 0.087, 0.088, 0.088, 0.088, 0.089, 0.09, 0.09, 0.091, 0.091, 0.092, 0.092, 0.092, 0.092, 0.092, 0.092, 0.092, 0.092, 0.093, 0.093, 0.094, 0.094, 0.094, 0.095, 0.095, 0.095, 0.096, 0.096, 0.096, 0.097, 0.097, 0.097, 0.098, 0.098, 0.098, 0.099, 0.099, 0.099, 0.1, 0.1, 0.1, 0.101, 0.101, 0.101, 0.101, 0.101, 0.102, 0.102, 0.102, 0.102, 0.102, 0.103, 0.103, 0.103, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104, 0.104] LightIntensityArray = [0.061, 0.061, 0.061, 0.061, 0.061, 0.061, 0.073, 0.061, 0.067, 0.061, 0.067, 0.067, 0.061, 0.079, 0.055, 0.061, 0.055, 0.061, 0.061, 0.067, 0.079, 0.067, 0.061, 0.061, 0.067, 0.067, 0.061, 0.073, 0.049, 0.055, 0.067, 0.073, 0.061, 0.061, 0.061, 0.061, 0.061, 0.061, 0.073, 0.061, 0.061, 0.061, 0.061, 0.049, 0.055, 0.067, 0.049, 0.055, 0.055, 0.043, 0.067, 0.061, 0.067, 0.067, 0.061, 0.061, 0.085, 0.067, 0.055, 0.061, 0.061, 0.055, 0.055, 0.049, 0.061, 0.055, 0.043, 0.061, 0.067, 0.061, 0.079, 0.043, 0.061, 0.049, 0.061, 0.055, 0.061, 0.061, 0.061, 0.055, 0.067, 0.073, 0.061, 0.061, 0.061, 0.055, 0.055, 0.061, 0.067, 0.055, 0.043, 0.055, 0.049, 0.061, 0.055, 0.073, 0.061, 0.055, 0.061, 0.061, 0.055, 0.055, 0.073, 0.049, 0.061, 0.055, 0.073, 0.055, 0.073, 0.043, 0.061, 0.055, 0.049, 0.073, 0.061, 0.061, 0.061, 0.055, 0.049, 0.061, 0.073, 0.067, 0.061, 0.061, 0.049, 0.067, 0.049, 0.067, 0.055, 0.061, 0.061, 0.061, 0.067, 0.055, 0.061, 0.061, 0.061, 0.061, 0.073, 0.061, 0.061, 0.061, 0.049, 0.079, 0.061, 0.079, 0.079, 0.067, 0.079, 0.079, 0.055, 0.049, 0.067, 0.085, 0.061, 0.067, 0.092, 0.067, 0.073, 0.079, 0.061, 0.049, 0.061, 0.073, 0.067, 0.061, 0.067, 0.067, 0.073, 0.079, 0.085, 0.085, 0.073, 0.085, 0.098, 0.067, 0.073, 0.098, 0.079, 0.067, 0.073, 0.085, 0.073, 0.073, 0.067, 0.067, 0.079, 0.049, 0.073, 0.055, 0.055, 0.049, 0.067, 0.067, 0.061, 0.079, 0.079, 0.085, 0.098, 0.098, 0.098, 0.098, 0.116, 0.067, 0.061, 0.055, 0.085, 0.067, 0.085, 0.104, 0.146, 0.146, 0.146, 0.171, 0.159, 0.159, 0.165, 0.153, 0.146, 0.098, 0.092, 0.079, 0.073, 0.079, 0.085, 0.098, 0.098, 0.104, 0.098, 0.11, 0.14, 0.183, 0.195, 0.244, 
0.287, 0.342, 0.372, 0.391, 0.403, 0.391, 0.342, 0.293, 0.244, 0.238, 0.22, 0.238, 0.195, 0.195, 0.195, 0.195, 0.183, 0.177, 0.183, 0.165, 0.244, 0.488, 0.678, 0.708, 1.007, 1.428, 1.679, 1.99, 2.539, 3.272, 4.254, 5.176, 5.689, 5.963, 6.116, 6.299, 6.25, 6.11, 6.079, 6.012, 5.756, 5.438, 5.078, 4.651, 4.193, 3.864, 3.662, 3.565, 3.424, 3.272, 3.174, 3.082, 3.076, 3.125, 3.198, 3.369, 3.412, 3.418, 3.369, 3.29, 3.174, 3.131, 3.107, 3.095, 3.088, 3.088, 3.082, 3.064, 3.04, 3.052, 3.003, 2.948, 2.966, 2.93, 2.905, 2.863, 2.832, 2.783, 2.655, 2.344, 1.758, 1.27, 0.885, 0.482, 0.244, 0.159, 0.159, 0.165, 0.214, 0.25, 0.293, 0.354, 0.391, 0.385, 0.342, 0.269, 0.208, 0.189, 0.14, 0.098, 0.098, 0.098, 0.098, 0.134, 0.159, 0.146, 0.159, 0.153, 0.171, 0.177, 0.165, 0.195, 0.195, 0.183, 0.189, 0.171, 0.195, 0.195, 0.195, 0.195, 0.195, 0.195, 0.195, 0.195, 0.189, 0.195, 0.177, 0.177, 0.165, 0.195, 0.159, 0.153, 0.153, 0.146, 0.146, 0.14, 0.153, 0.134, 0.122, 0.128, 0.116, 0.11, 0.098, 0.098, 0.073, 0.079, 0.073, 0.092, 0.079, 0.104, 0.14, 0.14, 0.098, 0.098, 0.092, 0.061, 0.079, 0.073, 0.073, 0.085, 0.085, 0.104, 0.098, 0.098, 0.104, 0.098, 0.098, 0.098, 0.079, 0.079, 0.061, 0.061, 0.067, 0.061, 0.061, 0.055, 0.055, 0.055, 0.067, 0.085, 0.079, 0.073, 0.085, 0.079, 0.079, 0.092, 0.104, 0.073, 0.073, 0.073, 0.085, 0.055, 0.067, 0.055, 0.067, 0.061, 0.055, 0.067, 0.055, 0.085, 0.085, 0.073, 0.079, 0.073, 0.092, 0.073, 0.073, 0.079, 0.079, 0.079, 0.067, 0.085, 0.073, 0.055, 0.061, 0.055, 0.061, 0.061, 0.055, 0.061, 0.055, 0.073, 0.067, 0.055, 0.061, 0.067, 0.061, 0.055, 0.055, 0.061, 0.055, 0.067, 0.061, 0.055, 0.067, 0.061, 0.073, 0.061, 0.061, 0.061, 0.067, 0.049, 0.067, 0.043, 0.061, 0.055, 0.067, 0.067, 0.055, 0.061, 0.067, 0.061, 0.061, 0.055, 0.079, 0.061, 0.055, 0.061, 0.055, 0.061, 0.067, 0.067, 0.055, 0.055, 0.055, 0.061, 0.055, 0.055, 0.055, 0.055, 0.067, 0.061, 0.055, 0.061, 0.067, 0.055, 0.061, 0.061, 0.055, 0.067, 0.055, 0.061, 0.055, 0.055, 0.067, 0.061, 0.067, 0.061, 0.061, 0.067, 0.073, 0.067, 0.061, 0.055, 0.055, 0.055, 0.061] positions = np.array(positionArray) lightIntensity=np.array(LightIntensityArray) mean = sum(np.multiply(positions,lightIntensity))/len(positions) sigma = sum(lightIntensity * (positions - mean)**2)/len(positions) p = [1, mean, sigma] popt, pcov = curve_fit(intensityFunction, positions, lightIntensity, p0=p) plt.scatter(positions, lightIntensity, s=4) plt.plot(positions, gaus(positions,*popt), color='red') plt.xlabel('Position (m)') plt.ylabel('Light Intensity (% max)') plt.legend() plt.show() Output:
2
1
78,427,306
2024-5-3
https://stackoverflow.com/questions/78427306/field-validator-not-getting-called-in-model-serializer
I am quite new to Django. I am working on a django app which needs to store films, and when I try to load film instances with the following program: import os import json import requests # Assuming the JSON content is stored in a variable named 'films_json' json_file_path: str = os.path.join(os.getcwd(), "films.json") # Load the JSON content from the file with open(json_file_path, "r") as file: films_json: dict = json.load(file) # URL of the endpoint endpoint_url = "http://127.0.0.1:8000/site-admin/add-film/" for film_data in films_json["films"]: print() print(json.dumps(film_data, indent=4)) response: requests.Response = requests.post(endpoint_url, json=film_data) if response.status_code == 201: print(f"Film '{film_data['name']}' added successfully") else: print(f"Failed to add film '{film_data['name']}':") print(json.dumps(dict(response.headers),indent=4)) print(f"{json.dumps(response.json(), indent=4)}") I get the following response (for each film): { "name": "Pulp Fiction", "genre": "Crime", "duration": 154, "director": "Quentin Tarantino", "cast": [ "John Travolta", "Uma Thurman", "Samuel L. Jackson", "Bruce Willis" ], "description": "The lives of two mob hitmen, a boxer, a gangster's wife, and a pair of diner bandits intertwine in four tales of violence and redemption.", "image_url": "https://m.media-amazon.com/images/M/MV5BNGNhMDIzZTUtNTBlZi00MTRlLWFjM2ItYzViMjE3YzI5MjljXkEyXkFqcGdeQXVyNzkwMjQ5NzM@._V1_.jpg", "release": "1994-00-00" } Failed to add film 'Pulp Fiction': Bad Request { "Date": "Fri, 03 May 2024 22:35:32 GMT", "Server": "WSGIServer/0.2 CPython/3.10.10", "Content-Type": "application/json", "Vary": "Accept, Cookie", "Allow": "POST, OPTIONS", "djdt-store-id": "bf9dc157db354419a70d69f3be5e8dd7", "Server-Timing": "TimerPanel_utime;dur=0;desc=\"User CPU time\", TimerPanel_stime;dur=0;desc=\"System CPU time\", TimerPanel_total;dur=0;desc=\"Total CPU time\", TimerPanel_total_time;dur=50.99039999186061;desc=\"Elapsed time\", SQLPanel_sql_time;dur=2.606799971545115;desc=\"SQL 14 queries\", CachePanel_total_time;dur=0;desc=\"Cache 0 Calls\"", "X-Frame-Options": "DENY", "Content-Length": "197", "X-Content-Type-Options": "nosniff", "Referrer-Policy": "same-origin", "Cross-Origin-Opener-Policy": "same-origin" } { "release": [ "Date has wrong format. Use one of these formats instead: YYYY-MM-DD." ], "director_id": [ "Invalid pk \"1\" - object does not exist." ], "cast": [ "Invalid pk \"2\" - object does not exist." 
] } This is my view: class AggregateFilmView(generics.CreateAPIView): serializer_class: type = FilmSerializer def post(self, request) -> Response: print(f"### {type(request.data)}", file=sys.stderr) with transaction.atomic(): # Ensure all or nothing persistence try: film_data: dict = request.data # Handle Director if "director" not in film_data.keys(): raise ValidationError("director field must be provided") director_name: str = film_data.pop("director", {}) if not type(director_name) is str: raise ValidationError("diretor name must be a string") director_data: dict[str, str] = {"name": director_name} director_serializer = DirectorSerializer(data=director_data) director_serializer.is_valid(raise_exception=True) director: Director = director_serializer.save() # Handle Cast (Actors) if "cast" not in film_data.keys(): raise ValidationError("cast field must be provided") cast_names: list[str] = film_data.pop("cast", []) if type(cast_names) is not list: raise ValidationError("cast must be a list of names (list[string])") cast_data: list[dict[str, str]] = [] if not len(cast_names) == 0: for name in cast_names: if not type(name) is str: raise ValidationError( "cast must be a list of names (list[string])" ) actor_data: dict[str, str] = {"name": name} cast_data.append(actor_data) actor_serializer = ActorSerializer(data=cast_data, many=True) actor_serializer.is_valid(raise_exception=True) actors: Actor = actor_serializer.save() # Handle Film film_data["director_id"] = director.id # director ID film_data["cast"] = [actor.id for actor in actors] # list of actor IDs types: list[type] = [type(field) for field in film_data.values()] print(types, file=sys.stderr) print(film_data, file=sys.stderr) film_serializer = FilmSerializer(data=film_data) film_serializer.is_valid(raise_exception=True) film_serializer.save() response: Response = Response( {"detail": "Film added succesfully"}, status=status.HTTP_201_CREATED ) except ValidationError as error: response = Response( {"error": error.message}, status=status.HTTP_400_BAD_REQUEST, ) # Return serialized data (customize as needed) return response This is my serializer: class FilmSerializer(serializers.ModelSerializer): class Meta: model = Film # fields: str = "__all__" fields: list[str] = [ "name", "release", "genre", "description", "duration", "director_id", "cast", ] def validate_name(self, value) -> str: print(f"Name validation {value}", file=sys.stderr) if value is None: raise serializers.ValidationError("Film name is required") return value def validate_release(self, value) -> str: print(f"Date validation {value}", file=sys.stderr) if value is None: raise serializers.ValidationError("Release year is required") try: patterns: dict[str, str] = { "%Y": r"^\d{4}$", "%Y-%m": r"^\d{4}-\d{2}$", "%Y-%m-%d": r"^\d{4}-\d{2}-\d{2}$", "%m-%Y": r"^\d{2}-\d{4}$", "%d-%m-%Y": r"^\d{2}-\d{2}-\d{4}$", } format_match: bool = False for date_format, pattern in patterns.items(): if re.match(pattern, value): format_match = True print(f"Matched {date_format} format", file=sys.stderr) return datetime.datetime.strptime(value, date_format) if not format_match: raise serializers.ValidationError( "Invalid date format.", "Accepted formats: %Y, %Y-%m, %Y-%m-%d, %m-%Y, %d-%m-%Y", ) except ValueError: raise serializers.ValidationError("Year must be a valid date") return value def validate_genre(self, value) -> str: print(f"Genre validation {value}", file=sys.stderr) if value is None: raise serializers.ValidationError("Genre is required") if value not in Film.GENRE_CHOICES: raise 
serializers.ValidationError("Invalid genre") return value def validate_description(self, value) -> str: print(f"Description validation {value}", file=sys.stderr) if value is None: raise serializers.ValidationError("Description is required") if len(value) > 300: raise serializers.ValidationError( "Description cannot exceed 300 characters" ) return value def validate_director_id(self, value) -> str: print(f"Director Id {value}", file=sys.stderr) if value is None: raise serializers.ValidationError("Director is required") try: Director.objects.get(pk=value) except Director.DoesNotExist: raise serializers.ValidationError("Invalid director ID") return value def validate_cast(self, value) -> str: print(f"Cast {value}", file=sys.stderr) if value is None: raise serializers.ValidationError("At least one cast member is required") try: Actor.objects.filter(pk__in=value) except Actor.DoesNotExist: raise serializers.ValidationError("Invalid cast ID") And this is my model: class Film(models.Model): GENRE_CHOICES: list[str] = [ "Action", "Comedy", "Crime", "Doctumentary", "Drama", "Horror", "Romance", "Sci-Fi", "Thriller", "Western", ] id = models.AutoField(primary_key=True) name = models.CharField(max_length=64, unique=True) release = models.DateField() genre = models.CharField(max_length=64) description = models.TextField() duration = models.FloatField() director_id = models.ForeignKey( User, on_delete=models.CASCADE, related_name="director" ) cast = models.ManyToManyField(User, related_name="cast") The problem is that the release field is getting refused by a validation different from the one I have defined in the serializer, and the invalidation it is reporting should be avoided by the modifications in the value that my validate_release method is supposed to do. The thing is that the validate_release method is not getting called, as can be seen in the terminal running the server: ### <class 'dict'> [<class 'str'>, <class 'str'>, <class 'int'>, <class 'str'>, <class 'str'>, <class 'str'>, <class 'int'>, <class 'list'>] {'name': 'Pulp Fiction', 'genre': 'Crime', 'duration': 154, 'description': "The lives of two mob hitmen, a boxer, a gangster's wife, and a pair of diner bandits intertwine in four tales of violence and redemption.", 'image_url': 'https://m.media-amazon.com/images/M/MV5BNGNhMDIzZTUtNTBlZi00MTRlLWFjM2ItYzViMjE3YzI5MjljXkEyXkFqcGdeQXVyNzkwMjQ5NzM@._V1_.jpg', 'release': '1994-00-00', 'director_id': 1, 'cast': [2, 3, 4, 5]} Name validation Pulp Fiction Genre validation Crime Description validation The lives of two mob hitmen, a boxer, a gangster's wife, and a pair of diner bandits intertwine in four tales of violence and redemption. Bad Request: /site-admin/add-film/ [04/May/2024 00:36:34] "POST /site-admin/add-film/ HTTP/1.1" 400 197 any ideas on how to fix this?
Because ModelSerializer maps the release field to a DateField, validation is performed against the framework's default input date formats, and your validate_release method is only called after that field-level parsing has already succeeded, which is why you never see it run. To override this, set your own list of valid input formats (I will do it in settings.py, but it can be anywhere). DATE_INPUT_FORMATS = [ "%Y", "%Y-%m", "%m-%Y", "%Y-%m-%d", "%d-%m-%Y", ] serializers.py (remember to import settings from django.conf): class FilmSerializer(serializers.ModelSerializer): release = serializers.DateField(input_formats=settings.DATE_INPUT_FORMATS) class Meta: model = Film fields = "__all__"
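To see which of those format strings accepts which input, here is a quick standard-library check of my own (plain datetime.strptime, which is what the DRF DateField uses under the hood; the sample values are made up):

from datetime import datetime

formats = ["%Y", "%Y-%m", "%m-%Y", "%Y-%m-%d", "%d-%m-%Y"]

for raw in ["1994", "1994-05", "05-1994", "1994-05-03", "03-05-1994"]:
    for fmt in formats:
        try:
            parsed = datetime.strptime(raw, fmt).date()
        except ValueError:
            continue  # try the next format
        print(f"{raw!r} matched {fmt!r} -> {parsed}")
        break

Note that the '1994-00-00' value from your payload will still be rejected by every format, since a month or day of 00 is never a valid date; normalise such values (e.g. send just '1994') before posting.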
2
1
78,428,016
2024-5-4
https://stackoverflow.com/questions/78428016/using-matplotlib-pcolormesh-how-can-i-stop-the-drawn-tiles-from-one-row-to-be-c
I have individual x arrays for each intensity (Z) array. Which means that the intensity rows will not be stacked on top of each other. I do not want the tiles to "tilt" so they are connected to the row above and below, I just want them to be straight in the y direction and connected in the x direction. Thank you very much in advance! Here is a dummy code: import numpy as np import matplotlib.pyplot as plt y_values = np.array([0, 1, 2, 3, 4, 5], dtype=float) x_list = range(50, 70) x_values_list = [] z_values_list = [] for i in range(len(y_values)): x_value_single_list = [np.array(0.01*x**2 + 0.1*x*i + 0.3 * i, dtype= float) for x in x_list] x_values_list.append( x_value_single_list) z_values_list.append(np.random.rand(20)) z_values_list = [np.array(arr, dtype=float) for arr in z_values_list] x_values_list = [np.array(arr, dtype=float) for arr in x_values_list] fig, ax = plt.subplots() c = ax.pcolormesh(x_values_list, y_values, z_values_list, cmap='viridis') #, shading='auto' plt.colorbar(c) plt.show() I have tried a bunch of kwargs but a lot of them does not seem to do anything and seem to not be implemented for pcolormesh and more suited for regular line plots. I have obviously tried a lot with ChatGPT.
If you use flat shading, where x and y represent the corners of each quadrilateral, then you can plot one row at a time. Note I explicitly set the vmin and vmax parameter in the pcolormesh call so that the colour range will be the same for each row even if their actual max and min differ. import numpy as np import matplotlib.pyplot as plt fig, ax = plt.subplots() y_values = np.array([0, 1, 2, 3, 4, 5], dtype=float) x_list = range(50, 70) for i in range(len(y_values) - 1): x_value_single_list = [np.array(0.01*x**2 + 0.1*x*i + 0.3 * i, dtype= float) for x in x_list] z_values = np.random.rand(1, 19) # 2D array but single row. c = ax.pcolormesh(x_value_single_list, y_values[i:i+2], z_values, cmap='viridis', shading='flat', vmin=0, vmax=1) plt.colorbar(c) plt.show()
3
2
78,428,311
2024-5-4
https://stackoverflow.com/questions/78428311/custom-hadamard-style-product-of-two-matrices-in-python
I have two numpy matrices A and B (of size m x n, m<>n, m=number of rows, n=number of columns). Both are digital matrices, consisting solely of entries 0 or 1. I want to calculate a matrix product C from these two matrices that obeys the following rules in terms of A(row, column): If A(i,j) = 1 and B(i,j)=1, C(i,j)=1. If A(i,j) = 1 and B(i,j)=0, C(i,j)=-q. If A(i,j) = 0, C(i,j) = 0, regardless of the value of B(i,j). where q satisfies the equation p-(n-p)*q=0 where p is the number of 1s in B(i,:). I could implement this with a couple of for loops, but I am tempted to ask if there is one of those single line pythonic ways of implementing this logic.
IIUC you can do: Suppose you have two 2x5 matrices (m=2 rows, n=5 columns): # A [[0 1 0 1 1] [1 1 1 1 0]] # B [[0 0 1 1 1] [0 0 1 1 0]] Then: m, n = A.shape # 2, 5 p = B.sum(axis=1) # [3 2] q = p / (n - p) # [1.5 0.66666667] m1 = A & B m2 = (A & ~B) * -q[:, None] print(m1 + m2) Prints: [[ 0. -1.5 0. 1. 1. ] [-0.66666667 -0.66666667 1. 1. 0. ]]
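If you want this as a reusable helper, here is a sketch of my own (not part of the answer above) that wraps the same idea and guards against rows of B that are all ones, where n - p would be zero:

import numpy as np

def custom_product(A, B):
    # A, B: 0/1 arrays of shape (m, n)
    A = np.asarray(A).astype(bool)
    B = np.asarray(B).astype(bool)
    n = A.shape[1]
    p = B.sum(axis=1)  # number of 1s per row of B
    # q = p / (n - p), with rows where B is all ones left at 0
    q = np.divide(p, n - p, out=np.zeros(len(p)), where=(n - p) != 0)
    return (A & B).astype(float) + (A & ~B) * -q[:, None]

A = np.array([[0, 1, 0, 1, 1], [1, 1, 1, 1, 0]])
B = np.array([[0, 0, 1, 1, 1], [0, 0, 1, 1, 0]])
print(custom_product(A, B))   # same output as above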
4
1
78,427,983
2024-5-4
https://stackoverflow.com/questions/78427983/why-does-dictid-1-id-2-sometimes-raise-keyerror-id-instead-of-a
Normally, if you try to pass multiple values for the same keyword argument, you get a TypeError: In [1]: dict(id=1, **{'id': 2}) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Input In [1], in <cell line: 1>() ----> 1 dict(id=1, **{'id': 2}) TypeError: dict() got multiple values for keyword argument 'id' But if you do it while handling another exception, you get a KeyError instead: In [2]: try: ...: raise ValueError('foo') # no matter what kind of exception ...: except: ...: dict(id=1, **{'id': 2}) # raises: KeyError: 'id' ...: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Input In [2], in <cell line: 1>() 1 try: ----> 2 raise ValueError('foo') # no matter what kind of exception 3 except: ValueError: foo During handling of the above exception, another exception occurred: KeyError Traceback (most recent call last) Input In [2], in <cell line: 1>() 2 raise ValueError('foo') # no matter what kind of exception 3 except: ----> 4 dict(id=1, **{'id': 2}) KeyError: 'id' What's going on here? How could a completely unrelated exception affect what kind of exception dict(id=1, **{'id': 2}) throws? For context, I discovered this behavior while investigating the following bug report: https://github.com/tortoise/tortoise-orm/issues/1583 This has been reproduced on CPython 3.11.8, 3.10.5, and 3.9.5.
This looks like a Python bug. The code that's supposed to raise the TypeError works by detecting and replacing an initial KeyError, but this code doesn't work right. When the exception occurs in the middle of another exception handler, the code that should raise the TypeError fails to recognize the KeyError. It ends up letting the KeyError through, instead of replacing it with a TypeError. The bug appears to be gone on 3.12, due to changes in the exception implementation. Here's the deep dive, for the CPython 3.11.8 source code. Similar code exists on 3.10 and 3.9. As we can see by using the dis module to examine the bytecode for dict(id=1, **{'id': 2}): In [1]: import dis In [2]: dis.dis("dict(id=1, **{'id': 2})") 1 0 LOAD_NAME 0 (dict) 2 LOAD_CONST 3 (()) 4 LOAD_CONST 0 ('id') 6 LOAD_CONST 1 (1) 8 BUILD_MAP 1 10 LOAD_CONST 0 ('id') 12 LOAD_CONST 2 (2) 14 BUILD_MAP 1 16 DICT_MERGE 1 18 CALL_FUNCTION_EX 1 20 RETURN_VALUE Python uses the DICT_MERGE opcode to merge two dicts, to build the final keyword argument dict. The relevant part of the DICT_MERGE code is as follows: if (_PyDict_MergeEx(dict, update, 2) < 0) { format_kwargs_error(tstate, PEEK(2 + oparg), update); Py_DECREF(update); goto error; } It uses _PyDict_MergeEx to attempt to merge two dicts, and if that fails (and raises an exception), it uses format_kwargs_error to try to raise a different exception. When the third argument to _PyDict_MergeEx is 2, that function will raise a KeyError for duplicate keys, inside the dict_merge helper function. This is where the KeyError comes from. Once the KeyError is raised, format_kwargs_error has the job of replacing it with a TypeError. It tries to do so with the following code: else if (_PyErr_ExceptionMatches(tstate, PyExc_KeyError)) { PyObject *exc, *val, *tb; _PyErr_Fetch(tstate, &exc, &val, &tb); if (val && PyTuple_Check(val) && PyTuple_GET_SIZE(val) == 1) { but this code is looking for an unnormalized exception, an internal way of representing exceptions that isn't exposed to Python-level code. It expects the exception value to be a 1-element tuple containing the key that the KeyError was raised for, instead of an actual exception object. Exceptions raised inside C code are usually unnormalized, but not if they occur while Python is handling another exception. Unnormalized exceptions cannot handle exception chaining, which occurs automatically for exceptions raised inside an exception handler. In this case, the internal _PyErr_SetObject routine will automatically normalize the exception: exc_value = _PyErr_GetTopmostException(tstate)->exc_value; if (exc_value != NULL && exc_value != Py_None) { /* Implicit exception chaining */ Py_INCREF(exc_value); if (value == NULL || !PyExceptionInstance_Check(value)) { /* We must normalize the value right now */ Since the KeyError has been normalized, format_kwargs_error doesn't understand what it's looking at. It lets the KeyError through, instead of raising the TypeError it's supposed to. On Python 3.12, things are different. The internal exception representation has been changed, so any raised exception is always normalized. Thus, the Python 3.12 version of format_kwargs_error looks for a normalized exception instead of an unnormalized exception, and if _PyDict_MergeEx has raised a KeyError, the code will recognize it: else if (_PyErr_ExceptionMatches(tstate, PyExc_KeyError)) { PyObject *exc = _PyErr_GetRaisedException(tstate); PyObject *args = ((PyBaseExceptionObject *)exc)->args; if (exc && PyTuple_Check(args) && PyTuple_GET_SIZE(args) == 1) {
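If you need behaviour that is consistent across the affected versions mentioned above (3.9-3.11), one defensive sketch of my own (not part of CPython or of the library in the linked bug report) is to check for duplicate keys yourself before unpacking, so you never depend on which exception type the interpreter raises:

def merge_kwargs(explicit, extra):
    # Raise a predictable error instead of relying on dict(**...) internals.
    duplicates = explicit.keys() & extra.keys()
    if duplicates:
        raise TypeError(f"got multiple values for keyword argument(s): {sorted(duplicates)}")
    return {**explicit, **extra}

try:
    raise ValueError('foo')
except ValueError:
    try:
        merge_kwargs({'id': 1}, {'id': 2})
    except TypeError as e:
        print(e)   # got multiple values for keyword argument(s): ['id']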
49
48
78,427,086
2024-5-3
https://stackoverflow.com/questions/78427086/parse-string-with-specific-characters-if-they-exist-using-regex
I have text containing assorted transactions that I'm trying to parse using regex. Text looks like this: JT Meta Platforms, Inc. - Class A Common Stock (META) [ST]S (partial) 02/08/2024 03/05/2024 $1,001 - $15,000 F S: New S O: Morgan Stanley - Select UMA Account # 1 JT Microsoft Corporation - Common Stock (MSFT) [ST]S (partial) 02/08/2024 03/05/2024 $1,001 - $15,000 F S: New S O: Morgan Stanley - Select UMA Account # 1 JT Microsoft Corporation - Common Stock (MSFT) [OP]P 02/13/2024 03/05/2024 $500,001 - $1,000,000 F S: New S O: Morgan Stanley - Portfolio Management Active Assets Account D: Call options; Strike price $170; Expires 01/17 /2025 C: Ref: 044Q34N6 I've created a regex to parse out individual transactions, denoted by combination of ticker (eg, (MSFT)), type (eg, [ST], [OP]) and amount (eg, $500,000, etc) as follows: transactions = rx.findall(r"\([A-Z][^$]*\$[^$]*\$[,\d]+", text) Transactions are returned as a list and look like this for example: (META) [ST]S (partial) 02/08/2024 03/05/2024 $1,001 - $15,000 I'd like to add logic to include description details (ie, 'D:...') if they exist. I tried with the below pattern, but it winds up returning just one large transaction since the first two transactions don't have description details (ie, 'D:'). I'd like to see this: (META) [ST]S (partial) 02/08/2024 03/05/2024 $1,001 - $15,000 .. (MSFT) [ST]S (partial) 02/08/2024 03/05/2024 $1,001 - $15,000 .. (MSFT) [OP]P 02/13/2024 03/05/2024 $500,001 - $1,000,000 F S: New S O: Morgan Stanley - Portfolio Management Active Assets Account D: Call options; Strike price $170; Expires 01/17 /2025 What am I doing wrong? rx.findall(r"\([A-Z][^$]*\$[^$]*\$[,\d]+[\s\S]*?D:(.*)", text) Edit: To deal with cases where colon isn't contiguous to 'D' (imperfect PDF parsing), added to @zdim's answer and this addresses the above issues: rx.findall('\([A-Z][^$]*\$[^$]*\$[,\d]+(?:[^$]*D:?.+)?', text)
Make the "rest" after the transaction part (up to and including D:... line) optional trans = re.findall( r'\([A-Z] [^$]* \$[^$]*\$[,\d]+ (?: [^$]* D:.+ )?', text, re.X) for t in trans: print(t,'\n---') This prints (META) [ST]S (partial) 02/08/2024 03/05/2024 $1,001 - $15,000 --- (MSFT) [ST]S (partial) 02/08/2024 03/05/2024 $1,001 - $15,000 --- (MSFT) [OP]P 02/13/2024 03/05/2024 $500,001 - $1,000,000 F S: New S O: Morgan Stanley - Portfolio Management Active Assets Account D: Call options; Strike price $170; Expires 01/17 /2025 Now if you need to further process that extra text it should be easy.
2
1
78,426,876
2024-5-3
https://stackoverflow.com/questions/78426876/accessing-values-in-a-dictionary-with-jmespath-in-python-when-the-value-is-an-in
Using the dictionary: d = {'1': 'a'} How can I extract the value a using the JMESPath library in Python? The following attempts did not work: import jmespath value = jmespath.search("1", d) # throws jmespath.exceptions.ParseError: invalid token value = jmespath.search("'1'", d) # returns the key '1' instead of the value 'a'
You can directly address an identifier if it is an unquoted-string, so A-Za-z0-9_ and if the string starts with A-Za-z_. From the grammar rule listed above identifiers can be one or more characters, and must start with A-Za-z_. An identifier can also be quoted. This is necessary when an identifier has characters not specified in the unquoted-string grammar rule. In this situation, an identifier is specified with a double quote, followed by any number of unescaped-char or escaped-char characters, followed by a double quote. Source: JMESPath documentation, Identifiers Since your identifier is 0-9, you have to use double quoted string for this identifier, so "1". So, your working Python code would be: import jmespath d = {'1': 'a'} value = jmespath.search('"1"', d) print(value)
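If the key is not known in advance (or may contain spaces or other characters that are not valid in an unquoted identifier), a small helper I'd sketch is to build the quoted identifier with json.dumps, since JMESPath quoted identifiers follow JSON string escaping rules:

import json
import jmespath

def search_key(key, data):
    # json.dumps adds the surrounding double quotes and any needed escapes, e.g. '1' -> '"1"'
    return jmespath.search(json.dumps(key), data)

print(search_key('1', {'1': 'a'}))            # a
print(search_key('my key', {'my key': 'b'}))  # b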
2
2
78,426,163
2024-5-3
https://stackoverflow.com/questions/78426163/how-do-python-main-process-and-forked-process-share-gc-information
From the post https://stackoverflow.com/a/53504673/9191338 : spawn objects are cloned and each is finalised in each process fork/forkserver objects are shared and finalised in the main process This seems to be the case: import os from multiprocessing import Process import multiprocessing multiprocessing.set_start_method('fork', force=True) class Track: def __init__(self): print(f'{os.getpid()=} object created in {__name__=}') def __getstate__(self): print(f'{os.getpid()=} object pickled in {__name__=}') return {} def __setstate__(self, state): print(f'{os.getpid()=} object unpickled in {__name__=}') return self def __del__(self): print(f'{os.getpid()=} object deleted in {__name__=}') def f(x): print(f'{os.getpid()=} function executed in {__name__=}') if __name__ == '__main__': x = Track() for i in range(2): print(f'{os.getpid()=} Iteration: {i}, Process object created') p = Process(target=f, args=(x,)) print(f'{os.getpid()=} Iteration: {i}, Process created and started') p.start() print(f'{os.getpid()=} Iteration: {i}, Process starts to run functions') p.join() The output is: os.getpid()=30620 object created in __name__='__main__' os.getpid()=30620 Iteration: 0, Process object created os.getpid()=30620 Iteration: 0, Process created and started os.getpid()=30620 Iteration: 0, Process starts to run functions os.getpid()=30623 function executed in __name__='__main__' os.getpid()=30620 Iteration: 1, Process object created os.getpid()=30620 Iteration: 1, Process created and started os.getpid()=30620 Iteration: 1, Process starts to run functions os.getpid()=30624 function executed in __name__='__main__' os.getpid()=30620 object deleted in __name__='__main__' Indeed the object is only deleted in the main process. My question is, how is this achieved? Although the new process is forked from the main process, after fork, the new process is essentially another process, how can these two processes share gc information? In addition, does the gc information sharing happen for every object, or just the object passed as argument for subprocess?
On Linux, with the fork start method, Python uses os._exit() (note the underscore) to terminate a child process once its target function ends. This is somewhat equivalent to the process crashing: no destructors get a chance to run, the process just terminates, and the OS reclaims whatever resources were allocated to it. So nothing is really "shared" between the processes: the child's copy of the object is never finalised at all, and the single __del__ you see comes from the parent finalising its own copy when the interpreter shuts down. The practical takeaway is that you shouldn't rely on destructors being called in child processes to release resources (for example, to shut down an external server).
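As a practical pattern (a sketch of my own; cleanup() is a hypothetical stand-in for whatever resource you hold), release resources explicitly inside the worker instead of relying on __del__:

import os
from multiprocessing import Process

def cleanup():
    # hypothetical: close sockets, flush files, shut down an external server, ...
    print(f"{os.getpid()=} explicit cleanup ran")

def worker(x):
    try:
        print(f"{os.getpid()=} doing work with {x}")
    finally:
        cleanup()   # runs even though __del__ will not run in the forked child

if __name__ == '__main__':
    p = Process(target=worker, args=(42,))
    p.start()
    p.join()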
2
2
78,425,500
2024-5-3
https://stackoverflow.com/questions/78425500/how-to-parallelize-a-simple-loop-over-a-matrix
I have a large matrix and I want to output all the indices where the elements in the matrix are less than 0. I have this MWE in numba: import numba as nb import numpy as np A = np.random.random(size = (1000, 1000)) - 0.1 @nb.njit(cache=True) def numba_only(arr): rows = np.empty(arr.shape[0]*arr.shape[1]) cols = np.empty(arr.shape[0]*arr.shape[1]) idx = 0 for i in range(arr.shape[0]): for j in range(A.shape[1]): if arr[i, j] < 0: rows[idx] = i cols[idx] = j idx += 1 return rows[:idx], cols[:idx] Timing, I get: %timeit numba_only(A) 2.29 ms Β± 114 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each) This is a faster than np.where(A<0) (without numba) which gives: %timeit numpy_only(A) 3.56 ms Β± 59.1 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each) But it is a little slower than np.where wrapped by @nb.njit: @nb.njit(cache=True) def numba_where(arr): return np.where(arr<0) which gives: %timeit numba_where(A) 1.76 ms Β± 84.9 Β΅s per loop (mean Β± std. dev. of 7 runs, 1 loop each) Can the numba code be sped up by parallelization somehow? I realise it might be memory bound but I think that modern hardware should allow some level of parallel access to memory. I am not sure how to use nb.prange to achieve this due to the index idx in the loop.
The provided algorithm is inherently sequential due to the long dependency chain of instruction which is also data dependent. However, this is possible to do an equivalent operation in parallel using a scan. Implementing such a parallel algorithm based on a scan is much more complex (especially an efficient cache-friendly scan). A relatively simple implementation is to iterate twice on the dataset so to compute local sums, then compute cumulative sums and finally do the actual work. @nb.njit(cache=True, parallel=True) def numba_parallel_where(arr): flattenArr = arr.reshape(-1) arrSize = flattenArr.size chunkSize = 2048 chunkCount = (arrSize + chunkSize - 1) // chunkSize counts = np.empty(chunkCount, dtype=np.int32) for chunkId in nb.prange(chunkCount): start = chunkId * chunkSize end = min(start + chunkSize, arrSize) count = 0 for i in range(start, end): count += flattenArr[i] < 0 counts[chunkId] = count offsets = np.empty(chunkCount + 1, dtype=np.int64) offsets[0] = 0 for chunkId in range(chunkCount): offsets[chunkId + 1] = offsets[chunkId] + counts[chunkId] outSize = offsets[-1] rows = np.empty(outSize, dtype=np.int32) cols = np.empty(outSize, dtype=np.int32) n = np.int32(arr.shape[1]) for chunkId in nb.prange(chunkCount): start = chunkId * chunkSize end = min(start + chunkSize, arrSize) idx = offsets[chunkId] for i in range(start, end): if flattenArr[i] < 0: rows[idx] = i // n cols[idx] = i % n idx += 1 return rows, cols Performance Results Here are performance results on my 6-core i5-9600KF CPU with a 40 GiB/s RAM (optimal bandwidth for reads) on Windows: Sequential: 1.68 ms Parallel (no pre-allocations): 0.85 ms Parallel (with pre-allocations): 0.55 ms <----- Theoretical optimal: 0.20 ms The parallel implementation is about twice faster. It succeed to reach a memory throughput of ~19 GiB/s on my machine which is sub-optimal, but not so bad. While this seems disappointing on a 6-core CPU, the parallel implementation is not very optimized and there are several points to consider: allocating the output is really slow (is seems to take 0.2-0.3 ms) the input array is read twice putting a lot of pressure on memory modulo/division used to compute the indices are a bit expensive Pre-allocating the output (filled manually with zeros to avoid possible page-faults) and passing it to the function helps to significantly reduce the execution time. This optimized version is 3 times faster than the initial (sequential) one. It reaches ~28 GiB/s which is relatively good. Notes about Performance Other possible optimizations include: using clusters of chunks so to avoid iterating twice on the whole dataset (this is hard in Numba to write a fast implementation since parallelism is very limited compared to native languages); iterate over lines of the matrix to avoid modulos/divisions assuming the number of lines is neither too big nor to small for sake of performance. AFAIK, there is an opened bug for the slow Numba allocations since few years but it has obviously not been solved yet. On top of that, relatively large allocations can be quite expensive on Windows too. A natively-compiled code (written in C/C++/Fortran/Rust) on Linux should not exhibit such an issue. Note that such optimizations will make the implementation even more complex than the above one.
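As a quick sanity check (my addition, assuming the function above has been compiled): because the chunks write to disjoint output slices laid out in chunk order, the result should match np.where exactly, ordering included:

import numpy as np

A = np.random.random(size=(1000, 1000)) - 0.1
rows, cols = numba_parallel_where(A)
exp_rows, exp_cols = np.where(A < 0)

assert np.array_equal(rows, exp_rows)
assert np.array_equal(cols, exp_cols)
print(len(rows), "negative entries, identical to np.where")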
2
2
78,423,942
2024-5-3
https://stackoverflow.com/questions/78423942/computing-line-that-follows-objects-orientation-in-contours
I am using cv2 to compute the line withing the object present in the mask image attached. The orientation/shape of the object might vary (horizontal, vertical) from the other mask images in my dataset. But the problem is, the method I have used to compute the line is not reliable. It works for the few images but failed to draw a line accurately for other mask images. Could anyone suggest an alternative approach? this is the raw mask image: This is how a line shall be drawn (considering object orientation) Here is the code which represents my approach. I will highly appreciate any help from your side. import numpy as np import cv2 import matplotlib.pyplot as plt image_bgr = cv2.imread(IMAGE_PATH) mask = masks[2] mask_uint8 = mask.astype(np.uint8) * 255 contours, _ = cv2.findContours(mask_uint8, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) for c in contours: # Calculate the centroid (center point) of the contour M = cv2.moments(c) cx = int(M['m10'] / M['m00']) cy = int(M['m01'] / M['m00']) cv2.drawContours(image_bgr, [c], -1, (255, 0, 0), 3) cv2.circle(image_bgr, (cx, cy), 5, (0, 255, 0), -1) left_side_point = tuple(c[c[:, :, 0].argmin()][0]) right_side_point = tuple(c[c[:, :, 0].argmax()][0]) center_point = (cx, cy) left_center_point = ((left_side_point[0] + center_point[0]) // 2, (left_side_point[2] + center_point[2]) // 2) right_center_point = ((right_side_point[0] + center_point[0]) // 2, (right_side_point[2] + center_point[2]) // 2) cv2.line(image_bgr, left_side_point, left_center_point, (0, 0, 255), 2) cv2.line(image_bgr, left_center_point, center_point, (0, 0, 255), 2) cv2.line(image_bgr, center_point, right_center_point, (0, 0, 255), 2) cv2.line(image_bgr, right_center_point, right_side_point, (0, 0, 255), 2) plt.imshow(image_bgr) plt.show() ´´´ [1]: https://i.sstatic.net/ZQja9VmS.png [2]: https://i.sstatic.net/mLK66nAD.png
When I was doing particle scan analysis, I eventually went with a PCA-like approach: %matplotlib notebook import matplotlib.pyplot as plt import numpy as np import cv2 im = cv2.imread("mask.png", 0) # read as gray y, x = np.where(im) # get non-zero elements centroid = np.mean(x), np.mean(y) # get the centroid of the particle x_diff = x - centroid[0] # center x y_diff = y - centroid[1] # center y cov_matrix = np.cov(x_diff, y_diff) # get the convariance eigenvalues, eigenvectors = np.linalg.eig(cov_matrix) # apply EVD indicesForSorting = np.argsort(eigenvalues)[::-1] # sort to get the primary first eigenvalues = eigenvalues[indicesForSorting] eigenvectors = eigenvectors[:, indicesForSorting] plt.figure() plt.imshow(im, cmap = "gray") # plot image vecPrimary = eigenvectors[:, 0] * np.sqrt(eigenvalues[0]) plt.plot([centroid[0] - vecPrimary[0], centroid[0] + vecPrimary[0]], [centroid[1] - vecPrimary[1], centroid[1] + vecPrimary[1]]) vecSecondary = eigenvectors[:, 1] * np.sqrt(eigenvalues[1]) plt.plot([centroid[0] - vecSecondary[0], centroid[0] + vecSecondary[0]], [centroid[1] - vecSecondary[1], centroid[1] + vecSecondary[1]]) I like this approach because it also scales the line. If this is not desirable in your case, you can get the angle of this line and draw an infinite one, then mask it with the image. Hope this helps you further Edit: drawing the lines with opencv ### same analysis as before im = cv2.imread("mask.png") pt1 = (int(centroid[0] - vecPrimary[0]), int(centroid[1] - vecPrimary[1])) pt2 = (int(centroid[0] + vecPrimary[0]), int(centroid[1] + vecPrimary[1])) cv2.line(im, pt1, pt2, (255, 0, 0), 2) # blue line pt1 = (int(centroid[0] - vecSecondary[0]), int(centroid[1] - vecSecondary[1])) pt2 = (int(centroid[0] + vecSecondary[0]), int(centroid[1] + vecSecondary[1])) cv2.line(im, pt1, pt2, (0, 0, 255), 2) # redline cv2.imwrite("maskWithLines.png", im) Results: EDIT: as I said in my answer, just multiply the vectors by a scale and then use the mask to biwise_and: im = cv2.imread("mask.png") imGray = cv2.imread("mask.png", 0) # read imgray y, x = np.where(imGray) # get non-zero elements centroid = np.mean(x), np.mean(y) # get the centroid of the particle x_diff = x - centroid[0] # center x y_diff = y - centroid[1] # center y cov_matrix = np.cov(x_diff, y_diff) # get the convariance eigenvalues, eigenvectors = np.linalg.eig(cov_matrix) # apply EVD indicesForSorting = np.argsort(eigenvalues)[::-1] # sort to get the primary first and secondary second eigenvalues = eigenvalues[indicesForSorting] # sort eigenvalues eigenvectors = eigenvectors[:, indicesForSorting] # sort eigenvectors Scale = 100 # this can be adjusted, iut is actually not important as long as it is a very high value vecPrimary = eigenvectors[:, 0] * np.sqrt(eigenvalues[0]) * Scale vecSecondary = eigenvectors[:, 1] * np.sqrt(eigenvalues[1]) * Scale pt1 = (int(centroid[0] - vecPrimary[0]), int(centroid[1] - vecPrimary[1])) pt2 = (int(centroid[0] + vecPrimary[0]), int(centroid[1] + vecPrimary[1])) cv2.line(im, pt1, pt2, (255, 0, 0), 2) # blue line pt1 = (int(centroid[0] - vecSecondary[0]), int(centroid[1] - vecSecondary[1])) pt2 = (int(centroid[0] + vecSecondary[0]), int(centroid[1] + vecSecondary[1])) cv2.line(im, pt1, pt2, (0, 0, 255), 2) # red line im = cv2.bitwise_and(im, im, mask = imGray) # mask the lines cv2.imwrite("maskWithLines.png", im) The results will look like this:
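If you would rather stay entirely inside OpenCV, an alternative sketch (not what the answer above uses) is cv2.fitLine on the foreground pixels, which also returns the dominant direction as a unit vector plus a point on the line:

import cv2
import numpy as np

im = cv2.imread("mask.png")                    # same file name as above
gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
ys, xs = np.nonzero(gray)                      # foreground pixel coordinates
pts = np.column_stack((xs, ys)).astype(np.float32)

vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
scale = 2000                                   # long enough to cross the whole object
pt1 = (int(x0 - vx * scale), int(y0 - vy * scale))
pt2 = (int(x0 + vx * scale), int(y0 + vy * scale))
cv2.line(im, pt1, pt2, (0, 0, 255), 2)
im = cv2.bitwise_and(im, im, mask=gray)        # clip the line to the mask, as above
cv2.imwrite("maskWithFitLine.png", im)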
2
3
78,423,810
2024-5-3
https://stackoverflow.com/questions/78423810/sympy-piecewise-with-numpy-array
I'd like to implement sympy.Piecewise on numpy arrays. Consider: import numpy as np from sympy import Piecewise x = np.array([0.1,1,2]) y = np.array([10,10,10]) Piecewise((x * y, x > 0.9),(0, True)) However, when I tried running it I got the following error: TypeError: Argument must be a Basic object, not ndarray Is there a way to get around this? I've tried list comprehension. However, it gets more difficult when there are more variables involved.
sympy.Piecewise is not directly compatible with numpy arrays. Use np.where to achieve similar functionality: import numpy as np x = np.array([0.1,1,2]) y = np.array([10,10,10]) result = np.where(x > 0.9, x * y, 0)
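If you want to keep the expression symbolic in sympy and only evaluate it on numpy arrays at the end, lambdify is another option (a sketch; the symbol names are mine):

import numpy as np
import sympy as sp

xs, ys = sp.symbols('x y')
expr = sp.Piecewise((xs * ys, xs > 0.9), (0, True))
f = sp.lambdify((xs, ys), expr, modules='numpy')

x = np.array([0.1, 1, 2])
y = np.array([10, 10, 10])
print(f(x, y))   # [ 0. 10. 20.]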
2
1
78,423,647
2024-5-3
https://stackoverflow.com/questions/78423647/lambda-comparing-a-timestamp-to-all-timestamps-from-another-column-within-a-grou
I'm having trouble with specifying a lambda function. I would like to have something like the lambda below, but not quite. The code should compare a rejected_time with any paid_out_time within the group and return True, if a rejected_time occurs within 5 minutes after any paid_out_time. f = lambda x: ((x['rejected_time'].dropna() - x['paid_out_time'].dropna()).between(pd.Timedelta(0), pd.Timedelta(minutes=5))) Using x['paid_out_time'].min() generates about 100k True values, but dropping .min() results in dramatic reduction. I can't figure out how to use all paid_out_times in the comparison against a row-wise rejected_time and see if the rejected time occurs 0-5minutes after the paid_out_time. I've been testing this code: cols = ['paid_out_time', 'rejected_time'] df[cols] = df[cols].apply(pd.to_datetime, errors='coerce') f = lambda x: ((x['rejected_time'].dropna() - x['paid_out_time'].dropna().min()).between(pd.Timedelta(0), pd.Timedelta(minutes=5))) df['paid_out_auto_rejection'] = df.groupby('personal_id', group_keys=False).apply(f).astype(int) Here's some test data: personal_id application_id rejected_time paid_out_time expected 26A 1ab 2022-09-12 09:20:40.592 NaT 1 26A 1ab 2022-08-23 07:40:03.447463 NaT 0 26A 1ab 2022-08-02 23:16:59.545392 NaT 1 26A 1ab 2022-08-02 23:16:59.545392 NaT 1 26A 1ab 2022-09-12 09:20:40.592000 2022-08-02 23:16:59.545392 1 26A 1ab 2022-09-02 18:33:42.226000 NaT 0 26A 8f0 2022-09-12 09:20:40.592000 NaT 1 26A 8f0 2022-09-12 09:20:40.592000 NaT 1 26A 8f0 NaT 2022-09-12 09:20:40.592 0 26A 8f0 2022-09-12 09:21:08.604000 NaT 1 26A 8f0 2022-09-22 08:27:45.693060 NaT 0
EDIT: For improve performance is used merge_asof with tolerance parameter: cols = ['paid_out_time', 'rejected_time'] df[cols] = df[cols].apply(pd.to_datetime, errors='coerce') df1 = df[['personal_id','application_id','rejected_time']].reset_index() df2 = df[['personal_id','application_id','paid_out_time']] df3 = pd.merge_asof(df1.sort_values('rejected_time').dropna(subset=['rejected_time']), df2.sort_values('paid_out_time').dropna(subset=['paid_out_time']), left_on='rejected_time', right_on='paid_out_time', by='personal_id', direction="nearest", tolerance=pd.Timedelta("5Min") ).set_index('index') df['new'] = (df3['rejected_time'].sub(df3['paid_out_time']).notna() .reindex(df1.index, fill_value=0) .astype(int)) print (df) personal_id application_id rejected_time \ 0 26A 1ab 2022-09-12 09:20:40.592000 1 26A 1ab 2022-08-23 07:40:03.447463 2 26A 1ab 2022-08-02 23:16:59.545392 3 26A 1ab 2022-08-02 23:16:59.545392 4 26A 1ab 2022-09-12 09:20:40.592000 5 26A 1ab 2022-09-02 18:33:42.226000 6 26A 8f0 2022-09-12 09:20:40.592000 7 26A 8f0 2022-09-12 09:20:40.592000 8 26A 8f0 NaT 9 26A 8f0 2022-09-12 09:21:08.604000 10 26A 8f0 2022-09-22 08:27:45.693060 paid_out_time new 0 NaT 1 1 NaT 0 2 NaT 1 3 NaT 1 4 2022-08-02 23:16:59.545392 1 5 NaT 0 6 NaT 1 7 NaT 1 8 2022-09-12 09:20:40.592000 0 9 NaT 1 10 NaT 0 If need compare all non missing values use numpy broadcasting: cols = ['paid_out_time', 'rejected_time'] df[cols] = df[cols].apply(pd.to_datetime, errors='coerce') def f(x): arr = x['rejected_time'].dropna().to_numpy() - x['paid_out_time'].dropna().to_numpy()[:, None] m = (arr >= pd.Timedelta(0)) & (arr <= pd.Timedelta(minutes=5)) x.loc[x['rejected_time'].notna(), 'new'] = np.any(m, axis=0) return x out = (df.groupby('personal_id', group_keys=False).apply(f) .fillna({'new':False}).astype({'new':int})) print (out) personal_id application_id rejected_time \ 0 26A 1ab 2022-09-12 09:20:40.592000 1 26A 1ab 2022-08-23 07:40:03.447463 2 26A 1ab 2022-08-02 23:16:59.545392 3 26A 1ab 2022-08-02 23:16:59.545392 4 26A 1ab 2022-09-12 09:20:40.592000 5 26A 1ab 2022-09-02 18:33:42.226000 6 26A 8f0 2022-09-12 09:20:40.592000 7 26A 8f0 2022-09-12 09:20:40.592000 8 26A 8f0 NaT 9 26A 8f0 2022-09-12 09:21:08.604000 10 26A 8f0 2022-09-22 08:27:45.693060 paid_out_time new 0 NaT 1 1 NaT 0 2 NaT 1 3 NaT 1 4 2022-08-02 23:16:59.545392 1 5 NaT 0 6 NaT 1 7 NaT 1 8 2022-09-12 09:20:40.592000 0 9 NaT 1 10 NaT 0
2
1
78,422,647
2024-5-3
https://stackoverflow.com/questions/78422647/sort-a-list-of-points-in-row-major-snake-scan-order
I have a list of points that represent circles that have been detected in an image. [(1600.0, 26.0), (1552.0, 30.0), (1504.0, 32.0), (1458.0, 34.0), (1408.0, 38.0), (1360.0, 40.0), (1038.0, 54.0), (1084.0, 52.0), (1128.0, 54.0), (1174.0, 50.0), (1216.0, 52.0), (1266.0, 46.0), (1310.0, 46.0), (1600.0, 74.0), (1552.0, 76.0), (1504.0, 82.0), (1456.0, 80.0), (1406.0, 88.0), (1362.0, 86.0), (1310.0, 90.0), (1268.0, 94.0), (1224.0, 96.0), (1176.0, 98.0), (1128.0, 100.0), (1084.0, 100.0), (1040.0, 100.0), (996.0, 102.0), (992.0, 62.0), (950.0, 60.0), (950.0, 106.0), (908.0, 104.0), (904.0, 64.0), (862.0, 66.0), (862.0, 110.0), (820.0, 108.0), (816.0, 62.0), (776.0, 112.0), (774.0, 66.0), (732.0, 112.0), (730.0, 68.0), (686.0, 108.0), (684.0, 64.0), (642.0, 66.0), (600.0, 70.0), (600.0, 112.0), (558.0, 112.0), (552.0, 64.0), (512.0, 66.0), (510.0, 112.0), (470.0, 70.0), (464.0, 110.0), (420.0, 66.0), (376.0, 68.0), (376.0, 112.0), (332.0, 68.0), (332.0, 112.0), (426.0, 118.0), (1598.0, 124.0), (1552.0, 124.0), (1504.0, 126.0), (1454.0, 128.0), (1404.0, 134.0), (1362.0, 132.0), (1310.0, 138.0), (1266.0, 138.0), (1220.0, 136.0), (336.0, 156.0), (378.0, 156.0), (422.0, 156.0), (472.0, 160.0), (508.0, 154.0), (556.0, 154.0), (602.0, 156.0), (646.0, 156.0), (686.0, 154.0), (734.0, 158.0), (774.0, 152.0), (818.0, 152.0), (864.0, 154.0), (906.0, 150.0), (950.0, 152.0), (996.0, 148.0), (1038.0, 146.0), (1084.0, 146.0), (1128.0, 144.0), (1172.0, 144.0), (1598.0, 170.0), (1552.0, 170.0), (1506.0, 174.0), (1458.0, 170.0), (1408.0, 178.0), (1360.0, 178.0), (1314.0, 178.0), (332.0, 200.0), (382.0, 204.0), (422.0, 200.0), (466.0, 202.0), (512.0, 200.0), (556.0, 198.0), (600.0, 202.0), (642.0, 198.0), (690.0, 200.0), (732.0, 196.0), (776.0, 198.0), (820.0, 196.0), (862.0, 198.0), (908.0, 200.0), (950.0, 196.0), (998.0, 196.0), (1042.0, 196.0), (1084.0, 192.0), (1130.0, 188.0), (1174.0, 186.0), (1220.0, 184.0), (1266.0, 186.0), (1596.0, 220.0), (1554.0, 216.0), (1500.0, 222.0), (1454.0, 222.0), (1402.0, 226.0), (1354.0, 228.0), (1312.0, 230.0), (1264.0, 232.0), (1220.0, 232.0), (1176.0, 232.0), (1128.0, 234.0), (1084.0, 236.0), (1038.0, 236.0), (996.0, 238.0), (950.0, 238.0), (906.0, 238.0), (864.0, 244.0), (818.0, 240.0), (776.0, 242.0), (734.0, 244.0), (690.0, 244.0), (644.0, 242.0), (602.0, 246.0), (554.0, 242.0), (514.0, 248.0), (466.0, 244.0), (422.0, 244.0), (378.0, 244.0), (1456.0, 266.0), (1504.0, 264.0), (1552.0, 264.0), (1406.0, 272.0), (1360.0, 272.0), (1312.0, 276.0), (1270.0, 276.0), (1218.0, 278.0), (1172.0, 280.0), (1128.0, 280.0), (1084.0, 280.0), (1040.0, 280.0), (996.0, 282.0), (952.0, 282.0), (908.0, 284.0), (866.0, 288.0), (820.0, 284.0), (776.0, 286.0), (732.0, 292.0), (688.0, 286.0), (644.0, 286.0), (600.0, 288.0), (558.0, 290.0), (510.0, 288.0), (466.0, 288.0), (422.0, 288.0), (378.0, 288.0), (1504.0, 310.0), (1556.0, 308.0), (1454.0, 316.0), (1406.0, 318.0), (1360.0, 316.0), (1312.0, 318.0), (1270.0, 318.0), (1220.0, 320.0), (1176.0, 320.0), (1130.0, 320.0), (1086.0, 324.0), (1040.0, 328.0), (998.0, 326.0), (954.0, 328.0), (908.0, 328.0), (864.0, 328.0), (822.0, 330.0), (778.0, 334.0), (734.0, 330.0), (692.0, 332.0), (648.0, 334.0), (602.0, 332.0), (556.0, 330.0), (512.0, 332.0), (468.0, 332.0), (422.0, 332.0), (378.0, 332.0), (378.0, 376.0), (428.0, 378.0), (470.0, 378.0), (512.0, 376.0), (556.0, 376.0), (602.0, 376.0), (646.0, 374.0), (690.0, 374.0), (736.0, 378.0), (780.0, 376.0), (824.0, 372.0), (866.0, 372.0), (910.0, 372.0), (954.0, 370.0), (998.0, 370.0), (1042.0, 370.0), (1084.0, 
368.0), (1130.0, 370.0), (1178.0, 366.0), (1220.0, 366.0), (1270.0, 362.0), (1316.0, 362.0), (1364.0, 360.0), (1412.0, 358.0), (1460.0, 356.0), (1510.0, 356.0), (1554.0, 356.0), (1504.0, 404.0), (1454.0, 408.0), (1408.0, 406.0), (1362.0, 406.0), (1312.0, 410.0), (1268.0, 408.0), (1220.0, 410.0), (1176.0, 410.0), (1130.0, 412.0), (1088.0, 414.0), (1042.0, 414.0), (996.0, 418.0), (954.0, 416.0), (910.0, 416.0), (866.0, 416.0), (822.0, 416.0), (778.0, 418.0), (734.0, 416.0), (688.0, 418.0), (644.0, 418.0), (602.0, 420.0), (560.0, 420.0), (514.0, 418.0), (468.0, 420.0), (422.0, 420.0), (472.0, 466.0), (516.0, 466.0), (560.0, 466.0), (604.0, 464.0), (646.0, 464.0), (688.0, 462.0), (734.0, 462.0), (778.0, 462.0), (822.0, 460.0), (866.0, 464.0), (908.0, 460.0), (952.0, 460.0), (998.0, 460.0), (1042.0, 462.0), (1086.0, 458.0), (1130.0, 456.0), (1176.0, 456.0), (1224.0, 454.0), (1270.0, 454.0), (1316.0, 454.0), (1362.0, 456.0), (1408.0, 454.0), (1460.0, 450.0), (1412.0, 496.0), (1366.0, 496.0), (1314.0, 500.0), (1272.0, 500.0), (1222.0, 500.0), (1174.0, 504.0), (1132.0, 502.0), (1088.0, 502.0), (1042.0, 502.0), (998.0, 504.0), (954.0, 504.0), (910.0, 504.0), (864.0, 504.0), (820.0, 506.0), (778.0, 504.0), (736.0, 506.0), (690.0, 506.0), (648.0, 508.0), (602.0, 510.0), (560.0, 506.0), (514.0, 510.0), (558.0, 552.0), (602.0, 550.0), (646.0, 550.0), (692.0, 554.0), (734.0, 550.0), (780.0, 548.0), (824.0, 548.0), (868.0, 552.0), (912.0, 548.0), (956.0, 548.0), (996.0, 550.0), (1042.0, 548.0), (1088.0, 546.0), (1134.0, 546.0), (1178.0, 546.0), (1224.0, 546.0), (1272.0, 544.0), (1316.0, 544.0), (1368.0, 542.0)] I want to sort the points in "snake scan" order like so: Here is the original image converted to jpg to fit the maximum upload size on SO: I'm having trouble though because the rows don't have the exact same y value and the columns don't have the exact same x value, it's a non-regular grid. I tried using the the following function, but it's really sensitive to the tolerance value and doesn't quite work right. def sort_points_snake_scan(points, tolerance=10): """Sort the points in a snake-scan pattern with a tolerance for y-coordinates. Arguments: - points (list): a list of (x, y) points - tolerance (int): the y-coordinate tolerance for grouping points into rows Returns: - snake_scan_order (list): the points sorted in snake-scan order. """ # Group points by the y-coordinate with tolerance if not points: return [] # Sort points by y to simplify grouping points = sorted(points, key=lambda p: p[1]) rows = [] current_row = [points[0]] for point in points[1:]: if abs(point[1] - current_row[-1][1]) <= tolerance: current_row.append(point) else: rows.append(current_row) current_row = [point] if current_row: rows.append(current_row) snake_scan_order = [] reverse = True ind = 0 for row in rows: # Sort the row by x-coordinate, alternating order for each row row_sorted = sorted(row, key=lambda point: point[0], reverse=reverse) snake_scan_order.extend(row_sorted) reverse = not reverse # Flip the order for the next row ind += 1 return snake_scan_order Here is the result: Does anyone know the right way to do this?
I would first sort the data by a diagonal sweep, i.e. by increasing sum-of-coordinates. At each point (in that order) determine what would be a good left-neighbor among the points that were already processed. A good neighbor will be in the ">" shaped cone at its left, and the rightmost candidate. Once attached, the attached nodes form a near-horizontal line, and we only need to consider the rightmost of each line when looking for a match in the ">"-cone of a next point. This way we build all the lines from left to right. It is then a small task to turn that into a snake order. Here is a Python function to perform this algorithm: def sort_points_snake_scan(data): lines = [] for x, y in sorted(data, key=sum): # visited by a diagonal sweep for line in lines: x1, y1 = line[-1] # Is this point inside the ">"-cone of x,y? if abs(x1 - x) > abs(y1 - y): line.append([x, y]) break else: # Not found: this is a point on a new line lines.append([[x, y]]) # sort lines from top to bottom lines.sort(key=lambda line: line[0][1]) # concatenate the lines into snake order return [ point for i, line in enumerate(lines) for point in (line[::-1] if i%2 else line) ] No need for a tolerance parameter: the algorithm assumes that the grid is not tilted more than 45Β° (up or down) at any point.
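A quick way to eyeball the result (assuming points is the list from the question and matplotlib is available): plot the returned order as a connected path: a correct snake order draws one continuous zig-zag without long jumps between rows:

import matplotlib.pyplot as plt

ordered = sort_points_snake_scan(points)
xs, ys = zip(*ordered)
plt.plot(xs, ys, '-o', markersize=2)
plt.gca().invert_yaxis()   # image coordinates: y grows downwards
plt.show()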
4
4
78,421,170
2024-5-2
https://stackoverflow.com/questions/78421170/pydantic-how-do-i-represent-a-list-of-objects-as-dict-serialize-list-as-dict
In Pydantic I want to represent a list of items as a dictionary, where the key of each entry is the id of the corresponding Item. I read the documentation on Serialization, on Mapping types and on Sequences. However, I didn't find a way to create such a representation. I want this in my API so that generated API clients get easy access to the items. class Item(BaseModel): uid: UUID = Field(default_factory=uuid4) updated: datetime = Field(default_factory=datetime_now) ... class ResponseModel: ... print(ResponseModel.model_dump()) #> { "67C812D7-B039-433C-B925-CA21A1FBDB23": { "uid": "67C812D7-B039-433C-B925-CA21A1FBDB23", "updated": "2024-05-02 20:24:00" }, "D39A8EF1-E520-4946-A818-9FA8664F63F6": { "uid": "D39A8EF1-E520-4946-A818-9FA8664F63F6", "updated": "2024-05-02 20:25:00" } }
You can use PlainSerializer(docs) to change the serialization of a field, e.g., to change the desired output format of datetime: from pydantic import Field, BaseModel, PlainSerializer from uuid import UUID, uuid4 from datetime import datetime from typing import Annotated CustomUUID = Annotated[ UUID, PlainSerializer(lambda v: str(v), return_type=str) ] CustomDatetime = Annotated[ datetime, PlainSerializer( lambda v: v.strftime("%Y-%m-%d %H:%M:%S"), return_type=str ), ] class Item(BaseModel): uid: CustomUUID = Field(default_factory=uuid4) updated: CustomDatetime = Field(default_factory=datetime.now) Item().model_dump() # {'uid': '7b2b8e32-15c7-48b8-9ddb-2d25cc61bc61', # 'updated': '2024-05-03 00:13:38'} Then you can use model_serializer (docs) to serialize the ResponseModel: from pydantic import Field, BaseModel, model_serializer, PlainSerializer from uuid import UUID, uuid4 from datetime import datetime from typing import Annotated CustomUUID = Annotated[ UUID, PlainSerializer(lambda v: str(v), return_type=str) ] CustomDatetime = Annotated[ datetime, PlainSerializer( lambda v: v.strftime("%Y-%m-%d %H:%M:%S"), return_type=str ), ] class Item(BaseModel): uid: CustomUUID = Field(default_factory=uuid4) updated: CustomDatetime = Field(default_factory=datetime.now) class ResponseModel(BaseModel): items: list[Item] @model_serializer def serialize_model(self): return {str(item.uid): item.model_dump() for item in self.items} ResponseModel(items=[Item() for i in range(2)]).model_dump() # { # "e41c4446-60e3-45c5-ab0a-a25f15febac4": { # "uid": "e41c4446-60e3-45c5-ab0a-a25f15febac4", # "updated": "2024-05-03 00:14:58", # }, # "e07d4152-755a-4a0f-9312-29d343bda883": { # "uid": "e07d4152-755a-4a0f-9312-29d343bda883", # "updated": "2024-05-03 00:14:58", # }, # }
3
1
78,421,550
2024-5-2
https://stackoverflow.com/questions/78421550/how-to-retrieve-file-position-during-json-parsing-with-ijson-in-python
I'm using the ijson library in Python to parse a large JSON file and I need to find the position in the file where specific data is located. I want to use file.tell() to get the current position of the file reader during the parsing process. But it's only giving me the length of the file. from ijson import parse with open('file','r') as f: for a, b, c in parse(f): print(f.tell())
ijson.parse is using buffered reads of the source file: >>> help(ijson.parse) Help on function parse in module ijson.common: parse(source, buf_size=65536, **config) If your file is smaller than 64K it will look like f.tell() is returning the file size. If you use parse(f, buf_size=1) the f.tell() should be accurate, but parsing will likely be slower.
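For completeness, a usage sketch of that suggestion (the file name and the match condition are placeholders):

import ijson

with open('large.json', 'rb') as f:
    for prefix, event, value in ijson.parse(f, buf_size=1):
        if event == 'string' and value == 'needle':   # whatever data you are looking for
            print('found near byte offset', f.tell())
            break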
2
2
78,417,252
2024-5-2
https://stackoverflow.com/questions/78417252/overload-function-with-supportsint-and-not-supportsint
I want to overload the function below so if passed a value that supports int() Python type hints int otherwise Python type hints the value passed. Python typing module provides a SupportsInt type we can use to check if our value supports int. from typing import Any, SupportsInt, overload @overload def to_int(value: SupportsInt) -> int: ... @overload def to_int[T: NotSupportsInt???](value: T) -> T: ... def to_int(value: Any) -> Any: try: return int(value) except TypeError: return value But in our second overload statement how can we specify all values that don't support int?
Types make positive promises - they specify things an object can do. Not things an object can't do. If you have an object of static type float, you know this object supports __int__, so you know it's a member of type SupportsInt. If you have an object of static type object, object does not have an __int__ method, but this object still might support __int__. The object could be a float, or an instance of some other subclass of object that supports __int__. You don't know whether passing it to to_int will return an int. Your second overload is trying to say "Doesn't support __int__? Then return the original type." What it should say is "Don't know if it supports __int__? Then maybe return the original type, or maybe return int." You'd do that with an unbounded type variable and a union return type, like this: from typing import overload, SupportsInt, TypeVar T = TypeVar('T') @overload def to_int(value: SupportsInt) -> int: ... @overload def to_int(value: str) -> int: ... @overload def to_int(value: T) -> T | int: ... def to_int(value): try: return int(value) except TypeError: return value Note that I've also added a separate overload for str. str doesn't actually support __int__, so it doesn't fall under the SupportsInt overload - calls like int('35') are handled by a special case in int.__new__. And sure, sometimes you'll have a case like to_int([]), where you know the specific concrete type of the object, and you know the int call will fail. But the type system doesn't propagate that information the way it would need to for more specific annotations to work. To the type checker, an instance of list might be an instance of some weird __int__-supporting list subclass, even if a human looking at the code can see it's not.
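A small runtime check of the fallback behaviour (assuming the to_int from the snippet above; this only exercises the implementation, while the overloads themselves are checked by mypy or pyright, not at runtime):

assert to_int(3.5) == 3        # float supports __int__, so we get an int back
assert to_int("35") == 35      # str goes through the dedicated str overload
data = [1, 2, 3]
assert to_int(data) is data    # int() raises TypeError, original object returned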
2
3
78,416,773
2024-5-2
https://stackoverflow.com/questions/78416773/how-do-i-use-python-logging-with-uvicorn-fastapi
Here is a small application that reproduces my problem: import fastapi import logging import loguru instance = fastapi.FastAPI() @instance.on_event("startup") async def startup_event(): logger = logging.getLogger("mylogger") logger.info("I expect this to log") loguru.logger.info("but only this logs") When I launch this application with uvicorn app.main:instance --log-level=debug I see this in my terminal: INFO: Waiting for application startup. 2024-05-02 13:14:45.118 | INFO | app.main:startup_event:28 - but only this logs INFO: Application startup complete. Why does only the loguru logline work, and how can I make standard python logging work as expected?
It's because the --log-level=debug only applies to uvicorn's logger, not your mylogger logger - whose level remains set at WARNING. If you add a line to the end of your script such as logging.basicConfig(level=logging.DEBUG, format="%(asctime)s | %(levelname)-8s | " "%(module)s:%(funcName)s:%(lineno)d - %(message)s") and run, you'll see something like $ uvicorn main:instance --log-level=debug INFO: Started server process [1311] INFO: Waiting for application startup. 2024-05-02 16:44:55,736 | INFO | main:startup_event:12 - I expect this to log 2024-05-02 16:44:55.736 | INFO | main:startup_event:13 - but only this logs INFO: Application startup complete. INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit) The extra line configures the root logger and adds a console handler with that format, and events logged to the mylogger logger are passed up the hierarchy to the root logger's console handler.
2
3
78,418,960
2024-5-2
https://stackoverflow.com/questions/78418960/how-do-i-know-if-flag-was-passed-by-the-user-or-has-default-value
Sample code: import click @click.command @click.option('-f/-F', 'var', default=True) def main(var): click.echo(var) main() Inside main() function, how can I check if var parameter got True value by default, or it was passed by the user? What I want to achieve: I will have few flags. When the user does not pass any of the flags, I want them all to be True. When the user passes at least one of the flags, only the passed flags should be True, the other flags False.
As I haven't really used click much - maybe there's some built-in functionality to handle this - but as a workaround, I'd set the default value to None and check whether the value is None when the command is called. Something like this: import click @click.command @click.option('-f/-F', 'var', default=None) def main(var): if var is None: var = True print("No Value Provided - default to True") click.echo(var) main()
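Extending that idea to the several-flags case from the question (a sketch; the flag names are made up): default them all to None, and if none were passed, turn them all on:

import click

@click.command
@click.option('-a/-A', 'a', default=None)
@click.option('-b/-B', 'b', default=None)
@click.option('-c/-C', 'c', default=None)
def main(a, b, c):
    if a is None and b is None and c is None:
        a = b = c = True                        # no flag given: everything on
    else:
        a, b, c = bool(a), bool(b), bool(c)     # only explicitly passed flags stay True
    click.echo(f'a={a} b={b} c={c}')

main()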
2
2
78,419,347
2024-5-2
https://stackoverflow.com/questions/78419347/iterate-over-the-next-12-months-from-today-using-python
I am trying to write python code that takes the first of the next month and creates a dataframe of the next 12 months. I have tried two ways to which did not work. I created a FOR LOOP but I am stuck on the fact that it does not store the range in the list. I got a Value Error: Month must be in 1..12 so I'm stuck as well. Can I get help with either? # FIRST TRY # import packages # python import pandas as pd import numpy as np import datetime as dt from datetime import timedelta from datetime import datetime from datetime import date from dateutil.relativedelta import relativedelta #get todays date today = dt.date.today() # Get the first day of next month first_day_next_month = (today.replace(day=1) + dt.timedelta(days=32)).replace(day=1) # Get the first day of the next 12 months first_day_next_12_months = [] for i in range(1, 13): next_month = first_day_next_month + relativedelta(months=i) first_day_next_12_months.append(next_month) print(first_day_next_12_months) #SECOND TRY # import packages # python import pandas as pd import numpy as np import datetime as dt from datetime import timedelta from datetime import datetime from datetime import date from dateutil.relativedelta import relativedelta #get todays date today = dt.date.today() # Get the first day of next month first_day_next_month = (today.replace(day=1) + dt.timedelta(days=32)).replace(day=1) # Get the first day of next month first_day_next_month = (today.replace(day=1) + dt.timedelta(days=32)).replace(day=1) print("First day of next month:", first_day_next_month) # Get the first day of the next 12 months first_day_next_12_months = [] for i in range(1, 13): next_month = (first_day_next_month.replace(month=first_day_next_month.month + i-1) + datetime.timedelta(days=32)).replace(day=1) first_day_next_12_months.append(next_month) print("First day of the next 12 months:") for date in first_day_next_12_months: print(date)
If working with pandas.DataFrame, you mostly don't need for loops. Just do : import pandas as pd import datetime as dt from dateutil.relativedelta import relativedelta today = dt.date.today() first_day_next_month = (today.replace(day=1) + relativedelta(months=1)) first_day_next_12_months = [first_day_next_month + relativedelta(months=i) for i in range(12)] df_months = pd.DataFrame(first_day_next_12_months, columns=['First Day of Month'])
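Alternatively, a sketch using pandas date offsets instead of dateutil: pd.date_range with the month-start frequency builds the same 12 dates directly:

import pandas as pd

first_day_next_month = pd.Timestamp.today().normalize() + pd.offsets.MonthBegin(1)
df_months = pd.DataFrame(
    {'First Day of Month': pd.date_range(first_day_next_month, periods=12, freq='MS')}
)
print(df_months)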
2
3
78,413,671
2024-5-1
https://stackoverflow.com/questions/78413671/duckdb-insert-the-hive-partitions-into-parquet-file
I have jsonl files partitioned by user_id, and report_date. I am converting these jsonl files into parquet files and save them in the same folder using the following commands in DuckDB jsonl_file_path ='/users/user_id=123/report_date=2024-04-30/data.jsonl' out_path = '/users/user_id=123/report_date=2024-04-30/data.parquet' db.sql( f""" COPY ( SELECT * FROM read_json_auto( '{jsonl_file_path}', maximum_depth=-1, sample_size=-1, ignore_errors=true ) ) TO '{out_path}' ( FORMAT PARQUET, ROW_GROUP_SIZE 100000, OVERWRITE_OR_IGNORE 1 ); """ ) It works fine, but the problem is DuckDB is inserting the hive partition values into the parquet file which are user_id and report_date, these values are not in jsonl file. anyone know how to solve this issue?
If I understand correctly, you want to do a partitioned write and need to use PARTITION_BY. When doing a partitioned write, you should not include the hive partition as part of your output path. The partitioned write will build those paths and file names for the case where there are many files per partition. You can template the file names using FILENAME_PATTERN.
jsonl_file_path = '/users/user_id=123/report_date=2024-04-30/data.jsonl'
out_path = '/users'
file_pattern = 'data_{i}'

db.sql(
    f"""
    COPY (
        SELECT *
        FROM read_json_auto(
            '{jsonl_file_path}',
            maximum_depth=-1,
            sample_size=-1,
            ignore_errors=true
        )
    ) TO '{out_path}' (
        FORMAT PARQUET,
        PARTITION_BY (user_id, report_date),
        FILENAME_PATTERN '{file_pattern}',
        ROW_GROUP_SIZE 100000,
        OVERWRITE_OR_IGNORE 1
    );
    """
)
If you want to overwrite the single file, modify the projection in the SELECT not to use *. The * includes the "virtual" columns that are part of the hive table's partition. They are columns in the hive table stored in the path, not the physical parquet files.
jsonl_file_path = '/users/user_id=123/report_date=2024-04-30/data.jsonl'
out_path = '/users/user_id=123/report_date=2024-04-30/data.parquet'

db.sql(
    f"""
    COPY (
        SELECT * EXCLUDE(user_id, report_date)
        FROM read_json_auto(
            '{jsonl_file_path}',
            maximum_depth=-1,
            sample_size=-1,
            ignore_errors=true
        )
    ) TO '{out_path}' (
        FORMAT PARQUET,
        ROW_GROUP_SIZE 100000,
        OVERWRITE_OR_IGNORE 1
    );
    """
)
2
2
78,419,061
2024-5-2
https://stackoverflow.com/questions/78419061/stable-icon-recognition-by-opencv-python
I have the following issue: I need to detect icons in an image reliably. The image also contains text, and the icons come in various sizes. Currently, I'm using Python with the cv2 library for this task. However, unfortunately, the current contour detection algorithm using cv2.findContours isn't very reliable. Here's how I'm currently doing it: gray = cv2.cvtColor(self.image, cv2.COLOR_BGR2GRAY) binary = cv2.adaptiveThreshold(self.gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 17, 1) contours, _ = cv2.findContours(self.binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) Then follows contour filtering, merging filtered contours, and filtering again. However, this method has proven to be unreliable. I've also tried using getStructuringElement, but it gives unreliable results for icons on a white background. I can't disclose real input data, but I used the Amazon logo and created an example demonstrating the issue. For colored icons, when using contours, I often get two or three icons of incorrect sizes, and merging them loses the precise size. For icons on a white background, the approach with getStructuringElement doesn't detect the boundary well. My questions: What do you suggest? My ideas: Train a Haar cascade. Would it help much? Fine-tune parameters of one of the current methods. Any other methods/libraries. I'm open to any suggestions, or let me know if anyone has experience solving such a problem. Img: https://i.sstatic.net/bZYrnCTU.jpg PS: For another people, who's maybe be interested in more or less the same task Answer from @Tino-d really great, Bot you may get and icon, which more or less would be divided into big amount of simple polygons For example(After canny processing) What you can do its just draw contours with bigger thickness(2 for example), and then refounding contours on new image contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) cv2.drawContours(edges, contours, -1, (255, 255, 255), 2) contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) Seems like more or less super-easy and very fast merging contours algo)
I think edge detection can work quite well here, I did a small example which worked for now: im = cv2.imread("logos.jpg") imGray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) edges = cv2.Canny(imGray,5, 20) This gave the following result: After this, detecting contours and filtering by area will work quite nicely, as the squares of the logos seem to all be the same size: contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) sortedContours = sorted(contours, key = cv2.contourArea, reverse = True) for c in sortedContours: print(cv2.contourArea(c)) We see that indeed the three biggest contours by area all have around 10500 pixels: 10787.0 10715.0 10128.0 7391.5 4555.5 3539.0 3420.0 . . . Fill those first three contours: im1 = cv2.drawContours(im.copy(), sortedContours, 2, (255,0,0), -1) im2 = cv2.drawContours(im.copy(), sortedContours, 1, (0,255,0), -1) im3 = cv2.drawContours(im.copy(), sortedContours, 0, (0,0,255), -1) And this is what you will get: I assume what you want is a boolean mask to be able to get those pixels. So something like mask = np.zeros_like(imGray) mask = cv2.drawContours(mask, sortedContours, 2, 1, -1) firstLogo = cv2.bitwise_and(im, im, mask = mask) Can do the job. You can automate this quite easily by filtering the contours, I'm just nudging a POC to you. E: forgive the weird colours, forgot to convert to RGB before imshow with Matplotlib...
2
2
78,417,247
2024-5-2
https://stackoverflow.com/questions/78417247/how-to-reverse-a-numpy-array-using-stride-tricks-as-strided
Is it possible to reverse an array using as_strided in NumPy? I tried the following and got garbage results. Note: I am aware of indexing tricks like ::-1, but I want to know if this can be achieved through as_strided.
import numpy as np
from numpy.lib.stride_tricks import as_strided

elems = 16
inp_arr = np.arange(elems).astype(np.int8)
print(inp_arr)
print(inp_arr.shape)
print(inp_arr.strides)

expanded_input = as_strided(inp_arr[15], shape = inp_arr.shape, strides = (-1,))
print(expanded_input)
print(expanded_input.shape)
print(expanded_input.strides)
Output
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15]
(16,)
(1,)
[ 15 -113 0 21 -4 -119 -53 3 79 0 0 0 0 0 0 2]
(16,)
(-1,)
Indeed you can reverse an array using as_strided() (notwithstanding the question of whether you should); however, one has to get both the strides and the array offset correct: In the 1-dimensional case, one has to start from the last element and negate the stride (which, for an 8-bit/1-byte data type, is the same as setting it to -1). Generally, in the N-dimensional case of an arbitrary data type, one has to start from the last element in the axis of interest and negate the stride in the axis of interest. See code below, where r_1d and r_nd are exemplary results of reversing with as_strided(): import numpy as np from numpy.lib.stride_tricks import as_strided # 1D, 8-bit/1-byte dtype: (1) start from last element, (2) set stride == -1 a_1d = np.arange(16, dtype=np.int8) r_1d = as_strided(a_1d[-1:], shape=a_1d.shape, strides=(-1, )) assert (r_1d == a_1d[::-1]).all() # ND, arbitrary dtype: (1) start from last element in axis of interest, (2) # negate stride in axis of interest (below, axis of interest is axis 1) a_nd = np.arange(2*3*5, dtype=float).reshape(2, 3, 5) a_offset = a_nd[:, -1:] r_strides = np.multiply([1, -1, 1], a_nd.strides) r_nd = as_strided(a_offset, shape=a_nd.shape, strides=r_strides) assert (r_nd == a_nd[:, ::-1]).all() As a cautionary note, I should add that I only tried with contiguous arrays (both C and F), others might require more effort.
2
2
78,417,828
2024-5-2
https://stackoverflow.com/questions/78417828/are-there-good-reason-to-accept-float-but-not-int-in-a-function-in-python
In a big library, I had a 'bug' because of a function which accepts only float but not int:
def foo(penalty: float):
    if not isinstance(penalty, float):
        raise ValueError(f"`penalty` has to be a float, but is {penalty}")
    # some instruction
I would like to open a PR with the following:
def foo(penalty: int | float):
    if not isinstance(penalty, (int, float)):
        raise ValueError(f"`penalty` has to be a float or int, but is {penalty}")
    # some instruction
but before this, I ask myself: are there good reasons, in Python, to accept float but not int?
A few reasons might exist: Floats have methods that integers don't. For example the fromhex or hex methods exist on float objects, but not int objects. If the function makes use of such methods, accepting an int will cause it to fail. Sometimes, floats are specifically needed when dealing with foreign functions, for example, a function expecting a C double, a float works for this purpose, but an int does not. Other types are accepted by isinstance checks of int, for example bool objects, which may be undesirable. In Python2, arithmetic with integers can sometimes produce different results than when a float is used. If the library is (or was) backwards compatible to Python2, it could be an artifact of that fact A function may explicitly expect a value that can only be expressed as a float. For example, if a function is expecting a value between (non-inclusive) 0 and 1, an int will never make sense. Presumably, a function could just upcast the integer to a float as-needed: if isinstance(arg, int): arg = float(arg) But for performance, it may be better to error out and inform the user a float is needed (so perhaps the user can use a float to begin with).
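To make the bool caveat above concrete, here is a small illustrative sketch (the set_penalty helper is my own, purely for illustration, not code from the thread): since bool is a subclass of int, a plain isinstance(penalty, (int, float)) check would silently accept True and False as 1.0 and 0.0.

def set_penalty(penalty: float) -> float:
    # Reject bool explicitly: isinstance(True, int) is True in Python.
    if isinstance(penalty, bool) or not isinstance(penalty, (int, float)):
        raise TypeError(f"`penalty` has to be a real number, but is {penalty!r}")
    return float(penalty)  # upcast ints so downstream code always sees a float

print(isinstance(True, int))  # True -> why the explicit bool check is needed
print(set_penalty(2))         # 2.0
print(set_penalty(0.5))       # 0.5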
2
4
78,416,528
2024-5-2
https://stackoverflow.com/questions/78416528/only-do-this-action-once-in-a-for-loop
I need to do this action once under certain conditions in a for loop to stop duplicates appearing in the checkboxes. Theoretically, this should work I believe, but when the conditions are met for this if statement (cycles == 1 & len(tasks) > 1) it skips over the statement anyway, which really confuses me. I have been trying to debug this with breakpoints but I can't figure it out.
tasks = [a, b]

def addToList():
    cycles = 0
    for x in range(len(tasks)):
        cycles += 1
        if cycles == 1 & len(tasks) > 1:
            checkList.destroy()
        checkList = Checkbutton(viewTab, text=tasks[x])
        checkList.pack(anchor='w', side=BOTTOM)
The issue lies in the way you're using the '&' operator. In Python '&' is a bitwise AND operator, not a logical AND operator. For logical AND, you should use 'and' So, instead of: if cycles == 1 & len(tasks) > 1: You should use: if cycles == 1 and len(tasks) > 1:
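To spell out why the original condition misbehaves silently rather than raising an error: '&' also binds more tightly than the comparison operators, so cycles == 1 & len(tasks) > 1 is parsed as cycles == (1 & len(tasks)) > 1, a chained comparison that is False here. A small sketch of mine, using the values from the question:

tasks = ["a", "b"]
cycles = 1

# '&' is evaluated first: 1 & len(tasks) is 1 & 2, which is 0,
# so the first line is the chained comparison (cycles == 0) and (0 > 1).
print(cycles == 1 & len(tasks) > 1)    # False
print(cycles == 1 and len(tasks) > 1)  # True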
2
3
78,416,147
2024-5-1
https://stackoverflow.com/questions/78416147/changing-the-exception-type-in-a-context-manager
I currently have code that looks like this scattered throughout my codebase: try: something() except Exception as exc: raise SomethingError from exc I would like to write a context manager that would remove some of this boiler-plate: with ExceptionWrapper(SomethingError): something() It looks like it is possible to suppress exceptions inside a context manager - see: contextlib.suprress. It doesn't look like it is possible to change what exception is being raised. However, I haven't been able to find clear documentation on what the return value of the __exit__ function of a context manager is.
This sort of simple context manager is easiest to implement using contextlib.contextmanager, which creates a context manager out of a generator. Simply wrap the first yield statement in the generator with a try block and raise the desired exception in the except block: import contextlib from typing import Iterator class SomethingError(Exception): pass @contextlib.contextmanager def exception_wrapper(message: str) -> Iterator[None]: """Wrap Exceptions with SomethingError.""" try: yield except Exception as exc: raise SomethingError(message) from exc This can be used like so: >>> with exception_wrapper("and eggs!"): ... raise Exception("spam") Traceback (most recent call last): File "/path/to/script.py", line 11, in exception_wrapper yield File "/path/to/script.py", line 17, in <module> raise Exception("spam") Exception: spam The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/path/to/script.py", line 16, in <module> with exception_wrapper("and eggs!"): File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__ self.gen.throw(typ, value, traceback) File "/path/to/script.py", line 13, in exception_wrapper raise SomethingError(message) from exc __main__.SomethingError: and eggs!
3
4
78,402,347
2024-4-29
https://stackoverflow.com/questions/78402347/pandera-errors-backendnotfounderror-with-pandas-dataframe
pandera: 0.18.3 pandas: 2.2.2 python: 3.9/3.11 Hi, I am unable to setup the pandera for pandas dataframe as it complains: File "/anaconda/envs/data_quality_env/lib/python3.9/site-packages/pandera/api/base/schema.py", line 96, in get_backend raise BackendNotFoundError( pandera.errors.BackendNotFoundError: Backend not found for backend, class: (<class 'data_validation.schemas.case.CaseSchema'>, <class 'pandas.core.frame.DataFrame'>). Looked up the following base classes: (<class 'pandas.core.frame.DataFrame'>, <class 'pandas.core.generic.NDFrame'>, <class 'pandas.core.base.PandasObject'>, <class 'pandas.core.accessor.DirNamesMixin'>, <class 'pandas.core.indexing.IndexingMixin'>, <class 'pandas.core.arraylike.OpsMixin'>, <class 'object'>) My folder structure is: project/ data_validation/ schema/ case.py validation/ validations.py pipeline.py case.py: import pandas as pd import pandera as pa class CaseSchema(pa.DataFrameSchema): case_id = pa.Column(pa.Int) validations.py import pandas as pd from data_validation.schemas.case import CaseSchema def validate_case_data(df: pd.DataFrame) -> pd.DataFrame: """Validate a DataFrame against the PersonSchema.""" schema = CaseSchema() return schema.validate(df) pipeline.py import pandas as pd from data_validation.validation.validations import validate_case_data def validate_df(df: pd.DataFrame) -> pd.DataFrame: """Process data, validating it against the PersonSchema.""" validated_df = validate_case_data(df) return validated_df df = pd.DataFrame({ "case_id": [1, 2, 3] }) processed_df = validate_df(df)
This can be solved by including a get_backend method in CaseSchema: import pandas as pd import pandera as pa from pandera.backends.pandas.container import DataFrameSchemaBackend class CaseSchema(pa.DataFrameSchema): case_id = pa.Column(pa.Int) @classmethod def get_backend(cls, check_obj=None, check_type=None): if check_obj is not None: check_obj_cls = type(check_obj) elif check_type is not None: check_obj_cls = check_type else: raise ValueError("Must pass in one of `check_obj` or `check_type`.") cls.register_default_backends(check_obj_cls) return DataFrameSchemaBackend()
5
0
78,400,266
2024-4-29
https://stackoverflow.com/questions/78400266/python-rolling-indexing-in-polars-library
I'd like to ask around if anyone knows how to do rolling indexing in polars? I have personally tried a few solutions which did not work for me (I'll show them below):
What I'd like to do: Indexing the number of occurrences within the past X days by Name
Example: Let's say I'd like to index occurrences within the past 2 days:
df = pl.from_repr("""
┌─────────┬─────────────────────┬─────────┐
│ Name    ┆ Date                ┆ Counter │
│ ---     ┆ ---                 ┆ ---     │
│ str     ┆ datetime[ns]        ┆ i64     │
╞═════════╪═════════════════════╪═════════╡
│ John    ┆ 2023-01-01 00:00:00 ┆ 1       │
│ John    ┆ 2023-01-01 00:00:00 ┆ 2       │
│ John    ┆ 2023-01-01 00:00:00 ┆ 3       │
│ John    ┆ 2023-01-01 00:00:00 ┆ 4       │
│ John    ┆ 2023-01-02 00:00:00 ┆ 5       │
│ John    ┆ 2023-01-02 00:00:00 ┆ 6       │
│ John    ┆ 2023-01-02 00:00:00 ┆ 7       │
│ John    ┆ 2023-01-02 00:00:00 ┆ 8       │
│ John    ┆ 2023-01-03 00:00:00 ┆ 5       │
│ John    ┆ 2023-01-03 00:00:00 ┆ 6       │
│ New Guy ┆ 2023-01-01 00:00:00 ┆ 1       │
└─────────┴─────────────────────┴─────────┘
""")
In this case, the counter resets to "1" starting from the past X days (e.g. for 3 Jan 23, it starts "1" from 2 Jan 23), or if a new name is detected
What I've tried:
(df.rolling(index_column='Date', period='2d', group_by='Name')
   .agg((pl.col("Date").rank(method='ordinal')).alias("Counter"))
)
The above does not work because it outputs:
┌─────────┬─────────────────────┬──────────────────────────┐
│ Name    ┆ Date                ┆ Counter                  │
│ ---     ┆ ---                 ┆ ---                      │
│ str     ┆ datetime[ns]        ┆ list[u32]                │
╞═════════╪═════════════════════╪══════════════════════════╡
│ John    ┆ 2023-01-01 00:00:00 ┆ [1, 2, 3, 4]             │
│ John    ┆ 2023-01-01 00:00:00 ┆ [1, 2, 3, 4]             │
│ John    ┆ 2023-01-01 00:00:00 ┆ [1, 2, 3, 4]             │
│ John    ┆ 2023-01-01 00:00:00 ┆ [1, 2, 3, 4]             │
│ John    ┆ 2023-01-02 00:00:00 ┆ [1, 2, 3, 4, 5, 6, 7, 8] │
│ John    ┆ 2023-01-02 00:00:00 ┆ [1, 2, 3, 4, 5, 6, 7, 8] │
│ John    ┆ 2023-01-02 00:00:00 ┆ [1, 2, 3, 4, 5, 6, 7, 8] │
│ John    ┆ 2023-01-02 00:00:00 ┆ [1, 2, 3, 4, 5, 6, 7, 8] │
│ John    ┆ 2023-01-03 00:00:00 ┆ [1, 2, 3, 4, 5, 6]       │
│ John    ┆ 2023-01-03 00:00:00 ┆ [1, 2, 3, 4, 5, 6]       │
│ New Guy ┆ 2023-01-01 00:00:00 ┆ [1]                      │
└─────────┴─────────────────────┴──────────────────────────┘
(df.with_columns(Mask=1)
   .with_columns(Counter=pl.col("Mask").rolling_sum_by(window_size='2d', by="Date"))
)
But it outputs:
┌─────────┬─────────────────────┬─────────┬──────┐
│ Name    ┆ Date                ┆ Counter ┆ mask │
│ ---     ┆ ---                 ┆ ---     ┆ ---  │
│ str     ┆ datetime[ns]        ┆ i32     ┆ i32  │
╞═════════╪═════════════════════╪═════════╪══════╡
│ John    ┆ 2023-01-01 00:00:00 ┆ 5       ┆ 1    │
│ John    ┆ 2023-01-01 00:00:00 ┆ 5       ┆ 1    │
│ John    ┆ 2023-01-01 00:00:00 ┆ 5       ┆ 1    │
│ John    ┆ 2023-01-01 00:00:00 ┆ 5       ┆ 1    │
│ John    ┆ 2023-01-02 00:00:00 ┆ 9       ┆ 1    │
│ John    ┆ 2023-01-02 00:00:00 ┆ 9       ┆ 1    │
│ John    ┆ 2023-01-02 00:00:00 ┆ 9       ┆ 1    │
│ John    ┆ 2023-01-02 00:00:00 ┆ 9       ┆ 1    │
│ John    ┆ 2023-01-03 00:00:00 ┆ 6       ┆ 1    │
│ John    ┆ 2023-01-03 00:00:00 ┆ 6       ┆ 1    │
│ New Guy ┆ 2023-01-01 00:00:00 ┆ 5       ┆ 1    │
└─────────┴─────────────────────┴─────────┴──────┘
And it also cannot handle "New Guy" correctly because rolling_sum cannot do group_by=["Name", "Date"]
df.with_columns(Counter = pl.col("Date").rank(method='ordinal').over("Name", "Date"))
The above code works correctly, but can only be used for indexing within the same day (i.e. period="1d")
Additional Notes
I also did this in Excel, and also using a brute/raw method of using a "for"-loop. Both worked perfectly, however they struggled with huge amounts of data.
What I read: Some references to help in answers: (Most didn't work because they have fixed rolling window instead of a dynamic window by "Date")
How to implement rolling rank in Polars version 0.19
https://github.com/pola-rs/polars/issues/4808
How to do rolling() grouped by day by hour in in Polars?
How to groupby and rolling in polars?
https://docs.pola.rs/api/python/stable/reference/series/api/polars.Series.rank.html
https://docs.pola.rs/api/python/stable/reference/dataframe/api/polars.DataFrame.rolling.html#polars.DataFrame.rolling
You could start with the approach giving the maximum count for each group (using pl.len() within the aggregation) and post-process the Counter column to make its values increase within each group.
(
    df
    .rolling(index_column="Date", period="2d", group_by="Name")
    .agg(
        pl.len().alias("Counter")
    )
    .with_columns(
        (pl.col("Counter") - pl.len() + 1 + pl.int_range(pl.len())).over("Name", "Date")
    )
)
shape: (11, 3)
┌─────────┬────────────┬─────────┐
│ Name    ┆ Date       ┆ Counter │
│ ---     ┆ ---        ┆ ---     │
│ str     ┆ date       ┆ i64     │
╞═════════╪════════════╪═════════╡
│ John    ┆ 2023-01-01 ┆ 1       │
│ John    ┆ 2023-01-01 ┆ 2       │
│ John    ┆ 2023-01-01 ┆ 3       │
│ John    ┆ 2023-01-01 ┆ 4       │
│ John    ┆ 2023-01-02 ┆ 5       │
│ John    ┆ 2023-01-02 ┆ 6       │
│ John    ┆ 2023-01-02 ┆ 7       │
│ John    ┆ 2023-01-02 ┆ 8       │
│ John    ┆ 2023-01-03 ┆ 5       │
│ John    ┆ 2023-01-03 ┆ 6       │
│ New Guy ┆ 2023-01-01 ┆ 1       │
└─────────┴────────────┴─────────┘
Explanation. After the aggregation, the Counter column will take a constant value V within each name-date-group of length L. The objective is to make Counter take the values V-L+1 to V (one value for each row) instead. Therefore, we can subtract L-1 from Counter and add an int range with increasing values from 0 to L-1.
4
3
78,382,516
2024-4-25
https://stackoverflow.com/questions/78382516/pyenv-switching-between-python-and-pyspark-versions-without-hardcoding-environ
I have trouble getting different versions of PySpark to work correctly on my windows machine in combination with different versions of Python installed via PyEnv. The setup: I installed pyenv and let it set the environment variables (PYENV, PYENV_HOME, PYENV_ROOT and the entry in PATH) I installed Amazon Coretto Java JDK (jdk1.8.0_412) and set the JAVA_HOME environment variable. I downloaded the winutils.exe & hadoop.dll from here and set the HADOOP_HOME environment variable. Via pyenv I installed Python 3.10.10 and then pyspark 3.4.1 Via pyenv I installed Python 3.8.10 and then pyspark 3.2.1 Python works as expected: I can switch between different versions with pyenv global <version> When I use python --version in PowerShell it always shows the version that I set before with pyenv. But I'm having trouble with PySpark. For one, I cannot start PySpark via the powershell console by running pyspark >>> The term 'pyspark' is not recognized as the name of a cmdlet, function, script file..... More annoyingly, my repo-scripts (with a .venv created via pyenv & poetry) also fail: Caused by: java.io.IOException: Cannot run program "python3": CreateProcess error=2, The system cannot find the file specified [...] Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified However, both work after I add the following two entries to the PATH environment variable: C:\Users\myuser\.pyenv\pyenv-win\versions\3.10.10 C:\Users\myuser\.pyenv\pyenv-win\versions\3.10.10\Scripts but I would have to "hardcode" the Python Version - which is exactly what I don't want to do while using pyenv. If I hardcode the path, even if I switch to another Python version (pyenv global 3.8.10), once I run pyspark in Powershell, the version PySpark 3.4.1 starts from the environment PATH entry for Python 3.10.10. I also cannot just do anything with python in the command line as it always points to the hardcoded python version, no matter what I do with pyenv. I was hoping to be able to start PySpark 3.2.1 from Python 3.8.10 which I just "activated" with pyenv globally. What do I have to do to be able to switch between the Python installations (and thus also between PySparks) with pyenv without "hardcoding" the Python paths? Example PySpark script: from pyspark.sql import SparkSession spark = ( SparkSession .builder .master("local[*]") .appName("myapp") .getOrCreate() ) data = [("Finance", 10), ("Marketing", 20), ] df = spark.createDataFrame(data=data) df.show(10, False)
I "solved" the issue by completely removing the Python path from the PATH environment variable and doing everything exclusively via pyenv. I suppose my original task is not possible. I can still start a Python process by running pyenv exec python in the terminal. But disappointingly I cannot launch a Spark process from the terminal anymore. At least my repositories work as expected when setting the pyenv versions (pyenv local 3.8.10 / pyenv global 3.10.10).
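One addition of my own that may help with the 'Cannot run program "python3"' part specifically: PySpark chooses the worker interpreter via the PYSPARK_PYTHON (and PYSPARK_DRIVER_PYTHON) environment variables. Setting them to sys.executable at the top of the script pins Spark to whichever interpreter pyenv resolved, without hardcoding any version path. This is a sketch, not a tested fix for this exact Windows/pyenv setup:

import os
import sys

# Use the interpreter that is currently running, i.e. the one pyenv activated.
os.environ.setdefault("PYSPARK_PYTHON", sys.executable)
os.environ.setdefault("PYSPARK_DRIVER_PYTHON", sys.executable)

from pyspark.sql import SparkSession

spark = (
    SparkSession
    .builder
    .master("local[*]")
    .appName("myapp")
    .getOrCreate()
)
spark.range(5).show()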
3
0
78,396,068
2024-4-27
https://stackoverflow.com/questions/78396068/how-to-plot-shap-summary-plots-for-all-classes-in-multiclass-classification
I am using XGBoost with SHAP to analyze feature importance in a multiclass classification problem and need help plotting the SHAP summary plots for all classes at once. Currently, I can only generate plots one class at a time. SHAP version: 0.45.0 Python version: 3.10.12 Here is my code: import xgboost as xgb import shap import numpy as np import pandas as pd from sklearn.model_selection import train_test_split from sklearn.datasets import make_classification from sklearn.metrics import accuracy_score # Generate synthetic data X, y = make_classification(n_samples=500, n_features=20, n_informative=4, n_classes=6, random_state=42) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42) # Train a XGBoost model for multiclass classification model = xgb.XGBClassifier(objective="multi:softprob", random_state=42) model.fit(X_train, y_train) I then tried to plot the shape values: # Create a SHAP TreeExplainer explainer = shap.TreeExplainer(model) # Calculate SHAP values for the test set shap_values = explainer.shap_values(X_test) # Attempt to plot summary for all classes shap.summary_plot(shap_values, X_test, plot_type="bar") I got this interaction plot instead: I remedied the problem with help from this post: shap.summary_plot(shap_values[:,:,0], X_test, plot_type="bar") which gives a normal bar plot for class 0: I can then do the same with classes 1, 2, 3, etc. The question is, how can you make a summary plot for all the classes? I.e., a single plot showing the contribution of a feature to each class?
The issue is that explainer.shap_values(X_test) will return a 3D DataFrame of shape (rows, features, classes) and to show a bar plot summary_plot(shap_values) requires shap_values to be a list of (rows, features) where the list is: length = number of classes. For my own purposes, I used the following function which converts your shap_values into the format that you need: def shap_values_to_list(shap_values, model): shap_as_list=[] for i in range(len(model.classes_)): shap_as_list.append(shap_values[:,:,i]) return shap_as_list Then you can do: shap_as_list = shap_values_to_list(shap_values, model) shap.summary_plot(shap_as_list, X_test, plot_type="bar") You can always add feature_names and class_names to the summary_plot if you need. With my own example I went from having the same kind of interaction plot that you did to the following: Example of shap.summary_plot output using shap_values converted to a list of shap_values
3
5
78,379,820
2024-4-24
https://stackoverflow.com/questions/78379820/llm-studio-fail-to-download-model-with-error-unable-to-get-local-issuer-certif
In LM Studio, when I try to download any model, I am facing the following error:
Download Failed: unable to get local issuer certificate
I've experienced the same in a corporate network. To get around I have been manually downloading through my browser (the url is to the left of the error message in your screenshot) and manually loading the models into the appropriate dir: ~/.cache/lm-studio/models in the appropriate subpath. ie ~/.cache/lm-studio/models/Mistral-7B-Instruct-v0.2-GGUF/mistral-7b-instruct-v0.2.Q6_K.gguf Not the most elegant solution but I have yet to find information on a better workaround.
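If the root cause is a corporate proxy that re-signs TLS traffic (the usual source of "unable to get local issuer certificate"), a possible alternative to the browser download is fetching the file with huggingface_hub while pointing Python's requests stack at your company's root certificate. Everything below is a hedged sketch: the certificate path and the repo id/filename are placeholders you would need to replace.

import os
from huggingface_hub import hf_hub_download

# Placeholder: corporate root CA bundle exported from your browser or IT department.
os.environ["REQUESTS_CA_BUNDLE"] = r"C:\certs\corporate-root-ca.pem"

path = hf_hub_download(
    repo_id="TheBloke/Mistral-7B-Instruct-v0.2-GGUF",  # placeholder repo id
    filename="mistral-7b-instruct-v0.2.Q6_K.gguf",     # placeholder file name
    local_dir=os.path.expanduser(
        "~/.cache/lm-studio/models/Mistral-7B-Instruct-v0.2-GGUF"
    ),
)
print("Saved to:", path)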
2
4
78,389,654
2024-4-26
https://stackoverflow.com/questions/78389654/exporting-pointset-unstructuredgrid-data-as-stl-file-from-pyvista
I am using pyvista to mesh 3D scan data of terrain and save them to stl files to save in blender. Some of the .xyz files contain data for a single continuous area while others include a number of smaller areas, with no data inbetwean. I have managed to get this to work for the continuous regions: import numpy as np import pyvista as pv xyz_path = r"Path_to_file.xyz" stl_path = r"Path_to_file.stlr" data = np.loadtxt(xyz_path) cloud = pv.PolyData(data) surf = cloud.delaunay_2d() surf.plot(show_edges=True) surf.save(stl_name) for the discontinuous reqions I got large faces linking the different areas that I would like to delete. I found a way to do this based on the area of the face. import numpy as np import pyvista as pv xyz_path = r"Path_to_file.xyz" stl_path = r"Path_to_file.stlr" data = np.loadtxt(xyz_path) cloud = pv.PolyData(data) surf = cloud.delaunay_2d() cell_sizes = surf.compute_cell_sizes(length=False, area=True, volume=False) areas = cell_sizes['Area'] max_area_threshold = 100 # Set the area threshold # Create a mask for cells that meet the area criteria valid_cells = areas < max_area_threshold # Use the mask to filter out large triangles filtered_mesh = surf.extract_cells(valid_cells) filtered_mesh.plot(show_edges=True, scalars='Area') surf.save(stl_name) From the plot I can see that the correct elements have been deleted. However, this process also converts the data structure from core.pointset.PolyData (which can be saved to stl) to core.pointset.UnstructuredGrid (which cant be saved to stl). Is there a way I con convert the core.pointset.UnstructuredGrid object to a format that can be saved to stl or delete the unwanted elements in a way that I can still save the surface to stl. My current workaround is to create the STL file leaving in the unwanted faces then delete these in MeshLab but if possible I would like to avoid this step if there is a simple solution.
Based on the input from Andras I used extract_surface() to extract the surface and save to stl file. import numpy as np import pyvista as pv xyz_path = r"Path_to_file.xyz" stl_path = r"Path_to_file.stlr" data = np.loadtxt(xyz_path) cloud = pv.PolyData(data) surf = cloud.delaunay_2d() cell_sizes = surf.compute_cell_sizes(length=False, area=True, volume=False) areas = cell_sizes['Area'] max_area_threshold = 100 # Set the area threshold #Create a mask for cells that meet the area criteria valid_cells = areas < max_area_threshold #Use the mask to filter out large triangles filtered_mesh = surf.extract_cells(valid_cells) filtered_mesh.plot(show_edges=True, scalars='Area') #Extract surface filtered_mesh_surface = filtered_mesh.extract_surface() filtered_mesh_surface.save(stl_path_filtered) This is covered in the documentation here: https://docs.pyvista.org/version/stable/examples/01-filter/extract-surface.html.
2
1
78,407,858
2024-4-30
https://stackoverflow.com/questions/78407858/how-do-i-create-a-constructor-that-would-receive-different-types-of-parameters
I have this Point class. I want it to be able to recieve double and SomeType parameters. Point.pxd: from libcpp.memory cimport shared_ptr, weak_ptr, make_shared from SomeType cimport _SomeType, SomeType cdef extern from "Point.h": cdef cppclass _Point: _Point(shared_ptr[double] x, shared_ptr[double] y) _Point(shared_ptr[double] x, shared_ptr[double] y, shared_ptr[double] z) _Point(shared_ptr[_SomeType] x, shared_ptr[_SomeType] y) _Point(shared_ptr[_SomeType] x, shared_ptr[_SomeType] y, shared_ptr[_SomeType] z) shared_ptr[_SomeType] get_x() shared_ptr[_SomeType] get_y() shared_ptr[_SomeType] get_z() cdef class Point: cdef shared_ptr[_Point] c_point Point.pyx: from Point cimport * cdef class Point: def __cinit__(self, SomeType x=SomeType("0", None), SomeType y=SomeType("0", None), SomeType z=SomeType("0", None)): self.c_point = make_shared[_Point](x.thisptr, y.thisptr, z.thisptr) def __dealloc(self): self.c_point.reset() def get_x(self) -> SomeType: cdef shared_ptr[_SomeType] result = self.c_point.get().get_x() cdef SomeType coord = SomeType("", None, make_with_pointer = True) coord.thisptr = result return coord def get_y(self) -> SomeType: cdef shared_ptr[_SomeType] result = self.c_point.get().get_y() cdef SomeType coord = SomeType("", None, make_with_pointer = True) coord.thisptr = result return coord def get_z(self) -> SomeType: cdef shared_ptr[_SomeType] result = self.c_point.get().get_z() cdef SomeType coord = SomeType("", None, make_with_pointer = True) coord.thisptr = result return coord property x: def __get__(self): return self.get_x() property y: def __get__(self): return self.get_y() property z: def __get__(self): return self.get_z() How should I write my .pxd and .pyx files so that my Point constructor can receive different type of parameters?
In Cython, you cannot directly overload constructors (or any methods) as you might in C++ or other languages that support method overloading. However, you can achieve similar functionality by using factory methods or by using Python's flexibility with arguments. Given your scenario where the Point class needs to accept different types of parameters (either double or SomeType objects), you can implement this flexibility using Python's *args and **kwargs in combination with type checking and processing logic inside the constructor. Additionally, you can define class methods that act as alternative constructors, which is a common Pythonic approach to solve this issue. Here’s how you might adjust your .pxd and .pyx files to accommodate these requirements: Point.pxd This file remains largely the same but ensure it correctly declares everything you need: # Point.pxd from libcpp.memory cimport shared_ptr, make_shared from SomeType cimport _SomeType, SomeType cdef extern from "Point.h": cdef cppclass _Point: _Point(shared_ptr[double] x, shared_ptr[double] y) _Point(shared_ptr[double] x, shared_ptr[double] y, shared_ptr[double] z) _Point(shared_ptr[_SomeType] x, shared_ptr[_SomeType] y) _Point(shared_ptr[_SomeType] x, shared_ptr[_SomeType] y, shared_ptr[_SomeType] z) cdef class Point: cdef shared_ptr[_Point] c_point Point.pyx Modify this file to include a flexible constructor and additional class methods for different initialisations: # Point.pyx from Point cimport * from libc.stdlib cimport atof cdef class Point: def __cinit__(self, *args): if len(args) == 2 or len(args) == 3: if isinstance(args[0], SomeType): ptrs = [arg.thisptr for arg in args] else: ptrs = [make_shared[double](atof(arg)) for arg in args] if len(args) == 2: self.c_point = make_shared[_Point](ptrs[0], ptrs[1]) elif len(args) == 3: self.c_point = make_shared[_Point](ptrs[0], ptrs[1], ptrs[2]) else: raise ValueError("Invalid number of arguments") def __dealloc__(self): self.c_point.reset() @staticmethod def from_doubles(x, y, z=None): cdef shared_ptr[double] px = make_shared[double](x) cdef shared_ptr[double] py = make_shared[double](y) cdef shared_ptr[double] pz = make_shared[double](z) if z is not None else None if z is None: return Point(px, py) return Point(px, py, pz) # ... rest of the methods ... Here, __cinit__ accepts variable arguments (*args). It determines the type of each argument and constructs the c_point appropriately, based on the number of arguments and their types. The from_doubles static method provides a clearer, type-specific way to create instances from double values. This approach gives you the flexibility to initialise Point objects with different types while maintaining clean, readable code. Make sure that the make_shared[double](x) conversions handle input correctly, and adjust the type-checking and conversions as necessary for your specific needs and types.
4
-1
78,382,641
2024-4-25
https://stackoverflow.com/questions/78382641/can-three-given-coordinates-be-points-of-a-rectangle
I am currently learning how to code in Python and I came across this task and I am trying to solve it. But I think I got a mistake somewhere and I was wondering can maybe someone help me with it. The task is to write a code that will: Import four 2D coordinates (A, B, C and X) from a text file Check can A, B, C be points of a rectangle Check if X is inside the ABC rectangle Calculate the diagonal of the rectangle. So far I have this: import math def distance(point1, point2): return math.sqrt((point2[0] - point1[0])**2 + (point2[1] - point1[1])**2) def is_rectangle(point1, point2, point3): distances = [ distance(point1, point2), distance(point2, point3), distance(point3, point1) ] distances.sort() if distances[0] == distances[1] and distances[1] != distances[2]: return True else: return False def is_inside_rectangle(rectangle, point): x_values = [vertex[0] for vertex in rectangle] y_values = [vertex[1] for vertex in rectangle] if (min(x_values) < point[0] < max(x_values)) and (min(y_values) < point[1] < max(y_values)): return True else: return False with open('coordinates.txt', 'r') as file: coordinates = [] for line in file: x, y = map(int, line.strip()[1:-1].split(',')) coordinates.append((x, y)) rectangle = [coordinates[0], coordinates[1], coordinates[2]] diagonal1 = distance(coordinates[0], coordinates[2]) diagonal2 = distance(coordinates[1], coordinates[3]) if is_rectangle(coordinates[0], coordinates[1], coordinates[2]) and is_inside_rectangle(rectangle, coordinates[3]): print("True") print(f"Diagonal of the rectangle is: {max(diagonal1, diagonal2)}") else: print("False") The code works but I think it calculates the diagonal wrong. For example lets take this points for input: A(0, 0), B(5,0), C(0, 5) and X(2, 2). It says they can be points of a rectangle and that the diagonal is 5. When I put this points on paper, the fourth point can be D(5, 5) and then the diagonal is 7.07 (square root of 50). Or it can be D(-5, 5) but then it is a parallelogram and one diagonal is 5 but it is not the max one. Also I am trying to write a function that will check if all data in the text file are integers. Let's say B is (m, k), then it should return false and if all data are integers, then continue with the code. Any ideas on that one?
Let's look at the logic for your is_rectangle function. def is_rectangle(point1, point2, point3): distances = [ distance(point1, point2), distance(point2, point3), distance(point3, point1) ] distances.sort() if distances[0] == distances[1] and distances[1] != distances[2]: return True else: return False You are testing if the two smallest sides of triangle ABC are equal, and different from the largest side. Effectively answering the question: "Is ABC an obtuse isosceles triangle?" But the question you should be asking is: "Is ABC a right triangle?" There are several ways to test whether a triangle ABC is a right triangle. One way is to check if one of the three pairs of vectors (AB,AC), (BA,BC), (CA, CB) is orthogonal, ie if its dot product is 0. Another way is to check if the three distances AB, AC, BC form a Pythagorean triplet, ie if z**2 == x**2 + y**2 is true when x,y,z = sorted(map(distance,itertools.combinations(points,2))) PS: In general when you write if condition: return True else: return False, you might as well write directly return condition. With that in mind, the code you wrote in python for this function is equivalent to: from itertools import combinations def is_acute_isosceles_triangle(point1, point2, point3): distances = sorted(map(distance, combinations((point1,point2,point3), 2))) return (distances[0] == distances[1] and distances[1] != distances[2])
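For completeness, the dot-product test suggested above can be packaged as a small helper; this sketch is mine and not part of the accepted answer (the question's sample points A(0, 0), B(5, 0), C(0, 5) are used to exercise it):

import math

def is_right_triangle(a, b, c, tol=1e-9):
    """True if A, B, C form a non-degenerate triangle with a right angle."""
    def dot_at(p, q, r):
        # dot product of the vectors p->q and p->r
        return (q[0] - p[0]) * (r[0] - p[0]) + (q[1] - p[1]) * (r[1] - p[1])

    # Twice the signed area; zero means the points are collinear or coincident.
    area2 = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
    if math.isclose(area2, 0.0, abs_tol=tol):
        return False
    return any(
        math.isclose(dot_at(p, q, r), 0.0, abs_tol=tol)
        for p, q, r in ((a, b, c), (b, a, c), (c, a, b))
    )

print(is_right_triangle((0, 0), (5, 0), (0, 5)))  # True  -> D = (5, 5) completes the rectangle
print(is_right_triangle((0, 0), (5, 0), (1, 5)))  # False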
2
2
78,405,706
2024-4-30
https://stackoverflow.com/questions/78405706/auto-rescale-y-axis-for-range-slider-in-plotly-bar-graph-python
Is there a way to force the rangeslider in plotly to auto-rescale the y-axis? Manually selecting a date range with the DatePicker works fine as the y-axis automatically updates. However, when updating through the rangeslider, the y-axis doesn't auto-rescale. Without re-scaling, the figure is hard to interpret. It's almost not worth having at all. Are there any options to automatically force this? It may not be worthwhile also if a subsequent component (button) is required to update the plot. The y-axis is fine initially. But when using the rangeslider to show the most recent week, the y-asic does not update (see below). Although, it works fine when using the DatePicker to get the same time period. Both fixedrange or autorange does not force rescaling. import dash from dash import dcc from dash import html from dash.dependencies import Input, Output import dash_bootstrap_components as dbc import plotly.express as px import pandas as pd df1 = pd.DataFrame({ 'Type': ['B','K','L','N'], }) N = 300 df1 = pd.concat([df1] * N, ignore_index=True) df1['TIMESTAMP'] = pd.date_range(start='2024/01/01 07:36', end='2024/01/30 08:38', periods=len(df1)) df2 = pd.DataFrame({ 'Type': ['B','K','L','N'], }) N = 3 df2 = pd.concat([df2] * N, ignore_index=True) df2['TIMESTAMP'] = pd.date_range(start='2024/01/30 08:37', end='2024/02/28 08:38', periods=len(df2)) df = pd.concat([df1,df2]) df['Date'] = df['TIMESTAMP'].dt.date df['Date'] = df['Date'].astype(str) external_stylesheets = [dbc.themes.SPACELAB, dbc.icons.BOOTSTRAP] app = dash.Dash(__name__, external_stylesheets = external_stylesheets) app.layout = dbc.Container([ dbc.Row([ html.Div(html.Div(children=[ html.Div(children=[ dcc.DatePickerRange( id = 'date-picker', display_format = 'DD/MM/YYYY', show_outside_days = True, minimum_nights = 0, initial_visible_month = df['Date'].min(), min_date_allowed = df['Date'].min(), max_date_allowed = df['Date'].max(), start_date = df['Date'].min(), end_date = df['Date'].max() ), dcc.Dropdown( id = 'Type', options = [ {'label': x, 'value': x} for x in df['Type'].unique() ], value = df['Type'].unique(), multi = True, ), ] ) ] )), dcc.Graph(id = 'date-bar-chart') ]) ]) @app.callback( Output('date-bar-chart', 'figure'), Input('date-picker','start_date'), Input('date-picker','end_date'), Input("Type", "value"), ) def chart(start_date, end_date, type): dff = df[(df['Date'] >= start_date) & (df['Date'] <= end_date)] dff = dff[dff['Type'].isin(type)] df_count = dff.groupby(['Date','Type'])['Type'].count().reset_index(name = 'counts') type_fig = px.bar(x = df_count['Date'], y = df_count['counts'], color = df_count['Type'] ) type_fig.update_layout( xaxis = dict( rangeselector = dict( buttons = list([ dict(count = 7, label = '1w', step = 'day', stepmode = 'backward'), dict(count = 1, label = '1m', step = 'month', stepmode = 'backward'), dict(count = 6, label = '6m', step = 'month', stepmode = 'backward'), dict(count = 1, label = 'YTD', step = 'year', stepmode = 'todate'), dict(count = 1, label = '1y', step = 'year', stepmode = 'backward'), dict(step = 'all') ]) ), rangeslider = dict( visible = False, autorange = True ), type = 'date' ), yaxis = dict( autorange = True, fixedrange = False, ), xaxis_rangeslider_visible=True, xaxis_rangeslider_yaxis_rangemode="auto" ) return type_fig if __name__ == '__main__': app.run_server(debug = True)
You can achieve this using a clientside callback. The idea is the same as for Autorescaling y axis range when range slider used : we register a JS handler for the relayout event that will update the yaxis range according to the y data that are within the xaxis range. Add this to your current code : app.clientside_callback( ClientsideFunction( namespace='someApp', function_name='onRelayout' ), Output('date-bar-chart', 'id'), Input('date-bar-chart', 'relayoutData') ) NB. Output('date-bar-chart', 'id'), is used as a "dummy output" (the zoom adjustment is done using the Plotly.js API for efficiency, so we don't need an output). While Dash supports no output callbacks (as of v2.17.0), there are still issues with clientside callbacks having no output, so we use an existing component here, and the callback function will prevent updating it by returning dash_clientside.no_update. In your assets_foler (default 'assets'), add a .js file with the following code : window.dash_clientside = Object.assign({}, window.dash_clientside, { someApp: { graphDiv: null, onRelayout(e) { if ( !e || e.autosize || e.width || e.height || // plot init or resizing e['yaxis.range'] || e['yaxis.autorange'] || // yrange already adjusted e['yaxis.range[0]'] || e['yaxis.range[1]'] // yrange manually set ) { // Nothing to do. return dash_clientside.no_update; } if (!window.dash_clientside.someApp.graphDiv) { const selector = '#date-bar-chart .js-plotly-plot'; window.dash_clientside.someApp.graphDiv = document.querySelector(selector); } const gd = window.dash_clientside.someApp.graphDiv; if (e['xaxis.autorange']) { Plotly.relayout(gd, {'yaxis.autorange': true}); return dash_clientside.no_update; } // Convert xrange to timestamp so we can easily filter y data. const toMsTimestamp = x => new Date(x).getTime(); const [x0, x1] = gd._fullLayout.xaxis.range.map(toMsTimestamp); // Filter y data according to the given xrange for each visible trace. const yFiltered = gd._fullData.filter(t => t.visible === true).flatMap(t => { return gd.calcdata[t.index].reduce((filtered, data) => { if (data.p >= x0 && data.p <= x1) { filtered.push(data.s0, data.s1); } return filtered; }, []); }); const ymin = Math.min(...yFiltered); const ymax = Math.max(...yFiltered); // Add some room if needed before adjusting the yrange, taking account of // whether the plot has positive only vs negative only vs mixed bars. const room = (ymax - ymin) / 20; const yrange = [ymin < 0 ? ymin - room : 0, ymax > 0 ? ymax + room : 0]; Plotly.relayout(gd, {'yaxis.range': yrange}); return dash_clientside.no_update; } } });
3
2
78,406,108
2024-4-30
https://stackoverflow.com/questions/78406108/update-dmc-slider-text-style-plotly-dash
How can the ticks be manipulated using dash mantine components? I've got a slider below, that alters the opacity of a bar graph. I know you can change the size and radius of the slider bar. But I want to change the fontsize, color of the xticks corresponding to the bar. You could use the following css with dcc.sliders but is there a similar way to control dmc.sliders? .rc-slider-mark-text { font-size: 10px; color: blue; } .rc-slider-mark-text-active { font-size: 10px; color: red; } I've tried to change the css file to no avail. Also, alter the fontsize or color in the style parameter has no affect. import dash from dash import dcc from dash import html from dash.dependencies import Input, Output import dash_bootstrap_components as dbc import dash_mantine_components as dmc import plotly.express as px import plotly.graph_objs as go import pandas as pd df = pd.DataFrame({ 'Fruit': ['Apple','Banana','Orange','Kiwi','Lemon'], 'Value': [1,2,4,8,6], }) external_stylesheets = [dbc.themes.SPACELAB, dbc.icons.BOOTSTRAP] app = dash.Dash(__name__, external_stylesheets = external_stylesheets) filter_box = html.Div(children=[ html.Div(children=[ dmc.Text("trans"), dmc.Slider(id = 'bar_transp', min = 0, max = 1, step = 0.1, marks = [ {"value": 0, "label": "0"}, {"value": 0.2, "label": "0.2"}, {"value": 0.4, "label": "0.4"}, {"value": 0.6, "label": "0.6"}, {"value": 0.8, "label": "0.8"}, {"value": 1, "label": "1"}, ], value = 1, size = 'lg', style = {"font-size": 2, "color": "white"}, #doesn't work ), ], className = "vstack", ) ]) app.layout = dbc.Container([ dbc.Row([ dbc.Col(html.Div(filter_box), ), dcc.Graph(id = 'type-chart'), ]) ], fluid = True) @app.callback( Output('type-chart', 'figure'), [ Input('bar_transp', 'value'), ]) def chart(bar_transp): df_count = df.groupby(['Fruit'])['Value'].count().reset_index(name = 'counts') df_count = df_count type_fig = px.bar(x = df_count['Fruit'], y = df_count['counts'], color = df_count['Fruit'], opacity = bar_transp, ) return type_fig if __name__ == '__main__': app.run_server(debug = True)
According to the Styles API, you can use the static selector .mantine-Slider-markLabel to customize the tick labels, but it seems there is no selector for the active tick specifically, so you would have to use a clientside callback to apply a custom class, say mantine-Slider-markLabel-active, to the active element when the slider value changes. NB. While Dash supports no output callbacks (as of v2.17.0), DMC currently doesn't, so we have to use a "dummy output" for the clientside callback, but instead of creating a specific component we can use an existing one since the callback prevents the update anyway by returning dash_clientside.no_update. Note also that : It's required that you wrap your app with a dmc.MantineProvider, else dash will complain. Dash Mantine Components is based on REACT 18. You must set the env variable REACT_VERSION=18.2.0 before starting up the app. from dash import Dash, dcc, html, _dash_renderer from dash import callback, clientside_callback, Input, Output, ClientsideFunction import dash_bootstrap_components as dbc import dash_mantine_components as dmc import plotly.express as px import plotly.graph_objs as go import pandas as pd _dash_renderer._set_react_version('18.2.0') df = pd.DataFrame({ 'Fruit': ['Apple','Banana','Orange','Kiwi','Lemon'], 'Value': [1,2,4,8,6], }) external_stylesheets = [dbc.themes.SPACELAB, dbc.icons.BOOTSTRAP] app = Dash(__name__, external_stylesheets = external_stylesheets) filter_box = html.Div(children=[ html.Div(children=[ dmc.Text("trans"), dmc.Slider( id = 'bar_transp', min = 0, max = 1, step = 0.1, marks = [ {"value": 0, "label": "0"}, {"value": 0.2, "label": "0.2"}, {"value": 0.4, "label": "0.4"}, {"value": 0.6, "label": "0.6"}, {"value": 0.8, "label": "0.8"}, {"value": 1, "label": "1"}, ], value = 1, size = 'lg', ), ], className = "vstack" ) ]) app.layout = dmc.MantineProvider( dbc.Container([ dbc.Row([ dbc.Col(html.Div(filter_box)), dcc.Graph(id = 'type-chart'), html.Div(id='dummy-output'), ]) ], fluid = True) ) @callback( Output('type-chart', 'figure'), Input('bar_transp', 'value')) def chart(bar_transp): df_count = df.groupby(['Fruit'])['Value'].count().reset_index(name = 'counts') df_count = df_count type_fig = px.bar( x = df_count['Fruit'], y = df_count['counts'], color = df_count['Fruit'], opacity = bar_transp, ) return type_fig clientside_callback( ClientsideFunction( namespace='someApp', function_name='onSliderChange' ), Output('bar_transp', 'key'), Input('bar_transp', 'value') ) if __name__ == '__main__': app.run_server(debug = True) In your assets_folder (default 'assets') : .js file : window.dash_clientside = Object.assign({}, window.dash_clientside, { someApp: { onSliderChange(value) { const activeCls = 'mantine-Slider-markLabel-active'; // Remove activeCls from the previously active mark if any. document.querySelector('.' + activeCls)?.classList.remove(activeCls); // And add it to the currently active mark const labels = document.querySelectorAll('.mantine-Slider-markLabel'); const active = [...labels].find(label => +label.textContent === value); active?.classList.add(activeCls); // Prevent updating the ouput return dash_clientside.no_update; } } }); .css file : .mantine-Slider-markLabel { font-size: 10px; color: blue; } .mantine-Slider-markLabel-active { font-size: 10px; color: red; }
3
1
78,379,995
2024-4-24
https://stackoverflow.com/questions/78379995/creating-a-custom-colorbar-in-matplotlib
How can I create a colorbar in matplotlib that looks like this: Here is what I tried: import matplotlib.pyplot as plt from matplotlib.colors import LinearSegmentedColormap from matplotlib.cm import ScalarMappable from matplotlib.colors import Normalize # Define the custom colormap colors = ['red', 'cyan', 'darkgreen'] cmap = LinearSegmentedColormap.from_list( 'custom_colormap', [(0.0, colors[0]), (0.5 / 2.0, colors[0]), (0.5 / 2.0, colors[1]), (1.5 / 2.0, colors[1]), (1.5 / 2.0, colors[2]), (2.0 / 2.0, colors[2])] ) # Create a scalar mappable object with the colormap sm = ScalarMappable(norm=Normalize(vmin=3.5, vmax=4.5), cmap=cmap) # Create the colorbar plt.figure(figsize=(3, 1)) cb = plt.colorbar(sm, orientation='horizontal', ticks=[3.5, 4.5], extend='neither') cb.set_label('')
One solution is to draw wide white edges around the color segments (using cb.solids.set below), hide the spines (cb.outline.set_visible) and draw vertical lines as dividers (cb.ax.axvline). To match the desired colorbar, make sure to pass a ymin that is greater than 0. import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap, Normalize from matplotlib.cm import ScalarMappable fig, ax = plt.subplots(figsize=(3, 1), layout='tight') colors = ['red', 'cyan', 'darkgreen'] ticks = [3.5, 4.5] cmap = ListedColormap(colors) norm = Normalize(vmin=2.5, vmax=5.5) cb = fig.colorbar( mappable=ScalarMappable(norm=norm, cmap=cmap), cax=ax, ticks=ticks, ticklocation='top', orientation='horizontal', ) cb.solids.set(edgecolor='white', linewidth=5) cb.outline.set_visible(False) cb.ax.tick_params(width=1, length=10, color='k') for bound in ticks: cb.ax.axvline(bound, c='k', linewidth=1, ymin=0.3, alpha=0.6) plt.setp(cb.ax.xaxis.get_ticklines(), alpha=0.6) cb.set_ticklabels(ticks, alpha=0.6, color='k', fontsize=15, fontfamily='Arial') Another solution is instead of drawing vertical lines as dividers, just use the divider lines defined on colorbar object itself (pass drawedges=True). However, the end result will be a little different from the desired result because the divider line draws from the bottom to the top (can't pass ymin like above). fig, ax = plt.subplots(figsize=(3, 1), layout='tight') colors = ['red', 'cyan', 'darkgreen'] ticks = [3.5, 4.5] cmap = ListedColormap(colors) norm = Normalize(vmin=2.5, vmax=5.5) cb = fig.colorbar( mappable=ScalarMappable(norm=norm, cmap=cmap), cax=ax, ticks=ticks, ticklocation='top', orientation='horizontal', drawedges=True ) cb.solids.set(edgecolor='white', linewidth=5) cb.outline.set_visible(False) cb.dividers.set(linewidth=1, alpha=0.6) cb.ax.tick_params(width=1, length=10, color='k') plt.setp(cb.ax.xaxis.get_ticklines(), alpha=0.6) cb.set_ticklabels(ticks, alpha=0.6, color='k', fontsize=15, fontfamily='Arial')
2
3
78,411,087
2024-4-30
https://stackoverflow.com/questions/78411087/split-pandas-column-and-create-new-columns-that-has-the-count-of-those-split-val
I have an excel file in which one of the columns has multiple values seperated by a comma (please see the attached image) For the StuType column, the numbers go all the way from 1 to 13 including some blank rows (as can be seen in ResponseID = 8538 and 8562). I am reading the file into pandas. The goal is to have 13 different columns with values 0 and 1 in there. My question is very similar to this one, however, both the solutions suggested there do not work for me. Can someone please help me? Thank you so very much! Edit: The solution from @ouroboros1 is perfect! There are some issues with my Excel file though (notice, how '5,8' is to the left and '10', '12', etc. are to the right...I fixed that in Excel before running the solutions provided). Once again, thanks a lot for everyone's help!
Edit: all credits to @mozway for the refactored solution (see this comment). Here's an approach with Series.str.get_dummies: df = ( df.join( df.pop('StuType') .str.get_dummies(sep=',') .rename(columns=int) .reindex(range(1, 14), axis=1, fill_value=0) ) ) Output Response 1 2 3 4 5 6 7 8 9 10 11 12 13 0 8524 0 0 0 0 1 0 0 1 0 0 0 0 0 1 8528 0 0 0 0 0 0 0 0 0 1 0 0 0 2 8538 0 0 0 0 0 0 0 0 0 0 0 1 0 3 8548 0 1 0 0 1 0 0 0 0 1 0 0 0 4 8558 0 0 0 0 0 0 0 0 0 0 0 0 1 5 8568 0 0 0 0 0 0 0 0 0 0 0 0 0 6 8578 0 0 0 0 0 0 1 0 0 0 0 0 0 7 8588 0 0 0 0 0 0 0 0 0 0 0 1 0 8 8598 0 0 0 0 1 0 0 0 0 0 0 0 0 9 8608 0 0 0 0 0 0 0 0 0 0 0 0 1 10 8618 0 0 0 0 0 0 0 0 0 0 0 0 1 11 8628 0 0 0 0 0 0 0 0 0 0 0 0 0 12 8638 0 0 0 0 1 0 0 1 0 1 1 0 0 Explanation Use df.pop to work with df['StuType'], while dropping it from df, and apply Series.str.get_dummies with `sep=','). Cast string column names of the resulting df to int with df.rename. Chain df.reindex to add columns (axis=1) within range(1, 14) that are not yet present, populating NaN values with 0 by setting fill_value=0. Join to original df (now without 'StuType' column) with df.join. If for some reason you want to keep the original df intact (df.pop will alter it), you can use pd.concat instead of df.pop + df.join: out = ( pd.concat([ df['Response'], ( df['StuType'] .str.get_dummies(sep=',') .rename(columns=int) .reindex(range(1, 14), axis=1, fill_value=0) ) ], axis=1) ) Solution before refactoring suggested by @mozway (edited): Here's an approach with pd.get_dummies: out = ( pd.concat([ df['Response'], ( pd.get_dummies(df['StuType'].str.split(',').explode()) .rename(columns=int) .groupby(level=0).sum() .reindex(range(1,14), axis=1, fill_value=0) ) ], axis=1) ) # same output Data used import pandas as pd import numpy as np data = {'Response': {0: 8524, 1: 8528, 2: 8538, 3: 8548, 4: 8558, 5: 8568, 6: 8578, 7: 8588, 8: 8598, 9: 8608, 10: 8618, 11: 8628, 12: 8638}, 'StuType': {0: '5,8', 1: '10', 2: '12', 3: '2,5,10', 4: '13', 5: np.nan, 6: '7', 7: '12', 8: '5', 9: '13', 10: '13', 11: np.nan, 12: '5,8,10,11'} } df = pd.DataFrame(data) df Response StuType 0 8524 5,8 1 8528 10 2 8538 12 3 8548 2,5,10 4 8558 13 5 8568 NaN 6 8578 7 7 8588 12 8 8598 5 9 8608 13 10 8618 13 11 8628 NaN 12 8638 5,8,10,11
2
4
78,408,880
2024-4-30
https://stackoverflow.com/questions/78408880/how-to-perform-matthews-corrcoef-in-sklearn-simultaneously-for-every-column-usin
I want to perform Matthews correlation coefficient (MCC) in sklearn to find the correlation between different features (boolean vectors) in a 2D numpyarray. What I have done so far is to loop through each column and find correlation value between features one by one. Here is my code: from sklearn.metrics import matthews_corrcoef import numpy as np X = np.array([[1, 0, 0, 0, 0], [1, 0, 0, 1, 0], [1, 0, 0, 0, 1], [1, 1, 0, 0, 0], [1, 1, 0, 1, 0], [1, 1, 0, 0, 1], [1, 0, 1, 0, 0], [1, 0, 1, 1, 0], [1, 0, 1, 0, 1], [1, 0, 0, 0, 0]]) n_sample, n_feature = X.shape rff_all = [] for i in range(n_feature): for j in range(i + 1, n_feature): coeff_f_f = abs(matthews_corrcoef(X[:, i], X[:, j])) rff_all.append(coeff_f_f) rff = np.mean(rff_all) As I have a huge dimension of 2D numpyarray, it seems to be really slow and impractical. What is the most efficient way to perform this kind of operation simultaneously without using the loops? Edit: I then come up with this idea but it is still pretty slow. from more_itertools import distinct_combinations all_c = [] for item in distinct_combinations(np.arange(X.shape[1]), r=2): c = matthews_corrcoef(X[:, item][:, 0], X[:, item][:, 1]) all_c.append(abs(c))
You can use numba to speed up the computing, e.g: import numba import numpy as np @numba.njit def _fill_cm(m, c1, c2): m[:] = 0 for a, b in zip(c1, c2): m[a, b] += 1 @numba.njit def mcc(confusion_matrix): # https://stackoverflow.com/a/56875660/992687 tp = confusion_matrix[0, 0] tn = confusion_matrix[1, 1] fp = confusion_matrix[1, 0] fn = confusion_matrix[0, 1] x = (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn) return ((tp * tn) - (fp * fn)) / sqrt(x + 1e-6) @numba.njit def get_all_mcc_numba(X): rows, columns = X.shape confusion_matrix = np.zeros((2, 2), dtype="float32") out = [] for i in range(columns): c1 = X[:, i] for j in range(i + 1, columns): # make confusion matrix c2 = X[:, j] _fill_cm(confusion_matrix, c1, c2) out.append(abs(mcc(confusion_matrix))) return out Benchmark: from timeit import timeit from math import sqrt import numba import numpy as np from sklearn.metrics import matthews_corrcoef def get_all_mcc_normal(X): n_sample, n_feature = X.shape rff_all = [] for i in range(n_feature): for j in range(i + 1, n_feature): coeff_f_f = abs(matthews_corrcoef(X[:, i], X[:, j])) rff_all.append(coeff_f_f) return rff_all @numba.njit def _fill_cm(m, c1, c2): m[:] = 0 for a, b in zip(c1, c2): m[a, b] += 1 @numba.njit def mcc(confusion_matrix): # https://stackoverflow.com/a/56875660/992687 tp = confusion_matrix[0, 0] tn = confusion_matrix[1, 1] fp = confusion_matrix[1, 0] fn = confusion_matrix[0, 1] x = (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn) return ((tp * tn) - (fp * fn)) / sqrt(x + 1e-6) @numba.njit def get_all_mcc_numba(X): rows, columns = X.shape confusion_matrix = np.zeros((2, 2), dtype="float32") out = [] for i in range(columns): c1 = X[:, i] for j in range(i + 1, columns): # make confusion matrix c2 = X[:, j] _fill_cm(confusion_matrix, c1, c2) out.append(abs(mcc(confusion_matrix))) return out # make 2000 x 100 0-1 matrix: np.random.seed(42) X = np.random.randint(low=0, high=2, size=(2000, 100), dtype="uint8") assert np.allclose(get_all_mcc_normal(X), get_all_mcc_numba(X)) t_normal = timeit("get_all_mcc_normal(X)", number=1, globals=globals()) t_numba = timeit("get_all_mcc_numba(X)", number=1, globals=globals()) print(f"{t_normal=}") print(f"{t_numba=}") Prints on my computer (AMD 5700x): t_normal=4.352220230008243 t_numba=0.008588693017372862 You can further speed up the computing using numba multiprocessing: @numba.njit(parallel=True) def get_all_mcc_numba_parallel(X): rows, columns = X.shape num_cpus = numba.get_num_threads() # for each thread, allocate array for confusion matrix cms = [] for i in range(num_cpus): cms.append(np.empty((2, 2), dtype="float32")) out = np.empty(shape=(columns, columns), dtype="float32") # make indexes for each thread thread_column_idxs = np.array_split(np.arange(columns), num_cpus) for i in range(columns): c1 = X[:, i] for thread_idx in numba.prange(num_cpus): for j in thread_column_idxs[thread_idx]: if j < i + 1: continue c2 = X[:, j] _fill_cm(cms[thread_idx], c1, c2) out[i, j] = abs(mcc(cms[thread_idx])) out2 = [] for i in range(columns): for j in range(i + 1, columns): out2.append(out[i, j]) return out2 Benchmark (using only numba/numba+parallel versions): import perfplot np.random.seed(42) X = np.random.randint(low=0, high=2, size=(2000, 100), dtype="uint8") assert np.allclose(get_all_mcc_normal(X), get_all_mcc_numba(X)) assert np.allclose(get_all_mcc_numba(X), get_all_mcc_numba_parallel(X)) perfplot.show( setup=lambda n: np.random.randint(low=0, high=2, size=(2000, n), dtype="uint8"), kernels=[ get_all_mcc_numba, get_all_mcc_numba_parallel, ], 
labels=["single", "parallel"], n_range=[100, 250, 500, 750, 1000], xlabel="2000 rows x N columns", logx=True, logy=True, equality_check=np.allclose, ) Creates this graph: For matrix with 10_000 rows and 10_000 columns: t_numba=782.5051075649972 (~13 minutes) t_numba_parallel=215.43611432801117 (~3.5 minutes)
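Side note, not part of the numba answer above: for 0/1 vectors the MCC is the phi coefficient, which coincides with the Pearson correlation, so a fully vectorized cross-check is possible with plain NumPy. Beware that constant columns (like the all-ones first column in the example X) have zero variance and yield NaN here, whereas sklearn returns 0 for them, so treat this as a sketch rather than a drop-in replacement:
import numpy as np

c = np.corrcoef(X.T)                 # Pearson == phi == MCC for binary 0/1 columns
iu = np.triu_indices_from(c, k=1)    # upper triangle: every pair (i, j) with j > i
rff_all = np.abs(c[iu])
rff = np.nanmean(rff_all)            # ignore NaNs produced by constant columns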
2
1
78,410,548
2024-4-30
https://stackoverflow.com/questions/78410548/polars-use-column-values-to-reference-other-column-in-when-then-expression
I have a Polars dataframe where I'd like to derive a new column using a when/then expression. The values of the new column should be taken from a different column in the same dataframe. However, the column from which to take the values differs from row to row. Here's a simple example: df = pl.DataFrame( { "frequency": [0.5, None, None, None], "frequency_ref": ["a", "z", "a", "a"], "a": [1, 2, 3, 4], "z": [5, 6, 7, 8], } ) The resulting dataframe should look like this: res = pl.DataFrame( { "frequency": [0.5, None, None, None], "frequency_ref": ["a", "z", "a", "a"], "a": [1, 2, 3, 4], "z": [5, 6, 7, 8], "res": [0.5, 6, 3, 4] } ) I tried to create a dynamic reference using a nested pl.col: # Case 1) Fixed value is given fixed_freq_condition = pl.col("frequency").is_not_null() & pl.col("frequency").is_not_nan() # Case 2) Reference to distribution data is given ref_freq_condition = pl.col("frequency_ref").is_not_null() # Apply the conditions to calculate res df = df.with_columns( pl.when(fixed_freq_condition) .then(pl.col("frequency")) .when(ref_freq_condition) .then( pl.col(pl.col("frequency_ref")) ) .otherwise(0.0) .alias("res"), ) This fails with TypeError: invalid input for "col". Expected "str" or "DataType", got 'Expr'. What works (but only as an intermediate solution) is explicitly listing every possible column value in a very long when/then expression. This is far from optimal, as the column names might change in the future, and it produces a lot of code repetition. df = df.with_columns( pl.when(fixed_freq_condition) .then(pl.col("frequency")) .when(pl.col("frequency_ref") == "a") .then(pl.col("a")) # ... more entries .when(pl.col("frequency_ref") == "z") .then(pl.col("z")) .otherwise(0.0) .alias("res"), )
You could build the when/then in a loop: freq_refs = df.get_column("frequency_ref") expr = pl.when(False).then(None) # dummy starter value for c in freq_refs: expr = expr.when(pl.col("frequency_ref") == c).then(pl.col(c)) expr = expr.otherwise(0) # Apply the conditions to calculate res df = df.with_columns( pl.when(fixed_freq_condition) .then(pl.col("frequency")) .when(ref_freq_condition) .then(expr) .otherwise(0.0) .alias("res"), ) df shape: (4, 5) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ frequency ┆ frequency_ref ┆ a ┆ b ┆ res β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ i64 ┆ i64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════════β•ͺ═════β•ͺ═════β•ͺ═════║ β”‚ 1 ┆ a ┆ 1 ┆ 5 ┆ 1.0 β”‚ β”‚ null ┆ b ┆ 2 ┆ 6 ┆ 6.0 β”‚ β”‚ null ┆ a ┆ 3 ┆ 7 ┆ 3.0 β”‚ β”‚ null ┆ a ┆ 4 ┆ 8 ┆ 4.0 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
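A variant of the same idea, shown here as a sketch: building the branches from the unique reference values avoids both the dummy starter expression and duplicated when arms, using pl.coalesce (the first non-null branch wins). It reuses the two conditions defined in the question:
refs = df.get_column("frequency_ref").unique().drop_nulls()
expr = pl.coalesce(
    [pl.when(pl.col("frequency_ref") == c).then(pl.col(c)) for c in refs]
    + [pl.lit(0.0)]
)

df = df.with_columns(
    pl.when(fixed_freq_condition)
    .then(pl.col("frequency"))
    .when(ref_freq_condition)
    .then(expr)
    .otherwise(0.0)
    .alias("res"),
)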
5
4
78,406,682
2024-4-30
https://stackoverflow.com/questions/78406682/dae-equation-system-solved-with-gekko-suddenly-not-working-anymore-error-messag
I have previously successfully solved a mechanical system using the gekko package. However, without having changed anything in my code, I now get the following error message: --------------------------------------------------------------------------- Exception Traceback (most recent call last) Cell In[3], line 15 13 m.Equation (F_r == (((1-a)/3)**2 + (2*(1+a)/3)**2 * v)) 14 m.Equation (a == (1000 - F_l)/mass) ---> 15 m.solve(disp=False) 16 plt.plot(x) 17 print(x) File ~/anaconda3/envs/conda_calculator/lib/python3.11/site-packages/gekko/gekko.py:2210, in GEKKO.solve(self, disp, debug, GUI, **kwargs) 2208 #print APM error message and die 2209 if (debug >= 1) and ('@error' in response): -> 2210 raise Exception(response) 2212 #load results 2213 def byte2str(byte): Exception: @error: Model File Not Found Model file does not exist: 130.92.213.86_gk_model0.apm STOPPING... To understand how gekko is working, I have run examples that are presented here. # Pendulum - Index 3 DAE from gekko import GEKKO import numpy as np mass = 1 g = 9.81 s = 1 m = GEKKO() x = m.Var(0) y = m.Var(-s) v = m.Var(1) w = m.Var(0) lam = m.Var(mass*(1+s*g)/2*s**2) m.Equation(x**2+y**2==s**2) m.Equation(x.dt()==v) m.Equation(y.dt()==w) m.Equation(mass*v.dt()==-2*x*lam) m.Equation(mass*w.dt()==-mass*g-2*y*lam) m.time = np.linspace(0,2*np.pi,100) m.options.IMODE=4 m.options.NODES=3 m.solve(disp=False) import matplotlib.pyplot as plt plt.figure(figsize=(10,5)) plt.subplot(3,1,1) plt.plot(m.time,x.value,label='x') plt.plot(m.time,y.value,label='y') plt.ylabel('Position') plt.legend(); plt.grid() plt.subplot(3,1,2) plt.plot(m.time,v.value,label='v') plt.plot(m.time,w.value,label='w') plt.ylabel('Velocity') plt.legend(); plt.grid() plt.subplot(3,1,3) plt.plot(m.time,lam.value,label=r'$\lambda$') plt.legend(); plt.grid() plt.xlabel('Time') plt.savefig('index3.png',dpi=600) plt.show() These were also working, and now show the same error message. Any idea what is going on or what I missed? Thanks!
The public server is under significant load with ~315,000 downloads of the gekko package each month. I recommend switching to remote=False for a faster and more reliable response by solving locally instead of through the public server. m = GEKKO(remote=False) This may change the behavior because there are more solver options on the public server, but should be the same in most cases. While it isn't a problem in this case, it is also a good idea to keep the version of gekko up-to-date with: pip install gekko --upgrade There are new features such as a gekko support agent. from gekko import support a = support.agent() a.ask("Can you optimize the Rosenbrock function?")
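If you want to keep the public server as an option, one hedged pattern (plain exception handling, not a gekko-specific API) is to try the remote solve first and fall back to a local solve when the server is unreachable. A toy sketch, not the pendulum model from the question:
from gekko import GEKKO

def solve_model(remote):
    # build and solve a tiny model; rebuild because remote/local is fixed at construction
    m = GEKKO(remote=remote)
    x = m.Var(1)
    m.Equation(x**2 == 4)
    m.solve(disp=False)
    return x.value[0]

try:
    result = solve_model(remote=True)    # public server
except Exception:
    result = solve_model(remote=False)   # local solver fallback
print(result)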
3
0
78,409,623
2024-4-30
https://stackoverflow.com/questions/78409623/pandas-divide-by-each-value-occurrence-in-one-column
I have a dataframe which contains a column labeled Pool. I want to create a new column labeled "Volume" which is 310 divided by the occurrence of each value of Pool. For example there are three occurrences of Pool 1 so 310/3 = 103.3. There are 4 occurrences of Pool 2 so 310/4 = 77.5 and so forth. Any ideas on how to do this with Pandas? Pool Volume 1 103.3 1 103.3 1 103.3 2 77.5 2 77.5 2 77.5 2 77.5 3 62 3 62 3 62 3 62 3 62 I tried: df["Volume"] = 310 / pool.get_group(1).value_counts() Thanks for your help.
Use Series.map with Series.value_counts for counts per groups: df["Volume"] = 310 / df['Pool'].map(df['Pool'].value_counts()) #alternative with division from right side - rdiv #df["Volume"] = df['Pool'].map(df['Pool'].value_counts()).rdiv(310) Or GroupBy.transform: df["Volume"] = 310 / df.groupby('Pool')['Pool'].transform('size') print (df) Pool Volume 0 1 103.333333 1 1 103.333333 2 1 103.333333 3 2 77.500000 4 2 77.500000 5 2 77.500000 6 2 77.500000 7 3 62.000000 8 3 62.000000 9 3 62.000000 10 3 62.000000 11 3 62.000000
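As a quick sanity check on either approach (reusing the example data above), the per-pool volumes should add back up to 310 for every pool:
import numpy as np

check = df.groupby('Pool')['Volume'].sum()
assert np.allclose(check, 310)   # each pool's rows sum back to ~310, up to float rounding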
2
2
78,405,417
2024-4-29
https://stackoverflow.com/questions/78405417/changing-the-color-scheme-in-a-matplotlib-animated-gif
I'm trying to create an animated gif that takes a python graph in something like the plt.style.use('dark_background') (i.e black background with white text) and fades it to something like the default style (ie. white background with black text). The above is the result of running the below. You'll see that it doesn't quite work because the area around the plot area and legend area stubbornly remains white. I've tried numerous variations on the below, but can't figure it out. I'm also trying to get it not to loop. The repeat=False doesn't seem to do it. But that's a secondary issue. The main one is: how do I get the background of the figure to change its color during an animation? import matplotlib.pyplot as plt import matplotlib.animation as animation import numpy as np import matplotlib as mpl # Data for the plot x1 = np.linspace(1,100) y1 = np.sin(x1) ##################################### # STEP 1: FADE TO WHITE - FAILED ##################################### # Create the figure and axis fig, ax = plt.subplots() # Plot the data line1, = ax.plot(x1, y1, label='sin') # Set the title and axis labels ax.set_title('Title') ax.set_xlabel('x axis') ax.set_ylabel('y axis') # Add a legend #ax.legend(loc='right') legend = ax.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.) plt.subplots_adjust(right=0.79) # Function to update the plot for the animation def update1(frame): if frame>=noFrames: frame=noFrames-1 startWhite = tuple([(noFrames-frame-1)/(noFrames-1)]*3) startBlack = tuple([frame/(noFrames-1)]*3) ax.cla() # Plot the data line1, = ax.plot(x1, y1, label='sin') # Set the title and axis labels ax.set_title('Title',color=startWhite) ax.set_xlabel('X Axis',color=startWhite) ax.set_ylabel('Y Axis',color=startWhite) # Add a legend #ax.legend(loc='right') legend = ax.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.) plt.subplots_adjust(right=0.79) fig.patch.set_color(None) fig.patch.set_facecolor(startBlack) ax.patch.set_facecolor(startBlack) fig.patch.set_edgecolor(startBlack) fig.patch.set_color(startBlack) plt.rcParams['axes.facecolor'] = startBlack plt.rcParams['axes.edgecolor'] = startBlack plt.rcParams['axes.labelcolor'] = startWhite plt.rcParams['axes.titlecolor'] = startWhite plt.rcParams['legend.facecolor'] = startBlack plt.rcParams['legend.edgecolor'] = startWhite plt.rcParams['legend.labelcolor'] = startWhite plt.rcParams['figure.facecolor'] = startBlack plt.rcParams['figure.edgecolor'] = startBlack plt.rcParams['xtick.color'] = startWhite plt.rcParams['ytick.color'] = startWhite plt.rcParams['text.color'] = startWhite fig.canvas.draw_idle() return fig, noFrames = 50 # Create the animation ani = animation.FuncAnimation(fig, update1, frames=range(noFrames*5), blit=False, repeat=False) ani.event_source.stop() #stop the looping # Save the animation as a GIF ani.save('TEST01_fade.gif', writer='pillow', fps=10) plt.close()
I couldn't figure out how to make the figure facecolor transition, but a workaround is to use a single subfigure, and adjust the facecolor of that. I also set the color properties directly on the artists rather than using rcParams as I think there is sometimes inconsistency about exactly when rcParams get applied (e.g. when the artist is created or when the artist is drawn). Rather than clear the axes and re-add everything, I just keep the original artists where possible. Legend does not seem to have the required set methods so I re-create that, but Legend always replaces the existing one anyway (as do the title and x-, y-labels). Tested with Matplotlib v3.8.2. import matplotlib.pyplot as plt import matplotlib.animation as animation import numpy as np # Data for the plot x1 = np.linspace(1,100) y1 = np.sin(x1) ####################### # STEP 1: FADE TO WHITE ####################### # Create the figure, subfigure and axis fig = plt.figure() sfig = fig.subfigures() ax = sfig.add_subplot() # Plot the data line1, = ax.plot(x1, y1, label='sin') # Set the title and axis labels ax.set_title('Title') ax.set_xlabel('x axis') ax.set_ylabel('y axis') # Add a legend legend = ax.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.) plt.subplots_adjust(right=0.79) # Function to update the plot for the animation def update1(frame): if frame>=noFrames: frame=noFrames-1 startWhite = tuple([(noFrames-frame-1)/(noFrames-1)]*3) startBlack = tuple([frame/(noFrames-1)]*3) # Update background colours sfig.set_facecolor(startBlack) ax.set_facecolor(startBlack) # Set the title and axis labels ax.set_title('Title',color=startWhite) ax.set_xlabel('X Axis',color=startWhite) ax.set_ylabel('Y Axis',color=startWhite) ax.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0., facecolor=startBlack, edgecolor=startWhite, labelcolor=startWhite) # Update tick/label colours ax.tick_params(color=startWhite, labelcolor=startWhite) # Update spine colours ax.spines[:].set_color(startWhite) noFrames = 50 # Create the animation ani = animation.FuncAnimation(fig, update1, frames=range(noFrames*5), blit=False, repeat=False) # Save the animation as a GIF ani.save('TEST01_fade.gif', writer='pillow', fps=10) plt.close()
2
2
78,407,270
2024-4-30
https://stackoverflow.com/questions/78407270/rendering-depth-from-mesh-and-camera-parameters
I want to render depth from mesh file, and camera parameters, to do so I tried RaycastingScene from open3d with this mesh file as follows: #!/usr/bin/env python3 import numpy as np import open3d as o3d import matplotlib.pyplot as plt def render_depth( intrins:o3d.core.Tensor, width:int, height:int, extrins:o3d.core.Tensor, tmesh:o3d.t.geometry.TriangleMesh )->np.ndarray: """ Render depth from mesh file Parameters ---------- intrins : o3d.core.Tensor Camera Intrinsics matrix K: 3x3 width : int image width height : int image height extrins : o3d.core.Tensor camera extrinsics matrix 4x4 tmesh : o3d.t.geometry.TriangleMesh TriangleMesh Returns ------- np.ndarray Rendred depth image """ scene = o3d.t.geometry.RaycastingScene() scene.add_triangles(tmesh) rays = scene.create_rays_pinhole( intrinsic_matrix=intrins, extrinsic_matrix=extrins, width_px=width, height_px=height ) ans = scene.cast_rays(rays) t_hit = ans["t_hit"].numpy() / 1000.0 return t_hit if __name__=="__main__": import os mesh_path = f"{os.getenv('HOME')}/bbq_sauce.ply" mesh = o3d.t.io.read_triangle_mesh(mesh_path) mesh.compute_vertex_normals() # camera_info[k].reshape(3, 3) intrins_ = np.array([ [606.9275512695312, 0.0, 321.9704895019531], [0.0, 606.3505859375, 243.5377197265625], [0.0, 0.0, 1.0] ]) width_ = 640 # camera_info.width height_ = 480 # camera_info.height # root2cam 4x4 extrens_ = np.eye(4) # intrins_t = o3d.core.Tensor(intrins_) extrins_t = o3d.core.Tensor(extrens_) rendered_depth = render_depth( intrins=intrins_t, width=width_, height=height_, extrins = extrins_t, tmesh=mesh ) plt.imshow(rendered_depth) plt.show() but I'm getting a depth image which doesn't seem to be correct! Can you please tell me how can I fix that? thanks.
The raycasting scene is working correctly. However, your extrinsic matrix places the camera inside the mesh, so you are seeing how the mesh looks from the inside. I roughly measured the bbq_sauce mesh in MeshLab and it is about 53 units wide. You can either move the camera away from the mesh: extrens_[2, 3] = 200 or, instead of moving the object, move the camera pose with the other create_rays_pinhole signature: rays = o3d.t.geometry.RaycastingScene.create_rays_pinhole( fov_deg=90, # Not computed from intrinsic parameters. center=[0, 0, 0], # Look towards eye=[100, 100, 0], # Look from up=[0, 1, 0], # This helps in orienting the camera so object is not inverted. width_px=640, height_px=480, )
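For the first option, a minimal sketch of how the change slots into the question's script (variable names are the ones from the question; 200 is simply a distance comfortably larger than the ~53-unit-wide mesh, in the same units as the mesh):
extrens_ = np.eye(4)
extrens_[2, 3] = 200.0   # world origin sits 200 units in front of the camera along +Z

rendered_depth = render_depth(
    intrins=o3d.core.Tensor(intrins_),
    width=width_,
    height=height_,
    extrins=o3d.core.Tensor(extrens_),
    tmesh=mesh,
)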
2
1
78,407,769
2024-4-30
https://stackoverflow.com/questions/78407769/counting-values-of-columns-in-all-previous-rows-excluding-current-row
I'm currently learning Pandas and stuck with a problem. I have the following data: labels = [ 'date', 'name', 'opponent', 'gf', 'ga'] data = [ [ '2023-08-5', 'Liverpool', 'Man Utd', 5, 0 ], [ '2023-08-10', 'Liverpool', 'Everton', 0, 0 ], [ '2023-08-14', 'Liverpool', 'Tottenham', 3, 2 ], [ '2023-08-18', 'Liverpool', 'Arsenal', 4, 4 ], [ '2023-08-27', 'Liverpool', 'Man City', 0, 0 ], ] df = pd.DataFrame(data, columns=labels) The games / rows are sorted by date. for each row / game I would like to count the column values of 'goals_for' and 'goals_against' in the previous rows / games (excluding the current row or any after the date). So I would like the data to be like this: labels = [ 'date', 'name', 'opponent', 'gf', 'ga', 'total_gf', 'total_ga' ] data = [ [ '2023-08-5', 'Liverpool', 'Man Utd', 5, 0, 0, 0 ], [ '2023-08-10', 'Liverpool', 'Everton', 0, 0, 5, 0 ], [ '2023-08-14', 'Liverpool', 'Tottenham', 3, 2, 5, 0 ], [ '2023-08-18', 'Liverpool', 'Arsenal', 4, 4, 8, 2 ], [ '2023-08-27', 'Liverpool', 'Man City', 0, 0, 12, 6 ], ] I tried expanding() but it seems to include the current row. rolling has a parameter closed='left' but others don't have it. Any help or tips or links to similar solutions would be appreciated.
You can shift with fill_value=0, then cumsum: df['total_gf'] = df['gf'].shift(fill_value=0).cumsum() df['total_ga'] = df['ga'].shift(fill_value=0).cumsum() Alternatively, processing all columns at once: df[['total_gf', 'total_ga']] = df[['gf', 'ga']].shift(fill_value=0).cumsum() Or, create a new DataFrame: out = df.join(df[['gf', 'ga']].shift(fill_value=0).cumsum().add_prefix('total_')) Output: date name opponent gf ga total_gf total_ga 0 2023-08-5 Liverpool Man Utd 5 0 0 0 1 2023-08-10 Liverpool Everton 0 0 5 0 2 2023-08-14 Liverpool Tottenham 3 2 5 0 3 2023-08-18 Liverpool Arsenal 4 4 8 2 4 2023-08-27 Liverpool Man City 0 0 12 6
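If the frame later holds more than one team in the name column, the same shift-then-cumsum idea works per group. A sketch, assuming rows are already sorted by date within each team:
df[['total_gf', 'total_ga']] = (
    df.groupby('name')[['gf', 'ga']]
      .transform(lambda s: s.shift(fill_value=0).cumsum())   # running totals per team, excluding current row
)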
4
2
78,407,883
2024-4-30
https://stackoverflow.com/questions/78407883/add-column-with-a-count-of-other-columns-that-have-any-value-in
My data frame is similar to below: Name Col1 Col2 Col3 Col4 Col5 Col6 A Y Y Y B Y Y Y Y C Y Y Y D Y I want to add a column that counts the other columns containing a value similar to: Name Col1 Col2 Col3 Col4 Col5 Col6 Score A Y Y Y 3 B Y Y Y Y 4 C Y Y Y 3 D Y 1 I have tried the following but with no success: df['Score'] = df.count(df[['Col1','Col2','Col3','Col4','Col5','Col6']], axis='columns')
If there are missing values use DataFrame.count with subset of columns: df['Score'] = df[['Col1','Col2','Col3','Col4','Col5','Col6']].count(axis=1) print (df) Name Col1 Col2 Col3 Col4 Col5 Col6 Score 0 A Y Y NaN Y NaN NaN 3 1 B Y NaN Y Y NaN Y 4 2 C NaN Y Y NaN Y NaN 3 3 D Y NaN NaN NaN NaN NaN 1
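If the list of columns keeps growing, a hedged variant that avoids hard-coding the names is to select them by substring. This assumes the empty cells really are NaN, as in the frame above, and that only the ColN columns match the pattern:
score_cols = df.filter(like='Col')            # every column whose name contains 'Col'
df['Score'] = score_cols.notna().sum(axis=1)  # count non-missing values per row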
2
4
78,406,587
2024-4-30
https://stackoverflow.com/questions/78406587/geodataframe-conversion-to-polars-with-from-pandas-fails-with-arrowtypeerror-di
I try to convert a GeoDataFrame to a polars DataFrame with from_pandas. I receive an ArrowTypeError: Did not pass numpy.dtype object exception. So I can continue working against the polars API. Expected outcome would be a polars DataFrame with the geometry column being typed as pl.Object. I'm aware of https://github.com/geopolars/geopolars (alpha) and https://github.com/pola-rs/polars/issues/1830 and would be OK with the shapely objects just being represented as pl.Object for now. Here is a minimal example to demonstrate the problem: ## Minimal example displaying the issue import geopandas as gpd print("geopandas version: ", gpd.__version__) import geodatasets print("geodatasets version: ", geodatasets.__version__) import polars as pl print("polars version: ", pl.__version__) gdf = gpd.GeoDataFrame.from_file(geodatasets.get_path("nybb")) print("\nOriginal GeoDataFrame") print(gdf.dtypes) print(gdf.head()) print("\nGeoDataFrame to Polars without geometry") print(pl.from_pandas(gdf.drop("geometry", axis=1)).head()) try: print("\nGeoDataFrame to Polars naiive") print(pl.from_pandas(gdf).head()) except Exception as e: print(e) try: print("\nGeoDataFrame to Polars with schema override") print(pl.from_pandas(gdf, schema_overrides={"geometry": pl.Object}).head()) except Exception as e: print(e) # again to print stack trace pl.from_pandas(gdf).head() Output geopandas version: 0.14.4 geodatasets version: 2023.12.0 polars version: 0.20.23 Original GeoDataFrame BoroCode int64 BoroName object Shape_Leng float64 Shape_Area float64 geometry geometry dtype: object BoroCode BoroName Shape_Leng Shape_Area \ 0 5 Staten Island 330470.010332 1.623820e+09 1 4 Queens 896344.047763 3.045213e+09 2 3 Brooklyn 741080.523166 1.937479e+09 3 1 Manhattan 359299.096471 6.364715e+08 4 2 Bronx 464392.991824 1.186925e+09 geometry 0 MULTIPOLYGON (((970217.022 145643.332, 970227.... 1 MULTIPOLYGON (((1029606.077 156073.814, 102957... 2 MULTIPOLYGON (((1021176.479 151374.797, 102100... 3 MULTIPOLYGON (((981219.056 188655.316, 980940.... 4 MULTIPOLYGON (((1012821.806 229228.265, 101278... 
GeoDataFrame to Polars without geometry shape: (5, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ BoroCode ┆ BoroName ┆ Shape_Leng ┆ Shape_Area β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ f64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•ͺ═══════════════β•ͺ═══════════════β•ͺ════════════║ β”‚ 5 ┆ Staten Island ┆ 330470.010332 ┆ 1.6238e9 β”‚ β”‚ 4 ┆ Queens ┆ 896344.047763 ┆ 3.0452e9 β”‚ β”‚ 3 ┆ Brooklyn ┆ 741080.523166 ┆ 1.9375e9 β”‚ β”‚ 1 ┆ Manhattan ┆ 359299.096471 ┆ 6.3647e8 β”‚ β”‚ 2 ┆ Bronx ┆ 464392.991824 ┆ 1.1869e9 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ GeoDataFrame to Polars naiive Did not pass numpy.dtype object GeoDataFrame to Polars with schema override Did not pass numpy.dtype object Stack trace (is the same with and without schema_overrides) --------------------------------------------------------------------------- ArrowTypeError Traceback (most recent call last) Cell In[59], line 27 24 print(e) 26 # again to print stack trace ---> 27 pl.from_pandas(gdf).head() File c:\Users\...\polars\convert.py:571, in from_pandas(data, schema_overrides, rechunk, nan_to_null, include_index) 568 return wrap_s(pandas_to_pyseries("", data, nan_to_null=nan_to_null)) 569 elif isinstance(data, pd.DataFrame): 570 return wrap_df( --> 571 pandas_to_pydf( 572 data, 573 schema_overrides=schema_overrides, 574 rechunk=rechunk, 575 nan_to_null=nan_to_null, 576 include_index=include_index, 577 ) 578 ) 579 else: 580 msg = f"expected pandas DataFrame or Series, got {type(data).__name__!r}" File c:\Users\...\polars\_utils\construction\dataframe.py:1032, in pandas_to_pydf(data, schema, schema_overrides, strict, rechunk, nan_to_null, include_index) 1025 arrow_dict[str(idxcol)] = plc.pandas_series_to_arrow( 1026 data.index.get_level_values(idxcol), 1027 nan_to_null=nan_to_null, 1028 length=length, 1029 ) 1031 for col in data.columns: -> 1032 arrow_dict[str(col)] = plc.pandas_series_to_arrow( 1033 data[col], nan_to_null=nan_to_null, length=length 1034 ) 1036 arrow_table = pa.table(arrow_dict) 1037 return arrow_to_pydf( 1038 arrow_table, 1039 schema=schema, (...) 1042 rechunk=rechunk, 1043 ) File c:\Users\...\polars\_utils\construction\other.py:97, in pandas_series_to_arrow(values, length, nan_to_null) 95 return pa.array(values, from_pandas=nan_to_null) 96 elif dtype: ---> 97 return pa.array(values, from_pandas=nan_to_null) 98 else: 99 # Pandas Series is actually a Pandas DataFrame when the original DataFrame 100 # contains duplicated columns and a duplicated column is requested with df["a"]. 101 msg = "duplicate column names found: " File c:\Users\...\pyarrow\array.pxi:323, in pyarrow.lib.array() File c:\Users\...\pyarrow\array.pxi:79, in pyarrow.lib._ndarray_to_array() File c:\Users\...\pyarrow\array.pxi:67, in pyarrow.lib._ndarray_to_type() File c:\Users\...\pyarrow\error.pxi:123, in pyarrow.lib.check_status() ArrowTypeError: Did not pass numpy.dtype object
You could drop the geometry before building the Polars DataFrame with from_pandas, then assign it later as a new column: out = ( pl.from_pandas(gdf.drop(columns=["geometry"])) .with_columns(pl.Series("geometry", gdf["geometry"].tolist())) ) Output: β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ BoroCode ┆ BoroName ┆ Shape_Leng ┆ Shape_Area ┆ geometry β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ f64 ┆ f64 ┆ object β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•ͺ═══════════════β•ͺ═══════════════β•ͺ════════════β•ͺ═══════════════════════════════════║ β”‚ 5 ┆ Staten Island ┆ 330470.010332 ┆ 1.6238e9 ┆ MULTIPOLYGON (((970217.022399902… β”‚ β”‚ 4 ┆ Queens ┆ 896344.047763 ┆ 3.0452e9 ┆ MULTIPOLYGON (((1029606.07659912… β”‚ β”‚ 3 ┆ Brooklyn ┆ 741080.523166 ┆ 1.9375e9 ┆ MULTIPOLYGON (((1021176.47900390… β”‚ β”‚ 1 ┆ Manhattan ┆ 359299.096471 ┆ 6.3647e8 ┆ MULTIPOLYGON (((981219.055786132… β”‚ β”‚ 2 ┆ Bronx ┆ 464392.991824 ┆ 1.1869e9 ┆ MULTIPOLYGON (((1012821.80578613… β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ The geometries are preserved: for instance, out[0, 4] still returns the original shapely MultiPolygon object.
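A possible variation, sketched here under the assumption that a plain string column is preferable to a Python-object column (for example if the frame will be serialized later): encode the geometries as WKT before the conversion.
import pandas as pd

pdf = pd.DataFrame(gdf.drop(columns=["geometry"]))
pdf["geometry_wkt"] = gdf.geometry.to_wkt()   # shapely geometries -> WKT strings
out = pl.from_pandas(pdf)

# and back to shapely objects when needed (shapely >= 2.0):
# from shapely import from_wkt
# geoms = [from_wkt(w) for w in out["geometry_wkt"]]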
2
1
78,406,864
2024-4-30
https://stackoverflow.com/questions/78406864/how-to-do-groupby-on-multi-index-dataframe-based-on-condition
I have a multi index dataframe, and I want to combine rows based on certain conditions and I want to combine rows per index. import pandas as pd # data data = { 'date': ['01/01/17', '02/01/17', '03/01/17', '01/01/17', '02/01/17', '03/01/17'], 'language': ['python', 'python', 'python', 'r', 'r', 'r'], 'ex_complete': [6, 5, 10, 8, 8, 8] } # Convert to DataFrame df = pd.DataFrame(data) # Convert DataFrame to JSON json_data = df.to_json(orient='records') # Convert JSON data back to DataFrame df_from_json = pd.read_json(json_data, orient='records') # Set date and language as multi-index df_from_json.set_index(['date', 'language'], inplace=True) df_from_json.sort_index(inplace= True) df_from_json 1st Problem: I want to combine the dates '01/01/17', '02/01/17' and rename as '1_2', this should give me 4 rows: 2 rows for '1_2' - (Python and R) and 2 rows for '03/01/17' (Python and R) 2nd Problem: I want to combine Python and R rows and rename as Python_R, this should give 3 rows for 3 dates. Any guidance or pointer will be hugely appreciated.
IIUC use DataFrame.rename with aggregate, e.g. sum: out = (df_from_json.rename({pd.Timestamp('01/01/17'):'1_2', pd.Timestamp('02/01/17'):'1_2'}, level=0) .groupby(level=[0,1]).sum()) print (out) ex_complete date language 2017-03-01 00:00:00 python 10 r 8 1_2 python 11 r 16 out = (df_from_json.rename({'python':'Python_R', 'r':'Python_R'}, level=1) .groupby(level=[0,1]).sum()) print (out) ex_complete date language 2017-01-01 Python_R 14 2017-02-01 Python_R 13 2017-03-01 Python_R 18
2
1
78,405,685
2024-4-30
https://stackoverflow.com/questions/78405685/find-the-next-value-the-actual-value-plus-50-using-polars
I have the following dataframe: df = pl.DataFrame({ "Column A": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], "Column B": [2, 3, 1, 4, 1, 7, 3, 2, 12, 0] }) I want to create a new column C that holds the distance, in rows, between the B value of the current row and the next value in column B that is greater than or equal to B + 50%. The end result should look like this: df = pl.DataFrame({ "Column A": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], "Column B": [2, 3, 1, 4, 1, 7, 3, 2, 12, 0], "Column C": [1, 4, 1, 2, 1, 3, 2, 1, None, None] }) How can I efficiently achieve this using Polars, especially since I'm working with a large DataFrame?
Ok, so first I should say - this one looks like it requires join on inequality on multiple columns and from what I've found pure polars is not great with it. It's probably possible to do it with join_asof but I couldn't make it pretty. I'd probably use duckdb integration with polars to achieve the results: import duckdb duckdb.sql(""" select d."Column A", d."Column B", ( select tt."Column A" from df as tt where tt."Column A" > d."Column A" and tt."Column B" >= d."Column B" * 1.5 order by tt."Column A" asc limit 1 ) - d."Column A" as "Column C" from df as d """).pl() β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ Column A ┆ Column B ┆ Column C β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•ͺ══════════β•ͺ══════════║ β”‚ 1 ┆ 2 ┆ 1 β”‚ β”‚ 2 ┆ 3 ┆ 4 β”‚ β”‚ 3 ┆ 1 ┆ 1 β”‚ β”‚ 4 ┆ 4 ┆ 2 β”‚ β”‚ 5 ┆ 1 ┆ 1 β”‚ β”‚ 6 ┆ 7 ┆ 3 β”‚ β”‚ 7 ┆ 3 ┆ 2 β”‚ β”‚ 8 ┆ 2 ┆ 1 β”‚ β”‚ 9 ┆ 12 ┆ null β”‚ β”‚ 10 ┆ 0 ┆ null β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
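If pulling in DuckDB isn't an option, here is a NumPy sketch of the same lookup. Note that it builds an n x n boolean matrix, so it only suits frames that fit that quadratic memory budget:
import numpy as np

b = df["Column B"].to_numpy()
n = b.size
later = np.arange(n)[None, :] > np.arange(n)[:, None]        # only consider rows further down
hits = (b[None, :] >= 1.5 * b[:, None]) & later               # B[j] >= B[i] + 50%
first = np.where(hits.any(axis=1), hits.argmax(axis=1), -1)   # index of first qualifying row, -1 if none
dist = [int(j - i) if j >= 0 else None for i, j in enumerate(first)]

df = df.with_columns(pl.Series("Column C", dist, dtype=pl.Int64))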
5
2