column           dtype          min                  max
question_id      int64          59.5M                79.6M
creation_date    stringdate     2020-01-01 00:00:00  2025-05-14 00:00:00
link             stringlengths  60                   163
question         stringlengths  53                   28.9k
accepted_answer  stringlengths  26                   29.3k
question_vote    int64          1                    410
answer_vote      int64          -9                   482
72,400,867
2022-5-27
https://stackoverflow.com/questions/72400867/installing-python-on-ish
Not long ago my computer broke and I am stuck on an iPad. I installed iSH from the App Store. Now I want to download Python and make sure pip works. I have tried apk add python, which led to the pip issue, but being able to pip install is important for me. I have also found other ways using yum or apt(-get), but I do not know how to download either of them.
According to the information you provided, iSH uses a virtual environment with Alpine Linux x86 under the hood (this is a slight simplification, so it is not 100% correct; you can see the details here). So if you want to install pip, you have to search for how to install pip in Alpine Linux. You will find many answers like this one: apk add --update-cache python3 py3-pip This applies to any other package you try to install, not just pip.
4
6
72,414,481
2022-5-28
https://stackoverflow.com/questions/72414481/error-in-anyjson-setup-command-use-2to3-is-invalid
#25 3.990 × python setup.py egg_info did not run successfully. #25 3.990 │ exit code: 1 #25 3.990 ╰─> [1 lines of output] #25 3.990 error in anyjson setup command: use_2to3 is invalid. #25 3.990 [end of output] This is a common error which the most common solution to is to downgrade setuptools to below version 58. This was not working for me. I tried installing python3-anyjson but this didn't work either. I'm at a complete loss.. any advice or help is much appreciated. If it matters: this application is legacy spaghetti and I am trying to polish it up for a migration. There's no documentation of any kind. The requirements.txt is as follows: cachetools>=2.0.0,<4 certifi==2018.10.15 Flask-Caching Flask-Compress Flask==2.0.3 cffi==1.2.1 diskcache earthengine-api==0.1.239 gevent==21.12.0 google-auth>=1.17.2 google-api-python-client==1.12.1 gunicorn==20.1.0 httplib2.system-ca-certs-locater httplib2==0.9.2 oauth2client==2.0.1 pyasn1-modules==0.2.1 redis requests==2.18.0 werkzeug==2.1.2 six==1.13.0 pyasn1==0.4.1 Jinja2==3.1.1 itsdangerous==2.0.1 Flask-Celery-Helper Flask-JWT==0.2.0 Flask-Limiter Flask-Mail Flask-Migrate Flask-Restless==0.16.0 Flask-SQLAlchemy Flask-Script Flask-Testing Flask==2.0.3 Pillow<=6.2.2 Shapely beautifulsoup4 boto celery==3.1.23 geopy gevent==21.12.0 numpy<1.17 oauth2client==2.0.1 passlib psycopg2 pyproj<2 python-dateutil==2.4.1 scipy
Downgrading setuptools worked for me: pip install "setuptools<58.0.0" and then pip install django-celery
48
110
72,398,203
2022-5-26
https://stackoverflow.com/questions/72398203/concatenate-2-arrays-in-pyproject-toml
I'm giving the pyproject.toml file a shot, and I'm stuck on this simple task. Consider the following optional dependencies: [project.optional-dependencies] style = ["black", "codespell", "isort", "flake8"] test = ["pytest", "pytest-cov"] all = ["black", "codespell", "isort", "flake8", "pytest", "pytest-cov"] Is there a way to avoid copy/pasting all the optional dependencies into the all key? Is there a way to do all = style + test at least?
There is no such feature directly in the toml markup. However, there is a tricky way to do this in Python packaging by depending on yourself: [project.optional-dependencies] style = ["black", "codespell", "isort", "flake8"] test = ["pytest", "pytest-cov"] all = ["myproject[style]", "myproject[test]"] Source: Circular dependency is a feature that Python packaging is explicitly designed to allow, so it works and should continue to work.
5
9
72,409,563
2022-5-27
https://stackoverflow.com/questions/72409563/unsupported-hash-type-ripemd160-with-hashlib-in-python
After a thorough search, I have not found a complete explanation and solution to this very common problem anywhere on the web. All scripts that need to hash with hashlib give me an error: Python 3.10 import hashlib h = hashlib.new('ripemd160') returns: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python3.10/hashlib.py", line 166, in __hash_new return __get_builtin_constructor(name)(data) File "/usr/lib/python3.10/hashlib.py", line 123, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type ripemd160 I already tried to check whether that hash exists in the library and whether I have it: print(hashlib.algorithms_available): {'md5', 'sm3', 'sha3_512', 'sha384', 'sha256', 'sha1', 'shake_128', 'sha224', 'sha512_224', 'sha512_256', 'blake2b', 'ripemd160', 'md5-sha1', 'sha512', 'sha3_256', 'shake_256', 'sha3_384', 'whirlpool', 'md4', 'blake2s', 'sha3_224'} I am having this problem on a Linux VPS, but on my PC I use Windows and I don't have this problem. I sincerely appreciate any help or suggestion.
Hashlib uses OpenSSL for ripemd160 and apparently OpenSSL disabled some older crypto algos around version 3.0 in November 2021. All the functions are still there but require manual enabling. See issue 16994 of OpenSSL github project for details. To quickly enable it, find the directory that holds your OpenSSL config file or a symlink to it, by running the below command: openssl version -d You can now go to the directory and edit the config file (it may be necessary to use sudo): nano openssl.cnf Make sure that the config file contains following lines: openssl_conf = openssl_init [openssl_init] providers = provider_sect [provider_sect] default = default_sect legacy = legacy_sect [default_sect] activate = 1 [legacy_sect] activate = 1 Tested on: OpenSSL 3.0.2, Python 3.10.4, Linux Ubuntu 22.04 LTS aarch64, I have no access to other platforms at the moment.
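If editing the system-wide OpenSSL configuration is not an option (for example on a shared host), a hedged alternative sketch — not part of the original answer — is to compute RIPEMD-160 without OpenSSL at all, using the pycryptodome package (pip install pycryptodome):

from Crypto.Hash import RIPEMD160

h = RIPEMD160.new(data=b"hello")
print(h.hexdigest())  # 160-bit digest as a hex string

This bypasses hashlib entirely, so it works regardless of how OpenSSL was built or configured.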
14
29
72,454,393
2022-5-31
https://stackoverflow.com/questions/72454393/does-python-oracledb-thin-mode-have-any-performance-implications-compared-to-the
cx_Oracle was renamed to python-oracledb in the May 2022 release. It now comes with two modes, thin and thick. Thick mode uses the Oracle client libraries to connect to Oracle, while thin mode can connect directly. cx_Oracle previously always required the Oracle client libraries. Are there any performance implications to using thin mode instead of thick mode?
Yes, there are, but it can vary depending on your workload. In our own tests we saw basic fetching and inserting perform between 10% and 30% faster in thin mode than in thick mode. The main reason for the difference is the elimination of a copy/conversion step that is required in thick mode. Some more discussion can be found here: https://github.com/oracle/python-oracledb/discussions/5.
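A rough way to measure the difference on your own workload (a sketch, not from the original answer; the user, password, and dsn values are placeholders to replace): time the same fetch once per mode. Because a process cannot switch from thin to thick mode after a connection has been created, run the script twice — once as-is (thin) and once with THICK=1 in the environment (thick).

import os
import time
import oracledb

if os.environ.get("THICK") == "1":
    oracledb.init_oracle_client()  # loads the Oracle client libraries -> thick mode

conn = oracledb.connect(user="scott", password="tiger", dsn="dbhost/orclpdb1")

start = time.perf_counter()
cur = conn.cursor()
cur.execute("select * from all_objects fetch first 50000 rows only")
rows = cur.fetchall()
print(f"fetched {len(rows)} rows in {time.perf_counter() - start:.3f}s")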
6
9
72,373,093
2022-5-25
https://stackoverflow.com/questions/72373093/how-to-define-python-requires-in-pyproject-toml-using-setuptools
Setuptools allows you to specify the minimum python version as such: from setuptools import setup [...] setup(name="my_package_name", python_requires='>3.5.2', [...] However, how can you do this with the pyproject.toml? The following two things did NOT work: [project] ... # ERROR: invalid key python_requires = ">=3" # ERROR: no matching distribution found dependencies = ["python>=3"]
According to PEP 621, the equivalent field in the [project] table is requires-python. More information about the list of valid configuration fields can be found in: https://packaging.python.org/en/latest/specifications/declaring-project-metadata/. The equivalent pyproject.toml of your example would be: [project] name = "my_package_name" requires-python = ">3.5.2" ...
18
22
72,450,373
2022-5-31
https://stackoverflow.com/questions/72450373/all-permutations-of-numbers-1-n-using-list-comprehension-without-itertools
I am currently using Python 3.7.7, and I posed a coding challenge for myself. I would like to list all permutations of integers from 1 to N using a one-line code (perhaps a list comprehension). I cannot use itertools (or other packages which solve this with one function). For N <= 9, I found "cheaty" method: N = 3 print([list(str(i)) for i in range(10**N) if all([str(i).count(str(j)) == 1 for j in range(1,N+1)])]) Example: Out: [['1', '2', '3'], ['1', '3', '2'], ['2', '1', '3'], ['2', '3', '1'], ['3', '1', '2'], ['3', '2', '1']] In the case of N = 3, this goes through all integers from 0 to 999 in order, and selects the ones that have exactly one 1, exactly one 2, and exactly one 3. (These are 123, 132, 213, 231, 312, 321; and from here, it's simple enough to convert them to a list.) However this obviously fails for N >= 10. I thought about converting the numbers to a higher base first, but that turned out to be even more difficult when restricting myself to only using list comprehension. Can anyone think of a way to do this for N >= 10?
A not-so-simple functional one-liner without any "outside" variable assignment except N. N = 3 (lambda n: (lambda f, n: f(f, n))(lambda f, n: [p[:i]+[n]+p[i:] for p in f(f, n-1) for i in range(len(p)+1)] if n > 1 else [[1]], n))(N) Output [[3, 2, 1], [2, 3, 1], [2, 1, 3], [3, 1, 2], [1, 3, 2], [1, 2, 3]]
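For readers who find the lambda hard to follow, here is a multi-line equivalent of the same idea (an illustration, not part of the original answer): permutations of 1..N are built by inserting N into every position of each permutation of 1..N-1.

def perms(n):
    # base case: the only permutation of [1] is [1]
    if n <= 1:
        return [[1]]
    # insert n into every possible position of every shorter permutation
    return [p[:i] + [n] + p[i:] for p in perms(n - 1) for i in range(len(p) + 1)]

print(perms(3))  # [[3, 2, 1], [2, 3, 1], [2, 1, 3], [3, 1, 2], [1, 3, 2], [1, 2, 3]]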
3
4
72,454,228
2022-5-31
https://stackoverflow.com/questions/72454228/error-could-not-load-file-or-assembly-microsoft-azure-webjobs-script-abstracti
I'm trying to run an Azure Function locally following the Microsoft guide: https://learn.microsoft.com/nl-nl/azure/azure-functions/create-first-function-cli-python?tabs=azure-cli%2Cbash%2Cbrowser#create-venv Whatever I try I get the same error over and over again when i try to start the function using "func start": Found Python version 3.8.0 (py). Azure Functions Core Tools Core Tools Version: 4.0.4544 Commit hash: N/A (64-bit) Function Runtime Version: 4.3.2.18186 Could not load file or assembly 'Microsoft.Azure.WebJobs.Script.Abstractions, Version=1.0.0.0, Culture=neutral, PublicKeyToken=3c5b9424214e8f8c'. The system cannot find the file specified. My files are set up as follows Host.json { "version": "2.0", "logging": { "applicationInsights": { "samplingSettings": { "isEnabled": true, "excludedTypes": "Request" } } }, "extensionBundle": { "id": "Microsoft.Azure.Functions.ExtensionBundle", "version": "[2.*, 3.0.0)" } } local.setting.json { "IsEncrypted": false, "Values": { "FUNCTIONS_WORKER_RUNTIME": "python", "AzureWebJobsStorage": "UseDevelopmentStorage=true" } } requirements.txt azure-functions function.json { "scriptFile": "__init__.py", "bindings": [ { "authLevel": "Anonymous", "type": "httpTrigger", "direction": "in", "name": "req", "methods": [ "get", "post" ] }, { "type": "http", "direction": "out", "name": "$return" } ] } init.py import logging import azure.functions as func def main(req: func.HttpRequest) -> func.HttpResponse: logging.info('Python HTTP trigger function processed a request.') name = req.params.get('name') if not name: try: req_body = req.get_json() except ValueError: pass else: name = req_body.get('name') if name: return func.HttpResponse(f"Hello, {name}. This HTTP triggered function executed successfully.") else: return func.HttpResponse( "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.", status_code=200 ) Any help would be appreciated!
I came across this too. Not sure what caused it, but what fixed it for me was to download, uninstall, and reinstall the Azure Functions Core Tools (using 'repair' just returned a generic "Failed due to error", and I had to kill PowerToys). Not sure if it's necessary, but I also cleared my NuGet cache: dotnet nuget locals all -c dotnet restore # in your sln folder Then I reinstalled my VS Code Azure plugins.
5
6
72,452,403
2022-5-31
https://stackoverflow.com/questions/72452403/cross-reference-between-numpy-arrays
I have a 1d array of ids, for example: a = [1, 3, 4, 7, 9] Then another 2d array: b = [[1, 4, 7, 9], [3, 7, 9, 1]] I would like to have a third array with the same shape of b where each item is the index of the corresponding item from a, that is: c = [[0, 2, 3, 4], [1, 3, 4, 0]] What's a vectorized way to do that using numpy?
Effectively, this solution is a one-liner. The only catch is that you need to reshape the array before you do the one-liner, and then reshape it back again: import numpy as np a = np.array([1, 3, 4, 7, 9]) b = np.array([[1, 4, 7, 9], [3, 7, 9, 1]]) original_shape = b.shape c = np.where(b.reshape(b.size, 1) == a)[1] c = c.reshape(original_shape) This results in: [[0 2 3 4] [1 3 4 0]]
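A hedged alternative sketch (not from the original answer): when the ids in a are unique, np.searchsorted can map the values of b to their positions in a without building the (b.size, a.size) comparison matrix, which saves memory for large inputs. The sorter argument handles an unsorted a.

import numpy as np

a = np.array([1, 3, 4, 7, 9])
b = np.array([[1, 4, 7, 9], [3, 7, 9, 1]])

order = np.argsort(a)  # identity here, since a is already sorted
c = order[np.searchsorted(a, b, sorter=order)]
print(c)
# [[0 2 3 4]
#  [1 3 4 0]]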
6
1
72,370,894
2022-5-25
https://stackoverflow.com/questions/72370894/stream-image-from-android-with-ffmpeg
I'm currently receiving images from an external source as byte array and I would like to send it as raw video format via ffmpeg to a stream URL, where I have a RTSP server that receives RTSP streams (a similar unanswered question). However, I haven't worked with FFMPEG in Java, so i can't find an example on how to do it. I have a callback that copies the image bytes to a byte array as follows: public class MainActivity extends Activity { final String rtmp_url = "rtmp://192.168.0.12:1935/live/test"; private int PREVIEW_WIDTH = 384; private int PREVIEW_HEIGHT = 292; private String TAG = "MainActivity"; String ffmpeg = Loader.load(org.bytedeco.ffmpeg.ffmpeg.class); final String command[] = {ffmpeg, "-y", //Add "-re" for simulated readtime streaming. "-f", "rawvideo", "-vcodec", "rawvideo", "-pix_fmt", "bgr24", "-s", (Integer.toString(PREVIEW_WIDTH) + "x" + Integer.toString(PREVIEW_HEIGHT)), "-r", "10", "-i", "pipe:", "-c:v", "libx264", "-pix_fmt", "yuv420p", "-preset", "ultrafast", "-f", "flv", rtmp_url}; private UVCCamera mUVCCamera; public void handleStartPreview(Object surface) throws InterruptedException, IOException { Log.e(TAG, "handleStartPreview:mUVCCamera" + mUVCCamera + " mIsPreviewing:"); if ((mUVCCamera == null)) return; Log.e(TAG, "handleStartPreview2 "); try { mUVCCamera.setPreviewSize(mWidth, mHeight, 1, 26, 0, UVCCamera.DEFAULT_BANDWIDTH, 0); Log.e(TAG, "handleStartPreview3 mWidth: " + mWidth + "mHeight:" + mHeight); } catch (IllegalArgumentException e) { try { // fallback to YUV mode mUVCCamera.setPreviewSize(mWidth, mHeight, 1, 26, UVCCamera.DEFAULT_PREVIEW_MODE, UVCCamera.DEFAULT_BANDWIDTH, 0); Log.e(TAG, "handleStartPreview4"); } catch (IllegalArgumentException e1) { callOnError(e1); return; } } Log.e(TAG, "handleStartPreview: startPreview1"); int result = mUVCCamera.startPreview(); mUVCCamera.setFrameCallback(mIFrameCallback, UVCCamera.PIXEL_FORMAT_RGBX); mUVCCamera.startCapture(); Toast.makeText(MainActivity.this,"Camera Started",Toast.LENGTH_SHORT).show(); ProcessBuilder pb = new ProcessBuilder(command); pb.redirectErrorStream(true); Process process = pb.start(); BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream())); OutputStream writer = process.getOutputStream(); byte img[] = new byte[192*108*3]; for (int i = 0; i < 10; i++) { for (int y = 0; y < 108; y++) { for (int x = 0; x < 192; x++) { byte r = (byte)((x * y + i) % 255); byte g = (byte)((x * y + i*10) % 255); byte b = (byte)((x * y + i*20) % 255); img[(y*192 + x)*3] = b; img[(y*192 + x)*3+1] = g; img[(y*192 + x)*3+2] = r; } } writer.write(img); } writer.close(); String line; while ((line = reader.readLine()) != null) { System.out.println(line); } process.waitFor(); } public static void buildRawFrame(Mat img, int i) { int p = img.cols() / 60; img.setTo(new Scalar(60, 60, 60)); String text = Integer.toString(i+1); int font = Imgproc.FONT_HERSHEY_SIMPLEX; Point pos = new Point(img.cols()/2-p*10*(text.length()), img.rows()/2+p*10); Imgproc.putText(img, text, pos, font, p, new Scalar(255, 30, 30), p*2); //Blue number } Additionally: Android Camera Capture using FFmpeg uses ffmpeg to capture frame by frame from native android camera and instead of pushing it via RTMP, they used to generate a video file as output. Although how the image was passed via ffmpeg was not informed. frameData is my byte array and I'd like to know how can I write the necessary ffmpeg commands using ProcessBuilder to send an image via RTSP using ffmpeg for a given URL. 
An example of what I am trying to do, In Python 3 I could easily do it by doing: import cv2 import numpy as np import socket import sys import pickle import struct import subprocess fps = 25 width = 224 height = 224 rtmp_url = 'rtmp://192.168.0.13:1935/live/test' command = ['ffmpeg', '-y', '-f', 'rawvideo', '-vcodec', 'rawvideo', '-pix_fmt', 'bgr24', '-s', "{}x{}".format(width, height), '-r', str(fps), '-i', '-', '-c:v', 'libx264', '-pix_fmt', 'yuv420p', '-preset', 'ultrafast', '-f', 'flv', rtmp_url] p = subprocess.Popen(command, stdin=subprocess.PIPE) while(True): frame = np.random.randint([255], size=(224, 224, 3)) frame = frame.astype(np.uint8) p.stdin.write(frame.tobytes()) I would like to do the same thing in Android Update: I can reproduce @Rotem 's answer on Netbeans although, in Android I am getting NullPointer exception error when trying to execute pb.start(). Process: com.infiRay.XthermMini, PID: 32089 java.lang.NullPointerException at java.lang.ProcessBuilder.start(ProcessBuilder.java:1012) at com.infiRay.XthermMini.MainActivity.handleStartPreview(MainActivity.java:512) at com.infiRay.XthermMini.MainActivity.startPreview(MainActivity.java:563) at com.infiRay.XthermMini.MainActivity.access$1000(MainActivity.java:49) at com.infiRay.XthermMini.MainActivity$3.onConnect(MainActivity.java:316) at com.serenegiant.usb.USBMonitor$3.run(USBMonitor.java:620) at android.os.Handler.handleCallback(Handler.java:938) at android.os.Handler.dispatchMessage(Handler.java:99) at android.os.Looper.loopOnce(Looper.java:226) at android.os.Looper.loop(Looper.java:313) at android.os.HandlerThread.run(HandlerThread.java:67) 2022-06-02 11:47:20.300 32089-1049/com.infiRay.XthermMini E/libUVCCamera: [1049*UVCPreviewIR.cpp:505:uvc_preview_frame_callback]:receive err data 2022-06-02 11:47:20.304 32089-1049/com.infiRay.XthermMini E/libUVCCamera: [1049*UVCPreviewIR.cpp:505:uvc_preview_frame_callback]:receive err data 2022-06-02 11:47:20.304 32089-1049/com.infiRay.XthermMini E/libUVCCamera: [1049*UVCPreviewIR.cpp:505:uvc_preview_frame_callback]:receive err data 2022-06-02 11:47:20.308 32089-1049/com.infiRay.XthermMini E/libUVCCamera: [1049*UVCPreviewIR.cpp:505:uvc_preview_frame_callback]:receive err data 2022-06-02 11:47:20.312 32089-32089/com.infiRay.XthermMini E/MainActivity: onPause: 2022-06-02 11:47:20.314 32089-32581/com.infiRay.XthermMini I/Process: Sending signal. PID: 32089 SIG: 9
Here is a JAVA implementation that resembles the Python code: The example writes raw video frames (byte arrays) to stdin pipe of FFmpeg sub-process: _____________ ___________ ________ | JAVA byte | | | | | | Array | stdin | FFmpeg | | Output | | BGR (format)| --------> | process | -------------> | stream | |_____________| raw frame |___________| encoded video |________| Main stages: Initialize FFmpeg command arguments: final String command[] = {"ffmpeg", "-f", "rawvideo", ...} Create ProcessBuilder that executes FFmpeg as a sub-process: ProcessBuilder pb = new ProcessBuilder(command); Redirect stderr (required for reading FFmpeg messages), without it, the sub-process halts: pb.redirectErrorStream(true); Start FFmpeg sub-process, and create BufferedReader: Process process = pb.start(); BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream())); Create OutputStream for writing to stdin pipe of FFmpeg sub-process: OutputStream writer = process.getOutputStream(); Write raw video frames to stdin pipe of FFmpeg sub-process in a loop: byte img[] = new byte[width*height*3]; for (int i = 0; i < n_frmaes; i++) { //Fill img with pixel data ... writer.write(img); } Close stdin, read and print stderr content, and wait for sub-process to finish: writer.close(); String line; while ((line = reader.readLine()) != null) { System.out.println(line); } process.waitFor(); Code sample: The following code sample writes 10 raw video frames with size 192x108 to FFmpeg. Instead of streaming to RTMP, we are writing the result to test.flv file (for testing). The example uses hard coded strings and numbers (for simplicity). Note: The code sample assume FFmpeg executable is in the execution path. package myproject; import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; import java.io.OutputStream; public class FfmpegVideoWriter { public static void main(String[] args) throws IOException, InterruptedException { final String rtmp_url = "test.flv"; //Set output file (instead of output URL) for testing. final String command[] = {"ffmpeg", "-y", //Add "-re" for simulated readtime streaming. "-f", "rawvideo", "-vcodec", "rawvideo", "-pix_fmt", "bgr24", "-s", "192x108", "-r", "10", "-i", "pipe:", "-c:v", "libx264", "-pix_fmt", "yuv420p", "-preset", "ultrafast", "-f", "flv", rtmp_url}; //https://stackoverflow.com/questions/5483830/process-waitfor-never-returns ProcessBuilder pb = new ProcessBuilder(command); //Create ProcessBuilder pb.redirectErrorStream(true); //Redirect stderr Process process = pb.start(); BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream())); //Create OutputStream for writing to stdin pipe of FFmpeg sub-process. OutputStream writer = process.getOutputStream(); byte img[] = new byte[192*108*3]; //Dummy image //Write 10 video frames to stdin pipe of FFmpeg sub-process for (int i = 0; i < 10; i++) { //Fill image with some arbitrary pixel values for (int y = 0; y < 108; y++) { for (int x = 0; x < 192; x++) { //Arbitrary RGB values: byte r = (byte)((x * y + i) % 255); //Red component byte g = (byte)((x * y + i*10) % 255); //Green component byte b = (byte)((x * y + i*20) % 255); //Blue component img[(y*192 + x)*3] = b; img[(y*192 + x)*3+1] = g; img[(y*192 + x)*3+2] = r; } } writer.write(img); //Write img to FFmpeg } writer.close(); //Close stdin pipe. 
//Read and print stderr content //Note: there may be cases when FFmpeg keeps printing messages, so it may not be the best solution to empty the buffer only at the end. //We may consider adding an argument `-loglevel error` for reducing verbosity. String line; while ((line = reader.readLine()) != null) { System.out.println(line); } process.waitFor(); } } The code was tested in my PC (with Windows 10), and I am not sure it's going to work with Android... The above sample is simplistic and generic, in your case you may use rgba pixel format and write FrameData inside onFrame method. Sample video frame ("arbitrary pixel values"): Update: The following code sample uses JavaCV - writes Mat data to FFmpeg: package myproject; import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; import java.io.OutputStream; import org.opencv.core.Core; import org.opencv.core.CvType; import org.opencv.core.Mat; import org.opencv.core.Scalar; import org.opencv.core.Point; import org.opencv.imgproc.Imgproc; public class FfmpegVideoWriter { static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); } //Build synthetic "raw BGR" image for testing public static void buildRawFrame(Mat img, int i) { int p = img.cols() / 60; //Used as font size factor. img.setTo(new Scalar(60, 60, 60)); //Fill image with dark gray color String text = Integer.toString(i+1); int font = Imgproc.FONT_HERSHEY_SIMPLEX; Point pos = new Point(img.cols()/2-p*10*(text.length()), img.rows()/2+p*10); Imgproc.putText(img, text, pos, font, p, new Scalar(255, 30, 30), p*2); //Blue number } public static void main(String[] args) throws IOException, InterruptedException { final int cols = 192; final int rows = 108; final String rtmp_url = "test.flv"; //Set output file (instead of output URL) for testing. final String command[] = {"ffmpeg", "-y", //Add "-re" for simulated readtime streaming. "-f", "rawvideo", "-vcodec", "rawvideo", "-pix_fmt", "bgr24", "-s", (Integer.toString(cols) + "x" + Integer.toString(rows)), "-r", "10", "-i", "pipe:", "-c:v", "libx264", "-pix_fmt", "yuv420p", "-preset", "ultrafast", "-f", "flv", rtmp_url}; //https://stackoverflow.com/questions/5483830/process-waitfor-never-returns ProcessBuilder pb = new ProcessBuilder(command); //Create ProcessBuilder pb.redirectErrorStream(true); //Redirect stderr Process process = pb.start(); BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream())); //Create OutputStream for writing to stdin pipe of FFmpeg sub-process. OutputStream writer = process.getOutputStream(); //Dummy image (BGR pixel format). Mat img = new Mat(rows, cols, CvType.CV_8UC3, Scalar.all(0)); byte buffer[] = new byte[cols*rows*3]; //Byte array for storing img data //Write 10 video frames to stdin pipe of FFmpeg sub-process for (int i = 0; i < 10; i++) { buildRawFrame(img, i); //Build image with blue frame counter. img.get(0, 0, buffer); //Copy img data to buffer (not sure if this is the best solution). writer.write(buffer); //Write buffer (raw video frame as byte array) to FFmpeg } writer.close(); //Close stdin pipe. //Read and print stderr content String line; while ((line = reader.readLine()) != null) { System.out.println(line); } process.waitFor(); } } Sample output frame:
4
6
72,418,933
2022-5-28
https://stackoverflow.com/questions/72418933/find-boundary-points-of-xy-coordinates
I have a text file with xy-coordinates called xy.txt. 29.66150677 -98.39336541 29.66150677 -98.39337576 29.66150651 -98.39336541 29.66150328 -98.39337576 29.66150677 -98.39336475 29.66150677 -98.39338611 29.66150393 -98.39338611 29.66150677 -98.39339646 29.66150659 -98.39339646 29.66150677 -98.39339693 29.66151576 -98.39334472 29.66151576 -98.39335506 29.66151511 -98.39334472 29.66151058 -98.39335506 29.66151576 -98.39334322 29.66151576 -98.39336541 29.66151576 -98.39337576 29.66151576 -98.39338611 29.66151576 -98.39339646 29.66151576 -98.39340681 29.66151067 -98.39340681 29.66151576 -98.39341515 29.66152475 -98.39332402 29.66152475 -98.39333437 29.66152443 -98.39332402 29.66151973 -98.39333437 29.66152475 -98.39332332 29.66152475 -98.39334472 29.66152475 -98.39335506 29.66152475 -98.39336541 29.66152475 -98.39337576 29.66152475 -98.39338611 29.66152475 -98.39339646 29.66152475 -98.39340681 29.66152475 -98.39341716 29.66151699 -98.39341716 29.66152475 -98.39342722 29.66153375 -98.39331367 29.66153375 -98.39332402 29.6615302 -98.39331367 29.66153375 -98.3933086 29.66153375 -98.39333437 29.66153375 -98.39334472 29.66153375 -98.39335506 29.66153375 -98.39336541 29.66153375 -98.39337576 29.66153375 -98.39338611 29.66153375 -98.39339646 29.66153375 -98.39340681 29.66153375 -98.39341716 29.66153375 -98.39342751 29.66152507 -98.39342751 29.66153375 -98.39343443 29.66154274 -98.39330332 29.66154274 -98.39331367 29.66153745 -98.39330332 29.66154274 -98.39329625 29.66154274 -98.39332402 29.66154274 -98.39333437 29.66154274 -98.39334472 29.66154274 -98.39335506 29.66154274 -98.39336541 29.66154274 -98.39337576 29.66154274 -98.39338611 29.66154274 -98.39339646 29.66154274 -98.39340681 29.66154274 -98.39341716 29.66154274 -98.39342751 29.66154274 -98.39343786 29.66153992 -98.39343786 29.66154274 -98.3934387 29.66155173 -98.39329297 29.66155173 -98.39330332 29.6615457 -98.39329297 29.66155173 -98.39328644 29.66155173 -98.39331367 29.66155173 -98.39332402 29.66155173 -98.39333437 29.66155173 -98.39334472 29.66155173 -98.39335506 29.66155173 -98.39336541 29.66155173 -98.39337576 29.66155173 -98.39338611 29.66155173 -98.39339646 29.66155173 -98.39340681 29.66155173 -98.39341716 29.66155173 -98.39342751 29.66155173 -98.39343786 29.66155173 -98.39344106 29.66156073 -98.39328262 29.66156073 -98.39329297 29.66155555 -98.39328262 29.66156073 -98.39327744 29.66156073 -98.39330332 29.66156073 -98.39331367 29.66156073 -98.39332402 29.66156073 -98.39333437 29.66156073 -98.39334472 29.66156073 -98.39335506 29.66156073 -98.39336541 29.66156073 -98.39337576 29.66156073 -98.39338611 29.66156073 -98.39339646 29.66156073 -98.39340681 29.66156073 -98.39341716 29.66156073 -98.39342751 29.66156073 -98.39343786 29.66156073 -98.39344196 29.66156972 -98.39327227 29.66156972 -98.39328262 29.66156651 -98.39327227 29.66156972 -98.39326964 29.66156972 -98.39329297 29.66156972 -98.39330332 29.66156972 -98.39331367 29.66156972 -98.39332402 29.66156972 -98.39333437 29.66156972 -98.39334472 29.66156972 -98.39335506 29.66156972 -98.39336541 29.66156972 -98.39337576 29.66156972 -98.39338611 29.66156972 -98.39339646 29.66156972 -98.39340681 29.66156972 -98.39341716 29.66156972 -98.39342751 29.66156972 -98.39343786 29.66156972 -98.393442 29.66157871 -98.39327227 29.66157871 -98.39328262 29.66157871 -98.39326327 29.66157871 -98.39329297 29.66157871 -98.39330332 29.66157871 -98.39331367 29.66157871 -98.39332402 29.66157871 -98.39333437 29.66157871 -98.39334472 29.66157871 -98.39335506 29.66157871 -98.39336541 29.66157871 -98.39337576 
29.66157871 -98.39338611 29.66157871 -98.39339646 29.66157871 -98.39340681 29.66157871 -98.39341716 29.66157871 -98.39342751 29.66157871 -98.39343786 29.66157871 -98.39344084 29.66158771 -98.39326192 29.66158771 -98.39327227 29.66158097 -98.39326192 29.66158771 -98.39325788 29.66158771 -98.39328262 29.66158771 -98.39329297 29.66158771 -98.39330332 29.66158771 -98.39331367 29.66158771 -98.39332402 29.66158771 -98.39333437 29.66158771 -98.39334472 29.66158771 -98.39335506 29.66158771 -98.39336541 29.66158771 -98.39337576 29.66158771 -98.39338611 29.66158771 -98.39339646 29.66158771 -98.39340681 29.66158771 -98.39341716 29.66158771 -98.39342751 29.66158771 -98.39343786 29.66158771 -98.39343926 29.66159226 -98.39343786 29.6615967 -98.39326192 29.6615967 -98.39327227 29.6615967 -98.39325426 29.6615967 -98.39328262 29.6615967 -98.39329297 29.6615967 -98.39330332 29.6615967 -98.39331367 29.6615967 -98.39332402 29.6615967 -98.39333437 29.6615967 -98.39334472 29.6615967 -98.39335506 29.6615967 -98.39336541 29.6615967 -98.39337576 29.6615967 -98.39338611 29.6615967 -98.39339646 29.6615967 -98.39340681 29.6615967 -98.39341716 29.6615967 -98.39342751 29.6615967 -98.39343623 29.66160569 -98.39325157 29.66160569 -98.39326192 29.66160564 -98.39325157 29.66160569 -98.39325156 29.66160569 -98.39327227 29.66160569 -98.39328262 29.66160569 -98.39329297 29.66160569 -98.39330332 29.66160569 -98.39331367 29.66160569 -98.39332402 29.66160569 -98.39333437 29.66160569 -98.39334472 29.66160569 -98.39335506 29.66160569 -98.39336541 29.66160569 -98.39337576 29.66160569 -98.39338611 29.66160569 -98.39339646 29.66160569 -98.39340681 29.66160569 -98.39341716 29.66160569 -98.39342751 29.66160569 -98.39343291 29.66161468 -98.39325157 29.66161468 -98.39326192 29.66161468 -98.39324921 29.66161468 -98.39327227 29.66161468 -98.39328262 29.66161468 -98.39329297 29.66161468 -98.39330332 29.66161468 -98.39331367 29.66161468 -98.39332402 29.66161468 -98.39333437 29.66161468 -98.39334472 29.66161468 -98.39335506 29.66161468 -98.39336541 29.66161468 -98.39337576 29.66161468 -98.39338611 29.66161468 -98.39339646 29.66161468 -98.39340681 29.66161468 -98.39341716 29.66161468 -98.39342751 29.66161468 -98.39342823 29.66161592 -98.39342751 29.66162368 -98.39325157 29.66162368 -98.39326192 29.66162368 -98.39324697 29.66162368 -98.39327227 29.66162368 -98.39328262 29.66162368 -98.39329297 29.66162368 -98.39330332 29.66162368 -98.39331367 29.66162368 -98.39332402 29.66162368 -98.39333437 29.66162368 -98.39334472 29.66162368 -98.39335506 29.66162368 -98.39336541 29.66162368 -98.39337576 29.66162368 -98.39338611 29.66162368 -98.39339646 29.66162368 -98.39340681 29.66162368 -98.39341716 29.66162368 -98.39342302 29.66163267 -98.39325157 29.66163267 -98.39326192 29.66163267 -98.39324642 29.66163267 -98.39327227 29.66163267 -98.39328262 29.66163267 -98.39329297 29.66163267 -98.39330332 29.66163267 -98.39331367 29.66163267 -98.39332402 29.66163267 -98.39333437 29.66163267 -98.39334472 29.66163267 -98.39335506 29.66163267 -98.39336541 29.66163267 -98.39337576 29.66163267 -98.39338611 29.66163267 -98.39339646 29.66163267 -98.39340681 29.66163267 -98.39341716 29.66163267 -98.39341722 29.66163275 -98.39341716 29.66164166 -98.39325157 29.66164166 -98.39326192 29.66164166 -98.39324588 29.66164166 -98.39327227 29.66164166 -98.39328262 29.66164166 -98.39329297 29.66164166 -98.39330332 29.66164166 -98.39331367 29.66164166 -98.39332402 29.66164166 -98.39333437 29.66164166 -98.39334472 29.66164166 -98.39335506 29.66164166 -98.39336541 29.66164166 
-98.39337576 29.66164166 -98.39338611 29.66164166 -98.39339646 29.66164166 -98.39340681 29.66164166 -98.39341103 29.66164749 -98.39340681 29.66165066 -98.39325157 29.66165066 -98.39326192 29.66165066 -98.39324533 29.66165066 -98.39327227 29.66165066 -98.39328262 29.66165066 -98.39329297 29.66165066 -98.39330332 29.66165066 -98.39331367 29.66165066 -98.39332402 29.66165066 -98.39333437 29.66165066 -98.39334472 29.66165066 -98.39335506 29.66165066 -98.39336541 29.66165066 -98.39337576 29.66165066 -98.39338611 29.66165066 -98.39339646 29.66165066 -98.39340447 29.66165965 -98.39325157 29.66165965 -98.39326192 29.66165965 -98.39324479 29.66165965 -98.39327227 29.66165965 -98.39328262 29.66165965 -98.39329297 29.66165965 -98.39330332 29.66165965 -98.39331367 29.66165965 -98.39332402 29.66165965 -98.39333437 29.66165965 -98.39334472 29.66165965 -98.39335506 29.66165965 -98.39336541 29.66165965 -98.39337576 29.66165965 -98.39338611 29.66165965 -98.39339646 29.66165965 -98.39339783 29.6616615 -98.39339646 29.66166864 -98.39325157 29.66166864 -98.39326192 29.66166864 -98.39324424 29.66166864 -98.39327227 29.66166864 -98.39328262 29.66166864 -98.39329297 29.66166864 -98.39330332 29.66166864 -98.39331367 29.66166864 -98.39332402 29.66166864 -98.39333437 29.66166864 -98.39334472 29.66166864 -98.39335506 29.66166864 -98.39336541 29.66166864 -98.39337576 29.66166864 -98.39338611 29.66166864 -98.39339119 29.66167552 -98.39338611 29.66167764 -98.39325157 29.66167764 -98.39326192 29.66167764 -98.3932437 29.66167764 -98.39327227 29.66167764 -98.39328262 29.66167764 -98.39329297 29.66167764 -98.39330332 29.66167764 -98.39331367 29.66167764 -98.39332402 29.66167764 -98.39333437 29.66167764 -98.39334472 29.66167764 -98.39335506 29.66167764 -98.39336541 29.66167764 -98.39337576 29.66167764 -98.39338455 29.66168663 -98.39325157 29.66168663 -98.39326192 29.66168663 -98.39324315 29.66168663 -98.39327227 29.66168663 -98.39328262 29.66168663 -98.39329297 29.66168663 -98.39330332 29.66168663 -98.39331367 29.66168663 -98.39332402 29.66168663 -98.39333437 29.66168663 -98.39334472 29.66168663 -98.39335506 29.66168663 -98.39336541 29.66168663 -98.39337576 29.66168663 -98.39337791 29.66168954 -98.39337576 29.66169562 -98.39325157 29.66169562 -98.39326192 29.66169562 -98.39324277 29.66169562 -98.39327227 29.66169562 -98.39328262 29.66169562 -98.39329297 29.66169562 -98.39330332 29.66169562 -98.39331367 29.66169562 -98.39332402 29.66169562 -98.39333437 29.66169562 -98.39334472 29.66169562 -98.39335506 29.66169562 -98.39336541 29.66169562 -98.39337127 29.66170356 -98.39336541 29.66170462 -98.39325157 29.66170462 -98.39326192 29.66170462 -98.39324245 29.66170462 -98.39327227 29.66170462 -98.39328262 29.66170462 -98.39329297 29.66170462 -98.39330332 29.66170462 -98.39331367 29.66170462 -98.39332402 29.66170462 -98.39333437 29.66170462 -98.39334472 29.66170462 -98.39335506 29.66170462 -98.39336463 29.66171361 -98.39325157 29.66171361 -98.39326192 29.66171361 -98.39324213 29.66171361 -98.39327227 29.66171361 -98.39328262 29.66171361 -98.39329297 29.66171361 -98.39330332 29.66171361 -98.39331367 29.66171361 -98.39332402 29.66171361 -98.39333437 29.66171361 -98.39334472 29.66171361 -98.39335506 29.66171361 -98.39335799 29.66171758 -98.39335506 29.6617226 -98.39325157 29.6617226 -98.39326192 29.6617226 -98.393242 29.6617226 -98.39327227 29.6617226 -98.39328262 29.6617226 -98.39329297 29.6617226 -98.39330332 29.6617226 -98.39331367 29.6617226 -98.39332402 29.6617226 -98.39333437 29.6617226 -98.39334472 29.6617226 -98.39335135 
29.66173159 -98.39334472 29.6617316 -98.39325157 29.6617316 -98.39326192 29.6617316 -98.393242 29.6617316 -98.39327227 29.6617316 -98.39328262 29.6617316 -98.39329297 29.6617316 -98.39330332 29.6617316 -98.39331367 29.6617316 -98.39332402 29.6617316 -98.39333437 29.6617316 -98.39334471 29.66174059 -98.39325157 29.66174059 -98.39326192 29.66174059 -98.393242 29.66174059 -98.39327227 29.66174059 -98.39328262 29.66174059 -98.39329297 29.66174059 -98.39330332 29.66174059 -98.39331367 29.66174059 -98.39332402 29.66174059 -98.39333437 29.66174059 -98.39333807 29.66174561 -98.39333437 29.66174958 -98.39325157 29.66174958 -98.39326192 29.66174958 -98.39324293 29.66174958 -98.39327227 29.66174958 -98.39328262 29.66174958 -98.39329297 29.66174958 -98.39330332 29.66174958 -98.39331367 29.66174958 -98.39332402 29.66174958 -98.39333143 29.66175858 -98.39325157 29.66175858 -98.39326192 29.66176663 -98.39325157 29.66176757 -98.39326192 29.66175858 -98.39324585 29.66175858 -98.39327227 29.66175858 -98.39328262 29.66175858 -98.39329297 29.66175858 -98.39330332 29.66175858 -98.39331367 29.66175858 -98.39332402 29.66175858 -98.39332427 29.6617589 -98.39332402 29.66176757 -98.39327227 29.66177412 -98.39326192 29.66177656 -98.39327227 29.66176757 -98.3932525 29.66176757 -98.39328262 29.66176757 -98.39329297 29.66176757 -98.39330332 29.66176757 -98.39331367 29.66177543 -98.39330332 29.66176974 -98.39331367 29.66176757 -98.3933162 29.66177656 -98.39328262 29.66177775 -98.39327227 29.66177872 -98.39328262 29.66177656 -98.39326599 29.66177656 -98.39329297 29.66177855 -98.39329297 29.66177656 -98.39330028 I read the file using import numpy as np xy = np.loadtxt('xy.txt') x, y = xy[:, 0], xy[:, 1] and I can plot the points with import matplotlib.pyplot as plt plt.plot(x, y, 'o', color='black', markersize=6) plt.show() Visually the data looks as follows: I want to retrieve the xy-coordinates of the points that form the boundary of the shape. Answers on similar questions suggest to use Concave Hull. With the help of this blog I write the following code: from scipy import spatial hull = spatial.ConvexHull(xy, incremental=False, qhull_options='Qt') hull_indices = hull.vertices boundary_x = [] boundary_y = [] for i in range(len(hull_indices)): index = hull_indices[i] boundary_x.append(xy[index, 0].astype('float32')) boundary_y.append(xy[index, 1].astype('float32')) plt.plot(boundary_x, boundary_y, 'o', color='red', markersize=6) plt.show() The output looks like Clearly the red points are not the boundary points. Some are enclosed by other points and they all are different from the original points. How do I define the boundary points in terms of the original points? Please advice
When I try to reproduce your code the convexHull function works perfectly. I changed your code so that the positions of black and red circles are rounded in the same way. And I reduced the radius of the red circles so you can better see if everything fits. import numpy as np import matplotlib.pyplot as plt from scipy import spatial xy = np.loadtxt('xy.txt') x, y = xy[:, 0].astype('float64'), xy[:, 1].astype('float64') hull = spatial.ConvexHull(xy, incremental=False, qhull_options='Qt') hull_indices = hull.vertices boundary_x = [] boundary_y = [] for i in range(len(hull_indices)): index = hull_indices[i] boundary_x.append(xy[index, 0].astype('float64')) boundary_y.append(xy[index, 1].astype('float64')) plt.plot(x, y, 'o', color='black', markersize=6) plt.plot(boundary_x, boundary_y, 'o', color='red', markersize=4) plt.show() Result: Update: If you need the concave parts of the boundary, too, you can use the python package alphashape instead and calculate the alpha shape. The points in your test data are very close, so I had to normalize the coordinates and adjust the alpha value to get reasonable results. import numpy as np import matplotlib.pyplot as plt import alphashape data = np.loadtxt('xy.txt') xy = (data + [-29.0, 98.0]) * 10 x, y = xy[:, 0], xy[:, 1] shape = alphashape.alphashape(xy, alpha=500.0) shape_x, shape_y = shape.exterior.coords.xy plt.plot(x, y, 'o', color='black', markersize=6) plt.plot(shape_x, shape_y, 'o', color='red', markersize=4) plt.show() Result:
3
9
72,444,301
2022-5-31
https://stackoverflow.com/questions/72444301/calculate-percentage-change-between-values-of-column-in-pandas-dataframe
I have a dataframe with some price indices across 5 years, from 2017 to 2021. It looks like this: Country Industry Year Index US Agriculture 2017 83 US Agriculture 2018 97.2 US Agriculture 2019 100 US Agriculture 2020 112 US Agriculture 2021 108 Japan Mining 2017 88 Japan Mining 2018 93 Japan Mining 2019 100 Japan Mining 2020 104 Japan Mining 2021 112 My base year is 2019, hence the Index for every row tagged with 2019 is 100. Everything else moves up or down. I want to generate another column called Percentage_Change showing the year on year change starting from 2019 as the base year. I tried using the pd.series.pct_change function, however, that calculates the year on year percentage change starting with 2017 and it generates an NaN value for all rows where the year is 2017, instead of 2019 which should be the base year. I want the output to look like this: Country Industry Year Index Percentage_change Japan Mining 2017 88 -5.37% Japan Mining 2018 93 -7% Japan Mining 2019 100 0 Japan Mining 2020 104 4% Japan Mining 2021 112 7.69% The percentage_change for Japan between 2021 and 2020 is (112-104)/104 = 7.69%, the difference between 2020 and 2019 is (104-100)/100 = 4%, the difference between 2018 and 2019 is (93-100)/100 = -7%, the difference between 2017 and 2018 is (88-93)/93 = -5.37% Is there any other way of calculating % change in pandas?
pct_change is computing a change relative to the previous value (which is why 2017 is NaN), and this doesn't seem to be what you want. If you want to compute a percentage change relative to 2019, as 2019 is already normalized to 100, simply subtract 100: df['Percentage_Change'] = df['Index'].sub(100) output: Country Industry Year Index Percentage_Change 0 US Agriculture 2017 83.0 -17.0 1 US Agriculture 2018 97.2 -2.8 2 US Agriculture 2019 100.0 0.0 3 US Agriculture 2020 112.0 12.0 4 US Agriculture 2021 108.0 8.0 5 Japan Mining 2017 88.0 -12.0 6 Japan Mining 2018 93.0 -7.0 7 Japan Mining 2019 100.0 0.0 8 Japan Mining 2020 104.0 4.0 9 Japan Mining 2021 112.0 12.0 bidirectional pct_change If you want a bidirectional pct_change "centered" on 100, you can use masks to compute the pct_change both ways: df['Percentage_Change'] = (df .assign(ref=df['Year'].eq(2019)) .groupby(['Country', 'Industry'], group_keys=False) .apply(lambda g: g['Index'].where(g['ref'].cummax()).pct_change() .fillna(g['Index'][::-1].pct_change().mask(g['ref'].cummax(), 0)) ) ) output: Country Industry Year Index Percentage_Change 0 US Agriculture 2017 83.0 -0.146091 1 US Agriculture 2018 97.2 -0.028000 2 US Agriculture 2019 100.0 0.000000 3 US Agriculture 2020 112.0 0.120000 4 US Agriculture 2021 108.0 -0.035714 5 Japan Mining 2017 88.0 -0.053763 6 Japan Mining 2018 93.0 -0.070000 7 Japan Mining 2019 100.0 0.000000 8 Japan Mining 2020 104.0 0.040000 9 Japan Mining 2021 112.0 0.076923
4
6
72,443,312
2022-5-31
https://stackoverflow.com/questions/72443312/what-is-the-most-efficient-way-to-open-osm-pbf-with-lowest-memory-consumption
Here's what I did: from pyrosm import OSM # Initialize the OSM parser object osm = OSM('/DATA/user/nabih/indonesia-latest.osm.pbf') # Read all drivable roads drive_net = osm.get_network(network_type="driving") But it raises a memory error.
https://osmcode.org/pyosmium/ provides a library for parsing an osm.pbf file. From what I remember, it keeps memory consumption to a minimum and provides different modes of parsing. Check out the documentation for a basic usage tutorial and reference. The README of the GitHub repository provides installation instructions.
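For reference, a minimal pyosmium sketch (an illustration, not from the original answer): the file is streamed once and only the data you explicitly keep stays in memory. The tag filter below is a rough stand-in for what "drivable" means, and the path is the one from the question.

import osmium

class DrivableRoadCounter(osmium.SimpleHandler):
    def __init__(self):
        super().__init__()
        self.count = 0

    def way(self, w):
        # "highway" values roughly corresponding to a drivable network
        if w.tags.get("highway") in {"motorway", "trunk", "primary",
                                     "secondary", "tertiary", "residential"}:
            self.count += 1

handler = DrivableRoadCounter()
handler.apply_file("/DATA/user/nabih/indonesia-latest.osm.pbf")
print(handler.count, "drivable ways")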
4
4
72,440,803
2022-5-30
https://stackoverflow.com/questions/72440803/compare-rows-of-two-dataframes-in-pandas
I have two dataframes, the first is the data I currently have in the database, the second would be a file that might have changed fields: name and/or cnpj and/or create_date Based on that, I need to create a third dataframe with only the rows that have undergone some kind of change, as in the example of the expected output. The key to making the comparisons needs to be: id_account Dataframe 1: id_account name cnpj create_date 10 Agency Criss 10203040 2022-05-30 20 Agency Angel 11213141 2022-05-30 30 Supermarket Mario Bros 12223242 2022-05-30 40 Agency Mister M 13233343 2022-05-30 50 Supermarket Pokemon 14243454 2022-05-30 60 Supermarket of Dreams 15253580 2022-05-30 Dataframe 2: id_account name cnpj create_date 10 Supermarket Carol 80502030 2022-05-30 20 Agency Angel 11213141 2022-05-30 30 Supermarket Mario Bros 12223242 2022-05-30 40 Supermarket Magical 60304050 2022-05-30 50 Supermarket Pokemon 14243454 2022-05-30 60 Supermarket of Dreams 90804050 2022-05-30 Expected output: id_account name cnpj create_date 10 Supermarket Carol 80502030 2022-05-30 40 Supermarket Magical 60304050 2022-05-30 60 Supermarket of Dreams 90804050 2022-05-30 How can I do this? I've looked for a few ways, but I'm confused by the index.
If the data has the same columns but a different number of rows, this is one possible solution: res = (pd.concat([df1,df2]) .drop_duplicates(keep=False) .drop_duplicates(subset='id_account', keep='last') ) Output: id_account name cnpj create_date 0 10 Supermarket Carol 80502030 2022-05-30 3 40 Supermarket Magical 60304050 2022-05-30 5 60 Supermarket of Dreams 90804050 2022-05-30
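A hedged alternative sketch (not from the original answer), assuming df1 and df2 are the two dataframes from the question with identical columns: merge on all columns with an indicator and keep the rows of df2 that have no exact match in df1.

import pandas as pd

changed = (df2.merge(df1, how="left", indicator=True)
              .query('_merge == "left_only"')
              .drop(columns="_merge"))
print(changed)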
4
3
72,438,984
2022-5-30
https://stackoverflow.com/questions/72438984/unit-testing-init-subclass-method-in-python-3-x
I am trying to unit test an inherited class for which its base class implements __init_subclass__ method. Code is the following: quick_test.py import unittest from unittest.mock import create_autospec class Parent(): PROPERTY = NotImplemented def __init_subclass__(cls, **kwargs): if cls.PROPERTY is NotImplemented: raise NotImplementedError("Please implement the `PROPERTY`.") super().__init_subclass__(**kwargs) def __init__(self, connection_type="default"): self.connection_type = connection_type class Child(Parent): PROPERTY = "has value" class ChildNoProp(Parent): pass class TestClass(unittest.TestCase): def test_required_params(self): mock = create_autospec(Child) self.assertRaises(NotImplementedError, mock) if __name__ == '__main__': unittest.main() The problem is I can't even reach the test case because ChildNoProp definition calls __init_subclass__ in base class and raises exception. Is there a way I can unit test this with current implementation, or should I scrap the error raising in __init_subclass__?
You can create the class inside the assertRaises block. with self.assertRaises(NotImplementedError): class ChildNoProp(Parent): pass If the class declaration inside of a method makes you uncomfortable, you can use the type constructor directly. with self.assertRaises(NotImplementedError): type("ChildNoProp", (Parent,), {})
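Putting it together, the test from the question could look like this (a sketch that assumes Parent and Child are importable in the test module):

import unittest

class TestParentSubclassing(unittest.TestCase):
    def test_missing_property_raises(self):
        with self.assertRaises(NotImplementedError):
            type("ChildNoProp", (Parent,), {})

    def test_child_with_property_is_fine(self):
        self.assertEqual(Child().connection_type, "default")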
4
3
72,408,128
2022-5-27
https://stackoverflow.com/questions/72408128/typing-decorators-that-can-be-used-with-or-without-arguments
I have a decorator that can be called either without or with arguments (all strings): @decorator def fct0(a: int, b: int) -> int: return a * b @decorator("foo", "bar") # any number of arguments def fct1(a: int, b: int) -> int: return a * b I am having a hard time providing appropriate type hints so that type checkers will be able to properly validate the usage of the decorator, despite having read the related section of the doc of mypy. Here is what I have tried so far: from typing import overload, TypeVar, Any, Callable F = TypeVar("F", bound=Callable[..., Any]) @overload def decorator(arg: F) -> F: ... @overload def decorator(*args: str) -> Callable[[F], F]: ... def decorator(*args: Any) -> Any: # python code adapted from https://stackoverflow.com/q/653368 # @decorator -> shorthand for @decorator() if len(args) == 1 and callable(args[0]): return decorator()(args[0]) # @decorator(...) -> real implementation def wrapper(fct: F) -> F: # real code using `args` and `fct` here redacted for clarity return fct return wrapper Which results in the following error from mypy: error: Overloaded function implementation does not accept all possible arguments of signature 1 I also have an error with pyright: error: Overloaded implementation is not consistent with signature of overload 1 Type "(*args: Any) -> Any" cannot be assigned to type "(arg: F@decorator) -> F@decorator" Keyword parameter "arg" is missing in source I am using python 3.10.4, mypy 0.960, pyright 1.1.249.
The issue comes from the first overload (I should have read the pyright message twice!): @overload def decorator(arg: F) -> F: ... This overload accepts a keyword parameter named arg, while the implementation does not! Of course this does not matter when the decorator is used with the @decorator notation, but it could if it is called like so: fct2 = decorator(arg=fct). Python >= 3.8 The best way to solve the issue is to change the first overload so that arg is a positional-only parameter (so it cannot be passed as a keyword argument): @overload def decorator(arg: F, /) -> F: ... With support for Python < 3.8 Since positional-only parameters were only introduced in Python 3.8, we cannot change the first overload as desired. Instead, let's change the implementation to allow a **kwargs parameter (another possibility would be to add a keyword arg parameter). But now we need to handle it properly in the implementation, for example: def decorator(*args: Any, **kwargs: Any) -> Any: if kwargs: raise TypeError("Unexpected keyword argument") # rest of the implementation here
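For completeness, here is the question's code with the Python >= 3.8 fix applied (a sketch; the real decorator body is still elided, as in the question):

from typing import Any, Callable, TypeVar, overload

F = TypeVar("F", bound=Callable[..., Any])

@overload
def decorator(arg: F, /) -> F: ...
@overload
def decorator(*args: str) -> Callable[[F], F]: ...
def decorator(*args: Any) -> Any:
    # @decorator -> shorthand for @decorator()
    if len(args) == 1 and callable(args[0]):
        return decorator()(args[0])

    # @decorator(...) -> real implementation (redacted, as in the question)
    def wrapper(fct: F) -> F:
        return fct

    return wrapper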
5
4
72,411,825
2022-5-27
https://stackoverflow.com/questions/72411825/jupyter-notebook-in-vscode-with-virtual-environment-fails-to-import-tensorflow
I'm attempting to create an isolated virtual environment running tensorflow & tf2onnx using a jupyter notebook in vscode. The tf2onnx packge recommends python 3.7, and my local 3.7.9 version usually works well with tensorflow projects, so I have local and global versions set to 3.7.9 using pyenv. The following is my setup procedure: python -m venv .venv Then after starting a new terminal in vscode: pip install tensorflow==2.7.0 pip freeze > requirements.txt After this, in a cell in my jupyter notebook, the following line fails import tensorflow.keras as keras Exception: TypeError: Descriptors cannot not be created directly. If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0. If you cannot immediately regenerate your protos, some other possible workarounds are: 1. Downgrade the protobuf package to 3.20.x or lower. 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower). At this point, the protobuf package version is showing as v4.21.0 in my requirements file. I've attempted to pre-install the 3.20.1 version into the virtual environment before installing tensorflow but this yields no effect. Here is the full requirements file after installing tensorflow: absl-py==1.0.0 astunparse==1.6.3 cachetools==5.1.0 certifi==2022.5.18.1 charset-normalizer==2.0.12 flatbuffers==2.0 gast==0.4.0 google-auth==2.6.6 google-auth-oauthlib==0.4.6 google-pasta==0.2.0 grpcio==1.46.3 h5py==3.7.0 idna==3.3 importlib-metadata==4.11.4 keras==2.7.0 Keras-Preprocessing==1.1.2 libclang==14.0.1 Markdown==3.3.7 numpy==1.21.6 oauthlib==3.2.0 opt-einsum==3.3.0 protobuf==4.21.0 pyasn1==0.4.8 pyasn1-modules==0.2.8 requests==2.27.1 requests-oauthlib==1.3.1 rsa==4.8 six==1.16.0 tensorboard==2.9.0 tensorboard-data-server==0.6.1 tensorboard-plugin-wit==1.8.1 tensorflow==2.7.0 tensorflow-estimator==2.7.0 tensorflow-io-gcs-filesystem==0.26.0 termcolor==1.1.0 typing-extensions==4.2.0 urllib3==1.26.9 Werkzeug==2.1.2 wrapt==1.14.1 zipp==3.8.0
A recent change in protobuf is causing TensorFlow to break. Downgrading before installing TensorFlow might not work because TensorFlow might be bumping up the version itself. Check if that is what happens during the installation. You might want to either: Downgrade with pip install --upgrade "protobuf<=3.20.1" after installing TensorFlow, or Upgrade TensorFlow to the latest version, as TensorFlow has updated their setup file in their 2.9.1 release.
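A quick sanity check after repinning (not from the original answer): confirm which protobuf version actually ended up in the environment before importing TensorFlow.

import google.protobuf
print(google.protobuf.__version__)  # the question's error message asks for 3.20.x or lower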
8
16
72,422,403
2022-5-29
https://stackoverflow.com/questions/72422403/python-syntaxerror-f-string-unmatched
from fastapi import FastAPI from fastapi.params import Body app = FastAPI() @app.post("/createposts") def create_posts(payload: dict = Body(...)): print(payload) return {"new_post" : f"title {payload["title"]} content: {payload["content"]}"} I'm trying to create an API with Fastapi, but every time I run the code I get this error related to the return statement: SyntaxError: f-string: unmatched '[' Thank you!
Please change return {"new_post" : f"title {payload["title"]} content: {payload["content"]}"} to return {"new_post" : f"title {payload['title']} content: {payload['content']}"} You can't reuse " quotes inside an f"..." string (prior to Python 3.12). The error means that the string literal ends at the first " after the [, which leaves the bracket unmatched.
3
11
72,422,968
2022-5-29
https://stackoverflow.com/questions/72422968/finding-the-max-value-in-accordance-with-other-columns
I have students' names, their scores in different subjects, and the subject names. I want to add a column to the data frame which contains the subject in which each student had the highest score. Here is the data: Input data would be: Output data (the result data frame) would be: My try at this (which obviously didn't work): Data['Subject with highest score'] = Data.groupby(['Names','Subject'])[['Scores']].transform(lambda x: x.max())
Sort the values by Scores, then group the dataframe by Names and transform the column Subject with last df['S(max)'] = df.sort_values('Scores').groupby('Names')['Subject'].transform('last') Alternatively, we can group the dataframe by Names then transform Scores with idxmax to broadcast the indices corresponding to row having max Score, then use those indices to get the corresponding rows from Subject column df['S(max)'] = df.loc[df.groupby('Names')['Scores'].transform('idxmax'), 'Subject'].tolist() Names Scores Subject S(max) 0 Dan 98 Math Math 1 Dan 88 English Math 2 Dan 90 Biology Math 3 Bob 80 Math Chemistry 4 Bob 93 Chemistry Chemistry 5 Bob 70 Sports Chemistry 6 Bob 85 French Chemistry 7 Michael 100 History History 8 Sandra 67 French French 9 Michael 89 Math History 10 Michael 74 Sports History 11 Jacky 65 Biology Physics 12 Jacky 100 Physics Physics 13 Jacky 90 Geometry Physics 14 Jacky 87 Geography Physics 15 Jacky 69 Math Physics 16 Dan 73 Sports Math 17 Sandra 50 History French
5
1
72,421,952
2022-5-29
https://stackoverflow.com/questions/72421952/how-to-stop-selenium-from-printing-webdriver-manager-messages-in-python
Each time that I initiate a new webdriver the following text is written to the console: [WDM] - ====== WebDriver manager ====== [WDM] - Current google-chrome version is 102.0.5005 [WDM] - Get LATEST chromedriver version for 102.0.5005 google-chrome [WDM] - Driver [C:\Users\klaas\.wdm\drivers\chromedriver\win32\102.0.5005.61\chromedriver.exe] found in cache My goal was to stop selenium from printing this message to the console. Stack Overflow threads with similar topics to this one showed two options that did not work for me. The first one is: from selenium.webdriver.chrome.options import Options options = Options() options.add_experimental_option("excludeSwitches", ["enable-logging"]) driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=options) and the second one is: import logging from selenium.webdriver.remote.remote_connection import LOGGER LOGGER.setLevel(logging.WARNING) Both of these solutions did work for some others but not for me. Is there some other way to stop selenium from printing webdriver messages? Solution: As suggested by MohitC the following code prevented the webdriver-manager messages to be printed: import logging logging.getLogger('WDM').setLevel(logging.NOTSET)
These are webdriver-manager logs. You can either uninstall webdriver-manager if you are not using it, or disable its logging as below: import os os.environ['WDM_LOG'] = "false" You can also try: import logging logging.getLogger('WDM').setLevel(logging.NOTSET)
4
4
72,412,077
2022-5-28
https://stackoverflow.com/questions/72412077/how-can-i-count-comma-separated-values-in-my-dataframe
I am trying to figure out how to get value_counts from how many times a specific text value is listed in the column. Example data: d = {'Title': ['Crash Landing on You', 'Memories of the Alhambra', 'The Heirs', 'While You Were Sleeping', 'Something in the Rain', 'Uncontrollably Fond'], 'Cast' : ['Hyun Bin,Son Ye Jin,Seo Ji Hye', 'Hyun Bin,Park Shin Hye,Park Hoon', 'Lee Min Ho,Park Shin Hye,Kim Woo Bin', 'Bae Suzy,Lee Jong Suk,Jung Hae In', 'Son Ye Jin,Jung Hae In,Jang So Yeon', 'Kim Woo Bin,Bae Suzy,Im Joo Hwan']} Title Cast 0 Crash Landing on You Hyun Bin,Son Ye Jin,Seo Ji Hye 1 Memories of the Alhambra Hyun Bin,Park Shin Hye,Park Hoon 2 The Heirs Lee Min Ho,Park Shin Hye,Kim Woo Bin 3 While You Were Sleeping Bae Suzy,Lee Jong Suk,Jung Hae In 4 Something in the Rain Son Ye Jin,Jung Hae In,Jang So Yeon 5 Uncontrollably Fond Kim Woo Bin,Bae Suzy,Im Joo Hwan When I split the text and do value counts: df['Cast'] = df['Cast'].str.split(',') df['Cast'].value_counts() [Hyun Bin, Son Ye Jin, Seo Ji Hye] 1 [Hyun Bin, Park Shin Hye, Park Hoon] 1 [Lee Min Ho, Park Shin Hye, Kim Woo Bin] 1 [Bae Suzy, Lee Jong Suk, Jung Hae In] 1 [Son Ye Jin, Jung Hae In, Jang So Yeon] 1 [Kim Woo Bin, Bae Suzy, Im Joo Hwan] 1 Name: Cast, dtype: int64 How do I get the amount of times a specific text is shown in the 'Cast' column? ie: [Park Shin Hye] 2 [Hyun Bin] 2 [Bae Suzy] 1 etc
You should use the .explode method to "unpack" each list in different rows. Then .value_counts will work as intended in the original code: import pandas as pd d = {'Title': ['Crash Landing on You', 'Memories of the Alhambra', 'The Heirs', 'While You Were Sleeping', 'Something in the Rain', 'Uncontrollably Fond'], 'Cast' : ['Hyun Bin,Son Ye Jin,Seo Ji Hye', 'Hyun Bin,Park Shin Hye,Park Hoon', 'Lee Min Ho,Park Shin Hye,Kim Woo Bin', 'Bae Suzy,Lee Jong Suk,Jung Hae In', 'Son Ye Jin,Jung Hae In,Jang So Yeon', 'Kim Woo Bin,Bae Suzy,Im Joo Hwan']} df = pd.DataFrame(d) df['Cast'].str.split(',').explode('Cast').value_counts()
4
10
72,411,999
2022-5-27
https://stackoverflow.com/questions/72411999/how-to-print-as-a-string-a-callable-object
I think there should be a question like this already, but I haven't found it. It could be because I don't know the exact concepts/words about what I'm looking for, but here is the example: I have this code: group_1 = ['Hello', 'world', '!'] group_2 = [1,23,4,2,5,2] group_3 = ['A', 'K', 'L'] all_groups = [group_1, group_2, group_3] for i in all_groups: print(i, ':', len(i)) It gives this output: ['Hello', 'world', '!'] : 3 [1, 23, 4, 2, 5, 2] : 6 ['A', 'K', 'L'] : 3 And this is the expected output: 'group_1' : 3 'group_2' : 6 'group_3' : 3 As you can see, I'm trying to print the names of the callable objects group_1, group_2, and group_3. Any suggestions?
Restructure your code so that it uses a dictionary to store the group names. I would not recommend approaches that use anything related to reflection, the inspect module, or locals(), as described (or linked to) in the comments. The names of the variables in all_groups list aren't preserved when you add them to all_groups; even if they were, accessing these names would likely be more complex than just using a dictionary: data = { 'group_1': ['Hello', 'world', '!'], 'group_2': [1,23,4,2,5,2], 'group_3': ['A', 'K', 'L'] } for k, v in data.items(): print(k, ':', len(v)) This outputs: group_1 : 3 group_2 : 6 group_3 : 3
5
5
72,408,888
2022-5-27
https://stackoverflow.com/questions/72408888/pytorch-why-does-running-output-modelimages-use-so-much-gpu-memory
In trying to understand why my maximum batch size is limited for my PyTorch model, I noticed that it's not the model itself nor loading the tensors onto the GPU that uses the most memory. Most memory is used up when generating a prediction for the first time, e.g. with the following line in the training loop: output = model(images) where images is some input tensor, and model is my PyTorch model. Before running the line, I have something like 9GB of GPU memory available, and afterwards I'm down to 2.5GB (it then further drops to 1GB available after running loss = criterion(outputs, labels). Two questions: Is this normal? Why is it happening? What is all that memory being used for? From what I understand the model is already loaded in, and the actual input tensors are already on the GPU before making that call. The output tensors themselves can't be that big. Does it have something to do with storing the computational graph?
This is normal: The key here is that all intermediate tensors (the whole computation graph) have to be stored if you want to compute the gradient via backward-mode differentiation. You can avoid that by using the torch.no_grad() context manager: with torch.no_grad(): output = model(images) You will observe that a lot less memory is used, because no computation graph will be stored. But this also means that you can't compute the derivatives anymore. It is, however, the standard way if you just want to evaluate the model without the need for any optimization. There is one way to reduce the memory consumption if you still want to optimize, and it is called checkpointing. Whenever you need an intermediate tensor in the backward pass, it will be computed again from the input (or actually from the last "checkpoint"), without storing all the intermediate tensors up to that point. But this is computationally more expensive: you're trading memory for computation time.
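For illustration, here is a minimal, hypothetical sketch of the checkpointing idea using torch.utils.checkpoint.checkpoint_sequential (the model, sizes and segment count are arbitrary placeholders, not your actual setup):

import torch
from torch.utils.checkpoint import checkpoint_sequential

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024), torch.nn.ReLU(),
    torch.nn.Linear(1024, 1024), torch.nn.ReLU(),
    torch.nn.Linear(1024, 10),
)
images = torch.randn(64, 1024)

# Split the forward pass into 2 segments: only the segment boundaries are kept,
# and the activations inside each segment are recomputed during backward().
output = checkpoint_sequential(model, 2, images)
loss = output.sum()
loss.backward()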
4
5
72,406,597
2022-5-27
https://stackoverflow.com/questions/72406597/how-to-avoid-bot-detection-on-websites-using-selenium-python
We are trying to automate a process using Selenium with Python for a website, but as we proceed the bot gets detected every time and a captcha comes up. Even when a human solves that captcha, the website does not allow us to move forward; it keeps detecting the bot and shows the captcha again and again. So far we have tried every method we have explored to overcome it, but none of them worked. Can someone help with this issue? Some of the methods tried: 1.) spoofing user agents 2.) using proxies 3.) turning off useAutomationExtension 4.) changing the navigator.webdriver property value to undefined 5.) disable-blink-features 6.) excluding the collection of enable-automation switches
There is something called "Undetected ChromeDriver" you can check out! It is an optimized Selenium chromedriver patch which does not trigger anti-bot services like Distill Network / Imperva / DataDome / Botprotect.io; it automatically downloads the driver binary and patches it. Here is the link. There is also another useful website you can check out, which shows whether or not a site will detect you using Selenium or similar automation: LINK. Also, for future reference on Stack Overflow, you should steer away from opinion-based questions. Read this to learn more about asking a good question.
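For reference, basic usage looks roughly like this (a minimal sketch; the URL is just a placeholder and you would add your own options):

import undetected_chromedriver as uc

driver = uc.Chrome()          # downloads/patches the driver binary automatically
driver.get("https://example.com")
print(driver.title)
driver.quit()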
4
4
72,383,861
2022-5-25
https://stackoverflow.com/questions/72383861/how-to-declare-a-class-variable-without-a-value-in-a-way-that-suppresses-pylance
I love the typechecker in Pylance (VS Code), but there seems to be a situation that I must choose between ignoring a Pylance warning and best practice for class variable declaration. Many times, class variables are initialized using a None type in the class constructor, and the variable is set later. For example: class Person: def __init__(self) -> None: self.name:str = None def setName(self, name:str) -> None: self.name = name In this case, Pylance gives the following error on the assignment self.name:str = None: Cannot assign member "name" for type "Person" Expression of type "None" cannot be assigned to member "name" of class "Person" Type "None" cannot be assigned to type "str" Is there any way to declare self.name in the constructor in such a way that a value is not needed and Pylance is happy? EDIT: Several have suggested the use of typing.Optional to suppress this Pylance warning. However, if another member function is created to return self.name and guarantee that it is returning an instance of str, another Pylance error is generated. See the following example: class Person: def __init__(self) -> None: self._name:Optional[str] = None @property def name(self) -> str: return self._name @name.setter def name(self, name:str) -> None: self._name = name In this case, Pylance generates the error: (variable) _name: str | None Expression of type "str | None" cannot be assigned to return type "str" Type "str | None" cannot be assigned to type "str" Type "None" cannot be assigned to type "str" Ideally, there would be some way to initially "allocate" self._name in the constructor in such a way that its only allowed type is str but is not given any value. Is this possible?
update: This is so obvious I assume it won't work in your case, but it is the "natural thing to do". If you are annotating instance attributes and don't want them to be able to read "None" or another sentinel value, simply do not fill in a sentinel value; just declare the attribute and its annotation. That is, do not try to fill in a value in __init__, declare the annotation in the class body instead: class Person: name: str # No need for this: # def __init__(self) -> None: # self.name:str = None def setName(self, name:str) -> None: self.name = name Assuming you need to initialize it with an invalid value for whatever reason, the original answer applies: Well, if this was certainly a case where the programmer knows better than the type-checker, I think it would be the case for using typing.cast: this is a no-op at runtime, but it tells the static checkers that the value being cast is actually of that type, regardless of previous declarations: (to be clear: don't do this:) import typing as t class Person: def __init__(self) -> None: self._name: t.Optional[str] = None @property def name(self) -> str: return t.cast(str, self._name) ... However, upon writing this, I came to the realisation that "self._name" might actually be "None" at this time. The typechecker can do some state-checking by following the program steps, but not in such a generic way - trying to read instance.name before it was set should instead cause a runtime error in Python. So, if the method is called to express exactly that, it works, as the tools can follow parts of the code guarded by isinstance (and if one needs a more complicated check than isinstance, take a look at the docs for typing.TypeGuard) - and we thank the tools for ensuring the program will run without a hard-to-debug error afterwards: (this is the correct way) import typing as t class Person: def __init__(self) -> None: self._name: t.Optional[str] = None @property def name(self) -> str: if isinstance(name:=self._name, str): return name raise AttributeError() @name.setter def name(self, name:str) -> None: self._name = name
3
4
72,405,196
2022-5-27
https://stackoverflow.com/questions/72405196/append-1-for-the-first-occurence-of-an-item-in-list-p-that-occurs-in-list-s-and
I want this code to append 1 for the first occurrence of an item in list p that occurs in list s, and append 0 for the later occurrences and the other items in s. That's my current code below and it is appending 1 for all occurrences; I want it to append 1 for the first occurrence alone. Please help. s = [20, 39, 0, 87, 13, 0, 23, 56, 12, 13] p = [0, 13] bin = [] for i in s: if i in p: bin.append(1) else: bin.append(0) print(bin) # current result [0, 0, 1, 0, 1, 1, 0, 0, 0, 1] # expected result [0, 0, 1, 0, 1, 0, 0, 0, 0, 0]
The simplest solution is to remove the item from list p if found: s = [20, 39, 0, 87, 13, 0, 23, 56, 12, 13] p = [0, 13] out = [] for i in s: if i in p: out.append(1) p.remove(i) else: out.append(0) print(out) Prints: [0, 0, 1, 0, 1, 0, 0, 0, 0, 0]
6
5
72,403,062
2022-5-27
https://stackoverflow.com/questions/72403062/seaborn-displot-normalize-kdes-for-two-different-sample-batches
I was wondering if there is a quick way to normalize the KDE curves (such that the integral of each curve is equal to one) for two displayed sample batches (see figure below). So far I use: sb.displot(data=proc, x="TPSA", hue="Data", kind="kde", legend=False) Giving me the following plot: non-normalized KDE Plot. Thanks in advance for the help.
When the hue parameter is set, seaborn by default normalises with respect to the area of all kde curves combined. If you'd like to normalise each curve independently (so the area under each curve is 1), you should provide the displot/kdeplot with common_norm=False. e.g. in your case sb.displot(data=proc, x="TPSA", hue="Data", kind="kde", legend=False, common_norm=False)
3
9
72,400,524
2022-5-27
https://stackoverflow.com/questions/72400524/why-the-location-of-python-list-is-not-being-changed-if-the-size-is-increased
As far as I know, python list is a dynamic array. So when we reach a certain size, the capacity of that list will be increased automatically. But the problem is, unlike dynamic array of c or c++, even after increasing the capacity of list instance, the location is not being changed. Why is it happening? I've tested this using the following code block l = [] print(l.__sizeof__()) print(id(l)) for i in range(5_000_000): l.append(i) print(l.__sizeof__()) print(id(l))
In CPython (the implementation written in C distributed by python.org), a Python object never moves in memory. In the case of a list object, two pieces of memory are actually allocated: a basic header struct common to all variable-size Python container objects (containing things like the reference count, a pointer to the type object, and the number of contained objects), and a distinct block of memory for a C-level vector holding pointers to the contained Python objects. The header struct points to that vector. That vector can change size in arbitrary ways, and the header struct will change to point to its current location, but the header struct never moves. id() returns the address of that header struct. Python does not expose the address of the vector of objects.
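Python does not expose it directly, but purely as an illustration (CPython-specific and fragile: it assumes a 64-bit build and the standard PyListObject layout of refcount, type pointer, size, then the ob_item pointer at byte offset 24), you can peek at that internal vector with ctypes and watch it move while the header address from id() stays put:

import ctypes

def ob_item_ptr(l):
    # read the pointer-sized field right after the 24-byte variable-size header
    return ctypes.c_void_p.from_address(id(l) + 24).value

lst = []
header = id(lst)
before = ob_item_ptr(lst)
for i in range(5_000_000):
    lst.append(i)

print(header == id(lst))          # True: the header struct never moved
print(before, ob_item_ptr(lst))   # the vector of item pointers typically did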
3
10
72,397,740
2022-5-26
https://stackoverflow.com/questions/72397740/issues-with-spacy-model-en-core-web-lg-how-to-prevent-the-package-from-downloa
I am using spacy and its model en_core_web_lg, to perform summarisation in python. The code is running perfectly and there is no error at all. Except that, I am trying to find a way of making sure that the en_core_web_lg doesn't keep downloading in an environment if it already has it. I have googled a lot to find a perfect solution for this which I will list below but none has gelled with what I am trying to achieve. This code will be packaged and will be used by multiple people and I want to make sure that if they run the code everytime, the en_core_web_lg doesn't download if it already exists. Below is the spacy excerpt of my code and the solutions I tried: #Importing necessary Libraries from heapq import nlargest from string import punctuation import nltk import spacy from spacy.cli.download import download from spacy.lang.en.stop_words import STOP_WORDS nltk.download('punkt') download(model="en_core_web_lg") nlp_g = spacy.load('en_core_web_lg') #downloads everytime the code is run even if the model is present in the environment def spacy_summarize(text): """ Returns the summary for an input string text Parameters: :param text: Input String :type text: str Returns: :return: The summary for the input text :rtype: String """ nlp = nlp_g doc= nlp(text) word_frequencies={} for word in doc: if word.text.lower() not in [list(STOP_WORDS), punctuation]: if word.text not in word_frequencies: word_frequencies[word.text] = 1 else: word_frequencies[word.text] += 1 max_frequency=max(word_frequencies.values()) for word in word_frequencies: word_frequencies.copy()[word]=word_frequencies[word]/max_frequency sentence_tokens= [sent for sent in doc.sents] sentence_scores = {} spacy_frequencies(word_frequencies, sentence_tokens, sentence_scores) select_length=max(1,int(len(sentence_tokens)*0.05)) summary=nlargest(select_length, sentence_scores,key=sentence_scores.get) final_summary=[word.text for word in summary] summary=''.join(final_summary) return summary def spacy_frequencies(word_frequencies, sentence_tokens, sentence_scores): """ Child function for spacy function for calculating sentence scores Parameters: :param: word frequeny, sentence token and score which is provided through the parent function """ for sent in sentence_tokens: for word in sent: if word.text.lower() in word_frequencies: if sent not in sentence_scores: sentence_scores[sent]=word_frequencies[word.text.lower()] else: sentence_scores[sent]+=word_frequencies[word.text.lower()] Things Tried: import sys import subprocess import pkg_resources required = {'en_core_web_lg'} installed = {pkg.key for pkg in pkg_resources.working_set} missing = required - installed if missing: python = sys.executable subprocess.check_call([python, '-m', 'spacy', 'download', *missing], stdout=subprocess.DEVNULL) try: nlp_lg = spacy.load("en_core_web_lg") except ModuleNotFoundError: download(model="en_core_web_lg") nlp_lg = spacy.load("en_core_web_lg") Both solutions didn't give a satisfactory result and the package was downloaded again and I would appreciate if someone could help me with this? Thank you so much!
spaCy doesn't automatically download models at all, so this must be a bug with your code that checks if the model is already installed. Looking at this code: try: nlp_lg = spacy.load("en_core_web_lg") except ModuleNotFoundError: download(model="en_core_web_lg") nlp_lg = spacy.load("en_core_web_lg") The issue is that if the model is not installed this is an OSError, not a ModuleNotFoundError. First you need to fix that. This approach seems like it should work, except loading models in the same process you installed them in doesn't work very reliably - the list of installed packages is not updated while Python is running. So even after fixing the above issue, it may not work as intended. I would recommend either: Download the model to a known directory, extract it there, and load it from a path instead of just the model name, or Check the output of pip list to see if the model is installed, and install it if not
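For the second option, a small sketch of such a check (using importlib rather than parsing pip list; the caveat above about installing and then loading in the same process still applies):

import importlib.util
import spacy
from spacy.cli.download import download

if importlib.util.find_spec("en_core_web_lg") is None:
    download(model="en_core_web_lg")   # only runs when the package is missing

nlp_g = spacy.load("en_core_web_lg")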
5
3
72,398,163
2022-5-26
https://stackoverflow.com/questions/72398163/returning-default-members-when-enum-member-does-not-exist
I have an Enum where I would like a default member to be returned when a member does not exist inside of it. For example: class MyEnum(enum.Enum): A = 12 B = 24 CUSTOM = 1 print(MyEnum.UNKNOWN) # Should print MyEnum.CUSTOM I know I can use a metaclass like so: class MyMeta(enum.EnumMeta): def __getitem__(cls, name): try: return super().__getitem__(name) except KeyError as error: return cls.CUSTOM class MyEnum(enum.Enum,metaclass=MyMeta): ... But that appears to only work if I access the Enum using MyEnum['UNKNOWN']. Is there a way that covers both methods of accessing members of an enum when the member doesn't exist?
Add a definition for __getattr__ to the metaclass: class MyMeta(enum.EnumMeta): def __getitem__(cls, name): try: return super().__getitem__(name) except KeyError as error: return cls.CUSTOM def __getattr__(cls, name): try: return super().__getattr__(name) except AttributeError as error: return cls.CUSTOM Then, your code will output: MyEnum.CUSTOM
6
4
72,396,233
2022-5-26
https://stackoverflow.com/questions/72396233/why-write-async-code-in-python-while-gil-exists
I am wondering: if the Python GIL allows only a single thread/process to run at once, why should I use asyncio? I get that switching between threads is expensive, but is that it? Is that the only advantage of asyncio in Python?
Threading in Python is inefficient because of the GIL (Global Interpreter Lock), which means that multiple threads cannot be run in parallel as you would expect on a multi-processor system. Plus, you have to rely on the interpreter to switch between threads, which adds to the inefficiency. async/asyncio allows concurrency within a single thread. This gives you, as the developer, much more fine-grained control of the task switching and can give much better performance for concurrent I/O-bound tasks than Python threading. The 3rd approach that you don't mention is multiprocessing. This approach uses processes for concurrency and allows programs to make full use of hardware with multiple cores.
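As a small sketch of the asyncio style for I/O-bound work (the sleep stands in for a network call):

import asyncio

async def fetch(i):
    await asyncio.sleep(1)      # pretend network I/O; control is yielded here
    return i

async def main():
    # all ten "requests" overlap inside one thread: ~1s total instead of ~10s
    results = await asyncio.gather(*(fetch(i) for i in range(10)))
    print(results)

asyncio.run(main())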
5
7
72,392,884
2022-5-26
https://stackoverflow.com/questions/72392884/fastest-way-to-check-if-a-list-of-sets-has-any-containment-relationship
I have a list of 10,000 random sets with different lengths: import random random.seed(99) lst = [set(random.sample(range(1, 10000), random.randint(1, 1000))) for _ in range(10000)] I want to know the fastest way to check if there is any set that is a subset of another set (or equivalently if there is any set that is a superset of another set). Right now I am using the following very basic code: def any_containment(lst): checked_sets = [] for st in lst: if any(st.issubset(s) for s in checked_sets): return True else: checked_sets.append(st) return False %timeit any_containment(lst) # 12.3 ms ± 230 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) Clearly, my code is not utilizing previous information when checking containment in each iteration. Can anyone suggest the fastest way to do this?
Seems to be faster to sort by length and then try small sets as subset first (and for each, try large sets as superset first). Times in ms from ten cases, data generated like you did but without seeding: agree yours mine ratio result True 2.24 2.98 0.75 True True 146.25 3.10 47.19 True True 121.66 2.90 41.91 True True 0.21 2.73 0.08 True True 37.01 2.82 13.10 True True 5.86 3.13 1.87 True True 54.61 3.14 17.40 True True 0.86 2.81 0.30 True True 182.51 3.06 59.60 True True 192.93 2.73 70.65 True Code (Try it online!): import random from timeit import default_timer as time def original(lst): checked_sets = [] for st in lst: if any(st.issubset(s) for s in checked_sets): return True else: checked_sets.append(st) return False def any_containment(lst): remaining = sorted(lst, key=len, reverse=True) while remaining: s = remaining.pop() if any(s <= t for t in remaining): return True return False for _ in range(10): lst = [set(random.sample(range(1, 10000), random.randint(1, 1000))) for _ in range(10000)] t0 = time() expect = original(lst) t1 = time() result = any_containment(lst) t2 = time() te = t1 - t0 tr = t2 - t1 print(result == expect, '%6.2f ' * 3 % (te*1e3, tr*1e3, te/tr), expect) Improvement The following seems further ~20% faster. Instead of first comparing the smallest set with potentially all larger sets before giving even just the second-smallest a chance, this does give other small sets an early chance. def any_containment(lst): sets = sorted(lst, key=len) for i in range(1, len(sets)): for s, t in zip(sets, sets[-i:]): if s <= t: return True return False Comparison with my old solution (Try it online!): agree old new ratio result True 3.13 2.46 1.27 True True 3.36 3.31 1.02 True True 3.10 2.49 1.24 True True 2.72 2.43 1.12 True True 2.86 2.35 1.21 True True 2.65 2.47 1.07 True True 5.24 4.29 1.22 True True 3.01 2.35 1.28 True True 2.72 2.28 1.19 True True 2.80 2.45 1.14 True Yet another idea A shortcut could be to first collect the union of all single-element sets, and check whether that intersects with any other set (either without sorting them, or again from largest to smallest after sorting). That likely suffices. If not, then proceed as previously, but without the single-element sets.
4
6
72,380,478
2022-5-25
https://stackoverflow.com/questions/72380478/can-a-python-script-using-xlwings-be-deployed-on-a-server
We currently have a python script launched locally that periodically generates dozens of Excel files using Xlwings. How can it be deployed on a cloud server as an ETL that would be linked to a job scheduler, so that no human action is needed anymore? My concern is that Xlwings requires an Excel license (and a GUI?), which is not usually available in the production server.
The only way that you can currently do what you have in mind is to install Excel, Python, and xlwings on a Windows Server: xlwings was built for interactive workflows. You might want to look into OpenPyXL and XlsxWriter to see if you can create the reports by writing the Excel file directly, as opposed to automating the Excel application, as xlwings does.
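For example, a minimal sketch with OpenPyXL, which writes the .xlsx file directly and needs no Excel installation (the file name and data are placeholders):

from openpyxl import Workbook

wb = Workbook()
ws = wb.active
ws.title = "Report"
ws.append(["region", "sales"])              # header row
for row in [("north", 120), ("south", 95)]:
    ws.append(row)
wb.save("report.xlsx")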
6
4
72,384,268
2022-5-25
https://stackoverflow.com/questions/72384268/does-polars-support-creating-a-dataframe-from-a-nested-dictionary
I'm trying to create a polars dataframe from a dictionary (mainDict) where one of the values of mainDict is a list of dict objects (nestedDicts). When I try to do this I get an error (see below) that I don't know the meaning of. However, pandas does allow me to create a dataframe using mainDict. I'm not sure whether I'm doing something wrong, if it's a bug, or if this operation simply isn't supported by polars. I'm not too worried about finding a workaround as it should be straightforward (suggestions are welcome), but I'd like to do it this way if possible. I'm on polars version 0.13.38 on google colab (problem also happens locally on VScode, with python version 3.9.6 and windows 10). Below is an example of code that reproduces the problem and its output. Thanks! INPUT: import polars as pl import pandas as pd template = { 'a':['A', 'AA'], 'b':['B', 'BB'], 'c':['C', 'CC'], 'd':[{'D1':'D2'}, {'DD1':'DD2'}]} #create a dataframe using pandas df_pandas = pd.DataFrame(template) print(df_pandas) #create a dataframe using polars df_polars = pl.DataFrame(template) print(df_polars) OUTPUT: a b c d 0 A B C {'D1': 'D2'} 1 AA BB CC {'DD1': 'DD2'} --------------------------------------------------------------------------- ComputeError Traceback (most recent call last) <ipython-input-9-2abdc86d91da> in <module>() 12 13 #create a dataframe using polars ---> 14 df_polars = pl.DataFrame(template) 15 print(df_polars) 3 frames /usr/local/lib/python3.7/dist-packages/polars/internals/frame.py in __init__(self, data, columns, orient) 300 301 elif isinstance(data, dict): --> 302 self._df = dict_to_pydf(data, columns=columns) 303 304 elif isinstance(data, np.ndarray): /usr/local/lib/python3.7/dist-packages/polars/internals/construction.py in dict_to_pydf(data, columns) 400 return PyDataFrame(data_series) 401 # fast path --> 402 return PyDataFrame.read_dict(data) 403 404 /usr/local/lib/python3.7/dist-packages/polars/internals/series.py in __init__(self, name, values, dtype, strict, nan_to_null) 225 self._s = self.cast(dtype, strict=True)._s 226 elif isinstance(values, Sequence): --> 227 self._s = sequence_to_pyseries(name, values, dtype=dtype, strict=strict) 228 elif _PANDAS_AVAILABLE and isinstance(values, (pd.Series, pd.DatetimeIndex)): 229 self._s = pandas_to_pyseries(name, values) /usr/local/lib/python3.7/dist-packages/polars/internals/construction.py in sequence_to_pyseries(name, values, dtype, strict) 241 if constructor == PySeries.new_object: 242 try: --> 243 return PySeries.new_from_anyvalues(name, values) 244 # raised if we cannot convert to Wrap<AnyValue> 245 except RuntimeError: ComputeError: struct orders must remain the same
The error you are receiving is because your list of dictionaries does not conform to the expectations for a Series of struct in Polars. More specifically, your two dictionaries {'D1':'D2'} and {'DD1':'DD2'} are mapped to two different types of structs in Polars and thus are incompatible for inclusion in the same Series. I'll first need to explain structs ... Polars: Structs In Polars, dictionaries are mapped to something called a struct. A struct is an ordered, named collection of typed data. (In this regard, a struct is much like a Polars DataFrame with only one row.) In a struct: each field must have a unique field name each field has a datatype the order of the fields in a struct matters Polars: Mapping Dictionaries to Structs When dictionaries are mapped to structs (e.g., in a DataFrame constructor), each key in the dictionary is mapped to a field name in the struct and the corresponding dictionary value is assigned to the value of that field in the struct. Also, the order of the keys in the dictionary matters: the fields of the struct are created in the same order as the keys in the dictionary. In Python, it's easy to forget that the keys in a dictionary are ordered. Changed in version 3.7: Dictionary order is guaranteed to be insertion order. This behavior was an implementation detail of CPython from 3.6. Polars: Series/Lists of structs Here's where your input runs into trouble in Polars. A collection of structs can be included in the same Series only if: The structs have the same number of fields The fields have the same names The fields are in the same order The datatype of each field is the same for each of the structs. In your input, {'D1':'D2'} is mapped to a struct with one field having a field name of "D1" and a value of "D2". However, {'DD1':'DD2'} is mapped to a struct with one field having field name "DD1" and value "DD2". As such, the resulting structs are not compatible for inclusion in the same Series. Their field names do not match. In this instance, Polars is far more picky than Pandas, which allows for dictionaries with arbitrary key-value pairs to appear in the same column. In general, you'll find that Polars is far more opinionated about data structures and data types than Pandas. (And part of the reason is performance-related.) Workarounds One workaround for your example is to alter your dictionaries so that they include the same keys, in the same order. For example: template = { "a": ["A", "AA"], "b": ["B", "BB"], "c": ["C", "CC"], "d": [{"D1": "D2", "DD1": None}, {"D1": None, "DD1": "DD2"}], } pl.DataFrame(template) shape: (2, 4) ┌─────┬─────┬─────┬──────────────┐ │ a ┆ b ┆ c ┆ d │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ str ┆ str ┆ struct[2] │ ╞═════╪═════╪═════╪══════════════╡ │ A ┆ B ┆ C ┆ {"D2",null} │ ├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤ │ AA ┆ BB ┆ CC ┆ {null,"DD2"} │ └─────┴─────┴─────┴──────────────┘ Another easy workaround is to import the data into Pandas first, and then import the Pandas DataFrame into Polars. The import process will do the work for you. 
template = { "a": ["A", "AA"], "b": ["B", "BB"], "c": ["C", "CC"], "d": [{"D1": "D2"}, {"DD1": "DD2"}], } pl.DataFrame(pd.DataFrame(template)) >>> pl.DataFrame(pd.DataFrame(template)) shape: (2, 4) ┌─────┬─────┬─────┬──────────────┐ │ a ┆ b ┆ c ┆ d │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ str ┆ str ┆ struct[2] │ ╞═════╪═════╪═════╪══════════════╡ │ A ┆ B ┆ C ┆ {"D2",null} │ ├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤ │ AA ┆ BB ┆ CC ┆ {null,"DD2"} │ └─────┴─────┴─────┴──────────────┘ There may be other workarounds, but it will depend on your specific data and needs.
5
5
72,380,087
2022-5-25
https://stackoverflow.com/questions/72380087/is-it-possible-to-unpack-a-list-of-classes-for-use-in-a-union-type-annotation
I have a number of SQLAlchemy ORM classes that map to a database. I've written quite a few functions that utilise a number of these classes in different combinations. For example I might want to get the first from the Tournament table or the first record from the Player table. I only need one query for this as the queried fields are the same between the tables. To maintain type hints I could do the following: def get_first_record(table: Type[Union[Tournament, Player]]): # query the table for the first record If I don't want to keep repeating the type hint for other functions then I could create a variable: Tables = Type[Union[Tournament, Player]] def get_last_record(table: Tables): # query the table for the last record As I have a large number of tables and add to them frequently then it's a pain to maintain these Union variables. Coincidentally for locking and unlocking certain combinations of tables I've written some functions to create custom lists of classes in modules. I was hoping to be able to unpack these lists when creating the Union variables but Pylance tells me Union needs more than two variables: tables = [Tournament, Player] Tables = Type[Union[*tables]] # no bueno :( Is there a solution to this?
I think it's not possible to do literally what you're asking for here: tables = [Tournament, Player] Tables = Type[Union[*tables]] # no bueno :( See these other questions for more detail: Create Union type without hard-coding in Python3 typing typing: Dynamically Create Literal Alias from List of Valid Values How do I define a `typing.Union` dynamically? Passing a tuple arg (i.e. Union[tuple(tables)]) is valid Python and will work at runtime, but is not accepted by any type checker such as mypy, because you can only use literals and not expressions (i.e. the function call result) for defining types. An alternative that may work for you is: from typing import TypeVar T = TypeVar('T', bound=django.models.Model) def get_last_record(table: Type[T]) -> T: ... If you need to restrict calls of get_last_record to a subset of specific models, this annotation won't help you unfortunately - you'd still need to define a Union of models to use as a bound. A possible workaround would be... instead of defining a list of models and trying to create a Union from that, do it the other way around, e.g.: T = Union[Model1, Model2, Model3] models = list(T.__args__) But since you say "I've written some functions to create custom lists of classes in modules" I think this won't work for you either, because the result of the function call won't be acceptable as a type for mypy. Actually, probably what would work better for you is to materialise the union of models into the type hierarchy as a base class that they all share, e.g. instead of: class Model1(django.models.Model): ... class Model2(django.models.Model): ... class Model3(django.models.Model): ... T = Union[Model1, Model2, Model3] def get_last_record(table: Type[T]) -> T: ... have: class OrderableRecord(django.models.Model): class Meta: abstract = True class Model1(OrderableRecord): ... class Model2(OrderableRecord): ... class Model3(OrderableRecord): ... T = TypeVar('T', bound=OrderableRecord) def get_last_record(table: Type[T]) -> T: ...
4
1
72,382,501
2022-5-25
https://stackoverflow.com/questions/72382501/how-to-interpret-mape-in-python-sklearn
I am trying to interpret the value that I get out of sklearn.metrics.mean_absolute_percentage_error(y_true, y_pred), but have difficulty understanding the interpretation. I need to interpret the result based on the accepted (?) schema below Based on the official Python explanation: Note here that the output is not a percentage in the range [0, 100] and a value of 100 does not mean 100% but 1e2. Furthermore, the output can be arbitrarily high when y_true is small (which is specific to the metric) or when abs(y_true - y_pred) is large (which is common for most regression metrics). Read more in the User Guide. So again, from sklearn.metrics import mean_absolute_percentage_error y_true = [3, -0.5, 2, 7] y_pred = [2.5, 0.0, 2, 8] mean_absolute_percentage_error(y_true, y_pred) 0.3273... What does 0.32 mean? If this does not mean 32%, then what? I am using this function and I am getting these results for two different data sets: 0.3 for one set of my data 1.3 for another set of my data I can say the first set is more accurate, but can I say 30% is the MAPE of the first set and 130% is the MAPE of the second set? I guess I cannot. So how do I need to interpret these outputs?
If you look at the source code for the MAPE calculation in sklearn you will see the value is not multiplied by 100, so it is not a percentage. Therefore, while interpreting your results, you should multiply the MAPE value by 100 to express it as a percentage. You must also pay close attention to your actual data: if there are values close to 0, they could cause the MAPE to be very large. For instance, you could look at the Wikipedia article on the MAPE formulation. I suggest you plot the actual values and the predicted values as a scatter plot and then compare it with the line y=x.
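To make that concrete, a small sketch using your example numbers (note the scaling, and that the manual formula blows up when a true value is near zero):

import numpy as np
from sklearn.metrics import mean_absolute_percentage_error

y_true = np.array([3, -0.5, 2, 7])
y_pred = np.array([2.5, 0.0, 2, 8])

mape = mean_absolute_percentage_error(y_true, y_pred)
print(mape)         # ~0.327 -> read it as ~32.7% after multiplying by 100
print(mape * 100)

# equivalent manual computation
print(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)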
4
7
72,374,146
2022-5-25
https://stackoverflow.com/questions/72374146/histogram-equalization-on-specific-area-using-mask
I am working with images that I need to process. However, only specific areas of these images are of interest so for each image I have a corresponding mask (that can be of any shape, not a bounding box or anything specific). I would like to do Histogram Equalization but only on the "masked surface" as I am not interested in the rest of the image. A similar question has been asked here, however the solutions proposed rely on bounding boxes or equalizing the whole image which is not my goal. Any solution using Histogram Equalization or CLAHE, RGB or grey scale images would be interesting
This is possible. Here is a simple approach: Flow: You can perform histogram equalization for a given region with the help of the mask. Using the mask, store coordinates where pixels are in white. Store pixel intensities from these coordinates present in the grayscale image Perform histogram equalization on these stored pixels. You will now get a new set of values. Replace the old pixel values with the new ones based on the coordinate positions. Code: The following is an illustration using a grayscale image. img = cv2.imread('flower.jpg') gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) The following is the mask: mask = cv2.imread('mask.jpg', 0) I want to perform histogram equalization only on the flower. Store the coordinates where pixels are white (255) in mask: coord = np.where(mask == 255) Store all pixel intensities on these coordinates present in gray: pixels = gray[coord] Perform histogram equalization on these pixel intensities: equalized_pixels = cv2.equalizeHist(pixels) Create a copy of gray named gray2. Place the equalized intensities in the same coordinates: gray2 = gray.copy() for i, C in enumerate(zip(coord[0], coord[1])): gray2[C[0], C[1]] = equalized_pixels[i][0] cv2.imshow('Selective equalization', gray2) Comparison: Note: This process can be extended for Histogram Equalization or CLAHE, ON RGB or grayscale images.
4
5
72,372,635
2022-5-25
https://stackoverflow.com/questions/72372635/importerror-cannot-import-name-re-path-from-django-conf-urls
I'm following a Django tutorial and trying to update a urls.py file with the following code: from django.contrib import admin from django.urls import path from django.conf.urls import re_path, include urlpatterns=[ path('admin/', admin.site.urls), re_path(r'^',include('EmployeeApp.urls')) ] When I run the server with python manage.py runserver I get the following error: ImportError: cannot import name 're_path' from 'django.conf.urls' (C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\django\conf\urls\__init__.py) I'm running version 4.0.4 of Django: py -m django --version # Prints: 4.0.4
I needed to change from django.conf.urls import re_path, include to: from django.conf.urls import include from django.urls import re_path Now the error has stopped. (Documentation)
6
11
72,368,336
2022-5-24
https://stackoverflow.com/questions/72368336/what-is-a-meshloop-in-the-blender-python-api
I'm currently making a custom render engine for Blender 3.0+. I'm programming in C++ (core engine) and Python (Blender API). In the past I have made two other render engines, where the data stored for meshes are polygons, edges and vertices, which is quite usual. Blender uses a different structure, where a polygon references multiple "MeshLoops". A "MeshLoop" references one edge and one vertex. What I don't quite understand is the reason behind this. Simply referencing a list of vertices would absolutely work (and that's how my render engine works for now). I'm certain there is a reason for this, but I'm unable to find any more information about it. The API documentation about it simply states that it keeps the index of one vertex and one edge. So my question is: what exactly is a MeshLoop, and what purpose does it serve? Useful links: Blender API : Mesh Blender API : MeshLoops Blender API : MeshPolygon
What the MeshLoop is in the Blender API You're undoubtedly right about the official documentation, even in 2024 it sucks. But the answer to your question is obvious. MeshLoop objects are tiny "utilities" in Blender that help you not only describe 3D geometry but also store color data for each vertex. There are four main arrays to define mesh geometry in Blender (unlike Maya, for instance, where vertices, edges and polygons are used for a commonly accepted mesh description, as you previously said): Mesh.vertices : (3 points in space) Mesh.edges : (reference 2 vertices) Mesh.loops : (reference a single vertex and edge) Mesh.polygons : (reference a range of loops) As stated in the documentation: "Each polygon references a slice in the loop array, this way, polygons do not store vertices or corner data such as UVs directly, only a reference to loops that the polygon uses." Thus, using the simplest example of a plane with 4 different colors assigned, we'll see how the central vertex, thanks to MeshLoop, retrieves the information about each of the four colors (without MeshLoop you can store only one color per vertex). import bpy plane = bpy.context.object mesh = plane.data print(len(mesh.vertices)) print(len(mesh.edges)) print(len(mesh.loops)) print(len(mesh.polygons)) print(len(mesh.vertex_colors))
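To see the indirection concretely, here is a small sketch that walks each polygon through its slice of the loop array (it assumes the active object is a mesh):

import bpy

mesh = bpy.context.object.data
for poly in mesh.polygons:
    # each polygon owns a slice of the loop array, not the vertices directly
    for li in poly.loop_indices:
        loop = mesh.loops[li]
        print(poly.index, li, loop.vertex_index, loop.edge_index)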
6
1
72,352,725
2022-5-23
https://stackoverflow.com/questions/72352725/how-to-increase-values-of-polars-dataframe-column-by-index
I have a data frame as follow ┌────────────┬──────────┬──────────┬──────────┬──────────┐ │ time ┆ open ┆ high ┆ low ┆ close │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ f64 ┆ f64 ┆ f64 ┆ f64 │ ╞════════════╪══════════╪══════════╪══════════╪══════════╡ │ 1649016000 ┆ 46405.49 ┆ 47444.11 ┆ 46248.84 ┆ 46407.35 │ │ 1649030400 ┆ 46407.36 ┆ 46461.14 ┆ 45744.77 ┆ 46005.44 │ │ 1649044800 ┆ 46005.43 ┆ 46293.38 ┆ 45834.39 ┆ 46173.99 │ │ 1649059200 ┆ 46174.0 ┆ 46287.97 ┆ 45787.0 ┆ 46160.09 │ │ … ┆ … ┆ … ┆ … ┆ … │ │ 1653278400 ┆ 30171.32 ┆ 30670.51 ┆ 30101.07 ┆ 30457.01 │ │ 1653292800 ┆ 30457.01 ┆ 30616.18 ┆ 30281.89 ┆ 30397.11 │ │ 1653307200 ┆ 30397.12 ┆ 30625.98 ┆ 29967.07 ┆ 30373.53 │ │ 1653321600 ┆ 30373.53 ┆ 30529.9 ┆ 30042.09 ┆ 30121.02 │ └────────────┴──────────┴──────────┴──────────┴──────────┘ I want to count how many times each price (low and high) were local minimum/maximum in a window range of 2 to 50. first I add two columns for count of being local min/max for each row and fill by zeros raw_data["lmin_count"] = np.zeros(len(raw_data), dtype=np.int16) raw_data["lmax_count"] = np.zeros(len(raw_data), dtype=np.int16) then I iterate window length from 2 to 50 and find index of each local min/max by using: for _order in range(2, 51): local_minima = argrelextrema(raw_data["low"].to_numpy(), np.less, order=_order)[0] local_maxima = argrelextrema(raw_data["high"].to_numpy(), np.greater, order=_order)[0] which order is window length. and in each iteration over window length I want to increase value of lmin_count and lmax_count by indices found in local_minima and local_maxima I tried increasing value by this code: if len(local_minima) > 1: raw_data[local_minima,5] += 1 if len(local_maxima) > 1: raw_data[local_minima,6] += 1 which local_minima and local_maxima are array of indices and 5,6 are index of lmin_count and lmax_count columns. but got error not implemented. So what is the best way to increase (or assign) value of column by row indices? Update 2022/05/24 As answers were very helpful now I have other issues. I changed my code as follow: min_expr_list = [ ( pl.col("price").rolling_min( window_size=_order * 2 + 1, min_periods=_order + 2, center=True ) == pl.col("price") ).cast(pl.UInt32) for _order in range(200, 1001) ] max_expr_list = [ ( pl.col("price").rolling_max( window_size=_order * 2 + 1, min_periods=_order + 2, center=True ) == pl.col("price") ).cast(pl.UInt32) for _order in range(200, 1001) ] raw_data = raw_data.with_columns( pl.sum_horizontal(min_expr_list).alias("min_freq"), pl.sum_horizontal(max_expr_list).alias("max_freq"), ) first: is it possible to merge both min_expr_list and max_expr_list into one list? and if it is possible, in with_columns expression how can I add separate columns based on each element of list? another issue I am facing is memory usage of this approach. In previous example _order were limited but in action it is more wider than example. currently I have datasets with millions of records (some of them have more than 10 million records) and _orders range can be from 2 to 1500 so calculating needs lots of GB of ram. is there any better way to do that? and one more side problem. when increasing _order to more than 1000 it seems it doesn't work. is there any limitation in source code?
Let me see if we can build on @ritchie46 response and nudge you closer to the finish line. Data I've concatenated the 'open', 'high', and 'low' columns in your sample data, just to give us some data to work with. I've also added an index column, just for discussion. (It won't be used in any calculations whatsoever, so you don't need to include it in your final code.) import numpy as np import polars as pl from scipy.signal import argrelextrema df = pl.DataFrame( { "col1": [ 46405.49, 46407.36, 46005.43, 46174.00, 30171.32, 30457.01, 30397.12, 30373.53, 47444.11, 46461.14, 46293.38, 46287.97, 30670.51, 30616.18, 30625.98, 30529.90, 46248.84, 45744.77, 45834.39, 45787.00, 30101.07, 30281.89, 29967.07, 30042.09, ] } ).with_row_index() df shape: (24, 2) ┌───────┬──────────┐ │ index ┆ col1 │ │ --- ┆ --- │ │ u32 ┆ f64 │ ╞═══════╪══════════╡ │ 0 ┆ 46405.49 │ │ 1 ┆ 46407.36 │ │ 2 ┆ 46005.43 │ │ 3 ┆ 46174.0 │ │ 4 ┆ 30171.32 │ │ 5 ┆ 30457.01 │ │ 6 ┆ 30397.12 │ │ 7 ┆ 30373.53 │ │ 8 ┆ 47444.11 │ │ 9 ┆ 46461.14 │ │ 10 ┆ 46293.38 │ │ 11 ┆ 46287.97 │ │ 12 ┆ 30670.51 │ │ 13 ┆ 30616.18 │ │ 14 ┆ 30625.98 │ │ 15 ┆ 30529.9 │ │ 16 ┆ 46248.84 │ │ 17 ┆ 45744.77 │ │ 18 ┆ 45834.39 │ │ 19 ┆ 45787.0 │ │ 20 ┆ 30101.07 │ │ 21 ┆ 30281.89 │ │ 22 ┆ 29967.07 │ │ 23 ┆ 30042.09 │ └───────┴──────────┘ Now, let's run the scipy.signal.argrelextrema code on this data. for _order in range(1, 7): print( "order:", _order, ":", argrelextrema(df["col1"].to_numpy(), np.less, order=_order) ) order: 1 : (array([ 2, 4, 7, 13, 15, 17, 20, 22]),) order: 2 : (array([ 4, 7, 15, 22]),) order: 3 : (array([ 4, 15, 22]),) order: 4 : (array([ 4, 15, 22]),) order: 5 : (array([ 4, 22]),) order: 6 : (array([ 4, 22]),) From the output, it looks like you're trying to find the index of any row that is the minimum value of a window centered on that row, for various window sizes. For example, index 2 is a local minimum of a window of size 3, centered on index 2. (Here, order=1 in the call to argrelextrema means "including one value above and below", and hence "window size" = (order * 2) + 1) = 3. Let's replicate this in Polars. We'll take it in steps. rolling_min First, let's use the rolling_min expression to calculate rolling minimums corresponding to order from 1 to 6. Notice that Polars allows us to generate a list of expressions outside of the with_columns context. (This often helps keep code more readable.) I'm converting the scipy order keyword to the equivalent window_size for rolling_min. Also, I'm setting the min_periods to make sure that there is at least one value on each side of the center value of any window (to replicate the scipy calculations). 
expr_list = [ pl.col("col1").rolling_min( window_size=_order * 2 + 1, min_periods=_order + 2, center=True ).alias("roll_min" + str(_order)) for _order in range(1, 7) ] df.with_columns(expr_list) shape: (24, 8) ┌───────┬──────────┬───────────┬───────────┬───────────┬───────────┬───────────┬───────────┐ │ index ┆ col1 ┆ roll_min1 ┆ roll_min2 ┆ roll_min3 ┆ roll_min4 ┆ roll_min5 ┆ roll_min6 │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ u32 ┆ f64 ┆ f64 ┆ f64 ┆ f64 ┆ f64 ┆ f64 ┆ f64 │ ╞═══════╪══════════╪═══════════╪═══════════╪═══════════╪═══════════╪═══════════╪═══════════╡ │ 0 ┆ 46405.49 ┆ null ┆ null ┆ null ┆ null ┆ null ┆ null │ │ 1 ┆ 46407.36 ┆ 46005.43 ┆ 46005.43 ┆ 30171.32 ┆ 30171.32 ┆ 30171.32 ┆ 30171.32 │ │ 2 ┆ 46005.43 ┆ 46005.43 ┆ 30171.32 ┆ 30171.32 ┆ 30171.32 ┆ 30171.32 ┆ 30171.32 │ │ 3 ┆ 46174.0 ┆ 30171.32 ┆ 30171.32 ┆ 30171.32 ┆ 30171.32 ┆ 30171.32 ┆ 30171.32 │ │ 4 ┆ 30171.32 ┆ 30171.32 ┆ 30171.32 ┆ 30171.32 ┆ 30171.32 ┆ 30171.32 ┆ 30171.32 │ │ 5 ┆ 30457.01 ┆ 30171.32 ┆ 30171.32 ┆ 30171.32 ┆ 30171.32 ┆ 30171.32 ┆ 30171.32 │ │ 6 ┆ 30397.12 ┆ 30373.53 ┆ 30171.32 ┆ 30171.32 ┆ 30171.32 ┆ 30171.32 ┆ 30171.32 │ │ 7 ┆ 30373.53 ┆ 30373.53 ┆ 30373.53 ┆ 30171.32 ┆ 30171.32 ┆ 30171.32 ┆ 30171.32 │ │ 8 ┆ 47444.11 ┆ 30373.53 ┆ 30373.53 ┆ 30373.53 ┆ 30171.32 ┆ 30171.32 ┆ 30171.32 │ │ 9 ┆ 46461.14 ┆ 46293.38 ┆ 30373.53 ┆ 30373.53 ┆ 30373.53 ┆ 30171.32 ┆ 30171.32 │ │ 10 ┆ 46293.38 ┆ 46287.97 ┆ 30670.51 ┆ 30373.53 ┆ 30373.53 ┆ 30373.53 ┆ 30171.32 │ │ 11 ┆ 46287.97 ┆ 30670.51 ┆ 30616.18 ┆ 30616.18 ┆ 30373.53 ┆ 30373.53 ┆ 30373.53 │ │ 12 ┆ 30670.51 ┆ 30616.18 ┆ 30616.18 ┆ 30529.9 ┆ 30529.9 ┆ 30373.53 ┆ 30373.53 │ │ 13 ┆ 30616.18 ┆ 30616.18 ┆ 30529.9 ┆ 30529.9 ┆ 30529.9 ┆ 30529.9 ┆ 30373.53 │ │ 14 ┆ 30625.98 ┆ 30529.9 ┆ 30529.9 ┆ 30529.9 ┆ 30529.9 ┆ 30529.9 ┆ 30101.07 │ │ 15 ┆ 30529.9 ┆ 30529.9 ┆ 30529.9 ┆ 30529.9 ┆ 30529.9 ┆ 30101.07 ┆ 30101.07 │ │ 16 ┆ 46248.84 ┆ 30529.9 ┆ 30529.9 ┆ 30529.9 ┆ 30101.07 ┆ 30101.07 ┆ 29967.07 │ │ 17 ┆ 45744.77 ┆ 45744.77 ┆ 30529.9 ┆ 30101.07 ┆ 30101.07 ┆ 29967.07 ┆ 29967.07 │ │ 18 ┆ 45834.39 ┆ 45744.77 ┆ 30101.07 ┆ 30101.07 ┆ 29967.07 ┆ 29967.07 ┆ 29967.07 │ │ 19 ┆ 45787.0 ┆ 30101.07 ┆ 30101.07 ┆ 29967.07 ┆ 29967.07 ┆ 29967.07 ┆ 29967.07 │ │ 20 ┆ 30101.07 ┆ 30101.07 ┆ 29967.07 ┆ 29967.07 ┆ 29967.07 ┆ 29967.07 ┆ 29967.07 │ │ 21 ┆ 30281.89 ┆ 29967.07 ┆ 29967.07 ┆ 29967.07 ┆ 29967.07 ┆ 29967.07 ┆ 29967.07 │ │ 22 ┆ 29967.07 ┆ 29967.07 ┆ 29967.07 ┆ 29967.07 ┆ 29967.07 ┆ 29967.07 ┆ 29967.07 │ │ 23 ┆ 30042.09 ┆ null ┆ null ┆ null ┆ null ┆ null ┆ null │ └───────┴──────────┴───────────┴───────────┴───────────┴───────────┴───────────┴───────────┘ Looking at the output in roll_min_1 (the equivalent of an order=1 call for argrelextrema), we see that the values in roll_min_1 equal the values in col1 for index 2, 4, 7, 13, 15, 17, 20, 22 ... which corresponds exactly to the output of argrelextrema for order=1. Likewise, for the other roll_min_X columns. We'll use this fact in the next step. Obtaining the row index As @ritchie46 points out, in Polars, we use conditions (not indexing). We'll modify the above code to identify whether the value in col1 equals it's rolling min, for each of our window sizes. 
expr_list = [ ( pl.col("col1").rolling_min( window_size=_order * 2 + 1, min_periods=_order + 2, center=True ) == pl.col("col1") ).alias("min_idx_" + str(_order)) for _order in range(1, 7) ] df.with_columns(expr_list) shape: (24, 8) ┌───────┬──────────┬───────────┬───────────┬───────────┬───────────┬───────────┬───────────┐ │ index ┆ col1 ┆ min_idx_1 ┆ min_idx_2 ┆ min_idx_3 ┆ min_idx_4 ┆ min_idx_5 ┆ min_idx_6 │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ u32 ┆ f64 ┆ bool ┆ bool ┆ bool ┆ bool ┆ bool ┆ bool │ ╞═══════╪══════════╪═══════════╪═══════════╪═══════════╪═══════════╪═══════════╪═══════════╡ │ 0 ┆ 46405.49 ┆ null ┆ null ┆ null ┆ null ┆ null ┆ null │ │ 1 ┆ 46407.36 ┆ false ┆ false ┆ false ┆ false ┆ false ┆ false │ │ 2 ┆ 46005.43 ┆ true ┆ false ┆ false ┆ false ┆ false ┆ false │ │ 3 ┆ 46174.0 ┆ false ┆ false ┆ false ┆ false ┆ false ┆ false │ │ 4 ┆ 30171.32 ┆ true ┆ true ┆ true ┆ true ┆ true ┆ true │ │ 5 ┆ 30457.01 ┆ false ┆ false ┆ false ┆ false ┆ false ┆ false │ │ 6 ┆ 30397.12 ┆ false ┆ false ┆ false ┆ false ┆ false ┆ false │ │ 7 ┆ 30373.53 ┆ true ┆ true ┆ false ┆ false ┆ false ┆ false │ │ 8 ┆ 47444.11 ┆ false ┆ false ┆ false ┆ false ┆ false ┆ false │ │ 9 ┆ 46461.14 ┆ false ┆ false ┆ false ┆ false ┆ false ┆ false │ │ 10 ┆ 46293.38 ┆ false ┆ false ┆ false ┆ false ┆ false ┆ false │ │ 11 ┆ 46287.97 ┆ false ┆ false ┆ false ┆ false ┆ false ┆ false │ │ 12 ┆ 30670.51 ┆ false ┆ false ┆ false ┆ false ┆ false ┆ false │ │ 13 ┆ 30616.18 ┆ true ┆ false ┆ false ┆ false ┆ false ┆ false │ │ 14 ┆ 30625.98 ┆ false ┆ false ┆ false ┆ false ┆ false ┆ false │ │ 15 ┆ 30529.9 ┆ true ┆ true ┆ true ┆ true ┆ false ┆ false │ │ 16 ┆ 46248.84 ┆ false ┆ false ┆ false ┆ false ┆ false ┆ false │ │ 17 ┆ 45744.77 ┆ true ┆ false ┆ false ┆ false ┆ false ┆ false │ │ 18 ┆ 45834.39 ┆ false ┆ false ┆ false ┆ false ┆ false ┆ false │ │ 19 ┆ 45787.0 ┆ false ┆ false ┆ false ┆ false ┆ false ┆ false │ │ 20 ┆ 30101.07 ┆ true ┆ false ┆ false ┆ false ┆ false ┆ false │ │ 21 ┆ 30281.89 ┆ false ┆ false ┆ false ┆ false ┆ false ┆ false │ │ 22 ┆ 29967.07 ┆ true ┆ true ┆ true ┆ true ┆ true ┆ true │ │ 23 ┆ 30042.09 ┆ null ┆ null ┆ null ┆ null ┆ null ┆ null │ └───────┴──────────┴───────────┴───────────┴───────────┴───────────┴───────────┴───────────┘ Notice that for min_idx_1, the values are true for index 2, 4, 7, 13, 15, 17, 20, 22, which corresponds to the output of argrelextrema for order=1. Likewise, for the other columns. Summing We can now use the cast function and the polars.sum_horizontal function to sum row-wise across our columns. (Indeed, we won't keep our rolling min columns -- we'll just keep the sums). 
expr_list = [ ( pl.col("col1").rolling_min( window_size=_order * 2 + 1, min_periods=_order + 2, center=True ) == pl.col("col1") ).cast(pl.UInt32) for _order in range(1, 7) ] df.with_columns(pl.sum_horizontal(expr_list).alias("min_freq")) shape: (24, 3) ┌───────┬──────────┬──────────┐ │ index ┆ col1 ┆ min_freq │ │ --- ┆ --- ┆ --- │ │ u32 ┆ f64 ┆ u32 │ ╞═══════╪══════════╪══════════╡ │ 0 ┆ 46405.49 ┆ 0 │ │ 1 ┆ 46407.36 ┆ 0 │ │ 2 ┆ 46005.43 ┆ 1 │ │ 3 ┆ 46174.0 ┆ 0 │ │ 4 ┆ 30171.32 ┆ 6 │ │ 5 ┆ 30457.01 ┆ 0 │ │ 6 ┆ 30397.12 ┆ 0 │ │ 7 ┆ 30373.53 ┆ 2 │ │ 8 ┆ 47444.11 ┆ 0 │ │ 9 ┆ 46461.14 ┆ 0 │ │ 10 ┆ 46293.38 ┆ 0 │ │ 11 ┆ 46287.97 ┆ 0 │ │ 12 ┆ 30670.51 ┆ 0 │ │ 13 ┆ 30616.18 ┆ 1 │ │ 14 ┆ 30625.98 ┆ 0 │ │ 15 ┆ 30529.9 ┆ 4 │ │ 16 ┆ 46248.84 ┆ 0 │ │ 17 ┆ 45744.77 ┆ 1 │ │ 18 ┆ 45834.39 ┆ 0 │ │ 19 ┆ 45787.0 ┆ 0 │ │ 20 ┆ 30101.07 ┆ 1 │ │ 21 ┆ 30281.89 ┆ 0 │ │ 22 ┆ 29967.07 ┆ 6 │ │ 23 ┆ 30042.09 ┆ 0 │ └───────┴──────────┴──────────┘ I believe this is the result you were looking to obtain. From here, I think you can expand the above code for rolling maximums. Ties One difference between this code and the argrelextrema code pertains to ties. If two values tie for the minimum in any window, argrelextrema considers neither to be the minimum for the window. The code above considers both to be minimum values. I'm not sure how likely this will be for the size of windows you have, or for the type of data. Please update Polars to 0.13.38 The latest release of Polars contains some major improvements to the performance of rolling functions. (The announcement is on this Twitter thread.) You'll want to take advantage of that by updating to the latest version. Update - 2022/05/24 Merging all lists into one expression first: is it possible to merge both min_expr_list and max_expr_list into one list? and if it is possible, in with_columns expression how can I add separate columns based on each element of list? It is possible to generate all columns (min and max, for each order, for each variable) using a single list in a single with_columns context. The calculations are independent, and thus can be in the same with_columns context. Each column would have a name that could be used in a later calculation step. But in that case, the accumulation steps would need to be in a separate with_columns expression. The with_columns context assumes all calculations are independent - and that they can be run in any order, without dependencies. But summarizing columns (by selecting them by name) is dependent on those columns first being created. You can return multiple Series from a single function if you return them as a Series of struct (e.g., using map) .. but that's generally to be avoided. (And beyond the scope of our question here.) And more specifically for this problem, we are dealing with issues of memory pressure. So, in this case, we'll need to move in the opposite direction - we'll need to break lists into smaller pieces and feed them to the with_columns expression in batches. Batching the calculations by order currently I have datasets with millions of records (some of them have more than 10 million records) and _orders range can be from 2 to 1500 so calculating needs lots of GB of ram. is there any better way to do that? We'll try a couple of techniques to reduce memory pressure ... and still achieve good performance. One technique will be to use the fold method to accumulate values. This allows us to sum boolean values without having to cast every intermediate column to integers. 
This should reduce memory pressure during the intermediate calculations. We'll also batch our calculations by breaking the expression lists into sub-lists, calculating intermediate results, and accumulating into an accumulator column using the fold method. First, let's eliminate the cast to integer in your min_expr_list. min_expr_list = [ ( pl.col("price").rolling_min( window_size=_order * 2 + 1, min_periods=_order + 2, center=True ) == pl.col("price") ) for _order in range(1, 20) ] Next we'll need to pick a batch_size and initialize an accumulator column. I would experiment with different batch_size numbers until you find one that seems to work well for your computing platform and size of dataset. Since we have limited data in this example, I'll pick a batch_size of 5 - just to demonstrate the algorithm. batch_size = 5 df = df.with_columns(pl.lit(0, dtype=pl.UInt32).alias("min_freq")) Next, we'll iterate through the batches of sub-lists, and accumulate as we go. while(min_expr_list): next_batch, min_expr_list = min_expr_list[0: batch_size], min_expr_list[batch_size:] df=( df .with_columns( pl.fold( pl.col("min_freq"), lambda acc, x: acc + x, next_batch, ) ) ) print(df) shape: (24, 3) ┌────────┬──────────┬──────────┐ │ index ┆ price ┆ min_freq │ │ --- ┆ --- ┆ --- │ │ u32 ┆ f64 ┆ u32 │ ╞════════╪══════════╪══════════╡ │ 0 ┆ 46405.49 ┆ 0 │ │ 1 ┆ 46407.36 ┆ 0 │ │ 2 ┆ 46005.43 ┆ 1 │ │ 3 ┆ 46174.0 ┆ 0 │ │ 4 ┆ 30171.32 ┆ 15 │ │ 5 ┆ 30457.01 ┆ 0 │ │ 6 ┆ 30397.12 ┆ 0 │ │ 7 ┆ 30373.53 ┆ 2 │ │ 8 ┆ 47444.11 ┆ 0 │ │ 9 ┆ 46461.14 ┆ 0 │ │ 10 ┆ 46293.38 ┆ 0 │ │ 11 ┆ 46287.97 ┆ 0 │ │ 12 ┆ 30670.51 ┆ 0 │ │ 13 ┆ 30616.18 ┆ 1 │ │ 14 ┆ 30625.98 ┆ 0 │ │ 15 ┆ 30529.9 ┆ 4 │ │ 16 ┆ 46248.84 ┆ 0 │ │ 17 ┆ 45744.77 ┆ 1 │ │ 18 ┆ 45834.39 ┆ 0 │ │ 19 ┆ 45787.0 ┆ 0 │ │ 20 ┆ 30101.07 ┆ 1 │ │ 21 ┆ 30281.89 ┆ 0 │ │ 22 ┆ 29967.07 ┆ 19 │ │ 23 ┆ 30042.09 ┆ 0 │ └────────┴──────────┴──────────┘ Problems with rolling_min when order is 1,000 or more and one more side problem. when increasing _order to more than 1000 it seems it doesn't work. is there any limitation in source code? I generated datasets of 50 million random numbers and tested the rolling_min for order sizes of 1,500 .. and found no problems. Indeed, I replicated the algorithm using a rolling slice and found no errors. But I have a hunch about what might be happening. Let's start with this dataset of 10 records: df = pl.DataFrame( { "col1": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] } ) df shape: (10, 1) ┌──────┐ │ col1 │ │ --- │ │ i64 │ ╞══════╡ │ 1 │ │ 2 │ │ 3 │ │ 4 │ │ 5 │ │ 6 │ │ 7 │ │ 8 │ │ 9 │ │ 10 │ └──────┘ If we set _order = 8 and run the algorithm, we get a result. _order = 8 df = df.with_columns( pl.col("col1") .rolling_min(window_size=(2 * _order) + 1, min_periods=(_order + 2), center=True) .alias("rolling_min") ) df shape: (10, 2) ┌──────┬─────────────┐ │ col1 ┆ rolling_min │ │ --- ┆ --- │ │ i64 ┆ i64 │ ╞══════╪═════════════╡ │ 1 ┆ null │ │ 2 ┆ 1 │ │ 3 ┆ 1 │ │ 4 ┆ 1 │ │ 5 ┆ 1 │ │ 6 ┆ 1 │ │ 7 ┆ 1 │ │ 8 ┆ 1 │ │ 9 ┆ 1 │ │ 10 ┆ null │ └──────┴─────────────┘ However, if we set _order=9, we get all null values: shape: (10, 2) ┌──────┬─────────────┐ │ col1 ┆ rolling_min │ │ --- ┆ --- │ │ i64 ┆ i64 │ ╞══════╪═════════════╡ │ 1 ┆ null │ │ 2 ┆ null │ │ 3 ┆ null │ │ 4 ┆ null │ │ 5 ┆ null │ │ 6 ┆ null │ │ 7 ┆ null │ │ 8 ┆ null │ │ 9 ┆ null │ │ 10 ┆ null │ └──────┴─────────────┘ Is this what you're seeing? The reason for the null values is the min_period value. We set min_period = _order + 2, which in this case is 11. However, there are only 10 values in the dataset. 
Thus, we get all null values. Perhaps this is what is happening in your data?
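For very large orders, one hedged workaround (a sketch, not from the original answer) is to cap min_periods at the frame height so a window is never required to contain more points than the dataset has; note this slightly changes which windows count as "complete":

import polars as pl

def is_rolling_min(df: pl.DataFrame, _order: int, col: str = "col1") -> pl.Expr:
    # cap min_periods at the number of rows so huge orders don't force an all-null column
    min_periods = min(_order + 2, df.height)
    return (
        pl.col(col)
        .rolling_min(window_size=2 * _order + 1, min_periods=min_periods, center=True)
        == pl.col(col)
    )

# usage sketch: df.with_columns(is_rolling_min(df, 1500).alias("is_min_1500"))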
4
4
72,294,299
2022-5-18
https://stackoverflow.com/questions/72294299/multiple-top-level-packages-discovered-in-a-flat-layout
I am trying to install a library from the source that makes use of Poetry, but I get this error error: Multiple top-level packages discovered in a flat-layout: ['tulips', 'fixtures']. To avoid accidental inclusion of unwanted files or directories, setuptools will not proceed with this build. If you are trying to create a single distribution with multiple packages on purpose, you should not rely on automatic discovery. Instead, consider the following options: 1. set up custom discovery (`find` directive with `include` or `exclude`) 2. use a `src-layout` 3. explicitly set `py_modules` or `packages` with a list of names To find more information, look for "package discovery" on setuptools docs What do I need to do to fix it?
Based on this comment on a GitHub issue, adding the following lines to your pyproject.toml might solve your problem: [tool.setuptools] py-modules = [] (For my case, the other workaround provided in that comment, i.e. adding py_modules=[] as a keyword argument to the setup() function in setup.py worked) See Package Discovery and Namespace Packages for additional information.
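For reference, a minimal setup.py sketch of that second workaround might look like this (the project name is just a placeholder, not taken from the question):

from setuptools import setup

setup(
    name="my-project",   # placeholder name
    py_modules=[],       # tell setuptools not to auto-discover top-level modules/packages
)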
88
92
72,345,536
2022-5-23
https://stackoverflow.com/questions/72345536/how-to-avoid-mypy-checking-explicitly-excluded-but-imported-modules-without-ma
In the following MWE, I have two files/modules: main.py which is and should be checked with mypy and importedmodule.py which should not be type checked because it is autogenerated. This file is autogenerated, I don't want to add type:ignore. MyPy Command $ mypy main.py --exclude '.*importedmodule.*' $ mypy --version mypy 0.931 main.py """ This should be type checked """ from importedmodule import say_hello greeting = say_hello("Joe") print(greeting) importedmodule.py """ This module should not be checked in mypy, because it is excluded """ def say_hello(name: str) -> str: # This function is imported and called from my type checked code return f"Hello {name}!" def return_an_int() -> int: # ok, things are obviously wrong here but mypy should ignore them # also, I never expclitly imported this function return "this is a str, not an int" # <-- this is line 14 referenced in the mypy error message But MyPy complains about the function that is not even imported in main.py: importedmodule.py:14: error: Incompatible return value type (got "str", expected "int") Found 1 error in 1 file (checked 1 source file) What is wrong about my exclude?
Here is SUTerliakov's comment on your question written as an answer. In the pyproject.toml file you can insert the following below your other mypy config [[tool.mypy.overrides]] module = "importedmodule" ignore_errors = true With this config you will ignore all errors coming from the mentioned module. By using a wildcard, you can also ignore all modules in a directory: [[tool.mypy.overrides]] module = "importedpackage.*" ignore_errors = true Related Documentation: https://mypy.readthedocs.io/en/stable/config_file.html#using-a-pyproject-toml-file
12
10
72,295,812
2022-5-18
https://stackoverflow.com/questions/72295812/python-match-case-by-type-of-value
I came across a weird issue while using the new match/case syntax in Python3.10. The following example seems like it should work, but throws an error: values = [ 1, "hello", True ] for v in values: match type(v): case str: print("It is a string!") case int: print("It is an integer!") case bool: print("It is a boolean!") case _: print(f"It is a {type(v)}!") $ python example.py File "/.../example.py", line 9 case str: ^^^ SyntaxError: name capture 'str' makes remaining patterns unreachable It is mentioning that the first case (the value str) will always result in True. Wondering if there is an alternative to this other than converting the type to a string.
Rather than match type(v), match v directly: values = [ 1, "hello", True, ] for v in values: match v: case str(): print("It is a string!") case bool(): print("It is a boolean!") case int(): print("It is an integer!") case _: print(f"It is a {type(v)}!") Note that I've swapped the order of bool() and int() here, so that True being an instance of int doesn't cause issues. This is a class pattern match.
43
50
72,360,442
2022-5-24
https://stackoverflow.com/questions/72360442/pydantic-transform-a-value-before-it-is-assigned-to-a-field
I have the following model class Window(BaseModel): size: tuple[int, int] and I would like to instantiate it like this: fields = {'size': '1920x1080'} window = Window(**fields) Of course this fails since the value of 'size' is not of the correct type. However, I would like to add logic so that the value is split at x, i.e.: def transform(raw: str) -> tuple[int, int]: x, y = raw.split('x') return int(x), int(y) Does Pydantic support this?
Pydantic 2.x (edit) Pydantic 2.0 introduced the field_validator decorator which lets you implement such a behaviour in a very simple way. Given the original parsing function: from pydantic import BaseModel, field_validator class Window(BaseModel): size: tuple[int, int] @field_validator("size", mode="before") @classmethod def transform(cls, raw: str) -> tuple[int, int]: x, y = raw.split("x") return int(x), int(y) Note: The validator method is a class method, as denoted by the cls first argument. Implementing it as an instance method (with self) will raise an error. The mode="before" in the decorator is critical here, as expected this is what makes the method run before checking "size" is a tuple. Pydantic 1.x (original answer) You can implement such a behaviour with pydantic's validator. Given your predefined function: def transform(raw: str) -> tuple[int, int]: x, y = raw.split('x') return int(x), int(y) You can implement it in your class like this: from pydantic import BaseModel, validator class Window(BaseModel): size: tuple[int, int] _extract_size = validator('size', pre=True, allow_reuse=True)(transform) Note the pre=True argument passed to the validator. It means that it will be run before the default validator that checks if size is a tuple. Now: fields = {'size': '1920x1080'} window = Window(**fields) print(window) # output: size=(1920, 1080) Note that after that, you won't be able to instantiate your Window with a tuple for size. fields2 = {'size': (800, 600)} window2 = Window(**fields2) # AttributeError: 'tuple' object has no attribute 'split' In order to overcome that, you could simply bypass the function if a tuple is passed by altering slightly your code: Pydantic 2.x class Window(BaseModel): size: tuple[int, int] @field_validator("size", mode="before") def transform(cls, raw: str | tuple[int, int]) -> tuple[int, int]: if isinstance(raw, tuple): return raw x, y = raw.split("x") return int(x), int(y) Pydantic 1.x def transform(raw: str | tuple[int, int]) -> tuple[int, int]: if isinstance(raw, tuple): return raw x, y = raw.split('x') return int(x), int(y) class Window(BaseModel): size: tuple[int, int] _extract_size = validator('size', pre=True, allow_reuse=True)(transform) Which should give: fields2 = {'size': (800, 600)} window2 = Window(**fields2) print(window2) # output: size:(800, 600)
18
30
72,298,911
2022-5-19
https://stackoverflow.com/questions/72298911/where-to-locate-virtual-environment-installed-using-poetry-where-to-find-poetr
I installed Poetry using the following command: (Invoke-WebRequest -Uri https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py -UseBasicParsing).Content | python - To know more about it, refer to this. Then I wanted to create a virtual environment, which I did using the following command: poetry config virtualenvs.in-project true To know more about it, refer to this. But after doing this, I can't see any .venv (virtual environment) folder. To check if the virtual environment setting is there, we use the following command: poetry config virtualenvs.in-project and if the above command returns true, it implies you have it. I'm getting true, but I also can't see the virtual environment at the location mentioned in the docs. Could anyone tell me how to locate the virtual environment now?
There are 2 commands that can find where the virtual environment is located. poetry show -v The first line of this command will tell you where the virtual environment is located. And the rest will tell you which packages are installed in it. poetry env info -p The above command will give you just the location of the virtual environment.
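If you need that path from Python (for example in a helper script), a small sketch along these lines should work, assuming poetry is on your PATH:

import subprocess

venv_path = subprocess.run(
    ["poetry", "env", "info", "-p"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(venv_path)  # e.g. the project's .venv directory when virtualenvs.in-project is true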
35
58
72,343,232
2022-5-23
https://stackoverflow.com/questions/72343232/pip-install-confluent-kafka-gives-error-in-mac
When I tried pip install confluent-kafka, I got the following error: #include <librdkafka/rdkafka.h> ^~~~~~~~~~~~~~~~~~~~~~ 1 error generated. error: command '/usr/bin/gcc' failed with exit code 1 I'm using Python 3.9 and macOS Monterey.
Install the librdkafka library: brew install librdkafka Set the environment variables: export C_INCLUDE_PATH=/usr/local/Cellar/librdkafka/2.2.0/include export LIBRARY_PATH=/usr/local/Cellar/librdkafka/2.2.0/lib Then you can install the Python client with pip: pip install confluent-kafka
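A quick sanity check after installing (a sketch; the exact version numbers will differ on your machine):

import confluent_kafka

print(confluent_kafka.version())     # version of the Python client
print(confluent_kafka.libversion())  # version of the underlying librdkafka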
4
12
72,352,528
2022-5-23
https://stackoverflow.com/questions/72352528/how-to-fix-winerror-206-the-filename-or-extension-is-too-long-error
I'm facing this error while installing the setup for the TensorFlow Object Detection API. How do I fix this error? Error: Could not install packages due to an OSError: [WinError 206] The filename or extension is too long:
Error: Could not install packages due to an OSError: [WinError 206] The filename or extension is too long: To fix this error on your Windows machine, open regedit and navigate to Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem, then edit LongPathsEnabled and set its value from 0 to 1.
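The same registry change can also be scripted, for example with Python's winreg module; this sketch assumes it is run from an elevated (administrator) prompt:

import winreg

key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Control\FileSystem",
    0,
    winreg.KEY_SET_VALUE,
)
winreg.SetValueEx(key, "LongPathsEnabled", 0, winreg.REG_DWORD, 1)  # enable long paths
winreg.CloseKey(key)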
12
24
72,360,709
2022-5-24
https://stackoverflow.com/questions/72360709/how-to-serialize-custom-type-that-extends-builtin-type-in-pydantic
currently I'm working with FastAPI and pydantic as serializer. Problem is, we're using snowflake id on the server side, which means we need to convert those ids to string before sending to client (javascript) because the id is larger than JS's MAX SAFE INTEGER. So I tried to create a new class which extends python's int type and customize how it will be serialized and deserialized. Here's my code: class SnowflakeId(int): @classmethod def __get_validators__(cls): yield cls.validate @classmethod def validate(cls, v: str): return int(v) @classmethod def __modify_schema__(cls, field_schema: dict) -> None: field_schema['type'] = 'string' And here is the model: class BaseModel(pydantic.BaseModel): __abstract__ = True id: SnowflakeId class Config: orm_mode = True arbitrary_types_allowed = True json_encoders = { SnowflakeId: lambda v: str(v) } alias_generator = camelize allow_population_by_field_name = True It works fine when deserializing from json string into int id, however, when it comes to the serialization, the output still is integer. I want it to serialize the id into string also, is it possible?
Yes it is! json_encoders is a good try, however under the hood pydantic calls json.dumps. So for serializable types (like your SnowflakeId) it won't care about additional json_encoders. What you can do is to override dumps method: def my_dumps(v, *, default): for key, value in v.items(): if isinstance(value, SnowflakeId): v[key] = str(value) else: v[key] = value return json.dumps(v) class BaseModel(pydantic.BaseModel): id: SnowflakeId class Config: json_dumps = my_dumps And let validate return SnowflakeId: class SnowflakeId(int): ... @classmethod def validate(cls, v: str): return cls(v) m = BaseModel(id="123") print(m.json()) # {"id": "123"}
9
4
72,306,836
2022-5-19
https://stackoverflow.com/questions/72306836/airflow-branch-operator-and-task-group-invalid-task-ids
I have a simple dag that uses a branch operator to check if y is False. If it is, the dag is supposed to move on to the say_goodbye task group. If True, it skips and goes to finish_dag_step. Here's the dag: def which_step() -> str: y = False if not y: return 'say_goodbye' else: return 'finish_dag_step' with DAG( 'my_test_dag', start_date = datetime(2022, 5, 14), schedule_interval = '0 0 * * *', catchup = True) as dag: say_hello = BashOperator( task_id = 'say_hello', retries = 3, bash_command = 'echo "hello world"' ) run_which_step = BranchPythonOperator( task_id = 'run_which_step', python_callable = which_step, retries = 3, retry_exponential_backoff = True, retry_delay = timedelta(seconds = 5) ) with TaskGroup('say_goodbye') as say_goodbye: for i in range(0,2): step = BashOperator( task_id = 'step_' + str(i), retries = 3, bash_command = 'echo "goodbye world"' ) step finish_dag_step = BashOperator( task_id = 'finish_dag_step', retries = 3, bash_command = 'echo "dag is finished"' ) say_hello >> run_which_step run_which_step >> say_goodbye >> finish_dag_step run_which_step >> finish_dag_step finish_dag_step I get the following errors when the dag hits run_which_step: I don't understand what's causing this. What is going on?
You can't create task dependencies to a TaskGroup. Therefore, you have to refer to the tasks by task_id, which is the TaskGroup's name and the task's id joined by a dot (task_group.task_id). Your branching function should return something like def branch(): if condition: return [f'task_group.task_{i}' for i in range(0,2)] return 'default' But instead of returning a list of task ids in such way, probably the easiest is to just put a DummyOperator upstream of the TaskGroup. It'd effectively act as an entrypoint to the whole group.
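A sketch of that DummyOperator-as-entrypoint idea, reusing the operators defined in the question (the task id 'enter_say_goodbye' is an illustrative assumption, not part of the original answer):

from airflow.operators.dummy import DummyOperator

def which_step() -> str:
    y = False
    # return the id of a single task, not of the TaskGroup itself
    return 'enter_say_goodbye' if not y else 'finish_dag_step'

enter_say_goodbye = DummyOperator(task_id='enter_say_goodbye')

# note: you may also want trigger_rule='none_failed' on finish_dag_step so it is not
# skipped when the branch bypasses one of its upstream paths
say_hello >> run_which_step
run_which_step >> enter_say_goodbye >> say_goodbye >> finish_dag_step
run_which_step >> finish_dag_step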
4
7
72,338,356
2022-5-22
https://stackoverflow.com/questions/72338356/how-to-show-values-in-pandas-pie-chart
I would like to visualize the amount of laps a certain go-kart has driven within a pie chart. To achive this i would like to count the amount of laptime groupedby kartnumber. I found there are two ways to create such a pie chart: 1# df.groupby('KartNumber')['Laptime'].count().plot.pie() 2# df.groupby(['KartNumber']).count().plot(kind='pie', y='Laptime') print(df) print(df) HeatNumber NumberOfKarts KartNumber DriverName Laptime 0 334 11 5 Monique 53.862 1 334 11 5 Monique 59.070 2 334 11 5 Monique 47.832 3 334 11 5 Monique 47.213 4 334 11 5 Monique 51.975 ... ... ... ... ... ... 4053 437 2 20 luuk 39.678 4054 437 2 20 luuk 39.872 4055 437 2 20 luuk 39.454 4056 437 2 20 luuk 39.575 4057 437 2 20 luuk 39.648 Output not with plot: KartNumber 1 203 10 277 11 133 12 244 13 194 14 172 15 203 16 134 17 253 18 247 19 240 2 218 20 288 21 14 4 190 5 314 6 54 60 55 61 9 62 70 63 65 64 29 65 53 66 76 67 42 68 28 69 32 8 49 9 159 None 13 As you can see i have the kartnumbers and count of laptimes. But i would like to show the count of laptimes within the pie chart(or legend). I tried using autopct but couldnt get it working properly. Does anyone knows how to achive my desired situation? Edit: For more information on this dataset please see: How to get distinct rows from pandas dataframe?
Complete answer: autopct=lambda x: '{:.0f}'.format(x * df['Laptime'].count() / 100)
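Put together with the grouped plot from the question, a sketch of the full call could look like this (column names as in the question):

import matplotlib.pyplot as plt

counts = df.groupby('KartNumber')['Laptime'].count()
counts.plot.pie(autopct=lambda pct: '{:.0f}'.format(pct * counts.sum() / 100))
plt.show()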
4
1
72,352,491
2022-5-23
https://stackoverflow.com/questions/72352491/how-to-plot-errorbars-on-seaborn-barplot
I have the following dataframe: data = {'Value':[6.25, 4.55, 4.74, 1.36, 2.56, 1.4, 3.55, 3.21, 3.2, 3.65, 3.45, 3.86, 13.9, 10.3, 15], 'Name':['Peter', 'Anna', 'Luke', 'Peter', 'Anna', 'Luke', 'Peter', 'Anna', 'Luke', 'Peter', 'Anna', 'Luke', 'Peter', 'Anna', 'Luke'], 'Param': ['Param1', 'Param1', 'Param1', 'Param2', 'Param2', 'Param2', 'Param3', 'Param3', 'Param3', 'Param4', 'Param4', 'Param4', 'Param5', 'Param5', 'Param5'], 'error': [2.55, 1.24, 0, 0.04, 0.97, 0, 0.87, 0.7, 0, 0.73, 0.62, 0, 0, 0, 0]} df = pd.DataFrame(data) I'd like to add errorbars (pre-defined in the error column) to the bar plot, but I can't seem to get the x-coordinates right? It shows errorbars for Param5 but there are no errors for Param5? Also for Luke, there are no errors, but in Param1 an errorbar is plotted. plt.figure() ax = sns.barplot(x = 'Param', y = 'Value', data = df, hue = 'Name', palette = sns.color_palette('CMRmap_r', n_colors = 3)) x_coords = [p.get_x() + 0.5*p.get_width() for p in ax.patches] y_coords = [p.get_height() for p in ax.patches] plt.errorbar(x=x_coords, y=y_coords, yerr=df["error"], fmt="none", c= "k")
The bars in ax.patches come ordered by hue value. To get the bars and the dataframe in the same order, the dataframe could be sorted first by Name and then by Param: from matplotlib import pyplot as plt import seaborn as sns import pandas as pd data = {'Value': [6.25, 4.55, 4.74, 1.36, 2.56, 1.4, 3.55, 3.21, 3.2, 3.65, 3.45, 3.86, 13.9, 10.3, 15], 'Name': ['Peter', 'Anna', 'Luke', 'Peter', 'Anna', 'Luke', 'Peter', 'Anna', 'Luke', 'Peter', 'Anna', 'Luke', 'Peter', 'Anna', 'Luke'], 'Param': ['Param1', 'Param1', 'Param1', 'Param2', 'Param2', 'Param2', 'Param3', 'Param3', 'Param3', 'Param4', 'Param4', 'Param4', 'Param5', 'Param5', 'Param5'], 'error': [2.55, 1.24, 0, 0.04, 0.97, 0, 0.87, 0.7, 0, 0.73, 0.62, 0, 0, 0, 0]} df = pd.DataFrame(data) df = df.sort_values(['Name', 'Param']) plt.figure(figsize=(8, 5)) ax = sns.barplot(x='Param', y='Value', data=df, hue='Name', palette='CMRmap_r') x_coords = [p.get_x() + 0.5 * p.get_width() for p in ax.patches] y_coords = [p.get_height() for p in ax.patches] ax.errorbar(x=x_coords, y=y_coords, yerr=df["error"], fmt="none", c="k") plt.show() PS: Note that by default, the columns are sorted alphabetically. If you want to maintain the original order, you can make the column categorical via pd.Categorical(df['Name'], df['Name'].unique()). df = pd.DataFrame(data) df['Name'] = pd.Categorical(df['Name'], df['Name'].unique()) df['Param'] = pd.Categorical(df['Param'], df['Param'].unique()) df = df.sort_values(['Name', 'Param'])
8
11
72,352,801
2022-5-23
https://stackoverflow.com/questions/72352801/migration-from-setup-py-to-pyproject-toml-how-to-specify-package-name
I'm currently trying to move our internal projects away from setup.py to pyproject.toml (PEP-518). I'd like to not use build backend specific configuration if possible, even though I do specify the backend in the [build-system] section by require'ing it. The pyproject.toml files are more or less straight-forward translations of the setup.py files, with the metadata set according to PEP-621, including the dependencies. We are using setuptools_scm for the determination of the version, therefore the version field ends up in the dynamic section. We used to set the packages parameter to setup in our setup.py files, but I couldn't find any corresponding field in pyproject.toml, so I simply omitted it. When building the project using python3 -m build ., I end up with a package named UNKNOWN, even though I have the name field set in the [project] section. It seems that this breaks very early in the build: $ python -m build . * Creating virtualenv isolated environment... * Installing packages in isolated environment... (setuptools, setuptools_scm[toml]>=6.2, wheel) * Getting dependencies for sdist... running egg_info writing UNKNOWN.egg-info/PKG-INFO .... I'm using python 3.8.11 and the following packages: build==0.8.0 distlib==0.3.4 filelock==3.4.1 packaging==21.3 pep517==0.12.0 pip==22.0.4 platformdirs==2.4.0 pyparsing==3.0.9 setuptools==62.1.0 six==1.16.0 tomli==1.2.3 virtualenv==20.14.1 wheel==0.37.1 My (abbreviated) pyproject.toml looks like this: [project] name = "coolproject" dependencies = [ 'pyyaml==5.3', 'anytree==2.8.0', 'pytest' ] dynamic = [ "version" ] [build-system] requires = ["setuptools", "wheel", "setuptools_scm[toml]>=6.2"] [tool.setuptools_scm] Any ideas?
Turning @AKX's comments into an answer so that other people can find it more easily. The problem may be an outdated pip/setuptools on the system. Apparently, version 19.3.1 which I have on my system cannot install a version of setuptools that can handle PEP621 metadata correctly. You cannot require a new pip from within pyproject.toml using the build-system.requires directive. In case you cannot update the system pip, you can always install on a per-user basis: pip install --user pip and you're good to go.
13
3
72,350,835
2022-5-23
https://stackoverflow.com/questions/72350835/how-to-plot-loss-when-using-hugginfaces-trainer
While finetuning a model using HF's trainer. training_args = TrainingArguments(output_dir=data_dir + "test_trainer") metric = load_metric("accuracy") def compute_metrics(eval_pred): logits, labels = eval_pred predictions = np.argmax(logits, axis=-1) return metric.compute(predictions=predictions, references=labels) training_args = TrainingArguments(num_train_epochs=5,per_device_train_batch_size=64,per_device_eval_batch_size=32,output_dir="test_trainer", evaluation_strategy="epoch") trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=val_dataset, compute_metrics=compute_metrics, ) trainer.train() How would I be able to plot the loss in a notebook? (Perhaps Is it possible to get a list of the loss)
It is possible to get a list of losses. You can access the history of logs after training is complete with: trainer.state.log_history
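For example, a minimal plotting sketch based on that list (assuming the default logging, where the periodic training logs are the entries containing a "loss" key):

import matplotlib.pyplot as plt

history = trainer.state.log_history
steps = [entry["step"] for entry in history if "loss" in entry]
losses = [entry["loss"] for entry in history if "loss" in entry]

plt.plot(steps, losses)
plt.xlabel("step")
plt.ylabel("training loss")
plt.show()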
7
4
72,313,080
2022-5-20
https://stackoverflow.com/questions/72313080/how-to-check-for-unit-root-in-panel-data-using-python
I am working on time series analysis and I have sales data (lets call it df_panel as we panel data structure) for 700 individual areas for each month of 2021. e.g. Area Month Sales Area 1 January 1000 Area 1 February 2000 Area 1 Marts 3000 Area 2 January 1000 Area 2 February 2000 Area 2 Marts 1400 Area 3 January 1000 Area 3 February 1200 Area 3 Marts 1400 Normally when working on sales data you use e.g. ADF Testing to check for unit roots in the sales data. I know how to do this in Python for a standard non-panel data structure using e.g. the adfuller function from statsmodels on a dataframe df: adf_test_result = adfuller(df["Sales"])[1] How can I do something similar for my panel data structure, as it consists of 700 individual sales curves (one for each area). The goal is to use Panel Data Regression (Fixed or Random Effects) One approximation could be to sum up my panel data sales curve to one sales curve and do the ADF test on that: adf_test_result = adfuller(df_panel.groupby("Month").sum()["Sales"]) But I think this will greatly overestimate the probability of a unit root in the sales data. A lot of information in the sales data is lost when summing it up like this for 700 individual areas. Another approximation could maybe be to check for unit roots in each individual area and somehow take the mean (?) Not exactly sure what is best here... In R there is package plm with function purtest that implements several testing procedures that have been proposed to test unit root hypotheses with panel data, e.g., "levinlin" for Levin, Lin and Chu (2002), "ips" for Im, Pesaran and Shin (2003), "madwu" for Maddala and Wu (1999), and "hadri" for Hadri (2000). Does anyone know how to estimate the unit root for panel data structures? And how to implement this in Python?
The SAS documentation website HERE tells us that the IPS method uses the average of the ADF test statistics across groups/panels. The ADF test is available from the "statsmodels" package HERE, so you can simply calculate the tau statistics yourself, take the average, and calculate the p-value using a t-test. # p-value for a 2-sided t-test from scipy import stats 2 * stats.t.sf(abs(tau_avg), df=1000) Note that 1000 is just an example of a high number of degrees of freedom.
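A rough sketch of that averaging idea, using the Area/Sales columns from the question (this is only an approximation of IPS: the proper test uses its own moments and critical values rather than a plain t distribution):

import pandas as pd
from scipy import stats
from statsmodels.tsa.stattools import adfuller

def ips_like_pvalue(df_panel: pd.DataFrame, dof: int = 1000) -> float:
    taus = []
    for _, group in df_panel.groupby("Area"):
        # adfuller returns (statistic, pvalue, usedlag, nobs, critical values, icbest)
        taus.append(adfuller(group["Sales"])[0])
    tau_avg = sum(taus) / len(taus)
    return 2 * stats.t.sf(abs(tau_avg), df=dof)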
4
2
72,331,816
2022-5-21
https://stackoverflow.com/questions/72331816/how-to-connect-to-an-existing-firefox-instance-using-selenium-python
Is there any way to open a Firefox browser and then connect to it using selenium? I know this is possible on chrome by launching it in the command line and using --remote-debugging-port argument like this: import subprocess from selenium import webdriver from selenium.webdriver.chrome.options import Options subprocess.Popen('"C:/Program Files (x86)/Google/Chrome/Application/chrome.exe" --remote-debugging-port=9222', shell=True) options = Options() options.add_experimental_option("debuggerAddress", "127.0.0.1:9222") driver = webdriver.Chrome(executable_path=PATH, options=options) Can this be done in firefox? I have been searching and checking questions relating to this for a while now but no luck. The only lead I found is that geckodriver has a --connect-existing argument but I am not sure how to use it. How do you pass arguments to geckodriver and use it in selenium? Any help would be appreciated. If it can't be done please let me know. Thank you EDIT: Okay I have made some progress, I know how to pass geckodriver args to selenium: driver = webdriver.Firefox(service=Service(PATH, service_args=['--marionette-port', '9394', '--connect-existing'])) The problem now is even though i start firefox with a debugger server like this: firefox.exe -marionette -start-debugger-server <PORT> When I run the code it either raises this error message: Traceback (most recent call last): File "c:\Users\maxis\Desktop\Python\Freelance\Application for Opening Web Browsers\browsers\firefox.py", line 107, in <module> driver = webdriver.Firefox(service=Service(PATH, service_args=['--marionette-port', '9394', '--connect-existing'])) File "C:\Users\maxis\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\firefox\webdriver.py", line 180, in __init__ RemoteWebDriver.__init__( File "C:\Users\maxis\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 275, in __init__ self.start_session(capabilities, browser_profile) File "C:\Users\maxis\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 365, in start_session response = self.execute(Command.NEW_SESSION, parameters) File "C:\Users\maxis\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 430, in execute self.error_handler.check_response(response) File "C:\Users\maxis\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 247, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.TimeoutException: Message: No connection could be made because the target machine actively refused it. (os error 10061) or I get multiple popups, that tell me there is an incoming request to Firefox. Even when I click okay, nothing seems to happen.
CMD: C:\Program Files\Mozilla Firefox\ firefox.exe -marionette -start-debugger-server 2828 //only use 2828 Python Script: from selenium import webdriver driver = webdriver.Firefox(executable_path = "YOUR GECKODRIVER PATH", service_args = ['--marionette-port', '2828', '--connect-existing'] ) pageSource = driver.page_source print(pageSource)
6
5
72,325,242
2022-5-20
https://stackoverflow.com/questions/72325242/type-object-base-has-no-attribute-decl-class-registry
I am upgrading a library to a recent version of SQLAlchemy and I am getting this error type object 'Base' has no attribute '_decl_class_registry' On line Base = declarative_base(metadata=metadata) Base._decl_class_registry How can I solve this?
Had the same problem. After upgrading SQLAlchemy, it looks like there was a change in the declarative base internals. Use this instead to accomplish the same thing: Base.registry._class_registry.values()
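For example, to iterate over the mapped classes (a sketch; the registry also holds a few internal bookkeeping entries, hence the hasattr filter):

for entry in Base.registry._class_registry.values():
    if hasattr(entry, "__tablename__"):  # skip internal markers, keep mapped classes
        print(entry)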
8
12
72,363,601
2022-5-24
https://stackoverflow.com/questions/72363601/how-to-interpret-the-package-would-be-ignored-warning-generated-by-setuptools
I work on several python packages that contain data within them. I add them via the MANIFEST.in file, passing include_package_data=True to setup. For example: # MANIFEST.in graft mypackage/plugins graft mypackage/data Up to now, this has worked without warnings as far as I know. However, in setuptools 62.3.0, I get the following message: SetuptoolsDeprecationWarning: Installing 'mypackage.plugins' as data is deprecated, please list it in `packages`. 07:53:53 !! 07:53:53 07:53:53 07:53:53 ############################ 07:53:53 # Package would be ignored # 07:53:53 ############################ 07:53:53 Python recognizes 'mypackage.plugins' as an importable package, however it is 07:53:53 included in the distribution as "data". 07:53:53 This behavior is likely to change in future versions of setuptools (and 07:53:53 therefore is considered deprecated). 07:53:53 07:53:53 Please make sure that 'mypackage.plugins' is included as a package by using 07:53:53 setuptools' `packages` configuration field or the proper discovery methods 07:53:53 (for example by using `find_namespace_packages(...)`/`find_namespace:` 07:53:53 instead of `find_packages(...)`/`find:`). 07:53:53 07:53:53 You can read more about "package discovery" and "data files" on setuptools 07:53:53 documentation page. I get the above warning for pretty much every directory within mypackage that contains data and is included by MANIFEST.in. My goal is to include arbitrary data (which could even include python files in the case of a plugin interface) in a package so that it can be accessed by users who install via wheel or tarball. I would also like that applications built by, e.g., pyinstaller, that pull my package in can easily collect the data with collect_data_files, which for me has worked without any additional setup with the current methodology. What is the proper way to do this going forward?
The TL;DR is that in Python since PEP 420, directories count as packages, even if they don't have a __init__.py file. The main difference is that directories without __init__.py are called "namespace packages". Accordingly, if a project wants to distribute directories without a __init__.py file, it should use packages=find_namespace_packages() (setup.py) or packages = find_namespace: (setup.cfg). Details on how to use those tools can be found on these docs. Doing this change should make the error go away. The MANIFEST.in or the include_package_data=True should be fine.
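A minimal setup.py sketch of that change (the package name here is illustrative):

from setuptools import setup, find_namespace_packages

setup(
    name="mypackage",
    packages=find_namespace_packages(include=["mypackage", "mypackage.*"]),
    include_package_data=True,
)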
20
15
72,309,492
2022-5-19
https://stackoverflow.com/questions/72309492/how-do-i-install-the-same-pip-dependencies-locally-as-are-installed-in-my-cloud
I'm trying to set up a local development environment in VS Code where I'd get code completion for the packages Cloud Composer/Apache Airflow uses. I've been successful so far using a virtual environment (created with python -m venv .venv) and a very minimal requirements.txt file that contains just the Airflow package, installed into the local environment. The file is like this: apache-airflow==1.10.15 And I can install it into my virtual environment by running pip install -r requirements.txt after activating my virtual environment in VS Code, after which I get code completion in VS Code for the quickstart DAG in their docs, the BashOperator: I wanted to get more code completion as I followed more tutorials. For example, following the KubernetesPodOperator tutorial (https://cloud.google.com/composer/docs/how-to/using/using-kubernetes-pod-operator), I get this error, and VS Code doesn't recognize the import: Import "airflow.providers.cncf.kubernetes.operators.kubernetes_pod" could not be resolved Pylance(reportMissingImports) I figured that a good next step would be to install exactly the same PyPI packages into my virtual environment as are running in the Cloud Composer environment. I used the page https://cloud.google.com/composer/docs/concepts/versioning/composer-versions to see which packages were installed: So my requirements.txt file then looked like this: absl-py==1.0.0 alembic==1.5.7 amqp==2.6.1 apache-airflow==1.10.15+composer apache-airflow-backport-providers-apache-beam==2021.3.13 apache-airflow-backport-providers-cncf-kubernetes==2021.3.3 apache-airflow-backport-providers-google==2022.4.1+composer apache-beam==2.37.0 apispec==1.3.3 appdirs==1.4.4 argcomplete==1.12.2 astunparse==1.6.3 attrs==20.3.0 Babel==2.9.0 bcrypt==3.2.0 billiard==3.6.3.0 cached-property==1.5.2 cachetools==4.2.1 cattrs==1.1.2 celery==4.4.7 certifi==2020.12.5 cffi==1.14.5 chardet==4.0.0 click==6.7 cloudpickle==2.0.0 colorama==0.4.4 colorlog==4.0.2 configparser==3.5.3 crcmod==1.7 croniter==0.3.37 cryptography==3.4.6 defusedxml==0.7.1 dill==0.3.1.1 distlib==0.3.1 dnspython==2.1.0 docopt==0.6.2 docutils==0.16 email-validator==1.1.2 fastavro==1.3.4 fasteners==0.17.3 filelock==3.0.12 Flask==1.1.2 Flask-Admin==1.5.4 Flask-AppBuilder==2.3.4 Flask-Babel==1.0.0 Flask-Bcrypt==0.7.1 Flask-Caching==1.3.3 Flask-JWT-Extended==3.25.1 Flask-Login==0.4.1 Flask-OpenID==1.3.0 Flask-SQLAlchemy==2.5.1 flask-swagger==0.2.14 Flask-WTF==0.14.3 flower==0.9.7 funcsigs==1.0.2 future==0.18.2 gast==0.3.3 google-ads==7.0.0 google-api-core==1.31.5 google-api-python-client==1.12.8 google-apitools==0.5.31 google-auth==1.28.0 google-auth-httplib2==0.1.0 google-auth-oauthlib==0.4.3 google-cloud-aiplatform==1.12.1 google-cloud-automl==2.7.2 google-cloud-bigquery==1.28.0 google-cloud-bigquery-datatransfer==3.6.1 google-cloud-bigquery-storage==2.6.3 google-cloud-bigtable==1.7.0 google-cloud-build==2.0.0 google-cloud-container==1.0.1 google-cloud-core==1.6.0 google-cloud-datacatalog==3.7.1 google-cloud-dataplex==0.2.1 google-cloud-dataproc==3.3.1 google-cloud-dataproc-metastore==1.5.0 google-cloud-datastore==1.15.3 google-cloud-dlp==1.0.0 google-cloud-kms==2.11.1 google-cloud-language==1.3.0 google-cloud-logging==2.2.0 google-cloud-memcache==1.3.1 google-cloud-monitoring==2.0.0 google-cloud-os-login==2.6.1 google-cloud-pubsub==2.12.0 google-cloud-pubsublite==1.4.1 google-cloud-redis==2.8.0 google-cloud-resource-manager==1.4.1 google-cloud-secret-manager==1.0.0 google-cloud-spanner==1.19.1 google-cloud-speech==1.3.2 google-cloud-storage==1.36.2 
google-cloud-tasks==2.8.1 google-cloud-texttospeech==1.0.1 google-cloud-translate==1.7.0 google-cloud-videointelligence==1.16.1 google-cloud-vision==1.0.0 google-cloud-workflows==1.6.1 google-crc32c==1.1.2 google-pasta==0.2.0 google-resumable-media==1.2.0 googleapis-common-protos==1.53.0 graphviz==0.16 grpc-google-iam-v1==0.12.3 grpcio==1.44.0 grpcio-gcp==0.2.2 grpcio-status==1.44.0 gunicorn==20.0.4 h5py==2.10.0 hdfs==2.6.0 httplib2==0.17.4 humanize==3.3.0 idna==2.8 importlib-metadata==2.1.1 importlib-resources==1.5.0 iso8601==0.1.14 itsdangerous==1.1.0 Jinja2==2.11.3 json-merge-patch==0.2 jsonschema==3.2.0 Keras-Preprocessing==1.1.2 kombu==4.6.11 kubernetes==11.0.0 lazy-object-proxy==1.4.3 libcst==0.3.17 lockfile==0.12.2 Mako==1.1.4 Markdown==2.6.11 MarkupSafe==1.1.1 marshmallow==2.21.0 marshmallow-enum==1.5.1 marshmallow-sqlalchemy==0.23.1 mock==2.0.0 monotonic==1.5 mypy-extensions==0.4.3 mysqlclient==1.3.14 natsort==7.1.1 numpy==1.19.5 oauth2client==4.1.3 oauthlib==3.1.0 opt-einsum==3.3.0 orjson==3.6.8 overrides==6.1.0 packaging==20.9 pandas==1.1.5 pandas-gbq==0.14.1 pbr==5.8.1 pendulum==1.4.4 pip==20.1.1 pipdeptree==1.0.0 prison==0.1.3 prometheus-client==0.8.0 proto-plus==1.18.1 protobuf==3.15.6 psutil==5.8.0 psycopg2-binary==2.8.6 pyarrow==2.0.0 pyasn1==0.4.8 pyasn1-modules==0.2.8 pycparser==2.20 pydata-google-auth==1.1.0 pydot==1.4.2 Pygments==2.8.1 PyJWT==1.7.1 pymongo==3.11.3 pyOpenSSL==20.0.1 pyparsing==2.4.7 pyrsistent==0.17.3 python-daemon==2.3.0 python-dateutil==2.8.1 python-editor==1.0.4 python-http-client==3.3.4 python-nvd3==0.15.0 python-slugify==4.0.1 python3-openid==3.2.0 pytz==2021.1 pytzdata==2020.1 PyYAML==5.4.1 redis==3.5.3 requests==2.25.1 requests-oauthlib==1.3.0 rsa==4.7.2 scipy==1.4.1 sendgrid==5.6.0 setproctitle==1.2.2 setuptools==57.5.0 six==1.15.0 SQLAlchemy==1.3.20 SQLAlchemy-JSONField==0.9.0 SQLAlchemy-Utils==0.36.8 statsd==3.3.0 tabulate==0.8.9 tenacity==4.12.0 tensorboard==2.2.2 tensorboard-plugin-wit==1.8.1 tensorflow==2.2.0 tensorflow-estimator==2.2.0 termcolor==1.1.0 text-unidecode==1.3 thrift==0.13.0 tornado==5.1.1 typing-extensions==3.7.4.3 typing-inspect==0.6.0 typing-utils==0.1.0 tzlocal==1.5.1 unicodecsv==0.14.1 uritemplate==3.0.1 urllib3==1.26.4 vine==1.3.0 virtualenv==20.4.3 websocket-client==0.58.0 Werkzeug==0.16.1 wheel==0.37.1 wrapt==1.12.1 WTForms==2.3.3 zipp==3.4.1 zope.deprecation==4.4.0 When I tried running pip install -r requirements.txt again, I get the following error: ERROR: Could not find a version that satisfies the requirement apache-airflow==1.10.15+composer (from versions: 1.10.9-bin, 1.8.1, 1.8.2rc1, 1.8.2, 1.9.0, 1.10.0, 1.10.1b1, 1.10.1rc2, 1.10.1, 1.10.2b2, 1.10.2rc1, 1.10.2rc2, 1.10.2rc3, 1.10.2, 1.10.3b1, 1.10.3b2, 1.10.3rc1, 1.10.3rc2, 1.10.3, 1.10.4b2, 1.10.4rc1, 1.10.4rc2, 1.10.4rc3, 1.10.4rc4, 1.10.4rc5, 1.10.4, 1.10.5rc1, 1.10.5, 1.10.6rc1, 1.10.6rc2, 1.10.6, 1.10.7rc1, 1.10.7rc2, 1.10.7rc3, 1.10.7, 1.10.8rc1, 1.10.8, 1.10.9rc1, 1.10.9, 1.10.10rc1, 1.10.10rc2, 1.10.10rc3, 1.10.10rc4, 1.10.10rc5, 1.10.10, 1.10.11rc1, 1.10.11rc2, 1.10.11, 1.10.12rc1, 1.10.12rc2, 1.10.12rc3, 1.10.12rc4, 1.10.12, 1.10.13rc1, 1.10.13, 1.10.14rc1, 1.10.14rc2, 1.10.14rc3, 1.10.14rc4, 1.10.14, 1.10.15rc1, 1.10.15, 2.0.0b1, 2.0.0b2, 2.0.0b3, 2.0.0rc1, 2.0.0rc2, 2.0.0rc3, 2.0.0, 2.0.1rc1, 2.0.1rc2, 2.0.1, 2.0.2rc1, 2.0.2, 2.1.0rc1, 2.1.0rc2, 2.1.0, 2.1.1rc1, 2.1.1, 2.1.2rc1, 2.1.2, 2.1.3rc1, 2.1.3, 2.1.4rc1, 2.1.4rc2, 2.1.4, 2.2.0b1, 2.2.0b2, 2.2.0rc1, 2.2.0, 2.2.1rc1, 2.2.1rc2, 2.2.1, 2.2.2rc1, 2.2.2rc2, 2.2.2, 2.2.3rc1, 2.2.3rc2, 2.2.3, 2.2.4rc1, 
2.2.4, 2.2.5rc1, 2.2.5rc2, 2.2.5rc3, 2.2.5, 2.3.0b1, 2.3.0rc1, 2.3.0rc2, 2.3.0) ERROR: No matching distribution found for apache-airflow==1.10.15+composer When I looked at the PyPI website, I noticed that some of the packages that have "+composer" in their name in requirements.txt don't exist in PyPI. For example, apache-airflow==1.10.15+composer and apache-airflow-backport-providers-google==2022.4.1+composer don't exist there. Does this mean that those packages are not publicly available? I'm relatively new to Python and Airflow, so these are just some ideas I've been thinking of since I encountered this issue. I may be on the wrong track. I'd appreciate any help I can get here in installing these packages into my local virtual environment, or installing some other packages that would achieve my goal of being able to do local development, with code completion, on DAGs. Here's the script I used to create my environment for this test, for reference: #!/bin/bash gcloud composer environments create my-environment \ --location us-central1 \ --image-version composer-1.18.8-airflow-1.10.15 # uses Python 3.8.12
So the two incompatibilities in the Cloud Composer dependencies as listed on the official website are apache-airflow and apache-airflow-providers-google (or apache-airflow-backport-providers-google if you are using Cloud Composer v1). What you need to do is to replace these two dependencies with the correct pins. For example, if you are running the composer-2.0.16-airflow-2.2.5 version, which specifies the two dependencies as apache-airflow==2.2.5+composer apache-airflow-providers-google==2022.5.18+composer you need to replace them with apache-airflow==2.2.5 apache-airflow-providers-google==7.0.0 If you are wondering how I came up with the specific version for apache-airflow-providers-google, then what you need to do is to head to the page containing the list of commits included in each release. At the top of each release, you can see the date of the latest commit. The specific package version will then be the one with the latest 'Latest change' prior to the date specified in the original listing on the Cloud Composer version page (in this example that'd be 2022.5.18). Note that for some specific Composer versions, the apache-airflow-providers-google dependency is specified explicitly (e.g. 6.7.0 or 6.8.0). Not sure if the date convention is there by mistake or perhaps it is a convention that we are not aware of.
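If you copy the whole package listing into a requirements.txt, a tiny helper sketch like this can flag the Composer-specific pins that need replacing (purely illustrative; it does not pick the replacement versions for you):

with open("requirements.txt") as fh:
    for line in fh:
        if "+composer" in line:
            print("needs a public PyPI pin:", line.strip())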
7
6
72,366,034
2022-5-24
https://stackoverflow.com/questions/72366034/code-duplication-in-api-design-for-url-route-functions-vs-real-world-object-met
I have code duplication in my API design for the object methods vs. the URL routing functions: # door_model.py class Door: def open(self): # "Door.open" written once... ... # http_api.py (the HTTP server is separated from the real-world object models) @app.route('/api/door/open') # ... written twice def dooropen(): # ... written three times d.open() # ... written four times! d = Door() How to avoid this unnecessary duplication of names in a similar API design? (while keeping a separation between real-world object models vs. HTTP server). Is there a general pattern to avoid unnecessary duplication of names when using an object model (with methods), and URL routes functions? (nearly a Model View Controller pattern) See also Associate methods of an object to API URL routes with Python Flask.
If we declare a route for every model action and do the same things for each (in your case, call the corresponding method with or without parameter), it will duplicate the code. Commonly, people use design patterns (primarily for big projects) and algorithms to avoid code duplications. And I want to show a simple example that defines one generic route and handles all requests in one handler function. Suppose we have the following file structure. application/ ├─ models/ │ ├─ door.py │ ├─ window.py ├─ main.py The prototype of the Door looks like # door.py class Door: def open(self): try: # open the door return 0 except: return 1 def close(self): try: # close the door return 0 except: return 1 def openlater(self, waitseconds=2): print("Waiting for ", waitseconds) try: # wait and open the door return 0 except: return 1 Where I conditionally set exit codes of the C, 0 for success and 1 for error or failure. We must separate and group the model actions into one as they have a common structure. +----------+----------+------------+----------------------+ | API base | model | action | arguments (optional) | +----------+----------+------------+----------------------+ | /api | /door | /open | | | /api | /door | /close | | | /api | /door | /openlater | ?waitseconds=10 | | /api | /window | /open | | | /api | /<model> | /<action> | | +----------+----------+------------+----------------------+ After we separate our groups by usage interface, we can implement a generic handler for each. Generic handler implementation # main.py from flask import Flask, Response, request import json from models.door import Door from models.window import Window app = Flask(__name__) door = Door() window = Window() MODELS = { "door": door, "window": window, } @app.route("/api/<model>/<action>") def handler(model, action): model_ = MODELS.get(model) action_ = getattr(model_, action, None) if callable(action_): try: error = action_(**request.args) if not error: return Response(json.dumps({ "message": "Operation succeeded" }), status=200, mimetype="application/json") return Response(json.dumps({ "message": "Operation failed" }), status=400, mimetype="application/json") except (TypeError, Exception): return Response(json.dumps({ "message": "Invalid parameters" }), status=400, mimetype="application/json") return Response(json.dumps({ "message": "Wrong action" }), status=404, mimetype="application/json") if __name__ == "__main__": app.run() So you can control the actions of the models by using different API paths and query parameters.
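Example calls against the generic route, assuming the app is running locally on port 5000 (the paths follow the table above):

import requests

base = "http://localhost:5000/api"
print(requests.get(f"{base}/door/open").json())
print(requests.get(f"{base}/door/openlater", params={"waitseconds": 10}).json())
print(requests.get(f"{base}/door/unknown").json())  # -> "Wrong action", HTTP 404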
4
3
72,320,478
2022-5-20
https://stackoverflow.com/questions/72320478/pyinstaller-every-joblib-parallel-call-creates-a-new-tkinter-window-on-macos
Here is the code which can reproduce the problem (it is just for reproducing the problem, so what it does is a bit meaningless): from joblib import Parallel, delayed import tkinter as tk def f(): print('func call') if __name__ == '__main__': root = tk.Tk() button = tk.Button(root, command=lambda: Parallel(n_jobs=-1, backend='threading')(delayed(f)() for _ in range(1)), text='func') button.pack() root.mainloop() The above code runs perfectly in my IDE, but once I create an executable with PyInstaller, it misbehaves. The command line to create the executable is as below: pyinstaller -F main.py The expected behavior is that every time I press the button in the tkinter window, a func call string should be printed in the terminal. But when I run the executable file, every time I press the button, besides the printing in the terminal, a new tkinter window is created. I have also tried to build the executable on Windows with the same command. The executable file runs fine on Windows (no new tkinter window is created when pressing the button). Only the macOS platform has this problem. How should I fix this? Here is the platform on which I have the problem: CPU: ARM64 (M1) Python: 3.10 joblib: 1.1.0 pyinstaller 5.0.1
The problem is solved by adding multiprocessing.freeze_support() to the code. The fixed version of code is as below: import multiprocessing multiprocessing.freeze_support() from joblib import Parallel, delayed import tkinter as tk def f(): print('func called') if __name__ == '__main__': root = tk.Tk() button = tk.Button(root, command=lambda: Parallel(n_jobs=-1, backend='threading')(delayed(f)() for _ in range(1)), text='func') button.pack() root.mainloop() Thank you rokm for answering my question on GitHub, here is the URL to rokm's answer: https://github.com/pyinstaller/pyinstaller/issues/6852#issuecomment-1138269358.
4
4
72,361,314
2022-5-24
https://stackoverflow.com/questions/72361314/504-gateway-timeout-only-in-django-function
I have a very mind-boggling problem and my team has struggled to solve it. We do have it narrowed down but not 100%. Introduction We are trying to implement LTI in a Django app with the Vue frontend. To fetch the token from the URL the backend makes a POST request to the URL with data and should receive a token or error if it's expired or invalid. Design Browser ---- POST REQUEST --> view function on Server (Django) --- POST REQUEST --> Auth URL Problem The post request that Django view makes times out with 504 Gateway Timeout. This could be normal if the server takes a lot of time. However, increasing the time did not help and checked Auth URL with POSTMAN it worked fine and was not down. What I have tried We decided to debug or diagnose this issue that why a code block works in a function when it is called through a shell but not when it is called by a POST request does not. Eliminated the front end and used POSTMAN to send a POST request to my Django server -- timeout on the Auth URL Called the same function using Django shell -- worked Copied the code into a separate python file outside Django -- worked Used same virtual env for all above What it appears to be When a POST function is called and another post request is made inside it then it times out. Please note: If I make a POST request in the same situation with invalid data (say missing grant_type) then it does not time out. Code Block auth_request = { "grant_type": "client_credentials", "client_assertion_type": "urn:ietf:params:oauth:client-assertion-type:jwt-bearer", "client_assertion": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6IlJVbzNJWWlSR0ZDYUhNNEg0S2lid095enAtRU9KWlAweXkwd0g3bk5VOEEifQ.eyJpc3MiOiI2NjkxNmZmYy03YjE5LTQ0MWEtYjE5Zi0yOGQxMzVmYjZjOWYiLCJzdWIiOiI2NjkxNmZmYy03YjE5LTQ0MWEtYjE5Zi0yOGQxMzVmYjZjOWYiLCJhdWQiOiJodHRwczovL2RldmVsb3Blci5ibGFja2JvYXJkLmNvbS9hcGkvdjEvZ2F0ZXdheS9vYXV0aDIvand0dG9rZW4iLCJpYXQiOjE2NTMyODcyNzMsImV4cCI6MTY1MzI4NzMzOCwianRpIjoibHRpLXNlcnZpY2UtdG9rZW4tMDQyZTZhNjctNDA2My00YmQ1LWI2NmQtNTM4YjU2ZTllM2Q1In0.8Jaou965cPTCFv-7yP9iIlH8mMgQjAi0AR2li0KwCcRuHsRZ_1OpbE83bZ06RMXhbjA4crRqTI4zMi8aNfq16Mkg4lXoPj8JiJW7q8b_ZQ1rLZvIojmabehYjpyscHRitFPLibfTYF2mCjUyHqwPgnFRLNrHIVuSvM0BiK56PuYK6SiiSjxu2U3bmJqOHNW2mqx2YYfkaXx2u7ru6CKTiL3KBGzFPYjCUwwWNBdbz4R0g0aHK_l-hhA3oi_pCDZOyqdnyCmGAj5SpZbuZOqrZbQBrqPoEFtXdNDPpHGGwW7IUbbmCtsmE2NqQiYt6snmK-1pbxsLxE0mXrpDqASh4A", "scope": "https://purl.imsglobal.org/spec/lti-nrps/scope/contextmembership.readonly", } response = requests.post( "https://developer.blackboard.com/api/v1/gateway/oauth2/jwttoken", data=auth_request, ) response = response.json() print(response)
After days of trying to figure it out, we deployed the project with production settings and it worked. Upon investigating why it was not working on staging, we found the following: the front end sends a POST request to the back end; the back end then encodes the data using the private key and sends it to the 3rd-party server; because the 3rd-party server needs to validate our JWKS, it sends another request to a URL on our staging server. This third request was never serviced because the server only had one thread. Adding a threads parameter to gunicorn did the trick: gunicorn ripple.wsgi --reload --log-level debug --threads 4
4
3
72,369,250
2022-5-24
https://stackoverflow.com/questions/72369250/weird-datetime-utcnow-bug
Consider this simple Python script: $ cat test_utc.py from datetime import datetime for i in range(10_000_000): first = datetime.utcnow() second = datetime.utcnow() assert first <= second, f"{first=} {second=} {i=}" When I run it from the shell like python test_utc.py it finishes w/o errors, just as expected. However, when I run it in a Docker container the assertion fails: $ docker run -it --rm -v "$PWD":/code -w /code python:3.10.4 python test_utc.py Traceback (most recent call last): File "/code/test_utc.py", line 7, in <module> assert first <= second, f"{first=} {second=} {i=}" AssertionError: first=datetime.datetime(2022, 5, 24, 19, 5, 1, 861308) second=datetime.datetime(2022, 5, 24, 19, 5, 1, 818270) i=1818860 How is it possible? P.S. a colleague has reported that increasing the range parameter to 100_000_000 makes it fail in the shell on their mac as well (but not for me).
utcnow refers to now refers to today refers to fromtimestamp refers to time, which says: While this function normally returns non-decreasing values, it can return a lower value than a previous call if the system clock has been set back between the two calls. The utcnow code also shows its usage of time: def utcnow(cls): "Construct a UTC datetime from time.time()." t = _time.time() return cls.utcfromtimestamp(t) Such system clock updates are also why monotonic exists, which says: Return the value (in fractional seconds) of a monotonic clock, i.e. a clock that cannot go backwards. The clock is not affected by system clock updates. And utcnow has no such guarantee. Your computer doesn't have a perfect clock, every now and then it synchronizes via the internet with more accurate clocks, possibly adjusting it backwards. See for example answers here. And looks like Docker makes it worse, see for example Addressing Time Drift in Docker Desktop for Mac from the Docker blog. Excerpt: macOS doesn’t have native container support. The helper VM has its own internal clock, separate from the host’s clock. When the two clocks drift apart then suddenly commands which rely on the time, or on file timestamps, may start to behave differently Lastly, you can increase your chance to catch a backwards update when one occurs. If one occurs not between getting first and second but between second and the next first, you'll miss it! Below code fixes that issue and is also micro-optimized (including removing the utcnow middle man) so it checks faster / more frequently: import time from itertools import repeat def function(): n = 10_000_000 reps = repeat(1, n) now = time.time first = now() for _ in reps: second = now() assert first <= second, f"{first=} {second=} i={n - sum(reps)}" first = second function()
7
9
72,367,342
2022-5-24
https://stackoverflow.com/questions/72367342/selecting-items-on-a-matrix-based-on-indexes-given-by-an-array
Consider this matrix: [0.9, 0.45, 0.4, 0.35], [0.4, 0.8, 0.3, 0.25], [0.5, 0.45, 0.9, 0.35], [0.2, 0.18, 0.8, 0.1], [0.6, 0.45, 0.4, 0.9] and this list: [0,1,2,3,3] I want to create a list that looks like the following: [0.9, 0.8, 0.9, 0.1, 0.9] To clarify, for each row, I want the element of the matrix whose column index is given by the corresponding entry of the list. How can I accomplish this?
Zip the two lists together as below: a=[[0.9, 0.45, 0.4, 0.35],[0.4, 0.8, 0.3, 0.25],[0.5, 0.45, 0.9, 0.35],[0.2, 0.18, 0.8, 0.1],[0.6, 0.45, 0.4, 0.9]] b=[0,1,2,3,3] [i[j] for i,j in zip(a,b)] Result [0.9, 0.8, 0.9, 0.1, 0.9] zip(a, b) pairs each row i of the matrix with the corresponding index j from the list, and the comprehension then picks the j-th element of each row.
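If the matrix is (or can be) a NumPy array, the same selection can be written with integer (fancy) indexing:

import numpy as np

a = np.array([[0.9, 0.45, 0.4, 0.35],
              [0.4, 0.8, 0.3, 0.25],
              [0.5, 0.45, 0.9, 0.35],
              [0.2, 0.18, 0.8, 0.1],
              [0.6, 0.45, 0.4, 0.9]])
b = [0, 1, 2, 3, 3]

print(a[np.arange(len(b)), b])  # [0.9 0.8 0.9 0.1 0.9]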
4
3
72,362,774
2022-5-24
https://stackoverflow.com/questions/72362774/understanding-gradient-computation-using-backward-in-pytorch
I'm trying to understand the basic pytorch autograd system: x = torch.tensor(10., requires_grad=True) print('tensor:',x) x.backward() print('gradient:',x.grad) output: tensor: tensor(10., requires_grad=True) gradient: tensor(1.) since x is a scalar constant and no function is applied to it, I expected 0. as the gradient output. Why is the gradient 1. instead?
Whenever you call value.backward(), you compute the derivative of value (in your case value == x) with respect to all your parameters (in your case that is just x). Roughly speaking, this means all tensors that are somehow involved in your computation and have requires_grad=True. So this means x.grad = dx / dx = 1. To add to that: with automatic differentiation you only ever compute with concrete values. All your functions or networks are always evaluated at a concrete point, and the gradient you get is the gradient evaluated at that same point. There is no symbolic computation taking place. All the information needed for the computation of the gradient is encoded in the computation graph.
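To illustrate the same mechanism with a non-trivial function, here is a small sketch evaluated at the same point x = 10:

import torch

x = torch.tensor(10., requires_grad=True)
y = x ** 2      # y is computed at the concrete point x = 10
y.backward()    # d(x**2)/dx = 2*x, evaluated at x = 10
print(x.grad)   # tensor(20.)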
4
6
72,362,566
2022-5-24
https://stackoverflow.com/questions/72362566/access-previous-dataframe-during-pandas-method-chaining
Method chaining is a known way to improve code readability and often referred to as a Fluent API [1, 2]. Pandas does support this approach as multiple method calls can be chained like: #!/usr/bin/env python3 # -*- coding: utf-8 -*- import numpy as np import pandas as pd d = {'col1': [1, 2, 3, 4], 'col2': [5, np.nan, 7, 8], 'col3': [9, 10, 11, np.nan], 'col4': [np.nan, np.nan, np.nan, np.nan]} df = ( pd .DataFrame(d) .set_index('col1') .drop(labels='col3', axis=1) ) print(df) How could I use method chaining if I need to access attributes of the DataFrame returned from the previous function call? To be specific, I need to call .dropna() on a column subset. As the DataFrame is generated from pd.concat() the exact column names are not known a priori. Therefore, I am currently using a two-step approach like this: #!/usr/bin/env python3 # -*- coding: utf-8 -*- import numpy as np import pandas as pd d_1 = {'col1': [1, 2, 3, 4], 'col2': [5, np.nan, 7, 8], 'col3': [9, 10, 11, np.nan], 'col4': [np.nan, np.nan, np.nan, np.nan]} d_2 = {'col10': [10, 20, 30, 40], 'col20': [50, np.nan, 70, 80], 'col30': [90, 100, 110, np.nan]} df_1 = pd.DataFrame(d_1) df_2 = pd.DataFrame(d_2) df = pd.concat([df_1, df_2], axis=1) print(df) dropped = df.dropna(how='any', subset=[c for c in df.columns if c != 'col4']) print(dropped) Is there a more elegant way based on method chaining? .dropna() can certainly be chained, but I did not find a way to access the column names of the DataFrame resulting from the previous pd.concat(). I image something like # pseudo-code dropped = ( pd .concat([df_1, df_2], axis=1) .dropna(how='any', subset=<access columns of dataframe returned from previous concat and ignore desired column>) ) print(dropped) but did not find a solution. Memory-efficiency could be improved by using .dropna() with the inplace=True option to re-assign the variable in-place. However, readability with respect to method chaining remains unimproved.
Use pipe: dropped = ( pd .concat([df_1, df_2], axis=1) .pipe(lambda d: d.dropna(how='any', subset=[c for c in d.columns if c != 'col4'])) ) output: col1 col2 col3 col4 col10 col20 col30 0 1 5.0 9.0 NaN 10 50.0 90.0 2 3 7.0 11.0 NaN 30 70.0 110.0 NB. alternative syntax for the dropna: lambda d: d.dropna(how='any', subset=d.columns.difference(['col4']))
4
3
72,360,040
2022-5-24
https://stackoverflow.com/questions/72360040/how-to-find-the-frequency-of-the-most-frequent-value-mode-of-a-series-in-polar
import polars as pl df = pl.DataFrame({ "tags": ["a", "a", "a", "b", "c", "c", "c", "c", "d"] }) This is how to compute the most frequent element of the column using the .mode expression: df.select([ pl.col("tags").mode().alias("mode"), ]) How can I display also the frequency/count of that mode?
There is a value_counts expression. This expression will return a Struct datatype where the first field is the unique value and the second field is the count of that value. df.select([ pl.col("tags").value_counts() ]) shape: (4, 1) ┌───────────┐ │ tags │ │ --- │ │ struct[2] │ ╞═══════════╡ │ {"c",4} │ ├╌╌╌╌╌╌╌╌╌╌╌┤ │ {"a",3} │ ├╌╌╌╌╌╌╌╌╌╌╌┤ │ {"b",1} │ ├╌╌╌╌╌╌╌╌╌╌╌┤ │ {"d",1} │ └───────────┘ Or if you want to have that result as a DataFrame: (df.select([ pl.col("tags").value_counts() ]).to_series().struct.to_frame()) shape: (4, 2) ┌──────┬────────┐ │ tags ┆ counts │ │ --- ┆ --- │ │ str ┆ u32 │ ╞══════╪════════╡ │ c ┆ 4 │ ├╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤ │ a ┆ 3 │ ├╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤ │ d ┆ 1 │ ├╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤ │ b ┆ 1 │ └──────┴────────┘ Edited: Which can be even simpler: df["tags"].value_counts()
4
4
72,351,552
2022-5-23
https://stackoverflow.com/questions/72351552/vs-code-debugging-prompting-for-arguments-and-also-setting-the-working-directory
I know how to pass fixed arguments in the launch.json, e.g. In Visual Studio Code, how to pass arguments in launch.json . What I really need is a prompt where I can give a value for an argument that changes. In addition, my argument is a (data) directory for which there is a very ugly long absolute path. I'd really like to be able to set the working directory to a path which contains each of my individual data directories so I only need to provide a relative directory path, i.e. just the directory name. I'm working with Python, on Windows (not my choice) using VS Code 1.55.2 (not my choice, either).
You can use input variables { "version": "0.2.0", "configurations": [ { "name": "Python: Current File with arguments", "type": "python", "request": "launch", "program": "${file}", "args": [ "--dir", "/some/fixed/dir/${input:enterDir}" ] } ], "inputs": [ { "id": "enterDir", "type": "promptString", "description": "Subdirectory to process", "default": "data-0034" } ] } You can place the ${input:enterDir} in any string in the task "configurations" like the "cwd" property. If you like to pick a directory from a list because it is dynamic you can use the extension Command Variable that has the command pickFile Command Variable v1.36.0 supports the fixed folder specification. { "version": "0.2.0", "configurations": [ { "name": "Python: Current File with arguments", "type": "python", "request": "launch", "program": "${file}", "args": [ "--dir", "${input:pickDir}" ] } ], "inputs": [ { "id": "pickDir", "type": "command", "command": "extension.commandvariable.file.pickFile", "args": { "include": "**/*", "display": "fileName", "description": "Subdirectory to process", "showDirs": true, "fromFolder": { "fixed": "/some/fixed/dir" } } } ] } On Unix like systems you can include the folder in the include glob pattern. On Windows you have to use the fromFolder to convert the directory path to a usable glob pattern. If you have multiple folders you can use the predefined property.
10
19
72,339,128
2022-5-22
https://stackoverflow.com/questions/72339128/cython-error-while-building-extension-microsoft-visual-c-14-0-or-greater-is
Short Description: I'm trying to build an example cython script, but when I run the python setup.py build_ext --inplace command, I get an error saying that I need MS Visual C++ version 14.0 or greater. I've tried a lot of the things on related SO threads and other forums but to no avail in resolving the issue. Longer Description: The specific cython script: test.pyx: cpdef int test(int n): cdef int sum_ = 0, i = 0 while i < n: sum_ += i i += 1 return sum_ setup.py: # from setuptools import setup from distutils.core import setup from Cython.Build import cythonize setup( name = "test", ext_modules = cythonize('test.pyx'), # accepts a glob pattern ) I'm on python 3.10.0 and cython 0.29.30 and am using Windows 10 And here is the error that I get: C:\Users\LENOVO PC\PycharmProjects\MyProject\cython_src>py setup.py build_ext --inplace Compiling test.pyx because it changed. [1/1] Cythonizing test.pyx C:\Users\LENOVO PC\AppData\Local\Programs\Python\Python310\lib\site-packages\Cython\Compiler\Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: C:\Users\LENOVO PC\PycharmProjects\MyProject\cython_src\test.pyx tree = Parsing.p_module(s, pxd, full_module_name) running build_ext building 'test' extension error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ C:\Users\LENOVO PC\PycharmProjects\MyProject\cython_src> I've tried numerous different things: Visited the link in the error and downloaded and installed the build tools Installed multiple versions of Visual Studio (2022, 2019, 2017) CE and Build Tools Uninstalled all of the above and reinstalling MSVC 2019 CE and Build Tools from scratch Browsed through a lot of other related SO threads about this error and none of the solutions presented in them have worked for me so far, they have broadly included: Building the script from the developer console Updating setuptools Installing numerous different components in MSVC Installing numerous vc redistributables But none of these have worked for me unfortunately, and I keep getting the same error. I personally think the cause might be related to missing registry keys, or missing path variables, because the MSVC tools are definitely installed on my machine, but the setup script is unable to find them, but I do not know how to find out for sure. Some additional info that might be relevant(?): I've used Cython on the same machine before, and it used to work just fine, I had Visual Studio 2019 at this time. At some point though, I uninstalled it and upgraded to Visual Studio 2022 because I was learning C++ and wanted to use a newer C++ standard. Oddly enough, when I did this, the IDE that I use for C++ (CLion) stopped detecting the MSVC toolchain as well, and I never got it to correctly detect it again (I've been using WSL toolchain on CLion since) Recently when I tried to use Cython again and got this error, and did a lot of digging, I realised that the two incidents might be related, so I thought that it may be worth mentioning here.
Both the main python issue and the secondary CLion thing that I mentioned were resolved with this one solution (the issues were connected after all!) Clear the registry key that is mentioned in this SO thread: https://stackoverflow.com/a/64389979/15379178 This error had nothing to do with python (sort of) or msvc, in short, an anaconda installation had left an invalid path in my cmd's auto run regkey and it was causing a "The system cannot find the path specified" error and despite it being unrelated to python or msvc it was causing the build to fail. I am so glad I finally figured this out!
5
2
72,323,871
2022-5-20
https://stackoverflow.com/questions/72323871/in-plotly-dash-how-do-i-source-a-local-image-file-to-dash-html-img
This is how to source a local image file to the <img> element in html: <html> <h1>This is an image</h1> <img src="file:///C:/Users/MyUser/Desktop/Plotly_Dash_logo.png" alt="image"></img> </html> This displays the image as expected. But when I try to make the same page using the plotly dash wrapper elements, it does not work: import dash from dash import html, dcc app = dash.Dash(__name__) app.layout = html.Div([ html.H1('This is an image'), html.Img(src=r'file:///C:/Users/MyUser/Desktop/Plotly_Dash_logo.png', alt='image'), ]) if __name__ == '__main__': app.run_server(host='0.0.0.0', port=8080, debug=False, use_reloader=False) The local image file does not display. But if I replace the source with a file from the internet, like 'https://rapids.ai/assets/images/Plotly_Dash_logo.png', it works just fine. What is going on here?
After some more searching, I found that I could place my image file in a folder named "assets/", then reference it relative to the app folder. html.Img(src=r'assets/Plotly_Dash_logo.png', alt='image') I could also use a special method of the app instance dash.Dash.get_asset_url(). html.Img(src=app.get_asset_url('my-image.png')) Source: https://dash.plotly.com/dash-enterprise/static-assets
4
9
72,338,808
2022-5-22
https://stackoverflow.com/questions/72338808/how-to-calculate-per-document-probabilities-under-respective-topics-with-bertopi
I am trying to use BERTopic to analyze the topic distribution of documents. After BERTopic is performed, I would like to calculate the probabilities under the respective topics per document. How should I do it? # define model model = BERTopic(verbose=True, vectorizer_model=vectorizer_model, embedding_model='paraphrase-MiniLM-L3-v2', min_topic_size= 50, nr_topics=10) # train model headline_topics, _ = model.fit_transform(df1.review_processed3) # examine one of the topics a_topic = freq.iloc[0]["Topic"] # Select the 1st topic model.get_topic(a_topic) # Show the words and their c-TF-IDF scores Below are the words and their c-TF-IDF scores for one of the topics: image 1 How should I change the result into a topic distribution as below, in order to calculate the topic distribution score and also identify the main topic? image 2
First, to compute probabilities, you have to add to your model definition calculate_probabilities=True (this could slow down the extraction of topics if you have many documents, > 100000). # define model model = BERTopic(verbose=True, vectorizer_model=vectorizer_model, embedding_model='paraphrase-MiniLM-L3-v2', min_topic_size= 50, nr_topics=10, calculate_probabilities=True) Then, calling fit_transform, you should save the probabilities: headline_topics, probs = model.fit_transform(df1.review_processed3) Now, you can create a pandas dataframe which shows probabilities under respective topics per document. import pandas as pd probs_df=pd.DataFrame(probs) probs_df['main percentage'] = pd.DataFrame({'max': probs_df.max(axis=1)})
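As a side note for the "identify the main topic" part: the headline_topics list returned by fit_transform already holds the topic assigned to each document. If you prefer to derive it from the probabilities instead, here is a small sketch on top of the same probs_df (assuming its columns are the integer topic indices created by pd.DataFrame(probs)):

topic_cols = list(range(probs.shape[1]))                  # probability columns only
probs_df['main topic'] = probs_df[topic_cols].idxmax(axis=1)
print(probs_df.head())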
4
6
72,343,944
2022-5-23
https://stackoverflow.com/questions/72343944/cv2-imshow-doesnt-work-when-easyocr-installed
I installed easyocr in a newly created python environment using pip install easyocr. Then I installed opencv-python. When I try to execute the code - import cv2 img = cv2.imread('2.jpg') cv2.imshow('sd',img) cv2.waitKey(0) it gives this error: OpenCV(4.5.5) D:\a\opencv-python\opencv-python\opencv\modules\highgui\src\window.cpp:1268: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvShowImage'
Problem: If you already have an existing OpenCV version in your system/environment; installing easyOCR can alter that. Going through the requirements.txt file of easyOCR, opencv-python-headless gets installed. The following excerpt is taken from opencv-python-headless documentation: Packages for server (headless) environments (such as Docker, cloud environments etc.), no GUI library dependencies These packages are smaller than the two other packages above because they do not contain any GUI functionality (not compiled with Qt / other GUI components). This means that the packages avoid a heavy dependency chain to X11 libraries and you will have for example smaller Docker images as a result. You should always use these packages if you do not use cv2.imshow et al. or you are using some other package (such as PyQt) than OpenCV to create your GUI. TLDR; In short, easyocr disables existing GUI capabilities. It has been designed exclusively for containerized applications and/or server deployments. Solution: To use easyOCR with OpenCV, you can try either one of the following: 1. Change installation sequence: All the following can be done using pip: First install easyocr. Uninstall opencv-python-headless. Install opencv-python 2. Use matplotlib One can still display images using matplotlib: import cv2 from matplotlib import pyplot as plt img = cv2.imread('img.jpg',0) plt.imshow(img) plt.show()
6
7
72,345,302
2022-5-23
https://stackoverflow.com/questions/72345302/save-jalalihijri-shamsi-datetime-in-database-in-django
I have a Django project, and I want to save a created_at datetime in the database. I generate datetime.now with the jdatetime (or Khayyam) Python package and try to save this in a DateTimeField. But sometimes it raises an error because the Gregorian (miladi) date of the entry does not exist. What can I do about this?
My suggestion is to save two model fields: a DateTimeField that contains the Gregorian datetime, and a CharField that contains the Jalali date converted to a string. The DateTimeField is there for functionality, e.g. filtering between two datetimes; the CharField is there for representing the value in responses (without conversion overhead).
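A minimal sketch of that idea, assuming the jdatetime package (its datetime.now(), togregorian() and strftime() calls) and made-up model/field names:

import jdatetime
from django.db import models

class Report(models.Model):
    created_at = models.DateTimeField()                   # Gregorian, used for filtering/ordering
    created_at_jalali = models.CharField(max_length=19)   # Jalali string, used for display

    def save(self, *args, **kwargs):
        if not self.created_at:
            now_jalali = jdatetime.datetime.now()
            self.created_at = now_jalali.togregorian()     # always a valid Gregorian datetime
            self.created_at_jalali = now_jalali.strftime('%Y-%m-%d %H:%M:%S')
        super().save(*args, **kwargs)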
4
4
72,344,392
2022-5-23
https://stackoverflow.com/questions/72344392/how-to-split-column-of-type-intervalint64-right-onto-two-columns-in-pandas
Given a df as follows time_interval dvalue 0 (0, 5] 1 1 (5, 10] 2 2 (10, 15] 4 3 (15, 20] 5 4 (20, 25] 6 5 (25, 30] 7 6 (30, 35] 8 I would like to split the column time_interval, which is of type interval[int64, right], as follows dvalue l u 0 1 0 5 1 2 5 10 2 4 10 15 3 5 15 20 4 6 20 25 5 7 25 30 6 8 30 35 The full code to reproduce the df is as follows ls = range(0, 100, 5) df=pd.DataFrame([1,2,4,5,6,7,8],columns=['dvalue']) df.index = pd.IntervalIndex.from_breaks(ls[:len(df) + 1]) df.reset_index(inplace=True) df.rename(columns={'index':'time_interval'},inplace=True)
Use Interval.left and Interval.right: df['l'] = df['time_interval'].apply(lambda x: x.left) df['u'] = df['time_interval'].apply(lambda x: x.right) df['l'] = df['time_interval'].map(lambda x: x.left) df['u'] = df['time_interval'].map(lambda x: x.right) Or first convert to IntervalIndex: idx = pd.IntervalIndex(df['time_interval']) df['l'] = idx.left df['u'] = idx.right idx = pd.IntervalIndex(df['time_interval']) df = df.assign(l=idx.left, u=idx.right) print (df) time_interval dvalue l u 0 (0, 5] 1 0 5 1 (5, 10] 2 5 10 2 (10, 15] 4 10 15 3 (15, 20] 5 15 20 4 (20, 25] 6 20 25 5 (25, 30] 7 25 30 6 (30, 35] 8 30 35
4
2
72,338,204
2022-5-22
https://stackoverflow.com/questions/72338204/flask-show-loading-page-while-another-time-consuming-function-is-running
Hi everyone. I'm developing my first flask project and I got stuck on the following problem: I have a simple Flask app: from flask import Flask, render_template import map_plotting_test as mpt app = Flask(__name__) @app.route('/') def render_the_map(): mpt.create_map() return render_template("map.html") if __name__ == '__main__': app.run(debug=True) Problem mpt.create_map() function here is just making the map, rendering it, then creating the map.html file and saving it to the templates folder: templates/map.html. It works pretty fine, but it takes some noticeable time to finish making the map (around 10-15 seconds). The problem is that while this function is performed, I see just a blank screen in the browser, and only then does Flask render the finished map.html file. What I want What I want to do is to show the loading screen instead of a blank screen while the create_map() function is running. And when the function finishes its work and creates a map.html file - show rendered template to user just like return render_template("map.html") does. Is there a way to achieve this without much effort? I'm new to Flask, and I would be very grateful for a good explanation. Thank you!!!
Finally I found the solution! Thanks to Laurel's answer. I'll just make it a bit nicer and clearer. What I've done I redesigned my Flask app, so it looks like this: from flask import Flask, render_template import map_plotting_module as mpm app = Flask(__name__) @app.route('/') def loading(): return render_template("loading.html") @app.route('/map') def show_map(): return render_template("map.html") @app.route('/create_map') def create_map(): mpm.create_map() return "Map created" if __name__ == '__main__': app.run() When the user opens the page, Flask starts rendering the loading.html file. In this file you have to add the following code to the <head> section: <script> function navigate() { window.location.href = 'map'; // Redirects user to the /map route when 'create_map' is finished } fetch('create_map').then(navigate); // Performs 'create_map' and then calls the navigate() function declared above </script> Then, add a loading wheel div to your <body> section: <body> <div class="loader"></div> </body> If it's still not clear to you, please check my example at the end of the answer. Explanation In the <script> section we have to declare the navigate() function. It just redirects the user to the desired reference, /map in the current case. fetch() is the analog to jQuery.ajax() - read more. It's just fetching the app route to /create_map, awaits it to be done in the background, and then performs the action in the .then() block - the navigate() function in our case. So the workflow is: The user opens the page; the @app.route('/') function is performed, which renders the loading page. The loading page fetches @app.route('/create_map') and runs its function in the background. When the create_map() function is completed, the user is redirected to @app.route('/map'), whose function renders the finished map.html template. A few recommendations from me If you want your loading page to have an icon, just add the following tag to the <head> section of loading.html: <link rel="icon" href="/static/<icon_file>"> Note that Flask searches for media in the /static folder. Put your media in it. Otherwise, media will not be rendered! If you want your loading page to have a nice CSS loader, consider visiting this page: loading.io. I really enjoyed it. Finally, here's a code snippet as an example of a loader: .loader { position: absolute; top: 50%; left: 50%; margin: -56px 0 0 -56px; } .loader:after { content: " "; display: block; width: 110px; height: 110px; border-radius: 50%; border: 1px solid; border-color: #0aa13a transparent #47a90e transparent; animation: ring 1.2s linear infinite; } @keyframes ring { 0% { transform: rotate(0deg); } 100% { transform: rotate(360deg); } } <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Loading...</title> </head> <body> <div class="loader"></div> </body> </html>
5
10
72,324,669
2022-5-20
https://stackoverflow.com/questions/72324669/playwright-download-via-print-to-pdf
I'm seeking to scrape a web page using Playwright. I load the page, and click the download button with Playwright successfully. This brings up a print dialog box with a printer selected. I would like to select "Save as PDF" and then click the "Save" button. Here's my current code: with sync_playwright() as p: browser = p.chromium.launch(headless=True) playwright_page = browser.new_page() got_error = False try: playwright_page.goto(url_to_start_from) print(playwright_page.title()) html = playwright_page.content() except Exception as e: print(f"Playwright exception: {e}") got_error = True if not got_error: soup = BeautifulSoup(html, 'html.parser') #download pdf with playwright_page.expect_download() as download_info: playwright_page.locator("text=download").click() download = download_info.value path = download.path() download.save_as(DOWNLOADED_PDF_FOLDER) browser.close() Is there a way to do this using Playwright?
Thanks very much to @KJ in the comments, who suggested that with headless=True, Chromium won't even put up a print dialog box in the first place.
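For completeness, if the end goal is simply a PDF of the page, headless Chromium can render it directly via Playwright's page.pdf() (Chromium-only, headless-only), skipping the print dialog entirely. A minimal sketch along the lines of the original code, with an example output path:

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)   # page.pdf() requires headless Chromium
    page = browser.new_page()
    page.goto(url_to_start_from)
    page.pdf(path="page.pdf")                    # renders the current page straight to a PDF file
    browser.close()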
5
2
72,339,545
2022-5-22
https://stackoverflow.com/questions/72339545/attributeerror-cant-pickle-local-object-locals-lambda
I am trying to pickle a nested dictionary which is created using: collections.defaultdict(lambda: collections.defaultdict(int)) My simplified code goes like this: class A: def funA(self): #create a dictionary and fill with values dictionary = collections.defaultdict(lambda: collections.defaultdict(int)) ... #then pickle to save it pickle.dump(dictionary, f) However it gives error: AttributeError: Can't pickle local object 'A.funA.<locals>.<lambda>' After I print dictionary it shows: defaultdict(<function A.funA.<locals>.<lambda> at 0x7fd569dd07b8> {...} I try to make the dictionary global within that function but the error is the same. I appreciate any solution or insight to this problem. Thanks!
pickle records references to functions (module and function name), not the functions themselves. When unpickling, it will load the module and get the function by name. lambda creates anonymous function objects that don't have names and can't be found by the loader. The solution is to switch to a named function. def create_int_defaultdict(): return collections.defaultdict(int) class A: def funA(self): #create a dictionary and fill with values dictionary = collections.defaultdict(create_int_defaultdict) ... #then pickle to save it pickle.dump(dictionary, f)
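If you would rather not define a module-level helper, functools.partial should also work here, since partial objects pickle as long as their pieces (collections.defaultdict and int) are importable; a small hedged sketch:

import collections
import functools
import pickle

dictionary = collections.defaultdict(functools.partial(collections.defaultdict, int))
dictionary['a']['b'] += 1

restored = pickle.loads(pickle.dumps(dictionary))   # round-trips without the lambda error
print(restored['a']['b'])                           # 1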
21
19
72,335,807
2022-5-22
https://stackoverflow.com/questions/72335807/pip3-install-grpcio-fails-on-alpine-linux
I use Alpine Linux by Docker on my Mac (12.3.1) and try to run pip3 install grpcio but this command always fails. I tried info here, but nothing worked until now. Unable to install grpcio using pip install grpcio --> Upgrade to the latest setuptools https://github.com/grpc/grpc/issues/24390 --> Run export GRPC_PYTHON_BUILD_SYSTEM_OPENSSL=1 and export GRPC_PYTHON_BUILD_SYSTEM_ZLIB=1 Step Build an image using this Dockerfile. Dockerfile FROM alpine:latest COPY src /root/src # Please think this is empty. I don't use any files in this directory until now. WORKDIR /root/src RUN set -x \ && apk update \ && apk add build-base \ && apk add python3 py3-pip python3-dev \ && pip3 install --no-cache --upgrade pip setuptools \ && pip3 install wheel Use docker run to get into the image. Run pip3 list to check what has been installed. ~/src # pip3 list Package Version ------------------ --------- appdirs 1.4.4 CacheControl 0.12.10 certifi 2020.12.5 charset-normalizer 2.0.7 colorama 0.4.4 contextlib2 21.6.0 distlib 0.3.3 distro 1.6.0 html5lib 1.1 idna 3.3 lockfile 0.12.2 msgpack 1.0.2 ordered-set 4.0.2 packaging 20.9 pep517 0.12.0 pip 22.1.1 progress 1.6 pyparsing 2.4.7 requests 2.26.0 retrying 1.3.3 setuptools 62.3.2 six 1.16.0 toml 0.10.2 tomli 1.2.2 urllib3 1.26.7 webencodings 0.5.1 wheel 0.37.1 Run pip3 install grpcio This error message is too long to write in this question. Please check Google Docs for full messages. https://docs.google.com/document/d/1ATyMCA0vRAsxfDquByeWh7cE7InhPCG6bDsgtDEG2Ls/edit?usp=sharing https://docs.google.com/document/d/19erFzIcB2zCDbCklyeOGDVNUBTf6I8oW4B-sNWuO6Zk/edit?usp=sharing Error messages (the last part) (There are messages before this part. Please check Google Docs.) gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -g -fno-semantic-interposition -g -fno-semantic-interposition -g -fno-semantic-interposition -DTHREAD_STACK_SIZE=0x100000 -fPIC -D_WIN32_WINNT=1536 -DGRPC_XDS_USER_AGENT_NAME_SUFFIX=\"Python\" -DGRPC_XDS_USER_AGENT_VERSION_SUFFIX=\"1.46.3\" -DGPR_BACKWARDS_COMPATIBILITY_MODE=1 -DHAVE_CONFIG_H=1 -DGRPC_ENABLE_FORK_SUPPORT=1 "-DPyMODINIT_FUNC=extern \"C\" __attribute__((visibility (\"default\"))) PyObject*" -DGRPC_POSIX_FORK_ALLOW_PTHREAD_ATFORK=1 -Isrc/python/grpcio -Iinclude -I. 
-Ithird_party/abseil-cpp -Ithird_party/address_sorting/include -Ithird_party/cares/cares/include -Ithird_party/cares -Ithird_party/cares/cares -Ithird_party/cares/config_linux -Ithird_party/re2 -Ithird_party/boringssl-with-bazel/src/include -Ithird_party/upb -Isrc/core/ext/upb-generated -Isrc/core/ext/upbdefs-generated -Ithird_party/xxhash -Ithird_party/zlib -I/usr/include/python3.9 -c third_party/cares/cares/src/lib/ares_process.c -o python_build/temp.linux-aarch64-cpython-39/third_party/cares/cares/src/lib/ares_process.o -std=c++11 -std=gnu99 -fvisibility=hidden -fno-wrapv -fno-exceptions -pthread cc1: warning: command-line option '-std=c++11' is valid for C++/ObjC++ but not for C creating None/tmp/tmp_x4urxfk gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -g -fno-semantic-interposition -g -fno-semantic-interposition -g -fno-semantic-interposition -DTHREAD_STACK_SIZE=0x100000 -fPIC -I/usr/include/python3.9 -c /tmp/tmp_x4urxfk/a.c -o None/tmp/tmp_x4urxfk/a.o Traceback (most recent call last): File "/usr/lib/python3.9/site-packages/setuptools/_distutils/unixccompiler.py", line 173, in _compile self.spawn(compiler_so + cc_args + [src, '-o', obj] + File "/tmp/pip-install-dddnrveo/grpcio_87c868971a7943939c5252f5c860ad57/src/python/grpcio/_spawn_patch.py", line 54, in _commandfile_spawn _classic_spawn(self, command) File "/usr/lib/python3.9/site-packages/setuptools/_distutils/ccompiler.py", line 917, in spawn spawn(cmd, dry_run=self.dry_run, **kwargs) File "/usr/lib/python3.9/site-packages/setuptools/_distutils/spawn.py", line 68, in spawn raise DistutilsExecError( distutils.errors.DistutilsExecError: command '/usr/bin/gcc' failed with exit code 1 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/tmp/pip-install-dddnrveo/grpcio_87c868971a7943939c5252f5c860ad57/src/python/grpcio/commands.py", line 280, in build_extensions build_ext.build_ext.build_extensions(self) File "/usr/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 448, in build_extensions self._build_extensions_serial() File "/usr/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 473, in _build_extensions_serial self.build_extension(ext) File "/usr/lib/python3.9/site-packages/setuptools/command/build_ext.py", line 202, in build_extension _build_ext.build_extension(self, ext) File "/usr/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 528, in build_extension objects = self.compiler.compile(sources, File "/tmp/pip-install-dddnrveo/grpcio_87c868971a7943939c5252f5c860ad57/src/python/grpcio/_parallel_compile_patch.py", line 58, in _parallel_compile multiprocessing.pool.ThreadPool(BUILD_EXT_COMPILER_JOBS).map( File "/usr/lib/python3.9/multiprocessing/pool.py", line 364, in map return self._map_async(func, iterable, mapstar, chunksize).get() File "/usr/lib/python3.9/multiprocessing/pool.py", line 771, in get raise self._value File "/usr/lib/python3.9/multiprocessing/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "/usr/lib/python3.9/multiprocessing/pool.py", line 48, in mapstar return list(map(*args)) File "/tmp/pip-install-dddnrveo/grpcio_87c868971a7943939c5252f5c860ad57/src/python/grpcio/_parallel_compile_patch.py", line 54, in _compile_single_file self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts) File "/usr/lib/python3.9/site-packages/setuptools/_distutils/unixccompiler.py", line 176, in _compile raise CompileError(msg) 
distutils.errors.CompileError: command '/usr/bin/gcc' failed with exit code 1 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 34, in <module> File "/tmp/pip-install-dddnrveo/grpcio_87c868971a7943939c5252f5c860ad57/setup.py", line 527, in <module> setuptools.setup( File "/usr/lib/python3.9/site-packages/setuptools/__init__.py", line 87, in setup return distutils.core.setup(**attrs) File "/usr/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 148, in setup return run_commands(dist) File "/usr/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 163, in run_commands dist.run_commands() File "/usr/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 967, in run_commands self.run_command(cmd) File "/usr/lib/python3.9/site-packages/setuptools/dist.py", line 1229, in run_command super().run_command(command) File "/usr/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command cmd_obj.run() File "/usr/lib/python3.9/site-packages/setuptools/command/install.py", line 68, in run return orig.install.run(self) File "/usr/lib/python3.9/site-packages/setuptools/_distutils/command/install.py", line 670, in run self.run_command('build') File "/usr/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/lib/python3.9/site-packages/setuptools/dist.py", line 1229, in run_command super().run_command(command) File "/usr/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command cmd_obj.run() File "/usr/lib/python3.9/site-packages/setuptools/_distutils/command/build.py", line 136, in run self.run_command(cmd_name) File "/usr/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/lib/python3.9/site-packages/setuptools/dist.py", line 1229, in run_command super().run_command(command) File "/usr/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command cmd_obj.run() File "/usr/lib/python3.9/site-packages/setuptools/command/build_ext.py", line 79, in run _build_ext.run(self) File "/usr/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 339, in run self.build_extensions() File "/tmp/pip-install-dddnrveo/grpcio_87c868971a7943939c5252f5c860ad57/src/python/grpcio/commands.py", line 284, in build_extensions raise CommandError( commands.CommandError: Failed `build_ext` step: Traceback (most recent call last): File "/usr/lib/python3.9/site-packages/setuptools/_distutils/unixccompiler.py", line 173, in _compile self.spawn(compiler_so + cc_args + [src, '-o', obj] + File "/tmp/pip-install-dddnrveo/grpcio_87c868971a7943939c5252f5c860ad57/src/python/grpcio/_spawn_patch.py", line 54, in _commandfile_spawn _classic_spawn(self, command) File "/usr/lib/python3.9/site-packages/setuptools/_distutils/ccompiler.py", line 917, in spawn spawn(cmd, dry_run=self.dry_run, **kwargs) File "/usr/lib/python3.9/site-packages/setuptools/_distutils/spawn.py", line 68, in spawn raise DistutilsExecError( distutils.errors.DistutilsExecError: command '/usr/bin/gcc' failed with exit code 1 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/tmp/pip-install-dddnrveo/grpcio_87c868971a7943939c5252f5c860ad57/src/python/grpcio/commands.py", line 280, in build_extensions 
build_ext.build_ext.build_extensions(self) File "/usr/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 448, in build_extensions self._build_extensions_serial() File "/usr/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 473, in _build_extensions_serial self.build_extension(ext) File "/usr/lib/python3.9/site-packages/setuptools/command/build_ext.py", line 202, in build_extension _build_ext.build_extension(self, ext) File "/usr/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 528, in build_extension objects = self.compiler.compile(sources, File "/tmp/pip-install-dddnrveo/grpcio_87c868971a7943939c5252f5c860ad57/src/python/grpcio/_parallel_compile_patch.py", line 58, in _parallel_compile multiprocessing.pool.ThreadPool(BUILD_EXT_COMPILER_JOBS).map( File "/usr/lib/python3.9/multiprocessing/pool.py", line 364, in map return self._map_async(func, iterable, mapstar, chunksize).get() File "/usr/lib/python3.9/multiprocessing/pool.py", line 771, in get raise self._value File "/usr/lib/python3.9/multiprocessing/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "/usr/lib/python3.9/multiprocessing/pool.py", line 48, in mapstar return list(map(*args)) File "/tmp/pip-install-dddnrveo/grpcio_87c868971a7943939c5252f5c860ad57/src/python/grpcio/_parallel_compile_patch.py", line 54, in _compile_single_file self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts) File "/usr/lib/python3.9/site-packages/setuptools/_distutils/unixccompiler.py", line 176, in _compile raise CompileError(msg) distutils.errors.CompileError: command '/usr/bin/gcc' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure × Encountered error while trying to install package. ╰─> grpcio note: This is an issue with the package mentioned above, not pip. hint: See above for output from the failure.
The build environment for Alpine Linux is not installed by default. You need to install the header files - apk add linux-headers. This was in found in this github issue: grpcio can't be installed on alpine
4
8
72,336,254
2022-5-22
https://stackoverflow.com/questions/72336254/negative-huge-loss-in-tensorflow
I am trying to predict price values from datasets using keras. I am following this tutorial: https://keras.io/examples/structured_data/structured_data_classification_from_scratch/, but when I get to the part of fitting the model, I am getting a huge negative loss and very small accuracy Epoch 1/50 1607/1607 [==============================] - ETA: 0s - loss: -117944.7500 - accuracy: 3.8897e-05 2022-05-22 11:14:28.922065: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled. 1607/1607 [==============================] - 15s 10ms/step - loss: -117944.7500 - accuracy: 3.8897e-05 - val_loss: -123246.0547 - val_accuracy: 7.7791e-05 Epoch 2/50 1607/1607 [==============================] - 15s 9ms/step - loss: -117944.7734 - accuracy: 3.8897e-05 - val_loss: -123246.0547 - val_accuracy: 7.7791e-05 Epoch 3/50 1607/1607 [==============================] - 15s 10ms/step - loss: -117939.4844 - accuracy: 3.8897e-05 - val_loss: -123245.9922 - val_accuracy: 7.7791e-05 Epoch 4/50 1607/1607 [==============================] - 16s 10ms/step - loss: -117944.0859 - accuracy: 3.8897e-05 - val_loss: -123245.9844 - val_accuracy: 7.7791e-05 Epoch 5/50 1607/1607 [==============================] - 15s 10ms/step - loss: -117944.7422 - accuracy: 3.8897e-05 - val_loss: -123246.0547 - val_accuracy: 7.7791e-05 Epoch 6/50 1607/1607 [==============================] - 15s 10ms/step - loss: -117944.8203 - accuracy: 3.8897e-05 - val_loss: -123245.9766 - val_accuracy: 7.7791e-05 Epoch 7/50 1607/1607 [==============================] - 15s 10ms/step - loss: -117944.8047 - accuracy: 3.8897e-05 - val_loss: -123246.0234 - val_accuracy: 7.7791e-05 Epoch 8/50 1607/1607 [==============================] - 15s 10ms/step - loss: -117944.7578 - accuracy: 3.8897e-05 - val_loss: -123245.9766 - val_accuracy: 7.7791e-05 Epoch 9/50 This is my graph; as for the code, it looks like the one from the example but adapted: # Categorical feature encoded as string desc = keras.Input(shape=(1,), name="desc", dtype="string") # Numerical features date = keras.Input(shape=(1,), name="date") quant = keras.Input(shape=(1,), name="quant") all_inputs = [ desc, quant, date, ] # String categorical features desc_encoded = encode_categorical_feature(desc, "desc", train_ds) # Numerical features quant_encoded = encode_numerical_feature(quant, "quant", train_ds) date_encoded = encode_numerical_feature(date, "date", train_ds) all_features = layers.concatenate( [ desc_encoded, quant_encoded, date_encoded, ] ) x = layers.Dense(32, activation="sigmoid")(all_features) x = layers.Dropout(0.5)(x) output = layers.Dense(1, activation="relu")(x) model = keras.Model(all_inputs, output) model.compile("adam", "binary_crossentropy", metrics=["accuracy"]) And the dataset looks like this: date desc quant price 0 20140101.0 CARBONATO DE DIMETILO 999.00 1428.57 1 20140101.0 HIDROQUINONA 137.00 1314.82 2 20140101.0 1,5 PENTANODIOL TECN. 495.00 2811.60 3 20140101.0 SOSA CAUSTICA LIQUIDA 50% 567160.61 113109.14 4 20140101.0 BOROHIDRURO SODICO 6.24 299.27 Also I am converting the date from being YYYY-MM-DD to being numbers using: dataset['date'] = pd.to_datetime(dataset["date"]).dt.strftime("%Y%m%d").astype('float64') What am I doing wrong? :( EDIT: I thought the encoder function from the tutorial was normalizing data, but it wasn't. Is there any other tutorial you guys know of which can guide me better? The loss problem has been fixed! (It was due to normalization.)
You seem to be quite confused by the components of your model. Binary cross entropy is a classification loss; your problem is regression -> use MSE. Also, "accuracy" makes no sense for regression; change it to MSE too. Your data is huge and thus your loss is huge. You have a price of 113109.14 in the data; what if your model is bad initially and says 0? You get a loss of ~100,000^2 = 10,000,000,000. Normalise your data, in your case the output variable (target, price), to between -1 and 1. There are some use cases where an output neuron should have an activation function, but unless you know why you are doing this, leaving it linear is a much safer choice. Dropout is a method for regularising your model; do not start with it. Always start with the simplest possible model, and make sure you can learn before trying to maximise the test score. Neural networks will not extrapolate; feeding in an ever-growing signal (date) in raw format will almost surely cause problems.
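Putting the first few points together, a rough sketch of what the compile and target-normalisation steps could look like (simple min-max scaling shown purely as an illustration; it is not the only option):

# regression setup: MSE loss, no accuracy metric
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# scale the target before training so the loss stays in a sane range
price_min, price_max = dataset["price"].min(), dataset["price"].max()
dataset["price_scaled"] = (dataset["price"] - price_min) / (price_max - price_min)

# after predicting, map back to the original scale:
# real_price = prediction * (price_max - price_min) + price_min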
6
4
72,336,173
2022-5-22
https://stackoverflow.com/questions/72336173/make-a-get-request-with-a-multiple-value-param-in-django-requests-module
I have a webservice that gives a doc list. I call this webservice via get_doc_list, but when I pass 2 values to id__in, it returns only one mapping object. def get_doc_list(self, id__in): config = self.configurer.doc params = { "id__in": id__in, } response = self._make_request( token=self.access_token, method='get', proxies=self.proxies, url=config.service_url, params=params, module_name=self.module_name, finalize_response=False ) return response How can I fix it?
You can add these two lines before building params, so that the ids are sent as a single comma-separated value: string_id__in = [str(i) for i in id__in] id__in = ",".join(string_id__in)
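Placed in context, the fix might look like this inside get_doc_list (keeping the original parameter name; whether the webservice expects a comma-separated list or repeated parameters depends on its API):

def get_doc_list(self, id__in):
    config = self.configurer.doc
    params = {
        # send "1,2" instead of a Python list so every id reaches the webservice
        "id__in": ",".join(str(i) for i in id__in),
    }
    response = self._make_request(
        token=self.access_token,
        method='get',
        proxies=self.proxies,
        url=config.service_url,
        params=params,
        module_name=self.module_name,
        finalize_response=False
    )
    return response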
5
4
72,334,642
2022-5-22
https://stackoverflow.com/questions/72334642/importerror-cannot-import-name-img-to-array-from-keras-preprocessing-image
I'm new here. I have a problem with this code: #Library import numpy as np import pickle import cv2 from os import listdir from sklearn.preprocessing import LabelBinarizer from keras.models import Sequential from keras.layers import BatchNormalization from keras.layers.convolutional import Conv2D from keras.layers.convolutional import MaxPooling2D from keras.layers.core import Activation, Flatten, Dropout, Dense from keras import backend as K from keras.preprocessing.image import ImageDataGenerator from keras.optimizers import Adam from keras.preprocessing import image #from tensorflow.keras.preprocessing.image import img_to_array from keras.preprocessing.image import img_to_array from sklearn.preprocessing import MultiLabelBinarizer from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt I got an error. This code is from a GitHub repo (link). I'm using Python 3.7.13, TensorFlow 2.9, OpenCV 4.5.5 and Keras 2.9.0.
In the Keras documentation for v2.9.0 (TF version 2.9.0), img_to_array moved to utils. Instead of from keras.preprocessing.image import img_to_array try this: from tensorflow.keras.utils import img_to_array
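For a quick check that the new import works, a small usage sketch (the image path and target size are placeholders; load_img should be available from the same utils module in recent TF/Keras versions):

from tensorflow.keras.utils import load_img, img_to_array

img = load_img("example.jpg", target_size=(224, 224))   # placeholder path and size
arr = img_to_array(img)
print(arr.shape)                                         # e.g. (224, 224, 3)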
12
33
72,332,222
2022-5-21
https://stackoverflow.com/questions/72332222/how-do-i-json-normalize-a-specific-field-within-a-df-and-keep-the-other-column
So here's my simple example (the json field in my actual dataset is very nested so I'm unpacking things one level at a time). I need to keep certain columns on the dataset post json_normalize(). https://pandas.pydata.org/docs/reference/api/pandas.json_normalize.html Start: Expected (Excel mockup): Actual: import json d = {'report_id': [100, 101, 102], 'start_date': ["2021-03-12", "2021-04-22", "2021-05-02"], 'report_json': ['{"name":"John", "age":30, "disease":"A-Pox"}', '{"name":"Mary", "age":22, "disease":"B-Pox"}', '{"name":"Karen", "age":42, "disease":"C-Pox"}']} df = pd.DataFrame(data=d) display(df) df = pd.json_normalize(df['report_json'].apply(json.loads), max_level=0, meta=['report_id', 'start_date']) display(df) Looking at the documentation on json_normalize(), I think the meta parameter is what I need to keep the report_id and start_date but it doesn't seem to be working as the expected fields to keep are not appearing on the final dataset. Does anyone have advice? Thank you.
As you're dealing with a pretty simple JSON along a structured index, you can just normalize your frame, then make use of .join to join along your axis. from ast import literal_eval df.join( pd.json_normalize(df['report_json'].map(literal_eval)) ).drop('report_json',axis=1) report_id start_date name age disease 0 100 2021-03-12 John 30 A-Pox 1 101 2021-04-22 Mary 22 B-Pox 2 102 2021-05-02 Karen 42 C-Pox
6
7
72,331,707
2022-5-21
https://stackoverflow.com/questions/72331707/socket-io-returns-127-0-0-1-as-host-address-and-not-192-168-0-on-my-device
When I run the following code to determine my device's local IP address, I get 127.0.0.1 instead of 192.168.0.101. import socket import threading PORT = 8080 HOST_NAME = socket.gethostname() print(HOST_NAME) SERVER = socket.gethostbyname(HOST_NAME) print(SERVER) The output I get on the console is MyDeviceName.local 127.0.0.1
127.0.0.1 is the localhost address, so that result is correct. If you want your device's LAN address, do this: import socket s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) s.connect(("8.8.8.8", 80)) # no packets are sent; this only selects the outgoing interface print(s.getsockname()[0]) s.close()
4
6
72,313,046
2022-5-20
https://stackoverflow.com/questions/72313046/pycharm-project-cannot-add-poetry-interpreter
OS: win10 PyCharm version: PyCharm Professional 2021.2.2 Poetry version: 1.1.13 Poetry plugin version: 1.1.5-212 (from koudai aono) I have tried to build a new PyCharm project with a Poetry environment; while setting it up, it showed an error and could not set up the interpreter. Has anyone run into a similar problem before and knows how to solve this error? Update I have upgraded the PyCharm version to PyCharm Professional 2022.1.1 and the problem still remains.
Alright, I have fixed this problem. Below are my debugging steps; I hope they can help those who are struggling with the same situation: Posting a support report to the PyCharm team, with no response Searching through a lot of posts from communities Creating a poetry project from the terminal Inside the project directory, I tried poetry env info and it showed that the local virtualenv is NA Trying to create one with poetry env use $(which python), yet it returned Skipping virtualenv creation, as specified in config file, and this answer gave me a hint Typing poetry config --list, which showed that virtualenvs.create = false Enabling virtualenv creation with the command poetry config virtualenvs.create true Restarting the PyCharm IDE and trying to add the interpreter again - and it WORKED!! I am not sure whether the poetry config virtualenvs.create true setting is permanent or not.
7
11
72,327,987
2022-5-21
https://stackoverflow.com/questions/72327987/mypy-how-to-declare-the-return-type-of-a-method-returning-self-in-a-generic-cla
This answer does not seem to work for generics. Mypy complains about "error: Missing type parameters for generic type A" when checking the following code. I have tried using 'A[T]' for the TypeVar but then mypy says "error: Type variable T is unbound." I have also tried using AnyA[T] as return type of get but that produces two error messages, the already known "error: Missing type parameters for generic type A" and the new error message "Type variable AnyA used with arguments". How do I specify the return type of get correctly? import typing T = typing.TypeVar('T') AnyA = typing.TypeVar('AnyA', bound='A') class A(typing.Generic[T]): def __init__(self, val: T) -> None: self.val = val def get(self: AnyA) -> AnyA: return self class B(A[T]): def is_int(self) -> bool: return isinstance(self.val, int) if __name__ == '__main__': b = B(42) print(b.get().is_int())
I know of three ways of typing here: Declaring an inner self-type This approach is described in mypy docs, see Precise typing of alternative constructors. class A(typing.Generic[T]): _Self = typing.TypeVar('_Self', bound='A[T]') def __init__(self, val: T) -> None: self.val = val def get(self: _Self) -> _Self: return self Note however, that this is mypy-specific stuff and may not work with other checkers. E.g. pyre doesn't support inner self-types yet. Using _typeshed.Self This saves the boilerplate of declaring custom types, but requires a somewhat obscure import from typeshed which will fail at runtime. It thus must be wrapped by typing.TYPE_CHECKING: from typing import Any, TYPE_CHECKING if TYPE_CHECKING: from _typeshed import Self else: Self = Any class A(typing.Generic[T]): def __init__(self, val: T) -> None: self.val = val def get(self: Self) -> Self: return self _typeshed.Self was created to be used in custom stubs in the first place, but is suitable for inline typing as well. Python 3.11 and upwards: typing.Self A recently introduced PEP 673 adds Self to stdlib, so starting from Python 3.11 one will be able to use that: from typing import Self class A(typing.Generic[T]): def __init__(self, val: T) -> None: self.val = val def get(self: Self) -> Self: return self This is not supported by mypy yet as of now though, but e.g. by pyright from version 1.1.184.
5
10
72,324,239
2022-5-20
https://stackoverflow.com/questions/72324239/sort-elements-to-maximise-amount-of-positive-differences
I have a list of integers. Numbers can be repeated. I would like to "sort" them in such a way as to get as many "jumps" (the difference from the very next element to the current one is positive) as possible. Examples: [10, 10, 10, 20, 20, 20] # only one "jump" from 10 to 20 [10, 20, 10, 20, 10, 20] # three jumps (10->20, 10->20, 10->20) - the correct answer [20, 10, 20, 10, 20, 10] # two jumps [11, 16, 8, 9, 4, 1, 2, 17, 4, 15, 9, 11, 11, 7, 19, 16, 19, 5, 19, 11] # 9 [9, 11, 2, 19, 4, 11, 15, 5, 7, 11, 16, 19, 1, 4, 8, 11, 16, 19, 9, 17] # 14 [2, 9, 11, 16, 17, 19, 4, 5, 8, 15, 16, 9, 11, 1, 7, 11, 19, 4, 11, 19] # 15 [1, 2, 4, 5, 7, 8, 9, 11, 15, 16, 17, 19, 4, 9, 11, 16, 19, 11, 19, 11] # 16 My totally inefficient (but working) code: def sol1(my_list): my_list.sort() final_list = [] to_delete = [] i = 0 last_element = float('-inf') while my_list: if i >= len(my_list): i = 0 for index in to_delete[::-1]: my_list.pop(index) if len(my_list): last_element = my_list.pop(0) final_list.append(last_element) to_delete = [] continue curr_element = my_list[i] if curr_element > last_element: final_list.append(curr_element) last_element = curr_element to_delete.append(i) i += 1 return final_list Does anyone know a way to optimize the solution? For now I'm iterating the list many times. It doesn't need to be in Python.
I think this should be equivalent and only take O(n log n) time for sorting and O(n) time for the rest. from collections import Counter, OrderedDict arr = [11, 16, 8, 9, 4, 1, 2, 17, 4, 15, 9, 11, 11, 7, 19, 16, 19, 5, 19, 11] d = OrderedDict(Counter(sorted(arr))) ans = [] while d: ans += d for x in list(d): d[x] -= 1 if not d[x]: del d[x] print(ans) Another, inspired by trincot: from collections import defaultdict from itertools import count arr = [11, 16, 8, 9, 4, 1, 2, 17, 4, 15, 9, 11, 11, 7, 19, 16, 19, 5, 19, 11] d = defaultdict(count) arr.sort(key=lambda x: (next(d[x]), x)) print(arr) Benchmarks along with other solutions, on your own suggested input and on two of mine (for each input, each solution's best three times from multiple attempts are shown): [randint(1, 10**5) for i in range(10**4)] 2.14 ms 2.15 ms 2.18 ms Kelly4c 2.19 ms 2.24 ms 2.32 ms Kelly4b 2.23 ms 2.25 ms 2.37 ms Kelly4 5.83 ms 6.02 ms 6.03 ms original 7.05 ms 7.12 ms 7.54 ms Kelly1 7.82 ms 8.43 ms 8.45 ms Kelly3b 8.13 ms 8.15 ms 8.92 ms Kelly3 9.06 ms 9.44 ms 9.50 ms db0 10.25 ms 10.28 ms 10.31 ms db 11.09 ms 11.11 ms 11.23 ms trincot 11.19 ms 11.25 ms 11.58 ms Kelly2 11.29 ms 11.65 ms 11.74 ms db1 11.64 ms 11.65 ms 12.49 ms Kelly3c list(range(n := 1000)) + [n] * n 0.57 ms 0.60 ms 0.63 ms Kelly2 0.64 ms 0.65 ms 0.68 ms Kelly3 0.66 ms 0.69 ms 0.69 ms trincot 0.69 ms 0.71 ms 0.71 ms db 0.69 ms 0.70 ms 0.70 ms db1 0.72 ms 0.74 ms 0.75 ms Kelly3b 0.99 ms 1.04 ms 1.11 ms Kelly3c 1.04 ms 1.08 ms 1.09 ms Kelly1 28.27 ms 28.56 ms 28.63 ms Kelly4b 36.58 ms 36.81 ms 37.03 ms Kelly4c 39.78 ms 40.07 ms 40.37 ms Kelly4 80.41 ms 80.96 ms 81.99 ms original 81.00 ms 81.90 ms 82.08 ms db0 list(range(n := 10000)) + [n] * n 7.11 ms 7.37 ms 7.42 ms Kelly2 7.30 ms 7.62 ms 7.63 ms db 7.31 ms 7.31 ms 7.37 ms Kelly3 7.52 ms 7.64 ms 7.80 ms trincot 7.64 ms 7.82 ms 7.94 ms db1 8.81 ms 8.83 ms 8.84 ms Kelly3b 10.18 ms 10.41 ms 10.52 ms Kelly1 10.85 ms 10.92 ms 11.16 ms Kelly3c Benchmark code (Try it online!): from timeit import timeit from collections import Counter, OrderedDict, defaultdict, deque from itertools import count, chain, repeat from random import randint, shuffle from bisect import insort def Kelly1(arr): d = OrderedDict(Counter(sorted(arr))) ans = [] while d: ans += d for x in list(d): d[x] -= 1 if not d[x]: del d[x] return ans def Kelly2(arr): d = defaultdict(count) arr.sort(key=lambda x: (next(d[x]), x)) return arr def Kelly3(arr): ctr = Counter(arr) rounds = [[] for _ in range(max(ctr.values()))] for x, count in sorted(ctr.items()): for rnd in rounds[:count]: rnd.append(x) return list(chain.from_iterable(rounds)) def Kelly3b(arr): ctr = Counter(arr) rounds = [[] for _ in range(max(ctr.values()))] appends = [rnd.append for rnd in rounds] for x, count in sorted(ctr.items()): for append in appends[:count]: append(x) return list(chain.from_iterable(rounds)) def Kelly3c(arr): ctr = Counter(arr) rounds = [[] for _ in range(max(ctr.values()))] for x, count in sorted(ctr.items()): deque(map(list.append, rounds[:count], repeat(x)), 0) return list(chain.from_iterable(rounds)) def Kelly4(arr): arr.sort() out = [].append while arr: postpone = [].append last = None for x in arr: if last != x: out(x) else: postpone(x) last = x arr = postpone.__self__ return out.__self__ def Kelly4b(arr): arr.sort() out = [].append while arr: postpone = [].append last = None arr = [x for x in arr if last == x or out(last := x)] return out.__self__ def Kelly4c(arr): arr.sort() out = [] while arr: postpone = [].append last = None out += [last := x for x in 
arr if last != x or postpone(x)] arr = postpone.__self__ return out def original(my_list): my_list.sort() final_list = [] to_delete = [] i = 0 last_element = float('-inf') while my_list: if i >= len(my_list): i = 0 for index in to_delete[::-1]: my_list.pop(index) if len(my_list): last_element = my_list.pop(0) final_list.append(last_element) to_delete = [] continue curr_element = my_list[i] if curr_element > last_element: final_list.append(curr_element) last_element = curr_element to_delete.append(i) i += 1 return final_list def db(arr): cumcount = [] d = dict.fromkeys(arr, 0) for el in arr: cumcount.append(d[el]) d[el] += 1 return [x[1] for x in sorted(zip(cumcount, arr))] def db0(arr): d = Counter(arr) keys = sorted(d.keys()) ans = [] while len(ans) < len(arr): for k in keys: if d.get(k, 0) > 0: ans.append(k) d[k] -= 1 return ans def db1(arr): cumcount = [] d = {k: 0 for k in set(arr)} for el in arr: cumcount.append(d[el]) d[el] += 1 return [x[1] for x in sorted(zip(cumcount, arr))] def trincot(lst): return [num for _,num in sorted( (i, num) for num, freq in Counter(lst).items() for i in range(freq) )] funcs = [Kelly1, Kelly2, Kelly3, Kelly3b, Kelly3c, Kelly4, Kelly4b, Kelly4c, original, db, db0, db1, trincot] def test(arr, funcs): print(arr) arr = eval(arr) expect = original(arr[:]) for func in funcs: result = func(arr[:]) if result != expect: print(expect[:20]) print(result[:20]) assert result == expect, func times = {func: [] for func in funcs} for _ in range(20): shuffle(funcs) for func in funcs: copy = arr[:] t = timeit(lambda: func(copy), 'gc.enable()', number=1) insort(times[func], t) for func in sorted(funcs, key=times.get): print(*('%5.2f ms ' % (t * 1e3) for t in times[func][:3]), func.__name__) print() test('[randint(1, 10**5) for i in range(10**4)]', funcs) test('list(range(n := 1000)) + [n] * n', funcs) test('list(range(n := 10000)) + [n] * n', list(set(funcs) - {original, Kelly4, Kelly4b, Kelly4c, db0}))
4
2
72,322,295
2022-5-20
https://stackoverflow.com/questions/72322295/how-to-use-django-f-expression-in-update-for-jsonfield
I have around 12 million records that I need to update in my postgres db (so I need to do it in an efficient way). I am using Django. I have to update a jsonfield column (extra_info) to use values from a different column (related_type_id which is an integer) in the same model. Trying to do it with an update. This seems to be the way to do it most efficiently. Example: Person.objects.all().update(extra_info={ "type": "Human", "id": F('related_type_id') }) This errors out with : "Object of type F is not JSON serializable". I thought the F() will give back the value of that column (which should be an integer) which should be serializable. Can this be done ? Or am I trying to do it in the wrong way. I don't really want to iterate in a for loop and update each record with a save() because it will take too long. There's too many records.
The Django database function JSONObject should work, it returns a valid JSON object from key-value pairs and may be passed F objects from django.db.models import F, Value from django.db.models.functions import JSONObject Person.objects.all().update(extra_info=JSONObject( type=Value('Human'), id=F('related_type_id') ))
4
4
72,322,120
2022-5-20
https://stackoverflow.com/questions/72322120/vscode-import-x-could-not-be-resolved-even-though-listed-under-helpmodules
I'm on day 1 of Python and trying to import SciPy into a project. I installed it via pip install on ElementaryOS (an Ubuntu derivative). I have verified it's existence via: $ python >>> help("modules") The exact error I'm getting is: Import "scipy" could not be resolved Pylance (reportMissingImports) When searching for this error I found: Import could not be resolved/could not be resolved from source Pylance in VS Code using Python 3.9.2 on Windows 10 Powershell -- the accepted answers all pointed towards a project specific .env file. I have no such project structure, nor does it make sense to me that one would be needed. A github issue -- this issue ends with "it just fixed itself" When I run my program, I get no errors in console. And looking up "Pylance" it appears to be a Microsoft product. I suspect that VSCode is failing to lint correctly. Potentially because pip installed something in a place it wasn't expecting. This is my guess, but any help would be very much appreciated. Edit: Following through on the idea of missing paths, I found this post -- How do I get into the environment VS Code is using for pylance? Having added the path to where my modules can be found has yielded no results, though I'm not sure if the formatting is correct. Perhaps it needs glob syntax (eg path/**/*)
The issue was indeed with Pylance. It was missing an "additional path" to where pip had installed the projects I wanted to import. To solve the issue: First make sure you know the location of your import; you can find it with: $ python >>> import modulename >>> print(modulename.__file__) Then, once you know the location: Open settings (ctrl + ,) Search "pylance" or find it under "Extensions > Pylance" Find the "Extra Paths" config item Use "add item" to a add a path to the parent folder of the module. It will not do any recursive tree searching And you should be good to go! For a further example, you can see the image above where I had added the path /home/seph/.local/lib/python2.7/ to no avail. Updating it to /home/seph/.local/lib/python2.7/site-packages/ did the trick.
24
64
72,294,311
2022-5-18
https://stackoverflow.com/questions/72294311/what-is-numpy-ndarray-flags-contiguous-about
While experimenting with Numpy, I found that the contiguous value provided by numpy.info may differ from numpy.ndarray.data.contiguous (see the code and screenshot below). import numpy as np x = np.arange(9).reshape(3,3)[:,(0,1)] np.info(x) print(f''' {x.data.contiguous = } {x.flags.contiguous = } {x.data.c_contiguous = } {x.flags.c_contiguous = } {x.data.f_contiguous = } {x.flags.f_contiguous = } ''') According to documentation about a memoryview class, data.contiguous == True exactly if an array is either C-contiguous or Fortran contiguous. As for numpy.info, I believe it displays the value of flags.contiguous. Alas, there is no information about it in the manual. What does it actually mean? Is it a synonim for flags.c_contiguous?
In the source code of numpy.info, we can see the subroutine for processing ndarray: def info(object=None, maxwidth=76, output=None, toplevel='numpy'): ... elif isinstance(object, ndarray): _info(object, output=output) ... def _info(obj, output=None): """Provide information about ndarray obj""" bp = lambda x: x ... print("contiguous: ", bp(obj.flags.contiguous), file=output) print("fortran: ", obj.flags.fortran, file=output) ... It returns flags.contiguous as the array's continuity parameter. This one isn't specified in flags description. But we can find it in flagsobject.c: // ... static PyGetSetDef arrayflags_getsets[] = { {"contiguous", (getter)arrayflags_contiguous_get, NULL, NULL, NULL}, {"c_contiguous", (getter)arrayflags_contiguous_get, NULL, NULL, NULL}, // ... So it's clear now that a contiguous parameter from numpy.info is actually flags.c_contiguous and has nothing in common with ndarray.data.contiguous. I guess when programming in C it was natural to say just contiguous instead of c_contiguous, and this has led to a slight inconsistency in terminology.
4
3
72,312,594
2022-5-20
https://stackoverflow.com/questions/72312594/pandas-forward-fill-but-only-between-equal-values
I have two data frames: main and auxiliary. I am concatenating the auxiliary one to the main one. This results in NaN in a few rows, and I want to fill only some of them, not all. Code:
import pandas as pd

df1 = pd.DataFrame({'Main':[00,10,20,30,40,50,60,70,80]})
df1 =
   Main
0     0
1    10
2    20
3    30
4    40
5    50
6    60
7    70
8    80

df2 = pd.DataFrame({'aux':['aa','aa','bb','bb']},index=[0,2,5,7])
df2 =
  aux
0  aa
2  aa
5  bb
7  bb

df = pd.concat([df1,df2],axis=1)
# After concatenating, in the aux column, I want to fill the NaN rows in between
# the rows with the same value. Example: fill rows between 0 and 2 with 'aa', leave 2 to 5 as NaN, fill 5 to 7 with 'bb'
df = pd.concat([df1,df2],axis=1).fillna(method='ffill')
print(df)
Present result:
   Main  aux
0     0   aa
1    10   aa
2    20   aa
3    30   aa   # Wrong, here it should be NaN
4    40   aa   # Wrong, here it should be NaN
5    50   bb
6    60   bb
7    70   bb
8    80   bb   # Wrong, here it should be NaN
Expected result:
   Main  aux
0     0   aa
1    10   aa
2    20   aa
3    30  NaN
4    40  NaN
5    50   bb
6    60   bb
7    70   bb
8    80  NaN
If I understand correctly, what you want can be done like this. You want to fill the NaNs where backfill and forward fill give the same value.
ff = df.aux.ffill()
bf = df.aux.bfill()
df.aux = ff[ff == bf]
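For completeness, a quick check against the frames from the question (a sketch; it assumes df was built with pd.concat as shown there):
import pandas as pd

df1 = pd.DataFrame({'Main': [0, 10, 20, 30, 40, 50, 60, 70, 80]})
df2 = pd.DataFrame({'aux': ['aa', 'aa', 'bb', 'bb']}, index=[0, 2, 5, 7])
df = pd.concat([df1, df2], axis=1)

ff = df.aux.ffill()
bf = df.aux.bfill()
df.aux = ff[ff == bf]
print(df)   # rows 3, 4 and 8 stay NaN, matching the expected result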
6
10
72,319,355
2022-5-20
https://stackoverflow.com/questions/72319355/space-in-f-string-leads-to-valueerror-invalid-format-specifier
A colleague and I just stumbled across an interesting problem using an f-string. Here is a minimal example:
>>> f"{ 42:x}"
'2a'
Writing a space after the hexadecimal type leads to a ValueError:
>>> f"{ 42:x }"
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: Invalid format specifier
I understand the paragraph "Leading and trailing whitespace in expressions is ignored" in PEP 498 to mean that the space should actually be ignored. Why does the space character lead to an error, and what is the reasoning behind this?
Per the link you shared:
For ease of readability, leading and trailing whitespace in expressions is ignored. This is a by-product of enclosing the expression in parentheses before evaluation.
The expression is everything[1] before the colon (:), while the format specifier is everything afterwards. { 42 : x } inside an f-string is equivalent to ( 42 ) with a format specifier of " x ". ( 42 ) == 42 but " x " != "x", and a format specifier is passed to the object's __format__ intact. When the object attempts to format, it doesn't recognize the meaning of the space (" ") in those positions and throws an error.
Furthermore, in some format specifiers, a space actually has a meaning, for example as a fill character or as a sign option:
>>> f"{42: >+5}"  # Width of 5, align to right, with space as fill and plus or minus as sign
'  +42'
>>> f"{42:#> 5}"  # Width of 5, align to right, with `#` as fill and space or minus as sign
'## 42'
For more info, see the Format Specification Mini-Language.
[1] Does not include the type conversion, for example !s, !r or !a.
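The same behaviour can be reproduced without an f-string, which shows the error comes from int.__format__ itself rather than from the f-string machinery (a minimal sketch; the exact wording of the error message varies by Python version):
format(42, "x")          # '2a'
format(42, "x ")         # raises ValueError, just like inside the f-string
(42).__format__(" x ")   # same error: the spec is passed through verbatim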
4
7
72,314,928
2022-5-20
https://stackoverflow.com/questions/72314928/most-efficient-way-of-checking-if-a-string-matches-a-pattern-in-python
I have a string format that can be changed by someone else, say:
sample = f"This is a {pet} it has {number} legs"
And I currently have two strings:
a = "This is a dog it has 4 legs"
b = "This was a dog"
How do I check which string satisfies this sample format? I could use Python's string replace() on sample, create a regex from it, and check using re.match. But the catch is that sample can be changed, so statically using replace won't always work, as sample may get more placeholders.
I liked the approaches, but I found a two-liner solution (I don't know about the performance aspect of this, but it works!):
import re

def pattern_match(input, pattern):
    regex = re.sub(r'{[^{]*}', '(.*)', "^" + pattern + "$")
    if re.match(regex, input):
        print(f"'{input}' matches the pattern '{pattern}'")

pattern_match(a, sample)
pattern_match(b, sample)
Output:
'This is a dog it has 4 legs' matches the pattern 'This is a {pet} it has {number} legs'
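One caveat, as a side note rather than part of the original answer: if the template text itself may contain regex metacharacters (dots, parentheses, plus signs), it is safer to escape the literal parts before substituting the placeholders. A sketch of that variant, reusing a, b and sample from the question:
import re

def pattern_match_escaped(text, template):
    # Split the template on {placeholders}, escape the literal chunks,
    # then join them back together with wildcard groups
    parts = re.split(r'{[^{}]*}', template)
    regex = '^' + '(.*)'.join(map(re.escape, parts)) + '$'
    return re.match(regex, text) is not None

print(pattern_match_escaped(a, sample))  # True
print(pattern_match_escaped(b, sample))  # False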
4
3
72,312,099
2022-5-19
https://stackoverflow.com/questions/72312099/discord-py-button-responses-interaction-failed-after-a-certain-time
I have an extremely basic script that pops up a message with a button with the command ?place. Upon clicking this button the bot replies Hi to the user who clicked it. If the button isn't interacted with for more than approximately 3 minutes, it starts to return "Interaction failed", and after that the button becomes useless. I assume there is some sort of internal timeout I can't find in the docs. The button does the same thing whether using discord.py (2.0) or Pycord. Nothing hits the console. It's as if the button click isn't picked up.
Very occasionally the button starts to work again and a host of these errors hit the console:
discord.errors.NotFound: 404 Not Found (error code: 10062): Unknown interaction
Ignoring exception in view <View timeout=180.0 children=1> for item <Button style=<ButtonStyle.success: 3> url=None disabled=False label='click me' emoji=None row=None>:
I assume the timeout = 180 is the cause of this issue, but is anyone aware of how to stop this timeout and why it's happening? I can't see anything in the docs about Discord buttons only being usable for 3 minutes.
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.members = True
intents.message_content = True

bot = commands.Bot(command_prefix="?", intents=intents)
embed1 = discord.Embed(title="Test", description=f"TESTING", color=0xffffff)
print("bot connected")

@bot.command(name='place')
async def hello(ctx):
    view = discord.ui.View()
    buttonSign = discord.ui.Button(label="click me", style=discord.ButtonStyle.green)

    async def buttonSign_callback(interaction):
        userName = interaction.user.id
        embedText = f"test test test"
        embed = discord.Embed(title="Test", description=embedText, color=0xffffff)
        await interaction.response.send_message(f"Hi <@{userName}>")

    buttonSign.callback = buttonSign_callback
    view.add_item(item=buttonSign)
    await ctx.send(embed=embed1, view=view)

bot.run(TOKEN)
Explanation
By default, Views in discord.py 2.0 have a timeout of 180 seconds (3 minutes). You can fix this error by passing None as the timeout when creating the view.
Code
@bot.command(name='place')
async def hello(ctx):
    view = discord.ui.View(timeout=None)
References
discord.ui.View.timeout
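As a further hedged sketch: if you also want the button to keep working after a bot restart, discord.py 2.0 supports persistent views, which require timeout=None plus a custom_id on every item; the class name and custom_id below are made up for illustration:
class PlaceView(discord.ui.View):
    def __init__(self):
        super().__init__(timeout=None)  # never time out

    @discord.ui.button(label="click me", style=discord.ButtonStyle.green,
                       custom_id="place:hi_button")
    async def hi_button(self, interaction: discord.Interaction, button: discord.ui.Button):
        await interaction.response.send_message(f"Hi <@{interaction.user.id}>")

# register the view once at startup so buttons on old messages keep responding,
# for example: bot.add_view(PlaceView())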
5
12
72,306,585
2022-5-19
https://stackoverflow.com/questions/72306585/brighten-only-dark-areas-of-image-in-python
I am trying to process images and I would like to brighten the dark areas of my image. I tried histogram equalization; however, as there are also some bright areas in the image, the result is not satisfying. This is why I am looking for a way to brighten only the dark areas of the image.
As an example, the input image is on the left and the expected result is on the right; the hair and face of the girl are brightened.
ImageMagick seems to offer some possibilities to achieve this; however, I would like to do it using Python.
If you want to avoid colour distortions, you could:
convert to HSV colourspace and split the channels,
bump up the V (Value) channel,
recombine the channels,
save.
That might go something like this:
from PIL import Image

# Open the image
im = Image.open('hEHxh.jpg')

# Convert to HSV colourspace and split channels for ease of separate processing
H, S, V = im.convert('HSV').split()

# Increase the brightness, or Value channel
# Change 30 to 50 for bigger effect, or 10 for smaller effect
newV = V.point(lambda i: i + int(30*(255-i)/255))

# Recombine channels and convert back to RGB
res = Image.merge(mode="HSV", bands=(H,S,newV)).convert('RGB')
res.save('result.jpg')
Essentially, I am changing the brightness from the black mapping to the green mapping:
Reminder to forgetful self... "you made the plot like this, Mark":
import matplotlib.pyplot as plt
import numpy as np

# Generate some straight-line data
xdata = np.arange(0,256)

# And the new mapping
ydata = xdata + 30*(255-xdata)/255

# Plot
plt.plot(xdata,xdata,'.k')
plt.plot(xdata,ydata,'g^')
plt.title('Adjustment of V')
plt.xlabel('Input V')
plt.ylabel('Output V')
plt.grid(True)
plt.show()
6
7
72,306,979
2022-5-19
https://stackoverflow.com/questions/72306979/client-get-bucket-returns-error-api-request-got-an-unexpected-keyword-argum
I'm trying to store a newline-delimited JSON string in a GCS bucket using a Cloud Function, but I am seeing an error. I start by converting a dataframe to ndjson, then attempt to upload this to my GCS bucket as below. There is more code above this, but it is not relevant to my problem:
import pandas as pd
from google.cloud import storage
from google.cloud.storage import blob

df = df.to_json(orient="records", lines=True)
storage_client = storage.Client(project='my-project')
bucket = storage_client.get_bucket('my-bucket')
blob = bucket.blob('my-blob')
blob.upload_from_string(df)
When running this, I find the error below in the logs:
Exception on / [POST]
Traceback (most recent call last):
  File "/layers/google.python.pip/pip/lib/python3.7/site-packages/flask/app.py", line 2073, in wsgi_app
    response = self.full_dispatch_request()
  File "/layers/google.python.pip/pip/lib/python3.7/site-packages/flask/app.py", line 1518, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/layers/google.python.pip/pip/lib/python3.7/site-packages/flask/app.py", line 1516, in full_dispatch_request
    rv = self.dispatch_request()
  File "/layers/google.python.pip/pip/lib/python3.7/site-packages/flask/app.py", line 1502, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
  File "/layers/google.python.pip/pip/lib/python3.7/site-packages/functions_framework/__init__.py", line 99, in view_func
    return function(request._get_current_object())
  File "/workspace/main.py", line 66, in my_request
    bucket = storage_client.get_bucket('my-bucket')
  File "/layers/google.python.pip/pip/lib/python3.7/site-packages/google/cloud/storage/client.py", line 787, in get_bucket
    retry=retry,
  File "/layers/google.python.pip/pip/lib/python3.7/site-packages/google/cloud/storage/bucket.py", line 1037, in reload
    retry=retry,
  File "/layers/google.python.pip/pip/lib/python3.7/site-packages/google/cloud/storage/_helpers.py", line 244, in reload
    _target_object=self,
  File "/layers/google.python.pip/pip/lib/python3.7/site-packages/google/cloud/storage/client.py", line 373, in _get_resource
    _target_object=_target_object,
  File "/layers/google.python.pip/pip/lib/python3.7/site-packages/google/cloud/storage/_http.py", line 73, in api_request
    return call()
  File "/layers/google.python.pip/pip/lib/python3.7/site-packages/google/api_core/retry.py", line 288, in retry_wrapped_func
    on_error=on_error,
  File "/layers/google.python.pip/pip/lib/python3.7/site-packages/google/api_core/retry.py", line 190, in retry_target
    return target()
TypeError: api_request() got an unexpected keyword argument 'extra_api_info'
This 'extra_api_info' argument appears to be the culprit, but I have no idea what it means, and I never used to get this error when following exactly the same approach, so I wonder whether this is down to some change between different versions of the 'google.cloud' Python module.
I worked out the answer to my own question. It was indeed a module version issue as I suspected. Specifying google.cloud.storage==1.44.0 in my requirements.txt file solved the problem, as my code is seemingly not compatible with the latest version of that module (for reasons that escape me).
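For reference, the pin can look like this in requirements.txt (the exact version is simply the one that worked here); the error usually indicates that google-cloud-storage and its google-cloud-core / google-api-core dependencies are out of step, so upgrading them together is an alternative to pinning:
# requirements.txt
google-cloud-storage==1.44.0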
5
5
72,299,007
2022-5-19
https://stackoverflow.com/questions/72299007/how-to-create-a-class-with-multiple-inheritance
I have this code:
class Person:
    def __init__(self, name, last_name, age):
        self.name = name
        self.last_name = last_name
        self.age = age

class Student(Person):
    def __init__(self, name, last_name, age, indexNr, notes):
        super().__init__(name, last_name, age)
        self.indexNr = indexNr
        self.notes = notes

class Employee(Person):
    def __init__(self, name, last_name, age, salary, position):
        super().__init__(name, last_name, age)
        self.salary = salary
        self.position = position

class WorkingStudent(Student, Employee):
    def __init__(self, name, last_name, age, indexNr, notes, salary, position):
        Student.__init__(name, last_name, age, indexNr, notes)
        Employee.__init__(name, last_name, age, salary, position)
I want to create a WorkingStudent instance like this:
ws = WorkingStudent("john", "brown", 18, 1, [1,2,3], 1000, 'Programmer')
but it's not working; I get this error:
TypeError: __init__() missing 1 required positional argument: 'notes'
What am I doing wrong here? Also, I have already tried super() in the WorkingStudent class, but it calls only the constructor of the first passed class, i.e. in this case Student.
Note: I have already gone through multiple StackOverflow queries but I couldn't find anything that could answer this (or maybe I have missed it).
Instead of explicit classes, use super() to pass arguments along the MRO:
class Person:
    def __init__(self, name, last_name, age):
        self.name = name
        self.last_name = last_name
        self.age = age

class Student(Person):
    def __init__(self, name, last_name, age, indexNr, notes, salary, position):
        # since Employee comes after Student in the mro, pass its arguments using super
        super().__init__(name, last_name, age, salary, position)
        self.indexNr = indexNr
        self.notes = notes

class Employee(Person):
    def __init__(self, name, last_name, age, salary, position):
        super().__init__(name, last_name, age)
        self.salary = salary
        self.position = position

class WorkingStudent(Student, Employee):
    def __init__(self, name, last_name, age, indexNr, notes, salary, position):
        # pass all arguments along the mro
        super().__init__(name, last_name, age, indexNr, notes, salary, position)

# uses positional arguments
ws = WorkingStudent("john", "brown", 18, 1, [1,2,3], 1000, 'Programmer')
# then you can print stuff like
print(f"My name is {ws.name} {ws.last_name}. I'm a {ws.position} and I'm {ws.age} years old.")
# My name is john brown. I'm a Programmer and I'm 18 years old.
Check the MRO:
WorkingStudent.__mro__
(__main__.WorkingStudent, __main__.Student, __main__.Employee, __main__.Person, object)
When you create an instance of WorkingStudent, it's better if you pass keyword arguments so that you don't have to worry about messing up the order of arguments. Since WorkingStudent defers the definition of attributes to parent classes, immediately pass all arguments up the hierarchy using super().__init__(**kwargs), since a child class doesn't need to know about the parameters it doesn't handle. The first parent class is Student, so self.indexNr etc. are defined there. The next parent class in the MRO is Employee, so from Student, pass the remaining keyword arguments to it, using super().__init__(**kwargs) yet again. From Employee, define the attributes defined there and pass the rest along the MRO (to Person) via super().__init__(**kwargs) yet again.
class Person:
    def __init__(self, name, last_name, age):
        self.name = name
        self.last_name = last_name
        self.age = age

class Student(Person):
    def __init__(self, indexNr, notes, **kwargs):
        # since Employee comes after Student in the mro, pass its arguments using super
        super().__init__(**kwargs)
        self.indexNr = indexNr
        self.notes = notes

class Employee(Person):
    def __init__(self, salary, position, **kwargs):
        super().__init__(**kwargs)
        self.salary = salary
        self.position = position

class WorkingStudent(Student, Employee):
    def __init__(self, **kwargs):
        # pass all arguments along the mro
        super().__init__(**kwargs)

# keyword arguments (not positional arguments like the case above)
ws = WorkingStudent(name="john", last_name="brown", age=18, indexNr=1, notes=[1,2,3], salary=1000, position='Programmer')
9
10
72,293,719
2022-5-18
https://stackoverflow.com/questions/72293719/pytest-cannot-be-executed-from-python-3-10-4
I already saw one old post regarding this topic - An error while trying to execute tests on python 3.10 with pytest. I am having the same problem, Python 3.10.4 and pytest 7.1.2: when I run the command
$ pipenv run pytest
I get an error:
$ pipenv run pytest
============================= test session starts =============================
platform win32 -- Python 3.10.4, pytest-4.0.0, py-1.7.0, pluggy-0.8.0
rootdir: **DIR**, inifile:
collected 0 items / 1 errors

=================================== ERRORS ====================================
____________________ ERROR collecting test/test_person.py _____________________
<frozen importlib._bootstrap>:939: in _find_spec
???
E   AttributeError: 'AssertionRewritingHook' object has no attribute 'find_spec'

During handling of the above exception, another exception occurred:
**LOCAL_PATH**\.virtualenvs\iamdb-2ZawZA6J\lib\site-packages\py\_path\local.py:668: in pyimport
    __import__(modname)
<frozen importlib._bootstrap>:1027: in _find_and_load
???
<frozen importlib._bootstrap>:1002: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:941: in _find_spec
???
<frozen importlib._bootstrap>:915: in _find_spec_legacy
???
**LOCAL_PATH**\.virtualenvs\iamdb-2ZawZA6J\lib\site-packages\_pytest\assertion\rewrite.py:162: in find_module
    source_stat, co = _rewrite_test(self.config, fn_pypath)
**LOCAL_PATH**\.virtualenvs\iamdb-2ZawZA6J\lib\site-packages\_pytest\assertion\rewrite.py:412: in _rewrite_test
    co = compile(tree, fn.strpath, "exec", dont_inherit=True)
E   TypeError: required field "lineno" missing from alias
!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!
=========================== 1 error in 0.16 seconds ===========================
Does anyone have a solution?
As per the comment from Marco Bonelli, the wrong pytest version was actually installed (the session header shows pytest-4.0.0, not 7.1.2). So the command
pipenv update pytest
fixed the issue.
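To keep the environment from silently drifting back to an old release, you could also pin a minimum version in the Pipfile (a sketch; the section and the bound are only an example) and then re-run pipenv update pytest:
[dev-packages]
pytest = ">=7.1"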
4
1
72,291,290
2022-5-18
https://stackoverflow.com/questions/72291290/how-to-create-new-column-dynamically-in-pandas-like-we-do-in-pyspark-withcolumn
from statistics import mean
import pandas as pd

df = pd.DataFrame(columns=['A', 'B', 'C'])
df["A"] = [1, 2, 3, 4, 4, 5, 6]
df["B"] = ["Feb", "Feb", "Feb", "May", "May", "May", "May"]
df["C"] = [10, 20, 30, 40, 30, 50, 60]

df1 = df.groupby(["A","B"]).agg(mean_err=("C", mean)).reset_index()
df1["threshold"] = df1["A"] * df1["mean_err"]
Instead of the last line of code, how can I do it as in PySpark's .withColumn()? That approach won't work here. I would like to create the new column by using the output of the operation on the fly, similarly to how we do it with PySpark's withColumn method. Does anybody have an idea how to do this?
Option 1: DataFrame.eval
(df.groupby(['A', 'B'], as_index=False)
   .agg(mean_err=('C', 'mean'))
   .eval('threshold = A * mean_err'))
Option 2: DataFrame.assign
(df.groupby(['A', 'B'], as_index=False)
   .agg(mean_err=('C', 'mean'))
   .assign(threshold=lambda x: x['A'] * x['mean_err']))

   A    B  mean_err  threshold
0  1  Feb      10.0       10.0
1  2  Feb      20.0       40.0
2  3  Feb      30.0       90.0
3  4  May      35.0      140.0
4  5  May      50.0      250.0
5  6  May      60.0      360.0
6
7
72,288,401
2022-5-18
https://stackoverflow.com/questions/72288401/how-to-concat-lists-integers-and-strings-into-one-string
I have the following variables:
a = [1, 2, 3]
b = "de"    # <-- not a (usual) list !
c = 5       # <-- not a list !
d = [4, 5, 23, 11, 5]
e = ["dg", "kuku"]
Now I want to concatenate all of a, b, c, d, e into one list:
[1, 2, 3, "de", 5, 4, 5, 23, 11, 5, "dg", "kuku"]
I have tried itertools.chain but it didn't work. Please advise how I can make the concatenation.
chain works with iterables. What you mean is: concatenate these lists and raw values. I see two steps:
import itertools

def ensure_list(x):
    if isinstance(x, list):
        return x
    return [x]

lists = map(ensure_list, (a, b, c, d, e))
concatenated = list(itertools.chain.from_iterable(lists))
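Running it against the variables from the question gives the expected flat list:
print(concatenated)
# [1, 2, 3, 'de', 5, 4, 5, 23, 11, 5, 'dg', 'kuku']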
5
4
72,283,998
2022-5-18
https://stackoverflow.com/questions/72283998/is-it-possible-to-save-boolean-numpy-arrays-on-disk-as-1bit-per-element-with-mem
Is it possible to save NumPy arrays on disk in a boolean format where each element takes only 1 bit? This answer suggests using packbits and unpackbits; however, from the documentation, it seems that this may not support memory mapping. Is there a way to store 1-bit arrays on disk with memmap support?
Reason for the memmap requirement: I'm training my neural network on a database of full HD (1920x1080) images, but I randomly crop out a 256x256 patch for each iteration. Since reading the full image is time-consuming, I use memmap to read only the required patch. Now I want to use a binary mask along with my images, hence this requirement.
numpy does not support 1 bit per element arrays, and I doubt memmap has such a feature. However, there is a simple workaround using packbits. Since your case is not bitwise random access, you can read it as a 1 byte per element array.
import numpy as np

# A binary mask represented as a 1 byte per element array.
full_size_mask = np.random.randint(0, 2, size=[1920, 1080], dtype=np.uint8)

# Pack mask vertically.
packed_mask = np.packbits(full_size_mask, axis=0)

# Save as a memmap compatible file.
buffer = np.memmap("./temp.bin", mode='w+', dtype=packed_mask.dtype, shape=packed_mask.shape)
buffer[:] = packed_mask
buffer.flush()
del buffer

# Open as a memmap file.
packed_mask = np.memmap("./temp.bin", mode='r', dtype=packed_mask.dtype, shape=packed_mask.shape)

# Rect where you want to crop.
top = 555
left = 777
width = 256
height = 256

# Read the area containing the rect.
packed_top = top // 8
packed_bottom = (top + height) // 8 + 1
packed_patch = packed_mask[packed_top:packed_bottom, left:left + width]

# Unpack and crop the actual area.
patch_top = top - packed_top * 8
patch_mask = np.unpackbits(packed_patch, axis=0)[patch_top:patch_top + height]

# Check that the mask is cropped from the correct area.
print(np.all(patch_mask == full_size_mask[top:top + height, left:left + width]))
Note that this solution could (and likely will) read extra bits. To be specific, 7 bits maximum at both ends. In your case, it will be 7x2x256 bits, but this is only about 5% of the patch, so I believe it is negligible.
By the way, this is not an answer to your question, but when you are dealing with binary masks such as labels for image segmentation, compressing with zip may drastically reduce the file size. It is possible that it could be reduced to less than 8 KB per image (not per patch). You might want to consider this option as well.
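A minimal sketch of the zip-style compression mentioned at the end, using NumPy's built-in compressed container (the file name is made up, and the actual size savings depend on how structured the mask is; a purely random mask compresses poorly):
import numpy as np

full_size_mask = np.random.randint(0, 2, size=[1920, 1080], dtype=np.uint8)
np.savez_compressed("mask.npz", mask=np.packbits(full_size_mask))

# Load, unpack, and restore the original shape
mask = np.unpackbits(np.load("mask.npz")["mask"])[:1920 * 1080].reshape(1920, 1080)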
6
2
72,284,064
2022-5-18
https://stackoverflow.com/questions/72284064/regex-expressions-deprecation-warning
The following fragment of code comes from my GitHub repository found here. It opens a binary file and extracts the text within <header> tags. These are the crucial lines:
gbxfile = open(filename,'rb')
gbx_data = gbxfile.read()

gbx_header = b'(<header)((?s).*)(</header>)'
header_intermediate = re.findall(gbx_header, gbx_data)
The script works, BUT it produces the following DeprecationWarning:
DeprecationWarning: Flags not at the start of the expression b'(<header)((?s).*)(</' (truncated)
  header_intermediate = re.findall(gbx_header, gbx_data)
What is the correct use of the regular expression in gbx_header, so that this warning is not displayed?
You can check the Python bug tracker, Issue 39394; the warning was introduced in Python 3.6. The point is that Python's re module now does not allow inline modifiers that are not at the start of the string. In Python 2.x, you could use your pattern without any problem or warning, as (?s) was silently applied to the whole regular expression under the hood. Since that is not always the expected behavior, the Python developers decided to produce a warning. Note that you can use inline modifier groups in Python re now, see "restrict 1 word as case sensitive and other as case insensitive in python regex | (pipe)".
So, the solutions are:
Putting (?s) (or any other inline modifier) at the start of the pattern: (?s)(<header)(.*)(</header>)
Using the re option: re.S / re.DOTALL instead of (?s), re.I / re.IGNORECASE instead of (?i), etc.
Using workarounds: instead of ., use [\w\W] / [\d\D] / [\s\S] if you do not want to use (?s) or re.S / re.DOTALL.
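Applied to the snippet from the question (gbx_data is the bytes read from the file there), the first two options look like this:
import re

# Option 1: move the inline flag to the very start of the pattern
gbx_header = b'(?s)(<header)(.*)(</header>)'
header_intermediate = re.findall(gbx_header, gbx_data)

# Option 2: drop the inline flag and pass re.DOTALL instead
gbx_header = b'(<header)(.*)(</header>)'
header_intermediate = re.findall(gbx_header, gbx_data, re.DOTALL)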
5
2