question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
71,670,587 | 2022-3-30 | https://stackoverflow.com/questions/71670587/node-gyp-rebuilding-failing-on-macos-12-3-to-make-for-hunspell-with-error-127 | I started facing and error on node-gyp when running make for hunspell which a dependency from the the npm library spellchecker after updating my macOS to 12.3 last week. No other change related to environment or versions changed, and compilation still work for colleagues of mine: > [email protected] install /Users/myuser/projects/project/packages/data/node_modules/spellchecker > node-gyp rebuild CXX(target) Release/obj.target/hunspell/vendor/hunspell/src/hunspell/affentry.o CXX(target) Release/obj.target/hunspell/vendor/hunspell/src/hunspell/affixmgr.o CXX(target) Release/obj.target/hunspell/vendor/hunspell/src/hunspell/csutil.o CXX(target) Release/obj.target/hunspell/vendor/hunspell/src/hunspell/dictmgr.o CXX(target) Release/obj.target/hunspell/vendor/hunspell/src/hunspell/filemgr.o CXX(target) Release/obj.target/hunspell/vendor/hunspell/src/hunspell/hashmgr.o CXX(target) Release/obj.target/hunspell/vendor/hunspell/src/hunspell/hunspell.o CXX(target) Release/obj.target/hunspell/vendor/hunspell/src/hunspell/hunzip.o CXX(target) Release/obj.target/hunspell/vendor/hunspell/src/hunspell/phonet.o CXX(target) Release/obj.target/hunspell/vendor/hunspell/src/hunspell/replist.o CXX(target) Release/obj.target/hunspell/vendor/hunspell/src/hunspell/suggestmgr.o CXX(target) Release/obj.target/hunspell/vendor/hunspell/src/parsers/textparser.o LIBTOOL-STATIC Release/hunspell.a lerna ERR! npm install stderr: ../vendor/hunspell/src/hunspell/affentry.cxx:544:47: warning: while loop has empty body [-Wempty-body] while (p && *p != ']' && (p = nextchar(p))); ^ ../vendor/hunspell/src/hunspell/affentry.cxx:544:47: note: put the semicolon on a separate line to silence this warning 1 warning generated. In file included from ../vendor/hunspell/src/hunspell/affixmgr.cxx:12: ../vendor/hunspell/src/hunspell/affentry.hxx:30:105: warning: implicit conversion of NULL constant to 'unsigned short' [-Wnull-conversion] struct hentry * check_twosfx(const char * word, int len, char in_compound, const FLAG needflag = NULL); ~ ^~~~ 0 ../vendor/hunspell/src/hunspell/affentry.hxx:93:114: warning: implicit conversion of NULL constant to 'unsigned short' [-Wnull-conversion] struct hentry * check_twosfx(const char * word, int len, int optflags, PfxEntry* ppfx, const FLAG needflag = NULL); ~ ^~~~ 0 ../vendor/hunspell/src/hunspell/affixmgr.cxx:3654:65: warning: 'strncmp' call operates on objects of type 'const char' while the size is based on a different type 'const char *' [-Wsizeof-pointer-memaccess] if (strncmp(piece, keyword, sizeof(keyword)) != 0) { ~~~~~~~ ^~~~~~~ ../vendor/hunspell/src/hunspell/affixmgr.cxx:3654:65: note: did you mean to provide an explicit length? if (strncmp(piece, keyword, sizeof(keyword)) != 0) { ^~~~~~~ 3 warnings generated. In file included from ../vendor/hunspell/src/hunspell/hashmgr.cxx:9: ../vendor/hunspell/src/hunspell/hashmgr.hxx:17:21: warning: private field 'userword' is not used [-Wunused-private-field] int userword; ^ 1 warning generated env: python: No such file or directory make: *** [Release/hunspell.a] Error 127 gyp ERR! build error gyp ERR! stack Error: `make` failed with exit code: 2 gyp ERR! stack at ChildProcess.onExit (/Users/myuser/.volta/tools/image/npm/6.14.16/node_modules/node-gyp/lib/build.js:194:23) gyp ERR! stack at ChildProcess.emit (events.js:400:28) gyp ERR! 
stack at Process.ChildProcess._handle.onexit (internal/child_process.js:282:12) gyp ERR! System Darwin 21.4.0 gyp ERR! command "/Users/myuser/.volta/tools/image/node/14.19.1/bin/node" "/Users/myuser/.volta/tools/image/npm/6.14.16/node_modules/node-gyp/bin/node-gyp.js" "rebuild" gyp ERR! cwd /Users/myuser/projects/project/packages/data/node_modules/spellchecker gyp ERR! node -v v14.19.1 gyp ERR! node-gyp -v v5.1.0 gyp ERR! not ok npm WARN @typescript-eslint/[email protected] requires a peer of eslint@* but none is installed. You must install peer dependencies yourself. npm WARN @typescript-eslint/[email protected] requires a peer of eslint@^5.0.0 but none is installed. You must install peer dependencies yourself. npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! [email protected] install: `node-gyp rebuild` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the [email protected] install script. npm ERR! This is probably not a problem with npm. There is likely additional logging output above. npm ERR! A complete log of this run can be found in: npm ERR! /Users/myuser/.npm/_logs/2022-03-29T14_24_01_048Z-debug.log System summary: Volta 1.0.6 NodeJS v14.19.1 NPM 6.14.16 macOS 12.3 (System Darwin 21.4.0) node-gyp v5.1.0 spellchecker 3.7.1 | The problem was related to this line in the log: env: python: No such file or directory Apple removed the default Python installation (Python 2.7) that used to come with macOS (macOS 12.3 Release Notes). The fix is quite simple and consists of installing Python and changing the path so it becomes the default one. This tutorial covers it: https://dev.to/malwarebo/how-to-set-python3-as-a-default-python-version-on-mac-4jjf | 7 | 19 |
71,650,452 | 2022-3-28 | https://stackoverflow.com/questions/71650452/use-fastapi-to-parse-incoming-post-request-from-slack | I'm building a FastAPI server to receive requests sent by slack slash command. Using the code below, I could see that the following: token=BLAHBLAH&team_id=BLAHBLAH&team_domain=myteam&channel_id=BLAHBLAH&channel_name=testme&user_id=BLAH&user_name=myname&command=%2Fwhatever&text=test&api_app_id=BLAHBLAH&is_enterprise_install=false&response_url=https%3A%2F%2Fhooks.slack.com%2Fcommands%BLAHBLAH&trigger_id=BLAHBLAHBLAH was printed, which is exactly the payload I saw in the official docs. I'm trying to use the payload information to do something, and I'm curious whether there's a great way of parsing this payload info. I can definitely parse this payload using the split() function or any other beautiful functions, but I'm curious whether there is a "de facto" way of dealing with slack payload. Thanks in advance! from fastapi import FastAPI, Request app = FastAPI() @app.post("/") async def root(request: Request): request_body = await request.body() print(request_body) | Receive JSON data You would normally use Pydantic models to declare a request body—if you were about to receive data in JSON format—thus, benefiting from the automatic validation that Pydantic has to offer (for more options on how to post JSON data, have a look at this answer). In Pydantic V2 the dict() method has been replaced by model_dump(), in case you had to convert the model into a dictionary. So, you would have to define a Pydantic model like this: from fastapi import FastAPI from pydantic import BaseModel class Item(BaseModel): token: str team_id: str team_domain: str # etc. app = FastAPI() @app.post("/") def root(item: Item): print(item.model_dump()) # convert into dict (if required) return item The payload would look like this: { "token": "gIkuvaNzQIHg97ATvDxqgjtO" "team_id": "Foo", "team_domain": "bar", # etc. } Receive Form data If, however, you were about to receive the payload as Form data, just like what slack API does (as shown in the link you provided), you could use Form fileds. With Form fields, your payload will still be validated against those fields and the type you define them with. You would need, however, to define all the parameters in the endpoint, as described in the above link and as shown below: from fastapi import Form @app.post("/") def root(token: str = Form(...), team_id: str = Form(...), team_domain: str = Form(...)): return {"token": token, "team_id": team_id, "team_domain": team_domain} or to avoid specifying the parameters in an endpoint, in case you had a great number of Form fields, you could create a custom dependency class (using the @dataclass decorator, for simplicity), which would allow you to define multiple Form fields inside a separate class, and only use that class definition in your endpoint—see this answer and this answer for more details on FastAPI dependencies. Example: from dataclasses import dataclass from fastapi import FastAPI, Form, Depends @dataclass class Item: token: str = Form(...) team_id: str = Form(...) team_domain: str = Form(...) #... app = FastAPI() @app.post("/") def root(data: Item = Depends()): return data As of FastAPI 0.113.0 (see the relevant documentation as well), support has been added for decalring Form fields with Pydantic models (hence, no need for using a @dataclass as shown above): from fastapi import FastAPI, Form, Depends from pydantic import BaseModel class Item(BaseModel): token: str team_id: str team_domain: str #... 
app = FastAPI() @app.post("/") def root(data: Item = Form()): return data Notes As FastAPI is actually Starlette underneath, even if you still had to access the request body in the way you do in the question—which can be useful when dealing with arbitrary data that are unknown beforehand in order to specify them in a Pydantic model or directly in the endpoint—you should rather use methods such as request.json() or request.form(), as described in Starlette documentation, which would allow you to get the request body parsed as JSON or form-data, respectively. Please have a look at this answer and this answer for more details and examples. | 4 | 11 |
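As a quick sanity check of the Form-based FastAPI endpoint in the entry above, the sketch below posts the same kind of URL-encoded payload that Slack slash commands send. It is an illustrative assumption that the app is running locally on port 8000; the field values are simply the sample ones from the answer.

```python
# Hypothetical client-side test of the Form-based endpoint above. Assumes the
# app is served locally at http://127.0.0.1:8000/ (not part of the original
# answer). Passing `data=` makes requests send the body as
# application/x-www-form-urlencoded, which is what Slack uses.
import requests

payload = {
    "token": "gIkuvaNzQIHg97ATvDxqgjtO",
    "team_id": "Foo",
    "team_domain": "bar",
}
r = requests.post("http://127.0.0.1:8000/", data=payload)
print(r.status_code, r.json())
```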
71,613,305 | 2022-3-25 | https://stackoverflow.com/questions/71613305/how-to-process-requests-from-multiiple-users-using-ml-model-and-fastapi | I'm studying the process of distributing artificial intelligence modules through FastAPI. I created a FastAPI app that answers questions using a pre-learned Machine Learning model. In this case, it is not a problem for one user to use it, but when multiple users use it at the same time, the response may be too slow. Hence, when multiple users enter a question, is there any way to copy the model and load it in at once? class sentencebert_ai(): def __init__(self) -> None: super().__init__() def ask_query(self,query, topN): startt = time.time() ask_result = [] score = [] result_value = [] embedder = torch.load(model_path) corpus_embeddings = embedder.encode(corpus, convert_to_tensor=True) query_embedding = embedder.encode(query, convert_to_tensor=True) cos_scores = util.pytorch_cos_sim(query_embedding, corpus_embeddings)[0] #torch.Size([121])121개의 말뭉치에 대한 코사인 유사도 값이다. cos_scores = cos_scores.cpu() top_results = np.argpartition(-cos_scores, range(topN))[0:topN] for idx in top_results[0:topN]: ask_result.append(corpusid[idx].item()) #.item()으로 접근하는 이유는 tensor(5)에서 해당 숫자에 접근하기 위한 방식이다. score.append(round(cos_scores[idx].item(),3)) #서버에 json array 형태로 내보내기 위한 작업 for i,e in zip(ask_result,score): result_value.append({"pred_id":i,"pred_weight":e}) endd = time.time() print('시간체크',endd-startt) return result_value # return ','.join(str(e) for e in ask_result),','.join(str(e) for e in score) class Item_inference(BaseModel): text : str topN : Optional[int] = 1 @app.post("/retrieval", tags=["knowledge recommendation"]) async def Knowledge_recommendation(item: Item_inference): # db.append(item.dict()) item.dict() results = _ai.ask_query(item.text, item.topN) return results if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--port", default='9003', type=int) # parser.add_argument("--mode", default='cpu', type=str, help='cpu for CPU mode, gpu for GPU mode') args = parser.parse_args() _ai = sentencebert_ai() uvicorn.run(app, host="0.0.0.0", port=args.port,workers=4) corrected version @app.post("/aaa") def your_endpoint(request: Request, item:Item_inference): start = time.time() model = request.app.state.model item.dict() #커널 실행시 필요 _ai = sentencebert_ai() results = _ai.ask_query(item.text, item.topN,model) end = time.time() print(end-start) return results ``` | First, you should rather not load your model every time a request arrives, but rahter have it loaded once at startup (you could use the startup event for this) and store it on the app instance—using the generic app.state attribute (see implementation of State too)—which you can later retrieve, as described here and here Update: startup event has recently been deprecated, and since then, the recommended way to handle startup and shutdown events is using the lifespan handler, as demonstrated in this answer. You might still find the references provided earlier useful, as they provide information on additional concepts in FastAPI. For now, you could keep using the startup event, but it is recommended not to, as it might be completely removed from future FastAPI/Starlette versions. For instance: from fastapi import Request @app.on_event("startup") async def startup_event(): app.state.model = torch.load('<model_path>') Second, if you do not have any async def functions inside your endpoint that you have to await, you could define your endpoint with normal def instead. 
In this way, FastAPI will run requests to that def endpoint in a separate thread from an external threadpool, which will then be awaited (so that the blocking operations inside won't block the event loop); whereas, async def endpoints run directly in the event loop, and thus any synchronous blocking operations inside would block the event loop. Please have a look at the answers here and here, as well as all the references included in them, in order to understand the concept of async/await, as well as the difference between using def and async def in FastAPI. Example with normal def endpoint: @app.post('/') def your_endpoint(request: Request): model = request.app.state.model # run your synchronous ask_query() function here Alternatively, as described here, you could use an async def endpoint and have your CPU-bound task run in a separate process (which is more suited than using a thread), using ProcessPoolExecutor, and integrate it with asyncio, in order to await for it to complete and return the result(s). Beware that it is important to protect the main loop of code to avoid recursive spawning of subprocesses, etc.; that is, your code must be under if __name__ == '__main__'. Note that in the example below a new ProcessPool is created every time a request arrives at / endpoint, but a more suitable approach would be to have a reusable ProcessPoolExecutor created at application startup instead, which you could add to request.state, as demonstrated in this answer. Also, as explained earlier, startup event is now deprecated, and you should rather use a lifepsan event, as demonstrated in the linked answer provided earlier at the beginning of this answer, as well as the one provided just above. Example from fastapi import FastAPI, Request import concurrent.futures import asyncio import uvicorn class MyAIClass(): def __init__(self) -> None: super().__init__() def ask_query(self, model, query, topN): # ... ai = MyAIClass() app = FastAPI() @app.on_event("startup") async def startup_event(): app.state.model = torch.load('<model_path>') @app.post('/') async def your_endpoint(request: Request): model = request.app.state.model loop = asyncio.get_running_loop() with concurrent.futures.ProcessPoolExecutor() as pool: res = await loop.run_in_executor(pool, ai.ask_query, model, item.text, item.topN) if __name__ == '__main__': uvicorn.run(app) Using multiple workers Note that if you plan on having several workers active at the same time, each worker has its own memory—in other words, workers do not share the same memory—and hence, each worker will load their own instance of the ML model into memory (RAM). If, for instance, you are using four workers for your app, the model will result in being loaded four times into RAM. Thus, if the model, as well as other variables in your code, are consuming a large amount of memory, each process/worker will consume an equivalent amount of memory. If you would like to avoid that, you may have a look at how to share objects across multiple workers, as well as—if you are using Gunicorn as a process manager with Uvicorn workers—you can use Gunicorn's --preload flag. As per the documentation: Command line: --preload Default: False Load application code before the worker processes are forked. By preloading an application you can save some RAM resources as well as speed up server boot times. Although, if you defer application loading to each worker process, you can reload your application code easily by restarting workers. 
Example: gunicorn --workers 4 --preload --worker-class=uvicorn.workers.UvicornWorker app:app Note that you cannot combine Gunicorn's --preload with --reload flag, as when the code is preloaded into the master process, the new worker processes—which will automatically be created, if your application code has changed—will still have the old code in memory, due to how fork() works. | 6 | 10 |
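The answer above notes that the startup event is deprecated in favour of the lifespan handler but does not show one; the sketch below is a minimal, hedged version of that approach, assuming FastAPI ≥ 0.93 and the same torch model ('<model_path>' is a placeholder, not a real path).

```python
from contextlib import asynccontextmanager
from fastapi import FastAPI, Request
import torch

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Load the model once at startup and keep it on the app instance.
    app.state.model = torch.load('<model_path>')  # placeholder path
    yield
    # Optional cleanup at shutdown.
    del app.state.model

app = FastAPI(lifespan=lifespan)

@app.post('/')
def your_endpoint(request: Request):
    model = request.app.state.model
    # run the synchronous ask_query() here, as in the examples above
```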
71,595,635 | 2022-3-24 | https://stackoverflow.com/questions/71595635/render-numpy-array-in-fastapi | I have found How to return a numpy array as an image using FastAPI?, however, I am still struggling to show the image, which appears just as a white square. I read an array into io.BytesIO like so: def iterarray(array): output = io.BytesIO() np.savez(output, array) yield output.get_value() In my endpoint, my return is StreamingResponse(iterarray(), media_type='application/octet-stream') When I leave the media_type blank to be inferred a zipfile is downloaded. How do I get the array to be displayed as an image? | Option 1 - Return image as bytes The below examples show how to convert an image loaded from disk, or an in-memory image (in the form of numpy array), into bytes (using either PIL or OpenCV libraries) and return them using a custom Response directly. For the purposes of this demo, the below code is used to create the in-memory sample image (numpy array), which is based on this answer. # Function to create a sample RGB image def create_img(): w, h = 512, 512 arr = np.zeros((h, w, 3), dtype=np.uint8) arr[0:256, 0:256] = [255, 0, 0] # red patch in upper left return arr Using PIL Server side: You can load an image from disk using Image.open, or use Image.fromarray to load an in-memory image (Note: For demo purposes, when the case is loading the image from disk, the below demonstrates that operation inside the route. However, if the same image is going to be served multiple times, one could load the image only once at startup and store it to the app instance, as described in this answer and this answer). Next, write the image to a buffered stream, i.e., BytesIO, and use the getvalue() method to get the entire contents of the buffer. Even though the buffered stream is garbage collected when goes out of scope, it is generally better to call close() or use the with statement, as shown here and in the example below. from fastapi import Response from PIL import Image import numpy as np import io @app.get('/image', response_class=Response) def get_image(): # loading image from disk # im = Image.open('test.png') # using an in-memory image arr = create_img() im = Image.fromarray(arr) # save image to an in-memory bytes buffer with io.BytesIO() as buf: im.save(buf, format='PNG') im_bytes = buf.getvalue() headers = {'Content-Disposition': 'inline; filename="test.png"'} return Response(im_bytes, headers=headers, media_type='image/png') Client side: The below demonstrates how to send a request to the above endpoint using Python requests module, and write the received bytes to a file, or convert the bytes back into PIL Image, as described here. import requests from PIL import Image url = 'http://127.0.0.1:8000/image' r = requests.get(url=url) # write raw bytes to file with open('test.png', 'wb') as f: f.write(r.content) # or, convert back to PIL Image # im = Image.open(io.BytesIO(r.content)) # im.save('test.png') Using OpenCV Server side: You can load an image from disk using cv2.imread() function, or use an in-memory image, which—if it is in RGB order, as in the example below—needs to be converted, as OpenCV uses BGR as its default colour order for images. Next, use cv2.imencode() function, which compresses the image data (based on the file extension you pass that defines the output format, i.e., .png, .jpg, etc.) and stores it in an in-memory buffer that is used to transfer the data over the network. 
import cv2 @app.get('/image', response_class=Response) def get_image(): # loading image from disk # arr = cv2.imread('test.png', cv2.IMREAD_UNCHANGED) # using an in-memory image arr = create_img() arr = cv2.cvtColor(arr, cv2.COLOR_RGB2BGR) # arr = cv2.cvtColor(arr, cv2.COLOR_RGBA2BGRA) # if dealing with 4-channel RGBA (transparent) image success, im = cv2.imencode('.png', arr) headers = {'Content-Disposition': 'inline; filename="test.png"'} return Response(im.tobytes(), headers=headers, media_type='image/png') Client side: On client side, you can write the raw bytes to a file, or use the numpy.frombuffer() function and cv2.imdecode() function to decompress the buffer into an image format (similar to this)—cv2.imdecode() does not require a file extension, as the correct codec will be deduced from the first bytes of the compressed image in the buffer. url = 'http://127.0.0.1:8000/image' r = requests.get(url=url) # write raw bytes to file with open('test.png', 'wb') as f: f.write(r.content) # or, convert back to image format # arr = np.frombuffer(r.content, np.uint8) # img_np = cv2.imdecode(arr, cv2.IMREAD_UNCHANGED) # cv2.imwrite('test.png', img_np) Useful Information Since you noted that you would like the image displayed similar to a FileResponse, using a custom Response to return the bytes should be the way to do this, instead of using StreamingResponse (as shown in your question). To indicate that the image should be viewed in the browser, the HTTP response should include the following Content-Disposition header, as described here and as shown in the above examples (the quotes around the filename are required, if the filename contains special characters): headers = {'Content-Disposition': 'inline; filename="test.png"'} Whereas, to have the image downloaded rather than viewed (use attachment instead of inline): headers = {'Content-Disposition': 'attachment; filename="test.png"'} If you would like to display (or download) the image using a JavaScript interface, such as Fetch API or Axios, have a look at the answers here and here. As for StreamingResponse, if the entire numpy array/image is already loaded into memory, StreamingResponse would not be necessary at all (and certainly, should not be the preferred choice for returning data that is already loaded in memory to the client). StreamingResponse streams by iterating over the chunks provided by your iter() function. As shown in the implementation of StreamingResponse class, if the iterator/generator you passed is not an AsyncIterable, a thread from the external threadpool—see this answer for more details on that threadpool—will be spawned to run the synchronous iterator you passed, using Starlette's iterate_in_threadpool() function, in order to avoid blocking the event loop. It should also be noted that the Content-Length response header is not set when using StreamingResponse (which makes sense, since StreamingResponse is supposed to be used when you don't know the size of the response beforehand), unlike other Response classes of FastAPI/Starlette that set that header for you, so that the browser will know where the data ends. 
It should be kept that way, as if the Content-Length header is included (of which its value should match the overall response body size in bytes), then to the server StreamingResponse would look the same as Response, as the server would not use transfer-encoding: chunked in that case (even though at the application level the two would still differ)—take a look at Uvicorn's documentation on response headers and MDN'S documentation on Transfer-Encoding: chunked for further details. Even in cases where you know the body size beforehand, but would still need using StreamingResponse, as it would allow you to load and transfer data by specifying the chunk size of your choice, unlike FileResponse (see later on for more details), you should ensure not setting the Content-Length header on your own, e.g., StreamingResponse(iterfile(), headers={'Content-Length': str(content_length)}), as this would result in the server not using transfer-encoding: chunked (regardless of the application delivering the data to the web server in chunks, as shown in the relevant implementation). As described in this answer: Chunked transfer encoding makes sense when you don't know the size of your output ahead of time, and you don't want to wait to collect it all to find out before you start sending it to the client. That can apply to stuff like serving the results of slow database queries, but it doesn't generally apply to serving images. Even if you would like to stream an image file that is saved on the disk, file-like objects, such as those created by open(), are normal iterators; thus, you could return them directly in a StreamingResponse, as described in the documentation and as shown below (if you find yield from f being rather slow, when using StreamingResponse, please have a look at this answer on how to read the file in chunks with the chunk size of your choice—which should be set based on your needs, as well as your server's resources). It should be noted that using FileResponse would also read the file contents into memory in chunks, instead of the entire contents at once. However, as can be seen in the implementation of FileResponse class, the chunk size used is pre-defined and set to 64KB. Thus, based on one's requirements, they should decide on which of the two Response classes they should use. @app.get('/image') def get_image(): def iterfile(): with open('test.png', mode='rb') as f: yield from f return StreamingResponse(iterfile(), media_type='image/png') Or, if the image was already loaded into memory instead, and then saved into a BytesIO buffered stream, since BytesIO is a file-like object (like all the concrete classes of io module), you could return it directly in a StreamingResponse (or, preferably, simply call buf.getvalue() to get the entire image bytes and return them using a custom Response directly, as shown earlier). In case of returning the buffered stream, as shown in the example below, please remember to call buf.seek(0), in order to rewind the cursor to the start of the buffer, as well as call close() inside a background task, in order to discard the buffer, once the response has been sent to the client. 
from fastapi import BackgroundTasks @app.get('/image') def get_image(background_tasks: BackgroundTasks): # supposedly, the buffer already existed in memory arr = create_img() im = Image.fromarray(arr) buf = BytesIO() im.save(buf, format='PNG') # rewind the cursor to the start of the buffer buf.seek(0) # discard the buffer, after the response is returned background_tasks.add_task(buf.close) return StreamingResponse(buf, media_type='image/png') Thus, in your case scenario, the most suited approach would be to return a custom Response directly, including your custom content and media_type, as well as setting the Content-Disposition header, as described earlier, so that the image is viewed in the browser. Option 2 - Return image as JSON-encoded numpy array The below should not be used for displaying the image in the browser, but it is rather added here for the sake of completeness, showing how to convert an image into a numpy array (preferably, using asarray() function), then return the data in JSON format, and finally, convert the data back to image on client side, as described in this and this answer. For faster alternatives to the standard Python json library, see this answer. Using PIL Server side: from PIL import Image import numpy as np import json @app.get('/image') def get_image(): im = Image.open('test.png') # im = Image.open('test.png').convert('RGBA') # if dealing with 4-channel RGBA (transparent) image arr = np.asarray(im) return json.dumps(arr.tolist()) Client side: import requests from PIL import Image import numpy as np import json url = 'http://127.0.0.1:8000/image' r = requests.get(url=url) arr = np.asarray(json.loads(r.json())).astype(np.uint8) im = Image.fromarray(arr) im.save('test_received.png') Using OpenCV Server side: import cv2 import json @app.get('/image') def get_image(): arr = cv2.imread('test.png', cv2.IMREAD_UNCHANGED) return json.dumps(arr.tolist()) Client side: import requests import numpy as np import cv2 import json url = 'http://127.0.0.1:8000/image' r = requests.get(url=url) arr = np.asarray(json.loads(r.json())).astype(np.uint8) cv2.imwrite('test_received.png', arr) | 4 | 11 |
71,591,971 | 2022-3-23 | https://stackoverflow.com/questions/71591971/how-can-i-fix-the-zsh-command-not-found-python-error-macos-monterey-12-3 | Since I got the macOS v12.3 (Monterey) update (not sure it's related though), I have been getting this error when I try to run my Python code in the terminal: I am using Python 3.10.3, Atom IDE, and run the code in the terminal via atom-python-run package (which used to work perfectly fine). The settings for the package go like this: The which command in the terminal returns the following (which is odd, because earlier it would return something to just which python): I gather the error occurs because the terminal calls for python instead of python3, but I am super new to any coding and have no idea why it started now and how to fix it. Nothing of these has worked for me: I deleted and then reinstalled the Python interpreter from python.org. I tried alias python='python3' (which I saw in one of the threads here). I tried export PATH="/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin" (which I found here). To reset zsh and paths, I deleted all associated hidden files in /local/users/ and ran the terminal once again. I deleted everything and reinstalled Mac OS X and the Python interpreter only to get the same error. | OK, after a couple of days trying, this is what has worked for me: I reinstalled Monterey (not sure it was essential, but I just figured I had messed with terminal and $PATH too much). I installed python via brew rather than from the official website. It would still return command not found error. I ran echo "alias python=/usr/bin/python3" >> ~/.zshrc in terminal to alias python with python3. Relaunch the shell or run source ~/.zshrc Problem solved. As far as I get it, there is no more pre-installed python 2.x in macOS as of 12.3 hence the error. I still find it odd though that atom-python-run would call for python instead of python3 despite the settings. | 262 | 184 |
71,654,669 | 2022-3-28 | https://stackoverflow.com/questions/71654669/what-is-the-point-of-the-slice-indices-method | What is the point of the slice.indices method, since we have the following equality? s = slice(start, stop, step) assert range(*s.indices(length)) == range(length)[s] | Since Python 3.2 added slicing support to range, the slice.indices method is unnecessary because s.indices(length) is equal to (range(length)[s].start, range(length)[s].stop, range(length)[s].step): range objects now support index and count methods. This is part of an effort to make more objects fully implement the collections.Sequence abstract base class. As a result, the language will have a more uniform API. In addition, range objects now support slicing and negative indices, even with values larger than sys.maxsize. This makes range more interoperable with lists: >>> range(0, 100, 2).count(10) 1 >>> range(0, 100, 2).index(10) 5 >>> range(0, 100, 2)[5] 10 >>> range(0, 100, 2)[0:5] range(0, 10, 2) (Contributed by Daniel Stutzbach in bpo-9213, by Alexander Belopolsky in bpo-2690, and by Nick Coghlan in bpo-10889.) The slice.indices method is only kept for backwards compatibility. The credit goes to Karl Knechtel. | 4 | 4 |
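A quick, self-contained check of the equivalence discussed in the entry above (the concrete values are arbitrary):

```python
s = slice(2, None, 3)
length = 10

# slice.indices() resolves start/stop/step against a given length...
assert s.indices(length) == (2, 10, 3)
# ...which is exactly what slicing a range computes since Python 3.2.
assert range(*s.indices(length)) == range(length)[s]
print(list(range(length)[s]))  # [2, 5, 8]
```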
71,592,285 | 2022-3-23 | https://stackoverflow.com/questions/71592285/how-to-annotate-that-a-function-produces-a-dataclass | Say you want to wrap the dataclass decorator like so: from dataclasses import dataclass def something_else(klass): return klass def my_dataclass(klass): return something_else(dataclass(klass)) How should my_dataclass and/or something_else be annotated to indicate that the return type is a dataclass? See the following example on how the builtin @dataclass works but a custom @my_dataclass does not: @dataclass class TestA: a: int b: str TestA(0, "") # fine @my_dataclass class TestB: a: int b: str TestB(0, "") # error: Too many arguments for "TestB" (from mypy) | There is no feasible way to do this prior to PEP 681. A dataclass does not describe a type but a transformation. The actual effects of this cannot be expressed by Python's type system – @dataclass is handled by a MyPy Plugin which inspects the code, not just the types. This is triggered on specific decorators without understanding their implementation. dataclass_makers: Final = { 'dataclass', 'dataclasses.dataclass', } While it is possible to provide custom MyPy plugins, this is generally out of scope for most projects. PEP 681 (Python 3.11) adds a generic "this decorator behaves like @dataclass"-marker that can be used for all transformers from annotations to fields. PEP 681 is available to earlier Python versions via typing_extensions. Enforcing dataclasses For a pure typing alternative, define your custom decorator to take a dataclass and modify it. A dataclass can be identified by its __dataclass_fields__ field. from typing import Protocol, Any, TypeVar, Type, ClassVar from dataclasses import Field class DataClass(Protocol): __dataclass_fields__: ClassVar[dict[str, Field[Any]]] DC = TypeVar("DC", bound=DataClass) def my_dataclass(klass: Type[DC]) -> Type[DC]: ... This allows the type checker to understand and verify that a dataclass class is needed. @my_dataclass @dataclass class TestB: a: int b: str TestB(0, "") # note: Revealed type is "so_test.TestB" @my_dataclass class TestC: # error: Value of type variable "DC" of "my_dataclass" cannot be "TestC" a: int b: str Custom dataclass-like decorators The PEP 681 dataclass_transform decorator is a marker for other decorators to show that they act "like" @dataclass. In order to match the behaviour of @dataclass, one has to use field_specifiers to indicate that fields are denoted the same way. from typing import dataclass_transform, TypeVar, Type import dataclasses T = TypeVar("T") @dataclass_transform( field_specifiers=(dataclasses.Field, dataclasses.field), ) def my_dataclass(klass: Type[T]) -> Type[T]: return something_else(dataclasses.dataclass(klass)) It is possible for the custom dataclass decorator to take all keywords as @dataclass. dataclass_transform can be used to mark their respective defaults, even when not accepted as keywords by the decorator itself. | 8 | 9 |
71,603,314 | 2022-3-24 | https://stackoverflow.com/questions/71603314/ssl-error-unsafe-legacy-renegotiation-disabled | I am running Python code where I have to get some data from HTTPSConnectionPool(host='ssd.jpl.nasa.gov', port=443). But each time I try to run the code I get the following error. I am on macOS 12.1 raise SSLError(e, request=request) requests.exceptions.SSLError: HTTPSConnectionPool(host='ssd.jpl.nasa.gov', port=443): Max retries exceeded with url: /api/horizons.api?format=text&EPHEM_TYPE=OBSERVER&QUANTITIES_[...]_ (Caused by SSLError(SSLError(1, '[SSL: UNSAFE_LEGACY_RENEGOTIATION_DISABLED] unsafe legacy renegotiation disabled (_ssl.c:997)'))) I really don't know how to bypass this issue. | This error comes up when using OpenSSL 3 to connect to a server which does not support it. The solution is to downgrade the cryptography package in Python: run pip install cryptography==36.0.2 in the environment in use. source: https://github.com/scrapy/scrapy/issues/5491 EDIT: Refer to Harry Mallon and ahmkara's answer for a fix without downgrading cryptography | 83 | 18 |
71,599,282 | 2022-3-24 | https://stackoverflow.com/questions/71599282/how-to-pass-kwargs-as-params-to-fastapi-endpoint | I have a function generating a dict template. This function consists of several generators and requires one parameter (i.e., carrier) and has many optional parameters (keyword arguments - **kwargs). def main_builder(carrier, **params): output = SamplerBuilder(DEFAULT_JSON) output.generate_flight(carrier) output.generate_airline_info(carrier) output.generate_locations() output.generate_passengers() output.generate_contact_info() output.generate_payment_card_info() output.configs(**params) result = output.input_json return result # example of function call examplex = main_builder("3M", proxy="5.39.69.171:8888", card=Visa, passengers={"ADT":2, "CHD":1}, bags=2) I want to deploy this function to FastAPI endpoint. I managed to do it for carrier but how can I set **kwargs as params to the function? @app.get("/carrier/{carrier_code}", response_class=PrettyJSONResponse) # params/kwargs?? async def get_carrier(carrier_code): output_json = main_builder(carrier_code) return airline_input_json | Using Pydantic Model Since your function "..has many optional parameters" and passengers parameter requires a dictionary as an input, I would suggest creating a Pydantic model, where you define the parameters, and which would allow you sending the data in JSON format and getting them automatically validated by Pydantci as well. Once the endpoint is called, you can use Pydantic's dict() method (Note: In Pydantic V2, dict() was replaced by model_dump()—see this answer for more details) to convert the model into a dictionary. Example from pydantic import BaseModel from typing import Optional class MyModel(BaseModel): proxy: Optional[str] = None card: Optional[str] = None passengers: Optional[dict] = None bags: Optional[int] = None @app.post("/carrier/{carrier_code}") async def get_carrier(carrier_code: int, m: MyModel): return main_builder(carrier_code, **m.dict()) # In Pydantic V2, use **m.model_dump() Sending arbitrary JSON data In case you had to send arbitrary JSON data, and hence, pre-defining the parameters of an endpoint wouldn't be possible, you could use an approach similar to the one described in this answer (see Options 3 and 4), as well as this answer and this answer. | 4 | 4 |
71,617,325 | 2022-3-25 | https://stackoverflow.com/questions/71617325/ssl-decryption-failed-or-bad-record-mac-decryption-failed-or-bad-record-mac-s | When I try to install python on Windows using anaconda, I get the following error: SSL: DECRYPTION_FAILED_OR_BAD_RECORD_MAC decryption failed or bad record mac (_ssl.c:2633) Anaconda Prompt Error How can I fix? I have already try to set ssl verification parameter to false using: conda config --set ssl_verify false This Pc is workstation so I can use it at another network. I have tried doing the same on another laptop which is connected same Wlan Network. That works without any problem. Here is a log if the error: C:\\WINDOWS\\system32\>conda install -c conda-forge python Collecting package metadata (current_repodata.json): done Solving environment: done ## Package Plan environment location: C:\\ProgramData\\Anaconda3\\envs\\gkk added / updated specs: - keepalive The following packages will be downloaded: package | build ---------------------------|----------------- python-3.10.4 |hcf16a7b_0_cpython 16.2 MB conda-forge ------------------------------------------------------------ Total: 16.2 MB The following NEW packages will be INSTALLED: bzip2 conda-forge/win-64::bzip2-1.0.8-h8ffe710_4 keepalive conda-forge/noarch::keepalive-0.5-pyhd8ed1ab_6 libffi conda-forge/win-64::libffi-3.4.2-h8ffe710_5 libzlib conda-forge/win-64::libzlib-1.2.11-h8ffe710_1013 pip conda-forge/noarch::pip-22.0.4-pyhd8ed1ab_0 python conda-forge/win-64::python-3.10.4-hcf16a7b_0_cpython python_abi conda-forge/win-64::python_abi-3.10-2_cp310 setuptools conda-forge/win-64::setuptools-60.10.0-py310h5588dad_0 sqlite conda-forge/win-64::sqlite-3.37.1-h8ffe710_0 tk conda-forge/win-64::tk-8.6.12-h8ffe710_0 tzdata conda-forge/noarch::tzdata-2022a-h191b570_0 wheel conda-forge/noarch::wheel-0.37.1-pyhd8ed1ab_0 xz conda-forge/win-64::xz-5.2.5-h62dcd97_1 Proceed (\[y\]/n)? y Downloading and Extracting Packages python-3.10.4 | 16.2 MB | | 0% SSLError(SSLError(1, '\[SSL: DECRYPTION_FAILED_OR_BAD_RECORD_MAC\] decryption failed or bad record mac (\_ssl.c:2633)')) I am expecting without error: Preparing transaction: done Verifying transaction: done Executing transaction: done | Had this error when updating conda with: conda update -n base -c defaults conda which led to: [SSL: DECRYPTION_FAILED_OR_BAD_RECORD_MAC] decryption failed or bad record mac (_ssl.c:2622) [SSL: DECRYPTION_FAILED_OR_BAD_RECORD_MAC] decryption failed or bad record mac (_ssl.c:2622) I found two downloads that had been stopped at lower percentages (mkl and markdown): jupyter_server-1.23. | 399 KB | ############################################################################ | 100% mkl-2023.1.0 | 155.6 MB | ###6 | 5% executing-0.8.3 | 18 KB | ############################################################################ | 100% snappy-1.1.9 | 2.2 MB | ############################################################################ | 100% markdown-3.4.1 | 148 KB | ################################8 | 43% numexpr-2.8.4 | 128 KB | ############################################################################ | 100% It is likely that the SSL errors came just from these two broken downloads. 
That is why I reran the command, and it worked: Downloading and Extracting Packages mkl-2023.1.0 | 155.6 MB | ############################################################################ | 100% markdown-3.4.1 | 148 KB | ################################################################################################################# | 100% Preparing transaction: done Verifying transaction: done Executing transaction: - Windows 64-bit packages of scikit-learn can be accelerated using scikit-learn-intelex. More details are available here: https://intel.github.io/scikit-learn-intelex For example: $ conda install scikit-learn-intelex $ python -m sklearnex my_application.py done I did not need to change the ssl settings for this since I have not touched the settings before. Though this more or less what the other answer also says, I try to make clearer that running again - with the ssl setting to true if you had changed it before - might be all that is needed. | 4 | 1 |
71,592,060 | 2022-3-23 | https://stackoverflow.com/questions/71592060/makefile-how-should-i-extract-the-version-number-embedded-in-pyproject-toml | I have a python project with a pyproject.toml file. Typically I store the project's version number in pyproject.toml like this: % grep version pyproject.toml version = "0.0.2" % I want to get that version number into a Makefile variable regardless of how many spaces wind up around the version terms. What should I do to extract the pyproject.toml version string into a Makefile environment variable called VERSION? | An alternative solution to parse major.minor.patch, based on @Mike Pennington's answer: grep -m 1 version pyproject.toml | grep -e '\d.\d.\d' -o | 10 | 3 |
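If a heavier dependency than grep is acceptable, the version in the entry above can also be read with a real TOML parser and captured with $(shell ...) in the Makefile. This is a hedged sketch, not part of the accepted answer: it assumes Python 3.11+ for the standard-library tomllib (older interpreters need the third-party tomli package) and a PEP 621-style [project] table; Poetry projects keep the version under [tool.poetry] instead.

```python
# get_version.py -- print the project version so make can capture it, e.g.
#   VERSION := $(shell python3 get_version.py)
import tomllib  # stdlib on Python 3.11+; use "import tomli as tomllib" on older versions

with open("pyproject.toml", "rb") as f:   # tomllib requires binary mode
    data = tomllib.load(f)

print(data["project"]["version"])         # assumes a PEP 621 [project] table
```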
71,665,819 | 2022-3-29 | https://stackoverflow.com/questions/71665819/is-it-possible-to-write-a-csv-file-from-a-xarray-dataset-in-python | I have been using the python package xgrads to parse and read a descriptor file with a suffix .ctl which describes a raw binary 3D dataset, provided by GrADS (Grid Analysis and Display System), a widely used software for easy access, manipulation, and visualization of earth science data. I have been using the following code to read the binary data into a xarray.Dataset. from xgrads import open_CtlDataset dset = open_CtlDataset('./ur2m_eta40km_2001011312.ctl') # print all the info in ctl file print(dset) <xarray.Dataset> Dimensions: (time: 553, lat: 36, lon: 30) Coordinates: * time (time) datetime64[ns] 2001-01-13T12:00:00 ... 2001-05-31T12:00:00 * lat (lat) float32 -21.2 -20.8 -20.4 -20.0 -19.6 ... -8.4 -8.0 -7.6 -7.2 * lon (lon) float32 -47.8 -47.4 -47.0 -46.6 ... -37.4 -37.0 -36.6 -36.2 Data variables: ur2m (time, lat, lon) float32 dask.array<chunksize=(1, 36, 30), meta=np.ndarray> Attributes: comment: Relative Humidity 2m storage: 99 title: File undef: 1e+20 pdef: None This .ctl file comprises forecast results of humidity, estimated over a predefined area at each 6 hours, from 2001-01-13 12:00:00 hs to 2001-05-31 12:00:00 hs. Plotting the results for the first time step (2001-01-13T12:00:00) I got this: ds['ur2m'][0,...].plot() I would like to know if it is possible to create tabular data from this xarray.Dataset and export it as a single .csv or .txt file, following the data structure below: long lat ur2m time variable datetime -47.8 -21.2 0 1 ur2m 2001-01-13 12:00:00 -47.4 -21.2 0 1 ur2m 2001-01-13 12:00:00 -47.0 -21.2 0 1 ur2m 2001-01-13 12:00:00 -46.6 -21.2 0 1 ur2m 2001-01-13 12:00:00 ... ... ... ... <NA> ... <NA> -37.4 -7.2 0 553 ur2m 2001-05-31 12:00:00 -37.0 -7.2 0 553 ur2m 2001-05-31 12:00:00 -36.6 -7.2 0 553 ur2m 2001-05-31 12:00:00 -36.2 -7.2 0 553 ur2m 2001-05-31 12:00:00 The original data are available here | Try this: Convert netcdf to dataframe df = ds.to_dataframe() Save dataframe to csv df.to_csv('df.csv') | 4 | 7 |
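Building on the to_dataframe() answer above, the sketch below reshapes the result into the long/tabular layout requested in the question before writing the CSV. It is an assumption-laden illustration: dset is the xarray.Dataset from the question, and the column names (long, lat, ur2m, time, variable, datetime) simply mirror the desired table.

```python
# Flatten the (time, lat, lon) grid into one row per point, matching the
# requested columns. Assumes `dset` holds the single variable 'ur2m'.
df = dset.to_dataframe().reset_index()            # coords become ordinary columns
df = df.rename(columns={'lon': 'long'})
df['variable'] = 'ur2m'
df['datetime'] = df['time']                       # keep the original timestamp
df['time'] = df.groupby('datetime').ngroup() + 1  # 1..553 time-step index
df = df[['long', 'lat', 'ur2m', 'time', 'variable', 'datetime']]
df.to_csv('ur2m_long.csv', index=False)
```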
71,632,325 | 2022-3-26 | https://stackoverflow.com/questions/71632325/cannot-import-name-mapping-from-collections-on-importing-requests | Python Version: Python 3.10.4 PIP Version: pip 22.0.4 So I was trying to make a small project with sockets, I added a feature to upload files but whenever I import requests, it throws this error. Below is the code I ran. Traceback (most recent call last): File "C:\Programming\WireUS\test.py", line 1, in <module> import requests File "C:\Users\John\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\__init__.py", line 43, in <module> import urllib3 File "C:\Users\John\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\__init__.py", line 8, in <module> from .connectionpool import ( File "C:\Users\John\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py", line 29, in <module> from .connection import ( File "C:\Users\John\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connection.py", line 39, in <module> from .util.ssl_ import ( File "C:\Users\John\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\util\__init__.py", line 3, in <module> from .connection import is_connection_dropped File "C:\Users\John\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\util\connection.py", line 3, in <module> from .wait import wait_for_read File "C:\Users\John\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\util\wait.py", line 1, in <module> from .selectors import ( File "C:\Users\John\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\util\selectors.py", line 14, in <module> from collections import namedtuple, Mapping ImportError: cannot import name 'Mapping' from 'collections' (C:\Users\John\AppData\Local\Programs\Python\Python310\lib\collections\__init__.py) Even this basic code gives me that error. import requests import time r = request.get("google.com").text print(r) time.sleep(999) | As user2357112-supports-monica said, running pip install urllib3 fixes it. | 4 | 7 |
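For context on why upgrading urllib3 (as in the answer above) fixes this: the abstract base classes were moved to collections.abc back in Python 3.3, and the old aliases in collections were removed in Python 3.10, which is what breaks the old urllib3 release in the traceback. A minimal illustration:

```python
# Works on Python 3.3+ and is what current urllib3/requests use:
from collections.abc import Mapping

# The legacy spelling below was removed in Python 3.10 and now raises
# ImportError, which is the error seen in the traceback above:
# from collections import Mapping
```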
71,581,084 | 2022-3-23 | https://stackoverflow.com/questions/71581084/why-does-bot-get-channel-produce-nonetype | I'm making a Discord bot to handle an announcement command. When the command is used, I want the bot to send a message in a specific channel and send a message back to the user to show that the command was sent. However, I cannot get the message to be sent to the channel. I tried this code: import discord import os import random import asyncio testing_servers = [912361242985918464] intents = discord.Intents().all() bot = discord.Bot(intents=intents) @bot.slash_command(guild_ids=testing_servers, name="announce", description="Make server announcements!") async def announce(ctx, title, text, channel_id,anonymous=None): #response embed print(channel_id) #announcement embed embed_announce = discord.Embed( colour = discord.Colour.blue(), title=str(title), description = text ) await bot.get_channel(channel_id).send(embed = embed_announce) But before even attempting to send the other message back to the user, I get an error that says AttributeError: 'NoneType' object has no attribute 'send'. I conclude that bot.get_channel(channel_id) evaluated to None. But why? How can I get the correct Channel to send the message? | Make sure you are sending an integer to get_channel(): await bot.get_channel(int(channel_id)).send(embed=embed_announce) | 4 | 3 |
71,643,087 | 2022-3-28 | https://stackoverflow.com/questions/71643087/vscode-will-not-autofocus-on-integrated-terminal-while-running-code | When I run without debugging in Python on VS Code, it no longer autofocuses on the terminal, forcing me to click into the terminal every time to input data. Is there any solution to make VS Code autofocus the terminal while code is running? | The following solution to this issue has been tested on Visual Studio Code 1.74.3. Install the Python extension for Visual Studio Code. Go to File >> Preferences >> Settings. In the Search settings field, enter "Python › Terminal: Focus After Launch". Click the setting "When launching a python terminal, whether to focus the cursor on the terminal"; a check mark should appear in the tick box. Done! Now every time you Run Python File the terminal will be focused. | 7 | 6 |
71,584,511 | 2022-3-23 | https://stackoverflow.com/questions/71584511/aws-cdk-type-cls-runtime-runtime-cannot-be-assigned-to-type-runtime | I get this flycheck error pointing to the runtime=_lambda.. variable: Argument of type "(cls: Runtime) -> Runtime" cannot be assigned to parameter "runtime" of type "Runtime" in function "__init__" Type "(cls: Runtime) -> Runtime" cannot be assigned to type "Runtime" # create lambda function # executed as root function = _lambda.Function(self, "lambda_function", runtime=_lambda.Runtime.PYTHON_3_7, handler="lambda_handler.main", code=_lambda.Code.from_asset("./lambda"), environment={ 'EC2_INSTANCE_ID': instance.instance_id, 'S3_OUTPUT': output_s3.s3_url_for_object(), 'S3_INPUT': input_s3.s3_url_for_object() }) It's a rather cosmetic IDE issue, the code itself works | This was a bug in jsii, the library CDK uses to transpile TypeScript (the language in which CDK is written) to Python. Here is the PR that fixed it. The fix was released in 1.64.0 If you are using a version before 1.64.0, you can use casting to suppress the error: import typing ... function = lambda_.Function( self, "function", ... runtime=typing.cast(lambda_.Runtime, lambda_.Runtime.PYTHON_3_7) ) Or just append # type: ignore to the end of the line to disable type checking on that particular line. | 4 | 8 |
71,584,885 | 2022-3-23 | https://stackoverflow.com/questions/71584885/ipdb-stops-showing-prompt-text-after-carriage-return | Recently when setting up a breakpoint using ipdb.set_trace(context=20) I can see the command I'm inputing the first time, after hitting return, next time I write an instruction or command in my ipdb prompt is not showing. When I hit enter it executes it and shows it in the previous lines. This wasn't happening until very recently. I'm using mac, with iterm, latest ipdb and pytest. EDIT 2022-3-29 I've been trying to play with the shell settings, disconnect ozsh, antigen plugins, to see it was related, but doesn't seem to affect. I've also tried with terminal, instead of iterm. Here is a recording of what I'm describing: EDIT 2022-3-31 I've realized this only happens with one of my projects The prompt disappears after an exception occurs no matter which type, otherwise it always works fine. After the exception prompt starts failing, but sometimes it's not in the first command after I've written a simple python program to run with the same setup and it doesn't happen, so there's something else messing with this EDIT 2022-3-31 (2.0) After spending some time playing with this, I discovered this was only happening in some tests, the ones decorated with freezegun I'm using freezegun 1.2.1 and pytest 6.2.5. When I run this code if I execute print a couple times, cursor disappears. This is the most basic reproduction test I've been able to come up with. import ipdb from freezegun import freeze_time @freeze_time("2022-3-12") def test_prompt_ipdb(): ipdb.set_trace() test_prompt_ipdb() I now believe this a bug in one of these two, most likely freezegun doing something fancy. | This doesn't seem like a bug in ipdb (nor in IPython for that matter, with which this reproduces as well). The problem is between freezegun and prompt-toolkit, which IPython (and consequently ipdb) rely on. I'm hoping they will accept this PR, but until then this behavior can be resolved by adding prompt_toolkit to the ignore-list using the extend_ignore_list argument, like so: import ipdb import freezegun freezegun.configure(extend_ignore_list=['prompt_toolkit']) @freezegun.freeze_time("2022-3-12") def test_prompt_ipdb(): ipdb.set_trace() test_prompt_ipdb() | 14 | 7 |
71,652,965 | 2022-3-28 | https://stackoverflow.com/questions/71652965/importerror-cannot-import-name-safe-str-cmp-from-werkzeug-security | Any ideas on why I get this error? My project was working fine. I copied it to an external drive and onto my laptop to work on the road; it worked fine. I copied it back to my desktop and had a load of issues with invalid interpreters etc, so I made a new project and copied just the scripts in, made a new requirements.txt and installed all the packages, but when I run it, I get this error: Traceback (most recent call last): File "E:\Dev\spot_new\flask_blog\run.py", line 1, in <module> from flaskblog import app File "E:\Dev\spot_new\flask_blog\flaskblog\__init__.py", line 3, in <module> from flask_bcrypt import Bcrypt File "E:\Dev\spot_new\venv\lib\site-packages\flask_bcrypt.py", line 21, in <module> from werkzeug.security import safe_str_cmp ImportError: cannot import name 'safe_str_cmp' from 'werkzeug.security' (E:\Dev\spot_new\venv\lib\site-packages\werkzeug\security.py) I've tried uninstalling Python, Anaconda, PyCharm, deleting every reg key and environment variable I can find that looks pythonic, reinstalling all from scratch but still no dice. | Werkzeug released v2.1.0 today, removing werkzeug.security.safe_str_cmp. You can probably resolve this issue by pinning Werkzeug~=2.0.0 in your requirements.txt file (or similar). pip install Werkzeug~=2.0.0 After that it is likely that you will also have an AttributeError related to the jinja package, so if you have it, also run: pip install jinja2~=3.0.3 | 38 | 66 |
71,654,590 | 2022-3-28 | https://stackoverflow.com/questions/71654590/dash-importerror-cannot-import-name-get-current-traceback-from-werkzeug-debu | I'm trying to run a simple Dash app in a conda environment in PyCharm; however, I'm running into the error in the title. Weirdly enough, I couldn't find a place on the internet that mentions this bug, except for here. The code is simple, as all I'm trying to run is a simple Dash app; I obtained the code from here. I have tried switching between Python versions in conda (back and forth between Python 3.9, 3.8 and 3.7) but the error persists. I know I have also correctly installed all its dependencies, as I'm not getting any import error. Would appreciate it if anyone could help with this. Edit: Versions of Dash installed, as requested by @coralvanda: Basically, I just did a pip install of everything, so all the versions of packages are the latest. Screenshot of a full traceback of the error: | I've run into the same problem. Uninstall the wrong version with: pip uninstall werkzeug Install the right one with: pip install -v https://github.com/pallets/werkzeug/archive/refs/tags/2.0.3.tar.gz | 22 | 13 |
71,660,787 | 2022-3-29 | https://stackoverflow.com/questions/71660787/how-to-trim-crop-bottom-whitespace-of-a-pdf-document-in-memory | I am using wkhtmltopdf to render a (Django-templated) HTML document to a single-page PDF file. I would like to either render it immediately with the correct height (which I've failed to do so far) or render it incorrectly and trim it. I'm using Python. Attempt type 1: wkhtmltopdf render to a very, very long single-page PDF with a lot of extra space using --page-height Use pdfCropMargins to trim: crop(["-p4", "100", "0", "100", "100", "-a4", "0", "-28", "0", "0", "input.pdf"]) The PDF is rendered perfectly with 28 units of margin at the bottom, but I had to use the filesystem to execute the crop command. It seems that the tool expects an input file and output file, and also creates temporary files midway through. So I can't use it. Attempt type 2: wkhtmltopdf render to multi-page PDF with default parameters Use PyPDF4 (or PyPDF2) to read the file and combine pages into a long, single page The PDF is rendered fine-ish in most cases, however, sometimes a lot of extra white space can be seen on the bottom if by chance the last PDF page had very little content. Ideal scenario: The ideal scenario would involve a function that takes HTML and renders it into a single-page PDF with the expected amount of white space at the bottom. I would be happy with rendering the PDF using wkhtmltopdf, since it returns bytes, and later processing these bytes to remove any extra white space. But I don't want to involve the file system in this, as instead, I want to perform all operations in memory. Perhaps I can somehow inspect the PDF directly and remove the white space manually, or do some HTML magic to determine the render height before-hand? What am I doing now: Note that pdfkit is a wkhtmltopdf wrapper # This is not a valid HTML (includes Django-specific stuff) template: Template = get_template("some-django-template.html") # This is now valid HTML rendered = template.render({ "foo": "bar", }) # This first renders PDF from HTML normally (multiple pages) # Then counts how many pages were created and determines the required single-page height # Then renders a single-page PDF from HTML using the page height and width arguments return pdfkit.from_string(rendered, options={ "page-height": f"{297 * PdfFileReader(BytesIO(pdfkit.from_string(rendered))).getNumPages()}mm", "page-width": "210mm" }) It's equivalent to Attempt type 2, except I don't use PyDPF4 here to stitch the pages together, but instead render again with wkhtmltopdf using precomputed page height. | There might be better ways to do this, but this at least works. I'm assuming that you are able to crop the PDF yourself, and all I'm doing here is determining how far down on the last page you still have content. If that assumption is wrong, I could probably figure out how to crop the PDF. Or otherwise, just crop the image (easy in Pillow) and then convert that to PDF? Also, if you have one big PDF, you might need to figure how how far down on the whole PDF the text ends. I'm just finding out how far down on the last page the content ends. But converting from one to the other is like just an easy arithmetic problem. 
Tested code: import pdfkit from PyPDF2 import PdfFileReader from io import BytesIO # This library isn't named fitz on pypi, # obtain this library with `pip install PyMuPDF==1.19.4` import fitz # `pip install Pillow==8.3.1` from PIL import Image import numpy as np # However you arrive at valid HTML, it makes no difference to the solution. rendered = "<html><head></head><body><h3>Hello World</h3><p>hello</p></body></html>" # This first renders PDF from HTML normally (multiple pages) # Then counts how many pages were created and determines the required single-page height # Then renders a single-page PDF from HTML using the page height and width arguments pdf_bytes = pdfkit.from_string(rendered, options={ "page-height": f"{297 * PdfFileReader(BytesIO(pdfkit.from_string(rendered))).getNumPages()}mm", "page-width": "210mm" }) # convert the pdf into an image. pdf = fitz.open(stream=pdf_bytes, filetype="pdf") last_page = pdf[pdf.pageCount-1] matrix = fitz.Matrix(1, 1) image_pixels = last_page.get_pixmap(matrix=matrix, colorspace="GRAY") image = Image.frombytes("L", [image_pixels.width, image_pixels.height], image_pixels.samples) #Uncomment if you want to see. #image.show() # Now figure out where the end of the text is: # First binarize. This might not be the most efficient way to do this. # But it's how I do it. THRESHOLD = 100 # I wrote this code ages ago and don't remember the details but # basically, we treat every pixel > 100 as a white pixel, # We convert the result to a true/false matrix # And then invert that. # The upshot is that, at the end, a value of "True" # in the matrix will represent a black pixel in that location. binary_matrix = np.logical_not(image.point( lambda p: 255 if p > THRESHOLD else 0 ).convert("1")) # Now find last white row, starting at the bottom row_count, column_count = binary_matrix.shape last_row = 0 for i, row in enumerate(reversed(binary_matrix)): if any(row): last_row = i break else: continue percentage_from_top = (1 - last_row / row_count) * 100 print(percentage_from_top) # Now you know where the page ends. # Go back and crop the PDF accordingly. | 5 | 1 |
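A small follow-up sketch of the "easy arithmetic" the answer alludes to, for turning the last-page percentage into a whole-document content height (the function and variable names here are illustrative, not from the original code):

def kept_height(num_pages, page_height_pts, percentage_from_top):
    # full pages before the last one, plus the used fraction of the last page
    return (num_pages - 1 + percentage_from_top / 100.0) * page_height_pts

# e.g. a 3-page render at 842 pt per page whose last page is 40% full
print(kept_height(3, 842, 40.0))  # ~2021 pt of content to keep before cropping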
71,622,869 | 2022-3-25 | https://stackoverflow.com/questions/71622869/typeerror-init-missing-1-required-positional-argument-scheme-in-elasti | Below is my code- Elasticsearch is not using https protocol, it's using http protocol. pip uninstall elasticsearch pip install elasticsearch==7.13.4 import elasticsearch.helpers from elasticsearch import Elasticsearch # from elasticsearch import Elasticsearch, RequestsHttpConnection es_host = '<>' es_port = '<>' es_username = '<>' es_password = '><' es_index = '<>' es = Elasticsearch([{'host':str(es_host),'port':str(es_port)}], http_auth=(str(es_username), str(es_password))) es.indices.refresh(index=es_index) Error- 10 es = Elasticsearch([{'host': str(es_host), 'port': str(es_port)}],http_auth=(str(es_username), str(es_password))) 11 12 es.indices.refresh(index=es_index) 3 frames /usr/local/lib/python3.7/dist-packages/elasticsearch/_sync/client/__init__.py in __init__(self, hosts, cloud_id, api_key, basic_auth, bearer_auth, opaque_id, headers, connections_per_node, http_compress, verify_certs, ca_certs, client_cert, client_key, ssl_assert_hostname, ssl_assert_fingerprint, ssl_version, ssl_context, ssl_show_warn, transport_class, request_timeout, node_class, node_pool_class, randomize_nodes_in_pool, node_selector_class, dead_node_backoff_factor, max_dead_node_backoff, serializer, serializers, default_mimetype, max_retries, retry_on_status, retry_on_timeout, sniff_on_start, sniff_before_requests, sniff_on_node_failure, sniff_timeout, min_delay_between_sniffing, sniffed_node_callback, meta_header, timeout, randomize_hosts, host_info_callback, sniffer_timeout, sniff_on_connection_fail, http_auth, maxsize, _transport) /usr/local/lib/python3.7/dist-packages/elasticsearch/_sync/client/utils.py in client_node_configs(hosts, cloud_id, **kwargs) /usr/local/lib/python3.7/dist-packages/elasticsearch/_sync/client/utils.py in hosts_to_node_configs(hosts) /usr/local/lib/python3.7/dist-packages/elasticsearch/_sync/client/utils.py in host_mapping_to_node_config(host) TypeError: __init__() missing 1 required positional argument: 'scheme' When I add "scheme" Code- es = Elasticsearch([{'host':str(es_host),'port':str(es_port)}], http_auth=(str(es_username), str(es_password)), scheme="http",verify_certs=False) Error- __init__() got an unexpected keyword argument 'scheme' I checked and tried connection to ES but its not connecting. | I ran into a similar error. I am using elasticsearch==8.3.1. When you construct your url with the list of dictionaries, you need to define the schema. Add "scheme": "https" to your dictionary and that will solve the missing argument. es = Elasticsearch( [ {'host': 'localhost', 'port': '9200', "scheme": "https"} ], basic_auth=('elastic', '<password>') ) In your case, you should convert your instantiation to as follows: es = Elasticsearch( [ { 'host':str(es_host), 'port':str(es_port), 'scheme': "https" } ], http_auth=(str(es_username), str(es_password)) ) I am not sure if the scheme is http or https, that is something you'll need to dig into. | 11 | 10 |
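For elasticsearch-py 8.x there is also a simpler form that sidesteps the host-dict (and the scheme question) entirely by passing a full URL string; a minimal sketch with placeholder host and credentials:

from elasticsearch import Elasticsearch

es = Elasticsearch(
    "http://localhost:9200",               # scheme is part of the URL; use https:// if TLS is enabled
    basic_auth=("elastic", "<password>"),  # basic_auth replaces http_auth in 8.x
)
es.indices.refresh(index="<your-index>")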
71,666,214 | 2022-3-29 | https://stackoverflow.com/questions/71666214/deprecation-warnings-distutils-and-netcdf-file | I get two deprecation warnings whenever I try running any python code. They are: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead. MIN_CHEMFILES_VERSION = LooseVersion("0.9") DeprecationWarning: Please use netcdf_file from the scipy.io namespace, the scipy.io.netcdf namespace is deprecated. I am not sure how to use packaging.version instead of distuils and netcdf file. I am running python 3.8. I tried updating my virtualenv as suggested here: DeprecationWarning in Python 3.6 and 3.7 (with Pillow, distutils, imp) This doesn't work for me. Any help will be appreciated. I could not find results for the second deprecation warning. | The DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead. message is caused by the distutils module being overridden since setuptools 60.0.0. You would notice because the distutils.__file__ variable evals to .../site-packages/setuptools/_distutils/__init__.py, regardless of your python interpreter. Using any python 3.8 or 3.9 will not fix the warning unless your installation features setuptools<60. And even then, the warning can still be triggered by setting SETUPTOOLS_USE_DISTUTILS=local in the environment. See the breaking changes section in https://github.com/pypa/setuptools/blob/main/CHANGES.rst#v6000 Workarounds to suppress this warnings can be either to downgrade to setuptools<60 or to start your python interpreter with the SETUPTOOLS_USE_DISTUTILS=stdlib environment variable. But I discourage any of these approaches, be ready for worse side effects if you go this way. Simply embrace the warning and try to contribute actual patches fixing this for any upstream library you were using before python 3.12 is released. | 4 | 12 |
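Both warning texts name their drop-in replacements; a minimal sketch of what the migrated imports look like (illustrative only, since in practice the warnings usually originate inside third-party packages rather than your own code, so the real fix is upgrading those packages):

from packaging.version import Version    # instead of distutils.version.LooseVersion
MIN_CHEMFILES_VERSION = Version("0.9")

from scipy.io import netcdf_file          # instead of the deprecated scipy.io.netcdf namespace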
71,599,769 | 2022-3-24 | https://stackoverflow.com/questions/71599769/importerror-cannot-import-name-inference-from-paddle | I am trying to implement paddleocr. I have installed it using: #Github repo installation for paddle ! python3 -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple #install paddle ocr !pip install paddleocr !git clone https://github.com/PaddlePaddle/PaddleOCR.git But while importing from paddleocr import PaddleOCR,draw_ocr I'm getting this error: ImportError: cannot import name 'inference' from 'paddle' | I had the same error. My solution was to: pip install paddlepaddle Then I got another error (luckily you will not get this one but just in case) telling me to downgrade protoc to a version between 3.19 and 3.20, which I fixed by executing the following command: pip install protobuf==3.19.0 After this I was able to execute the script that imported from paddleocr | 5 | 5 |
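Once the installs succeed, a minimal smoke test along the lines of the project's quick start confirms the import works (the image path below is a placeholder):

from paddleocr import PaddleOCR, draw_ocr

ocr = PaddleOCR(lang="en")            # downloads detection/recognition models on first run
result = ocr.ocr("sample_image.png")  # placeholder path
print(result)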
71,639,534 | 2022-3-27 | https://stackoverflow.com/questions/71639534/why-the-sum-value-isnt-equal-to-the-number-of-samples-in-scikit-learn-rando | I built a random forest by RandomForestClassifier and plot the decision trees. What does the parameter "value" (pointed by red arrows) mean? And why the sum of two numbers in the [] doesn't equal to the number of "samples"? I saw some other examples, the sum of two numbers in the [] equals to the number of "samples". Why in my case, it doesn't? df = pd.read_csv("Dataset.csv") df.drop(['Flow ID', 'Inbound'], axis=1, inplace=True) df.replace([np.inf, -np.inf], np.nan, inplace=True) df.dropna(inplace = True) df.Label[df.Label == 'BENIGN'] = 0 df.Label[df.Label == 'DrDoS_LDAP'] = 1 Y = df["Label"].values Y = Y.astype('int') X = df.drop(labels = ["Label"], axis=1) X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.5) model = RandomForestClassifier(n_estimators = 20) model.fit(X_train, Y_train) Accuracy = model.score(X_test, Y_test) for i in range(len(model.estimators_)): fig = plt.figure(figsize=(15,15)) tree.plot_tree(model.estimators_[i], feature_names = df.columns, class_names = ['Benign', 'DDoS']) plt.savefig('.\\TheForest\\T'+str(i)) | Nice catch. Although undocumented, this is due to the bootstrap sampling taking place by default in a Random Forest model (see my answer in Why is Random Forest with a single tree much better than a Decision Tree classifier? for more on the RF algorithm details and its difference from a mere "bunch" of decision trees). Let's see an example with the iris data: from sklearn.datasets import load_iris from sklearn import tree from sklearn.ensemble import RandomForestClassifier iris = load_iris() rf = RandomForestClassifier(max_depth = 3) rf.fit(iris.data, iris.target) tree.plot_tree(rf.estimators_[0]) # take the first tree The result here is similar to what you report: for every other node except the lower right one, sum(value) does not equal samples, as it should be the case for a "simple" decision tree. A cautious observer would have noticed something else which seems odd here: while the iris dataset has 150 samples: print(iris.DESCR) .. _iris_dataset: Iris plants dataset -------------------- **Data Set Characteristics:** :Number of Instances: 150 (50 in each of three classes) :Number of Attributes: 4 numeric, predictive attributes and the class and the base node of the tree should include all of them, the samples for the first node are only 89. Why is that, and what exactly is going on here? To see, let us fit a second RF model, this time without bootstrap sampling (i.e. with bootstrap=False): rf2 = RandomForestClassifier(max_depth = 3, bootstrap=False) # no bootstrap sampling rf2.fit(iris.data, iris.target) tree.plot_tree(rf2.estimators_[0]) # take again the first tree Well, now that we have disabled bootstrap sampling, everything looks "nice": the sum of value in every node equals samples, and the base node contains indeed the whole dataset (150 samples). So, the behavior you describe seems to be due to bootstrap sampling indeed, which, while creating samples with replacement (i.e. ending up with duplicate samples for each individual decision tree of the ensemble), these duplicate samples are not reflected in the sample values of the tree nodes, which display the number of unique samples; nevertheless, it is reflected in the node value. 
The situation is completely analogous with that of a RF regression model, as well as with a Bagging Classifier - see respectively: sklearn RandomForestRegressor discrepancy in the displayed tree values Why does this decision tree's values at each step not sum to the number of samples? | 4 | 5 |
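A way to see the same effect without plotting, assuming scikit-learn's tree_ internals behave as in the version used above: n_node_samples counts the unique rows that reach a node, while weighted_n_node_samples includes the bootstrap duplicates through their sample weights.

t = rf.estimators_[0].tree_          # first tree of the bootstrap=True forest above
print(t.n_node_samples[0])           # e.g. ~95 of 150: unique bootstrap rows at the root
print(t.weighted_n_node_samples[0])  # 150.0: duplicates counted via their weights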
71,590,362 | 2022-3-23 | https://stackoverflow.com/questions/71590362/json-unicodedecodeerror-charmap-codec-cant-decode-byte-0x8d-in-position-3621 | I'm loading a json file on my computer. I can load it in without specifying the encoding on Kaggle, no, errors. On my PC I get the error in the title. with open('D:\soccer\statsbomb360\matches.json') as f: data = json.load(f, encoding = 'utf8') Adding errors = 'ignore' or changing encoding to 'latin' doesn't work either. I'm a bit lost on what to try next, can you give me an idea? The json is from statsbombs freely available data. Interestingly from the same dataset I have some files that give me this error on Kaggle/Colab but not on my pc, but there specifying encoding = 'latin' did the trick. thank you! | Try with open('D:\soccer\statsbomb360\matches.json', encoding="utf8") as f: data = json.load(f) per @mark-tolonen Also see this post: UnicodeDecodeError: 'charmap' codec can't decode byte X in position Y: character maps to <undefined> | 4 | 9 |
71,648,007 | 2022-3-28 | https://stackoverflow.com/questions/71648007/npm-install-error-npm-err-gyp-err-find-python-stack-error | Whenever I try to run npm install or npm update in my nuxt.js(vue.js) project, error below appears. npm ERR! code 1 npm ERR! path /Users/kyeolhan/ForWork/BackOffceFront/node_modules/deasync npm ERR! command failed npm ERR! command sh -c node ./build.js npm ERR! gyp info it worked if it ends with ok npm ERR! gyp info using [email protected] npm ERR! gyp info using [email protected] | darwin | arm64 npm ERR! gyp ERR! find Python npm ERR! gyp ERR! find Python Python is not set from command line or npm configuration npm ERR! gyp ERR! find Python Python is not set from environment variable PYTHON npm ERR! gyp ERR! find Python checking if "python3" can be used npm ERR! gyp ERR! find Python - "python3" is not in PATH or produced an error npm ERR! gyp ERR! find Python checking if "python" can be used npm ERR! gyp ERR! find Python - "python" is not in PATH or produced an error npm ERR! gyp ERR! find Python checking if "python2" can be used npm ERR! gyp ERR! find Python - "python2" is not in PATH or produced an error npm ERR! gyp ERR! find Python npm ERR! gyp ERR! find Python ********************************************************** npm ERR! gyp ERR! find Python You need to install the latest version of Python. npm ERR! gyp ERR! find Python Node-gyp should be able to find and use Python. If not, npm ERR! gyp ERR! find Python you can try one of the following options: npm ERR! gyp ERR! find Python - Use the switch --python="/path/to/pythonexecutable" npm ERR! gyp ERR! find Python (accepted by both node-gyp and npm) npm ERR! gyp ERR! find Python - Set the environment variable PYTHON npm ERR! gyp ERR! find Python - Set the npm configuration variable python: npm ERR! gyp ERR! find Python npm config set python "/path/to/pythonexecutable" npm ERR! gyp ERR! find Python For more information consult the documentation at: npm ERR! gyp ERR! find Python https://github.com/nodejs/node-gyp#installation npm ERR! gyp ERR! find Python ********************************************************** npm ERR! gyp ERR! find Python npm ERR! gyp ERR! configure error npm ERR! gyp ERR! stack Error: Could not find any Python installation to use npm ERR! gyp ERR! stack at PythonFinder.fail (/Users/kyeolhan/ForWork/BackOffceFront/node_modules/node-gyp/lib/find-python.js:302:47) npm ERR! gyp ERR! stack at PythonFinder.runChecks (/Users/kyeolhan/ForWork/BackOffceFront/node_modules/node-gyp/lib/find-python.js:136:21) npm ERR! gyp ERR! stack at PythonFinder.<anonymous> (/Users/kyeolhan/ForWork/BackOffceFront/node_modules/node-gyp/lib/find-python.js:179:16) npm ERR! gyp ERR! stack at PythonFinder.execFileCallback (/Users/kyeolhan/ForWork/BackOffceFront/node_modules/node-gyp/lib/find-python.js:266:16) npm ERR! gyp ERR! stack at exithandler (node:child_process:406:5) npm ERR! gyp ERR! stack at ChildProcess.errorhandler (node:child_process:418:5) npm ERR! gyp ERR! stack at ChildProcess.emit (node:events:526:28) npm ERR! gyp ERR! stack at Process.ChildProcess._handle.onexit (node:internal/child_process:289:12) npm ERR! gyp ERR! stack at onErrorNT (node:internal/child_process:478:16) npm ERR! gyp ERR! stack at processTicksAndRejections (node:internal/process/task_queues:83:21) npm ERR! gyp ERR! System Darwin 21.4.0 npm ERR! gyp ERR! command "/Users/kyeolhan/.nvm/versions/node/v16.14.2/bin/node" "/Users/kyeolhan/ForWork/BackOffceFront/node_modules/.bin/node-gyp" "rebuild" npm ERR! gyp ERR! 
cwd /Users/kyeolhan/ForWork/BackOffceFront/node_modules/deasync npm ERR! gyp ERR! node -v v16.14.2 npm ERR! gyp ERR! node-gyp -v v7.1.2 npm ERR! gyp ERR! not ok npm ERR! Build failed npm ERR! A complete log of this run can be found in: npm ERR! /Users/kyeolhan/.npm/_logs/2022-03-28T12_56_28_864Z-debug-0.log python3 is installed in my mac (Apple M1 Pro, macOS Monterey 12.3). $ python3 --version Python 3.9.12 I also tried with --python option with paths below. $ which -a python3 /opt/homebrew/bin/python3 /usr/bin/python3 /opt/homebrew/bin/python3 npm i --python="/usr/bin/python3" npm i --python="/opt/homebrew/bin/python3" But it doesn't work npm ERR! code 1 npm ERR! path /Users/kyeolhan/ForWork/BackOffceFront/node_modules/deasync npm ERR! command failed npm ERR! command sh -c node ./build.js npm ERR! gyp info it worked if it ends with ok npm ERR! gyp info using [email protected] npm ERR! gyp info using [email protected] | darwin | arm64 npm ERR! gyp ERR! find Python npm ERR! gyp ERR! find Python checking Python explicitly set from command line or npm configuration npm ERR! gyp ERR! find Python - "--python=" or "npm config get python" is "/usr/bin/python3" npm ERR! gyp ERR! find Python - "/usr/bin/python3" is not in PATH or produced an error npm ERR! gyp ERR! find Python Python is not set from environment variable PYTHON npm ERR! gyp ERR! find Python checking if "python3" can be used npm ERR! gyp ERR! find Python - "python3" is not in PATH or produced an error npm ERR! gyp ERR! find Python checking if "python" can be used npm ERR! gyp ERR! find Python - "python" is not in PATH or produced an error npm ERR! gyp ERR! find Python npm ERR! gyp ERR! find Python ********************************************************** npm ERR! gyp ERR! find Python You need to install the latest version of Python. npm ERR! gyp ERR! find Python Node-gyp should be able to find and use Python. If not, npm ERR! gyp ERR! find Python you can try one of the following options: npm ERR! gyp ERR! find Python - Use the switch --python="/path/to/pythonexecutable" npm ERR! gyp ERR! find Python (accepted by both node-gyp and npm) npm ERR! gyp ERR! find Python - Set the environment variable PYTHON npm ERR! gyp ERR! find Python - Set the npm configuration variable python: npm ERR! gyp ERR! find Python npm config set python "/path/to/pythonexecutable" npm ERR! gyp ERR! find Python For more information consult the documentation at: npm ERR! gyp ERR! find Python https://github.com/nodejs/node-gyp#installation npm ERR! gyp ERR! find Python ********************************************************** npm ERR! gyp ERR! find Python npm ERR! gyp ERR! configure error npm ERR! gyp ERR! stack Error: Could not find any Python installation to use npm ERR! gyp ERR! stack at PythonFinder.fail (/Users/kyeolhan/.nvm/versions/node/v16.14.2/lib/node_modules/npm/node_modules/node-gyp/lib/find-python.js:330:47) npm ERR! gyp ERR! stack at PythonFinder.runChecks (/Users/kyeolhan/.nvm/versions/node/v16.14.2/lib/node_modules/npm/node_modules/node-gyp/lib/find-python.js:159:21) npm ERR! gyp ERR! stack at PythonFinder.<anonymous> (/Users/kyeolhan/.nvm/versions/node/v16.14.2/lib/node_modules/npm/node_modules/node-gyp/lib/find-python.js:202:16) npm ERR! gyp ERR! stack at PythonFinder.execFileCallback (/Users/kyeolhan/.nvm/versions/node/v16.14.2/lib/node_modules/npm/node_modules/node-gyp/lib/find-python.js:294:16) npm ERR! gyp ERR! stack at exithandler (node:child_process:406:5) npm ERR! gyp ERR! 
stack at ChildProcess.errorhandler (node:child_process:418:5) npm ERR! gyp ERR! stack at ChildProcess.emit (node:events:526:28) npm ERR! gyp ERR! stack at Process.ChildProcess._handle.onexit (node:internal/child_process:289:12) npm ERR! gyp ERR! stack at onErrorNT (node:internal/child_process:478:16) npm ERR! gyp ERR! stack at processTicksAndRejections (node:internal/process/task_queues:83:21) npm ERR! gyp ERR! System Darwin 21.4.0 npm ERR! gyp ERR! command "/Users/kyeolhan/.nvm/versions/node/v16.14.2/bin/node" "/Users/kyeolhan/.nvm/versions/node/v16.14.2/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild" npm ERR! gyp ERR! cwd /Users/kyeolhan/ForWork/BackOffceFront/node_modules/deasync npm ERR! gyp ERR! node -v v16.14.2 npm ERR! gyp ERR! node-gyp -v v9.0.0 npm ERR! gyp ERR! not ok npm ERR! Build failed npm ERR! A complete log of this run can be found in: npm ERR! /Users/kyeolhan/.npm/_logs/2022-03-28T13_05_45_769Z-debug-0.log I think the 'deasync' package is requiring python since only projects containing that package in package-lock.json make this error. How can I solve it? | I resolved this issue by downgrading node version (to v14.19.1). #reference! How to solve npm install error “npm ERR! code 1” | 7 | 10 |
71,627,943 | 2022-3-26 | https://stackoverflow.com/questions/71627943/update-an-element-in-faiss-index | I am using faiss indexflatIP to store vectors related to some words. I also use another list to store words (the vector of the nth element in the list is nth vector in faiss index). I have two questions: Is there a better way to relate words to their vectors? Can I update the nth element in the faiss? | You can do both. Is there a better way to relate words to their vectors? Call index.add_with_ids(vectors, ids) Some index types support the method add_with_ids, but flat indexes don't. If you call the method on a flat index, you will receive the error add_with_ids not implemented for this type of index If you want to use IDs with a flat index, you must use index2 = faiss.IndexIDMap(index) Can I update the nth element in the faiss? If you want to update some encodings, first remove them, then add them again with add_with_ids If you don't remove the original IDs first, you will have duplicates and search results will be messed up. To remove an array of IDs, call index.remove_ids(ids_to_replace) Nota bene: IDs must be of np.int64 type. | 8 | 9 |
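Putting both parts of the answer together, a rough sketch (the dimension and IDs are made up for illustration):

import numpy as np
import faiss

d = 64
index = faiss.IndexIDMap(faiss.IndexFlatIP(d))   # flat index wrapped so it accepts IDs
vecs = np.random.rand(3, d).astype("float32")
index.add_with_ids(vecs, np.array([10, 11, 12], dtype=np.int64))

# "update" the vector stored under ID 11: remove it, then re-add under the same ID
index.remove_ids(np.array([11], dtype=np.int64))
index.add_with_ids(np.random.rand(1, d).astype("float32"),
                   np.array([11], dtype=np.int64))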
71,612,119 | 2022-3-25 | https://stackoverflow.com/questions/71612119/how-to-extract-texts-and-tables-pdfplumber | With the pdfplumber library, you can extract the text of a PDF page, or you can extract the tables from a pdf page. The issue is that I can't seem to find a way to extract text and tables together. Essentially, if the pdf is formatted in this way: text1 tablename ___________ | Header 1 | ------------ | row 1 | ------------ text 2 I would like the output to be: ["text 1", "table name", [["header 1"], ["row 1"]], "text 2"] In this example you could run extract_text from pdfplumber: with pdfplumber.open("example.pdf") as pdf: for page in pdf.pages: page.extract_text() but that extracts text and tables as text. You could run extract_tables, but that only gives you the tables. I need a way to extract both text and tables at the same time. Is this built into the library in some way that I don't understand? If not, is this possible? Edit: Answered This comes directly from the accepted answer with a slight tweak to fix it. Thanks so much! from operator import itemgetter def check_bboxes(word, table_bbox): """ Check whether word is inside a table bbox. """ l = word['x0'], word['top'], word['x1'], word['bottom'] r = table_bbox return l[0] > r[0] and l[1] > r[1] and l[2] < r[2] and l[3] < r[3] tables = page.find_tables() table_bboxes = [i.bbox for i in tables] tables = [{'table': i.extract(), 'top': i.bbox[1]} for i in tables] non_table_words = [word for word in page.extract_words() if not any( [check_bboxes(word, table_bbox) for table_bbox in table_bboxes])] lines = [] for cluster in pdfplumber.utils.cluster_objects( non_table_words + tables, itemgetter('top'), tolerance=5): if 'text' in cluster[0]: lines.append(' '.join([i['text'] for i in cluster])) elif 'table' in cluster[0]: lines.append(cluster[0]['table']) Edit July 19th 2022: Updated a param to use itemgetter, which is now required for pdfplumber's cluster_objects function (rather than a string) | You can get tables' bounding boxes and then filter out all of the words inside them, something like this: def check_bboxes(word, table_bbox): """ Check whether word is inside a table bbox. """ l = word['x0'], word['top'], word['x1'], word['bottom'] r = table_bbox return l[0] > r[0] and l[1] > r[1] and l[2] < r[2] and l[3] < r[3] tables = page.find_tables() table_bboxes = [i.bbox for i in tables] tables = [{'table': i.extract(), 'doctop': i.bbox[1]} for i in tables] non_table_words = [word for word in page.extract_words() if not any( [check_bboxes(word, table_bbox) for table_bbox in table_bboxes])] lines = [] for cluster in pdfplumber.utils.cluster_objects(non_table_words+tables, 'doctop', tolerance=5): if 'text' in cluster[0]: lines.append(' '.join([i['text'] for i in cluster])) elif 'table' in cluster[0]: lines.append(cluster[0]['table']) | 4 | 3 |
71,652,903 | 2022-3-28 | https://stackoverflow.com/questions/71652903/torchtext-vocab-typeerror-vocab-init-got-an-unexpected-keyword-argument | I am working on a CNN sentiment analysis machine learning model which uses the IMDb dataset provided by the Torchtext library. On one of my lines of code vocab = Vocab(counter, min_freq = 1, specials=('<unk>', '<BOS>', '<EOS>', '<PAD>')) I am getting a TypeError for the min_freq argument even though I am certain that it is one of the accepted arguments for the function. I am also getting UserWarning Lambda function is not supported for pickle, please use regular python function or functools partial instead. Full code from torchtext.datasets import IMDB from collections import Counter from torchtext.data.utils import get_tokenizer from torchtext.vocab import Vocab tokenizer = get_tokenizer('basic_english') train_iter = IMDB(split='train') test_iter = IMDB(split='test') counter = Counter() for (label, line) in train_iter: counter.update(tokenizer(line)) vocab = Vocab(counter, min_freq = 1, specials=('<unk>', '<BOS>', '<EOS>', '<PAD>')) Source Links towardsdatascience github Legacy to new I have tried removing the min_freq argument and using the function's default as follows vocab = Vocab(counter, specials=('<unk>', '<BOS>', '<EOS>', '<PAD>')) however I end up getting the same type error but for the specials argument rather than min_freq. Any help will be much appreciated. Thank you. | As https://github.com/pytorch/text/issues/1445 mentioned, you should change "Vocab" to "vocab". I think they mistyped it in the legacy-to-new notebook. Correct code: from torchtext.datasets import IMDB from collections import Counter from torchtext.data.utils import get_tokenizer from torchtext.vocab import vocab tokenizer = get_tokenizer('basic_english') train_iter = IMDB(split='train') test_iter = IMDB(split='test') counter = Counter() for (label, line) in train_iter: counter.update(tokenizer(line)) vocab = vocab(counter, min_freq = 1, specials=('<unk>', '<BOS>', '<EOS>', '<PAD>')) my environment: python 3.9.12 torchtext 0.12.0 pytorch 1.11.0 | 5 | 6 |
71,713,719 | 2022-3-24 | https://stackoverflow.com/questions/71713719/runtimeerror-dataloader-worker-pids-15876-2756-exited-unexpectedly | I am compiling some existing examples from the PyTorch tutorial website. I am working only on the CPU device, with no GPU. When running the program, the error below is shown. Is it because I'm working on the CPU device, or is it a setup issue? raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e RuntimeError: DataLoader worker (pid(s) 15876, 2756) exited unexpectedly. How can I solve it? import torch import torch.functional as F import torch.nn as nn import torch.optim as optim import torchvision import torchvision.transforms as transforms import matplotlib.pyplot as plt import numpy as np from torch.utils.tensorboard import SummaryWriter from torch.utils.data import DataLoader from torchvision import datasets device = 'cuda' if torch.cuda.is_available() else 'cpu' print(device) transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))] ) #Store separate training and validation splits in data training_set = datasets.FashionMNIST( root='data', train=True, download=True, transform=transform ) validation_set = datasets.FashionMNIST( root='data', train=False, download=True, transform=transform ) training_loader = DataLoader(training_set, batch_size=4, shuffle=True, num_workers=2) validation_loader = DataLoader(validation_set, batch_size=4, shuffle=False, num_workers=2) classes = ('T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle Boot') def matplotlib_imshow(img, one_channel=False): if one_channel: img = img.mean(dim=0) img = img/2+0.5 #unnormalize npimg = img.numpy() if one_channel: plt.imshow(npimg, cmap="Greys") else: plt.imshow(np.transpose(npimg, (1, 2, 0))) dataiter = iter(training_loader) images, labels = dataiter.next() img_grid = torchvision.utils.make_grid(images) matplotlib_imshow(img_grid, one_channel=True) | You need to first figure out why the DataLoader worker crashed. A common reason is running out of memory. You can check this by running dmesg -T after your script crashes and seeing if the system killed any Python process. | 4 | 2 |
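One quick way to narrow this down (not from the original answer): load the data in the main process. If the crash disappears with num_workers=0, the worker processes were being killed, which is what dmesg would confirm (often the OOM killer).

training_loader = DataLoader(training_set, batch_size=4, shuffle=True, num_workers=0)
validation_loader = DataLoader(validation_set, batch_size=4, shuffle=False, num_workers=0)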
71,628,971 | 2022-3-26 | https://stackoverflow.com/questions/71628971/jupyter-is-busy-stuck-randomly-when-input-is-executed-inside-while-statement | Has anyone ever had a problem that Jupyter is busy (stuck) when executing input() inside the while statement? The problem is randomly happening to me. Sometimes the command box is prompted next to the cell, and sometimes the input() box never prompts. Here is the simpler version of my code: from IPython.display import clear_output class game_quit(Exception): def _render_traceback_(self): pass def the_game(player_status = "Offering"): while True: clear_output() print("The Game", flush = True) print(f"Status: {player_status}", flush = True) if (player_status == "Offering") or (player_status == "Wrong input!"): play_offer = input("Would you like to play the game (Y or N)? ") if play_offer.upper() == "Y": player_status = "Playing" play_accepted = True clear_output() break elif play_offer.upper() == "N": play_accepted = False clear_output() break else: player_status = "Wrong input!" clear_output() continue else: player_status = "Playing" play_accepted = True clear_output() break while play_accepted: the_play(player_status) else: raise game_quit() def the_play(player_status): while True: clear_output() print("The Game", flush = True) print(f"Status: {player_status}", flush = True) pet_offer = input("Do you want to go with a (D)og, a (C)at, or a (P)arakeet? ") if pet_offer.upper() == "D": player_pet = "Dog" clear_output() break elif pet_offer.upper() == "C": player_pet = "Cat" clear_output() break elif pet_offer.upper() == "P": player_pet = "Parakeet" clear_output() break else: player_status = "Wrong input!" clear_output() continue while pet_offer: clear_output() print(f"Your companion is a {player_pet}", flush = True) play_again = input("Would you like to continue playing the game (Y or N)? ") if play_again.upper() == "Y": play_continue = True clear_output() break elif play_again.upper() == "N": play_continue = False clear_output() break else: player_status = "Wrong input!" clear_output() continue if play_continue: player_status = "Playing" the_game(player_status) else: raise game_quit() Step to reproduce the problem: Execute the code. the_game() The user wants to play the game. play_offer = input("Would you like to play the game (Y or N)? ") The user chooses his pet. pet_offer = input("Do you want to go with a (D)og, a (C)at, or a (P)arakeet? ") The user wants to replay the game. play_again = input("Would you like to continue playing the game (Y or N)? ") The user should expect the text box to choose his pet. pet_offer = input("Do you want to go with a (D)og, a (C)at, or a (P)arakeet? ") A The problem: The text box is not showing. The running code is stuck there. B But sometimes: The text box is showing, and the user can choose his pet. So far, my only solution is to restart the kernel. Has anyone had a solution for this kind of problem? Regards, Ade | Someone gave me an insight. The problem is because clear_output() asynchronous problem. Then I created this function: from time import sleep from IPython.display import clear_output def refresh_screen(): clear_output() sleep(0.02) and replaced all clear_output() with refresh_screen() in my code. The problem is gone. A little delay solved the problem. | 4 | 3 |
71,642,233 | 2022-3-28 | https://stackoverflow.com/questions/71642233/replacing-pythons-parser-functionality | First of all I want to mention that I know this is a horrible idea and it shouldn't be done. My intention is mainly curiosity and learning the innards of Python, and how to 'hack' them. I was wondering whether it is at all possible to change what happens when we, for instance, use [] to create a list. Is there a way to modify how the parser behaves in order to, for instance, cause ["hello world"] to call print("hello world") instead of creating a list with one element? I've attempted to find any documentation or posts about this but failed to do so. Below is an example of replacing the built-in dict to instead use a custom class: from __future__ import annotations from typing import List, Any import builtins class Dict(dict): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.__dict__ = self def subset(self, keys: List[Any]) -> Dict: return Dict({key: self[key] for key in keys}) builtins.dict = Dict When this module is imported, it replaces the dict built-in with the Dict class. However this only works when we directly call dict(). If we attempt to use {} it will fall back to the base dict built-in implementation: import new_dict a = dict({'a': 5, 'b': 8}) b = {'a': 5, 'b': 8} print(type(a)) print(type(b)) Yields: <class 'py_extensions.new_dict.Dict'> <class 'dict'> | [] and {} are compiled to specific opcodes that specifically return a list or a dict, respectively. On the other hand list() and dict() compile to bytecodes that search global variables for list and dict and then call them as functions: import dis dis.dis(lambda:[]) dis.dis(lambda:{}) dis.dis(lambda:list()) dis.dis(lambda:dict()) returns (with some additional newlines for clarity): 3 0 BUILD_LIST 0 2 RETURN_VALUE 5 0 BUILD_MAP 0 2 RETURN_VALUE 7 0 LOAD_GLOBAL 0 (list) 2 CALL_FUNCTION 0 4 RETURN_VALUE 9 0 LOAD_GLOBAL 0 (dict) 2 CALL_FUNCTION 0 4 RETURN_VALUE Thus you can overwrite what dict() returns simply by overwriting the global dict, but you can't overwrite what {} returns. These opcodes are documented here. If the BUILD_MAP opcode runs, you get a dict, no way around it. As an example, here is the implementation of BUILD_MAP in CPython, which calls the function _PyDict_FromItems. It doesn't look at any kind of user-defined classes, it specifically makes a C struct that represents a python dict. It is possible in at least some cases to manipulate the python bytecode at runtime. If you really wanted to make {} return a custom class, I suppose you could write some code to search for the BUILD_MAP opcode and replace it with the appropriate opcodes. Though those opcodes aren't the same size, so there's probably quite a few additional changes you'd have to make. | 5 | 3 |
71,597,789 | 2022-3-24 | https://stackoverflow.com/questions/71597789/generate-all-digraphs-of-a-given-size-up-to-isomorphism | I am trying to generate all directed graphs with a given number of nodes up to graph isomorphism so that I can feed them into another Python program. Here is a naive reference implementation using NetworkX, I would like to speed it up: from itertools import combinations, product import networkx as nx def generate_digraphs(n): graphs_so_far = list() nodes = list(range(n)) possible_edges = [(i, j) for i, j in product(nodes, nodes) if i != j] for edge_mask in product([True, False], repeat=len(possible_edges)): edges = [edge for include, edge in zip(edge_mask, possible_edges) if include] g = nx.DiGraph() g.add_nodes_from(nodes) g.add_edges_from(edges) if not any(nx.is_isomorphic(g_before, g) for g_before in graphs_so_far): graphs_so_far.append(g) return graphs_so_far assert len(generate_digraphs(1)) == 1 assert len(generate_digraphs(2)) == 3 assert len(generate_digraphs(3)) == 16 The number of such graphs seems to grow pretty quickly and is given by this OEIS sequence. I am looking for a solution that is able to generate all graphs up to 7 nodes (about a billion graphs in total) in a reasonable amount of time. Representing a graph as a NetworkX object is not very important; for example, representing a graph with an adjacency list or using a different library is good with me. | There’s a useful idea that I learned from Brendan McKay’s paper “Isomorph-free exhaustive generation” (though I believe that it predates that paper). The idea is that we can organize the isomorphism classes into a tree, where the singleton class with the empty graph is the root, and each class with graphs having n > 0 nodes has a parent class with graphs having n − 1 nodes. To enumerate the isomorphism classes of graphs with n > 0 nodes, enumerate the isomorphism classes of graphs with n − 1 nodes, and for each such class, extend its representatives in all possible ways to n nodes and filter out the ones that aren’t actually children. The Python code below implements this idea with a rudimentary but nontrivial graph isomorphism subroutine. It takes a few minutes for n = 6 and (estimating here) on the order of a few days for n = 7. For extra speed, port it to C++ and maybe find better algorithms for handling the permutation groups (maybe in TAoCP, though most of the graphs have no symmetry, so it’s not clear how big the benefit would be). import cProfile import collections import itertools import random # Returns labels approximating the orbits of graph. Two nodes in the same orbit # have the same label, but two nodes in different orbits don't necessarily have # different labels. def invariant_labels(graph, n): labels = [1] * n for r in range(2): incoming = [0] * n outgoing = [0] * n for i, j in graph: incoming[j] += labels[i] outgoing[i] += labels[j] for i in range(n): labels[i] = hash((incoming[i], outgoing[i])) return labels # Returns the inverse of perm. def inverse_permutation(perm): n = len(perm) inverse = [None] * n for i in range(n): inverse[perm[i]] = i return inverse # Returns the permutation that sorts by label. def label_sorting_permutation(labels): n = len(labels) return inverse_permutation(sorted(range(n), key=lambda i: labels[i])) # Returns the graph where node i becomes perm[i] . 
def permuted_graph(perm, graph): perm_graph = [(perm[i], perm[j]) for (i, j) in graph] perm_graph.sort() return perm_graph # Yields each permutation generated by swaps of two consecutive nodes with the # same label. def label_stabilizer(labels): n = len(labels) factors = ( itertools.permutations(block) for (_, block) in itertools.groupby(range(n), key=lambda i: labels[i]) ) for subperms in itertools.product(*factors): yield [i for subperm in subperms for i in subperm] # Returns the canonical labeled graph isomorphic to graph. def canonical_graph(graph, n): labels = invariant_labels(graph, n) sorting_perm = label_sorting_permutation(labels) graph = permuted_graph(sorting_perm, graph) labels.sort() return max( (permuted_graph(perm, graph), perm[sorting_perm[n - 1]]) for perm in label_stabilizer(labels) ) # Returns the list of permutations that stabilize graph. def graph_stabilizer(graph, n): return [ perm for perm in label_stabilizer(invariant_labels(graph, n)) if permuted_graph(perm, graph) == graph ] # Yields the subsets of range(n) . def power_set(n): for r in range(n + 1): for s in itertools.combinations(range(n), r): yield list(s) # Returns the set where i becomes perm[i] . def permuted_set(perm, s): perm_s = [perm[i] for i in s] perm_s.sort() return perm_s # If s is canonical, returns the list of permutations in group that stabilize s. # Otherwise, returns None. def set_stabilizer(s, group): stabilizer = [] for perm in group: perm_s = permuted_set(perm, s) if perm_s < s: return None if perm_s == s: stabilizer.append(perm) return stabilizer # Yields one representative of each isomorphism class. def enumerate_graphs(n): assert 0 <= n if 0 == n: yield [] return for subgraph in enumerate_graphs(n - 1): sub_stab = graph_stabilizer(subgraph, n - 1) for incoming in power_set(n - 1): in_stab = set_stabilizer(incoming, sub_stab) if not in_stab: continue for outgoing in power_set(n - 1): out_stab = set_stabilizer(outgoing, in_stab) if not out_stab: continue graph, i_star = canonical_graph( subgraph + [(i, n - 1) for i in incoming] + [(n - 1, j) for j in outgoing], n, ) if i_star == n - 1: yield graph def test(): print(sum(1 for graph in enumerate_graphs(5))) cProfile.run("test()") | 8 | 3 |
71,607,514 | 2022-3-24 | https://stackoverflow.com/questions/71607514/stopiteration-error-while-drawing-a-pgmpy-networkx-graph | I have a python script that loads a csv file using pandas, and then uses pgmpy to learn a bayesian network over the data. After learning the structure, I am drawing the graph using the function: nx.draw(graph_model, node_color='#00b4d9', with_labels=True) This works perfectly in Ubuntu, However, it is throwing a StopIteration error in a virtual machine running Mac that I use to compile a Mac version. The error it is throwing is the following (I've removed the paths because it contains the name of the project and this is unpublished work): StopIteration: At: <path>/site-packages/matplotlib/bezier.py(352): split_path_inout <path>/site-packages/matplotlib/patches.py(2754): _shrink <path>/site-packages/matplotlib/patches.py(2771): _call_ <path>/site-packages/networkx/drawing/nx_pylab.py(794): _connectionstyle <path>/site-packages/matplotlib/patches.py(4453): _get_path_in_displaycoord <path>/site-packages/matplotlib/patches.py(4440): get_path <path>/site-packages/matplotlib/axes/_base.py(2376): _update_patch_limits <path>/site-packages/matplotlib/axes/_base.py(2358): add_patch <path>/site-packages/networkx/drawing/nx_pylab.py(867): _draw_networkx_edges_fancy_arrow_patch <path>/site-packages/networkx/drawing/nx_pylab.py(889): draw_networkx_edges <path>/site-packages/networkx/drawing/nx_pylab.py(334): draw_networkx <path>/site-packages/networkx/drawing/nx_pylab.py(120): draw <path>/bayesian_network/draw_model.py(7): <module> I have checked that the learned graph has nodes and edges. If I try to draw a graph with only one node, it works. I have already upgraded all of my packages, including pgmpy, matplotlib and networkx. Could this problem be related to the code being executed in a virtual machine running Mac? I currently have no access to a real Mac machine to test it. | I finally solved it by adding the position as a circular layout. Looks like in the previous version, it automatically did this, but in the new version that was installed in the virtual machine don't. pos = nx.circular_layout(graph_model) nx.draw(graph_model, node_color='#00b4d9', pos=pos, with_labels=True) | 5 | 6 |
71,655,179 | 2022-3-29 | https://stackoverflow.com/questions/71655179/how-can-i-make-an-object-with-an-interface-like-a-random-number-generator-but-t | I'd like to construct an object that works like a random number generator, but generates numbers in a specified sequence. # a random number generator rng = lambda : np.random.randint(2,20)//2 # a non-random number generator def nrng(): numbers = np.arange(1,10.5,0.5) for i in range(len(numbers)): yield numbers[i] for j in range(10): print('random number', rng()) print('non-random number', nrng()) The issue with the code above that I cannot call nrng in the last line because it is a generator. I know that the most straightforward way to rewrite the code above is to simply loop over the non-random numbers instead of defining the generator. I would prefer getting the example above to work because I am working with a large chunk of code that include a function that accepts a random number generator as an argument, and I would like to add the functionality to pass non-random number sequences without rewriting the entire code. EDIT: I see some confusion in the comments. I am aware that python's random number generators generate pseudo-random numbers. This post is about replacing a pseudo-random-number generator by a number generator that generates numbers from a non-random, user-specified sequence (e.g., a generator that generates the number sequence 1,1,2,2,1,0,1 if I want it to). | Edit: The cleanest way to do this would be to use a lambda to wrap your call to next(nrng) as per great comment from @GACy20: def nrng_gen(): yield from range(10) nrng = nrng_gen() nrng_func = lambda: next(nrng) for i in range(10): print(nrng_func()) Original answer: If you want your object to keep state and look like a function, create a custom class with __call__ method. eg. class NRNG: def __init__(self): self.numbers = range(10) self.state = -1 def __call__(self): self.state += 1 return self.numbers[self.state] nrng = NRNG() for i in range(10): print(nrng()) However, I wouldn't recommend this unless absolutely necessary, as it obscures the fact that your nrng keeps a state (although technically, most rngs keep their state internally). It's best to just use a regular generator with yield by calling next on it or to write a custom iterator (also class-based). Those will work with things like for loops and other python tools for iteration (like the excellent itertools package). | 18 | 16 |
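Building on the itertools mention in the answer, a compact variant for a user-specified, repeating sequence that still behaves like a zero-argument "rng" callable:

from itertools import cycle

seq = cycle([1, 1, 2, 2, 1, 0, 1])   # the non-random sequence, repeated forever
nrng = lambda: next(seq)

for _ in range(10):
    print('non-random number', nrng())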
71,665,973 | 2022-3-29 | https://stackoverflow.com/questions/71665973/inputting-just-a-comma-returns-strange-behaviour | Today I by mistake inputted just a comma on an interactive session Input: , and I noticed strangely that it did not return an error but instead: Output '' So I explored a bit this behaviour and tried some random stuff, and it seems like it creates tuples of strings, but it seems like these objects cannot be interacted with: , 'foo' bar 1 x returns: ("'foo'", 'bar', '1', 'x') Trying to assign those tuples or making some == checks doesn't really work but return errors. I couldn't find any answer or documentation about this behaviour. Anyone know what's happening here? EDIT: I am using Python 3.9.8 and running in VSCode interactive window with IPython. As someone pointed out in the comments this is not the behaviour when running from the terminal | This is an input transformation performed by the EscapedCommand class, specifically here. It's not part of autocall (details see below) which is handled by prefilter.AutoHandler. I couldn't find any public documentation on "escaped commands" and the class' docstring just mentions that it is a "transformer for escaped commands like %foo, !foo, or /foo". So I get the impression that the transformation for input like , a b is an (unintended) side effect of some other feature, as it's not publicly documented and doesn't seem to be of any use. We can request the current IPython shell by importing the corresponding module and then check what component modifies the input: In [1]: import IPython In [2]: shell = IPython.get_ipython() In [3]: %autocall Automatic calling is: Smart In [4]: shell.prefilter(',f a b') # autocall (note the function name 'f'), not applied since there is no callable `f` in the global namespace Out[4]: ',f a b' In [5]: f = lambda x,y: x+y In [6]: shell.prefilter(',f a b') # autocall (note the function name 'f'), now it works ------> f("a", "b") Out[6]: 'f("a", "b")' In [7]: shell.prefilter(', a b') # not identified as autocall --> remains unchanged Out[7]: ', a b' In [8]: shell.transform_cell(', a b') # however, it gets transformed by `EscapedCommand` Out[8]: '("a", "b")\n' For autocall to work, we first have to activate it via the "magic" command %autocall. Also the indicated function name (f) must be present in the namespace and be callable. %quickref provides a brief overview of the autocall feature (scroll down to "Autocall"): Autocall: f 1,2 : f(1,2) # Off by default, enable with %autocall magic. /f 1,2 : f(1,2) (forced autoparen) ,f 1 2 : f("1","2") ;f 1 2 : f("1 2") | 4 | 2 |
71,668,895 | 2022-3-29 | https://stackoverflow.com/questions/71668895/pydantic-inherit-generic-class | New to python and pydantic, I come from a typescript background. I was wondering if you can inherit a generic class? In typescript the code would be as follows interface GenericInterface<T> { value: T } interface ExtendsGeneric<T> extends GenericInterface<T> { // inherit value from GenericInterface otherValue: string } const thing: ExtendsGeneric<Number> = { value: 1, otherValue: 'string' } What I have been trying is something along the lines of #python3.9 from pydantic.generics import GenericModel from typing import TypeVar from typing import Generic T = TypeVar("T", int, str) class GenericField(GenericModel, Generic[T]): value: T class ExtendsGenericField(GenericField[T]): otherValue: str ExtendsGenericField[int](value=1, otherValue="other value") And I get the error of TypeError: Too many parameters for ExtendsGenericField; actual 1, expected 0. This sort of checks out because in the Pydantic docs it explicitly states "In order to declare a generic model...Use the TypeVar instances as annotations where you will want to replace them..." The easy workaround is to make ExtendsGeneric inherit from GenericModel and have value in its own class definition, but I was trying to reuse classes. Is inheriting a value from a generic class possible? | Generics are a little weird in Python, and the problem is that ExtendsGenericField itself isn't declared as generic. To solve, just add Generic[T] as a super class of ExtendsGenericField: from pydantic.generics import GenericModel from typing import TypeVar from typing import Generic T = TypeVar("T", int, str) class GenericField(GenericModel, Generic[T]): value: T class ExtendsGenericField(GenericField[T], Generic[T]): otherValue: str ExtendsGenericField[int](value=1, otherValue="other value") | 9 | 14 |
71,641,609 | 2022-3-28 | https://stackoverflow.com/questions/71641609/how-does-cpython-implement-os-environ | I was looking through source and noticed that it references a variable environ in methods before its defined: def _createenviron(): if name == 'nt': # Where Env Var Names Must Be UPPERCASE def check_str(value): if not isinstance(value, str): raise TypeError("str expected, not %s" % type(value).__name__) return value encode = check_str decode = str def encodekey(key): return encode(key).upper() data = {} for key, value in environ.items(): data[encodekey(key)] = value else: # Where Env Var Names Can Be Mixed Case encoding = sys.getfilesystemencoding() def encode(value): if not isinstance(value, str): raise TypeError("str expected, not %s" % type(value).__name__) return value.encode(encoding, 'surrogateescape') def decode(value): return value.decode(encoding, 'surrogateescape') encodekey = encode data = environ return _Environ(data, encodekey, decode, encode, decode) # unicode environ environ = _createenviron() del _createenviron So how does environ get setup? I cant seem to reason about where its initialized and declared so that _createenviron can use it? | TLDR search for from posix import * in os module content. The os module imports all public symbols from posix (Unix) or nt (Windows) low-level module at the beginning of os.py. posix exposes environ as a plain Python dict. os wraps it with _Environ dict-like object that updates environment variables on _Environ items changing. | 7 | 2 |
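A quick way to see the two layers side by side on a POSIX system (on Windows the low-level module is nt instead of posix):

import os
import posix

print(type(posix.environ))      # <class 'dict'>: the plain snapshot the answer mentions
print(type(os.environ))         # <class 'os._Environ'>: the wrapper built in _createenviron
print(list(posix.environ)[:1])  # keys and values are bytes at this level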
71,644,405 | 2022-3-28 | https://stackoverflow.com/questions/71644405/why-is-it-faster-to-compare-strings-that-match-than-strings-that-do-not | Here are two measurements: timeit.timeit('"toto"=="1234"', number=100000000) 1.8320042459999968 timeit.timeit('"toto"=="toto"', number=100000000) 1.4517491540000265 As you can see, comparing two strings that match is faster than comparing two strings with the same size that do not match. This is quite disturbing: During a string comparison, I believed that Python was testing strings character by character, so "toto"=="toto" should be longer to test than "toto"=="1234" as it requires four tests against one for the non-matching comparison. Maybe the comparison is hash-based, but in this case, timings should be the same for both comparisons. Why? | Combining my comment and the comment by @khelwood: TL;DR: When analysing the bytecode for the two comparisons, it reveals the 'time' and 'time' strings are assigned to the same object. Therefore, an up-front identity check (at C-level) is the reason for the increased comparison speed. The reason for the same object assignment is that, as an implementation detail, CPython interns strings which contain only 'name characters' (i.e. alpha and underscore characters). This enables the object's identity check. Bytecode: import dis In [24]: dis.dis("'time'=='time'") 1 0 LOAD_CONST 0 ('time') # <-- same object (0) 2 LOAD_CONST 0 ('time') # <-- same object (0) 4 COMPARE_OP 2 (==) 6 RETURN_VALUE In [25]: dis.dis("'time'=='1234'") 1 0 LOAD_CONST 0 ('time') # <-- different object (0) 2 LOAD_CONST 1 ('1234') # <-- different object (1) 4 COMPARE_OP 2 (==) 6 RETURN_VALUE Assignment Timing: The 'speed-up' can also be seen in using assignment for the time tests. The assignment (and compare) of two variables to the same string, is faster than the assignment (and compare) of two variables to different strings. Further supporting the hypothesis the underlying logic is performing an object comparison. This is confirmed in the next section. In [26]: timeit.timeit("x='time'; y='time'; x==y", number=1000000) Out[26]: 0.0745926329982467 In [27]: timeit.timeit("x='time'; y='1234'; x==y", number=1000000) Out[27]: 0.10328884399496019 Python source code: As helpfully provided by @mkrieger1 and @Masklinn in their comments, the source code for unicodeobject.c performs a pointer comparison first and if True, returns immediately. int _PyUnicode_Equal(PyObject *str1, PyObject *str2) { assert(PyUnicode_CheckExact(str1)); assert(PyUnicode_CheckExact(str2)); if (str1 == str2) { // <-- Here return 1; } if (PyUnicode_READY(str1) || PyUnicode_READY(str2)) { return -1; } return unicode_compare_eq(str1, str2); } Appendix: Reference answer nicely illustrating how to read the disassembled bytecode output. Courtesy of @Delgan Reference answer which nicely describes CPython's string interning. Coutresy of @ShadowRanger | 77 | 75 |
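The same fast path can be triggered by hand for strings that would not otherwise be interned (they contain non-name characters), which makes the identity short-circuit visible:

import sys

a = sys.intern("hello world!")   # forced into the intern table
b = sys.intern("hello world!")
print(a is b)    # True: both names point at one object
print(a == b)    # hits the C-level `str1 == str2` pointer check and returns immediately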
71,669,583 | 2022-3-29 | https://stackoverflow.com/questions/71669583/is-there-a-converse-to-operator-contains | edit: I changed the title from complement to converse after the discussion below. In the operator module, the binary functions comparing objects take two parameters. But the contains function has them swapped. I use a list of operators, e.g. operator.lt, operator.ge. They take 2 arguments, a and b. I can say operator.lt(a, b) and it will tell me whether a is less than b. But with operator.contains, I want to know whether b contains a so I have to swap the arguments. This is a pain because I want a uniform interface, so I can have a user defined list of operations to use (I'm implementing something like Django QL). I know I could create a helper function which swaps the arguments: def is_contained_by(a, b): return operator.contains(b, a) Is there a "standard" way to do it? Alternatively, I can implement everything backwards, except contains. So map lt to ge, etc, but that gets really confusing. | If either of them posts an answer, you should accept that, but between users @chepner and @khelwood, they gave you most of the answer. The complement of operator.contains would be something like operator.does_not_contain, so that's not what you're looking for exactly. Although I think a 'reflection' isn't quite what you're after either, since that would essentially be its inverse, if it were defined. At any rate, as @chepner points out, contains is not backwards. It just not the same as in, in would be is_contained_by as you defined it. Consider that a in b would not be a contains b, but rather b contains a, so the signature of operator.contains makes sense. It follows the convention of the function's stated infix operation being its name. I.e. (a < b) == operator.lt(a, b) and b contains a == operator.contains(b, a) == (a in b). (in a world where contains would be an existing infix operator) Although I wouldn't recommend it, because it may cause confusion with others reading your code and making the wrong assumptions, you could do something like: operator.in_ = lambda a, b: b.__contains__(a) # or operator.in_ = lambda a, b: operator.contains(b, a) That would give you an operator.in_ that works as you expect (and avoids the in keyword), but at the cost of a little overhead and possible confusion. I'd recommend working with operator.contains instead. | 8 | 3 |
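For the "uniform interface" use case in the question, the helper slots straight into a lookup table next to the stdlib operators; a small sketch:

import operator

def is_contained_by(a, b):
    return operator.contains(b, a)

ops = {"<": operator.lt, ">=": operator.ge, "in": is_contained_by}
print(ops["<"](3, 5))                # True
print(ops["in"]("a", ["a", "b"]))    # True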
71,668,058 | 2022-3-29 | https://stackoverflow.com/questions/71668058/import-module-after-pip-install-wheel | I have a customized built module, lets call it abc, and pip install /local_path/abc-0.1-py3-none-any.whl. Installation is correct, >>pip install dist/abc-0.1-py3-none-any.whl Processing ./dist/abc-0.1-py3-none-any.whl Successfully installed abc-0.1 but I could not import the module. After I ran ppip freeze list and found out the name of module in list is abc @ file:///local_path/abc-0.1-py3-none-any.whl. my question is how could import the module? Thank you . ├── requirements.txt ├── setup.py ├── src │ ├── bin │ │ ├── __init__.py │ │ ├── xyz1.py │ │ ├── xyz2.py │ │ └── xyz3.py here is my setup.py with open("requirements.txt") as f: install_requires = f.read() setup( name="abc", version="0.1", author="galaxyan", author_email="[email protected]", description="test whell framework", packages=find_packages(include=["src"]), zip_safe=False, install_requires=install_requires, ) ############ update ############ it does not work even change setup.py with open("requirements.txt") as f: install_requires = f.read() setup( name="abc", version="0.1", author="galaxyan", author_email="[email protected]", description="test whell framework", packages=find_packages(where="src"), package_dir={"": "src"}, zip_safe=False, install_requires=install_requires, ) | The setup.py is wrong, which means you're building a wheel with no packages actually inside. Instead of setup( ... packages=find_packages(include=["src"]), ... ) Try this: setup( ... packages=find_packages(where="src"), package_dir={"": "src"}, ... ) See Testing & Packaging for more info. | 4 | 3 |
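A quick way to confirm what actually landed inside the rebuilt wheel, sketched under the assumption that the dist/ layout from the question is used (a wheel is just a zip archive):

from zipfile import ZipFile

# with the corrected packages/package_dir settings you should see
# bin/__init__.py, bin/xyz1.py, ... listed here instead of an empty package
print(ZipFile("dist/abc-0.1-py3-none-any.whl").namelist())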
71,664,875 | 2022-3-29 | https://stackoverflow.com/questions/71664875/what-is-the-replacement-for-distutils-util-get-platform | Apparently, Python 3.10 / 3.12 is going to deprecate / remove distutils (cpython/issues/92584). Unfortunately, I have not been able to find a replacement for the one and only function I am using from it; distutils.util.get_platform(). What is the replacement for this? Note that platform is NOT an answer. I need a function that returns the complete string that is used when building a binary wheel¹, e.g. macosx-12-x86_64. Note particularly that there appears to be platform-specific logic embedded in this (e.g. the only other way I know to get the macos version is with a macos-specific API). (¹ As noted in a comment, distutils.util.get_platform() is, strictly speaking, not that function. However, PEP 425 specifies that "the platform tag is simply distutils.util.get_platform() with all hyphens - and periods . replaced with underscore _." Ergo, it is straight-forward and platform-agnostic to derive the tag from distutils.util.get_platform(). An acceptable answer may therefore give an approved, public API which produces the platform tag directly, or a compatible replacement for distutils.util.get_platform().) | For your use-case, sysconfig has a replacement import sysconfig sysconfig.get_platform() This is what the wheel project itself used as a replacement for distutils.util.get_platform() when removing distutils from the code in Replaced all uses of distutils with setuptools #428. | 7 | 9 |
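A short sketch tying the two together: PEP 425 (quoted in the question) derives the platform tag from that string by replacing hyphens and periods with underscores, so the tag can be built directly from sysconfig.get_platform().

import sysconfig

def platform_tag() -> str:
    # PEP 425: replace '-' and '.' in the platform string with '_'
    return sysconfig.get_platform().replace("-", "_").replace(".", "_")

print(sysconfig.get_platform())  # e.g. 'macosx-12-x86_64'
print(platform_tag())            # e.g. 'macosx_12_x86_64'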
71,661,851 | 2022-3-29 | https://stackoverflow.com/questions/71661851/typeerror-init-got-an-unexpected-keyword-argument-as-tuple | While I am testing my API I recently started to get the error below. if request is None: > builder = EnvironBuilder(*args, **kwargs) E TypeError: __init__() got an unexpected keyword argument 'as_tuple' /usr/local/lib/python3.7/site-packages/werkzeug/test.py:1081: TypeError As I read from the documentation in the newer version of Werkzeug the as_tuple parameter is removed. Part of my test code is from flask.testing import FlaskClient @pytest.fixture(name='test_client') def _test_client() -> FlaskClient: app = create_app() return app.test_client() class TestPeerscoutAPI: def test_should_have_access_for_status_page(self, test_client: FlaskClient): response = test_client.get('/api/status') assert _get_ok_json(response) == {"status": "OK"} Any help would be greatly appreciated. | As of version 2.1.0, werkzeug has removed the as_tuple argument to Client. Since Flask wraps werkzeug and you're using a version that still passes this argument, it will fail. See the exact change on the GitHub PR here. You can take one of two paths to solve this: Upgrade flask Pin your werkzeug version # in requirements.txt werkzeug==2.0.3 | 37 | 48 |
71,657,355 | 2022-3-29 | https://stackoverflow.com/questions/71657355/run-mypy-from-pre-commit-for-different-directories | I have the following structure for my project: project/ ├── backend │ ├── api_v1 │ ├── api_v2 │ └── api_v3 └── frontend Each of the API dirs, api_v1, api_v2, and api_v3, has Python files. I would like to run pre-commit for each of these directories only if there is a change in the code. For example, I would like to run mypy -p api_v1 if there is a change in the directory api_v1. I'm aware of pre-commit's files and types keys, but I cannot figure out a way to run mypy as if it were running from the directory backend. Also, I cannot run mypy separately for api_v1, api_v2, or api_v3 when I have changes in more than one of these directories. Is it not possible, or am I missing something? | pre-commit operates on files, so what you're trying to do isn't exactly supported, but anything is possible. When not running on files you're going to take some efficiency concessions, as you'll be linting much more often than you need to be. Here's a rough sketch for how you would do this: - repo: https://github.com/pre-commit/mirrors-mypy rev: ... hooks: - id: mypy pass_filenames: false # suppress the normal filename passing files: ^backend/api_v1/ # filter the files down to a specific subdirectory # pre-commit only supports running at the root of a repo since that's where # git hooks run. but it also allows running arbitrary code so you can # step outside of those bounds # note that `bash` will reduce your portability slightly entry: bash -c 'cd backend && mypy -p api_v1 "$@"' -- # and then repeat ... - id: mypy pass_filenames: false files: ^backend/api_v2/ entry: bash -c 'cd backend && mypy -p api_v2 "$@"' -- # etc. disclaimer: I wrote pre-commit | 6 | 10
71,661,228 | 2022-3-29 | https://stackoverflow.com/questions/71661228/how-to-multiply-several-vectors-by-one-matrix-at-once-in-numpy | I have a 2x2 rotation matrix and several vectors stored in a Nx2 array. Is there a way to rotate them all (i.e. multiply them all by the rotation matrix) at once? I'm sure there is a numpy method for that, it's just not obvious. import numpy as np vectors = np.array( ( (1,1), (1,2), (2,2), (4,2) ) ) # 4 2D vectors ang = np.radians(30) m = np.array( ( (np.cos(ang), -np.sin(ang)), (np.sin(ang), np.cos(ang)) )) # 2x2 rotation matrix # rotate 1 vector: m.dot(vectors[0,:]) # rotate all vectors at once?? | Because m has shape (2,2) and vectors has shape (4,2), you can simply do dots = vectors @ m.T Then each row i contains the matrix-vector product m @ vectors[i, :]. | 4 | 2 |
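A quick sanity check that the batched product matches the one-vector-at-a-time version from the question (reusing the same sample data):

import numpy as np

vectors = np.array(((1, 1), (1, 2), (2, 2), (4, 2)), dtype=float)
ang = np.radians(30)
m = np.array(((np.cos(ang), -np.sin(ang)),
              (np.sin(ang),  np.cos(ang))))

rotated = vectors @ m.T                            # all vectors at once
expected = np.array([m.dot(v) for v in vectors])   # one at a time, as in the question
print(np.allclose(rotated, expected))              # True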
71,654,966 | 2022-3-28 | https://stackoverflow.com/questions/71654966/how-can-i-append-or-concatenate-two-dataframes-in-python-polars | I see it's possible to append using the series namespace (https://stackoverflow.com/a/70599059/5363883). What I'm wondering is if there is a similar method for appending or concatenating DataFrames. In pandas historically it could be done with df1.append(df2). However that method is being deprecated (if it hasn't already been deprecated) for pd.concat([df1, df2]). Sample frames: df1 = pl.from_repr(""" ┌─────┬─────┬─────┐ │ a ┆ b ┆ c │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪═════╡ │ 1 ┆ 2 ┆ 3 │ └─────┴─────┴─────┘ """) df2 = pl.from_repr(""" ┌─────┬─────┬─────┐ │ a ┆ b ┆ c │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪═════╡ │ 4 ┆ 5 ┆ 6 │ └─────┴─────┴─────┘ """) Desired result: shape: (2, 3) ┌─────┬─────┬─────┐ │ a ┆ b ┆ c │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪═════╡ │ 1 ┆ 2 ┆ 3 │ │ 4 ┆ 5 ┆ 6 │ └─────┴─────┴─────┘ | There are different append strategies depending on your needs. df1 = pl.DataFrame({"a": [1], "b": [2], "c": [3]}) df2 = pl.DataFrame({"a": [4], "b": [5], "c": [6]}) # new memory slab new_df = pl.concat([df1, df2], rechunk=True) # append free (no memory copy) new_df = df1.vstack(df2) # try to append in place df1.extend(df2) To understand the differences, it is important to understand polars memory is immutable iff it has any copy. Copies in polars are free, because it only increments a reference count of the backing memory buffer instead of copying the data itself. However, if a memory buffer has no copies yet, e.g. the refcount == 1, we can mutate polars memory. Knowing this background there are the following ways to append data: concat -> concatenate all given DataFrames. This is sort of a linked list of DataFrames. If you pass rechunk=True, all memory will be reallocated to contiguous chunks. vstack -> Adds the data from other to DataFrame by incrementing a refcount. This is super cheap. It is recommended to call rechunk after many vstacks. Or simply use pl.concat. extend This operation copies data. It tries to copy data from other to DataFrame. If however the refcount of DataFrame is larger than 1. A new buffer of memory is allocated to hold both DataFrames. | 17 | 43 |
71,656,644 | 2022-3-29 | https://stackoverflow.com/questions/71656644/python-type-hint-for-iterablestr-that-isnt-str | In Python, is there a way to distinguish between strings and other iterables of strings? A str is valid as an Iterable[str] type, but that may not be the correct input for a function. For example, in this trivial example that is intended to operate on sequences of filenames: from typing import Iterable def operate_on_files(file_paths: Iterable[str]) -> None: for path in file_paths: ... Passing in a single filename would produce the wrong result but would not be caught by type checking. I know that I can check for string or byte types at runtime, but I want to know if it's possible to catch silly mistakes like that with a type-checking tool. I've looked over the collections.abc module and there doesn't seem to be any abc that would include typical iterables (e.g. lists, tuples) but exclude strings. Similarly, for the typing module, there doesn't seem to be a type for iterables that don't include strings. | As of March 2022, the answer is no. This issue has been discussed since at least July 2016. On a proposal to distinguish between str and Iterable[str], Guido van Rossum writes: Since str is a valid iterable of str this is tricky. Various proposals have been made but they don't fit easily in the type system. You'll need to list out all of the types that you want your functions to accept explicitly, using Union (pre-3.10) or | (3.10 and higher). e.g. For pre-3.10, use: from typing import Union ## Heading ## def operate_on_files(file_paths: Union[TypeOneName, TypeTwoName, etc.]) -> None: for path in file_paths: ... For 3.10 and higher, use: ## Heading ## def operate_on_files(file_paths: TypeOneName | TypeTwoName | etc.) -> None: for path in file_paths: ... If you happen to be using Pytype, it will not treat str as an Iterable[str] (as pointed out by Kelly Bundy). But, this behavior is typechecker-specific, and isn't widely supported in other typecheckers. | 23 | 10 |
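Since static checkers won't catch this by default, a minimal runtime guard along the lines the question already mentions (checking for str/bytes) could look like:

from typing import Iterable

def operate_on_files(file_paths: Iterable[str]) -> None:
    # a bare string is itself an Iterable[str], so reject it explicitly
    if isinstance(file_paths, (str, bytes)):
        raise TypeError("expected an iterable of paths, not a single string")
    for path in file_paths:
        print(path)

operate_on_files(["a.txt", "b.txt"])   # ok
# operate_on_files("a.txt")            # raises TypeError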
71,656,436 | 2022-3-29 | https://stackoverflow.com/questions/71656436/pandas-groupby-cumcount-one-cumulative-count-rather-than-a-cumulative-count-fo | Let's say I have a df pd.DataFrame( {'name':['pam','pam','bob','bob','pam','bob','pam','bob'], 'game_id':[0,0,1,1,0,2,1,2] } ) name game_id 0 pam 0 1 pam 0 2 bob 1 3 bob 1 4 pam 0 5 bob 2 6 pam 1 7 bob 2 I want to calculate how many games bob and amy have appeared in cumulatively. However, when I use .groupby() and .cumcount()+1, I get something different. I get a cumulative count within each game_id: df['games'] = df.groupby(['name','game_id']).cumcount()+1 name game_id games 0 pam 0 1 1 pam 0 2 2 bob 1 1 3 bob 1 2 4 pam 0 3 5 bob 2 1 6 pam 1 1 7 bob 2 2 When what I really want is a one total cumulative count rather than a cumulative count for each unique game_id. Here's an example of my desired output: name game_id games 0 pam 0 1 1 pam 0 1 2 bob 1 1 3 bob 1 1 4 pam 0 1 5 bob 2 2 6 pam 1 2 7 bob 2 2 Note, in my actual dataset game_id is a random sequence of numbers. | Lets try sort df, check consecutive difference, create new group by cumsum and then resort the df new_df=df.sort_values(by=['name','game_id']) new_df=new_df.assign(rank=new_df['game_id']!=new_df['game_id'].shift()) new_df=new_df.assign(rank=new_df.groupby('name')['rank'].cumsum()).sort_index() print(new_df) name game_id rank 0 pam 0 1 1 pam 0 1 2 bob 1 1 3 bob 1 1 4 pam 0 1 5 bob 2 2 6 pam 1 2 7 bob 2 2 | 4 | 2 |
71,650,564 | 2022-3-28 | https://stackoverflow.com/questions/71650564/pandas-dataframe-styler-how-to-style-pandas-dataframe-as-excel-table | How to style the pandas dataframe as an excel table (alternate row colour)? Sample style: Sample data: import pandas as pd import seaborn as sns df = sns.load_dataset("tips") | If your final goal is to save to_excel, the only way to retain the styling after export is using the apply-based methods: df.style.apply / df.style.applymap are the styling counterparts to df.apply / df.applymap and work analogously df.style.apply_index / df.style.applymap_index are the index styling counterparts (requires pandas 1.4.0+) For the given sample, use df.style.apply to style each column with alternating row colors and df.style.applymap_index to style all row/col indexes: css_alt_rows = 'background-color: powderblue; color: black;' css_indexes = 'background-color: steelblue; color: white;' (df.style.apply(lambda col: np.where(col.index % 2, css_alt_rows, None)) # alternating rows .applymap_index(lambda _: css_indexes, axis=0) # row indexes (pandas 1.4.0+) .applymap_index(lambda _: css_indexes, axis=1) # col indexes (pandas 1.4.0+) ).to_excel('styled.xlsx', engine='openpyxl') If you only care about the appearance in Jupyter, another option is to set properties for targeted selectors using df.style.set_table_styles (requires pandas 1.2.0+): # pandas 1.2.0+ df.style.set_table_styles([ {'selector': 'tr:nth-child(even)', 'props': css_alt_rows}, {'selector': 'th', 'props': css_indexes}, ]) | 7 | 12 |
71,653,262 | 2022-3-28 | https://stackoverflow.com/questions/71653262/how-to-join-dataframes-with-multiple-ids | I have two dataframes and a rather tricky join to accomplish. The first dataframe: data = [[0, 'Standard1', [100, 101, 102]], [1, 'Standard2', [100, 102]], [2, 'Standard3', [103]]] df1 = pd.DataFrame(data, columns = ['RuleSetID', 'RuleSetName', 'KeyWordGroupID']) df1 Output: RuleSetID RuleSetName KeyWordGroupID 0 Standard1 [100, 101, 102] 1 Standard2 [100, 102] 2 Standard3 [103] ... ... ... The second one: data = [[100, 'verahren', ['word1', 'word2']], [101, 'flaechen', ['word3']], [102, 'nutzung', ['word4', 'word5']], [103, 'ort', ['word6', 'word7']]] df2 = pd.DataFrame(data, columns = ['KeyWordGroupID', 'KeyWordGroupName', 'KeyWords']) df2 Output: KeyWordGroupID KeyWordGroupName KeyWords 100 verahren ['word1', 'word2'] 101 flaechen ['word3'] 102 nutzung ['word4', 'word5'] 103 ort ['word6', 'word7'] ... ... ... The desired output: RuleSetID RuleSetName KeyWordGroupID 0 Standard1 [['word1', 'word2'], ['word3'], ['word4', 'word5']] 1 Standard2 [['word1', 'word2'], ['word4', 'word5']] 2 Standard3 [['word6', 'word7']] I tried to convert the second dataframe into a dictionary using df.to_dict('records') and put it into a pandas apply user defined function to match via key values but it doesn't seem like a clean approach. Does someone has an approach to solve that? Any ideas are rewarded. | The main idea is to convert df2 as a dict mapping Series where the key is the KeyWordGroupID column and the value is the KeyWords column. You can use explode to flatten KeyWordGroupID column of df1 then map it to df2 then groupby to reshape your first dataframe: df1['KeyWordGroupID'] = ( df1['KeyWordGroupID'].explode().map(df2.set_index('KeyWordGroupID')['KeyWords']) .groupby(level=0).apply(list) ) print(df1) # Output RuleSetID RuleSetName KeyWordGroupID 0 0 Standard1 [[word1, word2], [word3], [word4, word5]] 1 1 Standard2 [[word1, word2], [word4, word5]] 2 2 Standard3 [[word6, word7]] | 4 | 1 |
71,629,200 | 2022-3-26 | https://stackoverflow.com/questions/71629200/apache-beam-infer-schema-using-namedtuple-python | I am quite new to Apache Beam and I am wondering how to infer a schema for a PCollection using a NamedTuple. The example from the documentation Programming Guide states: class Transaction(typing.NamedTuple): bank: str purchase_amount: float pc = input | beam.Map(lambda ...).with_output_types(Transaction) I tried to implement a similar thing, but reading from a parquet file first: from apache_beam import coders from typing import NamedTuple import apache_beam as beam class TestSchema(NamedTuple): company_id: int is_company: bool company_created_datetime: str company_values: str if __name__ == '__main__': coders.registry.register_coder(TestSchema, coders.RowCoder) with beam.Pipeline() as pipeline: record = pipeline | "Read Parquet" >> beam.io.ReadFromParquet("test.parquet").with_output_types(TestSchema) \ | "Print" >> beam.Map(print) pipeline.run().wait_until_finish() And I am getting AttributeError: 'dict' object has no attribute 'company_id' [while running 'Read Parquet/ParDo(_ArrowTableToRowDictionaries)'] Also, without the .with_output_types(TestSchema) I can see the data fine, which looks like this: {'company_id': 3, 'is_company': True, 'company_created_datetime': datetime.datetime(2022, 3, 8, 13, 2, 26, 573511), 'company_values': 'test value'} I am using Python 3.8 and Beam 2.37.0. Am I missing something? Any help would be appreciated (stack trace below). Traceback (most recent call last): File "apache_beam/runners/worker/operations.py", line 346, in apache_beam.runners.worker.operations.Operation.output File "apache_beam/runners/worker/operations.py", line 348, in apache_beam.runners.worker.operations.Operation.output File "apache_beam/runners/worker/operations.py", line 215, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive File "apache_beam/runners/worker/operations.py", line 817, in apache_beam.runners.worker.operations.SdfProcessSizedElements.process File "apache_beam/runners/worker/operations.py", line 826, in apache_beam.runners.worker.operations.SdfProcessSizedElements.process File "apache_beam/runners/common.py", line 1206, in apache_beam.runners.common.DoFnRunner.process_with_sized_restriction File "apache_beam/runners/common.py", line 698, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam/runners/common.py", line 836, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam/runners/common.py", line 1361, in apache_beam.runners.common._OutputProcessor.process_outputs File "apache_beam/runners/worker/operations.py", line 215, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive File "apache_beam/runners/worker/operations.py", line 707, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam/runners/worker/operations.py", line 708, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam/runners/common.py", line 1200, in apache_beam.runners.common.DoFnRunner.process File "apache_beam/runners/common.py", line 1281, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam/runners/common.py", line 1198, in apache_beam.runners.common.DoFnRunner.process File "apache_beam/runners/common.py", line 718, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam/runners/common.py", line 841, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File 
"apache_beam/runners/common.py", line 1361, in apache_beam.runners.common._OutputProcessor.process_outputs File "apache_beam/runners/worker/operations.py", line 214, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive File "apache_beam/runners/worker/operations.py", line 178, in apache_beam.runners.worker.operations.ConsumerSet.update_counters_start File "apache_beam/runners/worker/opcounters.py", line 211, in apache_beam.runners.worker.opcounters.OperationCounters.update_from File "apache_beam/runners/worker/opcounters.py", line 250, in apache_beam.runners.worker.opcounters.OperationCounters.do_sample File "apache_beam/coders/coder_impl.py", line 1425, in apache_beam.coders.coder_impl.WindowedValueCoderImpl.get_estimated_size_and_observables File "apache_beam/coders/coder_impl.py", line 1436, in apache_beam.coders.coder_impl.WindowedValueCoderImpl.get_estimated_size_and_observables File "apache_beam/coders/coder_impl.py", line 207, in apache_beam.coders.coder_impl.CoderImpl.get_estimated_size_and_observables File "apache_beam/coders/coder_impl.py", line 246, in apache_beam.coders.coder_impl.StreamCoderImpl.estimate_size File "apache_beam/coders/coder_impl.py", line 1610, in apache_beam.coders.coder_impl.RowCoderImpl.encode_to_stream AttributeError: 'dict' object has no attribute 'company_id' [while running 'Read Parquet/ParDo(_ArrowTableToRowDictionaries)'] | OK, after some research on Beam schemas and digging in the source code, I finally found the solution. It looks like you need to convert every single element in the PCollection to the NamedTuple and then apply the type hint. with beam.Pipeline() as pipeline: record = pipeline | "Read Parquet" >> beam.io.ReadFromParquet("test.parquet") \ | "Transform to NamedTuple" >> beam.Map(lambda x: TestSchema(**x)).with_output_types(TestSchema) \ | "Print" >> beam.Map(print) I don't know exactly whether it is good practice. If not, then please tell me how to do it correctly. | 4 | 4
71,648,736 | 2022-3-28 | https://stackoverflow.com/questions/71648736/how-to-get-a-list-of-all-custom-django-commands-in-a-project | I want to find a custom command in a project with many apps, how to get a list of all commands from all apps? | This command will list all the custom or existing command of all installed apps: python manage.py help | 6 | 13 |
71,648,826 | 2022-3-28 | https://stackoverflow.com/questions/71648826/why-gunicorn-use-same-thread | a simple python name myapp.py: import threading import os def app(environ, start_response): tid = threading.get_ident() pid = os.getpid() ppid = os.getppid() # ##### print('tid ================ ', tid) # why same tid? # ##### print('pid', pid) # print('ppid', ppid) # data = b"Hello, World!\n" start_response("200 OK", [ ("Content-Type", "text/plain"), ("Content-Length", str(len(data))) ]) return iter([data]) And I start with gunicorn: gunicorn -w 4 myapp:app [2022-03-28 21:59:57 +0800] [55107] [INFO] Starting gunicorn 20.1.0 [2022-03-28 21:59:57 +0800] [55107] [INFO] Listening at: http://127.0.0.1:8000 (55107) [2022-03-28 21:59:57 +0800] [55107] [INFO] Using worker: sync [2022-03-28 21:59:57 +0800] [55110] [INFO] Booting worker with pid: 55110 [2022-03-28 21:59:57 +0800] [55111] [INFO] Booting worker with pid: 55111 [2022-03-28 21:59:57 +0800] [55112] [INFO] Booting worker with pid: 55112 [2022-03-28 21:59:57 +0800] [55113] [INFO] Booting worker with pid: 55113 then I curl http://127.0.0.1:8000/ (or use a browser). logs below: tid ================ 4455738816 pid 55112 ppid 55107 tid ================ 4455738816 pid 55111 ppid 55107 tid ================ 4455738816 pid 55113 ppid 55107 the question is why the tid is same but the pid is not same. ps: the code is from https://gunicorn.org/ homepage. | Gunicorn creates multiple processes to avoid the Python GIL. Each process has a unique PID. Regarding the threads, threading.get_ident() is a Python specific thread identifier, it should be regarded as meaningless and relevant only within the local process. Instead, you should use threading.get_native_id() which returns the unique system-wide thread identifier. Keep in mind the latter may be recycled and reused upon thread closure. | 4 | 2 |
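A minimal sketch of the question's app using the suggested threading.get_native_id() (Python 3.8+), which is unique system-wide rather than only within one process:

import os
import threading

def app(environ, start_response):
    # OS-level thread id: meaningful when comparing across gunicorn workers,
    # unlike threading.get_ident(), which is only relevant inside one process
    print('native tid', threading.get_native_id(), 'pid', os.getpid())

    data = b"Hello, World!\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(data))),
    ])
    return iter([data])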
71,648,478 | 2022-3-28 | https://stackoverflow.com/questions/71648478/nested-list-after-json-normalize | I'm trying to get all the data out of an API call which is returned in the json format. For this purpose I'm using the json_normalize library from pandas, but I'm left with a list within that list that is not unwrapped. This is the code I am using: data=requests.get(url,endpointParams) data_read=json.loads(data.content) values=json_normalize(data_read['data']) This is what I end up with: name period values title description id follower_count day [{'value': 0, 'end_time': '2022-03-27T07:00:00+0000'}, {'value': 0, 'end_time': '2022-03-28T07:00:00+0000'}] Follower Count Total number of unique accounts following this profile 1/insights/follower_count/day impressions day [{'value': 19100, 'end_time': '2022-03-27T07:00:00+0000'}, {'value': 6000, 'end_time': '2022-03-28T07:00:00+0000'}] Impressions Total number of times the Business Account's media objects have been viewed 1/insights/impressions/day profile_views day [{'value': 80, 'end_time': '2022-03-27T07:00:00+0000'}, {'value': 90, 'end_time': '2022-03-28T07:00:00+0000'}] Profile Views Total number of users who have viewed the Business Account's profile within the specified period 1/insights/profile_views/day reach day [{'value': 5000, 'end_time': '2022-03-27T07:00:00+0000'}, {'value': 2000, 'end_time': '2022-03-28T07:00:00+0000'}] Reach Total number of times the Business Account's media objects have been uniquely viewed 1/insights/reach/day My question is how do I unwrap the values column? EDIT: Here's the data_read before normalizing: {'data': [{'name': 'follower_count', 'period': 'day', 'values': [{'value': 50, 'end_time': '2022-03-27T07:00:00+0000'}, {'value': 50, 'end_time': '2022-03-28T07:00:00+0000'}], 'title': 'Follower Count', 'description': 'Total number of unique accounts following this profile', 'id': '1/insights/follower_count/day'}, {'name': 'impressions', 'period': 'day', 'values': [{'value': 19000, 'end_time': '2022-03-27T07:00:00+0000'}, {'value': 6000, 'end_time': '2022-03-28T07:00:00+0000'}], 'title': 'Impressions', 'description': "Total number of times the Business Account's media objects have been viewed", 'id': '1/insights/impressions/day'}, {'name': 'profile_views', 'period': 'day', 'values': [{'value': 90, 'end_time': '2022-03-27T07:00:00+0000'}, {'value': 99, 'end_time': '2022-03-28T07:00:00+0000'}], 'title': 'Profile Views', 'description': "Total number of users who have viewed the Business Account's profile within the specified period", 'id': '1/insights/profile_views/day'}, {'name': 'reach', 'period': 'day', 'values': [{'value': 5000, 'end_time': '2022-03-27T07:00:00+0000'}, {'value': 2000, 'end_time': '2022-03-28T07:00:00+0000'}], 'title': 'Reach', 'description': "Total number of times the Business Account's media objects have been uniquely viewed", 'id': '1/insights/reach/day'}], 'paging': {'previous': 'someotherurl.com', 'next': 'someurl.com'}} | Try: metadata = ['name', 'period', 'title', 'description', 'id'] out = pd.json_normalize(data_read['data'], 'values', metadata) value end_time name period title description id 50 2022-03-27T07:00:00+0000 follower_count day Follower Count Total number of unique accounts following this profile 1/insights/follower_count/day 50 2022-03-28T07:00:00+0000 follower_count day Follower Count Total number of unique accounts following this profile 1/insights/follower_count/day 19000 2022-03-27T07:00:00+0000 impressions day Impressions Total number of times the Business Account's 
media objects have been viewed 1/insights/impressions/day 6000 2022-03-28T07:00:00+0000 impressions day Impressions Total number of times the Business Account's media objects have been viewed 1/insights/impressions/day 90 2022-03-27T07:00:00+0000 profile_views day Profile Views Total number of users who have viewed the Business Account's profile within the specified period 1/insights/profile_views/day 99 2022-03-28T07:00:00+0000 profile_views day Profile Views Total number of users who have viewed the Business Account's profile within the specified period 1/insights/profile_views/day 5000 2022-03-27T07:00:00+0000 reach day Reach Total number of times the Business Account's media objects have been uniquely viewed 1/insights/reach/day 2000 2022-03-28T07:00:00+0000 reach day Reach Total number of times the Business Account's media objects have been uniquely viewed 1/insights/reach/day | 4 | 4 |
71,642,386 | 2022-3-28 | https://stackoverflow.com/questions/71642386/how-to-open-excel-file-in-polars-dataframe | I am a python pandas user but recently found about polars dataframe and it seems quite promising and blazingly fast. I am not able to find a way to open an excel file in polars. Polars is happily reading csv, json, etc. but not excel. I am extensive user of excel files in pandas and I want to try using polars. I have many sheets in excel that pandas automatically read. How can I do same with polars? What am I missing? | This is more of a workaround than a real answer, but you can read it into pandas and then convert it to a polars dataframe. import polars as pl import pandas as pd df = pd.read_excel(...) df_pl = pl.DataFrame(df) You could, however, make a feature request to the Apache Arrow community to support excel files. | 5 | 3 |
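As a hedged side note: newer polars releases ship a direct pl.read_excel reader (backed by an engine such as xlsx2csv), so if your installed version provides it, the pandas round-trip can be skipped entirely. The filename below is just a placeholder.

import polars as pl

# direct read, if your polars version has read_excel
df_pl = pl.read_excel("myfile.xlsx")

# otherwise, the pandas round-trip from the answer; pl.from_pandas(df) also works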
71,596,075 | 2022-3-24 | https://stackoverflow.com/questions/71596075/how-to-detect-corners-of-a-square-with-python-opencv | In the image below, I am using OpenCV harris corner detector to detect only the corners for the squares (and the smaller squares within the outer squares). However, I am also getting corners detected for the numbers on the side of the image. How do I get this to focus only on the squares and not the numbers? I need a method to ignore the numbers when performing OpenCV corner detection. The code, input image and output image are below: import cv2 as cv img = cv.imread(filename) gray = cv.cvtColor(img,cv.COLOR_BGR2GRAY) gray = np.float32(gray) dst = cv.cornerHarris(gray, 2, 3, 0.04) dst = cv.dilate(dst,None) # Threshold for an optimal value, it may vary depending on the image. img[dst>0.01*dst.max()]=[0,0,255] cv.imshow('dst', img) Input image Output from Harris corner detector | Here's a potential approach using traditional image processing: Obtain binary image. We load the image, convert to grayscale, Gaussian blur, then adaptive threshold to obtain a black/white binary image. We then remove small noise using contour area filtering. At this stage we also create two blank masks. Detect horizontal and vertical lines. Now we isolate horizontal lines by creating a horizontal shaped kernel and perform morphological operations. To detect vertical lines, we do the same but with a vertical shaped kernel. We draw the detected lines onto separate masks. Find intersection points. The idea is that if we combine the horizontal and vertical masks, the intersection points will be the corners. We can perform a bitwise-and operation on the two masks. Finally we find the centroid of each intersection point and highlight corners by drawing a circle. Here's a visualization of the pipeline Input image -> binary image Detected horizontal lines -> horizontal mask Detected vertical lines -> vertical mask Bitwise-and both masks -> detected intersection points -> corners -> cleaned up corners The results aren't perfect but it's pretty close. The problem comes from the noise on the vertical mask due to the slanted image. If the image was centered without an angle, the results would be ideal. You can probably fine tune the kernel sizes or iterations to get better results. 
Code import cv2 import numpy as np # Load image, create horizontal/vertical masks, Gaussian blur, Adaptive threshold image = cv2.imread('1.png') original = image.copy() horizontal_mask = np.zeros(image.shape, dtype=np.uint8) vertical_mask = np.zeros(image.shape, dtype=np.uint8) gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY) blur = cv2.GaussianBlur(gray, (3,3), 0) thresh = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 23, 7) # Remove small noise on thresholded image cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: area = cv2.contourArea(c) if area < 150: cv2.drawContours(thresh, [c], -1, 0, -1) # Detect horizontal lines dilate_horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (10,1)) dilate_horizontal = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, dilate_horizontal_kernel, iterations=1) horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (40,1)) detected_lines = cv2.morphologyEx(dilate_horizontal, cv2.MORPH_OPEN, horizontal_kernel, iterations=1) cnts = cv2.findContours(detected_lines, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: cv2.drawContours(image, [c], -1, (36,255,12), 2) cv2.drawContours(horizontal_mask, [c], -1, (255,255,255), 2) # Remove extra horizontal lines using contour area filtering horizontal_mask = cv2.cvtColor(horizontal_mask,cv2.COLOR_BGR2GRAY) cnts = cv2.findContours(horizontal_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: area = cv2.contourArea(c) if area > 1000 or area < 100: cv2.drawContours(horizontal_mask, [c], -1, 0, -1) # Detect vertical dilate_vertical_kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (1,7)) dilate_vertical = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, dilate_vertical_kernel, iterations=1) vertical_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (1,2)) detected_lines = cv2.morphologyEx(dilate_vertical, cv2.MORPH_OPEN, vertical_kernel, iterations=4) cnts = cv2.findContours(detected_lines, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: cv2.drawContours(image, [c], -1, (36,255,12), 2) cv2.drawContours(vertical_mask, [c], -1, (255,255,255), 2) # Find intersection points vertical_mask = cv2.cvtColor(vertical_mask,cv2.COLOR_BGR2GRAY) combined = cv2.bitwise_and(horizontal_mask, vertical_mask) kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2,2)) combined = cv2.morphologyEx(combined, cv2.MORPH_OPEN, kernel, iterations=1) # Highlight corners cnts = cv2.findContours(combined, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: # Find centroid and draw center point try: M = cv2.moments(c) cx = int(M['m10']/M['m00']) cy = int(M['m01']/M['m00']) cv2.circle(original, (cx, cy), 3, (36,255,12), -1) except ZeroDivisionError: pass cv2.imshow('thresh', thresh) cv2.imshow('horizontal_mask', horizontal_mask) cv2.imshow('vertical_mask', vertical_mask) cv2.imshow('combined', combined) cv2.imshow('original', original) cv2.imshow('image', image) cv2.waitKey() | 4 | 8 |
71,613,837 | 2022-3-25 | https://stackoverflow.com/questions/71613837/couldnt-use-data-file-coverage-unable-to-open-database-file | A strange issue with permissions occured when pushing to GitHub. I have a test job which runs tests with coverage and then pushes results to codecov on every push and pull request. However, this scenario only works with root user. If running with digitalshop user it throws an error: Couldn't use data file '/digital-shop-app/.coverage': unable to open database file My question is: how to run coverage in docker container so it won't throw this error? My guess is that it's because of permissions. docker-compose.yml: version: '3.9' services: test: build: . command: > sh -c " python manage.py wait_for_db && coverage run --source='.' manage.py test mainapp.tests && coverage report && coverage xml " volumes: - ./digital-shop-app:/digital-shop-app env_file: .env depends_on: - db db: image: postgres:13-alpine environment: - POSTGRES_DB=${DB_NAME} - POSTGRES_USER=${DB_USER} - POSTGRES_PASSWORD=${DB_PASS} Dockerfile: FROM python:3.9-alpine3.13 ENV PYTHONUNBUFFERED 1 COPY ./requirements.txt /requirements.txt COPY ./digital-shop-app /digital-shop-app COPY ./scripts /scripts WORKDIR /digital-shop-app RUN python -m venv /py && \ /py/bin/pip install --upgrade pip && \ apk add --no-cache bash && \ apk add --update --no-cache postgresql-client && \ apk add --update --no-cache --virtual .tmp-deps \ build-base jpeg-dev postgresql-dev musl-dev linux-headers \ zlib-dev libffi-dev openssl-dev python3-dev cargo && \ apk add --update --no-cache libjpeg && \ /py/bin/pip install -r /requirements.txt && \ apk del .tmp-deps && \ adduser --disabled-password --no-create-home digitalshop && \ chown -R digitalshop:digitalshop /py/lib/python3.9/site-packages && \ chmod -R +x /scripts ENV PATH="/scripts:/py/bin:/py/lib:$PATH" USER digitalshop CMD ["run.sh"] | So I ended up creating another Dockerfile called Dockerfile.test and putting pretty much the same configuration except non-admin user creation. Here's the final variant: Running code as root user is not recommended thus please read UPDATE section Dockerfile.test: FROM python:3.9-alpine3.13 ENV PYTHONUNBUFFERED 1 COPY ./requirements.txt /requirements.txt COPY ./digital-shop-app /digital-shop-app WORKDIR /digital-shop-app RUN python -m venv /py && \ /py/bin/pip install --upgrade pip && \ apk add --no-cache bash curl gnupg coreutils && \ apk add --update --no-cache postgresql-client libjpeg && \ apk add --update --no-cache --virtual .tmp-deps \ build-base jpeg-dev postgresql-dev musl-dev linux-headers \ zlib-dev libffi-dev openssl-dev python3-dev cargo && \ /py/bin/pip install -r /requirements.txt && \ apk del .tmp-deps ENV PATH="/py/bin:/py/lib:$PATH" docker-compose.yml: version: '3.9' services: test: build: context: . dockerfile: Dockerfile.test command: > sh -c " python manage.py wait_for_db && coverage run --source='.' manage.py test mainapp.tests && coverage report && coverage xml " volumes: - ./digital-shop-app:/digital-shop-app env_file: .env depends_on: - db I don't know exactly whether it is a good practice. If not then please tell how to do it correctly. UPDATE: Thanks to @β.εηοιτ.βε for giving me food for thought. After some local debugging I found out that coverage needs user to own the directory where .coverage file is located. So I created subdir named /cov inside project folder and set digitalshop user as its owner including everything inside. 
Finally I specified path to .coverage file by setting env variable COVERAGE_FILE=/digital-shop-app/cov/.coverage where digital-shop-app is project root folder. And also specified the same path to coverage.xml report in docker-compose.yml. Here's the code: docker-compose.yml (added -o flag to coverage xml command): version: '3.9' services: test: build: context: . command: > sh -c " python manage.py wait_for_db && coverage run --source='.' manage.py test mainapp.tests && coverage xml -o /digital-shop-app/cov/coverage.xml " env_file: .env depends_on: - db db: image: postgres:13-alpine environment: - POSTGRES_DB=${DB_NAME} - POSTGRES_USER=${DB_USER} - POSTGRES_PASSWORD=${DB_PASS} Dockerfile: FROM python:3.9-alpine3.13 ENV PYTHONUNBUFFERED 1 COPY ./requirements.txt /requirements.txt COPY ./digital-shop-app /digital-shop-app COPY ./scripts /scripts WORKDIR /digital-shop-app RUN python -m venv /py && \ /py/bin/pip install --upgrade pip && \ apk add --no-cache bash && \ apk add --update --no-cache postgresql-client && \ apk add --update --no-cache --virtual .tmp-deps \ build-base jpeg-dev postgresql-dev musl-dev linux-headers \ zlib-dev libffi-dev openssl-dev python3-dev cargo && \ apk add --update --no-cache libjpeg && \ /py/bin/pip install -r /requirements.txt && \ apk del .tmp-deps && \ adduser --disabled-password --no-create-home digitalshop && \ chown -R digitalshop:digitalshop /py/lib/python3.9/site-packages && \ chmod -R +x /scripts && \ # New code here mkdir -p /digital-shop-app/cov && \ chown -R digitalshop:digitalshop /digital-shop-app/cov ENV PATH="/scripts:/py/bin:/py/lib:$PATH" USER digitalshop CMD ["run.sh"] | 9 | 1 |
71,600,077 | 2022-3-24 | https://stackoverflow.com/questions/71600077/make-all-keys-in-a-typed-dict-not-required | I have an existing TypedDict containing multiple entries: from typing import TypedDict class Params(TypedDict): param1:str param2:str param3:str I want to create the exact same TypedDict but with all the keys being optional so that the user can specify only certain parameters. I know I can do something like: class OptionalParams(TypedDict, total=False): param1:str param2:str param3:str but the problem with this method is that I have to duplicate the code. Is there a way to inherit from Params by making the keys optional ? I tried to do class OptionalParams(Params, total=False): pass but the linter does not understand that the parameters are optional | What you ask for is not possible - at least if you use mypy - as you can read in the comments of Why can a Final dictionary not be used as a literal in TypedDict? and on mypy's github: TypedDict keys reuse?. Pycharm seems to have the same limitation, as tested in the two other "Failed attempts" answers to your question. When trying to run this code: from typing import TypeDict params = {"a": str, "b": str} Params = TypedDict("Params", params) mypy will give error: TypedDict() expects a dictionary literal as the second argument, thrown here in the source code. | 6 | 5 |
71,632,230 | 2022-3-26 | https://stackoverflow.com/questions/71632230/pandas-create-a-date-range-with-utc-time | I have this code to create a date range dates_df = pd.date_range(start='01/01/2022', end='02/02/2022', freq='1H') The problem is that the time is not UTC. Its dtype is datetime64[ns] instead of datetime64[ns, UTC]. How can I generate the date range in UTC without having the generated time change? | Pass the parameter tz='UTC'; otherwise the result of date_range() is timezone-naive and can later be interpreted as whatever timezone you choose dates_df = pd.date_range(start='01/01/2022', end='02/02/2022', freq='1H', tz='UTC') output: >>> DatetimeIndex(['2022-01-01 00:00:00+00:00', '2022-01-01 01:00:00+00:00', ... '2022-02-01 23:00:00+00:00', '2022-02-02 00:00:00+00:00'], dtype='datetime64[ns, UTC]', length=769, freq='H') | 4 | 9
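If the naive index has already been built, a small sketch of attaching UTC after the fact without shifting the wall-clock values (matching "without having the generated time change"):

import pandas as pd

naive = pd.date_range(start='01/01/2022', end='02/02/2022', freq='1H')
utc = naive.tz_localize('UTC')   # attach the zone; the displayed times stay the same
print(utc.dtype)                 # datetime64[ns, UTC]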
71,632,064 | 2022-3-26 | https://stackoverflow.com/questions/71632064/why-i-cant-get-dictionary-keys-by-index | Since Python 3.7, dictionaries are ordered. So why I can't get keys by index? | Building in such an API would be an "attractive nuisance": the implementation can't support it efficiently, so better not to tempt people into using an inappropriate data structure. It's for much the same reason that, e.g., a linked list rarely offers an indexing API. That's totally ordered too, but there's no efficient way to find the i'th element for an arbitrary i. You have to start at the beginning, and follow i links in turn to find the i'th. Same end result for a CPython dict. It doesn't use a linked list, but same thing in the end: it uses a flat vector under the covers, but basically any number of the vector's entries can be "holes". There's no way to jump over holes short of looking at each entry, one at a time. People expect a[i] to take O(1) (constant) time, not O(i) time. | 5 | 6 |
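If you do need "the i-th key" anyway, you have to pay the linear cost the answer describes; two common ways, sketched on a throwaway dict:

from itertools import islice

d = {'a': 1, 'b': 2, 'c': 3}
i = 1

print(list(d)[i])                        # O(n): materialise every key, then index
print(next(islice(iter(d), i, None)))    # O(i): walk only the first i+1 keys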
71,630,563 | 2022-3-26 | https://stackoverflow.com/questions/71630563/syntax-for-making-objects-callable-in-python | I understand that in python user-defined objects can be made callable by defining a __call__() method in the class definition. For example, class MyClass: def __init__(self): pass def __call__(self, input1): self.my_function(input1) def my_function(self, input1): print(f"MyClass - print {input1}") my_obj = MyClass() # same as calling my_obj.my_function("haha") my_obj("haha") # prints "MyClass - print haha" I was looking at how pytorch makes the forward() method of a nn.Module object be called implicitly when the object is called and saw some syntax I didn't understand. In the line that supposedly defines the __call__ method the syntax used is, __call__ : Callable[..., Any] = _call_impl This seemed like a combination of an annotation (keyword Callable[ following : ignored by python) and a value of _call_impl which we want to be called when __call__ is invoked, and my guess is that this is a shorthand for, def __call__(self, *args, **kwargs): return self._call_impl(*args, **kwargs) but wanted to understand clearly how this method of defining functions worked. My question is: When would we want to use such a definition of callable attributes of a class instead of the usual def myfunc(self, *args, **kwargs) | Functions are normal first-class objects in python. The name to with which you define a function object, e.g. with a def statement, is not set in stone, any more than it would be for an int or list. Just as you can do a = [1, 2, 3] b = a to access the elements of a through the name b, you can do the same with functions. In your first example, you could replace def __call__(self, input1): self.my_function(input1) with the much simpler __call__ = my_function You would need to put this line after the definition of my_function. The key differences between the two implementations is that def __call__(... creates a new function. __call__ = ... simply binds the name __call__ to the same object as my_function. The noticeable difference is that if you do __call__.__name__, the first version will show __call__, while the second will show my_function, since that's what gets assigned by a def statement. | 9 | 6 |
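Putting the two pieces together, a minimal sketch of the same annotated-assignment pattern applied to the question's class (the assignment must come after my_function is defined):

from typing import Any, Callable

class MyClass:
    def my_function(self, input1):
        print(f"MyClass - print {input1}")

    # bind __call__ to the very same function object; the ": Callable[..., Any]"
    # part is only an annotation for type checkers
    __call__: Callable[..., Any] = my_function

obj = MyClass()
obj("haha")                    # MyClass - print haha
print(obj.__call__.__name__)   # 'my_function' - the name assigned by the def statement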
71,599,069 | 2022-3-24 | https://stackoverflow.com/questions/71599069/sort-a-list-of-dicts-according-to-a-list-of-values-with-regex | I'd like to sort the keys of the list_of_dicts according to the list_months. It works fine once I remove the digits (years) from the keys of list_of_dicts, but I cannot figure out how to use the regex correctly in the lambda function to include the digits. My code so far: import re list_months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"] list_of_dicts = [{'Apr23': '64.401'}, {'Aug23': '56.955'}, {'Dec23': '57.453'}, {'Feb23': '90.459'}, {'Jan23': '92.731'}, {'Jul23': '56.6'}, {'Jun23': '56.509'},{'Mar23': '86.209'}, {'May23': '58.705'}, {'Nov23': '57.368'}, {'Oct23': '56.711'}, {'Sep23': '57.952'}] r = re.compile("[a-zA-Z]{3}[0-9]{2}") print(sorted(list_of_dicts, key=lambda d: [k in d for k in list_months if re.search(r, k)], reverse=True)) | No need for a regex here. dict_months = {m:i for i, m in enumerate(list_months)} result = sorted(list_of_dicts, key=lambda d: dict_months[next(iter(d))[:3]]) print(result) # [{'Jan23': '92.731'}, {'Feb23': '90.459'}, {'Mar23': '86.209'}, {'Apr23': '64.401'}, {'May23': '58.705'}, {'Jun23': '56.509'}, {'Jul23': '56.6'}, {'Aug23': '56.955'}, {'Sep23': '57.952'}, {'Oct23': '56.711'}, {'Nov23': '57.368'}, {'Dec23': '57.453'}] If you also want to take the year into account, use def sortby(d): key = next(iter(d)) return int(key[3:]), dict_months[key[:3]] result = sorted(list_of_dicts, key=sortby) | 4 | 3 |
71,594,548 | 2022-3-23 | https://stackoverflow.com/questions/71594548/sending-message-with-slack-webclient-that-includes-an-uploading-image | I'm trying to use the Slack Web Client to send a message from a bot to a private channel. The message would include some text and an image. After reading the current Slack documentation, it seems like the best way to accomplish this would be to use the file.upload method to upload the file to Slack, and then use the chat.PostMessage method to send the message including a URL to the hosted image. While it seems that I'm able to upload the file, when I go to send the message, I get an error regarding the file that I've uploaded. I'm not sure if I'm passing the wrong URL or if there is something else that I need to do after uploading the image. I'm able to successfully send a message without a file, so I know the issue has to do with the image specifically. Error: The request to the Slack API failed. The server responded with: {'ok': False, 'error': 'invalid_blocks', 'errors': ['downloading image failed [json-pointer:/blocks/1/image_url]'], 'response_metadata': {'messages': ['[ERROR] downloading image failed [json-pointer:/blocks/1/image_url]']}} Below is the process that I'm using to create the web client, upload the file, then send the message. import os import requests from slack_sdk import WebClient from slack_sdk.errors import SlackApiError from pprint import pprint # create Slack web client client = WebClient(token="xoxb-123456789") # find the IDs of the Slack channels for result in client.conversations_list(): for channel in result["channels"]: if channel['name'] == 'my_channel': channel_id = channel['id'] break # upload image to my Slack channel image = client.files_upload( channel = channel_id, initial_comment = "This is my image", file = "~/image.png" ) # write my message block = [ { "type": "section", "text": { "type": "mrkdwn", "text": "Guess what? I don't know" } }, { "type": "image", "image_url": image['file']['permalink'], "alt_text": "inspiration" } ] # try to send message with image try: result = client.chat_postMessage( channel = channel_id, text = "New message for you", blocks = block ) except SlackApiError as e: print(f"Error: {e}") At this point, I experience the following error message: Error: The request to the Slack API failed. 
The server responded with: {'ok': False, 'error': 'invalid_blocks', 'errors': ['downloading image failed [json-pointer:/blocks/1/image_url]'], 'response_metadata': {'messages': ['[ERROR] downloading image failed [json-pointer:/blocks/1/image_url]']}} For the purpose of troubleshoot, here is the data that I get back # print the details about the file uploaded pprint(image['file']) {'channels': [], 'comments_count': 0, 'created': 1648070852, 'display_as_bot': False, 'editable': False, 'external_type': '', 'filetype': 'png', 'groups': [], 'has_rich_preview': False, 'id': 'FHBB87462378', 'ims': [], 'is_external': False, 'is_public': False, 'is_starred': False, 'media_display_type': 'unknown', 'mimetype': 'image/png', 'mode': 'hosted', 'name': 'image.png', 'original_h': 1004, 'original_w': 1790, 'permalink': 'https://sandbox.enterprise.slack.com/files/123456789/ABC/image.png', 'permalink_public': 'https://slack-files.com/123456789', 'pretty_type': 'PNG', 'public_url_shared': False, 'shares': {}, 'size': 1623063, 'thumb_1024': 'https://files.slack.com/files-tmb/123456789/image_1024.png', 'thumb_1024_h': 574, 'thumb_1024_w': 1024, 'thumb_160': 'https://files.slack.com/files-tmb/123456789/image_160.png', 'thumb_360': 'https://files.slack.com/files-tmb/123456789/image_360.png', 'thumb_360_h': 202, 'thumb_360_w': 360, 'thumb_480': 'https://files.slack.com/files-tmb/123456789/image_480.png', 'thumb_480_h': 269, 'thumb_480_w': 480, 'thumb_64': 'https://files.slack.com/files-tmb/123456789/image_64.png', 'thumb_720': 'https://files.slack.com/files-tmb/123456789/image_720.png', 'thumb_720_h': 404, 'thumb_720_w': 720, 'thumb_80': 'https://files.slack.com/files-tmb/123456789/image_80.png', 'thumb_800': 'https://files.slack.com/files-tmb/123456789/image_800.png', 'thumb_800_h': 449, 'thumb_800_w': 800, 'thumb_960': 'https://files.slack.com/files-tmb/123456789/image_960.png', 'thumb_960_h': 538, 'thumb_960_w': 960, 'thumb_tiny': 'AoinfgvoindwoidnasQOJWQNWOIQONQqoinoiQQ/2Q==', 'timestamp': 1648070852, 'title': 'image', 'url_private': 'https://files.slack.com/files-pri/123456789/image.png', 'url_private_download': 'https://files.slack.com/files-pri/123456789/download/image.png', 'user': 'U123456789', 'username': ''} | I found out that you need to have the top-level text property in addition to the blocks. The example below works as expected and now I'm able upload an image to Slack and the include that image in a message. See https://github.com/slackapi/python-slack-sdk/issues/1194 for more info. # get the file URL file_url = image["file"]["permalink"] # write my message block = [ { "type": "section", "text": { "type": "mrkdwn", "text": "Guess what? I don't know" } } ] # try to send message with image try: result = client.chat_postMessage( channel = channel_id, text = f"Here is the image data that you want! {file_url}", blocks = block ) except SlackApiError as e: print(f"Error: {e}") | 4 | 3 |
71,591,770 | 2022-3-23 | https://stackoverflow.com/questions/71591770/typeerror-shield-got-an-unexpected-keyword-argument-loop-when-running-dis | When I launch my discord.py bot with this code: > from discord.ext import commands > > bot = commands.Bot(command_prefix = ",", description = "Bot de eagle57") > > bot.run("Mytoken") I get this error: C:\Users\Elève\AppData\Local\Programs\Python\Python310\lib\site-packages\aiohttp\connector.py:964: RuntimeWarning: coroutine 'TCPConnector._resolve_host' was never awaited hosts = await asyncio.shield(self._resolve_host( RuntimeWarning: Enable tracemalloc to get the object allocation traceback Traceback (most recent call last): File "d:\Python\Bot_discord\main.py", line 5, in <module> bot.run("Mytoken") File "C:\Users\Elève\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\client.py", line 723, in run return future.result() File "C:\Users\Elève\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\client.py", line 702, in runner await self.start(*args, **kwargs) File "C:\Users\Elève\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\client.py", line 665, in start await self.login(*args, bot=bot) File "C:\Users\Elève\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\client.py", line 511, in login await self.http.static_login(token.strip(), bot=bot) File "C:\Users\Elève\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\http.py", line 300, in static_login data = await self.request(Route('GET', '/users/@me')) File "C:\Users\Elève\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\http.py", line 192, in request async with self.__session.request(method, url, **kwargs) as r: File "C:\Users\Elève\AppData\Local\Programs\Python\Python310\lib\site-packages\aiohttp\client.py", line 1012, in __aenter__ self._resp = await self._coro File "C:\Users\Elève\AppData\Local\Programs\Python\Python310\lib\site-packages\aiohttp\client.py", line 480, in _request conn = await self._connector.connect( File "C:\Users\Elève\AppData\Local\Programs\Python\Python310\lib\site-packages\aiohttp\connector.py", line 523, in connect proto = await self._create_connection(req, traces, timeout) File "C:\Users\Elève\AppData\Local\Programs\Python\Python310\lib\site-packages\aiohttp\connector.py", line 858, in _create_connection _, proto = await self._create_direct_connection( File "C:\Users\Elève\AppData\Local\Programs\Python\Python310\lib\site-packages\aiohttp\connector.py", line 964, in _create_direct_connection hosts = await asyncio.shield(self._resolve_host( TypeError: shield() got an unexpected keyword argument 'loop' Does anyone have any idea why I have this error? I have already removed everything that is not required in the code and I have updated all my pip freeze, but the error doesn't change. | This is usually caused because of outdated aiohttp module You can run pip install -U aiohttp and pip install -U discord.py This will fix your issue in most cases | 5 | 7 |
71,595,728 | 2022-3-24 | https://stackoverflow.com/questions/71595728/pip-importerror-cannot-import-name-mapping-from-collections | There appear to be conflicting libraries of python that pip is trying to access, as you can see with the following error: [root@fedora user]# pip Traceback (most recent call last): File "/usr/local/bin/pip", line 5, in <module> from pip._internal import main File "/usr/local/lib/python3.10/site-packages/pip/_internal/__init__.py", line 40, in <module> from pip._internal.cli.autocompletion import autocomplete File "/usr/local/lib/python3.10/site-packages/pip/_internal/cli/autocompletion.py", line 8, in <module> from pip._internal.cli.main_parser import create_main_parser File "/usr/local/lib/python3.10/site-packages/pip/_internal/cli/main_parser.py", line 12, in <module> from pip._internal.commands import ( File "/usr/local/lib/python3.10/site-packages/pip/_internal/commands/__init__.py", line 6, in <module> from pip._internal.commands.completion import CompletionCommand File "/usr/local/lib/python3.10/site-packages/pip/_internal/commands/completion.py", line 6, in <module> from pip._internal.cli.base_command import Command File "/usr/local/lib/python3.10/site-packages/pip/_internal/cli/base_command.py", line 25, in <module> from pip._internal.index import PackageFinder File "/usr/local/lib/python3.10/site-packages/pip/_internal/index.py", line 14, in <module> from pip._vendor import html5lib, requests, six File "/usr/local/lib/python3.10/site-packages/pip/_vendor/html5lib/__init__.py", line 25, in <module> from .html5parser import HTMLParser, parse, parseFragment File "/usr/local/lib/python3.10/site-packages/pip/_vendor/html5lib/html5parser.py", line 8, in <module> from . import _tokenizer File "/usr/local/lib/python3.10/site-packages/pip/_vendor/html5lib/_tokenizer.py", line 16, in <module> from ._trie import Trie File "/usr/local/lib/python3.10/site-packages/pip/_vendor/html5lib/_trie/__init__.py", line 3, in <module> from .py import Trie as PyTrie File "/usr/local/lib/python3.10/site-packages/pip/_vendor/html5lib/_trie/py.py", line 6, in <module> from ._base import Trie as ABCTrie File "/usr/local/lib/python3.10/site-packages/pip/_vendor/html5lib/_trie/_base.py", line 3, in <module> from collections import Mapping ImportError: cannot import name 'Mapping' from 'collections' (/usr/lib64/python3.10/collections/__init__.py) How can I fix this to be able to use pip? I have already tried dnf reinstall python and dnf reinstall python3 and dnf remove python3-pip, dnf install python3-pip. | I fixed this by removing all pip folders in /usr/local/lib/python3.10/site-packages | 5 | 4 |
71,589,455 | 2022-3-23 | https://stackoverflow.com/questions/71589455/get-the-regex-match-and-the-rest-none-match-from-pythons-re-module | Does the re module of Python 3 offer a built-in way to get the match and the rest (non-match) back? Here is a simple example: >>> import re >>> p = r'\d' >>> s = '1a' >>> re.findall(p, s) ['1'] The result I want is something like ['1', 'a'] or [['1'], ['a']] or something else where I can differentiate between match and rest. Of course I can subtract the resulting (matching) string from the original one to get the rest. But is there a built-in way for this? I do not set the regex tag here because the question is less related to RegEx itself and more to a feature of a Python package. | A possible solution is the following: import re string = '1a' re_pattern = r'^(\d+)(.*)' result = re.findall(re_pattern, string) print(result) Returns a list of tuples [('1', 'a')] or, if you would like a list of str items instead: result = [item for t in re.findall(re_pattern, string) for item in t] print(result) Returns ['1', 'a'] Explanation of the code: re_pattern = r'^(\d+)(.*)' looks for two groups: the 1st group (\d+) matches one or more leading digits, the 2nd group (.*) matches the rest of the string. re.findall(re_pattern, string) returns a list of tuples like [('1', 'a')], and the list comprehension converts the list of tuples to a list of string items. | 4 | 3
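For a single input string, an equivalent sketch using re.match and .groups() reads a little more directly (same assumed pattern as in the answer; re.match already anchors at the start, so the ^ is implicit):

```python
import re

m = re.match(r"(\d+)(.*)", "1a")
if m:
    match_part, rest = m.groups()
    print([match_part, rest])  # ['1', 'a']
```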
71,589,628 | 2022-3-23 | https://stackoverflow.com/questions/71589628/np-where-for-2d-array-manipulate-whole-rows | I want to rebuild the following logic with numpy broadcasting function such as np.where: From a 2d array check per row if the first element satisfies a condition. If the condition is true then return the first three elements as a row, else the last three elements. A short MWE in form of a for-loop which I want to circumvent: import numpy as np array = np.array([ [1, 2, 3, 4], [1, 2, 4, 2], [2, 3, 4, 6] ]) new_array = np.zeros((array.shape[0], array.shape[1]-1)) for i, row in enumerate(array): if row[0] == 1: new_array[i] = row[:3] else: new_array[i] = row[-3:] | If you want to use np.where: import numpy as np array = np.array([ [1, 2, 3, 4], [1, 2, 4, 2], [2, 3, 4, 6] ]) cond = array[:, 0] == 1 np.where(cond[:, None], array[:,:3], array[:,-3:]) output: array([[1, 2, 3], [1, 2, 4], [3, 4, 6]]) EDIT slightly more concise version: np.where(array[:, [0]] == 1, array[:,:3], array[:,-3:]) | 4 | 2 |
71,581,197 | 2022-3-23 | https://stackoverflow.com/questions/71581197/what-is-the-loss-function-used-in-trainer-from-the-transformers-library-of-huggi | What is the loss function used in Trainer from the Transformers library of Hugging Face? I am trying to fine tune a BERT model using the Trainer class from the Transformers library of Hugging Face. In their documentation, they mention that one can specify a customized loss function by overriding the compute_loss method in the class. However, if I do not do the method override and use the Trainer to fine tine a BERT model directly for sentiment classification, what is the default loss function being use? Is it the categorical crossentropy? Thanks! | It depends! Especially given your relatively vague setup description, it is not clear what loss will be used. But to start from the beginning, let's first check how the default compute_loss() function in the Trainer class looks like. You can find the corresponding function here, if you want to have a look for yourself (current version at time of writing is 4.17). The actual loss that will be returned with default parameters is taken from the model's output values: loss = outputs["loss"] if isinstance(outputs, dict) else outputs[0] which means that the model itself is (by default) responsible for computing some sort of loss and returning it in outputs. Following this, we can then look into the actual model definitions for BERT (source: here, and in particular check out the model that will be used in your Sentiment Analysis task (I assume a BertForSequenceClassification model. The code relevant for defining a loss function looks like this: if labels is not None: if self.config.problem_type is None: if self.num_labels == 1: self.config.problem_type = "regression" elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int): self.config.problem_type = "single_label_classification" else: self.config.problem_type = "multi_label_classification" if self.config.problem_type == "regression": loss_fct = MSELoss() if self.num_labels == 1: loss = loss_fct(logits.squeeze(), labels.squeeze()) else: loss = loss_fct(logits, labels) elif self.config.problem_type == "single_label_classification": loss_fct = CrossEntropyLoss() loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) elif self.config.problem_type == "multi_label_classification": loss_fct = BCEWithLogitsLoss() loss = loss_fct(logits, labels) Based on this information, you should be able to either set the correct loss function yourself (by changing model.config.problem_type accordingly), or otherwise at least be able to determine whichever loss will be chosen, based on the hyperparameters of your task (number of labels, label scores, etc.) | 16 | 26 |
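As a concrete illustration of the two routes mentioned in the answer, here is a minimal sketch (the model name and label count are placeholders, not taken from the question): either pin model.config.problem_type so the built-in branch picks CrossEntropyLoss, or subclass Trainer and override compute_loss.

```python
from torch import nn
from transformers import BertForSequenceClassification, Trainer

# Option 1: make the problem type explicit, so the model's built-in branch uses CrossEntropyLoss
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.config.problem_type = "single_label_classification"

# Option 2: override compute_loss for full control over the loss computation
class MyTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        loss_fct = nn.CrossEntropyLoss()
        loss = loss_fct(outputs.logits.view(-1, model.config.num_labels), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```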
71,583,528 | 2022-3-23 | https://stackoverflow.com/questions/71583528/python-extracting-string | I have a dataframe where one of the columns, which is in string format, looks like this: filename 0 Machine02-2022-01-28_00-21-45.blf.424 1 Machine02-2022-01-28_00-21-45.blf.425 2 Machine02-2022-01-28_00-21-45.blf.426 3 Machine02-2022-01-28_00-21-45.blf.427 4 Machine02-2022-01-28_00-21-45.blf.428 I want my column to look like this: filename 0 2022-01-28 00-21-45 424 1 2022-01-28 00-21-45 425 2 2022-01-28 00-21-45 426 3 2022-01-28 00-21-45 427 4 2022-01-28 00-21-45 428 I tried this code: df['filename'] = df['filename'].str.extract(r"(\d{4}-\d{1,2}-\d{1,2})_(\d{2}-\d{2}-\d{2}).*\.(\d+)", r"\1 \2 \3") I am getting this error: unsupported operand type(s) for &: 'str' and 'int'. Can anyone please tell me where I am going wrong? | Please try this: df['filename'] = df['filename'].str.split('-',1).apply(lambda x:' '.join(x[1].split('_')).replace('.blf.',' ')) | 7 | 4
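For what it's worth, the original call fails because str.extract's second positional argument is flags (an integer), not a replacement string. Here is a sketch that sticks with extract and then joins the captured groups, assuming filenames shaped like the ones in the question:

```python
import pandas as pd

df = pd.DataFrame({"filename": ["Machine02-2022-01-28_00-21-45.blf.424"]})

# extract returns one column per capture group; join the groups with spaces afterwards
parts = df["filename"].str.extract(r"(\d{4}-\d{1,2}-\d{1,2})_(\d{2}-\d{2}-\d{2})\.blf\.(\d+)")
df["filename"] = parts[0] + " " + parts[1] + " " + parts[2]
print(df)  # 0    2022-01-28 00-21-45 424
```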
71,580,727 | 2022-3-23 | https://stackoverflow.com/questions/71580727/translating-async-generator-into-sync-one | Imagine we have an original API that returns a generator (it really is a mechanism that fetches pages/chunks of results from a server while providing a simple generator to the user, letting them iterate over these results one by one). For simplicity: # Original sync generator def get_results(): # fetch from server yield 1 yield 2 # fetch next page yield 3 yield 4 # .... Now there is a need to implement an asyncio version of the API, however we need to keep the old API operational as well. This is where things get complicated: we essentially want to translate an async generator into a sync one, but I can't find an elegant way to do that. The best I could make work so far is "fetch all results into a list first, then provide a fake sync generator on that list", which kind of defeats the purpose: # Async generator async def get_results_async(): # await fetch from server yield 1 yield 2 # await fetch next page yield 3 yield 4 # .... # Backward compatible sync generator def get_results(): async def gather_all_results(): res = [] async for i in get_results_async(): res.append(i) return res res = asyncio.run(gather_all_results()) for i in res: yield i Is there a better, more elegant way to do that without fetching all the results before returning them? Thanks | Because asyncio is contagious, it's hard to write elegant code that integrates asyncio code into existing code. For the scenario above, the following code is a little better, but I don't think it's elegant enough. async def get_results_async(): # await fetch from server yield 1 yield 2 # await fetch next page yield 3 yield 4 # .... # Backward compatible sync generator def get_results(): gen = get_results_async() while True: try: yield asyncio.run(gen.__anext__()) except StopAsyncIteration: break Alternatively, you can re-use your event loop instead of creating a new one on every call. async def get_results_async(): # await fetch from server yield 1 yield 2 # await fetch next page yield 3 yield 4 # .... # loop that you save somewhere loop = asyncio.get_event_loop() # Backward compatible sync generator def get_results(): gen = get_results_async() while True: try: yield loop.run_until_complete(gen.__anext__()) except StopAsyncIteration: break | 6 | 7
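Either variant can then be consumed exactly like the old synchronous API; a trivial usage sketch:

```python
for item in get_results():
    print(item)  # 1, 2, 3, 4 -- one result at a time, with no upfront gathering
```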
71,580,859 | 2022-3-23 | https://stackoverflow.com/questions/71580859/importerror-when-importing-psycopg2-on-m1 | Has anyone gotten this error when importing psycopg2 after successful installation? ImportError: dlopen(/Users/chrishicks/Desktop/test/venv/lib/python3.9/site-packages/psycopg2/_psycopg.cpython-39-darwin.so, 0x0002): tried: '/Users/chrishicks/Desktop/test/venv/lib/python3.9/site-packages/psycopg2/_psycopg.cpython-39-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e')), '/usr/local/lib/_psycopg.cpython-39-darwin.so' (no such file), '/usr/lib/_psycopg.cpython-39-darwin.so' (no such file) I have tried installing psycopg2 and psycopg2-binary and have tried both while running iTerm in Rosetta. | Using this line should fix it: pip3.9 install psycopg2-binary --force-reinstall --no-cache-dir | 10 | 35 |
71,516,140 | 2022-3-17 | https://stackoverflow.com/questions/71516140/fastapi-runs-api-calls-in-serial-instead-of-parallel-fashion | I have the following code: from fastapi import FastAPI, Request import time app = FastAPI() @app.get("/ping") async def ping(request: Request): print("Hello") time.sleep(5) print("bye") return {"ping": "pong!"} If I run my code on localhost—e.g., http://localhost:8501/ping—in different tabs of the same browser window, I get: Hello bye Hello bye instead of: Hello Hello bye bye I have read about using httpx, but, still, I cannot have a true parallelization. What's the problem? | As per FastAPI's docs: When you declare an endpoint with normal def instead of async def, it is run in an external threadpool that is then awaited, instead of being called directly (as it would block the server). and: If you are using a third party library that communicates with something (a database, an API, the file system, etc.) and doesn't have support for using await, (this is currently the case for most database libraries), then declare your endpoints as normally, with just def. If your application (somehow) doesn't have to communicate with anything else and wait for it to respond, use async def. You can mix def and async def in your endpoints as much as you need and define each one using the best option for you. FastAPI will do the right thing with them. Thus, a def (synchronous) endpoint in FastAPI will still run in the event loop, but instead of calling it directly, which would block the server, FastAPI will run it in a separate thread from an external threadpool and then await it (more details on the external threadpool are given later on); hence, FastAPI will still work asynchronously. In other words, the server will process requests to such endpoints concurrently (at the cost, though, of spawning a new thread or reusing an existing one from the threadpool, for every incoming request to such endpoints). Whereas, async def endpoints run directly in the event loop—which runs in a single thread, typically the main thread of a process/worker, and here is created when calling uvicorn.run(), or the equivalent method of some other ASGI server—that is, the server will also process requests to such endpoints concurrently/asynchronously, as long as there is an await call to non-blocking operations inside such async def endpoints; usually, these are I/O-bound operations, such as waiting for (1) data from the client to be sent through the network, (2) contents of a file in the disk to be read, and (3) a database operation to finish. However, if an endpoint defined with async def does not await for some coroutine inside (i.e., a coroutine object is the result of calling an async def function), in order to give up time for other tasks in the event loop to run (e.g., requests to the same or other endpoints, background tasks, etc.), each request to such an endpoint will have to be completely finished (i.e., exit the endpoint), before returning control back to the event loop and allowing other tasks in the event loop to run (see this answer, if you would like to monitor all pending tasks in an event loop). In other words, in such cases, the server would be "blocked", and hence, any requests would be processed sequentially. 
Having said that, you should still consider defining an endpoint with async def, if it does not execute any blocking operations inside that has to wait for them to respond (e.g., time.sleep()), but is instead used to return simple JSON data, a simple HTMLResponse or even a FileResponse (in which case the file contents will be read asynchronously and in chunks regardless, using await anyio.open_file(), as can be seen in the relevant FileResponse implementation), even if there is not an await call inside the endpoint in such cases, as FastAPI would likely perform better, when running such a simple endpoint directly in the event loop, rather than running the endpoint in a separate thread from the external threadpool (which would be the case, if the endpoint was instead defined with normal def). If, however, you had to return some complex and large JSON data, either encoding them on your own within the endpoint, as shown in the linked answer earlier, or using Starlette's JSONResponse or FastAPI's ORJSONResponse/UJSONResponse (see this related answer as well), which, all these classes, would encode the data in a synchronous way, using json.dumps() and orjson.dumps()/ujson.dumps() respectively, in that case, you might consider having the endpoint defined with normal def (related answers could be found here and here). Alternatively, you could keep using an async def endpoint, but have such blocking operations inside (e.g., orjson.dumps() or df.to_json()) run in a separate thread/process, as described in the solutions provided later on (It would be a good practice to perform benchmark tests, similar to this, and compare the results to find the best-performing approach for your case). Note that the same concept not only applies to endpoints, but also to functions that are used as StreamingResponse generators (see StreamingResponse class implementation) or Background Tasks (see BackgroundTask class implementation and this answer), as well as Dependencies. That means FastAPI, behind the scenes, will also run such functions defined with normal def in a separate thread from the same external threadpool; whereas, if such functions were defined with async def, they would run directly in the event loop. In order to run an endpoint or a function described above in a separate thread and await it, FastAPI uses Starlette's asynchronous run_in_threadpool() function, which, under the hood, calls anyio.to_thread.run_sync(). The default number of worker threads of that external threadpool is 40 and can be adjusted as required—please have a look at this answer for more details on the external threadpool and how to adjust the number of threads. Hence, after reading this answer to the end, you should be able to know when to define a FastAPI endpoint/StreamingResponse generator/BackgroundTask/Dependency with def or async def, as well as whether or not you should increase the number of threads of the external threadpool. Python's async def function and await The keyword await (which only works within an async def function) passes function control back to the event loop. In other words, it suspends the execution of the surrounding coroutine, and tells the event loop to let some other task run, until that awaited task is completed. 
Note that just because you may define a custom function with async def and then await it inside your async def endpoint, it doesn't mean that your code will work asynchronously, if that custom function contains, for example, calls to time.sleep(), CPU-bound tasks, non-async I/O libraries, or any other blocking call that is incompatible with asynchronous Python code. In FastAPI, for example, when using the async methods of UploadFile, such as await file.read() and await file.close(), FastAPI/Starlette, behind the scenes, actually calls the corresponding synchronous File methods in a separate thread from the external threadpool described earlier (using run_in_threadpool()) and awaits it; otherwise, such methods/operations would block the event loop—you could find out more by looking at the implementation of the UploadFile class. Note that async does not mean parallel, but concurrently. As mentioned earlier, asynchronous code with async and await is many times summarized as using coroutines. Coroutines are cooperative, meaning that at any given time, a program with coroutines is running only one of its coroutines, and this running coroutine suspends its execution only when it explicitly requests to be suspended. As described here: Specifically, whenever execution of a currently-running coroutine reaches an await expression, the coroutine may be suspended, and another previously-suspended coroutine may resume execution if what it was suspended on has since returned a value. Suspension can also happen when an async for block requests the next value from an asynchronous iterator or when an async with block is entered or exited, as these operations use await under the hood. If, however, a blocking I/O-bound or CPU-bound operation was directly executed inside an async def function/endpoint, it would then block the event loop, and hence, the main thread would be blocked as well. Hence, a blocking operation such as time.sleep() in an async def endpoint would block the entire server (as in the example provided in your question). Thus, if your endpoint is not going to make any async calls, you could declare it with normal def instead, in which case, FastAPI would run it in a separate thread from the external threadpool and await it, as explained earlier (more solutions are given in the following sections). Example: @app.get("/ping") def ping(request: Request): #print(request.client) print("Hello") time.sleep(5) print("bye") return "pong" Otherwise, if the functions that you had to execute inside the endpoint are async functions that you had to await, you should define your endpoint with async def. To demonstrate this, the example below uses asyncio.sleep(), which provides a non-blocking sleep operation. Calling it will suspend the execution of the surrounding coroutine (until the sleep operation is completed), thus allowing other tasks in the event loop to run. 
import asyncio @app.get("/ping") async def ping(request: Request): #print(request.client) print("Hello") await asyncio.sleep(5) print("bye") return "pong" Both the endpoints above will print out the specified messages to the screen in the same order as mentioned in your question—if two requests arrived at (around) the same time—that is: Hello Hello bye bye Important Note When using a web browser to call the same endpoint for the second (third, and so on) time, please remember to do that from a tab that is isolated from the browser's main session; otherwise, succeeding requests (i.e., coming after the first one) might be blocked by the browser (i.e., on client side), as the browser might be waiting for a response to the previous request from the server, before sending the next request. This is a common behaviour for the Chrome web browser at least, due to waiting to see the result of a request and check if the result can be cached, before requesting the same resource again (Also, note that every browser has a specific limit for parallel connections to a given hostname). You could confirm that by using print(request.client) inside the endpoint, where you would see that the hostname and port number are the same for all incoming requests—in case the requests were initiated from tabs opened in the same browser window/session; otherwise, the port number would normally be different for every request—and hence, those requests would be processed sequentially by the server, because of the browser sending them sequentially in the first place. To overcome this, you could either: Reload the same tab (as is running), or Open a new tab in an (isolated) Incognito Window, or Use a different web browser/client to send the request, or Use the httpx library to make asynchronous HTTP requests, along with the awaitable asyncio.gather(), which allows executing multiple asynchronous operations concurrently and then returns a list of results in the same order the awaitables (tasks) were passed to that function (have a look at this answer for more details). Example: import httpx import asyncio URLS = ['http://127.0.0.1:8000/ping'] * 2 async def send(url, client): return await client.get(url, timeout=10) async def main(): async with httpx.AsyncClient() as client: tasks = [send(url, client) for url in URLS] responses = await asyncio.gather(*tasks) print(*[r.json() for r in responses], sep='\n') asyncio.run(main()) In case you had to call different endpoints that may take different time to process a request, and you would like to print the response out on client side as soon as it is returned from the server—instead of waiting for asyncio.gather() to gather the results of all tasks and print them out in the same order the tasks were passed to the send() function—you could replace the send() function of the example above with the one shown below: async def send(url, client): res = await client.get(url, timeout=10) print(res.json()) return res Python's GIL and Blocking Operations inside Threads Simply put, the Global Interpreter Lock (GIL) is a mutex (lock), ensuring that only one thread (per process) can hold the control of the Python interpreter (and run Python bytecode) at any point in time. One might wonder that if a blocking operation inside a thread, such as time.sleep() within a def endpoint, blocks the calling thread, how is the GIL released, so that other threads get a chance to execute? 
The answer is because time.sleep() is not really a CPU-bound operation, but it "suspends execution of the calling thread for the given number of seconds"; hence, the thread is switched out of the CPU for x seconds, allowing other threads to switch in for execution. In other words, it does block the calling thread, but the calling process is still alive, so that other threads can still run within the process (obviously, in a single-threaded application, everything would be blocked). The state of the thread is stored, so that it can be restored and resume execution at a later point. That process of the CPU jumping from one thread of execution to another is called context switching. Even if a CPU-bound operation (or an I/O-bound one that wouldn't voluntarily release the GIL) was executed inside a thread, and the GIL hadn't been released after 5ms (or some other configurable interval), Python would (automatically) tell the current thread to release the GIL. To find the default thread switch interval, use: import sys print(sys.getswitchinterval()) # 0.005 However, this automatic GIL release is best-effort, not guaranteed—see this, for instance. Async/await and Blocking I/O-bound or CPU-bound Operations If you are required to define a FastAPI endpoint (or a StreamingResponse generator, a background task, etc.) with async def (as you might need to await for some coroutines inside it), but also have some synchronous blocking I/O-bound or CPU-bound operation (computationally intensive task) that would block the event loop (essentially, the entire server) and wouldn't let other requests to go through, for example: @app.post("/ping") async def ping(file: UploadFile = File(...)): print("Hello") try: contents = await file.read() res = cpu_bound_task(contents) # this would block the event loop finally: await file.close() print("bye") return "pong" then: You should check whether you could change your endpoint's definition to normal def instead of async def. One way, if the only method in your endpoint that had to be awaited was the one reading the file contents would be to declare the file contents parameter as bytes, i.e., contents: bytes = File(). Using that definition, FastAPI would read the file for you and you would receive the contents as bytes. Hence, there would be no need to use an async def endpoint with await file.read() inside. Please note that this approach (i.e., using contents: bytes = File()) should work fine for small files; however, for larger files, and always depending on your server's resources, this might cause issues, as the enitre file contents would be stored to memory (see the documentation on File Parameters). Hence, if your system does not have enough RAM available to accommodate the accumulated data, your application may end up crashing—if, for instance, you have 8GB of RAM (the available RAM will always be less than the amount installed on your device, as other apps/services will be using it as well), you can't load a 50GB file. Alternatively, you could use file: UploadFile = File(...) definition in your endpoint, but this time call the synchronous .read() method of the SpooledTemporaryFile directly, which can be accessed through the .file attribute of the UploadFile object. In this way, you will be able to declare your endpoint with a normal def instead, and hence, each request will run in a separate thread from the external threadpool and then be awaited (as explained earlier). Example is given below. 
For more details on how to upload a File, as well as how FastAPI/Starlette uses the SpooledTemporaryFile behind the scenes when uploading a File, please have a look at this answer and this answer. @app.post("/ping") def ping(file: UploadFile = File(...)): print("Hello") try: contents = file.file.read() res = cpu_bound_task(contents) finally: file.file.close() print("bye") return "pong" Another way, when you would like having the endpoint defined with normal def, as you might need to run blocking operations inside and would like having it run in a separate thread instead of calling it directly in the event loop, but at the same time you would have to await for coroutines inside, is to await such coroutines within an async dependency instead, as demonstrated in this answer, which will then return the result to the def endpoint. Use FastAPI's (Starlette's) run_in_threadpool() function from the concurrency module—as @tiangolo suggested—which, as noted earlier, will run the function in a separate thread from an external threadpool to ensure that the main thread (where coroutines are run) does not get blocked. The run_in_threadpool() is an awaitable function, where its first parameter is a normal function, and the following parameters are passed to that function directly. It supports both sequence and keyword arguments. from fastapi.concurrency import run_in_threadpool res = await run_in_threadpool(cpu_bound_task, contents) Alternatively, use asyncio's loop.run_in_executor()—after obtaining the running event loop using asyncio.get_running_loop()—to run the task, which, in this case, you can await for it to complete and return the result(s), before moving on to the next line of code. Passing None to the executor argument, the default executor will be used; which is a ThreadPoolExecutor: import asyncio loop = asyncio.get_running_loop() res = await loop.run_in_executor(None, cpu_bound_task, contents) or, if you would like to pass keyword arguments instead, you could use a lambda expression (e.g., lambda: cpu_bound_task(some_arg=contents)), or, preferably, functools.partial(), which is specifically recommended in the documentation for loop.run_in_executor(): import asyncio from functools import partial loop = asyncio.get_running_loop() res = await loop.run_in_executor(None, partial(cpu_bound_task, some_arg=contents)) In Python 3.9+, you could also use asyncio.to_thread() to asynchronously run a synchronous function in a separate thread—which, essentially, uses await loop.run_in_executor(None, func_call) under the hood, as can been seen in the implementation of asyncio.to_thread(). The to_thread() function takes the name of a blocking function to execute, as well as any arguments (*args and/or **kwargs) to the function, and then returns a coroutine that can be awaited. Example: import asyncio res = await asyncio.to_thread(cpu_bound_task, contents) Note that as explained in this answer, passing None to the executor argument does not create a new ThreadPoolExecutor every time you call await loop.run_in_executor(None, ...), but instead re-uses the default executor with the default number of worker threads (i.e., min(32, os.cpu_count() + 4)). Thus, depending on the requirements of your application, that number might not be enough. In that case, you should rather use a custom ThreadPoolExecutor. 
For instance: import asyncio import concurrent.futures loop = asyncio.get_running_loop() with concurrent.futures.ThreadPoolExecutor() as pool: res = await loop.run_in_executor(pool, cpu_bound_task, contents) I would strongly recommend having a look at the linked answer above to learn about the difference between using run_in_threadpool() and run_in_executor(), as well as how to create a re-usable custom ThreadPoolExecutor at the application startup, and adjust the number of maximum worker threads as needed. ThreadPoolExecutor will successfully prevent the event loop from being blocked (and should be prefered for calling blocking I/O-bound tasks), but won't give you the performance improvement you would expect from running code in parallel; especially, when one needs to perform CPU-bound tasks, such as audio or image processing and machine learning (see here). It is thus preferable to run CPU-bound tasks in a separate process—using ProcessPoolExecutor, as shown below—which, again, you can integrate with asyncio, in order to await it to finish its work and return the result(s). As described here, it is important to protect the entry point of the program to avoid recursive spawning of subprocesses, etc. Basically, your code must be under if __name__ == '__main__'. import concurrent.futures loop = asyncio.get_running_loop() with concurrent.futures.ProcessPoolExecutor() as pool: res = await loop.run_in_executor(pool, cpu_bound_task, contents) Again, I'd suggest having a look at the linked answer earlier on how to create a re-usable ProcessPoolExecutor at application startup—you should find this answer helpful as well. More solutions, as shown in this answer, include using asyncio.create_task() (if your task is actually async def, but you wouldn't like to await for it to complete) or background tasks, as well as spawning a new thread (using threading) or process (using multiprocessing) in the background instead of using concurrent.futures. Moreover, if you had to perform some heavy background computation task that wouldn't necessarily have to be run by the same process (for example, you don't need to share memory, variables, etc.), you could also benefit from using other bigger tools like Celery. Using apscheduler, as demonstrated in this answer, might be another option as well—always choose what suits you best. Use more server workers to take advantage of multi-core CPUs, in order to run multiple processes in parallel and be able to serve more requests. For example, uvicorn main:app --workers 4. When using 1 worker, only one process is run. When using multiple workers, this will spawn multiple processes (all single threaded). Each process has a separate GIL, as well as its own event loop, which runs in the main thread of each process and executes all tasks in its thread. That means, there is only one thread that can take a lock on the interpreter of each process; unless, of course, you employ additional threads, either outside or inside the event loop, e.g., when using run_in_threadpool, a custom ThreadPoolExecutor or defining endpoints/StreamingResponse generators/background tasks/dependencies with normal def instead of async def, as well as when calling UploadFile's methods (see the first two paragraphs of this answer for more details). Note that each worker "has its own things, variables and memory". This means that global variables/objects, etc., won't be shared across the processes/workers. 
In this case, you should consider using a database storage, or Key-Value stores (Caches), as described here and here. Additionally, note that "if you are consuming a large amount of memory in your code, each process will consume an equivalent amount of memory". | 92 | 262 |
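As a side note to the threadpool discussion in the answer above: the default of 40 worker threads comes from AnyIO, and one way to raise it (sketched here with arbitrary placeholder numbers) is to adjust AnyIO's default capacity limiter at application startup:

```python
from anyio import to_thread
from fastapi import FastAPI

app = FastAPI()

@app.on_event("startup")
async def raise_thread_limit():
    limiter = to_thread.current_default_thread_limiter()
    limiter.total_tokens = 100  # default is 40; tune to your workload
```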
71,526,175 | 2022-3-18 | https://stackoverflow.com/questions/71526175/how-to-switch-vs-code-to-use-pylance-rather-than-jedi | I am trying to use Structural Pattern Matching (PEP634) from Python 3.10, but Jedi language server doesn't support the syntax. I've heard Pylance is better, but I can't find any way to switch VS Code to Pylance. I've downloaded the default Python extension, but only the Jedi language server is running. How can I make the switch? EDIT Adding a picture of trying to search for "pylance" so there's no confusion, it's not there at all. It seems to claim it's part of the Python extension, but the language server being used is always Jedi. Python extension packs: | I was using the open source version of VS Code which doesn't have all extensions. Switching to the proprietary version (available on the AUR) fixed my issue. | 9 | 9 |
71,512,035 | 2022-3-17 | https://stackoverflow.com/questions/71512035/how-should-i-specify-default-values-on-pydantic-fields-with-validate-always-to | My type checker moans at me when I use snippets like this one from the Pydantic docs: from datetime import datetime from pydantic import BaseModel, validator class DemoModel(BaseModel): ts: datetime = None # Expression of type "None" cannot be # assigned to declared type "datetime" @validator('ts', pre=True, always=True) def set_ts_now(cls, v): return v or datetime.now() My workarounds so far have been: ts: datetime = datetime(1970, 1, 1) # yuck ts: datetime = None # type: ignore ts: Optional[datetime] = None # Not really true. `ts` is not optional. Is there a preferred way out of this conundrum? Or is there a type checker I could use which doesn't mind this? | New answer Use a Field with a default_factory for your dynamic default value: from datetime import datetime from pydantic import BaseModel, Field class DemoModel(BaseModel): ts: datetime = Field(default_factory=datetime.now) Your type hints are correct, the linter is happy and DemoModel().ts is not None. From the Field docs: default_factory: a zero-argument callable that will be called when a default value is needed for this field. Among other purposes, this can be used to set dynamic default values. | 21 | 37 |
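A quick usage sketch of the accepted approach (pydantic v1 semantics, as in the question):

```python
from datetime import datetime
from pydantic import BaseModel, Field

class DemoModel(BaseModel):
    ts: datetime = Field(default_factory=datetime.now)

m = DemoModel()
print(m.ts)                                   # filled in at instantiation time
print(DemoModel(ts=datetime(1970, 1, 1)).ts)  # an explicit value still wins
```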
71,539,448 | 2022-3-19 | https://stackoverflow.com/questions/71539448/using-different-pydantic-models-depending-on-the-value-of-fields | I have 2 Pydantic models (var1 and var2). The input of the PostExample method can receive data either for the first model or the second. The use of Union helps in solving this issue, but during validation it throws errors for both the first and the second model. How to make it so that in case of an error in filling in the fields, validator errors are returned only for a certain model, and not for both at once? (if it helps, the models can be distinguished by the length of the field A). main.py @app.post("/PostExample") def postExample(request: Union[schemas.var1, schemas.var2]): result = post_registration_request.requsest_response() return result schemas.py class var1(BaseModel): A: str B: int C: str D: str class var2(BaseModel): A: str E: int F: str | You could use Discriminated Unions (credits to @larsks for mentioning that in the comments). Setting a discriminated union, "validation is faster since it is only attempted against one model", as well as "only one explicit error is raised in case of failure". Working example is given below. Another approach would be to attempt parsing the models (based on a discriminator you pass as query/path param), as described in this answer (Option 1). Working Example app.py import schemas from fastapi import FastAPI, Body from typing import Union app = FastAPI() @app.post("/") def submit(item: Union[schemas.Model1, schemas.Model2] = Body(..., discriminator='model_type')): return item schemas.py from typing import Literal from pydantic import BaseModel class Model1(BaseModel): model_type: Literal['m1'] A: str B: int C: str D: str class Model2(BaseModel): model_type: Literal['m2'] A: str E: int F: str Test inputs - outputs: #1 Successful Response: Request body { "model_type": "m1", "A": "string", "B": 0, "C": "string", "D": "string" } returns server response 200. #2 Validation error: Request body { "model_type": "m1", "A": "string", "C": "string", "D": "string" } returns server response { "detail": [ { "loc": [ "body", "Model1", "B" ], "msg": "field required", "type": "value_error.missing" } ] }. #3 Validation error: Request body { "model_type": "m2", "A": "string", "C": "string", "D": "string" } returns server response { "detail": [ { "loc": [ "body", "Model2", "E" ], "msg": "field required", "type": "value_error.missing" }, { "loc": [ "body", "Model2", "F" ], "msg": "field required", "type": "value_error.missing" } ] }. | 19 | 20
71,542,183 | 2022-3-19 | https://stackoverflow.com/questions/71542183/websocket-getting-closed-immediately-after-connecting-to-fastapi-endpoint | I'm trying to connect a websocket aiohttp client to a fastapi websocket endpoint, but I can't send or recieve any data because it seems that the websocket gets closed immediately after connecting to the endpoint. server import uvicorn from fastapi import FastAPI, WebSocket app = FastAPI() @app.websocket('/ws') async def websocket_endpoint(websocket: WebSocket): await websocket.accept() ... if __name__ == '__main__': uvicorn.run('test:app', debug=True, reload=True) client import aiohttp import asyncio async def main(): s = aiohttp.ClientSession() ws = await s.ws_connect('ws://localhost:8000/ws') while True: ... asyncio.run(main()) When I try to send data from the server to the client when a connection is made server @app.websocket('/ws') async def websocket_endpoint(websocket: WebSocket): await websocket.accept() await websocket.send_text('yo') client while True: print(await ws.receive()) I always get printed in my client's console WSMessage(type=<WSMsgType.CLOSED: 257>, data=None, extra=None) While in the server's debug console it says INFO: ('127.0.0.1', 59792) - "WebSocket /ws" [accepted] INFO: connection open INFO: connection closed When I try to send data from the client to the server server @app.websocket('/ws') async def websocket_endpoint(websocket: WebSocket): await websocket.accept() while True: await websocket.receive_text() client ws = await s.ws_connect('ws://localhost:8000/ws') await ws.send_str('client!') Nothing happens, I get no message printed out in the server's console, just the debug message saying the client got accepted, connection opened and closed again. I have no idea what I'm doing wrong, I followed this tutorial in the fastAPI docs for a websocket and the example there with the js websocket works completely fine. | The connection is closed by either end (client or server), as shown from your code snippets. You would need to have a loop in both the server and the client for being able to await for messages, as well as send messages, continuously (have a look here and here). Additionally, as per FastAPI's documentation: When a WebSocket connection is closed, the await websocket.receive_text() will raise a WebSocketDisconnect exception, which you can then catch and handle like in this example. Thus, on server side, you should use a try-except block to catch and handle WebSocketDisconnect exceptions, as well as websockets.exceptions.ConnectionClosed exceptions, as explained in this answer. Below is a working example demonstrating a client (in aiohttp) - server (in FastAPI) communication using websockets. Related examples can be found here and here, as well as here and here. 
Working Example Server from fastapi import FastAPI, WebSocket, WebSocketDisconnect from websockets.exceptions import ConnectionClosed import uvicorn app = FastAPI() @app.websocket("/ws") async def websocket_endpoint(websocket: WebSocket): # await for connections await websocket.accept() try: # send "Connection established" message to client await websocket.send_text("Connection established!") # await for messages and send messages while True: msg = await websocket.receive_text() if msg.lower() == "close": await websocket.close() break else: print(f'CLIENT says - {msg}') await websocket.send_text(f"Your message was: {msg}") except (WebSocketDisconnect, ConnectionClosed): print("Client disconnected") if __name__ == "__main__": uvicorn.run(app, host="127.0.0.1", port=8000) Client Examples using the websockets library instead of aiohttp can be found here, as well as here and here. import aiohttp import asyncio async def main(): async with aiohttp.ClientSession() as session: async with session.ws_connect('ws://127.0.0.1:8000/ws') as ws: # await for messages and send messages async for msg in ws: if msg.type == aiohttp.WSMsgType.TEXT: print(f'SERVER says - {msg.data}') text = input('Enter a message: ') await ws.send_str(text) elif msg.type == aiohttp.WSMsgType.ERROR: break asyncio.run(main()) | 9 | 7 |
71,528,875 | 2022-3-18 | https://stackoverflow.com/questions/71528875/signal-handling-in-uvicorn-with-fastapi | I have an app using Uvicorn with FastAPI. I have also some connections open (e.g. to MongoDB). I want to gracefully close these connections once some signal occurs (SIGINT, SIGTERM and SIGKILL). My server.py file: import uvicorn import fastapi import signal import asyncio from source.gql import gql app = fastapi.FastAPI() app.add_middleware(CORSMiddleware, allow_origins=["*"], allow_methods=["*"], allow_headers=["*"]) app.mount("/graphql", gql) # handle signals HANDLED_SIGNALS = ( signal.SIGINT, signal.SIGTERM ) loop = asyncio.get_event_loop() for sig in HANDLED_SIGNALS: loop.add_signal_handler(sig, _some_callback_func) if __name__ == "__main__": uvicorn.run(app, port=6900) Unfortunately, the way I try to achieve this is not working. When I try to Ctrl+C in terminal, nothing happens. I believe it is caused because Uvicorn is started in different thread... What is the correct way of doing this? I have noticed uvicorn.Server.install_signal_handlers() function, but wasn't lucky in using it... | FastAPI allows defining event handlers (functions) that need to be executed before the application starts up, or when the application is shutting down. Thus, you could use the shutdown event, as described here: @app.on_event("shutdown") def shutdown_event(): # close connections here Update Since startup and shutdown events are now deprecated (and might be removed in the future), one could use a lifespan function instead. Examples and details can be found in this answer, as well as here, here and here. | 6 | 8 |
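A minimal sketch of the lifespan variant mentioned in the update (the comments stand in for whatever MongoDB or other connections need opening and closing):

```python
from contextlib import asynccontextmanager
from fastapi import FastAPI

@asynccontextmanager
async def lifespan(app: FastAPI):
    # startup: open connections here (e.g., the MongoDB client)
    yield
    # shutdown: runs when uvicorn receives SIGINT/SIGTERM, before the process exits
    # close connections here

app = FastAPI(lifespan=lifespan)
```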
71,497,081 | 2022-3-16 | https://stackoverflow.com/questions/71497081/how-to-build-multiple-packages-from-a-single-python-module-using-pyproject-toml | I want to achieve a behavior similar to the Dask library's: it is possible to use pip to install dask, dask[dataframe], dask[array] and others. They do it by using the setup.py with a packages key like this. If I install only dask, then dask[dataframe] is not installed, and they warn you about this when executing the module. I found this in the poetry documentation but when I execute poetry build I only get one .whl file with all of the packages within. How can I package my module to be able to install specific parts of a library using poetry? | Actually the Dask example does not install subpackages separately; it just installs the custom dependencies separately, as explained in this link. In order to accomplish the same behavior using poetry, you need to use extras (as mentioned by user @sinoroc in this comment). The example pyproject.toml from the poetry extras page is this: [tool.poetry] name = "awesome" [tool.poetry.dependencies] # These packages are mandatory and form the core of this package’s distribution. mandatory = "^1.0" # A list of all of the optional dependencies, some of which are included in the # below `extras`. They can be opted into by apps. psycopg2 = { version = "^2.7", optional = true } mysqlclient = { version = "^1.3", optional = true } [tool.poetry.extras] mysql = ["mysqlclient"] pgsql = ["psycopg2"] Running poetry build --format wheel creates a single wheel file. In order to install a specific set of extra dependencies using pip and the wheel file, you should use: pip install "wheel_filename.whl[mysql]" | 5 | 5
71,549,500 | 2022-3-20 | https://stackoverflow.com/questions/71549500/how-to-create-an-abstract-cached-property-in-python | In order to create an abstract property in Python one can use the following code: from abc import ABC, abstractmethod class AbstractClassName(ABC): @cached_property @abstractmethod def property_name(self) -> str: pass class ClassName(AbstractClassName): @property def property_name(self) -> str: return 'XYZ' >>> o = AbstractClassName() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: Can't instantiate abstract class AbstractClassName with abstract method property_name >>> o = ClassName() >>> o.property_name 'XYZ' which is what I have expected. I wanted to create an abstract cached property, so I tried the following: from abc import ABC, abstractmethod from functools import cached_property class AbstractClassName(ABC): @cached_property @abstractmethod def property_name(self) -> str: pass class ClassName(AbstractClassName): @cached_property def property_name(self) -> str: return 'XYZ' However, this is not working as I expected: >>> o = AbstractClassName() >>> o.property_name >>> o = ClassName() >>> o.property_name 'XYZ' Notice that this time it is allowing me to create an instance of an abstract class AbstractClassName. I am using Python 3.10. Is there any way to defined an abstract cached property? | Here is a possible solution from abc import ABC, abstractmethod from functools import cached_property class AbstractClassName(ABC): @cached_property def property_name(self) -> str: return self._property_name() @abstractmethod def _property_name(self) -> str: ... class ClassName(AbstractClassName): def _property_name(self) -> str: print("Heavy calculation") return "XYZ" Test AbstractClassName() # raise a = ClassName() print(a.property_name) print(a.property_name) # Heavy calculation # XYZ # XYZ | 8 | 1 |
71,563,696 | 2022-3-21 | https://stackoverflow.com/questions/71563696/pandas-to-gbq-typeerror-expected-bytes-got-a-int-object | I am using the pandas_gbq module to try and append a dataframe to a table in Google BigQuery. I keep getting this error: ArrowTypeError: Expected bytes, got a 'int' object. I can confirm the data types of the dataframe match the schema of the BQ table. I found this post regarding Parquet files not being able to have mixed datatypes: Pandas to parquet file In the error message I'm receiving, I see there is a reference to a Parquet file, so I'm assuming the df.to_gbq() call is creating a Parquet file and I have a mixed data type column, which is causing the error. The error message doesn't specify. I think that my challenge is that I can't see to find which column has the mixed datatype - I've tried casting them all as strings and then specifying the table schema parameter, but that hasn't worked either. This is the full error traceback: In [76]: df.to_gbq('Pricecrawler.Daily_Crawl_Data', project_id=project_id, if_exists='append') ArrowTypeError Traceback (most recent call last) <ipython-input-76-74cec633c5d0> in <module> ----> 1 df.to_gbq('Pricecrawler.Daily_Crawl_Data', project_id=project_id, if_exists='append') ~\Anaconda3\lib\site-packages\pandas\core\frame.py in to_gbq(self, destination_table, project_id, chunksize, reauth, if_exists, auth_local_webserver, table_schema, location, progress_bar, credentials) 1708 from pandas.io import gbq 1709 -> 1710 gbq.to_gbq( 1711 self, 1712 destination_table, ~\Anaconda3\lib\site-packages\pandas\io\gbq.py in to_gbq(dataframe, destination_table, project_id, chunksize, reauth, if_exists, auth_local_webserver, table_schema, location, progress_bar, credentials) 209 ) -> None: 210 pandas_gbq = _try_import() --> 211 pandas_gbq.to_gbq( 212 dataframe, 213 destination_table, ~\Anaconda3\lib\site-packages\pandas_gbq\gbq.py in to_gbq(dataframe, destination_table, project_id, chunksize, reauth, if_exists, auth_local_webserver, table_schema, location, progress_bar, credentials, api_method, verbose, private_key) 1191 return 1192 -> 1193 connector.load_data( 1194 dataframe, 1195 destination_table_ref, ~\Anaconda3\lib\site-packages\pandas_gbq\gbq.py in load_data(self, dataframe, destination_table_ref, chunksize, schema, progress_bar, api_method, billing_project) 584 585 try: --> 586 chunks = load.load_chunks( 587 self.client, 588 dataframe, ~\Anaconda3\lib\site-packages\pandas_gbq\load.py in load_chunks(client, dataframe, destination_table_ref, chunksize, schema, location, api_method, billing_project) 235 ): 236 if api_method == "load_parquet": --> 237 load_parquet( 238 client, 239 dataframe, ~\Anaconda3\lib\site-packages\pandas_gbq\load.py in load_parquet(client, dataframe, destination_table_ref, location, schema, billing_project) 127 128 try: --> 129 client.load_table_from_dataframe( 130 dataframe, 131 destination_table_ref, ~\Anaconda3\lib\site-packages\google\cloud\bigquery\client.py in load_table_from_dataframe(self, dataframe, destination, num_retries, job_id, job_id_prefix, location, project, job_config, parquet_compression, timeout) 2669 parquet_compression = parquet_compression.upper() 2670 -> 2671 _pandas_helpers.dataframe_to_parquet( 2672 dataframe, 2673 job_config.schema, ~\Anaconda3\lib\site-packages\google\cloud\bigquery\_pandas_helpers.py in dataframe_to_parquet(dataframe, bq_schema, filepath, parquet_compression, parquet_use_compliant_nested_type) 584 585 bq_schema = schema._to_schema_fields(bq_schema) --> 586 arrow_table = 
dataframe_to_arrow(dataframe, bq_schema) 587 pyarrow.parquet.write_table( 588 arrow_table, filepath, compression=parquet_compression, **kwargs, ~\Anaconda3\lib\site-packages\google\cloud\bigquery\_pandas_helpers.py in dataframe_to_arrow(dataframe, bq_schema) 527 arrow_names.append(bq_field.name) 528 arrow_arrays.append( --> 529 bq_to_arrow_array(get_column_or_index(dataframe, bq_field.name), bq_field) 530 ) 531 arrow_fields.append(bq_to_arrow_field(bq_field, arrow_arrays[-1].type)) ~\Anaconda3\lib\site-packages\google\cloud\bigquery\_pandas_helpers.py in bq_to_arrow_array(series, bq_field) 288 if field_type_upper in schema._STRUCT_TYPES: 289 return pyarrow.StructArray.from_pandas(series, type=arrow_type) --> 290 return pyarrow.Array.from_pandas(series, type=arrow_type) 291 292 ~\Anaconda3\lib\site-packages\pyarrow\array.pxi in pyarrow.lib.Array.from_pandas() ~\Anaconda3\lib\site-packages\pyarrow\array.pxi in pyarrow.lib.array() ~\Anaconda3\lib\site-packages\pyarrow\array.pxi in pyarrow.lib._ndarray_to_array() ~\Anaconda3\lib\site-packages\pyarrow\error.pxi in pyarrow.lib.check_status() ArrowTypeError: Expected bytes, got a 'int' object | Had this same issue - solved it simply with df = df.astype(str) and doing to_gbq on that instead. Caveat is that all your fields will now be strings... | 15 | 11 |
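A sketch of the accepted workaround in context (the table and project names are taken from the question; the sample DataFrame is a made-up stand-in, and a more surgical fix would be casting only the offending column if it can be identified):

```python
import pandas as pd

df = pd.DataFrame({"sku": ["a1", "b2"], "price": [199, 249]})  # stand-in for the crawled data

df = df.astype(str)  # uniform string columns, so pyarrow no longer sees mixed types
df.to_gbq("Pricecrawler.Daily_Crawl_Data", project_id="your-project-id", if_exists="append")
```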
71,542,947 | 2022-3-19 | https://stackoverflow.com/questions/71542947/how-can-i-fix-task-was-destroyed-but-it-is-pending | I have a problem. So I have a task that runs every time when a user writes a chat message on my discord server - it's called on_message. So my bot has many things to do in this event, and I often get this kind of error: Task was destroyed but it is pending! task: <Task pending name='pycord: on_message' coro=<Client._run_event() done, defined at /Bots/gift-bot/discord/client.py:374> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f68a7bdfc10>()]>> So I think if I want to fix this, I need to speedup my code. But sadly, I don't have any clue how i can do it to fix this error. Edit: I integrated timings and this is what I get printed: Task was destroyed but it is pending! task: <Task pending name='pycord: on_message' coro=<Client._run_event() done, defined at /Bots/gift-bot/discord/client.py:374> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f01063f98e0>()]>> 2 if checks done - 7.867813110351562e-06 5 if checks done - 0.0061550140380859375 mysql checks done - 0.010785341262817383 task done - 0.13075661659240723 2 if checks done - 8.344650268554688e-06 5 if checks done - 0.011545896530151367 mysql checks done - 0.02138519287109375 task done - 0.11132025718688965 2 if checks done - 2.0503997802734375e-05 5 if checks done - 0.008122920989990234 mysql checks done - 0.012276411056518555 2 if checks done - 1.0728836059570312e-05 5 if checks done - 0.014346837997436523 mysql checks done - 0.040288448333740234 task done - 0.12520265579223633 2 if checks done - 1.0728836059570312e-05 5 if checks done - 0.0077972412109375 mysql checks done - 0.013320684432983398 task done - 0.1502058506011963 task done - 0.10663175582885742 2 if checks done - 9.775161743164062e-06 5 if checks done - 0.006486177444458008 mysql checks done - 0.011229515075683594 Task was destroyed but it is pending! task: <Task pending name='pycord: on_message' coro=<Client._run_event() done, defined at /Bots/gift-bot/discord/client.py:374> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f010609a9d0>()]>> 2 if checks done - 6.67572021484375e-06 5 if checks done - 0.0049741268157958984 mysql checks done - 0.008575677871704102 task done - 0.10633635520935059 And this is the code for the integrated timings: @commands.Cog.listener("on_message") async def on_message(self, message): start = time.time() # Check ob Nachricht gezählt werden kann if message.author.bot: return if message.type != discord.MessageType.default: return print(f"2 if checks done - {time.time() - start}") if isinstance(message.channel, discord.channel.DMChannel): return await message.reply(f'Hey {message.author.name}!\nLeider bin ich der falsche Ansprechpartner, falls du Hilfe suchst.. <:pepe_hands:705896495601287320>\nBetrete den https://discord.gg/deutschland Bl4cklist-Discord und sende unserem Support-Bot <@671421220566204446> (`Bl4cklist🔥Support#7717`) eine Private-Nachricht, damit sich unser Support-Team um dein Problem so schnell es geht kümmern kann. <:pepe_love:759741232443949107>') # ENTFERNEN AM 30. APRIL prefix_now = await get_prefix(message) if message.content.startswith(str(prefix_now)): try: await message.reply("› <a:alarm:769215249261789185> - **UMSTIEG AUF SLASH-COMMANDS:** Ab **jetzt** laufen alle Befehle dieses Bots auf `/` - um Leistung zu sparen und die Erfahrung zu verbessern. 
Nutze `/help` um eine Befehlsliste zu sehen.") except discord.Forbidden: pass return if self.client.user in message.mentions: response = choice([ "Mit mir kann man die coolsten Gewinnspiele starten! <a:gift:843914342835421185>", 'Wird Zeit jemanden den Tag zu versüßen! <:smile:774755282618286101>', "Wer nicht auf diesem Server ist, hat die Kontrolle über sein Leben verloren! <a:lach_blue2:803693710490861608>", "Wann startet endlich ein neues Gewinnspiel? <:whut:848347703217487912>", "Ich bin der BESTE Gewinnspiel-Bot - Wer was anderes sagt, lügt! <:wyldekatze:842157727169773608>" ]) try: await message.reply(f"{response} (Mein Präfix: `/`)", mention_author=False) except (discord.Forbidden, discord.HTTPException, discord.NotFound): pass return print(f"5 if checks done - {time.time() - start}") # Cooldown #self.member_cooldown_list = [i for i in self.member_cooldown_list if i[1] + self.cooldown_val > int(time.time())] #member_index = next((i for i, v in enumerate(self.member_cooldown_list) if v[0] == message.author.id), None) #if member_index is not None: # if self.member_cooldown_list[member_index][1] + self.cooldown_val > int(time.time()): # return #self.member_cooldown_list.append((message.author.id, int(time.time()))) # Rollen-Check (Bonus/Ignore) count = 1 mydb = await getConnection() mycursor = await mydb.cursor() await mycursor.execute("SELECT ignore_role_id, bonus_role_id FROM guild_role_settings WHERE guild_id = %s", (message.author.guild.id,)) in_database = await mycursor.fetchone() if in_database: if in_database[0] is not None: role_list = in_database[0].split(" ") for roleid in role_list: try: int(roleid) except ValueError: continue role = message.author.guild.get_role(int(roleid)) if role is None: continue if role in message.author.roles: await mycursor.close() mydb.close() return if in_database[1] is not None: role_list = in_database[1].split(" ") for roleid in role_list: try: int(roleid) except ValueError: continue role = message.author.guild.get_role(int(roleid)) if role is None: continue if role in message.author.roles: count += 1 # Kanal-Check (Bonus/Ignore) await mycursor.execute("SELECT ignore_channel_id FROM guild_channel_settings WHERE guild_id = %s", (message.author.guild.id,)) in_database1 = await mycursor.fetchone() if in_database1: if in_database1[0] is not None: channel_list = in_database1[0].split(" ") for channelid in channel_list: try: int(channelid) except ValueError: continue if int(message.channel.id) == int(channelid): await mycursor.close() mydb.close() return print(f"mysql checks done - {time.time() - start}") # In Datenbank eintragen await mycursor.execute("SELECT * FROM guild_message_count WHERE guild_id = %s AND user_id = %s", (message.author.guild.id, message.author.id)) in_database2 = await mycursor.fetchone() if in_database2: await mycursor.execute( "UPDATE guild_message_count SET user_id = %s, message_count = message_count + %s WHERE guild_id = %s AND user_id = %s", (message.author.id, count, message.author.guild.id, message.author.id)) else: await mycursor.execute( "INSERT INTO guild_message_count (user_id, message_count, guild_id) VALUES (%s, %s, %s)", (message.author.id, count, message.author.guild.id)) await mydb.commit() await mycursor.close() mydb.close() print(f"task done - {time.time() - start}") If I try to start my bot with asyncio.run(client.start('token')) I'm getting this error multiple times: Ignoring exception in on_guild_channel_delete Traceback (most recent call last): File "/Bots/gift-bot/discord/client.py", line 382, in _run_event 
await coro(*args, **kwargs) File "/Bots/gift-bot/cogs/misc_events.py", line 738, in on_guild_channel_delete await self.client.wait_until_ready() File "/Bots/gift-bot/discord/client.py", line 978, in wait_until_ready await self._ready.wait() File "/usr/local/lib/python3.9/asyncio/locks.py", line 226, in wait await fut RuntimeError: Task <Task pending name='pycord: on_guild_channel_delete' coro=<Client._run_event() running at /Bots/gift-bot/discord/client.py:382>> got Future <Future pending> attached to a different loop I'm using Python3.9 on a Debian 10 vServer with pycord2.0.0b5. | The await expression blocks the containing coroutine until the awaited awaitable returns. This hinders the progress of the coroutine. But await is necessary in a coroutine to yield control back to the event loop so that other coroutines can progress. Too many awaits can be problematic, it just makes progress slow. I've refactored on_message coroutine method by breaking it into sub tasks. async def _check_channel(self, message, pool): async with pool.acquire() as conn: async with conn.cursor() as cursor: await cursor.execute( "SELECT ignore_channel_id FROM guild_channel_settings WHERE guild_id = %s", (message.author.guild.id,), ) in_database = await cursor.fetchone() if in_database and in_database[0] is not None: channel_list = in_database[0].split(" ") for channelid in channel_list: try: channel_id_int = int(channelid) except ValueError: continue if int(message.channel.id) == channel_id_int: return False async def _get_role_count(self, message, pool): async with pool.acquire() as conn: async with conn.cursor() as cursor: await cursor.execute( "SELECT ignore_role_id, bonus_role_id FROM guild_role_settings WHERE guild_id = %s", (message.author.guild.id,), ) in_database = await cursor.fetchone() if in_database: first_item, second_item, *_ = in_database if first_item is not None: role_list = first_item.split(" ") for roleid in role_list: try: roleid_int = int(roleid) except ValueError: continue role = message.author.guild.get_role(roleid_int) if role is None: continue if role in message.author.roles: return False if second_item is not None: role_list = second_item.split(" ") count = 0 for roleid in role_list: try: roleid_int = int(roleid) except ValueError: continue role = message.author.guild.get_role(roleid_int) if role is None: continue if role in message.author.roles: count += 1 return count @commands.Cog.listener("on_message") async def on_message(self, message): if message.author.bot: return if message.type != discord.MessageType.default: return if isinstance(message.channel, discord.channel.DMChannel): return # Cooldown self.member_cooldown_list = [ i for i in self.member_cooldown_list if i[1] + self.cooldown_val > int(time.time()) ] member_index = next( ( i for i, v in enumerate(self.member_cooldown_list) if v[0] == message.author.id ), None, ) if member_index is not None: if self.member_cooldown_list[member_index][1] + self.cooldown_val > int( time.time() ): return self.member_cooldown_list.append((message.author.id, int(time.time()))) loop = asyncio.get_running_loop() db_pool = await aiomysql.create_pool( minsize=3, host="<host>", port=3306, user="<user>", password="<password>", db="<db_name>", autocommit=False, loop=loop, ) count = 1 check_channel_task = asyncio.create_task( self._check_channel(self, message, db_pool) ) role_count_task = asyncio.create_task(self._get_role_count(self, message, db_pool)) # write to database mydb = await db_pool.acquire() mycursor = await mydb.cursor() await 
mycursor.execute( "SELECT * FROM guild_message_count WHERE guild_id = %s AND user_id = %s", (message.author.guild.id, message.author.id), ) in_database = await mycursor.fetchone() role_count = await role_count_task check_channel = await check_channel_task if False in (role_count, check_channel): await mycursor.close() db_pool.release(mydb) db_pool.close() await db_pool.wait_closed() return if role_count: count += role_count if in_database: await mycursor.execute( "INSERT INTO guild_message_count (user_id, message_count, guild_id) VALUES (%s, %s, %s) ON DUPLICATE KEY UPDATE message_count = message_count + 1", (message.author.id, count, message.author.guild.id), ) await mydb.commit() await mycursor.close() db_pool.release(mydb) db_pool.close() await db_pool.wait_closed() I've created two private async methods with code from part of the on_message method to make progress concurrent. While on_message is blocked in an await, the refactored methods may progress independent of on_message method. To make this happen I create two tasks out of the two new coroutines. asyncio.create_tasks schedules tasks to be run negating the need for an await. These tasks may run as soon as on_message yields control back to event loop on any await following the tasks creation. I didn't run the code. This is best effort. You have to try experimenting by moving the block which awaits the tasks around. And also run it with client.run to avoid got Future attached to a different loop error. | 9 | 5 |
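A side note on the pool handling in the sketch above: building a new aiomysql pool inside every on_message call is expensive and, if it happens on a different event loop than the bot's, can itself trigger the "attached to a different loop" error. Below is a minimal sketch of creating the pool lazily once inside the cog instead; the cog name and the <host>/<user> placeholders are illustrative and not part of the original code.

```python
import aiomysql
from discord.ext import commands

class MessageCounter(commands.Cog):        # hypothetical cog name
    def __init__(self, client):
        self.client = client
        self.db_pool = None                # created on first use, on the bot's own loop

    async def _get_pool(self):
        if self.db_pool is None:
            self.db_pool = await aiomysql.create_pool(
                minsize=3, host="<host>", port=3306,
                user="<user>", password="<password>", db="<db_name>",
                autocommit=False,
            )
        return self.db_pool

    @commands.Cog.listener("on_message")
    async def on_message(self, message):
        pool = await self._get_pool()
        async with pool.acquire() as conn:      # connection goes back to the pool afterwards
            async with conn.cursor() as cursor:
                await cursor.execute("SELECT 1")
```

This keeps one pool per cog for the bot's lifetime instead of one pool per message.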
71,562,597 | 2022-3-21 | https://stackoverflow.com/questions/71562597/replace-image-in-word-docx-format | I'm attempting to replace an image in a Word 2019 .docx file using the following code in Python: from docxtpl import DocxTemplate tpl = DocxTemplate("C:\\temp\\replace_picture_tpl.docx") context = {} tpl.replace_pic('Sample.png','C:\\temp\\NewImage.png') tpl.render(context) tpl.save("C:\\temp\\TestOutput.docx") I get the error ValueError: Picture Sample.png not found in the docx template. I created the document by inserting a random .png file into a blank Word document and saving it. I have double checked and confirmed that it is a .docx file. I then used the code above to attempt to change the picture, which didn't work. I looked around online for a couple of days and haven't been able to find anything to help yet. I've gone through the docxtpl templates and sample code and it seems like the code references a "descr" tag to locate 'Sample.png'. However when I view the replace_picture_tpl.docx XML file, the image doesn't have this tag. Any suggestions? | So, this worked for me using docxtpl and a template I modified in MS Word: Right click the image in MS Word, Select "View Alt Text": Write "replace_me" as the Alt Text. Save and close. Then: from docxtpl import DocxTemplate tpl = DocxTemplate("sometemplate.docx") tpl.replace_pic("replace_me", "yourimage.png") Definitely worked on MS Word for Mac 2022, Version 16.69 (23010700). | 5 | 1 |
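Putting the answer and the question's original flow together, the whole script would look roughly like this (paths and the replace_me alt text are just the example values used above):

```python
from docxtpl import DocxTemplate

tpl = DocxTemplate("C:\\temp\\replace_picture_tpl.docx")
tpl.replace_pic("replace_me", "C:\\temp\\NewImage.png")  # matches the Alt Text set in Word
tpl.render({})                                           # no template variables in this example
tpl.save("C:\\temp\\TestOutput.docx")
```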
71,525,132 | 2022-3-18 | https://stackoverflow.com/questions/71525132/how-to-write-a-custom-fastapi-middleware-class | I have read FastAPI's documentation about middlewares (specifically, the middleware tutorial, the CORS middleware section and the advanced middleware guide), but couldn't find a concrete example of how to write a middleware class which you can add using the add_middleware function (in contrast to a basic middleware function added using a decorator) there nor on this site. The reason I prefer to use add_middleware over the app based decorator, is that I want to write a middleware in a shared library that will be used by several different projects, and therefore I can't tie it to a specific FastAPI instance. So my question is: how do you do it? | As FastAPI is actually Starlette underneath, you could use BaseHTTPMiddleware that allows you to implement a middleware class (you may want to have a look at this post as well). Below are given two variants of the same approach on how to do that, where the add_middleware() function is used to add the middleware class. Please note that is currently not possible to use BackgroundTasks (if that's a requirement for your task) with BaseHTTPMiddleware—check #1438 and #1640 for more details. Alternatives can be found in this answer and this answer. Option 1 middleware.py from fastapi import Request class MyMiddleware: def __init__( self, some_attribute: str, ): self.some_attribute = some_attribute async def __call__(self, request: Request, call_next): # do something with the request object content_type = request.headers.get('Content-Type') print(content_type) # process the request and get the response response = await call_next(request) return response app.py from fastapi import FastAPI from middleware import MyMiddleware from starlette.middleware.base import BaseHTTPMiddleware app = FastAPI() my_middleware = MyMiddleware(some_attribute="some_attribute_here_if_needed") app.add_middleware(BaseHTTPMiddleware, dispatch=my_middleware) Option 2 middleware.py from fastapi import Request from starlette.middleware.base import BaseHTTPMiddleware class MyMiddleware(BaseHTTPMiddleware): def __init__( self, app, some_attribute: str, ): super().__init__(app) self.some_attribute = some_attribute async def dispatch(self, request: Request, call_next): # do something with the request object, for example content_type = request.headers.get('Content-Type') print(content_type) # process the request and get the response response = await call_next(request) return response app.py from fastapi import FastAPI from middleware import MyMiddleware app = FastAPI() app.add_middleware(MyMiddleware, some_attribute="some_attribute_here_if_needed") | 35 | 50 |
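If the BackgroundTasks limitation mentioned in the answer matters for your use case, a plain ASGI middleware class avoids BaseHTTPMiddleware entirely. A minimal sketch of that alternative follows; the class name and attribute are placeholders, not part of the answer above.

```python
from fastapi import FastAPI

class MyASGIMiddleware:
    def __init__(self, app, some_attribute: str):
        self.app = app
        self.some_attribute = some_attribute

    async def __call__(self, scope, receive, send):
        if scope["type"] != "http":             # let websockets/lifespan pass through untouched
            await self.app(scope, receive, send)
            return
        headers = dict(scope["headers"])        # raw (bytes, bytes) pairs per the ASGI spec
        print(headers.get(b"content-type"))
        await self.app(scope, receive, send)

app = FastAPI()
app.add_middleware(MyASGIMiddleware, some_attribute="some_attribute_here_if_needed")
```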
71,504,627 | 2022-3-16 | https://stackoverflow.com/questions/71504627/runtimewarning-coroutine-botbase-load-extension-was-never-awaited-after-upd | The discord bot I made a year ago and deployed to Heroku has worked until now. However, after changing some cogs and updating python to version 3.9.10, I get the following warning in the Heroku logs: app[worker.1]: /app/m_bot.py:120: RuntimeWarning: coroutine 'BotBase.load_extension' was never awaited app[worker.1]: client.load_extension(f"cogs.{filename[:-3]}") app[worker.1]: RuntimeWarning: Enable tracemalloc to get the object allocation traceback app[worker.1]: Bot is ready. app[api]: Build succeeded> The 120 line block is: for filename in os.listdir("./cogs"): if filename.endswith(".py"): # cut of the .py from the file name client.load_extension(f"cogs.{filename[:-3]}") The bot goes online but doesn't respond to any command. I haven't made any other changes apart from what was listed above. It works when I run my bot on my PC, so I suspect it might be a version problem. How can I resolve this? | Explanation As of discord.py version 2.0, Bot.load_extension is now a coroutine and has to be awaited. This is to allow Cog subclasses to override cog_unload with a coroutine. Code await must be used in front of client.load_extension, as shown: await client.load_extension("your_extension") In each of your cogs: Replace the standard setup function with an asynchronous one: async def setup(bot): await bot.add_cog(YourCog(bot)) If you want to use the normal convention for adding extensions, you'll need to use the following code: In your client's file: async def load_extensions(): for filename in os.listdir("./cogs"): if filename.endswith(".py"): # cut off the .py from the file name await client.load_extension(f"cogs.{filename[:-3]}") You should also wrap your login in an asynchronous 'main' function, where you would call this function. Note that the code below does not setup logging, you need to do so yourself: async def main(): async with client: await load_extensions() await client.start('your_token') asyncio.run(main()) These two functions replace the old way: client.run("your_token") along with the code you posted in your question. Reference discord.py 2.0 async changes (Thank you ChrisDewa for mentioning this in your comment) | 9 | 17 |
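For reference, a complete minimal cog file that follows the async setup convention described above (the Greetings cog and hello command are made-up examples):

```python
# cogs/greetings.py
from discord.ext import commands

class Greetings(commands.Cog):
    def __init__(self, bot):
        self.bot = bot

    @commands.command()
    async def hello(self, ctx):
        await ctx.send("Hello!")

async def setup(bot):                      # must be a coroutine in discord.py 2.0+
    await bot.add_cog(Greetings(bot))
```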
71,512,301 | 2022-3-17 | https://stackoverflow.com/questions/71512301/error-could-not-build-wheels-for-spacy-which-is-required-to-install-pyproject | Hi Guys, I am trying to install spacy model == 2.3.5 but I am getting this error, please help me! | I had the similar error while executing pip install -r requirements.txt but for aiohttp module: socket.c -o build/temp.linux-armv8l-cpython-311/aiohttp/_websocket.o aiohttp/_websocket.c:198:12: fatal error: 'longintrepr.h' file not found #include "longintrepr.h" ^~~~~~~ 1 error generated. error: command '/data/data/com.termux/files/usr/bin/arm-linux-androideabi-clang' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for aiohttp Failed to build aiohttp ERROR: Could not build wheels for aiohttp, which is required to install pyproject.toml-based projects Just in case I will leave here solution to my error. This error is specific to Python 3.11 version. On Python with 3.10.6 version installation went fine. To solve it I needed to update requirements.txt. Not working versions of modules with Python 3.11: aiohttp==3.8.1 yarl==1.4.2 frozenlist==1.3.0 Working versions: aiohttp==3.8.2 yarl==1.8.1 frozenlist==1.3.1 Links to the corresponding issues with fixes: https://github.com/aio-libs/aiohttp/issues/6600 https://github.com/aio-libs/yarl/issues/706 https://github.com/aio-libs/frozenlist/issues/305 | 5 | 4 |
71,570,607 | 2022-3-22 | https://stackoverflow.com/questions/71570607/sqlalchemy-models-vs-pydantic-models | I'm following this tutorial to adapt it to my needs, in this case, to perform a sql module where I need to record the data collected by a webhook from the gitlab issues. For the database module I'm using SQLAlchemy library and PostgreSQL as database engine. So, I would like to solve some doubts, I have regarding the use of the Pydantic library, in particular with this example From what I've read, Pydantic is a library that is used for data validation using classes with attributes. But I don't quite understand some things...is the integration of Pydantic strictly necessary? The purpose of using Pydantic I understand, but the integration of using Pydantic with SQLAlchemy models I don't understand. In the tutorial, models.py has the following content: from sqlalchemy import Boolean, Column, ForeignKey, Integer, String from sqlalchemy.orm import relationship from .database import Base class User(Base): __tablename__ = "users" id = Column(Integer, primary_key=True, index=True) email = Column(String, unique=True, index=True) hashed_password = Column(String) is_active = Column(Boolean, default=True) items = relationship("Item", back_populates="owner") class Item(Base): __tablename__ = "items" id = Column(Integer, primary_key=True, index=True) title = Column(String, index=True) description = Column(String, index=True) owner_id = Column(Integer, ForeignKey("users.id")) owner = relationship("User", back_populates="items") And schemas.py has the following content: from typing import Optional from pydantic import BaseModel class ItemBase(BaseModel): title: str description: Optional[str] = None class ItemCreate(ItemBase): pass class Item(ItemBase): id: int owner_id: int class Config: orm_mode = True class UserBase(BaseModel): email: str class UserCreate(UserBase): password: str class User(UserBase): id: int is_active: bool items: list[Item] = [] class Config: orm_mode = True I know that the primary means of defining objects in Pydantic is via models and also I know that models are simply classes which inherit from BaseModel. Why does it create ItemBase, ItemCreate and Item that inherits from ItemBase? In ItemBase it passes the fields that are strictly necessary in Item table? and defines its type? The ItemCreate class I have seen that it is used latter in crud.py to create a user, in my case I would have to do the same with the incidents? 
I mean, I would have to create a clase like this: class IssueCreate(BaseModel): pass There are my examples trying to follow the same workflow: models.py import sqlalchemy from sqlalchemy import Column, Table from sqlalchemy import Integer, String, Datetime, TIMESTAMP from .database import Base class Issues(Base): __tablename__ = 'issues' id = Column(Integer, primary_key=True) gl_assignee_id = Column(Integer, nullable=True) gl_id_user = Column(Integer, nullable=False) current_title = Column(String, nullable=False) previous_title = Column(String, nullable=True) created_at = Column(TIMESTAMP(timezone=False), nullable=False) updated_at = Column(TIMESTAMP(timezone=False), nullable=True) closed_at = Column(TIMESTAMP(timezone=False), nullable=True) action = Column(String, nullable=False) And schemas.py from pydantic import BaseModel class IssueBase(BaseModel): updated_at: None closed_at: None previous_title: None class Issue(IssueBase): id: int gl_task_id: int gl_assignee_id: int gl_id_user: int current_title: str action: str class Config: orm_mode = True But I don't know if I'm right doing it in this way, any suggestions are welcome. | The tutorial you mentioned is about FastAPI. Pydantic by itself has nothing to do with SQL, SQLAlchemy or relational databases. It is FastAPI that is showing you a way to use a relational database. is the integration of pydantic strictly necessary [when using FastAPI]? Yes. Pydantic is a requirement according to the documentation: Requirements Python 3.6+ FastAPI stands on the shoulders of giants: Starlette for the web parts. Pydantic for the data parts. Why does it create ItemBase, ItemCreate and Item that inherits from ItemBase? Pydantic models are the way FastAPI uses to define the schemas of the data that it receives (requests) and returns (responses). ItemCreate represent the data required to create an item. Item represents the data that is returned when the items are queried. The fields that are common to ItemCreate and Item are placed in ItemBase to avoid duplication. In ItemBase it passes the fields that are strictly necessary in Item table? and defines its type? ItemBase has the fields that are common to ItemCreate and Item. It has nothing to do with a table. It is just a way to avoid duplication. Every field of a pydantic model must have a type, there is nothing unusual there. in my case I would have to do the same with the incidents? If you have a similar scenario where the schemas of the data that you receive (request) and the data that you return (response) have common fields (same name and type), you could define a model with those fields and have other models inherit from it to avoid duplication. This could be a (probably simplistic) way of understanding FastAPI and pydantic: FastAPI transforms requests to pydantic models. Those pydantic models are your input data and are also known as schemas (maybe to avoid confusion with other uses of the word model). You can do whatever you want with those schemas, including using them to create relational database models and persisting them. Whatever data you want to return as a response needs to be transformed by FastAPI to a pydantic model (schema). It just happens that pydantic supports an orm_mode option that allows it to parse arbitrary objects with attributes instead of dicts. Using that option you can return a relational database model and FastAPI will transform it to the corresponding schema (using pydantic). 
FastAPI uses the parsing and validation features of pydantic, but you have to follow a simple rule: the data that you receive must comply with the input schema and the data that you want to return must comply with the output schema. You are in charge of deciding whatever happens in between. | 31 | 48 |
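To make the last point concrete, an endpoint in the style of that tutorial looks roughly like this; models, schemas, SessionLocal and the package layout are assumed to match the tutorial's files:

```python
from fastapi import Depends, FastAPI, HTTPException
from sqlalchemy.orm import Session
from . import models, schemas              # the tutorial's models.py and schemas.py
from .database import SessionLocal

app = FastAPI()

def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

@app.get("/items/{item_id}", response_model=schemas.Item)
def read_item(item_id: int, db: Session = Depends(get_db)):
    db_item = db.query(models.Item).filter(models.Item.id == item_id).first()
    if db_item is None:
        raise HTTPException(status_code=404, detail="Item not found")
    # db_item is a SQLAlchemy object; FastAPI can still return it as a schemas.Item
    # response only because schemas.Item sets orm_mode = True
    return db_item
```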
71,517,365 | 2022-3-17 | https://stackoverflow.com/questions/71517365/pyproject-toml-wont-find-project-name-with-setuptools-python-m-build-format | What is the correct format for supplying a name to a Python package in a pyproject.toml? Here's the pyproject.toml file: [project] name = "foobar" version = "0.0.1" [build-system] requires = ["setuptools>=40.8.0", "wheel"] build-backend = "setuptools.build_meta" A build called using python -m build results in the following error. running check warning: check: missing required meta-data: name, url warning: check: missing meta-data: either (author and author_email) or (maintainer and maintainer_email) should be supplied Based on this reddit post question I had the same issue. | Update At the time the question was asked, setuptools did not have support for writing its configuration in a pyproject.toml file (PEP 621). So it was not possible to answer the question. Now and since its version 61.0.0, setuptools has support for PEP 621: https://setuptools.pypa.io/en/latest/userguide/pyproject_config.html Original answer It seems that you are trying to write a PEP 621-style pyproject.toml with the setuptools build back-end. But, as of now, setuptools does not have support for PEP 621 yet. The work is ongoing: https://discuss.python.org/t/help-testing-experimental-features-in-setuptools/13821 https://github.com/pypa/setuptools/tree/experimental/support-pyproject https://github.com/pypa/setuptools/search?q=621&type=issues Until PEP 621 support arrives in setuptools, one can: Use setup.cfg https://setuptools.pypa.io/en/latest/userguide/declarative_config.html Switch to a PEP 621-compatible build back-end instead of setuptools: pdm flit trampolim enscons whey probably more... | 14 | 10 |
71,558,637 | 2022-3-21 | https://stackoverflow.com/questions/71558637/poetry-fails-with-retrieved-digest-for-package-not-in-poetry-lock-metadata | We're trying to merge and old branch in a project and when trying to build a docker image, poetry seems to fail for some reason that I don't understand. I'm not very familiar with poetry, as I've only used requirements.txt for dependencies up to now, so I'm fumbling a bit on what's going on. The error that I'm getting (part of the playbook that builds the image on the server) is this: "Installing dependencies from lock file", "", "Package operations: 16 installs, 14 updates, 0 removals", "", " • Updating importlib-metadata (4.8.3 -> 2.0.0)", " • Updating pyparsing (3.0.6 -> 2.4.7)", " • Updating six (1.16.0 -> 1.15.0)", "", " RuntimeError", "", " Retrieved digest for link six-1.15.0.tar.gz(sha256:30639c035cdb23534cd4aa2dd52c3bf48f06e5f4a941509c8bafd8ce11080259) not in poetry.lock metadata ['30639c035cdb23534cd4aa2dd52c3bf48f06e5f4a941509c8bafd8ce11080259', '8b74bedcbbbaca38ff6d7491d76f2b06b3592611af620f8426e82dddb04a5ced']", "", " at /usr/local/lib/python3.7/dist-packages/poetry/installation/chooser.py:115 in _get_links", " 111│ ", " 112│ if links and not selected_links:", " 113│ raise RuntimeError(", " 114│ \"Retrieved digest for link {}({}) not in poetry.lock metadata {}\".format(", " → 115│ link.filename, h, hashes", " 116│ )", " 117│ )", " 118│ ", " 119│ return selected_links", "", "", " RuntimeError", "", " Retrieved digest for link pyparsing-2.4.7.tar.gz(sha256:c203ec8783bf771a155b207279b9bccb8dea02d8f0c9e5f8ead507bc3246ecc1) not in poetry.lock metadata ['c203ec8783bf771a155b207279b9bccb8dea02d8f0c9e5f8ead507bc3246ecc1', 'ef9d7589ef3c200abe66653d3f1ab1033c3c419ae9b9bdb1240a85b024efc88b']", "", " at /usr/local/lib/python3.7/dist-packages/poetry/installation/chooser.py:115 in _get_links", " 111│ ", " 112│ if links and not selected_links:", " 113│ raise RuntimeError(", " 114│ \"Retrieved digest for link {}({}) not in poetry.lock metadata {}\".format(", " → 115│ link.filename, h, hashes", " 116│ )", " 117│ )", " 118│ ", " 119│ return selected_links", "", "", " RuntimeError", "", " Retrieved digest for link importlib_metadata-2.0.0.tar.gz(sha256:77a540690e24b0305878c37ffd421785a6f7e53c8b5720d211b211de8d0e95da) not in poetry.lock metadata ['77a540690e24b0305878c37ffd421785a6f7e53c8b5720d211b211de8d0e95da', 'cefa1a2f919b866c5beb7c9f7b0ebb4061f30a8a9bf16d609b000e2dfaceb9c3']", "", " at /usr/local/lib/python3.7/dist-packages/poetry/installation/chooser.py:115 in _get_links", " 111│ ", " 112│ if links and not selected_links:", " 113│ raise RuntimeError(", " 114│ \"Retrieved digest for link {}({}) not in poetry.lock metadata {}\".format(", " → 115│ link.filename, h, hashes", " 116│ )", " 117│ )", " 118│ ", " 119│ return selected_links" ] } If you notice, for all 3 packages, the retrieved digest is actually in the list of digests of the metadata section of the poetry lock file. Our guess is that maybe this lock file was generated by an older version of poetry and is no longer valid. Maybe a hashing method should be mentioned (for example the retrieved digest is sha256, but no method is specified on the ones that are compared with it)? Another curious thing is that poetry is not installed inside the dockerfile, but seems to reach that point, nevetheless, and I'm really curious how this can happen. Any insight would be greatly appreciated (and any link with more information, even)! Thanks a lot for your time! 
(Feel free to ask for more information if this seems inadequate to you!) Cheers! | When I've had this issue myself it has been fixed by recreating the lock file using a newer version of poetry. If you are able to view the .toml file I suggest deleting this lock file and then running poetry install to create a new lock file. | 22 | 10 |
71,577,892 | 2022-3-22 | https://stackoverflow.com/questions/71577892/how-change-the-syntax-in-elasticsearch-8-where-body-parameter-is-deprecated | After updating Python package elasticsearch from 7.6.0 to 8.1.0, I started to receive an error at this line of code: count = es.count(index=my_index, body={'query': query['query']} )["count"] receive following error message: DeprecationWarning: The 'body' parameter is deprecated and will be removed in a future version. Instead use individual parameters. count = es.count(index=ums_index, body={'query': query['query']} )["count"] I don't understand how to use the above-mentioned "individual parameters". Here is my query: query = { "bool": { "must": [ {"exists" : { "field" : 'device'}}, {"exists" : { "field" : 'app_version'}}, {"exists" : { "field" : 'updatecheck'}}, {"exists" : { "field" : 'updatecheck_status'}}, {"term" : { "updatecheck_status" : 'ok'}}, {"term" : { "updatecheck" : 1}}, { "range": { "@timestamp": { "gte": from_date, "lte": to_date, "format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd" } } } ], "must_not": [ {"term" : { "device" : ""}}, {"term" : { "updatecheck" : ""}}, {"term" : { "updatecheck_status" : ""}}, { "terms" : { "app_version" : ['2.2.1.1', '2.2.1.2', '2.2.1.3', '2.2.1.4', '2.2.1.5', '2.2.1.6', '2.2.1.7', '2.1.2.9', '2.1.3.2', '0.0.0.0', ''] } } ] } } In the official documentation, I can't find any chance to find examples of how to pass my query in new versions of Elasticsearch. Possibly someone has a solution for this case other than reverting to previous versions of Elasticsearch? | According to the documentation, this is now to be done as follows: # ✅ New usage: es.search(query={...}) # ❌ Deprecated usage: es.search(body={"query": {...}}) So the queries are done directly in the same line of code without "body", substituting the api you need to use, in your case "count" for "search". You can try the following: # ✅ New usage: es.count(query={...}) # ❌ Deprecated usage: es.count(body={"query": {...}}) enter code here You can find out more by clicking on the following link: https://github.com/elastic/elasticsearch-py/issues/1698 For example, if the query would be: GET index-00001/_count { "query" : { "match_all": { } } } Python client would be the next: my_index = "index-00001" query = { "match_all": { } } hits = en.count(index=my_index, query=query) or hits = en.count(index=my_index, query={"match_all": {}}) | 9 | 17 |
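Applied to the query from the question, the migrated call would look roughly like this (assuming query holds the bool clause built above and es/my_index are defined as in the question):

```python
count = es.count(index=my_index, query=query)["count"]

# the same keyword-argument style works for other APIs, e.g. search
hits = es.search(index=my_index, query=query, size=100)["hits"]["hits"]
```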
71,530,764 | 2022-3-18 | https://stackoverflow.com/questions/71530764/binance-order-timestamp-for-this-request-was-1000ms-ahead-of-the-servers-time | I am writing some Python code to create an order with the Binance API: from binance.client import Client client = Client(API_KEY, SECRET_KEY) client.create_order(symbol='BTCUSDT', recvWindow=59999, #The value can't be greater than 60K side='BUY', type='MARKET', quantity = 0.004) Unfortunately I get the following error message: "BinanceAPIException: APIError(code=-1021): Timestamp for this request was 1000ms ahead of the server's time." I already checked the difference (in miliseconds) between the Binance server time and my local time: import time import requests import json url = "https://api.binance.com/api/v1/time" t = time.time()*1000 r = requests.get(url) result = json.loads(r.content) print(int(t)-result["serverTime"]) OUTPUT: 6997 It seems that the recvWindow of 60000 is still not sufficient (but it may not exceed 60K). I still get the same error. Does anybody know how I can solve this issue? Many thanks in advance! | Probably the PC's time is out of sync. You can do it using Windows -> Setting-> Time & Language -> Date & Time -> 'Sync Now'. Screenshot: | 14 | 34 |
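Once the clock has been resynced, one way to double-check the remaining drift is to ask Binance for its server time through the same client used in the question (this is only a sanity check, not part of the fix):

```python
import time
from binance.client import Client

client = Client(API_KEY, SECRET_KEY)
server_time = client.get_server_time()["serverTime"]      # milliseconds
drift_ms = int(time.time() * 1000) - server_time
print(f"Local clock is {drift_ms} ms ahead of Binance")    # should be near 0 after syncing
```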
71,557,674 | 2022-3-21 | https://stackoverflow.com/questions/71557674/when-importing-cartopy-importerror-dll-load-failed-while-importing-trace-the-s | I installed Christoph Gohlke's prebuilt wheel Cartopy‑0.20.2‑cp39‑cp39‑win_amd64.whl using pip in an active virtual environment. The environment is using Python 3.9.5. When trying to import Cartopy I get the error message below. This used to work before and now it no longer works and I can't figure out why. Does anyone know what the issue could be or what I'm missing? --------------------------------------------------------------------------- ImportError Traceback (most recent call last) Input In [4], in <cell line: 1>() ----> 1 import cartopy 2 import cartopy.crs as ccrs 3 import cartopy.io.img_tiles as cimgt File ~\Downloads\GitHub\Project\venv\lib\site-packages\cartopy\__init__.py:110, in <module> 105 pass 108 # Commonly used sub-modules. Imported here to provide end-user 109 # convenience. --> 110 import cartopy.crs 111 import cartopy.feature File ~\Downloads\GitHub\Project\venv\lib\site-packages\cartopy\crs.py:27, in <module> 24 from pyproj.exceptions import ProjError 25 from shapely.prepared import prep ---> 27 import cartopy.trace 30 try: 31 # https://github.com/pyproj4/pyproj/pull/912 32 from pyproj.crs import CustomConstructorCRS as _CRS ImportError: DLL load failed while importing trace: The specified module could not be found. | As mentioned by cgohlke in the comments, installing the wheels of shapely and pyproj from his website solves the issue. If the libraries are already installed, use --force-reinstall to overwrite the existing installations. | 8 | 15 |
71,542,207 | 2022-3-19 | https://stackoverflow.com/questions/71542207/when-to-use-oauth-in-django-what-is-its-exact-role-on-django-login-framework | I am trying to be sure that I understand it correctly: Is OAuth a bridge for only third party authenticator those so common like Facebook, Google? And using it improves user experience in secure way but not adding extra secure layer to Django login framework? Or only Authorization Code grant type is like that? Can I take it like this? | What is OAuth? According to RFC 6749: The OAuth 2.0 authorization framework enables a third-party application to obtain limited access to an HTTP service, either on behalf of a resource owner by orchestrating an approval interaction between the resource owner and the HTTP service, or by allowing the third-party application to obtain access on its own behalf. Essentially, it is an authorization protocol used to share permissions between multiple applications. If you decide to implement OAuth, your application will be the one to allow other services to programmatically view your users' data and act on their behalf, if needed. Whenever an application requires access to another service that you use, it probably uses OAuth to perform those actions. (e.g. When games used to ask us to allow posting on Facebook on our behalf.) What OAuth is not? By looking at your question, I feel like there's a misunderstanding of OAuth. OAuth is not a bridge for third-party authentication methods. If you are looking for this type of authentication mechanism, you should take a look into Single Sign-On (SSO). For Django, you can use django-simple-sso. Does it enhance security? Depending on the use case, yes, it can enhance security. If your application needs to exchange information with other services, it is a good practice to limit what these third-party services are able to do in your app, feature and time-wise. Let's say, for example, that your user needs to give permission to another application to gather information from yours: If you were to use the old-fashioned e-mail and password combination method, these credentials would be exposed in case of this third-party service had a data breach. Using OAuth on the other hand is much more secure, as the credentials stored in the server would not contain the user's password and have very specific roles, apart from being easily revoked. | 6 | 5 |
71,560,036 | 2022-3-21 | https://stackoverflow.com/questions/71560036/how-to-preform-loc-with-one-condition-that-include-two-columns | I have df with two columns A and B both of them are columns with string values. Example: df_1 = pd.DataFrame(data={ "A":['a','b','c'], "B":['a x d','z y w','q m c'] #string values not a list }) print(df_1) #output A B 0 a a x d 1 b z y w 2 c q m c now what I'm trying to do is to preform loc in the df_1 to get all the row that col B cointain the string value in col A. In this example the output i want is the first and the third rows: A B 0 a a x d # 'a x d' contain value 'a' 2 c q m c # 'q m c' contain value 'c' I have tried different loc condition but got unhashable type: 'Series' error: df_1.loc[df_1["B"].str.contains(df_1["A"])] #TypeError: unhashable type: 'Series' df_1.loc[df_1["A"] in df_1["B"]] #TypeError: unhashable type: 'Series' I really don't want to use a for/while loop because of the size of the df. Any idea how can I preform this? | There is no vectorial method, to map in using two columns. You need to loop here: mask = [a in b for a,b in zip(df_1['A'], df_1['B'])] df_1.loc[mask] Output: A B 0 a a x d 2 c q m c comparison of speed (3000 rows) # operator.contains 518 µs ± 4.61 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) # list comprehension 554 µs ± 3.84 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) # numpy.apply_along_axis 7.32 ms ± 58.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) # apply 20.7 ms ± 379 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) | 13 | 21 |
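For completeness, the 'operator.contains' variant that appears in the benchmark table can be written along these lines; it performs the same per-row a in b test as the list comprehension:

```python
from operator import contains

mask = list(map(contains, df_1["B"], df_1["A"]))   # contains(b, a) is equivalent to a in b
df_1.loc[mask]
```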
71,531,344 | 2022-3-18 | https://stackoverflow.com/questions/71531344/how-to-spawn-a-docker-container-in-a-remote-machine | Is it possible, using the docker SDK for Python, to launch a container in a remote machine? import docker client = docker.from_env() client.containers.run("bfirsh/reticulate-splines", detach=True) # I'd like to run this container ^^^ in a machine that I have ssh access to. Going through the documentation it seems like this type of management is out of scope for said SDK, so searching online I got hints that the kubernetes client for Python could be of help, but don't know where to begin. | It's possible, simply do this: client = docker.DockerClient(base_url=your_remote_docker_url) Here's the document I found related to this: https://docker-py.readthedocs.io/en/stable/client.html#client-reference If you only have SSH access to it, there is an use_ssh_client option | 5 | 4 |
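Putting the two remarks together with the snippet from the question (host names are placeholders; use_ssh_client needs a reasonably recent docker SDK):

```python
import docker

# plain TCP, if the remote daemon exposes a TCP socket
client = docker.DockerClient(base_url="tcp://remote-host:2375")

# or over SSH, matching the "ssh access" case from the question
client = docker.DockerClient(base_url="ssh://user@remote-host", use_ssh_client=True)

client.containers.run("bfirsh/reticulate-splines", detach=True)
```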
71,520,075 | 2022-3-17 | https://stackoverflow.com/questions/71520075/zip-longest-for-the-left-list-always | I know about the zip function (which will zip according to the shortest list) and zip_longest (which will zip according to the longest list), but how would I zip according to the first list, regardless of whether it's the longest or not? For example: Input: ['a', 'b', 'c'], [1, 2] Output: [('a', 1), ('b', 2), ('c', None)] But also: Input: ['a', 'b'], [1, 2, 3] Output: [('a', 1), ('b', 2)] Do both of these functionalities exist in one function? | Solutions Chaining the repeated fillvalue behind the iterables other than the first: from itertools import chain, repeat def zip_first(first, *rest, fillvalue=None): return zip(first, *map(chain, rest, repeat(repeat(fillvalue)))) Or using zip_longest and trim it with a compress and zip trick: def zip_first(first, *rest, fillvalue=None): a, b = tee(first) return compress(zip_longest(b, *rest, fillvalue=fillvalue), zip(a)) Just like zip and zip_longest, these take any number (well, at least one) of any kind of iterables (including infinite ones) and return an iterator (convert to list if needed). Benchmark results Benchmarks with other equally general solutions (all code is at the end of the answer): 10 iterables of 10,000 to 90,000 elements, first has 50,000: ──────────────────────────────────────────────────────────── 2.2 ms 2.2 ms 2.3 ms limit_cheat 2.6 ms 2.6 ms 2.6 ms Kelly_Bundy_chain 3.3 ms 3.3 ms 3.3 ms Kelly_Bundy_compress 50.2 ms 50.6 ms 50.7 ms CrazyChucky 54.7 ms 55.0 ms 55.0 ms Sven_Marnach 74.8 ms 74.9 ms 75.0 ms Mad_Physicist 5.4 ms 5.4 ms 5.4 ms Kelly_Bundy_3 5.9 ms 6.0 ms 6.0 ms Kelly_Bundy_4 4.6 ms 4.7 ms 4.7 ms Kelly_Bundy_5 10,000 iterables of 0 to 100 elements, first has 50: ──────────────────────────────────────────────────── 4.6 ms 4.7 ms 4.8 ms limit_cheat 4.8 ms 4.8 ms 4.8 ms Kelly_Bundy_compress 8.4 ms 8.4 ms 8.4 ms Kelly_Bundy_chain 27.1 ms 27.3 ms 27.5 ms CrazyChucky 38.3 ms 38.5 ms 38.7 ms Sven_Marnach 73.0 ms 73.0 ms 73.1 ms Mad_Physicist 4.9 ms 4.9 ms 5.0 ms Kelly_Bundy_3 4.9 ms 4.9 ms 5.0 ms Kelly_Bundy_4 5.0 ms 5.0 ms 5.0 ms Kelly_Bundy_5 The first one is a cheat that knows the length, included to show what's probably a limit for how fast we can get. Explanations A little explanation of the above two solutions: The first solution, if used with for example three iterables, is equivalent to this: def zip_first(first, second, third, fillvalue=None): filler = repeat(fillvalue) return zip(first, chain(second, filler), chain(third, filler)) The second solution basically lets zip_longest do the job. The only problem with that is that it doesn't stop when the first iterable is done. So I duplicate the first iterable (with tee) and then use one for its elements and the other for its length. The zip(a) wraps every element in a 1-tuple, and non-empty tuples are true. So compress gives me all tuples produced by zip_longest, as many as there are elements in the first iterable. Benchmark code (Try it online!) 
def limit_cheat(*iterables, fillvalue=None): return islice(zip_longest(*iterables, fillvalue=fillvalue), cheat_length) def Kelly_Bundy_chain(first, *rest, fillvalue=None): return zip(first, *map(chain, rest, repeat(repeat(fillvalue)))) def Kelly_Bundy_compress(first, *rest, fillvalue=None): a, b = tee(first) return compress(zip_longest(b, *rest, fillvalue=fillvalue), zip(a)) def CrazyChucky(*iterables, fillvalue=None): SENTINEL = object() for first, *others in zip_longest(*iterables, fillvalue=SENTINEL): if first is SENTINEL: return others = [i if i is not SENTINEL else fillvalue for i in others] yield (first, *others) def Sven_Marnach(first, *rest, fillvalue=None): rest = [iter(r) for r in rest] for x in first: yield x, *(next(r, fillvalue) for r in rest) def Mad_Physicist(*args, fillvalue=None): # zip_by_first('ABCD', 'xy', fillvalue='-') --> Ax By C- D- # zip_by_first('ABC', 'xyzw', fillvalue='-') --> Ax By Cz if not args: return iterators = [iter(it) for it in args] while True: values = [] for i, it in enumerate(iterators): try: value = next(it) except StopIteration: if i == 0: return iterators[i] = repeat(fillvalue) value = fillvalue values.append(value) yield tuple(values) def Kelly_Bundy_3(first, *rest, fillvalue=None): a, b = tee(first) return map(itemgetter(1), zip(a, zip_longest(b, *rest, fillvalue=fillvalue))) def Kelly_Bundy_4(first, *rest, fillvalue=None): sentinel = object() for z in zip_longest(chain(first, [sentinel]), *rest, fillvalue=fillvalue): if z[0] is sentinel: break yield z def Kelly_Bundy_5(first, *rest, fillvalue=None): stopped = False def stop(): nonlocal stopped stopped = True return yield for z in zip_longest(chain(first, stop()), *rest, fillvalue=fillvalue): if stopped: break yield z import timeit from itertools import chain, repeat, zip_longest, islice, tee, compress from operator import itemgetter from collections import deque funcs = [ limit_cheat, Kelly_Bundy_chain, Kelly_Bundy_compress, CrazyChucky, Sven_Marnach, Mad_Physicist, Kelly_Bundy_3, Kelly_Bundy_4, Kelly_Bundy_5, ] def test(args_creator): # Correctness expect = list(funcs[0](*args_creator())) for func in funcs: result = list(func(*args_creator())) print(result == expect, func.__name__) # Speed tss = [[] for _ in funcs] for _ in range(5): print() print(args_creator.__name__) for func, ts in zip(funcs, tss): t = min(timeit.repeat(lambda: deque(func(*args_creator()), 0), number=1)) ts.append(t) print(*('%4.1f ms ' % (t * 1e3) for t in sorted(ts)[:3]), func.__name__) def args_few_but_long_iterables(): global cheat_length cheat_length = 50_000 first = repeat(0, 50_000) rest = [repeat(i, 10_000 * i) for i in range(1, 10)] return first, *rest def args_many_but_short_iterables(): global cheat_length cheat_length = 50 first = repeat(0, 50) rest = [repeat(i, i % 101) for i in range(1, 10_000)] return first, *rest test(args_few_but_long_iterables) funcs[1:3] = funcs[1:3][::-1] test(args_many_but_short_iterables) | 34 | 37 |
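A quick usage check of zip_first (either implementation above) against the examples from the question:

```python
print(list(zip_first(['a', 'b', 'c'], [1, 2])))
# [('a', 1), ('b', 2), ('c', None)]

print(list(zip_first(['a', 'b'], [1, 2, 3])))
# [('a', 1), ('b', 2)]
```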
71,493,439 | 2022-3-16 | https://stackoverflow.com/questions/71493439/unable-to-import-module-lambda-function-no-module-named-psycopg2-psycopg-aw | I have installed the psycopg2 with this command in my package folder : pip install --target ./package psycopg2 # Or pip install -t ./package psycopg2 now psycopg2 module is in my package and I have created the zip and upload it in AWS lambda. In my local sprint is working fine but on AWS lambda it was not working. It shows me error { "errorMessage": "Unable to import module 'lambda_function': No module named 'psycopg2._psycopg'", "errorType": "Runtime.ImportModuleError", "stackTrace": [] } my lambda code is import psycopg2 def lambda_handler(): print('hello') my all other modules are working fine | add this lib pip install aws-psycopg2 | 7 | 2 |
71,491,982 | 2022-3-16 | https://stackoverflow.com/questions/71491982/how-to-segment-and-get-the-time-between-two-dates | I have the following table: id | number_of _trip | start_date | end_date | seconds 1 637hui 2022-03-10 01:20:00 2022-03-10 01:32:00 720 2 384nfj 2022-03-10 02:18:00 2022-03-10 02:42:00 1440 3 102fiu 2022-03-10 02:10:00 2022-03-10 02:23:00 780 4 948pvc 2022-03-10 02:40:00 2022-03-10 03:20:00 2400 5 473mds 2022-03-10 02:45:00 2022-03-10 02:58:00 780 6 103fkd 2022-03-10 03:05:00 2022-03-10 03:28:00 1380 7 905783 2022-03-10 03:12:00 null 0 8 498wsq 2022-03-10 05:30:00 2022-03-10 05:48:00 1080 I want to get the time that is driven for each hour, but if a trip takes the space of two hours, the time must be taken for each hour. If the end of the trip has not yet finished, the end_date field is null, but it must count the time it is taking in the respective hours from start_date. I have the following query: SELECT time_bucket(bucket_width := INTERVAL '1 hour',ts := start_date, "offset" := '0 minutes') AS init_date, sum(seconds) as seconds FROM trips WHERE start_date >= '2022-03-10 01:00:00' AND start_date <= '2022-03-10 06:00:00' GROUP BY init_date ORDER BY init_date; The result is: | init_date | seconds 2022-03-10 01:00:00 720 2022-03-10 02:00:00 5400 2022-03-10 03:00:00 1380 2022-03-10 05:00:00 1080 However I expect to receive a result like this: | init_date | seconds solo como una ayuda visual 2022-03-10 01:00:00 720 id(1:720) 2022-03-10 02:00:00 4200 id(2: 1440 3: 780 4: 1200 5: 780) 2022-03-10 03:00:00 5460 id(4:1200 6:1380 7:2880) 2022-03-10 05:00:00 1080 id(8:1080) EDIT If I replace the null the result is still unwanted: | init_date | seconds 2022-03-10 01:00:00 720 2022-03-10 02:00:00 5400 2022-03-10 03:00:00 1380 2022-03-10 05:00:00 1080 I have been thinking about getting all the data and solving the problem with pandas. I'll try and post if I get the answer. 
EDIT My previous result was not entirely correct, since there were hours left of a trip that has not yet finished, the correct result should be: start_date seconds 0 2022-03-10 01:00:00 720 1 2022-03-10 02:00:00 4200 2 2022-03-10 03:00:00 5460 3 2022-03-10 04:00:00 3600 4 2022-03-10 05:00:00 4680 NEW CODE def bucket_count(bucket, data): result = pd.DataFrame() list_r = [] for row_bucket in bucket.to_dict('records'): inicio = row_bucket['start_date'] fin = row_bucket['end_date'] df = data[ (inicio <= data['end_date']) & (inicio <= fin) & (data['start_date'] <= fin) & (data['start_date'] <= data['end_date']) ] df_dict = df.to_dict('records') for row in df_dict: seconds = 0 if row['start_date'] >= inicio and fin >= row['end_date']: seconds = (row['end_date'] - row['start_date']).total_seconds() elif row['start_date'] <= inicio <= row['end_date'] <= fin: seconds = (row['end_date'] - inicio).total_seconds() elif inicio <= row['start_date'] <= fin <= row['end_date']: seconds = (fin - row['start_date']).total_seconds() elif row['start_date'] < inicio and fin < row['end_date']: seconds = (fin - inicio).total_seconds() row['start_date'] = inicio row['end_date'] = fin row['seconds'] = seconds list_r.append(row) result = pd.DataFrame(list_r) return result.groupby(['start_date'])["seconds"].apply(lambda x: x.astype(int).sum()).reset_index() | This can be done in plain sql (apart from time_bucket function), in a nested sql query: select interval_start, sum(seconds_before_trip_ended - seconds_before_trip_started) as seconds from ( select interval_start, greatest(0, extract(epoch from start_date - interval_start)::int) as seconds_before_trip_started, least(3600, extract(epoch from coalesce(end_date, '2022-03-10 06:00:00') - interval_start)::int) as seconds_before_trip_ended from ( select generate_series( (select min(time_bucket(bucket_width := INTERVAL '1 hour', ts := start_date, "offset" := '0 minutes')) from trips), (select max(time_bucket(bucket_width := INTERVAL '1 hour', ts := coalesce(end_date, '2022-03-10 06:00:00'), "offset" := '0 minutes')) from trips), '1 hour') as interval_start) i join trips t on t.start_date <= i.interval_start + interval '1 hour' and coalesce(t.end_date, '2022-03-10 06:00:00') >= interval_start ) subq group by interval_start order by interval_start; This gives me the following result: interval_start | seconds ---------------------+--------- 2022-03-10 01:00:00 | 720 2022-03-10 02:00:00 | 4200 2022-03-10 03:00:00 | 5460 2022-03-10 04:00:00 | 3600 2022-03-10 05:00:00 | 4680 2022-03-10 06:00:00 | 0 (6 rows) Explanation Let's break the query down. In the innermost query: select generate_series( (select min(time_bucket(bucket_width := INTERVAL '1 hour', ts := start_date, "offset" := '0 minutes')) from trips), (select max(time_bucket(bucket_width := INTERVAL '1 hour', ts := coalesce(end_date, '2022-03-10 06:00:00'), "offset" := '0 minutes')) from trips), '1 hour' ) as interval_start we generate a series of time interval starts - from minimal start_date value up to the maximal end_time value, truncated to full hours, with 1-hour step. Each boundary can obviously be replaced with an arbitrary datetime. 
Direct result of this query is the following: interval_start --------------------- 2022-03-10 01:00:00 2022-03-10 02:00:00 2022-03-10 03:00:00 2022-03-10 04:00:00 2022-03-10 05:00:00 2022-03-10 06:00:00 (6 rows) Then, the middle-level query joins this series with the trips table, joining rows if and only if any part of the trip took place during the hour-long interval beginning at the time given by the 'interval_start' column: select interval_start, greatest(0, extract(epoch from start_date - interval_start)::int) as seconds_before_trip_started, least(3600, extract(epoch from coalesce(end_date, '2022-03-10 06:00:00') - interval_start)::int) as seconds_before_trip_ended from ( -- innermost query select generate_series( (select min(time_bucket(bucket_width := INTERVAL '1 hour', ts := start_date, "offset" := '0 minutes')) from trips), (select max(time_bucket(bucket_width := INTERVAL '1 hour', ts := coalesce(end_date, '2022-03-10 06:00:00'), "offset" := '0 minutes')) from trips), '1 hour' ) as interval_start -- innermost query end ) intervals join trips t on t.start_date <= intervals.interval_start + interval '1 hour' and coalesce(t.end_date, '2022-03-10 06:00:00') >= intervals.interval_start The two computed values represent respectively: seconds_before_trip_started - number of second passed between the beginning of the interval, and the beginning of the trip (or 0 if the trip begun prior to interval start). This is the time the trip didn't take place - thus we will be substructing it in the following step seconds_before_trip_ended - number of seconds passed between the end of the interval, and the end of the trip (or 3600 if the trip didn't end within concerned interval). The outermost query substracts the two beformentioned fields, effectively computing the time each trip took in each interval, and sums it for all trips, grouping by interval: select interval_start, sum(seconds_before_trip_ended - seconds_before_trip_started) as seconds from ( -- middle-level query select interval_start, greatest(0, extract(epoch from start_date - interval_start)::int) as seconds_before_trip_started, least(3600, extract(epoch from coalesce(end_date, '2022-03-10 06:00:00') - interval_start)::int) as seconds_before_trip_ended from ( select generate_series( (select min(time_bucket(bucket_width := INTERVAL '1 hour', ts := start_date, "offset" := '0 minutes')) from trips), (select max(time_bucket(bucket_width := INTERVAL '1 hour', ts := coalesce(end_date, '2022-03-10 06:00:00'), "offset" := '0 minutes')) from trips), '1 hour') as interval_start) i join trips t on t.start_date <= i.interval_start + interval '1 hour' and coalesce(t.end_date, '2022-03-10 06:00:00') >= interval_start -- middle-level query end ) subq group by interval_start order by interval_start; Additional grouping In case we have another column in the table, and what we really need is the segmentation of the above result in respect to that column, we simply need to add it to the appropriate select and group by clauses (optionally to order by clause as well). 
Suppose there's an additional driver_id column in the trips table: id | number_of_trip | start_date | end_date | seconds | driver_id ----+----------------+---------------------+---------------------+---------+----------- 1 | 637hui | 2022-03-10 01:20:00 | 2022-03-10 01:32:00 | 720 | 0 2 | 384nfj | 2022-03-10 02:18:00 | 2022-03-10 02:42:00 | 1440 | 0 3 | 102fiu | 2022-03-10 02:10:00 | 2022-03-10 02:23:00 | 780 | 1 4 | 948pvc | 2022-03-10 02:40:00 | 2022-03-10 03:20:00 | 2400 | 1 5 | 473mds | 2022-03-10 02:45:00 | 2022-03-10 02:58:00 | 780 | 1 6 | 103fkd | 2022-03-10 03:05:00 | 2022-03-10 03:28:00 | 1380 | 2 7 | 905783 | 2022-03-10 03:12:00 | | 0 | 2 8 | 498wsq | 2022-03-10 05:30:00 | 2022-03-10 05:48:00 | 1080 | 2 The modified query would look like that: select interval_start, driver_id, sum(seconds_before_trip_ended - seconds_before_trip_started) as seconds from ( select interval_start, driver_id, greatest(0, extract(epoch from start_date - interval_start)::int) as seconds_before_trip_started, least(3600, extract(epoch from coalesce(end_date, '2022-03-10 06:00:00') - interval_start)::int) as seconds_before_trip_ended from ( select generate_series( (select min(time_bucket(bucket_width := INTERVAL '1 hour', ts := start_date, "offset" := '0 minutes')) from trips), (select max(time_bucket(bucket_width := INTERVAL '1 hour', ts := coalesce(end_date, '2022-03-10 06:00:00'), "offset" := '0 minutes')) from trips), '1 hour') as interval_start ) intervals join trips t on t.start_date <= intervals.interval_start + interval '1 hour' and coalesce(t.end_date, '2022-03-10 06:00:00') >= intervals.interval_start ) subq group by interval_start, driver_id order by interval_start, driver_id; and give the following result: interval_start | driver_id | seconds ---------------------+-----------+--------- 2022-03-10 01:00:00 | 0 | 720 2022-03-10 02:00:00 | 0 | 1440 2022-03-10 02:00:00 | 1 | 2760 2022-03-10 03:00:00 | 1 | 1200 2022-03-10 03:00:00 | 2 | 4260 2022-03-10 04:00:00 | 2 | 3600 2022-03-10 05:00:00 | 2 | 4680 2022-03-10 06:00:00 | 2 | 0 | 6 | 1 |
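For anyone who prefers the asker's pandas fallback, here is a simple, non-optimised sketch that splits each trip across the hourly buckets it overlaps; it assumes start_date/end_date are datetime64 columns and that open trips are capped at a chosen cutoff timestamp:

```python
import pandas as pd

def seconds_per_hour(trips: pd.DataFrame, open_trip_cutoff: pd.Timestamp) -> pd.DataFrame:
    df = trips.copy()
    df["end_date"] = df["end_date"].fillna(open_trip_cutoff)
    rows = []
    for _, r in df.iterrows():
        # every hourly bucket this trip touches
        for h in pd.date_range(r["start_date"].floor("H"), r["end_date"], freq="H"):
            overlap = (min(r["end_date"], h + pd.Timedelta(hours=1))
                       - max(r["start_date"], h)).total_seconds()
            rows.append({"interval_start": h, "seconds": max(overlap, 0)})
    return (pd.DataFrame(rows)
              .groupby("interval_start", as_index=False)["seconds"].sum())
```

On the sample data this should reproduce the 720 / 4200 / 5460 / 3600 / 4680 figures (plus a zero row for the cutoff hour), matching the SQL result above.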
71,527,595 | 2022-3-18 | https://stackoverflow.com/questions/71527595/efficiently-count-all-the-combinations-of-numbers-having-a-sum-close-to-0 | I have following pandas dataframe df column1 column2 list_numbers sublist_column x y [10,-6,1,-4] a b [1,3,7,-2] p q [6,2,-3,-3.2] the sublist_column will contain the numbers from the column "list_numbers" that adds up to 0 (0.5 is a tolerance) I have written following code. def return_list(original_lst,target_sum,tolerance): memo=dict() sublist=[] for i, x in enumerate(original_lst): if memo_func(original_lst, i + 1, target_sum - x, memo,tolerance) > 0: sublist.append(x) target_sum -= x return sublist def memo_func(original_lst, i, target_sum, memo,tolerance): if i >= len(original_lst): if target_sum <=tolerance and target_sum>=-tolerance: return 1 else: return 0 if (i, target_sum) not in memo: c = memo_func(original_lst, i + 1, target_sum, memo,tolerance) c += memo_func(original_lst, i + 1, target_sum - original_lst[i], memo,tolerance) memo[(i, target_sum)] = c return memo[(i, target_sum)] Then I am using the "return_list" function on the "sublist_column" to populate the result. target_sum = 0 tolerance=0.5 df['sublist_column']=df['list_numbers'].apply(lambda x: return_list(x,0,tolerance)) the following will be the resultant dataframe column1 column2 list_numbers sublist_column x y [10,-6,1,-4] [10,-6,-4] a b [1,3,7,-2] [] p q [6,2,-3,-3.2] [6,-3,-3.2] #sum is -0.2(within the tolerance) This is giving me correct result but it's very slow(takes 2 hrs to run if i use spyder IDE), as my dataframe size has roughly 50,000 rows, and the length of some of the lists in the "list_numbers" column is more than 15. The running time is particularly getting affected when the number of elements in the lists in the "list_numbers" column is greater than 15. e.g following list is taking almost 15 minutes to process [-1572.35,-76.16,-261.1,-7732.0,-1634.0,-52082.42,-3974.15, -801.65,-30192.79,-671.98,-73.06,-47.72,57.96,-511.18,-391.87,-4145.0,-1008.61, -17.53,-17.53,-1471.08,-119.26,-2269.7,-2709,-182939.59,-19.48,-516,-6875.75,-138770.16,-71.11,-295.84,-348.09,-3460.71,-704.01,-678,-632.15,-21478.76] How can i significantly improve my running time? | Step 1: using Numba Based on the comments, it appear that memo_func is the main bottleneck. You can use Numba to speed up its execution. Numba compile the Python code to a native one thanks to a just-in-time (JIT) compiler. The JIT is able to perform tail-call optimizations and native function calls are significantly faster than the one of CPython. Here is an example: import numba as nb @nb.njit('(float64[::1], int64, float64, float64)') def memo_func(original_arr, i, target_sum, tolerance): if i >= len(original_arr): if -tolerance <= target_sum <= tolerance: return 1 return 0 c = memo_func(original_arr, i + 1, target_sum, tolerance) c += memo_func(original_arr, i + 1, target_sum - original_arr[i], tolerance) return c @nb.njit('(float64[::1], float64, float64)') def return_list(original_arr, target_sum, tolerance): sublist = [] for i, x in enumerate(original_arr): if memo_func(original_arr, np.int64(i + 1), target_sum - x,tolerance) > 0: sublist.append(x) target_sum -= x return sublist Using memoization does not seems to speed up the result and this is a bit cumbersome to implement in Numba. In fact, there are much better ways to improve the algorithm. 
Note that you need to convert the lists in Numpy array before calling the functions: lst = [-850.85,-856.05,-734.09,5549.63,77.59,-39.73,23.63,13.93,-6455.54,-417.07,176.72,-570.41,3621.89,-233.47,-471.54,-30.33,-941.49,-1014.6,1614.5] result = return_list(np.array(lst, np.float64), 0, tolerance) Step 2: tail call optimization Calling many function to compute the right part of the input list is not efficient. The JIT is able to reduce the number of all but it is not able to completely remove them. You can unroll all the call when the depth of the tail calls is big. For example, when there is 6 items to compute, you can use this following code: if n-i == 6: c = 0 s0 = target_sum v0, v1, v2, v3, v4, v5 = original_arr[i:] for s1 in (s0, s0 - v0): for s2 in (s1, s1 - v1): for s3 in (s2, s2 - v2): for s4 in (s3, s3 - v3): for s5 in (s4, s4 - v4): for s6 in (s5, s5 - v5): c += np.int64(-tolerance <= s6 <= tolerance) return c This is pretty ugly but far more efficient since the JIT is able to unroll all the loop and produce a very fast code. Still, this is not enough for large lists. Step 3: better algorithm For large input lists, the problem is the exponential complexity of the algorithm. The thing is this problem looks really like a relaxed variant of subset-sum which is known to be NP-complete. Such class of algorithm is known to be very hard to solve. The best exact practical algorithms known so far to solve NP-complete problem are exponential. Put it shortly, this means that for any sufficiently large input, there is no known algorithm capable of finding an exact solution in a reasonable time (eg. less than the lifetime of a human). That being said, there are heuristics and strategies to improve the complexity of the current algorithm. One efficient approach is to use a meet-in-the-middle algorithm. When applied to your use-case, the idea is to generate a large set of target sums, then sort them, and then use a binary search to find the number of matching values. This is possible here since -tolerance <= target_sum <= tolerance where target_sum = partial_sum1 + partial_sum2 is equivalent to -tolerance + partial_sum2 <= partial_sum1 <= tolerance + partial_sum2. The resulting code is unfortunately quite big and not trivial, but this is certainly the cost to pay for trying to solve efficiently a complex problem like this one. 
Here it is: # Generate all the target sums based on in_arr and put the result in out_sum @nb.njit('(float64[::1], float64[::1], float64)', cache=True) def gen_all_comb(in_arr, out_sum, target_sum): assert in_arr.size >= 6 if in_arr.size == 6: assert out_sum.size == 64 v0, v1, v2, v3, v4, v5 = in_arr s0 = target_sum cur = 0 for s1 in (s0, s0 - v0): for s2 in (s1, s1 - v1): for s3 in (s2, s2 - v2): for s4 in (s3, s3 - v3): for s5 in (s4, s4 - v4): for s6 in (s5, s5 - v5): out_sum[cur] = s6 cur += 1 else: assert out_sum.size % 2 == 0 mid = out_sum.size // 2 gen_all_comb(in_arr[1:], out_sum[:mid], target_sum) gen_all_comb(in_arr[1:], out_sum[mid:], target_sum - in_arr[0]) # Find the number of item in sorted_arr where: # lower_bound <= item <= upper_bound @nb.njit('(float64[::1], float64, float64)', cache=True) def count_between(sorted_arr, lower_bound, upper_bound): assert lower_bound <= upper_bound lo_pos = np.searchsorted(sorted_arr, lower_bound, side='left') hi_pos = np.searchsorted(sorted_arr, upper_bound, side='right') return hi_pos - lo_pos # Count all the target sums in: # -tolerance <= all_target_sums(in_arr,sorted_target_sums)-s0 <= tolerance @nb.njit('(float64[::1], float64[::1], float64, float64)', cache=True) def multi_search(in_arr, sorted_target_sums, tolerance, s0): assert in_arr.size >= 6 if in_arr.size == 6: v0, v1, v2, v3, v4, v5 = in_arr c = 0 for s1 in (s0, s0 + v0): for s2 in (s1, s1 + v1): for s3 in (s2, s2 + v2): for s4 in (s3, s3 + v3): for s5 in (s4, s4 + v4): for s6 in (s5, s5 + v5): lo = -tolerance + s6 hi = tolerance + s6 c += count_between(sorted_target_sums, lo, hi) return c else: c = multi_search(in_arr[1:], sorted_target_sums, tolerance, s0) c += multi_search(in_arr[1:], sorted_target_sums, tolerance, s0 + in_arr[0]) return c @nb.njit('(float64[::1], int64, float64, float64)', cache=True) def memo_func(original_arr, i, target_sum, tolerance): n = original_arr.size remaining = n - i tail_size = min(max(remaining//2, 7), 16) # Tail call: for very small list (trivial case) if remaining <= 0: return np.int64(-tolerance <= target_sum <= tolerance) # Tail call: for big lists (better algorithm) elif remaining >= tail_size*2: partial_sums = np.empty(2**tail_size, dtype=np.float64) gen_all_comb(original_arr[-tail_size:], partial_sums, target_sum) partial_sums.sort() return multi_search(original_arr[-remaining:-tail_size], partial_sums, tolerance, 0.0) # Tail call: for medium-sized list (unrolling) elif remaining == 6: c = 0 s0 = target_sum v0, v1, v2, v3, v4, v5 = original_arr[i:] for s1 in (s0, s0 - v0): for s2 in (s1, s1 - v1): for s3 in (s2, s2 - v2): for s4 in (s3, s3 - v3): for s5 in (s4, s4 - v4): for s6 in (s5, s5 - v5): c += np.int64(-tolerance <= s6 <= tolerance) return c # Recursion c = memo_func(original_arr, i + 1, target_sum, tolerance) c += memo_func(original_arr, i + 1, target_sum - original_arr[i], tolerance) return c @nb.njit('(float64[::1], float64, float64)', cache=True) def return_list(original_arr, target_sum, tolerance): sublist = [] for i, x in enumerate(original_arr): if memo_func(original_arr, np.int64(i + 1), target_sum - x,tolerance) > 0: sublist.append(x) target_sum -= x return sublist Note that the code takes few seconds to compile since it is quite big. The cache should help not to recompile it every time. Step 4: even better algorithm The previous code count the number of matching values (the value stored in c). This is not needed since we just want to know if 1 value exists (ie. memo_func(...) > 0). 
As a result, we can return a boolean to define if a value has been found and optimize the algorithm so to directly return True when some early solutions are found. Big parts of the exploration tree can be skipped with this method (which is particularly efficient when there are many possible solutions like on random arrays). Another optimization is then to perform only one binary search (instead of two) and check before if the searched values can be found in the min-max range of the sorted array (so to skip this trivial case before applying the expensive binary search). This is possible because of the previous optimization. A final optimization is to early discard a part the exploration tree when the values generated by multi_search are so small/big that we can be sure there is no need to perform a binary search. This can be done by computing a pessimistic over-approximation of the searched values. This is especially useful in pathological cases that have almost no solutions. Here is the final implementation: @nb.njit('(float64[::1], float64[::1], float64)', cache=True) def gen_all_comb(in_arr, out_sum, target_sum): assert in_arr.size >= 6 if in_arr.size == 6: assert out_sum.size == 64 v0, v1, v2, v3, v4, v5 = in_arr s0 = target_sum cur = 0 for s1 in (s0, s0 - v0): for s2 in (s1, s1 - v1): for s3 in (s2, s2 - v2): for s4 in (s3, s3 - v3): for s5 in (s4, s4 - v4): for s6 in (s5, s5 - v5): out_sum[cur] = s6 cur += 1 else: assert out_sum.size % 2 == 0 mid = out_sum.size // 2 gen_all_comb(in_arr[1:], out_sum[:mid], target_sum) gen_all_comb(in_arr[1:], out_sum[mid:], target_sum - in_arr[0]) # Find the number of item in sorted_arr where: # lower_bound <= item <= upper_bound @nb.njit('(float64[::1], float64, float64)', cache=True) def has_items_between(sorted_arr, lower_bound, upper_bound): if upper_bound < sorted_arr[0] or sorted_arr[sorted_arr.size-1] < lower_bound: return False lo_pos = np.searchsorted(sorted_arr, lower_bound, side='left') return lo_pos < sorted_arr.size and sorted_arr[lo_pos] <= upper_bound # Count all the target sums in: # -tolerance <= all_target_sums(in_arr,sorted_target_sums)-s0 <= tolerance @nb.njit('(float64[::1], float64[::1], float64, float64)', cache=True) def multi_search(in_arr, sorted_target_sums, tolerance, s0): assert in_arr.size >= 6 if in_arr.size == 6: v0, v1, v2, v3, v4, v5 = in_arr x3, x4, x5 = min(v3, 0), min(v4, 0), min(v5, 0) y3, y4, y5 = max(v3, 0), max(v4, 0), max(v5, 0) mini = sorted_target_sums[0] maxi = sorted_target_sums[sorted_target_sums.size-1] for s1 in (s0, s0 + v0): for s2 in (s1, s1 + v1): for s3 in (s2, s2 + v2): # Prune the exploration tree early if a # larger range cannot be found. 
lo = s3 + (x3 + x4 + x5 - tolerance) hi = s3 + (y3 + y4 + y5 + tolerance) if hi < mini or maxi < lo: continue for s4 in (s3, s3 + v3): for s5 in (s4, s4 + v4): for s6 in (s5, s5 + v5): lo = -tolerance + s6 hi = tolerance + s6 if has_items_between(sorted_target_sums, lo, hi): return True return False return ( multi_search(in_arr[1:], sorted_target_sums, tolerance, s0) or multi_search(in_arr[1:], sorted_target_sums, tolerance, s0 + in_arr[0]) ) @nb.njit('(float64[::1], int64, float64, float64)', cache=True) def memo_func(original_arr, i, target_sum, tolerance): n = original_arr.size remaining = n - i tail_size = min(max(remaining//2, 7), 13) # Tail call: for very small list (trivial case) if remaining <= 0: return -tolerance <= target_sum <= tolerance # Tail call: for big lists (better algorithm) elif remaining >= tail_size*2: partial_sums = np.empty(2**tail_size, dtype=np.float64) gen_all_comb(original_arr[-tail_size:], partial_sums, target_sum) partial_sums.sort() return multi_search(original_arr[-remaining:-tail_size], partial_sums, tolerance, 0.0) # Tail call: for medium-sized list (unrolling) elif remaining == 6: s0 = target_sum v0, v1, v2, v3, v4, v5 = original_arr[i:] for s1 in (s0, s0 - v0): for s2 in (s1, s1 - v1): for s3 in (s2, s2 - v2): for s4 in (s3, s3 - v3): for s5 in (s4, s4 - v4): for s6 in (s5, s5 - v5): if -tolerance <= s6 <= tolerance: return True return False # Recursion return ( memo_func(original_arr, i + 1, target_sum, tolerance) or memo_func(original_arr, i + 1, target_sum - original_arr[i], tolerance) ) @nb.njit('(float64[::1], float64, float64)', cache=True) def return_list(original_arr, target_sum, tolerance): sublist = [] for i, x in enumerate(original_arr): if memo_func(original_arr, np.int64(i + 1), target_sum - x,tolerance): sublist.append(x) target_sum -= x return sublist This final implementation is meant to efficiently compute pathological cases (the one where there is only few non-trivial solutions or even no solutions like on the big provided input lists). However, it can can be tuned so to compute faster the cases where there are many solutions (like on large random uniformly-distributed arrays) at the expense of a significantly slower execution on the pathological cases. This tread-off can be set by changing the variable tail_size (smaller values are better for cases with more solutions). Benchmark Here is the tested inputs: target_sum = 0 tolerance = 0.5 small_lst = [-850.85,-856.05,-734.09,5549.63,77.59,-39.73,23.63,13.93,-6455.54,-417.07,176.72,-570.41,3621.89,-233.47,-471.54,-30.33,-941.49,-1014.6,1614.5] big_lst = [-1572.35,-76.16,-261.1,-7732.0,-1634.0,-52082.42,-3974.15,-801.65,-30192.79,-671.98,-73.06,-47.72,57.96,-511.18,-391.87,-4145.0,-1008.61,-17.53,-17.53,-1471.08,-119.26,-2269.7,-2709,-182939.59,-19.48,-516,-6875.75,-138770.16,-71.11,-295.84,-348.09,-3460.71,-704.01,-678,-632.15,-21478.76] random_lst = [-86145.13, -34783.33, 50912.99, -87823.73, 37537.52, -22796.4, 53530.74, 65477.91, -50725.36, -52609.35, 92769.95, 83630.42, 30436.95, -24347.08, -58197.95, 77504.44, 83958.08, -85095.73, -61347.26, -14250.65, 2012.91, 83969.32, -69356.41, 29659.23, 94736.29, 2237.82, -17784.34, 23079.36, 8059.84, 26751.26, 98427.46, -88735.07, -28936.62, 21868.77, 5713.05, -74346.18] The uniformly-distributed random list has a very large number of solutions while the provided big list has none. 
The tuned final implementation sets tail_size to min(max(remaining//2, 7), 13) so as to compute the random list much faster at the expense of a significantly slower execution on the big list. Here are the timings with the small list on my machine: Naive Python algorithm: 173.45 ms Naive algorithm using Numba: 7.21 ms Tail call optimization + Numba: 0.33 ms KellyBundy's implementation: 0.19 ms Efficient algorithm + optim + Numba: 0.10 ms Final implementation (tuned): 0.05 ms Final implementation (default): 0.05 ms Here are the timings with the large random list on my machine (easy case): Efficient algorithm + optim + Numba: 209.61 ms Final implementation (default): 4.11 ms KellyBundy's implementation: 1.15 ms Final implementation (tuned): 0.85 ms Other algorithms are not shown here because they are too slow (see below) Here are the timings with the big list on my machine (challenging case): Naive Python algorithm: >20000 s [estimation & out of memory] Naive algorithm using Numba: ~900 s [estimation] Tail call optimization + Numba: 42.61 s KellyBundy's implementation: 0.671 s Final implementation (tuned): 0.078 s Efficient algorithm + optim + Numba: 0.051 s Final implementation (default): 0.013 s Thus, the final implementation is up to ~3500 times faster on the small input and more than 1_500_000 times faster on the large input! It also uses far less RAM, so it can actually be executed on a cheap PC. It is worth noting that the execution time can be reduced even further by using multiple threads so as to reach a speedup >5_000_000, though it may be slower on small inputs and it will make the code a bit more complex. | 7 | 17 
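For readers who want the gist of the meet-in-the-middle idea from Steps 3 and 4 without the Numba machinery, here is a minimal pure-Python sketch. It is not taken from the accepted answer: the function names are my own, it only mirrors the semantics of the answer's memo_func(...) > 0 check (so, like memo_func, the empty subset counts when the remaining target is already within tolerance), and it drops the answer's tuning and pruning in favour of readability, so it will be much slower than the Numba version.

import bisect

def has_subset_within_tolerance(values, target=0.0, tolerance=0.5):
    # Split the list into two halves and enumerate all subset sums of each half.
    half = len(values) // 2
    left, right = values[:half], values[half:]

    def subset_sums(items):
        sums = [0.0]                       # the empty subset
        for x in items:
            sums += [s + x for s in sums]  # double the set: with and without x
        return sums

    right_sums = sorted(subset_sums(right))
    for s in subset_sums(left):
        # We need some r in right_sums with target - tolerance <= s + r <= target + tolerance.
        lo = bisect.bisect_left(right_sums, target - tolerance - s)
        if lo < len(right_sums) and right_sums[lo] <= target + tolerance - s:
            return True
    return False

# Greedy driver in the spirit of the question's return_list:
def return_sublist(numbers, target=0.0, tolerance=0.5):
    sublist = []
    for i, x in enumerate(numbers):
        if has_subset_within_tolerance(numbers[i + 1:], target - x, tolerance):
            sublist.append(x)
            target -= x
    return sublist

Enumerating roughly 2**(n/2) subset sums per half instead of 2**n subsets is the same complexity gain the answer's gen_all_comb / multi_search pair exploits, just without the JIT compilation, loop unrolling, or early pruning.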
71,538,933 | 2022-3-19 | https://stackoverflow.com/questions/71538933/preparing-metadata-pyproject-toml-error-when-installing-numpy-on-vs-code | So I was trying to install numpy 1.20.3, on VS Code, when it says: Preparing metadata (pyproject.toml) ... error error: subprocess-exited-with-error × Preparing metadata (pyproject.toml) did not run successfully. │ exit code: 1 ╰─> [239 lines of output] setup.py:66: RuntimeWarning: NumPy 1.20.3 may not yet support Python 3.10. warnings.warn( Running from numpy source directory. setup.py:485: UserWarning: Unrecognized setuptools command, proceeding with generating Cython sources and expanding templates run_build = parse_setuppy_commands() C:\Users\_\AppData\Local\Temp\pip-install-12pl1k89\numpy_a61d254ad189429092d1fab3dbdca78f\tools\cythonize.py:67: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives from distutils.version import LooseVersion Processing numpy/random\_bounded_integers.pxd.in Processing numpy/random\bit_generator.pyx Processing numpy/random\mtrand.pyx Processing numpy/random\_bounded_integers.pyx.in Processing numpy/random\_common.pyx Processing numpy/random\_generator.pyx Processing numpy/random\_mt19937.pyx Processing numpy/random\_pcg64.pyx Processing numpy/random\_philox.pyx Processing numpy/random\_sfc64.pyx Cythonizing sources blas_opt_info: blas_mkl_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries mkl_rt not found in ['C:\\Users\\_\\OneDrive\\Desktop\\VOR-Models\\2021-VOR-Model\\venv\\lib', 'C:\\'] NOT AVAILABLE blis_info: libraries blis not found in ['C:\\Users\\_\\OneDrive\\Desktop\\VOR-Models\\2021-VOR-Model\\venv\\lib', 'C:\\'] NOT AVAILABLE openblas_info: libraries openblas not found in ['C:\\Users\\_\\OneDrive\\Desktop\\VOR-Models\\2021-VOR-Model\\venv\\lib', 'C:\\'] get_default_fcompiler: matching types: '['gnu', 'intelv', 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']' customize GnuFCompiler Could not locate executable g77 Could not locate executable f77 customize IntelVisualFCompiler Could not locate executable ifort Could not locate executable ifl customize AbsoftFCompiler Could not locate executable f90 customize CompaqVisualFCompiler Could not locate executable DF customize IntelItaniumVisualFCompiler Could not locate executable efl customize Gnu95FCompiler Could not locate executable gfortran Could not locate executable f95 customize G95FCompiler Could not locate executable g95 customize IntelEM64VisualFCompiler customize IntelEM64TFCompiler Could not locate executable efort Could not locate executable efc customize PGroupFlangCompiler Could not locate executable flang don't know how to compile Fortran code on platform 'nt' NOT AVAILABLE atlas_3_10_blas_threads_info: Setting PTATLAS=ATLAS libraries tatlas not found in ['C:\\Users\\_\\OneDrive\\Desktop\\VOR-Models\\2021-VOR-Model\\venv\\lib', 'C:\\'] NOT AVAILABLE atlas_3_10_blas_info: libraries satlas not found in ['C:\\Users\\_\\OneDrive\\Desktop\\VOR-Models\\2021-VOR-Model\\venv\\lib', 'C:\\'] NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in ['C:\\Users\\_\\OneDrive\\Desktop\\VOR-Models\\2021-VOR-Model\\venv\\lib', 'C:\\'] NOT AVAILABLE atlas_blas_info: libraries f77blas,cblas,atlas not found in ['C:\\Users\\_\\OneDrive\\Desktop\\VOR-Models\\2021-VOR-Model\\venv\\lib', 'C:\\'] NOT AVAILABLE 
C:\Users\_\AppData\Local\Temp\pip-install-12pl1k89\numpy_a61d254ad189429092d1fab3dbdca78f\numpy\distutils\system_info.py:1989: UserWarning: Optimized (vendor) Blas libraries are not found. Falls back to netlib Blas library which has worse performance. A better performance should be easily gained by switching Blas library. if self._calc_info(blas): blas_info: libraries blas not found in ['C:\\Users\\_\\OneDrive\\Desktop\\VOR-Models\\2021-VOR-Model\\venv\\lib', 'C:\\'] NOT AVAILABLE C:\Users\_\AppData\Local\Temp\pip-install-12pl1k89\numpy_a61d254ad189429092d1fab3dbdca78f\numpy\distutils\system_info.py:1989: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. if self._calc_info(blas): blas_src_info: NOT AVAILABLE C:\Users\_\AppData\Local\Temp\pip-install-12pl1k89\numpy_a61d254ad189429092d1fab3dbdca78f\numpy\distutils\system_info.py:1989: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. if self._calc_info(blas): NOT AVAILABLE non-existing path in 'numpy\\distutils': 'site.cfg' lapack_opt_info: lapack_mkl_info: libraries mkl_rt not found in ['C:\\Users\\_\\OneDrive\\Desktop\\VOR-Models\\2021-VOR-Model\\venv\\lib', 'C:\\'] NOT AVAILABLE openblas_lapack_info: libraries openblas not found in ['C:\\Users\\_\\OneDrive\\Desktop\\VOR-Models\\2021-VOR-Model\\venv\\lib', 'C:\\'] NOT AVAILABLE openblas_clapack_info: libraries openblas,lapack not found in ['C:\\Users\\_\\OneDrive\\Desktop\\VOR-Models\\2021-VOR-Model\\venv\\lib', 'C:\\'] NOT AVAILABLE flame_info: libraries flame not found in ['C:\\Users\\_\\OneDrive\\Desktop\\VOR-Models\\2021-VOR-Model\\venv\\lib', 'C:\\'] NOT AVAILABLE atlas_3_10_threads_info: Setting PTATLAS=ATLAS libraries lapack_atlas not found in C:\Users\_\OneDrive\Desktop\VOR-Models\2021-VOR-Model\venv\lib libraries tatlas,tatlas not found in C:\Users\_\OneDrive\Desktop\VOR-Models\2021-VOR-Model\venv\lib libraries lapack_atlas not found in C:\ libraries tatlas,tatlas not found in C:\ <class 'numpy.distutils.system_info.atlas_3_10_threads_info'> NOT AVAILABLE atlas_3_10_info: libraries lapack_atlas not found in C:\Users\_\OneDrive\Desktop\VOR-Models\2021-VOR-Model\venv\lib libraries satlas,satlas not found in C:\Users\_\OneDrive\Desktop\VOR-Models\2021-VOR-Model\venv\lib libraries lapack_atlas not found in C:\ libraries satlas,satlas not found in C:\ <class 'numpy.distutils.system_info.atlas_3_10_info'> NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries lapack_atlas not found in C:\Users\_\OneDrive\Desktop\VOR-Models\2021-VOR-Model\venv\lib libraries ptf77blas,ptcblas,atlas not found in C:\Users\_\OneDrive\Desktop\VOR-Models\2021-VOR-Model\venv\lib libraries lapack_atlas not found in C:\ libraries ptf77blas,ptcblas,atlas not found in C:\ <class 'numpy.distutils.system_info.atlas_threads_info'> NOT AVAILABLE atlas_info: libraries lapack_atlas not found in C:\Users\_\OneDrive\Desktop\VOR-Models\2021-VOR-Model\venv\lib libraries f77blas,cblas,atlas not found in C:\Users\_\OneDrive\Desktop\VOR-Models\2021-VOR-Model\venv\lib libraries lapack_atlas not found in C:\ libraries f77blas,cblas,atlas not found in C:\ <class 'numpy.distutils.system_info.atlas_info'> NOT AVAILABLE lapack_info: libraries lapack not found in 
['C:\\Users\\_\\OneDrive\\Desktop\\VOR-Models\\2021-VOR-Model\\venv\\lib', 'C:\\'] NOT AVAILABLE C:\Users\_\AppData\Local\Temp\pip-install-12pl1k89\numpy_a61d254ad189429092d1fab3dbdca78f\numpy\distutils\system_info.py:1849: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. return getattr(self, '_calc_info_{}'.format(name))() lapack_src_info: NOT AVAILABLE C:\Users\_\AppData\Local\Temp\pip-install-12pl1k89\numpy_a61d254ad189429092d1fab3dbdca78f\numpy\distutils\system_info.py:1849: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. return getattr(self, '_calc_info_{}'.format(name))() NOT AVAILABLE numpy_linalg_lapack_lite: FOUND: language = c define_macros = [('HAVE_BLAS_ILP64', None), ('BLAS_SYMBOL_SUFFIX', '64_')] C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.752.0_x64__qbz5n2kfra8p0\lib\distutils\dist.py:274: UserWarning: Unknown distribution option: 'define_macros' warnings.warn(msg) running dist_info running build_src build_src building py_modules sources creating build creating build\src.win-amd64-3.10 creating build\src.win-amd64-3.10\numpy creating build\src.win-amd64-3.10\numpy\distutils building library "npymath" sources LINK : fatal error LNK1104: cannot open file 'kernel32.lib' Traceback (most recent call last): File "C:\Users\_\OneDrive\Desktop\VOR-Models\2021-VOR-Model\venv\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 363, in <module> main() File "C:\Users\_\OneDrive\Desktop\VOR-Models\2021-VOR-Model\venv\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 345, in main json_out['return_val'] = hook(**hook_input['kwargs']) File "C:\Users\_\OneDrive\Desktop\VOR-Models\2021-VOR-Model\venv\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 164, in prepare_metadata_for_build_wheel return hook(metadata_directory, config_settings) File "C:\Users\_\AppData\Local\Temp\pip-build-env-fmnw10id\overlay\Lib\site-packages\setuptools\build_meta.py", line 157, in prepare_metadata_for_build_wheel self.run_setup() File "C:\Users\_\AppData\Local\Temp\pip-build-env-fmnw10id\overlay\Lib\site-packages\setuptools\build_meta.py", line 248, in run_setup super(_BuildMetaLegacyBackend, File "C:\Users\_\AppData\Local\Temp\pip-build-env-fmnw10id\overlay\Lib\site-packages\setuptools\build_meta.py", line 142, in run_setup exec(compile(code, __file__, 'exec'), locals()) File "setup.py", line 513, in <module> setup_package() File "setup.py", line 505, in setup_package setup(**metadata) File "C:\Users\_\AppData\Local\Temp\pip-install-12pl1k89\numpy_a61d254ad189429092d1fab3dbdca78f\numpy\distutils\core.py", line 169, in setup return old_setup(**new_attr) File "C:\Users\_\AppData\Local\Temp\pip-build-env-fmnw10id\overlay\Lib\site-packages\setuptools\__init__.py", line 165, in setup return distutils.core.setup(**attrs) File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.752.0_x64__qbz5n2kfra8p0\lib\distutils\core.py", line 148, in setup dist.run_commands() File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.752.0_x64__qbz5n2kfra8p0\lib\distutils\dist.py", line 966, in run_commands self.run_command(cmd) File "C:\Program 
Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.752.0_x64__qbz5n2kfra8p0\lib\distutils\dist.py", line 985, in run_command cmd_obj.run() File "C:\Users\_\AppData\Local\Temp\pip-build-env-fmnw10id\overlay\Lib\site-packages\setuptools\command\dist_info.py", line 31, in run egg_info.run() File "C:\Users\\AppData\Local\Temp\pip-install-12pl1k89\numpy_a61d254ad189429092d1fab3dbdca78f\numpy\distutils\command\egg_info.py", line 24, in run self.run_command("build_src") File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.752.0_x64__qbz5n2kfra8p0\lib\distutils\cmd.py", line 313, in run_command self.distribution.run_command(command) File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.752.0_x64__qbz5n2kfra8p0\lib\distutils\dist.py", line 985, in run_command cmd_obj.run() File "C:\Users\_\AppData\Local\Temp\pip-install-12pl1k89\numpy_a61d254ad189429092d1fab3dbdca78f\numpy\distutils\command\build_src.py", line 144, in run self.build_sources() File "C:\Users\_\AppData\Local\Temp\pip-install-12pl1k89\numpy_a61d254ad189429092d1fab3dbdca78f\numpy\distutils\command\build_src.py", line 155, in build_sources self.build_library_sources(*libname_info) File "C:\Users\_\AppData\Local\Temp\pip-install-12pl1k89\numpy_a61d254ad189429092d1fab3dbdca78f\numpy\distutils\command\build_src.py", line 288, in build_library_sources sources = self.generate_sources(sources, (lib_name, build_info)) File "C:\Users\_\AppData\Local\Temp\pip-install-12pl1k89\numpy_a61d254ad189429092d1fab3dbdca78f\numpy\distutils\command\build_src.py", line 378, in generate_sources source = func(extension, build_dir) File "numpy\core\setup.py", line 676, in get_mathlib_info raise RuntimeError("Broken toolchain: cannot link a simple C program") RuntimeError: Broken toolchain: cannot link a simple C program [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. I am installing it like this: (venv) PS C:\Users_\OneDrive\Desktop\VOR-Models\2021-VOR-Model> pip install -r requirements.txt Because I am so lost, I will show you requirements.txt. This function literally installs everything before numpy, then it fails. clearbeautifulsoup4==4.9.3 certifi==2021.5.30 chardet==4.0.0 idna==2.10 numpy==1.20.3 pandas==1.2.4 python-dateutil==2.8.1 pytz==2021.1 requests==2.25.1 six==1.16.0 soupsieve==2.2.1 urllib3==1.26.5 Any help is appreciated. The reason I am doing this is because I took a course online (https://www.fantasyfootballdatapros.com/), and am trying to do data munging with pandas. I need NumPy for that. I also tried looking at other questions, but they didn't give me the answers I needed, or didn't have an answer at all. Please help. | It says it in the error message. RuntimeWarning: NumPy 1.20.3 may not yet support Python 3.10. Two quick trys: Without looking through each packages, try removing numpy from requirements, and run "python -m pip install -r requirements.txt" again. Then in command line, try typing this to see if it installed because the other packages (ex. Tensorflow install numpy automatically, with the version it depends on). python -m numpy try a lower numpy version. inside requirments.txt try typing or something under 1.20.3 numpy==1.19.2 | 5 | 1 |
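Regarding the answer's first suggestion — checking whether NumPy was pulled in anyway as a dependency of another package — a small check script like the one below (my own sketch, not part of the accepted answer) is a more reliable test than python -m numpy, because numpy does not ship a __main__ module and cannot be executed that way. Run it with the virtual environment activated:

# check_env.py - confirm which interpreter the venv uses and whether numpy is importable
import sys

print("Python:", sys.version.split()[0])  # e.g. 3.10.x, which numpy 1.20.3 predates
try:
    import numpy
    print("numpy", numpy.__version__, "importable from", numpy.__file__)
except ImportError:
    print("numpy is not installed in this environment")

If it reports Python 3.10 and no importable numpy, the numpy pin in requirements.txt is the line to revisit, as the answer suggests.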
71,575,112 | 2022-3-22 | https://stackoverflow.com/questions/71575112/annotate-a-function-argument-as-being-a-specific-module | I have a pytest fixture that imports a specific module. This is needed as importing the module is very expensive, so we don't want to do it at import time (i.e. during pytest test collection). This results in code like this: @pytest.fixture def my_module_fix(): import my_module yield my_module def test_something(my_module_fix): assert my_module_fix.my_func() == 5 I am using PyCharm and would like to have type-checking and autocompletion in my tests. To achieve that, I would somehow have to annotate the my_module_fix parameter as having the type of the my_module module. I have no idea how to achieve that. All I found is that I can annotate my_module_fix as being of type types.ModuleType, but that is not enough: it is not just any module, it is always my_module. | If I get your question, you have two (or three) separate goals: Deferred import of slowmodule Autocomplete to continue to work as if it was a standard import (Potentially?) typing (e.g. mypy?) to continue to work I can think of at least five different approaches, though I'll only briefly mention the last because it's insane. Import the module inside your tests This is (by far) the most common and IMHO preferred solution. e.g. instead of import slowmodule def test_foo(): slowmodule.foo() def test_bar(): slowmodule.bar() you'd write: def test_foo(): import slowmodule slowmodule.foo() def test_bar(): import slowmodule slowmodule.bar() [deferred importing] Here, the module will be imported on-demand/lazily. So if you have pytest set up to fail-fast, and another test fails before pytest gets to your (test_foo, test_bar) tests, the module will never be imported and you'll never incur the runtime cost. Because of Python's module cache, subsequent import statements won't actually re-import the module, just grab a reference to the already-imported module. [autocomplete/typing] Of course, autocomplete will continue to work as you expect in this case. This is a perfectly fine import pattern. While it does require adding potentially many additional import statements (one inside each test function), it's immediately clear what is going on (regardless of whether it's clear why it's going on). [3.7+] Proxy your module with module __getattr__ If you create a module (e.g. slowmodule_proxy.py) with contents like: def __getattr__(name): import slowmodule return getattr(slowmodule, name) And in your tests, e.g. import slowmodule def test_foo(): slowmodule.foo() def test_bar(): slowmodule.bar() instead of: import slowmodule you write: import slowmodule_proxy as slowmodule [deferred import] Thanks to PEP-562, you can "request" any name from slowmodule_proxy and it will fetch and return the corresponding name from slowmodule. Just as above, including the import inside the function will cause slowmodule to be imported only when the function is called and executed instead of on module load. Module caching still applies here of course, so you're only incurring the import penalty once per interpreter session. [autocomplete] However, while deferred importing will work (and your tests run without issue), this approach (as stated so far) will "break" autocomplete: Now we're in the realm of PyCharm. Some IDEs will perform "live" analysis of modules and actually load up the module and inspect its members. (PyDev had this option). 
If PyCharm did this, implementing module.__dir__ (same PEP) or __all__ would allow your proxy module to masquerade as the actual slowmodule and autocomplete would work.† But, PyCharm does not do this. Nonetheless, you can fool PyCharm into giving you autocomplete suggestions: if False: import slowmodule else: import slowmodule_proxy as slowmodule The interpreter will only execute the else branch, importing the proxy and naming it slowmodule (so your test code can continue to reference slowmodule unchanged). But PyCharm will now provide autocompletion for the underlying module: † While live-analysis can be an incredibly helpful, there's also a (potential) security concern that comes with it that static syntax analysis doesn't have. And the maturation of type hinting and stub files has made it less of an issue still. Proxy slowmodule explicitly If you really hated the dynamic proxy approach (or the fact that you have to fool PyCharm in this way), you could proxy the module explicitly. (You'd likely only want to consider this if the slowmodule API is stable.) If slowmodule has methods foo and bar you'd create a proxy module like: def foo(*args, **kwargs): import slowmodule return slowmodule.foo(*args, **kwargs) def bar(*args, **kwargs): import slowmodule return slowmodule.bar(*args, **kwargs) (Using args and kwargs to pass arguments through to the underlying callables. And you could add type hinting to these functions to mirror the slowmodule functions.) And in your test, import slowmodule_proxy as slowmodule Same as before. Importing inside the method gives you the deferred importing you want and the module cache takes care of multiple import calls. And since it's a real module whose contents can be statically analyzed, there's no need to "fool" PyCharm. So the benefit of this solution is that you don't have a bizarre looking if False in your test imports. This, however, comes at the (substantial) cost of having to maintain a proxy file alongside your module -- which could prove painful in the case that slowmodule's API wasn't stable. [3.5+] Use importlib's LazyLoader instead of a proxy module Instead of the proxy module slowmodule_proxy, you could follow a pattern similar to the one shown in the importlib docs >>> import importlib.util >>> import sys >>> def lazy_import(name): ... spec = importlib.util.find_spec(name) ... loader = importlib.util.LazyLoader(spec.loader) ... spec.loader = loader ... module = importlib.util.module_from_spec(spec) ... sys.modules[name] = module ... loader.exec_module(module) ... return module ... >>> lazy_typing = lazy_import("typing") >>> #lazy_typing is a real module object, >>> #but it is not loaded in memory yet. You'd still need to fool PyCharm though, so something like: if False: import slowmodule else: slowmodule = lazy_import('slowmodule') would be necessary. Outside of the single additional level of indirection on module member access (and the two minor version availability difference), it's not immediately clear to me what, if anything, there is to be gained from this approach over the previous proxy module method, however. Use importlib's Finder/Loader machinery to hook import (don't do this) You could create a custom module Finder/Loader that would (only) hook your slowmodule import and, instead load, for example your proxy module. Then you could just import that "importhook" module before you imported slowmode in your tests, e.g. import myimporthooks import slowmodule def test_foo(): ... 
(Here, myimporthooks would use importlib's finder and loader machinery to do something similar to the importhook package but intercept and redirect the import attempt rather than just serving as an import callback.) But this is crazy. Not only is what you want (seemingly) achievable through (infinitely) more common and supported methods, but it's incredibly fragile, error-prone and, without diving into the internals of PyTest (which may mess with module loaders itself), it's hard to say whether it'd even work. | 5 | 4 
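To tie the answer back to the question's own names: the PEP 562 proxy combined with the if False trick can be applied to my_module directly, which keeps PyCharm's autocompletion and removes the need for the fixture. The following is an adaptation of the answer's slowmodule example, not code from the answer itself, and it assumes the question's my_module really exposes the my_func being asserted on:

# my_module_proxy.py - PEP 562 proxy: my_module is only imported on first attribute access
def __getattr__(name):
    import my_module
    return getattr(my_module, name)

# test_something.py
if False:                                # never executed, but lets PyCharm resolve the names
    import my_module
else:
    import my_module_proxy as my_module  # what actually runs

def test_something():
    assert my_module.my_func() == 5      # the expensive import happens here, at test run time

As in the answer, Python's module cache means my_module is imported at most once per interpreter session, so the expensive import is only paid if a test that touches it actually runs.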