Dataset columns (with observed ranges):
question_id: int64, 59.5M to 79.6M
creation_date: string (date), 2020-01-01 00:00:00 to 2025-05-16 00:00:00
link: string, 60 to 163 characters
question: string, 53 to 28.9k characters
accepted_answer: string, 26 to 29.3k characters
question_vote: int64, 1 to 410
answer_vote: int64, -9 to 482
74,051,135
2022-10-13
https://stackoverflow.com/questions/74051135/split-a-text-by-specific-word-or-phrase-and-keep-the-word-in-python
Is there any elegant way of splitting a text by a word and keep the word as well. Although there are some works around split with re package and pattern like (Python RE library String Split but keep the delimiters/separators as part of the next string), but none of them works for this scenario when the delimiter is repeated multiple times. For example: s = "I want to split text here, and also keep here, and return all as list items" Using partition: s.partition("here") >> ('I want to split text ', 'here', ', and also keep here, and return all as list items') Using re.split(): re.split("here",s) >> ['I want to split text ', ', and also keep ', ', and return all as list items'] The desired output should be something to the following list: ['I want to split text', 'here', ' , and also keep ', 'here', ' , and return all as list items']
Yes. What you're looking for is a feature of the re.split() method. If you use a capture group in the expression, it will return the matched terms as well: import re s = "I want to split text here, and also keep here, and return all as list items" r = re.split('(here)', s) print(r) Result: ['I want to split text ', 'here', ', and also keep ', 'here', ', and return all as list items'] If you define multiple capture groups, it will return each of them individually. So you can return just a part of the delimiter, or multiple parts that each get returned. I've done some fairly crazy things with this feature in the past. It can replace an appreciable amount of code that would otherwise be necessary.
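To make the multiple-capture-group remark concrete, here is a small illustrative sketch based on the same sample string; the extra comma group is an addition for demonstration, not part of the original answer:

```python
# Each capture group's match is returned as its own list item, so you can
# split out parts of the delimiter separately.
import re

s = "I want to split text here, and also keep here, and return all as list items"
r = re.split(r"(here)(,)\s*", s)
print(r)
# ['I want to split text ', 'here', ',', 'and also keep ', 'here', ',', 'and return all as list items']
```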
4
4
74,047,903
2022-10-12
https://stackoverflow.com/questions/74047903/django-channels-and-react-error-websocket-connection-to-ws-localhost8000-my
First-of-all, I still don't know whether the issue is on front-end or back-end side, but it seems like back-end is more likely. I have built a Django application using django-channels to send packets of data to a front-end react.js SPA via websocket connection; I've also built a simple vanilla-javascript client previously (served by Django), and it doesn't have this issue. The issue is, I get this error in firefox console upon open (they switch places at random times, indicating they happen at the same time): Firefox can't establish a connection to the server at ws://localhost:8000/ws/XY_Broadcast/. App.jsx:32 The connection to ws://localhost:8000/ws/XY_Broadcast/ was interrupted while the page was loading. App.jsx:32 using Brave browser (chromium-based), I get this: (warning) App.jsx:69 WebSocket connection to 'ws://localhost:8000/ws/XY_Broadcast' failed: WebSocket is closed before the connection is established. (error)App.jsx:38 WebSocket connection to 'ws://localhost:8000/ws/XY_Broadcast' failed: The odd thing is, though, that my page seems to open and work - it prints messages both upon open and upon receiving data after some time. The websocket error code is 1006. What I have tried: A few browser extensions for testing websocket connection - none of them would connect (which is why I think it's a back-end issue). I also don't think the issue is with CORS - I've had issues with it before (my front-end apps display wouldn't display an image they get from an HTTP URL at backend) and I have an extension to turn it ON/OFF in browser - it doesn't affect anything. I tried switching ASGI_APPLICATION Django setting to myproject.asgi.application as mentioned in django-channels docs, but it didn't help, so I set it back to myApp.routing.application I tried running the project with daphne instead of runserver, in my docker-compose file, I have daphne -b 0.0.0.0 -p 8000 some_demo_django.asgi:application. The React pages open up and display the image fetched from back-end, but the errors persist. 
Additionally, though, old (vanilla JS pages) won't load, I get 500 Internal Server Error Daphne HTTP processing error, with the following traceback from the backend: File "/usr/local/lib/python3.10/site-packages/daphne/http_protocol.py", line 163, in process self.application_queue = yield maybeDeferred( TypeError: ASGIHandler.__call__() missing 2 required positional arguments: 'receive' and 'send' 172.19.0.1:41728 - - [13/Oct/2022:00:06:48] "GET /dev_td/" 500 452 172.19.0.1:41718 - - [13/Oct/2022:00:06:49] "WSDISCONNECT /ws/XY_Broadcast" - - 2022-10-13 00:06:49,429 ERROR Traceback (most recent call last): File "/usr/local/lib/python3.10/site-packages/daphne/http_protocol.py", line 163, in process self.application_queue = yield maybeDeferred( TypeError: ASGIHandler.__call__() missing 2 required positional arguments: 'receive' and 'send' 172.19.0.1:41728 - - [13/Oct/2022:00:06:49] "GET /dev_td/" 500 452 2022-10-13 00:06:55,721 ERROR Traceback (most recent call last): File "/usr/local/lib/python3.10/site-packages/daphne/http_protocol.py", line 163, in process self.application_queue = yield maybeDeferred( TypeError: ASGIHandler.__call__() missing 2 required positional arguments: 'receive' and 'send' The React code is: export default function App() { const [messages, setMessages] = useState([]); useEffect(() => { const ws = new WebSocket('ws://localhost:8000/ws/XY_Broadcast/'); ws.onopen = (event) => { console.log(event.data) console.log('ws opened') }; ws.onmessage = function (event) { console.log(event.data) }; ws.onclose = function(event){ console.log("closed, code is:",event.code) }; return () => ws.close(); }, []); The previously-built vanilla JavaScript client code is: var XY_Broadcast = new WebSocket( 'ws://' + window.location.host + '/ws/XY_Broadcast/'); XY_Broadcast.onmessage = function(e) { let loc_data = JSON.parse(e.data); ... (my logic) XY_Broadcast.onclose = function (e) { console.error('XY_Broadcast socket connection closed'); for(var i=0; i<e.length; i++) { console.log(e) } }; I've browsed through all the answers I could find all over the internet and had discussed the issue with the senior developers in my company, but no success.
This happens when you attempt to close the WebSocket connection with ws.close() before it has had a chance to actually connect. Try changing the useEffect cleanup function from: return () => ws.close(); to: return () => { if (ws.readyState === 1) { ws.close(); } };
3
5
74,047,721
2022-10-12
https://stackoverflow.com/questions/74047721/prevent-selenium-from-taking-the-focus-to-the-opened-window
I have 40 Python unit tests and each of them opens a Selenium driver, as they are separate files and cannot share the same driver. from selenium import webdriver webdriver.Firefox() The above commands take the focus to the newly opened window. For example, if I am in my editor typing something, in the middle of my work a Selenium browser suddenly opens and Linux switches to that window. I am not sure whether Windows or Mac have a similar problem. This means that every time I run a unit test, I cannot use my computer, as it keeps switching away from the application I am currently using. How can I tell Selenium not to switch to the opened window?
Here is an example of running Selenium/Firefox on linux, in headless mode. You can see various imports as well - gonna leave them there. Browser will start in headless mode, will go to ddg page and print out the page source, then quit. from selenium.common.exceptions import NoSuchElementException, TimeoutException from selenium import webdriver from selenium.webdriver.firefox.service import Service from selenium.webdriver.common.keys import Keys from selenium.webdriver.firefox.options import Options as Firefox_Options from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support.ui import Select from selenium.webdriver.support import expected_conditions as EC firefox_options = Firefox_Options() firefox_options.add_argument("--width=1280") firefox_options.add_argument("--height=720") firefox_options.headless = True driverService = Service('chromedriver/geckodriver') ## path where geckodriver is browser = webdriver.Firefox(service=driverService, options=firefox_options) wait = WebDriverWait(browser, 20) browser.get('https://duckduckgo.com') print(browser.page_source) browser.quit() Browser can also take screenshots while in headless mode, if any unit test requires it. A reasonably well structured documentation for Selenium can be found at https://www.selenium.dev/documentation/ (with some gaps, of course, but generally decent).
4
4
74,047,268
2022-10-12
https://stackoverflow.com/questions/74047268/is-there-a-way-to-split-a-string-on-delimiters-including-colon-except-when-it
I am trying to split the string below on a number of delimiters, including \n, comma (,), and colon (:), except when the colon is part of a time value. Below is my string: values = 'City:hell\nCountry:rome\nUpdate date: 2022-09-26 00:00:00' I have tried: result = re.split(':|,|\n', values) However, this ends up splitting the time, resulting in ['City','hell','Country','rome','Update date',' 2022-09-26 00','00','00'] whereas the expected outcome is ['City','hell','Country','rome','Update date', '2022-09-26 00:00:00'] Any help/assistance will be appreciated.
You could use a look-behind to ensure that what is before the colon is not a pair of digits: re.split('(?<![0-9]{2}):\s*|,|\n', values) It splits on colons followed by optional spaces when they are not preceded by a pair of digits, as well as on , and \n. So ':' is a separator (when not preceded by a pair of digits), but so are ': ' and ':  ' (a colon followed by one or more spaces; still, only when not preceded by a pair of digits). The consequence is that if, as is the case in your string, there is a space after a colon, then that space is not included in the next field (since it is part of the separator, not of a field). Or, you could keep the first version of my answer (without \s*) and just .strip() the fields.
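Put together as a runnable snippet with the string from the question:

```python
# Split on colon (plus optional trailing spaces), comma, or newline, except when
# the colon is preceded by two digits (i.e., when it is part of a time value).
import re

values = 'City:hell\nCountry:rome\nUpdate date: 2022-09-26 00:00:00'
result = re.split(r'(?<![0-9]{2}):\s*|,|\n', values)
print(result)
# ['City', 'hell', 'Country', 'rome', 'Update date', '2022-09-26 00:00:00']
```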
4
4
74,043,416
2022-10-12
https://stackoverflow.com/questions/74043416/does-the-in-operator-have-side-effects
I have code which writes bytes to a serial port (pySerial 3.5) and then receives bytes. The code should work for micropython too, which uses UART instead of pySerial. With UART, I have to add a small delay before reading. The user should not have to pass an additional flag whether to add that delay, because the serial_port object is already platform specific, for example the UART implementation provides a .any() method which the pySerial implementation does not have. So my first attempt is to check for this method, and only delay when it exists. def __init__(self, serial_port): self.serial_port.write(my_bytes) # When checking for any on serial_port, I receive no bytes. if "any" in self.serial_port: print("UART specific delay will happen here") # When instead checking with getattr(self.serial_port, "any", None), bytes come in raw_config = self.serial_port.read(128) As soon as I add this "any" in self.serial_port check, the read() method returns an empty byte array. When I remove the check, I get bytes again. When I replace the check by getattr(self.serial_port, "any", None), I get bytes too. When I just run time.sleep() or anything else, I get bytes. When I add the in check, bytes are gone. Why on earth? Isn't an in check supposed to be side effect free? (All runs were executed with pySerial ports only.)
As mentioned in a comment by deceze, in just calls the __contains__() method. Depending on how this is implemented, it can have side effects. A simple example to demonstrate this: class Example: def __contains__(self, x): print('Side effect!') x = Example() if 'something' in x: print('Found') else: print('Not found') Output: Side effect! Not found In the case of your serial_port example, I guess the __contains__() method is implemented in such a way that it reads bytes until "any" is found, or until there are no more bytes. Consequently, all bytes are already consumed and your function returns an empty array. NB: According to the documentation: For objects that don’t define __contains__(), the membership test first tries iteration via __iter__(), then the old sequence iteration protocol via __getitem__().
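For the original serial-port scenario, a minimal sketch of the side-effect-free check the question already hints at (the payload and delay value are illustrative, not from the question):

```python
# Test for the method on the object rather than using the `in` membership test,
# which may trigger __contains__/__iter__ and consume the buffered bytes.
import time

def read_config(serial_port):
    serial_port.write(b"request-config")   # illustrative payload
    if hasattr(serial_port, "any"):        # UART-style port: add the small delay
        time.sleep(0.05)                   # illustrative delay value
    return serial_port.read(128)
```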
3
3
74,042,960
2022-10-12
https://stackoverflow.com/questions/74042960/find-the-index-value-when-the-value-changes-in-a-column-in-dataframe-pandas
I have a dataframe as follows: df1 = col_1 val_1 0 4.0 0.89 1 4.0 0.56 2 49.0 0.7 3 49.0 1.23 4 52.0 0.8 5 52.0 0.12 6 32.0 0.5 I want to find the index value where the value in col_1 changes and put it in a list. I tried the following: n_change = (np.where(~df1.col_1.diff(+1).isin([0, np.nan]))) But it returns a tuple of arrays and it is difficult to iterate through it. I want a solution like n_change = [2,4,6] or n_change = array(2,4,6). Is there a better way to do this?
You can use: df.index[df['col_1'].ne(df['col_1'].shift().bfill())] # or with diff # df.index[df['col_1'].diff().fillna(0).ne(0)] output: Int64Index([2, 4, 6], dtype='int64') As list: df.index[df['col_1'].ne(df['col_1'].shift().bfill())].tolist() output: [2, 4, 6] With your solution: np.where(~df.col_1.diff().isin([0, np.nan]))[0].tolist() output: [2, 4, 6]
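For completeness, a self-contained version of the first approach, rebuilt with the data from the question:

```python
# Mark positions where col_1 differs from the previous row (the first row is
# backfilled so it never counts as a change), then collect those index labels.
import pandas as pd

df1 = pd.DataFrame({
    "col_1": [4.0, 4.0, 49.0, 49.0, 52.0, 52.0, 32.0],
    "val_1": [0.89, 0.56, 0.7, 1.23, 0.8, 0.12, 0.5],
})

n_change = df1.index[df1["col_1"].ne(df1["col_1"].shift().bfill())].tolist()
print(n_change)  # [2, 4, 6]
```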
3
3
74,042,649
2022-10-12
https://stackoverflow.com/questions/74042649/pandas-how-to-duplicate-a-value-for-every-substring-in-a-column
I have a pandas dataframe as folllows, import pandas as pd df = pd.DataFrame({'text': ['set an alarm for [time : two hours from now]','wake me up at [time : nine am] on [date : friday]','check email from [person : john]']}) print(df) original dataframe text 0 set an alarm for [time : two hours from now] 1 wake me up at [time : nine am] on [date : friday] 2 check email from [person : john] I would like to repeat the list and the labels (date, time, and person) for all the values inside the lists if the value inside the list is more than one. so the desired output is, desired output: new_text 0 set an alarm for [time : two] [time : hours] [time : from] [time : now] 1 wake me up at [time : nine] [time : am] on [date : friday] 2 check email from [person : john] I have so far tried to separate the lists from the original column, but do not know how to continue. df['separated_list'] = df.text.str.split(r"\s(?![^[]*])|[|]").apply(lambda x: [y for y in x if '[' in y])
You can use a regex with a custom function as replacement: df['new_text'] = df.text.str.replace( r"\[([^\[\]]*?)\s*:\s*([^\[\]]*)\]", lambda m: ' '.join([f'[{m.group(1)} : {x}]' for x in m.group(2).split()]), # new chunk for each word regex=True) output: text new_text 0 set an alarm for [time : two hours from now] set an alarm for [time : two] [time : hours] [time : from] [time : now] 1 wake me up at [time : nine am] on [date : friday] wake me up at [time : nine] [time : am] on [date : friday] 2 check email from [person : john] check email from [person : john] regex demo
3
2
74,037,740
2022-10-12
https://stackoverflow.com/questions/74037740/can-i-use-a-context-manager-for-a-whole-module
I'd like to have a context manager that is opened (entered?) for the lifetime of the application. It should be opened when a module loads and be closed when the module is destroyed. Wrapping the whole module code won't work, since then the context manager is closed when the module is finished loading and is no longer open when the methods are actually called. import aiohttp with context as c: def f1(): c.do_something('one') def f2(): c.do_something('two') Specifically, I want to create an aiohttp.ClientSession for the module which is reused for multiple different requests and cleanly closed when the application stops. Is it even a good idea to use a context manager for this, or is there a better way?
A context manager is not suitable for managing the lifecycle of an object across modules. Instead, you can register the close method of your aiohttp.ClientSession instance with the atexit module so that it would be called when the app stops for any reason. Since in this case the close method is a coroutine, register the asyncio.run function with the close coroutine as an argument: import aiohttp import asyncio import atexit c = aiohttp.ClientSession(some_url) atexit.register(asyncio.run, c.close())
4
7
74,037,299
2022-10-12
https://stackoverflow.com/questions/74037299/match-phone-number-using-regex-5x000y000
I am trying to match the pattern 5X000Y000 of a 9-digit number. What I have tried I have written the regex below: B_5 = 530004000 B_5_pattern = re.sub(r'^5(\d(000))(\d(000))', "Bronze", str(B_5)) print(B_5_pattern) What I want to achieve I want to update my regex to add the condition that X000 cannot be the same as Y000 (X != Y). So the regex will match 530004000 but will not match 530003000.
You could use: B_5 = "530004000" if re.search(r'^5(\d)0{3}(?!\1)\d0{3}$', B_5): print("MATCH")
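A quick, runnable check of that pattern against both numbers from the question:

```python
# The negative lookahead (?!\1) rejects a Y digit equal to the captured X digit.
import re

pattern = r'^5(\d)0{3}(?!\1)\d0{3}$'
for number in ("530004000", "530003000"):
    print(number, "MATCH" if re.search(pattern, number) else "NO MATCH")
# 530004000 MATCH
# 530003000 NO MATCH
```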
3
5
73,972,660
2022-10-6
https://stackoverflow.com/questions/73972660/how-to-return-data-in-json-format-using-fastapi
I have written the same API application with the same function in both FastAPI and Flask. However, when returning the JSON, the format of data differs between the two frameworks. Both use the same json library and even the same exact code: import json from google.cloud import bigquery bigquery_client = bigquery.Client() @router.get('/report') async def report(request: Request): response = get_clicks_impression(bigquery_client, source_id) return response def get_user(client, source_id): try: query = """ SELECT * FROM .....""" job_config = bigquery.QueryJobConfig( query_parameters=[ bigquery.ScalarQueryParameter("source_id", "STRING", source_id), ] ) query_job = client.query(query, job_config=job_config) # Wait for the job to complete. result = [] for row in query_job: result.append(dict(row)) json_obj = json.dumps(result, indent=4, sort_keys=True, default=str) except Exception as e: return str(e) return json_obj The returned data in Flask was dict: { "User": "fasdf", "date": "2022-09-21", "count": 205 }, { "User": "abd", "date": "2022-09-27", "count": 100 } ] While in FastAPI was string: "[\n {\n \"User\": \"aaa\",\n \"date\": \"2022-09-26\",\n \"count\": 840,\n]" The reason I use json.dumps() is that date cannot be itterable.
The wrong approach If you serialize the object before returning it, using json.dumps() (as shown in your example), for instance: import json @app.get('/user') async def get_user(): return json.dumps(some_dict, indent=4, default=str) the JSON object that is returned will end up being serialized twice, as, in this case, FastAPI will automatically serialize the return value behind the scenes as well. Hence, the reason for the output string you ended up with: "[\n {\n \"User\": \"aaa\",\n \"date\": \"2022-09-26\",\n ... Solutions Have a look at the available solutions, as well as the explanation given below as to how FastAPI/Starlette works under the hood. Option 1 The first option is to return data (such as dict, list, etc.) as usualβ€” i.e., using, for example, return some_dictβ€”and FastAPI, behind the scenes, will automatically convert that return value into JSON, after first converting the data into JSON-compatible data, using the jsonable_encoder. The jsonable_encoder ensures that objects that are not serializable, such as datetime objects, are converted to a str. Then, FastAPI will put that JSON-compatible data inside of a JSONResponse, which will return an application/json encoded response to the client (this is also explained in Option 1 of this answer). The JSONResponse, as can be seen in Starlette's source code here, will use the Python standard json.dumps() to serialize the dict (for alternatvie/faster JSON encoders, see this answer and this answer). Example from datetime import date d = [ {"User": "a", "date": date.today(), "count": 1}, {"User": "b", "date": date.today(), "count": 2}, ] @app.get('/') async def main(): return d The above is equivalent to: from fastapi.responses import JSONResponse from fastapi.encoders import jsonable_encoder @app.get('/') async def main(): return JSONResponse(content=jsonable_encoder(d)) Output: [{"User":"a","date":"2022-10-21","count":1},{"User":"b","date":"2022-10-21","count":2}] Changing the status_code To change the status_code when returning a dict object, you could use the Response object, as described in the documentation, and as shown below: from fastapi import Response, status @app.get('/') async def main(response: Response): response.status_code = status.HTTP_201_CREATED # or simply = 201 return d It is also possible to specify a custom status_code when returning a JSONResponse or a custom Response directly (it is demonstrated in Option 2 below), as well as any other response class that inherits from Response (see FastAPI's documentation here, as well as Starlette's documentation here and responses' implementation here). The implementation of FastAPI/Starlette's JSONResponse class can be found here, as well as a list of HTTP status codes that one may use (instead of passing the HTTP response status code as an int directly) can be seen here. Example: from fastapi import status from fastapi.responses import JSONResponse from fastapi.encoders import jsonable_encoder @app.get('/') async def main(): return JSONResponse(content=jsonable_encoder(d), status_code=status.HTTP_201_CREATED) Option 2 If, for any reason (e.g., trying to force some custom JSON format), you have to serialize the object before returning it, you can then return a custom Response directly, as described in this answer. As per the documentation: When you return a Response directly its data is not validated, converted (serialized), nor documented automatically. Additionally, as described here: FastAPI (actually Starlette) will automatically include a Content-Length header. 
It will also include a Content-Type header, based on the media_type and appending a charset for text types. Hence, you can also set the media_type to whatever type you are expecting the data to be; in this case, that is application/json. Example is given below. To optionally change the status_code of a Response object, you could use the same approach described in Option 1 with JSONResponse, e.g., Response(content=json_str, status_code=status.HTTP_201_CREATED, ...). Note 1: The JSON outputs posted in this answer (in both Options 1 & 2) are the result of accessing the API endpoint through the browser directly (i.e., by typing the URL in the address bar of the browser and then hitting the enter key). If you tested the endpoint through Swagger UI at /docs instead, you would see that the indentation differs (in both options). This is due to how Swagger UI formats application/json responses. If you needed to force your custom indentation on Swagger UI as well, you could avoid specifying the media_type for the Response in the example below. This would result in displaying the content as text, as the Content-Type header would be missing from the response, and hence, Swagger UI couldn't recognize the type of the data, in order to custom-format them (in case of application/json responses). Note 2: Setting the default argument to str in json.dumps() is what makes it possible to serialize the date object, otherwise if it wasn't set, you would get: TypeError: Object of type date is not JSON serializable. The default is a function that gets called for objects that can't otherwise be serialized. It should return a JSON-encodable version of the object. In this case it is str, meaning that every object that is not serializable, it is converted to string. You could also use a custom function or JSONEncoder subclass, as demosntrated here, if you would like to serialize an object in a custom way. Additionally, as mentioned in Option 1 earlier, one could instead use alternative JSON encoders, such as orjson, that might improve the application's performance compared to the standard json library (see this answer and this answer). Note 3: FastAPI/Starlette's Response accepts as a content argument either a str or bytes object. As shown in the implementation here, if you don't pass a bytes object, Starlette will try to encode it using content.encode(self.charset). Hence, if, for instance, you passed a dict, you would get: AttributeError: 'dict' object has no attribute 'encode'. In the example below, a JSON str is passed, which will later be encoded into bytes (you could alternatively encode it yourself before passing it to the Response object). 
Example from fastapi import Response from datetime import date import json d = [ {"User": "a", "date": date.today(), "count": 1}, {"User": "b", "date": date.today(), "count": 2}, ] @app.get('/') async def main(): json_str = json.dumps(d, indent=4, default=str) return Response(content=json_str, media_type='application/json') Output: [ { "User": "a", "date": "2022-10-21", "count": 1 }, { "User": "b", "date": "2022-10-21", "count": 2 } ] Dealing with Pydantic models If you were dealing with Pydantic models, you could either use the approach described in Option 1 above, i.e., return the model as is (e.g., return MyModel(msg="test")) or use model_dump() (which replaced dict() from Pydantic V1) to convert it into a dict and then return it (e.g., MyModel(msg="test").model_dump()), or use model_dump_json() (which replaced json() from Pydantic V1) to convert the model instance into a JSON-encoded string, and then return a custom Response directly: from fastapi import FastAPI, Response, status from pydantic import BaseModel class MyModel(BaseModel): msg: str app = FastAPI() @app.get('/') async def main(): m = MyModel(msg="test") return Response(content=m.model_dump_json(), status_code=status.HTTP_201_CREATED, media_type='application/json') Final Notes Please note that in all the examples provided above, the endpoints were defined with async def, and that should be perfectly fine, when dealing with small amounts of JSON data. If, however, you were dealing with rather large JSON, where the data serialization would take a long time to complete and the event loop could get blocked (since json.dumps()/model_dump_json()/etc. are executed synchronously), please have a look at the relevant solutions explained in this answer.
24
30
74,009,210
2022-10-10
https://stackoverflow.com/questions/74009210/how-to-create-a-fastapi-endpoint-that-can-accept-either-file-form-or-json-body
I would like to create an endpoint in FastAPI that might receive either multipart/form-data or JSON body. Is there a way I can make such an endpoint accept either, or detect which type of data is receiving?
Option 1 You could have a dependency function, where you would check the value of the Content-Type request header and parse the body using Starlette's methods, accordingly. Note that just because a request's Content-Type header says, for instance, application/json, application/x-www-form-urlencoded or multipart/form-data, doesn't always mean that this is true, or that the incoming data is a valid JSON, or File(s) and/or form-data. Hence, you should use a try-except block to catch any potential errors when parsing the body. Also, you may want to implement various checks to ensure that you get the correct type of data and all the fields that you expect to be required. For JSON body, you could create a BaseModel and use Pydantic's parse_obj function to validate the received dictionary (similar to Method 3 of this answer). Regarding File/Form-data, you can use Starlette's Request object directly, and more specifically, the request.form() method to parse the body, which will return a FormData object that is an immutable multidict (i.e., ImmutableMultiDict) containing both file uploads and text input. When you send a list of values for some form input, or a list of files, you can use the multidict's getlist() method to retrieve the list. In the case of files, this would return a list of UploadFile objects, which you can use in the same way as this answer and this answer to loop through the files and retrieve their content. Instead of using request.form(), you could also read the request body directly from the stream and parse it using the streaming-form-data library, as demonstrated in this answer. Working Example from fastapi import FastAPI, Depends, Request, HTTPException from starlette.datastructures import FormData from json import JSONDecodeError app = FastAPI() async def get_body(request: Request): content_type = request.headers.get('Content-Type') if content_type is None: raise HTTPException(status_code=400, detail='No Content-Type provided!') elif content_type == 'application/json': try: return await request.json() except JSONDecodeError: raise HTTPException(status_code=400, detail='Invalid JSON data') elif (content_type == 'application/x-www-form-urlencoded' or content_type.startswith('multipart/form-data')): try: return await request.form() except Exception: raise HTTPException(status_code=400, detail='Invalid Form data') else: raise HTTPException(status_code=400, detail='Content-Type not supported!') @app.post('/') def main(body = Depends(get_body)): if isinstance(body, dict): # if JSON data received return body elif isinstance(body, FormData): # if Form/File data received msg = body.get('msg') items = body.getlist('items') files = body.getlist('files') # returns a list of UploadFile objects if files: print(files[0].file.read(10)) return msg Option 2 Another option would be to have a single endpoint, and have your File(s) and/or Form data parameters defined as Optional (have a look at this answer and this answer for all the available ways on how to do that). Once a client's request enters the endpoint, you could check whether the defined parameters have any values passed to them, meaning that they were included in the request body by the client and this was a request having as Content-Type either application/x-www-form-urlencoded or multipart/form-data (Note that if you expected to receive arbitrary file(s) or form-data, you should rather use Option 1 above ). 
Otherwise, if every defined parameter was still None (meaning that the client did not include any of them in the request body), then this was likely a JSON request, and hence, proceed with confirming that by attempting to parse the request body as JSON. Working Example from fastapi import FastAPI, UploadFile, File, Form, Request, HTTPException from typing import Optional, List from json import JSONDecodeError app = FastAPI() @app.post('/') async def submit(request: Request, items: Optional[List[str]] = Form(None), files: Optional[List[UploadFile]] = File(None)): # if File(s) and/or form-data were received if items or files: filenames = None if files: filenames = [f.filename for f in files] return {'File(s)/form-data': {'items': items, 'filenames': filenames}} else: # check if JSON data were received try: data = await request.json() return {'JSON': data} except JSONDecodeError: raise HTTPException(status_code=400, detail='Invalid JSON data') Option 3 Another option would be to define two separate endpoints; one to handle JSON requests and the other for handling File/Form-data requests. Using a middleware, you could check whether the incoming request is pointing to the route you wish users to send either JSON or File/Form data (in the example below that is / route), and if so, check the Content-Type similar to the previous option and reroute the request to either /submitJSON or /submitForm endpoint, accordingly (you could do that by modifying the path property in request.scope, as demonstrated in this answer). The advantage of this approach is that it allows you to define your endpoints as usual, without worrying about handling errors if required fields were missing from the request, or the received data were not in the expected format. Working Example from fastapi import FastAPI, Request, Form, File, UploadFile from fastapi.responses import JSONResponse from typing import List, Optional from pydantic import BaseModel app = FastAPI() class Item(BaseModel): items: List[str] msg: str @app.middleware("http") async def some_middleware(request: Request, call_next): if request.url.path == '/': content_type = request.headers.get('Content-Type') if content_type is None: return JSONResponse( content={'detail': 'No Content-Type provided!'}, status_code=400) elif content_type == 'application/json': request.scope['path'] = '/submitJSON' elif (content_type == 'application/x-www-form-urlencoded' or content_type.startswith('multipart/form-data')): request.scope['path'] = '/submitForm' else: return JSONResponse( content={'detail': 'Content-Type not supported!'}, status_code=400) return await call_next(request) @app.post('/') def main(): return @app.post('/submitJSON') def submit_json(item: Item): return item @app.post('/submitForm') def submit_form(msg: str = Form(...), items: List[str] = Form(...), files: Optional[List[UploadFile]] = File(None)): return msg Option 4 I would also suggest you have a look at this answer, which provides solutions on how to send both JSON body and Files/Form-data together in the same request, which might give you a different perspective on the problem you are trying to solve. For instance, declaring the various endpoint's parameters as Optional and checking which ones have been received and which haven't from a client's requestβ€”as well as using Pydantic's model_validate_json() method to parse a JSON string passed in a Form parameterβ€”might be another approach to solving the problem. Please see the linked answer above for more details and examples. 
Testing Options 1, 2 & 3 using Python requests test.py import requests url = 'http://127.0.0.1:8000/' files = [('files', open('a.txt', 'rb')), ('files', open('b.txt', 'rb'))] payload ={'items': ['foo', 'bar'], 'msg': 'Hello!'} # Send Form data and files r = requests.post(url, data=payload, files=files) print(r.text) # Send Form data only r = requests.post(url, data=payload) print(r.text) # Send JSON data r = requests.post(url, json=payload) print(r.text)
8
13
73,991,575
2022-10-7
https://stackoverflow.com/questions/73991575/how-to-transform-polars-datetime-column-into-a-string-column
I'm trying to change a datetime column to a string column using polars library. I only want the dates on the new column: import polars as pl df = pl.from_repr(""" β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ date_time β”‚ β”‚ --- β”‚ β”‚ datetime[ns] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ 2007-04-19 00:00:00 β”‚ β”‚ 2007-05-02 00:00:00 β”‚ β”‚ 2007-05-03 00:00:00 β”‚ β”‚ 2007-05-03 00:00:00 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ """) The solution below is including the time, I just need the date. df.with_columns(pl.col('date_time').cast(pl.String)) shape: (4, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ date_time β”‚ β”‚ --- β”‚ β”‚ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ 2007-04-19 00:00:00.000000000 β”‚ β”‚ 2007-05-02 00:00:00.000000000 β”‚ β”‚ 2007-05-03 00:00:00.000000000 β”‚ β”‚ 2007-05-03 00:00:00.000000000 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
You should try this: # Polars df = df.with_columns(pl.col('date_time').dt.strftime('%Y-%m-%d')) # Pandas df['date_time'] = df['date_time'].dt.strftime('%Y-%m-%d') Edit: added Polars
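A self-contained sketch of the Polars line above (the construction of the sample frame is illustrative, not from the original answer):

```python
# Format the datetime column as a date-only string, e.g. "2007-04-19".
import polars as pl
from datetime import datetime

df = pl.DataFrame({"date_time": [datetime(2007, 4, 19), datetime(2007, 5, 2)]})

out = df.with_columns(pl.col("date_time").dt.strftime("%Y-%m-%d"))
print(out)  # date_time is now a str column without the time component
```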
4
5
73,991,045
2022-10-7
https://stackoverflow.com/questions/73991045/how-to-specify-type-for-function-parameter-python
I want to restrict scope of functions that can be passed as parameter to another function. For example, to restrict functions to be only one from two specified, or from particular module, or by signature. I tried the code below but in it there is now restrictions: as parameter can be passed any function. Is this possible in Python? def func_as_param(): print("func_as_param called") def other_func(): print("other_func called") def func_with_func_arg(func: func_as_param): # this is not giving restrictions # def func_with_func_arg(func: type(func_as_param)): # this is also not giving restrictions print("func_with_func_arg called") func() def test_func_with_func_arg(): print("test_func_with_func_arg") func_with_func_arg(func_as_param) func_with_func_arg(other_func) # <- here IDE must complain that only func_as_param is expected
You might want to use the Callable type: frameworks expecting callback functions of specific signatures can be type hinted using Callable[[Arg1Type, Arg2Type], ReturnType]. This might help: https://docs.python.org/3/library/typing.html#annotating-callable-objects NOTE: Type annotations in Python are not make-or-break like in C. They're optional chunks of syntax that we can add to make our code more explicit. Erroneous type annotations will do nothing more than highlight the incorrect annotation in our code editor; no errors are ever raised due to annotations. If that's necessary, you must do the checking yourself.
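A hedged sketch of that Callable hint, reusing the function names from the question. Note that it constrains only the signature (here: no arguments, returns None), not which particular function may be passed, and it is enforced by static checkers such as mypy rather than at runtime:

```python
# Restrict the parameter to callables taking no arguments and returning None.
from typing import Callable

def func_as_param() -> None:
    print("func_as_param called")

def func_with_func_arg(func: Callable[[], None]) -> None:
    print("func_with_func_arg called")
    func()

func_with_func_arg(func_as_param)
```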
5
6
73,989,150
2022-10-7
https://stackoverflow.com/questions/73989150/how-to-make-a-python-magicmock-object-json-serializable
I am using a Python Mock object for a third-party package that needs to JSON serialize my mock. This means that I cannot change the invocation of json.dumps, so must use the solution here: https://stackoverflow.com/a/31207881/19643198 class FileItem(dict): def __init__(self, fname): dict.__init__(self, fname=fname) f = FileItem('tasks.txt') json.dumps(f) #No need to change anything here The only problem is that my object is not of class FileItem, but needs to be a MagicMock. This suggests multiple inheritance, so something like: class FileItem(MagicMock, dict): def __init__(self): MagicMock.__init__(self) dict.__init__(self) Unfortunately, multiple inheritance from both dict and MagicMock seems not to work. In case this helps make this problem easier to solve, the third-party library does not need to deserialize or even use the JSON serialized representation of the MagicMock.
There is no need to consider multiple inheritance. The problem is that Python's JSON module doesn't know how to serialize certain types, like MagicMock (or other common types like datetime, for that matter). You can tell json how to deal with unknown types by either using the cls= or default= parameters, but since you are asking about unit tests and don't want to modify the code you're testing, you can solve this by also mocking json.dumps (or json.dump). For instance, import json from json import dumps as _dumps from unittest.mock import MagicMock, Mock f1 = {'fname': 'tasks.txt'} # a normal dict f2 = {'fname': MagicMock()} # some value containing a MagicMock instance def dumps_wrapper(*args, **kwargs): return _dumps(*args, **(kwargs | {"default": lambda obj: "mock"})) # mock the `dumps` function json.dumps = MagicMock(wraps=dumps_wrapper) # now you can serialize objects containing MagicMock (or Mock) objects json.dumps(f1) # {"fname": "tasks.txt"} json.dumps(f2) # {"fname": "mock"} In your unit tests, you can use the patch function to temporarily patch the json.dumps function: # my_module.py import json def make_json(obj): return json.dumps(obj) # test_mocked_json.py import unittest from unittest.mock import MagicMock, patch from json import dumps as _dumps import my_module def dumps_wrapper(*args, **kwargs): return _dumps(*args, **(kwargs | {"default": lambda obj: "mock"})) class TestMockedJSON(unittest.TestCase): def test_mocked_json(self): patch_json = patch( "my_module.json.dumps", MagicMock(wraps=dumps_wrapper)) with patch_json: f1 = {'fname': 'tasks.txt'} # a normal dict f2 = {'fname': MagicMock()} # some value containing a MagicMock instance s1 = my_module.make_json(f1) s2 = my_module.make_json(f2) assert s1 assert s2 print(s1, s2, "ok!") Which you can run with: python -m unittest test_mocked_json.py # {"fname": "tasks.txt"} {"fname": "mock"} ok! # . # ---------------------------------------------------------------------- # Ran 1 test in 0.001s # # OK
5
2
73,962,743
2022-10-5
https://stackoverflow.com/questions/73962743/fastapi-is-not-returning-cookies-to-react-frontend
Why doesn't FastAPI return the cookie to my frontend, which is a React app? Here is my code: @router.post("/login") def user_login(response: Response,username :str = Form(),password :str = Form(),db: Session = Depends(get_db)): user = db.query(models.User).filter(models.User.mobile_number==username).first() if not user: raise HTTPException(400, detail='wrong phone number or password') if not verify_password(password, user.password): raise HTTPException(400, detail='wrong phone number or password') access_token = create_access_token(data={"sub": user.mobile_number}) response.set_cookie(key="fakesession", value="fake-cookie-session-value") #here I am set cookie return {"status":"success"} When I login from Swagger UI autodocs, I can see the cookie in the response headers using DevTools on Chrome browser. However, when I login from my React app, no cookie is returned. I am using axios to send the request like this: await axios.post(login_url, formdata)
First, create the cookie, as shown in the example below, and make sure there is no error returned when performing the Axios POST request, and that you get a 'status': 'success' response with 200 status code. You may want to have a look at this answer as well, which provides explains how to use the max_age and expires flags too. from fastapi import FastAPI, Response app = FastAPI() @app.get('/') def main(response: Response): response.set_cookie(key='token', value='some-token-value', httponly=True) return {'status': 'success'} Second, as you mentioned that you are using React in the frontendβ€”which needs to be listening on a different port from the one used for the FastAPI backend, meaning that you are performing CORS requestsβ€”you need to set the withCredentials property to true (by default this is set to false), in order to allow receiving/sending credentials, such as cookies and HTTP authentication headers, from/to other origins. Two servers with same domain and protocol, but different ports, e.g., http://localhost:8000 and http://localhost:3000 are considered different origins (see FastAPI documentation on CORS and this answer, which provides details around cookies in general, as well as solutions for setting cross-domain cookiesβ€”which you don't actually need in your case, as the domain is the same for both the backend and the frontend, and hence, setting the cookie as usual would work just fine). Note that if you are accessing your React frontend by typing http://localhost:3000 in the address bar of your browser, then your Axios requests to FastAPI backend should use the localhost domain in the URL, e.g., axios.post('http://localhost:8000',..., and not axios.post('http://127.0.0.1:8000',..., as localhost and 127.0.0.1 are two different domains, and hence, the cookie would otherwise fail to be created for the localhost domain, as it would be created for 127.0.0.1, i.e., the domain used in the axios request (and then, that would be a case for cross-domain cookies, as described in the linked answer above, which again, in your case, would not be needed). Thus, to accept cookies sent by the server, you need to use withCredentials: true in your Axios request; otherwise, the cookies will be ignored in the response (which is the default behaviour, when withCredentials is set to false; hence, preventing different domains from setting cookies for their own domain). The same withCredentials: true property has to be included in every subsequent request to your API, if you would like the cookie to be sent to the server, so that the user can be authenticated and provided access to protected routes. Hence, an Axios request that includes credentials should look like this: await axios.post(url, data, {withCredentials: true})) The equivalent in a fetch() request (i.e., using Fetch API) is credentials: 'include'. The default value for credentials is same-origin. Using credentials: 'include' will cause the browser to include credentials in both same-origin and cross-origin requests, as well as set any cookies sent back in cross-origin responses. For instance: fetch('https://example.com', { credentials: 'include' }); Important Note Since you are performing a cross-origin request, for either the above to work, you would need to explicitly specify the allowed origins, as described in this answer (behind the scenes, that is setting the Access-Control-Allow-Origin response header). 
For instance: origins = ['http://localhost:3000', 'http://127.0.0.1:3000', 'https://localhost:3000', 'https://127.0.0.1:3000'] Using the * wildcard instead would mean that all origins are allowed; however, that would also only allow certain types of communication, excluding everything that involves credentials, such as cookies, authorization headers, etcβ€”hence, you should not use the * wildcard. Also, make sure to set allow_credentials=True when using the CORSMiddleware (which sets the Access-Control-Allow-Credentials response header to true). Example (see here): app.add_middleware( CORSMiddleware, allow_origins=origins, allow_credentials=True, allow_methods=["*"], allow_headers=["*"], )
6
14
73,989,179
2022-10-7
https://stackoverflow.com/questions/73989179/install-usd-pixar-library
I am having some trouble installing the USD library on Ubuntu. Here is the tutorial I want to follow. On GitHub, I cloned the repo, ran the script build_usd.py and changed the env vars. But when I want to run this simple code from pxr import Usd, UsdGeom stage = Usd.Stage.CreateNew('HelloWorld.usda') xformPrim = UsdGeom.Xform.Define(stage, '/hello') spherePrim = UsdGeom.Sphere.Define(stage, '/hello/world') stage.GetRootLayer().Save() then, depending on how I change the env vars, either the command "python" can't be found or the module pxr can't be found. I tried moving the lib and include directories into usr/lib, usr/local/lib, and usr/local/include, but pxr still isn't found. I am really confused about how to install and use libraries in general on Ubuntu so that Python can find them.
You should install the usd-core library, which provides the pxr module: pip install usd-core
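After installing, the question's own snippet can serve as a quick smoke test that the pxr module is importable:

```python
# Creates a small USD stage and saves it to HelloWorld.usda in the working directory.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew('HelloWorld.usda')
UsdGeom.Xform.Define(stage, '/hello')
UsdGeom.Sphere.Define(stage, '/hello/world')
stage.GetRootLayer().Save()
```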
3
5
74,027,680
2022-10-11
https://stackoverflow.com/questions/74027680/pytorch-profiler-with-scheduler-prints-unwanted-message-at-step
I am trying to learn how to use the Pytorch profiler API to measure the difference in performance when training a model using different methods. In the dedicated tutorial, there is one part where they show how to do just that using the "schedule" parameter of the profiler. My problem is that when I want to use it in my code, calling step the first "wait" times prints a message [W kineto_shim.cpp:337] Profiler is not initialized: skipping step() invocation Since I want my profiler to sleep most of the time, my "wait" value is quite high so it pollutes my terminal with a bunch of those lines until the profiler is actually executed for the first time How can I get rid of it ? Here's a minimal code sample that reproduces the problem import torch from torch.profiler import profile, record_function, ProfilerActivity with profile( activities=[torch.profiler.ProfilerActivity.CUDA], schedule=torch.profiler.schedule(wait=15, warmup=1, active=4), profile_memory=False, record_shapes=True, with_stack=True, ) as prof: for _ in range(20): y = torch.randn(1).cuda() + torch.randn(1).cuda() prof.step() print(prof.key_averages())
This was just recently fixed/added in a pull request: now you can set the env variable KINETO_LOG_LEVEL. For example, in a bash script: export KINETO_LOG_LEVEL=3 The levels, according to the source code, are: enum LoggerOutputType { VERBOSE = 0, INFO = 1, WARNING = 2, ERROR = 3, STAGE = 4, ENUM_COUNT = 5 }; That's at least how it should work; according to this issue, the changes for the log level have not been merged yet.
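As a hedged alternative to the bash export, the variable can also be set from Python before torch is imported; whether that is early enough may depend on the torch build, so the shell export remains the safer option:

```python
# Set the Kineto log level before torch (and therefore Kineto) initializes.
import os
os.environ["KINETO_LOG_LEVEL"] = "3"   # 3 == ERROR in the enum above

import torch
from torch.profiler import profile
```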
6
1
74,010,813
2022-10-10
https://stackoverflow.com/questions/74010813/fastapi-how-can-i-modify-request-from-inside-dependency
How can I modify request from inside a dependency? Basically I would like to add some information (test_value) to the request and later be able to get it from the view function (in my case root() function). Below is a simple example: from fastapi import FastAPI, Depends, Request app = FastAPI() def test(request: Request): request['test_value'] = 'test value' @app.get("/", dependencies=[Depends(test)]) async def root(request: Request): print(request.test_value) return {"test": "test root path."}
Option 1 You could store arbitrary extra state to request.state, and use the Request object inside the endpoint to retrieve the state (the relevant implementation of Starlette's State method and class can be found here and here, respectively): from fastapi import FastAPI, Depends, Request app = FastAPI() def func(request: Request): request.state.test = 'test value' @app.get('/', dependencies=[Depends(func)]) def root(request: Request): return request.state.test If you would like that state (i.e., test attribute above) to be globally accessible from any request/user, you might want to store it on the application instance, as described in this answer, as well this and this answer. Option 2 Instead of adding dependencies=[Depends(test)] to the decorator of the endpoint, you could use the dependency injection directly, by defining test (in this example) as an endpoint parameter and using Depends. You could then have the dependency function returning the attribute. Using this option, however, you would have to do this for every endpoint in your app/router, if you had to define a global dependency for the entire app/router, as demonstrated here and here. from fastapi import FastAPI, Depends, Request app = FastAPI() def func(request: Request): return 'test value' @app.get('/') def root(request: Request, test: str = Depends(func)): return test
6
11
73,997,704
2022-10-8
https://stackoverflow.com/questions/73997704/how-can-i-use-celery-in-django-with-just-the-db
Looking at https://docs.celeryq.dev/en/v5.2.7/getting-started/backends-and-brokers/index.html it sounds pretty much as if it's not possible / desirable. There is a section about SQLAlchemy, but Django does not use SQLAlchemy. In way older docs, there is https://docs.celeryq.dev/en/3.1/getting-started/brokers/django.html . Is it possible with recent Celery / Django versions to use Celery with just the database for storing messages / results?
Yes, you can totally do this, even if it's not the most performant/recommended way to do it. I use it for simple projects in which I don't want to add Redis. To do so, first add SQLAlchemy v1 as a dependency in your project: SQLAlchemy = "1.*" Then in your settings.py: if you use PostgreSQL: CELERY_BROKER_URL = "sqla+postgresql://user:password@hostname:5432/dbname" if you use SQLite: CELERY_BROKER_URL = "sqla+sqlite:///" + os.path.join(BASE_DIR, 'your_database.db'). Note that the folder holding the database must be writable. For example, if your database is located in project/dbfolder/database.db, chmod 777 project/dbfolder will do the trick. As a side note, I'm using django-celery-results to store the results of my tasks. This way, I have a fully featured Celery without adding other tech tools (like RabbitMQ or Redis) to my stack.
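Putting the pieces together, a hedged settings.py fragment might look like the following (it assumes the BASE_DIR and INSTALLED_APPS variables that a default Django settings file already defines; the database filename is illustrative):

```python
# settings.py fragment: SQLite-backed broker plus django-celery-results for results.
import os

# Broker messages go through a SQLite database via SQLAlchemy v1.
CELERY_BROKER_URL = "sqla+sqlite:///" + os.path.join(BASE_DIR, "celery_broker.db")

# Store task results in the Django database instead of Redis/RabbitMQ.
INSTALLED_APPS += ["django_celery_results"]
CELERY_RESULT_BACKEND = "django-db"
```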
3
5
74,001,347
2022-10-9
https://stackoverflow.com/questions/74001347/django-autocomplete-light-not-working-with-bootstrap-5-modal
I am newbie to Python and Django. I'm using DAL on a form inside a Bootstrap modal. Clicking the dropdown list appears behind the modal. The autocomplete function works correctly if I do it outside of the modal. I'm using: Django: 4.0.5 django-autocomplete-light: 3.9.4 Bootstrap 5.2.2 Python 3.10.4 To try to fix it, I have created a style.css file with the following code, as indicated in: bootstrap modal with select2 z-index .select2-container { z-index: 9999 !important; } Now the list appears in front of the modal, but it doesn't allow typing in the search box. I've tried doing the following, but it doesn't work: https://select2.org/troubleshooting/common-problems $(document).ready(function() { $("#codigo_emplazamiento").select2({ dropdownParent: $("#FormularioCrear") }); }); I've tried removing tabindex="-1" from the modal, but it doesn't work. It seems to be related to this, but I can't get it to work: https://lightrun.com/answers/yourlabs-django-autocomplete-light-search-input-not-receiving-focus This is my code: forms.py class FormularioTarea(forms.ModelForm): class Meta: model = Tarea fields = ['referencia_interna', 'codigo_emplazamiento', 'estado', 'fecha_fin', 'tipo_documento'] autocomplete.ModelSelect2(url='emplazamientoautocompletar')}) widgets = { 'codigo_emplazamiento': autocomplete.Select2( url='emplazamientoautocompletar', ) } models.py class Emplazamiento(models.Model): TIPO_ELECCION = (('URB', 'Urbano'), ('RUR', 'Rural')) codigo_emplazamiento = models.CharField(unique=True, max_length=10) direccion = models.CharField(max_length=254) municipio = models.ForeignKey(Municipio, on_delete=models.PROTECT) ref_catastral = models.CharField(max_length=14, blank=True) tipo_emplazamiento = models.CharField(max_length=15, choices=TIPO_ELECCION) latitud = models.DecimalField( blank=True, null=True, decimal_places=10, max_digits=13) longitud = models.DecimalField( blank=True, null=True, decimal_places=10, max_digits=13) def save(self, *args, **kwargs): self.codigo_emplazamiento = self.codigo_emplazamiento.upper() return super(Emplazamiento, self).save(*args, **kwargs) class Meta: ordering =('-id',) def __str__(self): return (self.codigo_emplazamiento) class Tarea(models.Model): referencia_interna = models.CharField(max_length=20, unique=True) codigo_comparticion = models.CharField(max_length=20, unique=True, blank=True, null=True) ESTADOS_DE_TAREA = (('PENDIENTE', 'Pendiente'), ('EN_CURSO', 'En Curso'), ('FINALIZADO', 'Finalizado')) codigo_emplazamiento = models.ForeignKey(Emplazamiento, on_delete=models.PROTECT) estado = models.CharField(max_length=15, choices=ESTADOS_DE_TAREA) SELECCION_DE_DOCUMENTOS = (('PROYECTO', 'Proyecto'), ('ANEXO', 'Anexo'), ('PEP', 'Pep')) tipo_documento = models.CharField(max_length=15, choices=SELECCION_DE_DOCUMENTOS) fecha_entrada = models.DateField(auto_now_add=True) fecha_fin = models.DateField(blank=True, null=True) class Meta: ordering =('-id',) def __str__(self): return (self.referencia_interna + ' - ' + str(self.id)) urls.py urlpatterns = [ path('emplazamientos/autocompletar', EmplazamientoAutocomplete.as_view(), name = 'emplazamientoautocompletar'), ] views.py class EmplazamientoAutocomplete(autocomplete.Select2QuerySetView): def get_queryset(self): #if not self.request.user.is_authenticated: # return Country.objects.none() qs = Emplazamiento.objects.all() if self.q: qs = qs.filter(codigo_emplazamiento__icontains=self.q) return qs Modal in Template: <div class="modal fade" id="FormularioCrear" data-bs-backdrop="static" data-bs-keyboard="false" 
tabindex="-1" aria-labelledby="staticBackdropLabel" aria-hidden="true"> <div class="modal-dialog"> <div class="modal-content"> <div class="modal-header"> <h5 class="modal-title" id="staticBackdropLabel">{{ titulocreador }}</h5> <button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button> </div> <div class="modal-body" id="CuerpoModal"> <form method="POST" enctype="multipart/form-data" action=" {% url 'creatarea' %}"> {% csrf_token %} {{ formulario.as_p }} </div> <div class="modal-footer"> <button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Cancelar</button> <button type="submit" class="btn btn-primary">Crear</button> </form> </div> </div> </div> </div> <script> $(document).ready(function() { $("#codigo_emplazamiento").select2({ dropdownParent: $("#FormularioCrear") }); }); </script> Modal in rendered template: <div class="modal fade" id="FormularioCrear" data-bs-backdrop="static" data-bs-keyboard="false" tabindex="-1" aria-labelledby="staticBackdropLabel" aria-hidden="true"> <div class="modal-dialog"> <div class="modal-content"> <div class="modal-header"> <h5 class="modal-title" id="staticBackdropLabel">Crea nueva Tarea</h5> <button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button> </div> <div class="modal-body" id="CuerpoModal"> <form method="POST" enctype="multipart/form-data" action=" /proyectos/tareas/crear/"> <input type="hidden" name="csrfmiddlewaretoken" value="kWogOGY57TUQv49PCR5CroMbEmpMiM5qNUC17gTI6gPZjrq3riiuiXrhm2STNuTk"> <p> <label for="id_referencia_interna">Referencia interna:</label> <input type="text" name="referencia_interna" maxlength="20" required id="id_referencia_interna"> </p> <p> <label for="id_codigo_emplazamiento">Codigo emplazamiento:</label> <select name="codigo_emplazamiento" required id="id_codigo_emplazamiento" data-autocomplete-light-url="/proyectos/emplazamientos/autocompletar" data-autocomplete-light-function="select2" data-autocomplete-light-language="es"> </select> </p> <p> <label for="id_estado">Estado:</label> <select name="estado" required id="id_estado"> <option value="" selected>---------</option> <option value="PENDIENTE">Pendiente</option> <option value="EN_CURSO">En Curso</option> <option value="FINALIZADO">Finalizado</option> </select> </p> <p> <label for="id_fecha_fin">Fecha fin:</label> <input type="text" name="fecha_fin" id="id_fecha_fin"> </p> <p> <label for="id_tipo_documento">Tipo documento:</label> <select name="tipo_documento" required id="id_tipo_documento"> <option value="" selected>---------</option> <option value="PROYECTO">Proyecto</option> <option value="ANEXO">Anexo</option> <option value="PEP">Pep</option> </select> </p> </div> <div class="modal-footer"> <button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Cancelar</button> <button type="submit" class="btn btn-primary">Crear</button> </form> </div> </div> </div> </div> <script> $(document).ready(function() { $("#codigo_emplazamiento").select2({ dropdownParent: $("#FormularioCrear") }); }); </script>
The problem is the modal's focus handling. If you review the documentation at https://getbootstrap.com/docs/5.3/components/modal/#options, the options include a setting for focus. In the end you just need to add data-bs-focus="false" to your modal definition. ...
3
4
73,967,640
2022-10-6
https://stackoverflow.com/questions/73967640/how-to-activate-existing-python-environment-with-r-reticulate
I have the following existing Python environments: $ conda info --envs base * /home/ubuntu/anaconda3 tensorflow2_latest_p37 /home/ubuntu/anaconda3/envs/tensorflow2_latest_p37 What I want to do is to activate tensorflow2_latest_p37 environment and use it in R code. I tried the following code: library(reticulate) use_condaenv( "tensorflow2_latest_p37") library(tensorflow) tf$constant("Hello Tensorflow!") But it failed to recognize the environment: > library(reticulate) > use_condaenv( "tensorflow2_latest_p37") /tmp/RtmpAs9fYG/file41912f80e49f.sh: 3: /home/ubuntu/anaconda3/envs/tensorflow2_latest_p37/etc/conda/activate.d/00_activate.sh: Bad substitution Error in Sys.setenv(PATH = new_path) : wrong length for argument In addition: Warning message: In system2(Sys.which("sh"), fi, stdout = if (identical(intern, FALSE)) "" else intern) : running command ''/bin/sh' /tmp/RtmpAs9fYG/file41912f80e49f.sh' had status 2 What is the right way to do it?
I found the most reliable way is to set the RETICULATE_PYTHON system variable before running library(reticulate), since this will load the default environment and changing environments seems to be a bit of an issue. So you should try something like this: library(tidyverse) py_bin <- reticulate::conda_list() %>% filter(name == "tensorflow2_latest_p37") %>% pull(python) Sys.setenv(RETICULATE_PYTHON = py_bin) library(reticulate) You can make this permanent by placing this in an .Renviron file. I usually place one in the project folder, so it is evaluated upon opening the project. In code this would look like that: readr::write_lines(paste0("RETICULATE_PYTHON=", py_bin), ".Renviron", append = TRUE) Or even easier, use usethis::edit_r_environ(scope = "project") (thank you @rodrigo-zepeda!).
3
4
74,015,280
2022-10-10
https://stackoverflow.com/questions/74015280/pipreqs-not-including-all-packages
I currently have a conda environment tf_gpu and I pip installed pipreqs in it to auto-generate requirements.txt. Now, in my project folder, I have app.py with the imports: import os from dotenv import load_dotenv from flask import Flask, request from predict import get_recs import urllib.request Also, predict uses pandas, scipy, numpy, pickle. But the requirements.txt generated by running pipreqs ./ inside the project folder only gets me the following: Flask==2.1.3 numpy==1.23.3 pandas==1.4.4 scipy==1.9.1 Why is python-dotenv not included? It isn't a standard library, right? So what's happening here?
According to the open issues in the GitHub repo, some packages don't map well. You could try opening an issue for this package.
4
4
73,965,176
2022-10-5
https://stackoverflow.com/questions/73965176/authenticating-firebase-connection-in-github-action
Background I have a Python script that reads data from an Excel file and uploads each row as a separate document to a collection in Firestore. I want this script to run when I push a new version of the Excel file to GitHub. Setup I placed the necessary credentials in GitHub repo secrets and setup the following workflow to run on push to my data/ directory: name: update_firestore on: push: branches: - main paths: - data/**.xlsx jobs: build: runs-on: ubuntu-latest steps: - name: checkout repo content uses: actions/checkout@v2 # checkout the repository content to github runner. - name: setup python uses: actions/setup-python@v4 with: python-version: '3.*' # install the latest python version - name: install python packages run: | python -m pip install --upgrade pip pip install -r requirements.txt - name: execute python script env: TYPE: service_account PROJECT_ID: ${{ secrets.PROJECT_ID }} PRIVATE_KEY_ID: ${{ secrets.PRIVATE_KEY_ID }} PRIVATE_KEY: ${{ secrets.PRIVATE_KEY }} CLIENT_EMAIL: ${{ secrets.CLIENT_EMAIL }} TOKEN_URI: ${{ secrets.TOKEN_URI }} run: python src/update_database.py -n ideas -delete -add The Problem I keep getting the following error: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.10.7/x64/lib/python3.10/site-packages/firebase_admin/credentials.py", line 96, in __init__ self._g_credential = service_account.Credentials.from_service_account_info( File "/opt/hostedtoolcache/Python/3.10.7/x64/lib/python3.10/site-packages/google/oauth2/service_account.py", line 221, in from_service_account_info signer = _service_account_info.from_dict( File "/opt/hostedtoolcache/Python/3.10.7/x64/lib/python3.10/site-packages/google/auth/_service_account_info.py", line 58, in from_dict signer = crypt.RSASigner.from_service_account_info(data) File "/opt/hostedtoolcache/Python/3.10.7/x64/lib/python3.10/site-packages/google/auth/crypt/base.py", line 113, in from_service_account_info return cls.from_string( File "/opt/hostedtoolcache/Python/3.10.7/x64/lib/python3.10/site-packages/google/auth/crypt/_python_rsa.py", line 171, in from_string raise ValueError("No key could be detected.") ValueError: No key could be detected. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/runner/work/IRIS/IRIS/src/update_database.py", line 9, in <module> import fire File "/home/runner/work/IRIS/IRIS/src/fire/__init__.py", line 35, in <module> cred = credentials.Certificate(create_keyfile_dict()) File "/opt/hostedtoolcache/Python/3.10.7/x64/lib/python3.10/site-packages/firebase_admin/credentials.py", line 99, in __init__ raise ValueError('Failed to initialize a certificate credential. ' ValueError: Failed to initialize a certificate credential. Caused by: "No key could be detected." Error: Process completed with exit code 1. My Attempted Solutions I have tried a variety of approaches including what I show above, just hardcoding each of the secrets, and copying the .json formatted credentials directly as a single secret. I know there are some issues dealing with multiline environment variables which the PRIVATE_KEY is. I have tried: Pasting the PRIVATE_KEY str directly from the download firebase provides which includes \n Removing escape characters and formatting the secret like: -----BEGIN PRIVATE KEY----- BunC40fL3773R5AndNumb3r5 ... rAndomLettersANDNumb3R5== -----END PRIVATE KEY----- I feel like the solution should be pretty straight-forward but have been struggling and my knowledge with all this is a bit limited. 
Thank you in advance!
After hours of research, I found an easy way to store the Firestore service account JSON as a Github Secret. Step 1 : Convert your service account JSON to base-64 Let's name the base-64 encoded JSON SERVICE_ACCOUNT_KEY. There are two ways to get this value: Method 1 : Using command line cat path-to-your-service-account.json | base64 | xargs This will return a single line representing the encoded service account JSON. Copy this value. Method 2 : Using python import json import base64 service_key = { "type": "service_account", "project_id": "xxx", "private_key_id": "xxx", "private_key": "-----BEGIN PRIVATE KEY-----\nxxxxx\n-----END PRIVATE KEY-----\n", "client_email": "xxxx.com", "client_id": "xxxx", "auth_uri": "xxxx", "token_uri": "xxxx", "auth_provider_x509_cert_url": "xxxx", "client_x509_cert_url": "xxxx" } # convert json to a string service_key = json.dumps(service_key) # encode service key SERVICE_ACCOUNT_KEY= base64.b64encode(service_key.encode('utf-8')) print(SERVICE_ACCOUNT_KEY) # FORMAT: b'a_long_string' Copy only the value between the quotes. (copy a_long_string instead of b'a_long_string') Step 2 : Create your environment variable I am using dotenv library to read environment variables. You will have to install it first using pip install python-dotenv. Also add this dependency in your requirements.txt for github actions. Create a Github repository secret SERVICE_ACCOUNT_KEY which will store the base-64 value. In your Github YML file, add the environment variable: - name: execute py script env: SERVICE_ACCOUNT_KEY: ${{ secrets.SERVICE_ACCOUNT_KEY }} run: python src/main.py To be able to test your program locally, you might also want to add SERVICE_ACCOUNT_KEY together with its value to your .env file (which should be in the root directory of your project). Remember to add .env to your .gitignore file to avoid exposing your key on Github. Step 3 : Decoding the service key You will now need to get the value of SERVICE_ACCOUNT_KEY in your Python code and convert this value back to a JSON. I am using the dotenv library to get the value of the SERVICE_ACCOUNT_KEY. import json import base64 import os from dotenv import load_dotenv, find_dotenv # get the value of `SERVICE_ACCOUNT_KEY`environment variable load_dotenv(find_dotenv()) encoded_key = os.getenv("SERVICE_ACCOUNT_KEY") # decode SERVICE_ACCOUNT_JSON = json.loads(base64.b64decode(encoded_key).decode('utf-8')) # Use `SERVICE_ACCOUNT_JSON` later to initialse firestore db: # cred = credentials.Certificate(SERVICE_ACCOUNT_JSON) # firebase_admin.initialize_app(cred)
3
5
74,016,277
2022-10-10
https://stackoverflow.com/questions/74016277/accuracy-while-learning-mnist-database-is-very-low-0-2
I am developing my ANN from scratch which is supposed to classify MNIST database of handwritten digits (0-9). My feed-forward fully connected ANN has to be composed of: One input layer, with 28x28 = 784 nodes (that is, features of each image) One hidden layer, with any number of neurons (shallow network) One output layer, with 10 nodes (one for each digit) and has to compute gradient w.r.t. weights and bias thanks to backpropagation algorithm and, finally, it should learn exploiting gradient descent with momentum algorithm. The loss function is: cross_entropy on "softmaxed" network's outputs, since the task is about classification. Each hidden neuron is activated by the same activation function, I've chosen the sigmoid; meanwhile the output's neurons are activated by the identity function. The dataset has been divided into: 60.000 training pairs (image, label) - for the training 5000 validation pairs (image, label) - for evaluation and select the network which minimize the validation loss 5000 testing pairs (image, label) - for testing the model picked using new metrics such as accuracy The data has been shuffled invoking sklearn.utils.shuffle method. These are my net's performance about training loss, validation loss and validation accuracy: E(0) on TrS is: 798288.7537714319 on VS is: 54096.50409967187 Accuracy: 12.1 % E(1) on TrS is: 798261.8584179751 on VS is: 54097.23663558976 Accuracy: 12.1 % ... E(8) on TrS is: 798252.1191081362 on VS is: 54095.5016235736 Accuracy: 12.1 % ... E(17) on TrS is: 798165.2674011206 on VS is: 54087.2823473459 Accuracy: 12.8 % E(18) on TrS is: 798155.0888987815 on VS is: 54086.454077456074 Accuracy: 13.22 % ... E(32) on TrS is: 798042.8283810444 on VS is: 54076.35518400717 Accuracy: 19.0 % E(33) on TrS is: 798033.2512910366 on VS is: 54075.482037626025 Accuracy: 19.36 % E(34) on TrS is: 798023.431899881 on VS is: 54074.591145985265 Accuracy: 19.64 % E(35) on TrS is: 798013.4023181734 on VS is: 54073.685418577166 Accuracy: 19.759999999999998 % E(36) on TrS is: 798003.1960815473 on VS is: 54072.76783050559 Accuracy: 20.080000000000002 % ... E(47) on TrS is: 797888.8213232228 on VS is: 54062.70342708315 Accuracy: 21.22 % E(48) on TrS is: 797879.005388998 on VS is: 54061.854566864626 Accuracy: 21.240000000000002 % E(49) on TrS is: 797869.3890292909 on VS is: 54061.02482142968 Accuracy: 21.26 % Validation loss is minimum at epoch: 49 As you can see the losses are very high and the learning is very slow. 
This is my code: import numpy as np from scipy.special import expit from matplotlib import pyplot as plt from mnist.loader import MNIST from sklearn.utils import shuffle def relu(a, derivative=False): f_a = np.maximum(0, a) if derivative: return (a > 0) * 1 return f_a def softmax(y): e_y = np.exp(y - np.max(y, axis=0)) return e_y / np.sum(e_y, axis=0) def cross_entropy(y, t, derivative=False, post_process=True): epsilon = 10 ** -308 if post_process: if derivative: return y - t sm = softmax(y) sm = np.clip(sm, epsilon, 1 - epsilon) # avoids log(0) return -np.sum(np.sum(np.multiply(t, np.log(sm)), axis=0)) def sigmoid(a, derivative=False): f_a = expit(a) if derivative: return np.multiply(f_a, (1 - f_a)) return f_a def identity(a, derivative=False): f_a = a if derivative: return np.ones(np.shape(a)) return f_a def accuracy_score(targets, predictions): correct_predictions = 0 for item in range(np.shape(predictions)[1]): argmax_idx = np.argmax(predictions[:, item]) if targets[argmax_idx, item] == 1: correct_predictions += 1 return correct_predictions / np.shape(predictions)[1] def one_hot(targets): return np.asmatrix(np.eye(10)[targets]).T def plot(epochs, loss_train, loss_val): plt.plot(epochs, loss_train) plt.plot(epochs, loss_val, color="orange") plt.legend(["Training Loss", "Validation Loss"]) plt.xlabel("Epochs") plt.ylabel("Loss") plt.grid(True) plt.show() class NeuralNetwork: def __init__(self): self.layers = [] def add_layer(self, layer): self.layers.append(layer) def build(self): for i, layer in enumerate(self.layers): if i == 0: layer.type = "input" else: layer.type = "output" if i == len(self.layers) - 1 else "hidden" layer.configure(self.layers[i - 1].neurons) def fit(self, X_train, targets_train, X_val, targets_val, max_epochs=50): e_loss_train = [] e_loss_val = [] # Getting the minimum loss on validation set predictions_val = self.predict(X_val) min_loss_val = cross_entropy(predictions_val, targets_val) best_net = self # net which minimize validation loss best_epoch = 0 # epoch where the validation loss is minimum # batch mode for epoch in range(max_epochs): predictions_train = self.predict(X_train) self.back_prop(targets_train, cross_entropy) self.learning_rule(l_rate=0.00001, momentum=0.9) loss_train = cross_entropy(predictions_train, targets_train) e_loss_train.append(loss_train) # Validation predictions_val = self.predict(X_val) loss_val = cross_entropy(predictions_val, targets_val) e_loss_val.append(loss_val) print("E(%d) on TrS is:" % epoch, loss_train, " on VS is:", loss_val, " Accuracy:", accuracy_score(targets_val, predictions_val) * 100, "%") if loss_val < min_loss_val: min_loss_val = loss_val best_epoch = epoch best_net = self plot(np.arange(max_epochs), e_loss_train, e_loss_val) return best_net # Matrix of predictions where the i-th column corresponds to the i-th item def predict(self, dataset): z = dataset.T for layer in self.layers: z = layer.forward_prop_step(z) return z def back_prop(self, target, loss): for i, layer in enumerate(self.layers[:0:-1]): next_layer = self.layers[-i] prev_layer = self.layers[-i - 2] layer.back_prop_step(next_layer, prev_layer, target, loss) def learning_rule(self, l_rate, momentum): # Momentum GD for layer in [layer for layer in self.layers if layer.type != "input"]: layer.update_weights(l_rate, momentum) layer.update_bias(l_rate, momentum) class Layer: def __init__(self, neurons, type=None, activation=None): self.dE_dW = None # derivatives dE/dW where W is the weights matrix self.dE_db = None # derivatives dE/db where b is the bias 
self.dact_a = None # derivative of the activation function self.out = None # layer output self.weights = None # input weights self.bias = None # layer bias self.w_sum = None # weighted_sum self.neurons = neurons # number of neurons self.type = type # input, hidden or output self.activation = activation # activation function self.deltas = None # for back-prop def configure(self, prev_layer_neurons): self.set_activation() self.weights = np.asmatrix(np.random.normal(-0.1, 0.02, (self.neurons, prev_layer_neurons))) self.bias = np.asmatrix(np.random.normal(-0.1, 0.02, self.neurons)).T def set_activation(self): if self.activation is None: if self.type == "hidden": self.activation = sigmoid elif self.type == "output": self.activation = identity # will be softmax in cross entropy calculation def forward_prop_step(self, z): if self.type == "input": self.out = z else: self.w_sum = np.dot(self.weights, z) + self.bias self.out = self.activation(self.w_sum) return self.out def back_prop_step(self, next_layer, prev_layer, target, local_loss): if self.type == "output": self.dact_a = self.activation(self.w_sum, derivative=True) self.deltas = np.multiply(self.dact_a, local_loss(self.out, target, derivative=True)) else: self.dact_a = self.activation(self.w_sum, derivative=True) # (m,batch_size) self.deltas = np.multiply(self.dact_a, np.dot(next_layer.weights.T, next_layer.deltas)) self.dE_dW = self.deltas * prev_layer.out.T self.dE_db = np.sum(self.deltas, axis=1) def update_weights(self, l_rate, momentum): # Momentum GD self.weights = self.weights - l_rate * self.dE_dW self.weights = -l_rate * self.dE_dW + momentum * self.weights def update_bias(self, l_rate, momentum): # Momentum GD self.bias = self.bias - l_rate * self.dE_db self.bias = -l_rate * self.dE_db + momentum * self.bias if __name__ == '__main__': mndata = MNIST(path="data", return_type="numpy") X_train, targets_train = mndata.load_training() # 60.000 images, 28*28 features X_val, targets_val = mndata.load_testing() # 10.000 images, 28*28 features X_train = X_train / 255 # normalization within [0;1] X_val = X_val / 255 # normalization within [0;1] X_train, targets_train = shuffle(X_train, targets_train.T) X_val, targets_val = shuffle(X_val, targets_val.T) # Getting the test set splitting the validation set in two equal parts # Validation set size decreases from 10.000 to 5000 (of course) X_val, X_test = np.split(X_val, 2) # 5000 images, 28*28 features targets_val, targets_test = np.split(targets_val, 2) X_test, targets_test = shuffle(X_test, targets_test.T) targets_train = one_hot(targets_train) targets_val = one_hot(targets_val) targets_test = one_hot(targets_test) net = NeuralNetwork() d = np.shape(X_train)[1] # number of features, 28x28 c = np.shape(targets_train)[0] # number of classes, 10 # Shallow network with 1 hidden neuron # That is 784, 1, 10 for m in (d, 1, c): layer = Layer(m) net.add_layer(layer) net.build() best_net = net.fit(X_train, targets_train, X_val, targets_val, max_epochs=50) What I have done: Set 500 instead of 1 hidden neuron Add many hidden layers Decrease/increase learning rate (l_rate) value Decrease/increase momentum (and set it to 0) Replace sigmoid with relu but there still is the problem. These are the formulas I used for calculations (but you can check them out from the source code, of course): Note: f and g in formulas stand for hidden layers activation function and output layer activation function. 
EDIT: I re-implemented the cross_entropy function considering the average loss, replacing -np.sum(..., axis=0)) with -np.mean(..., axis=0)) and losses are now comparable. But the problem about low accuracy persists as you can see: E(0) on TrS is: 2.3033276613180695 on VS is: 2.3021572339654925 Accuracy: 10.96 % E(1) on TrS is: 2.3021765614184284 on VS is: 2.302430432090161 Accuracy: 10.96 % E(2) on TrS is: 2.302371681532198 on VS is: 2.302355601340701 Accuracy: 10.96 % E(3) on TrS is: 2.3023151858432804 on VS is: 2.302364165840666 Accuracy: 10.96 % E(4) on TrS is: 2.3023186844504564 on VS is: 2.3023457770291267 Accuracy: 10.96 % ... E(34) on TrS is: 2.2985702635977137 on VS is: 2.2984384616550875 Accuracy: 18.52 % E(35) on TrS is: 2.2984081462987076 on VS is: 2.2982663840016873 Accuracy: 18.8 % E(36) on TrS is: 2.2982422912146845 on VS is: 2.298091144330386 Accuracy: 19.06 % E(37) on TrS is: 2.2980732333918854 on VS is: 2.2979132918897367 Accuracy: 19.36 % E(38) on TrS is: 2.297901523346666 on VS is: 2.2977333860658424 Accuracy: 19.68 % E(39) on TrS is: 2.2977277198903883 on VS is: 2.297551989820155 Accuracy: 19.78 % ... E(141) on TrS is: 2.291884965880953 on VS is: 2.2917100547472575 Accuracy: 21.08 % E(142) on TrS is: 2.29188099824872 on VS is: 2.291706280301498 Accuracy: 21.08 % E(143) on TrS is: 2.2918771014203316 on VS is: 2.291702575667588 Accuracy: 21.08 % E(144) on TrS is: 2.291873271054674 on VS is: 2.2916989365939067 Accuracy: 21.08 % E(145) on TrS is: 2.2918695030455183 on VS is: 2.291695359057886 Accuracy: 21.08 % E(146) on TrS is: 2.291865793508291 on VS is: 2.291691839253129 Accuracy: 21.08 % E(147) on TrS is: 2.2918621387676166 on VS is: 2.2916883735772675 Accuracy: 21.08 % E(148) on TrS is: 2.2918585353455745 on VS is: 2.291684958620525 Accuracy: 21.08 % E(149) on TrS is: 2.2918549799506307 on VS is: 2.291681591154936 Accuracy: 21.08 % E(150) on TrS is: 2.2918514694672263 on VS is: 2.291678268124199 Accuracy: 21.08 % ... E(199) on TrS is: 2.2916983481535644 on VS is: 2.2915343016441727 Accuracy: 21.060000000000002 % I incrased MAX_EPOCHS value from 50 to 200 for better visualizing results.
Combining the changes you and others mentioned, I was able to get it to work. See this gist: https://gist.github.com/theevann/77bb863ef260fe633e3e99f68868f116/revisions Changes made: Use a uniform initialisation (critical) Use relu activation (critical) Use more hidden layers (critical) Comment out your SGD momentum update, as it seems incorrect (not critical) Reduce your learning rate (not critical) I did not try to optimize it further. Obviously, using a working momentum GD, using SGD instead of GD, taking the mean to display the loss, and tuning the architecture would be logical (if not required) next steps.
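For reference, and as my own sketch rather than anything from the gist: the question's update_weights/update_bias methods assign self.weights twice without keeping any velocity state, which is why the momentum update looks incorrect. A minimal version of the classical momentum rule, written as replacement methods for the question's Layer class and using hypothetical self.velocity_w / self.velocity_b attributes (not present in the original code), could look like this, assuming numpy is imported as np as in the question:

def update_weights(self, l_rate, momentum):
    # Classical momentum: keep a running velocity and apply it to the weights
    if getattr(self, "velocity_w", None) is None:
        self.velocity_w = np.zeros_like(self.dE_dW)
    self.velocity_w = momentum * self.velocity_w - l_rate * self.dE_dW
    self.weights = self.weights + self.velocity_w

def update_bias(self, l_rate, momentum):
    if getattr(self, "velocity_b", None) is None:
        self.velocity_b = np.zeros_like(self.dE_db)
    self.velocity_b = momentum * self.velocity_b - l_rate * self.dE_db
    self.bias = self.bias + self.velocity_b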
6
2
74,023,492
2022-10-11
https://stackoverflow.com/questions/74023492/netsuite-rest-api-returns-no-content-status-204-when-completed-successfully
I use the requests library. How can this be the default behavior? Is there any way to return the ID of the item created? def create_sales_order(): url = f"https://{url_account}.suitetalk.api.netsuite.com/services/rest/record/v1/salesOrder" data = { "entity": { "id": "000" }, "item": { "items": [ { "item": { "id": 25 }, "quantity": 3, "amount": 120 } ] }, "memo": "give me money", "Department": "109" } body = json.dumps(data) response = client.post(url=url, headers=headers, data=body) print(response.text)
OK, so it turns out that the headers returned with the empty 204 response contain a link to the created item (Location is the name of the response header), which is sufficient to do another GET request and have all the info returned.
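To illustrate, here is my own sketch (not code from the answer), reusing the same client, headers, url and body names as the question and assuming the same authentication can be reused for the follow-up request:

response = client.post(url=url, headers=headers, data=body)

if response.status_code == 204:
    # The empty 204 response carries the new record's URL in the Location header
    record_url = response.headers.get("Location")
    if record_url:
        created = client.get(record_url, headers=headers)
        # Inspect the returned JSON for the created record's fields, including its id
        print(created.json())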
3
4
73,961,938
2022-10-5
https://stackoverflow.com/questions/73961938/flask-sqlalchemy-db-create-all-raises-runtimeerror-working-outside-of-applicat
I recently updated Flask-SQLAlchemy, and now db.create_all is raising RuntimeError: working outside of application context. How do I call create_all? from flask import Flask from flask_sqlalchemy import SQLAlchemy app = Flask(__name__) app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///project.db' db = SQLAlchemy(app) class User(db.Model): id = db.Column(db.Integer, primary_key=True) db.create_all() This raises the following error: Traceback (most recent call last): File "/home/david/Projects/flask-sqlalchemy/example.py", line 11, in <module> db.create_all() File "/home/david/Projects/flask-sqlalchemy/src/flask_sqlalchemy/extension.py", line 751, in create_all self._call_for_binds(bind_key, "create_all") File "/home/david/Projects/flask-sqlalchemy/src/flask_sqlalchemy/extension.py", line 722, in _call_for_binds engine = self.engines[key] File "/home/david/Projects/flask-sqlalchemy/src/flask_sqlalchemy/extension.py", line 583, in engines app = current_app._get_current_object() # type: ignore[attr-defined] File "/home/david/Projects/flask-sqlalchemy/.venv/lib/python3.10/site-packages/werkzeug/local.py", line 513, in _get_current_object raise RuntimeError(unbound_message) from None RuntimeError: Working outside of application context. This typically means that you attempted to use functionality that needed the current application. To solve this, set up an application context with app.app_context(). See the documentation for more information.
As of Flask-SQLAlchemy 3.0, all access to db.engine (and db.session) requires an active Flask application context. db.create_all uses db.engine, so it requires an app context. with app.app_context(): db.create_all() When Flask handles requests or runs CLI commands, a context is automatically pushed. You only need to push one manually outside of those situations, such as while setting up the app. Instead of calling create_all in your code, you can also call it manually in the shell. Use flask shell to start a Python shell that already has an app context and the db object imported. $ flask shell >>> db.create_all() Or push a context manually if using a plain python shell. $ python >>> from project import app, db >>> app.app_context().push() >>> db.create_all()
32
82
73,997,582
2022-10-8
https://stackoverflow.com/questions/73997582/should-i-repeat-parent-class-init-arguments-in-the-child-classs-init-o
Imagine a base class that you'd like to inherit from: class Shape: def __init__(self, x: float, y: float): self.x = x self.y = y There seem to be two common patterns of handling a parent's kwargs in a child class's __init__ method. You can restate the parent's interface completely: class Circle(Shape): def __init__(self, x: float, y: float, radius: float): super().__init__(x=x, y=y) self.radius = radius Or you can specify only the part of the interface which is specific to the child, and hand the remaining kwargs to the parent's __init__: class Circle(Shape): def __init__(self, radius: float, **kwargs): super().__init__(**kwargs) self.radius = radius Both of these seem to have pretty big drawbacks, so I'd be interested to hear what is considered standard or best practice. The "restate the interface" method is appealing in toy examples like you commonly find in discussions of Python inheritance, but what if we're subclassing something with a really complicated interface, like pandas.DataFrame or logging.Logger? Also, if the parent interface changes, I have to remember to change all of my child class's interfaces to match, type hints and all. Not very DRY. In these cases, you're almost certain to go for the **kwargs option. But the **kwargs option leaves the user unsure about which arguments are actually required. In the toy example above, a user might naively write: circle = Circle() # Argument missing for parameter "radius" Their IDE (or mypy or Pyright) is being helpful and saying that the radius parameter is required. circle = Circle(radius=5) The IDE (or type checker) is now happy, but the code won't actually run: Traceback (most recent call last): File "foo.py", line 13, in <module> circle = Circle(radius=5) File "foo.py", line 9, in __init__ super().__init__(**kwargs) TypeError: Shape.__init__() missing 2 required positional arguments: 'x' and 'y' So I'm stuck with a choice between writing out the parent interface multiple times, and not being warned by my IDE when I'm using a child class incorrectly. What to do? Research This mypy issue is loosely related to this. This reddit thread has a good rehearsal of the relevant arguments for/against each approach I outline. This SO question is maybe a duplicate of this one. Does the fact I'm talking about __init__ make any difference though? I've found a real duplicate, although the answer is a bit esoteric and doesn't seem like it would qualify as best, or normal, practice.
If the parent class has required (positional) arguments (as your Shape class does), then I'd argue that you must include those arguments in the __init__ of the child (Circle) for the sake of being able to pass around "shape-like" instances and be sure that a Circle will behave like any other shape. So this would be your Circle class: class Shape: def __init__(self, x: float, y: float): self.x = x self.y = y class Circle(Shape): def __init__(self, x: float, y: float, radius: float): super().__init__(x=x, y=y) self.radius = radius # The expectation is that this should work with all instances of `Shape` def move_shape(shape: Shape, x: float, y: float): shape.x = x shape.y = y However, if the parent class is using optional kwargs, that's where stuff gets tricky. You shouldn't have to define colour: str on your Circle class just because colour is an optional argument for Shape. It's up to the developer using your Circle class to know the interface of all shapes and, if need be, interrogate the code and note that Circle can accept colour="green" as it passes **kwargs to its parent constructor: class Shape: def __init__(self, x: float, y: float, colour: str = "black"): self.x = x self.y = y self.colour = colour class Circle(Shape): def __init__(self, x: float, y: float, radius: float, **kwargs): super().__init__(x=x, y=y, **kwargs) self.radius = radius def move_shape(shape: Shape, x: float, y: float): shape.x = x shape.y = y def colour_shape(shape: Shape, colour: str): shape.colour = colour Generally my attitude is that a docstring exists to explain why something is written the way it is, not what it's doing. That should be clear from the code. So, if your Circle requires an x and y parameter for use in the parent class, then it should say as much in the signature. If the parent class has optional requirements, then **kwargs is sufficient in the child class and it's incumbent upon the developer to interrogate Circle and Shape to see what the options are.
27
20
74,012,595
2022-10-10
https://stackoverflow.com/questions/74012595/why-does-code-that-in-3-10-throws-a-recursionerror-as-expected-not-throw-in-earl
To start, I tried this: def x(): try: 1/0 # just a division error to get an exception except: x() This code behaves normally in 3.10 and I get RecursionError: maximum recursion depth exceeded as expected, but 3.8 goes into a stack overflow and doesn't handle the recursion error properly. But I remembered that RecursionError existed in older versions of Python too, so I tried def x(): x() And this gives back RecursionError in both versions of Python. It's as if (in the first snippet) the recursion error is never thrown inside the except block; instead the function is called again, the error is raised at the first instruction of that call, and it is then handled by the try-except. I then tried something else: def x(): try: x() except: x() This is even weirder in some way: stack overflow below 3.10, but it gets stuck in the loop in 3.10. Can you explain this behavior? UPDATE @MisterMiyagi found an even stranger behavior: adding a statement in the except in <=python3.9 doesn't result in a stack overflow def x(): try: 1/0 except: print("") x()
The different behaviors for 3.10 and other versions seem to be because of a Python issue (python/cpython#86666), you can also see the correct error on Python 2.7. The print "fixes" things because it makes Python check the recursion limit again, and through a path that is presumably not broken. You can see the code where it does that here, it also skips the repeated check if the object supports the Vectorcall calling protocol, so things like int keep the fatal error.
10
5
73,992,417
2022-10-7
https://stackoverflow.com/questions/73992417/itertools-combinations-find-if-a-combination-is-divisible
Given itertools combinations with an r of 4: from itertools import combinations mylist = range(0,35) r = 4 combinationslist = list(combinations(mylist, r)) Which will output: (0, 1, 2, 3) (0, 1, 2, 4) (0, 1, 2, 5) (0, 1, 2, 6) (0, 1, 2, 7) (0, 1, 2, 8) (0, 1, 2, 9) ... (30, 31, 32, 33) (30, 31, 32, 34) (30, 31, 33, 34) (30, 32, 33, 34) (31, 32, 33, 34) My question is: if we were to chunk the list into blocks of 10, can we find which nth position a combination occupies within those blocks, without generating all combinations? In other words, can we tell whether the position is divisible by x? One of the problems is that the positions will get into the billions of billions, so it might not be possible to derive the nth position directly. Is there a heuristic that can nevertheless find whether the position of a particular combination/sequence of elements is divisible by x? Edit/addition: The reasoning for this question is for situations where the list is range(0,1000000) and r =30000 for example. Then, provided a combination, find if its position is divisible by x. Naturally the actual index will be ridiculously enormous (and the full combinations too much to generate)
I have authored a package in R called RcppAlgos that has functions specifically for this task. TL;DR Use comboRank from RcppAlgos. Details In the article that @wim linked to, you will see that this procedure is often called ranking and as many have pointed out, this boils down to counting. In the RcppAlgos package there are several ranking functions for ranking various structures (e.g. partitionsRank for ranking integer partitions). We will use comboRank for the task at hand: library(RcppAlgos) ## Generate random combination from 1:35 of length 4 set.seed(42) small_random_comb = sort(sample(35, 4)) ## Print the combination small_random_comb #> [1] 1 4 10 25 ## Ranking the combination with comboRank (See ?comboRank for more info). ## N.B. for the ranking functions, we must provide the source vector to rank appropriately idx = comboRank(small_random_comb, v = 35) ## Remember, R is base 1. idx #> [1] 1179 ## Generate all combinations to confirm all_combs = comboGeneral(35, 4) ## Same result all_combs[idx, ] #> [1] 1 4 10 25 Efficiency The functions are very efficient as well. They are written in C++ and use the gmp library for handling large numbers. Are they efficient enough for the very large case n = 1000000 and r = 10000 (or even r = 30000)? set.seed(97) large_random_comb = sort(sample(1e6, 1e4)) head(large_random_comb) #> [1] 76 104 173 608 661 828 tail(large_random_comb) #> [1] 999684 999731 999732 999759 999824 999878 system.time(lrg_idx <- comboRank(large_random_comb, v = 1e6)) #> user system elapsed #> 2.036 0.003 2.039 ## Let’s not print this number as it is over 20,000 digits gmp::log10.bigz(lrg_idx) #> [1] 24318.49 ## And for r = 30000 we have: set.seed(123) really_large_random_comb = sort(sample(1e6, 3e4)) system.time(really_lrg_idx <- comboRank(really_large_random_comb, v = 1e6)) #> user system elapsed #> 4.942 0.003 4.945 gmp::log10.bigz(really_lrg_idx) #> [1] 58514.98 Under 5 seconds ain't that bad! We can use comboSample, which essentially β€œunranks” when we use the sampleVec argument, for confirmation: check_large_comb = comboSample(1e6, 1e4, sampleVec = lrg_idx) ## Sense comboSample returns a matrix, we must convert to a vector before we compare identical(as.vector(check_large_comb), large_random_comb) #> [1] TRUE What about Python? And if you need this in python, we can make use of rpy2. 
Here is a snippet from a Jupyter Notebook: #> Cell 0 ------------------------------------------------------- import rpy2 import random from itertools import combinations mylist = range(0,35) r = 4 combinationslist = list(combinations(mylist, r)) combo = random.choice(combinationslist) combo ------------------------------------------------------- #> Out[25]: (1, 25, 30, 31) #> Cell 1 ------------------------------------------------------- ## Convert it to a list to ease the transition to R lst_combo = list(combo) ------------------------------------------------------- #> Cell 2 ------------------------------------------------------- %load_ext rpy2.ipython ------------------------------------------------------- #> Cell 3 ------------------------------------------------------- %%R -i lst_combo -o idx ​ library(RcppAlgos) idx = comboRank(lst_combo, v = 0:34) ------------------------------------------------------- #> Cell 4 ------------------------------------------------------- idx[0] ------------------------------------------------------- #> Out[39]: 11347 #> Cell 5 ------------------------------------------------------- ## R is base 1, so we subtract 1 combinationslist[idx[0] - 1] ------------------------------------------------------- #> Out[40]: (1, 25, 30, 31) Addendum - Key Idea in Ranking Algorithm Even if we were to translate the excellent algorithm outlined by @wim to a compiled language, we would still not be anywhere close to tackling the large cases presented here. That is because successive calls to any combinatorial function, no matter how optimized, are expensive. Instead, we take advantage of the fact that this algorithm relies on very subtle differences on each iteration. For example, what if we wanted to calculate the following 3 numbers: nCr(20, 15) = 15504 nCr(19, 14) = 11628 nCr(18, 13) = 8568 Given the formula for nCr: n! / (r! * (n - r)!) With this we can use the result in 1 to get the result in step 2 with only two operations and we can use this result to get the result in step 3 in only two operations as well! Observe: (15504 * 15) / 20 = 11628 (11628 * 14) / 19 = 8568 This is the key idea behind most of the ranking/unranking algorithms in RcppAlgos. I'm not sure of an elegant way to get to the C++ code in RcppAlgos from python. Probably the best solution if you don't want to deal with rpy2 is to adapt the algorithms below to your personal needs: https://github.com/jwood000/RcppAlgos/blob/main/src/RankCombination.cpp
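To make the key idea concrete in Python (my own illustration, not part of the RcppAlgos answer), each successive binomial coefficient used during ranking can be derived from the previous one with one multiply and one divide, via nCr(n-1, r-1) = nCr(n, r) * r / n, which math.comb confirms:

import math

a = math.comb(20, 15)   # 15504
b = math.comb(19, 14)   # 11628
c = math.comb(18, 13)   # 8568

# Each value follows from the previous one with two operations
assert a * 15 // 20 == b
assert b * 14 // 19 == c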
3
2
74,032,055
2022-10-11
https://stackoverflow.com/questions/74032055/how-to-verify-if-a-graph-has-crossing-edges-in-networkx
I am creating a genetic algorithm to solve the traveling salesman problem using python and networkx. And I'm adding a condition to converge to a satisfactory solution: the path must not have crossing edges. I wonder if there's a quick function in networkx to verify if the graph has crossing edges or, at least, want to know if it's possible to create one. The graph is created with a list of points (path), each point has a coordinate in x, and a coordinate in y. The sequence of points index the path to tour. I created an object nx.Graph() like below: G = nx.Graph() for i in range(len(path)): G.add_node(i, pos=(path[i].x, path[i].y)) for i in range(len(path)-1): G.add_edge(i, i+1) G.add_edge(len(path)-1, 0) One example of converging not optimal solution: printing out the points with nx.get_node_attributes(G,'pos'): {0: (494, 680), 1: (431, 679), 2: (217, 565), 3: (197, 581), 4: (162, 586), 5: (90, 522), 6:(138, 508), 7: (217, 454), 8: (256, 275), 9: (118, 57), 10: (362, 139), 11: (673, 89), 12: (738, 153), 13: (884, 119), 14: (687, 542), 15: (720, 618), 16: (745, 737), 17: (895, 887), 18: (902, 574), 19: (910, 337), 20: (823, 371), 21: (601, 345), 22: (608, 302), 23: (436, 294), 24: (515, 384), 25: (646, 495)} Here is an article supporting the condition of convergence: http://www.ams.org/publicoutreach/feature-column/fcarc-tsp
My first reading was the same as @AveragePythonEngineer's. Normally in the travelling salesman problem, and graph theory in general, we don't care too much about the positions of the vertices, only the distances between them. And I thought you might be confusing the drawing of a graph with the graph (it's just one realization of infinite possible drawings). So while you can draw a planar graph with crossing edges if you wish (like your example), the point is that you could draw it in the plane. On re-reading your question, I think you're actually introducing the 'no crossing paths' as a constraint. To put it another way using the jargon: the path must not be self-intersecting. If that's right, then I think this question in GIS Stack Exchange will help you. It uses shapely, a very useful tool for 2D geometric questions. From the first answer: [Check out] .is_simple Returns True if the feature does not cross itself. from shapely.wkt import loads l = loads('LINESTRING (9603380.577551289 2719693.31939431, 9602238.01822002 2719133.882441244, 9601011.900844947 2718804.012436028, 9599670.800095448 2718931.680117098, 9599567.204161201 2717889.384686942, 9600852.184025297 2721120.409265322, 9599710.80929024 2720511.270897166, 9602777.832940497 2718125.875545334)') print(l.is_simple) # False If you're looking to solve the problem from scratch then this answer to a similar question, but in a different framework, has some interesting leads, especially the Bentley–Ottmann algorithm, which might be useful.
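As a rough sketch of how this could be wired up with the graph from the question (my own assumptions, not from the linked answers: shapely is installed, and the tour visits nodes 0..n-1 in index order, matching the question's construction):

from shapely.geometry import LineString
import networkx as nx

def tour_has_crossing(G):
    # Pull the (x, y) positions stored on the nodes, in tour order
    pos = nx.get_node_attributes(G, 'pos')
    coords = [pos[i] for i in range(len(pos))]
    # Close the tour by returning to the starting point
    ring = LineString(coords + [coords[0]])
    # is_simple is False if the closed path intersects itself anywhere
    # other than the shared start/end point
    return not ring.is_simple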
4
7
73,975,798
2022-10-6
https://stackoverflow.com/questions/73975798/why-does-asyncio-wait-keep-a-task-with-a-reference-around-despite-exceeding-the
I recently found and reproduced a memory leak caused by the use of asyncio.wait. Specifically, my program periodically executes some function until stop_event is set. I simplified my program to the snippet below (with a reduced timeout to demonstrate the issue better): async def main(): stop_event = asyncio.Event() while True: # Do stuff here await asyncio.wait([stop_event.wait()], timeout=0.0001) asyncio.run(main()) While this looked innocuous to me, it turns out there's a memory leak here. If you execute the code above, you'll see the memory usage growing to hundreds of MBs in a matter of minutes. This surprised me and took a long time to track down. I was expecting that after the timeout, anything I was waiting for would be cleaned up (since I'm not keeping any references to it myself). However, that turns out not to be the case. Using gc.get_referrers, I was able to infer that every time I call asyncio.wait(...), a new task is created that holds a reference to the object returned by stop_event.wait() and that task is kept around forever. Specifically, len(asyncio.all_tasks()) keeps increasing over time. Even if the timeout is passed, the tasks are still there. Only upon calling stop_event.set() do these tasks all finish at once and does memory usage decrease drastically. After discovering that, this note in the documentation made me try asyncio.wait_for instead: Unlike wait_for(), wait() does not cancel the futures when a timeout occurs. It turns out that actually behaves like I expected. There are no references kept after the timeout, and memory usage and number of tasks stay flat. This is the code without a memory leak: async def main(): stop_event = asyncio.Event() while True: # Do stuff here try: await asyncio.wait_for(event.stop_event(), timeout=0.0001) except asyncio.TimeoutError: pass asyncio.run(main()) While I'm happy this is fixed now, I don't really understand this behavior. If the timeout has been exceeded, why keep this task holding a reference around? It seems like that's a recipe for creating memory leaks. The note about not cancelling futures is also not clear to me. What if we don't explicitly cancel the future, but we just don't keep a task holding a reference after the timeout? Wouldn't that work as well? It would be very much appreciated if anybody could shine some light on this. Thanks a lot!
The key concept to understand here is that the return value of wait() is a tuple of (completed, pending) tasks. The typical way to use wait()-based code is like this: async def main(): stop_event = asyncio.Event() pending = {... add things to wait ...} while pending: completed, pending = await asyncio.wait(pending, timeout=0.0001) process(completed) # e.g. update progress bar pending.update(more_tasks_to_wait) wait() with a timeout isn't used to have one coroutine wait for other coroutines/tasks to finish; instead, its primary use case is to periodically flush completed tasks while letting the unfinished tasks continue "in the background", so cancelling the unfinished tasks automatically isn't really desirable, because you usually want to continue waiting for those pending tasks again in the next iteration. This usage pattern resembles the select() system call. On the other hand, the usage pattern of await wait_for(xyz, timeout) is basically just like doing await xyz with a timeout. It's a common and much simpler use case.
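Applying that to the loop from the question, here is my own sketch of one possible fix (not something stated in the answer): create the stop_event.wait() task once and keep passing the same pending set back into wait(), so no new task is created or leaked per iteration:

import asyncio

async def main():
    stop_event = asyncio.Event()
    # Create the waiter task once instead of on every loop iteration
    waiter = asyncio.create_task(stop_event.wait())
    pending = {waiter}
    while pending:
        # Do stuff here
        done, pending = await asyncio.wait(pending, timeout=0.0001)
        if waiter in done:
            break  # stop_event was set

asyncio.run(main())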
7
4
74,026,454
2022-10-11
https://stackoverflow.com/questions/74026454/julia-spherical-harmonics-different-from-python
I would like to calculate the Spherical Harmonics with Julia. I have done this with the following code: using GSL function radius(x, y, z) return sqrt(x^2 + y^2 + z^2) end function theta(x, y, z) return acos(z / radius(x, y, z)) end function phi(x, y, z) return atan(y, x) end function harmonics(l, m, x, y, z) return (-1)^(m) * GSL.sf_legendre_sphPlm(l, m, cos(theta(x,y,z)))*β„―^(im*m*phi(x,y,z)) end harmonics(1, 1, 11.66, -35, -35) harmonics(1, 1, -35, -35, -35) The output is the following: 0.07921888327321648 - 0.23779253126608726im -0.1994711402007164 - 0.19947114020071643im But doing the same with the following python code: import scipy.special as spe import numpy as np def radius(x, y, z): return np.sqrt(x**2 + y**2 + z**2) def theta(x, y, z): return np.arccos(z / radius(x, y, z)) def phi(x, y, z): return np.arctan(y / x) def harmonics(l, m, x, y, z): return spe.sph_harm(m, l, phi(x, y, z), theta(x, y, z)) harmonics(1, 1, 11.66, -35, -35) harmonics(1, 1, -35, -35, -35) Results in the following output: (-0.07921888327321645+0.23779253126608718j) (-0.19947114020071638-0.19947114020071635j) So the sign of the first result is different. But since only one of the results has a different sign, the cause cannot be in the prefactor (-1)^m. I can't see through this anymore and can't explain why the results are different.
The comment by @Oscar Smith got me started on the solution. Julia uses a different convention for the angles returned by atan, provided two arguments are passed [Julia, Numpy]. If we use atan(y / x) instead of atan(y, x) in Julia we get the same result.
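A quick numerical check of that convention difference (my own example, using the second set of coordinates from the question):

import numpy as np

x, y = -35, -35
print(np.arctan(y / x))   # 0.785... (pi/4): the quotient form loses the quadrant
print(np.arctan2(y, x))   # -2.356... (-3*pi/4): the two-argument form keeps it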
3
3
74,031,620
2022-10-11
https://stackoverflow.com/questions/74031620/calculate-the-slope-for-every-n-days-per-group
I have the following dataframe (sample): import pandas as pd data = [['A', '2022-09-01', 2], ['A', '2022-09-02', 1], ['A', '2022-09-04', 3], ['A', '2022-09-06', 2], ['A', '2022-09-07', 1], ['A', '2022-09-07', 2], ['A', '2022-09-08', 4], ['A', '2022-09-09', 2], ['B', '2022-09-01', 2], ['B', '2022-09-03', 4], ['B', '2022-09-04', 2], ['B', '2022-09-05', 2], ['B', '2022-09-07', 1], ['B', '2022-09-08', 3], ['B', '2022-09-10', 2]] df = pd.DataFrame(data = data, columns = ['group', 'date', 'value']) df['date'] = pd.to_datetime(df['date']) df['diff_days'] = (df['date']-df['date'].groupby(df['group']).transform('first')).dt.days group date value diff_days 0 A 2022-09-01 2 0 1 A 2022-09-02 1 1 2 A 2022-09-04 3 3 3 A 2022-09-06 2 5 4 A 2022-09-07 1 6 5 A 2022-09-07 2 6 6 A 2022-09-08 4 7 7 A 2022-09-09 2 8 8 B 2022-09-01 2 0 9 B 2022-09-03 4 2 10 B 2022-09-04 2 3 11 B 2022-09-05 2 4 12 B 2022-09-07 1 6 13 B 2022-09-08 3 7 14 B 2022-09-10 2 9 I would like to create a column called "slope" which shows the slope for every n (n = 3) days per group. This means that when the first date is "2022-09-01" and 3 days later are used for the calculation. The slope can be calculated using the "diff_days" (calculated by difference with the first value per group) and "value" columns. Here is the desired output: data = [['A', '2022-09-01', 2, 0, 0.43], ['A', '2022-09-02', 1, 1, 0.43], ['A', '2022-09-04', 3, 3, 0.43], ['A', '2022-09-06', 2, 5, -0.5], ['A', '2022-09-07', 1, 6, -0.5], ['A', '2022-09-07', 2, 6, -0.5], ['A', '2022-09-08', 4, 7, -2], ['A', '2022-09-09', 2, 8, -2], ['B', '2022-09-01', 2, 0, 0.14], ['B', '2022-09-03', 4, 2, 0.14], ['B', '2022-09-04', 2, 3, 0.14], ['B', '2022-09-05', 2, 4, -0.5], ['B', '2022-09-07', 1, 6, -0.5], ['B', '2022-09-08', 3, 7, -0.5], ['B', '2022-09-10', 2, 9, -0.5]] df_desired = pd.DataFrame(data = data, columns = ['group', 'date', 'value', 'diff_days', 'slope']) group date value diff_days slope 0 A 2022-09-01 2 0 0.43 1 A 2022-09-02 1 1 0.43 2 A 2022-09-04 3 3 0.43 3 A 2022-09-06 2 5 -0.50 4 A 2022-09-07 1 6 -0.50 5 A 2022-09-07 2 6 -0.50 6 A 2022-09-08 4 7 -2.00 7 A 2022-09-09 2 8 -2.00 8 B 2022-09-01 2 0 0.14 9 B 2022-09-03 4 2 0.14 10 B 2022-09-04 2 3 0.14 11 B 2022-09-05 2 4 -0.50 12 B 2022-09-07 1 6 -0.50 13 B 2022-09-08 3 7 -0.50 14 B 2022-09-10 2 9 -0.50 Here are some example calculations to give you an idea: For the first 3 days of group A: slope([0,1,3],[2,1,3])=0.43 For the 3 days later of group A: slope([5,6,6],[2,1,2])=-0.5 For again 3 days later of group A: slope([7,8],[4,2])=-2.0 So I was wondering if anyone knows how to determine the slope for every n days (this case 3 days) per group? Please note: Not all dates are included, so it is really every n days.
Solution df['n'] = df.groupby('group').cumcount() // 3 df.merge( df .groupby(['group', 'n']) .apply(lambda s: np.polyfit(s['diff_days'], s['value'], 1)[0]) .reset_index(name='slope') ) How this works? Create a sequential counter per group using cumcount then floor divide by 3 to get blocks of 3 rows Group the dataframe by group column along with the blocks and aggregate with np.polyfit to get the slope Merge the aggregated frame back to original dataframe to broadcast the slope values Result group date value diff_days n slope 0 A 2022-09-01 2 0 0 0.428571 1 A 2022-09-02 1 1 0 0.428571 2 A 2022-09-04 3 3 0 0.428571 3 A 2022-09-06 2 5 1 -0.500000 4 A 2022-09-07 1 6 1 -0.500000 5 A 2022-09-07 2 6 1 -0.500000 6 A 2022-09-08 4 7 2 -2.000000 7 A 2022-09-09 2 8 2 -2.000000 8 B 2022-09-01 2 0 0 0.142857 9 B 2022-09-03 4 2 0 0.142857 10 B 2022-09-04 2 3 0 0.142857 11 B 2022-09-05 2 4 1 0.214286 12 B 2022-09-07 1 6 1 0.214286 13 B 2022-09-08 3 7 1 0.214286 14 B 2022-09-10 2 9 2 0.111111
3
3
74,015,708
2022-10-10
https://stackoverflow.com/questions/74015708/why-when-i-send-an-email-via-fastapi-mail-the-email-i-receive-displays-the-same
I am trying to send an email using FastAPI-mail, and even though I am successfully sending it, when I open the email in Gmail or Outlook, the content (message) appears twice. I am looking at the code but I don't think I am attaching the message twice (also note that the top message always shows the tags, while the second doesn't (see below image). Any help will be appreciated! main.py from fastapi import FastAPI from fastapi_mail import FastMail, MessageSchema, ConnectionConfig from starlette.requests import Request from starlette.responses import JSONResponse from pydantic import EmailStr, BaseModel from typing import List app = FastAPI() class EmailSchema(BaseModel): email: List[EmailStr] conf = ConnectionConfig( MAIL_USERNAME='myGmailAddress', MAIL_PASSWORD="myPassword", MAIL_FROM='myGmailAddress', MAIL_PORT=587, MAIL_SERVER="smtp.gmail.com", MAIL_TLS=True, MAIL_SSL=False ) @app.post("/send_mail") async def send_mail(email: EmailSchema): template = """ <html> <body> <p>Hi !!! <br>Thanks for using <b>fastapi mail</b>!!!</p> </body> </html> """ message = MessageSchema( subject="Fastapi-Mail module", recipients=email.dict().get("email"), # List of recipients, as many as you can pass body=template, subtype="html" ) template = """ <p>Hi !!! <br>Thanks for using <b>fastapi mail</b>!!! </p>""" ''' template = """ <p>Hi !!! <br>Thanks for using <b>fastapi mail</b>!!! </p>""" ''' fm = FastMail(conf) await fm.send_message(message) return JSONResponse(status_code=200, content={"message": "email has been sent"})
Instead of body, use the html property. message = MessageSchema( subject="Fastapi-Mail module", recipients=email.dict().get("email"), # List of recipients, as many as you can pass html=template, # <<<<<<<<< here subtype="html" )
3
3
74,008,146
2022-10-9
https://stackoverflow.com/questions/74008146/bifurcation-diagram-of-dynamical-system
TL:DR How can one implement a bifurcation diagram of a seasonally forced epidemiological model such as SEIR (susceptible, exposed, infected, recovered) in Python? I already know how to implement the model itself and display a sampled time series (see this stackoverflow question), but I am struggling with reproducing a bifurcation figure from a textbook. Context and My Attempt I am trying to reproduce figures from the book "Modeling Infectious Diseases in Humans and Animals" (Keeling 2007) to both validate my implementations of models and to learn/visualize how different model parameters affect the evolution of a dynamical system. Below is the textbook figure. I have found implementations of bifurcation diagrams for examples using the logistic map (see this ipython cookbook this pythonalgos bifurcation, and this stackoverflow question). My main takeaway from these implementations was that a single point on the bifurcation diagram has an x-component equal to some particular value of the varied parameter (e.g., Beta 1 = 0.025) and its y-component is the solution (numerical or otherwise) at time t for a given model/function. I use this logic to implement the plot_bifurcation function in the code section at the end of this question. Questions Why do my panel outputs not match those in the figure? I assume I can't try to reproduce the bifurcation diagram from the textbook without my panels matching the output in the textbook. I have tried to implement a function to produce a bifurcation diagram, but the output looks really strange. Am I misunderstanding something about the bifurcation diagram? NOTE: I receive no warnings/errors during code execution. Code to Reproduce my Figures from typing import Callable, Dict, List, Optional, Any import numpy as np import matplotlib.pyplot as plt from scipy.integrate import odeint def seasonal_seir(y: List, t: List, params: Dict[str, Any]): """Seasonally forced SEIR model. Function parameters much match with those required by `scipy.integrate.odeint` Args: y: Initial conditions. t: Timesteps over which numerical solution will be computed. params: Dict with the following key-value pairs: beta_zero -- Average transmission rate. beta_one -- Amplitude of seasonal forcing. omega -- Period of forcing. mu -- Natural mortality rate. sigma -- Latent period for infection. gamma -- Recovery from infection term. Returns: Tuple whose components are the derivatives of the susceptible, exposed, and infected state variables w.r.t to time. References: [SEIR Python Program from Textbook](http://homepages.warwick.ac.uk/~masfz/ModelingInfectiousDiseases/Chapter2/Program_2.6/Program_2_6.py) [Seasonally Forced SIR Program from Textbook](http://homepages.warwick.ac.uk/~masfz/ModelingInfectiousDiseases/Chapter5/Program_5.1/Program_5_1.py) """ beta_zero = params['beta_zero'] beta_one = params['beta_one'] omega = params['omega'] mu = params['mu'] sigma = params['sigma'] gamma = params['gamma'] s, e, i = y beta = beta_zero*(1 + beta_one*np.cos(omega*t)) sdot = mu - (beta * i + mu)*s edot = beta*s*i - (mu + sigma)*e idot = sigma*e - (mu + gamma)*i return sdot, edot, idot def plot_panels( model: Callable, model_params: Dict, panel_param_space: List, panel_param_name: str, initial_conditions: List, timesteps: List, odeint_kwargs: Optional[Dict] = dict(), x_ticks: Optional[List] = None, time_slice: Optional[slice] = None, state_var_ix: Optional[int] = None, log_scale: bool = False): """Plot panels that are samples of the parameter space for bifurcation. 
Args: model: Function that models dynamical system. Returns dydt. model_params: Dict whose key-value pairs are the names of parameters in a given model and the values of those parameters. bifurcation_parameter_space: List of varied bifurcation parameters. bifuraction_parameter_name: The name o the bifurcation parameter. initial_conditions: Initial conditions for numerical integration. timesteps: Timesteps for numerical integration. odeint_kwargs: Key word args for numerical integration. state_var_ix: State variable in solutions to use for plot. time_slice: Restrict the bifurcation plot to a subset of the all solutions for numerical integration timestep space. Returns: Figure and axes tuple. """ # Set default ticks if x_ticks is None: x_ticks = timesteps # Create figure fig, axs = plt.subplots(ncols=len(panel_param_space)) # For each parameter that is varied for a given panel # compute numerical solutions and plot for ix, panel_param in enumerate(panel_param_space): # update model parameters with the varied parameter model_params[panel_param_name] = panel_param # Compute solutions solutions = odeint( model, initial_conditions, timesteps, args=(model_params,), **odeint_kwargs) # If there is a particular solution of interst, index it # otherwise squeeze last dimension so that [T, 1] --> [T] # where T is the max number of timesteps if state_var_ix is not None: solutions = solutions[:, state_var_ix] elif state_var_ix is None and solutions.shape[-1] == 1: solutions = np.squeeze(solutions) else: raise ValueError( f'solutions to model are rank-2 tensor of shape {solutions.shape}' ' with the second dimension greater than 1. You must pass' ' a value to :param state_var_ix:') # Slice the solutions based on the desired time range if time_slice is not None: solutions = solutions[time_slice] # Natural log scale the results if log_scale: solutions = np.log(solutions) # Plot the results axs[ix].plot(x_ticks, solutions) return fig, axs def plot_bifurcation( model: Callable, model_params: Dict, bifurcation_parameter_space: List, bifurcation_param_name: str, initial_conditions: List, timesteps: List, odeint_kwargs: Optional[Dict] = dict(), state_var_ix: Optional[int] = None, time_slice: Optional[slice] = None, log_scale: bool = False): """Plot a bifurcation diagram of state variable from dynamical system. Args: model: Function that models system. Returns dydt. model_params: Dict whose key-value pairs are the names of parameters in a given model and the values of those parameters. bifurcation_parameter_space: List of varied bifurcation parameters. bifuraction_parameter_name: The name o the bifurcation parameter. initial_conditions: Initial conditions for numerical integration. timesteps: Timesteps for numerical integration. odeint_kwargs: Key word args for numerical integration. state_var_ix: State variable in solutions to use for plot. time_slice: Restrict the bifurcation plot to a subset of the all solutions for numerical integration timestep space. log_scale: Flag to natural log scale solutions. Returns: Figure and axes tuple. 
""" # Track the solutions for each parameter parameter_x_time_matrix = [] # Iterate through parameters for param in bifurcation_parameter_space: # Update the parameter dictionary for the model model_params[bifurcation_param_name] = param # Compute the solutions to the model using # dictionary of parameters (including the bifurcation parameter) solutions = odeint( model, initial_conditions, timesteps, args=(model_params, ), **odeint_kwargs) # If there is a particular solution of interst, index it # otherwise squeeze last dimension so that [T, 1] --> [T] # where T is the max number of timesteps if state_var_ix is not None: solutions = solutions[:, state_var_ix] elif state_var_ix is None and solutions.shape[-1] == 1: solutions = np.squeeze(solutions) else: raise ValueError( f'solutions to model are rank-2 tensor of shape {solutions.shape}' ' with the second dimension greater than 1. You must pass' ' a value to :param state_var_ix:') # Update the parent list of solutions for this particular # bifurcation parameter parameter_x_time_matrix.append(solutions) # Cast to numpy array parameter_x_time_matrix = np.array(parameter_x_time_matrix) # Transpose: Bifurcation plots Function Output vs. Parameter # This line ensures that each row in the matrix is the solution # to a particular state variable in the system of ODEs # a timestep t # and each column is that solution for a particular value of # the (varied) bifurcation parameter of interest time_x_parameter_matrix = np.transpose(parameter_x_time_matrix) # Slice the iterations to display to a smaller range if time_slice is not None: time_x_parameter_matrix = time_x_parameter_matrix[time_slice] # Make bifurcation plot fig, ax = plt.subplots() # For the solutions vector at timestep plot the bifurcation # NOTE: The elements of the solutions vector represent the # numerical solutions at timestep t for all varied parameters # in the parameter space # e.g., # t beta1=0.025 beta1=0.030 .... beta1=0.30 # 0 solution00 solution01 .... 
solution0P for sol_at_time_t_for_all_params in time_x_parameter_matrix: if log_scale: sol_at_time_t_for_all_params = np.log(sol_at_time_t_for_all_params) ax.plot( bifurcation_parameter_space, sol_at_time_t_for_all_params, ',k', alpha=0.25) return fig, ax # Define initial conditions based on figure s0 = 6e-2 e0 = i0 = 1e-3 initial_conditions = [s0, e0, i0] # Define model parameters based on figure # NOTE: omega is not mentioned in the figure, but # omega is defined elsewhere as 2pi/365 days_per_year = 365 mu = 0.02/days_per_year beta_zero = 1250 sigma = 1/8 gamma = 1/5 omega = 2*np.pi / days_per_year model_params = dict( beta_zero=beta_zero, omega=omega, mu=mu, sigma=sigma, gamma=gamma) # Define timesteps nyears = 200 ndays = nyears * days_per_year timesteps = np.arange(1, ndays + 1, 1) # Define different levels of seasonality (from figure) beta_ones = [0.025, 0.05, 0.25] # Define the time range to actually show on the plot min_year = 190 max_year = 200 # Create a slice of the iterations to display on the diagram time_slice = slice(min_year*days_per_year, max_year*days_per_year) # Get the xticks to display on the plot based on the time slice x_ticks = timesteps[time_slice]/days_per_year # Plot the panels using the infected state variable ix infection_ix = 2 # Plot the panels panel_fig, panel_ax = plot_panels( model=seasonal_seir, model_params=model_params, panel_param_space=beta_ones, panel_param_name='beta_one', initial_conditions=initial_conditions, timesteps=timesteps, odeint_kwargs=dict(hmax=5), x_ticks=x_ticks, time_slice=time_slice, state_var_ix=infection_ix, log_scale=False) # Label the panels panel_fig.suptitle('Attempt to Reproduce Panels from Keeling 2007') panel_fig.supxlabel('Time (years)') panel_fig.supylabel('Fraction Infected') panel_fig.set_size_inches(15, 8) # Plot bifurcation bi_fig, bi_ax = plot_bifurcation( model=seasonal_seir, model_params=model_params, bifurcation_parameter_space=np.linspace(0.025, 0.3), bifurcation_param_name='beta_one', initial_conditions=initial_conditions, timesteps=timesteps, odeint_kwargs={'hmax':5}, state_var_ix=infection_ix, time_slice=time_slice, log_scale=False) # Label the bifurcation bi_fig.suptitle('Attempt to Reproduce Bifurcation Diagram from Keeling 2007') bi_fig.supxlabel(r'$\beta_1$') bi_fig.supylabel('Fraction Infected') bi_fig.set_size_inches(15, 8)
The answer to this question is here on the Computational Science stack exchange. All credit to Lutz Lehmann.
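The linked answer explains the fix in detail. For readers who cannot follow the link, a common way to build this kind of diagram (not necessarily the exact approach taken there) is to sample the numerical solution stroboscopically, i.e. once per forcing period (once per year here), after discarding a transient, and then plot those samples against the bifurcation parameter. Below is a minimal sketch that reuses seasonal_seir, model_params, initial_conditions, timesteps, days_per_year, min_year, max_year and infection_ix exactly as defined in the question; the number of beta_1 samples and the log scale are arbitrary illustrative choices.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

beta_one_space = np.linspace(0.025, 0.3, 200)
# One sample per year inside the display window (years before min_year are the discarded transient)
sample_ix = np.arange(min_year * days_per_year, max_year * days_per_year, days_per_year)

fig, ax = plt.subplots()
for beta_one in beta_one_space:
    model_params['beta_one'] = beta_one
    sol = odeint(seasonal_seir, initial_conditions, timesteps,
                 args=(model_params,), hmax=5)
    # Infected fraction sampled once per forcing period
    yearly_infected = sol[sample_ix, infection_ix]
    ax.plot([beta_one] * len(yearly_infected), np.log(yearly_infected), ',k')
ax.set_xlabel(r'$\beta_1$')
ax.set_ylabel('log fraction infected (sampled once per year)')
plt.show()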
5
0
74,027,060
2022-10-11
https://stackoverflow.com/questions/74027060/specify-separate-sources-for-different-packages-in-pyproject-toml
My project has various private python packages developed internally in my organization. I am using [tool.poetry.source] to specify the PyPi server. I have a use case to specify custom PyPi server url for different packages. This is the content of my pyproject.toml [tool.poetry.dependencies] python = "^3.8" package-a = "0.1.2" package-b = "0.2.1" package-c = "0.4.2" [[tool.poetry.source]] name = "internal-repo-1" url = "https://<private-repo-1>" [[tool.poetry.source]] name = "internal-repo-2" url = "https://<private-repo-2>" I want to use private-repo-1 from package-a and private-repo-2 for package-b and package-c. How can this be achieved ? Also can this be achieved without scanning all the private repositories for each and every package? I am using poetry for dependency management.
This is described in the docs: [tool.poetry.dependencies] python = "^3.8" package-a = { version = "0.1.2", source = "internal-repo-1" } package-b = { version = "0.2.1", source = "internal-repo-2" } package-c = { version = "0.4.2", source = "internal-repo-2" } [[tool.poetry.source]] name = "internal-repo-1" url = "https://<private-repo-1>" [[tool.poetry.source]] name = "internal-repo-2" url = "https://<private-repo-2>"
3
4
74,031,424
2022-10-11
https://stackoverflow.com/questions/74031424/how-to-implement-python-udf-in-dbt
Please I need some help with applying python UDF to run on my dbt models. I successfully created a python function in snowflake (DWH) and ran it against a table. This seems to work as expected, but implementing this on dbt seems to be a struggle. Some advice/help/direction will make my day. here is my python UDF created on snowflake create or replace function "077"."Unity".sha3_512(str varchar) returns varchar language python runtime_version = '3.8' handler = 'hash' as $$ import hashlib def hash(str): # create a sha3 hash object hash_sha3_512 = hashlib.new("sha3_512", str.encode()) return hash_sha3_512.hexdigest() $$ ; The objective is the create the python function in dbt and apply it to the model below {{ config(materialized = 'view') }} WITH SEC AS( SELECT A."AccountID" AS AccountID, A."AccountName" AS AccountName , A."Password" AS Passwords, apply function here (A."Password") As SHash FROM {{ ref('Green', 'Account') }} A ) ----------------VIEW RECORD------------------------------ SELECT * FROM SEC is there a way to do this please. Thank you
Assuming that UDF already exists in Snowflake: {{ config(materialized = 'view') }} WITH SEC AS( SELECT A."AccountID" AS AccountID, A."AccountName" AS AccountName , A."Password" AS Passwords, {{target.schema}}.sha3_512(A."Password") As SHash FROM {{ ref('Green', 'Account') }} A ) SELECT * FROM SEC; The function could be created using on-run-start: on-run-start: - '{{ creating_udf()}}' and macro: {% macro creating_udf() %} create function if not exists {{target.schema}}.sha3_512(str varchar) returns varchar language python runtime_version = '3.8' handler = 'hash' as $$ import hashlib def hash(str): # create a sha3 hash object hash_sha3_512 = hashlib.new("sha3_512", str.encode()) return hash_sha3_512.hexdigest() $$ ; {% endmacro %}
6
6
74,028,201
2022-10-11
https://stackoverflow.com/questions/74028201/can-you-plot-multiple-precision-recall-curves-using-precisionrecalldisplay
I am trying to plot Precision Recall curve using PrecisionRecallDisplay from scikit-learn. I have model predicted values in y_pred and actual values in y_true. I can plot precision recall curve using the following syntax: metrics.PrecisionRecallDisplay.from_predictions(y_true, y_pred) But I want to plot multiple curves (say by applying model on training or validation data) in the same plot. So is it possible to achieve this using PrecisionRecallDisplay? Or Is there some other standard way to achieve this using scikit-learn?
Since sklearn display routines are basically just matplotlib wrappers, the easiest way seems to be utilizing the ax argument, like this:
import matplotlib.pyplot as plt
from sklearn.metrics import PrecisionRecallDisplay

fig, ax = plt.subplots()
PrecisionRecallDisplay.from_predictions(y_train, y_pred_train, ax=ax)
PrecisionRecallDisplay.from_predictions(y_test, y_pred, ax=ax)
plt.show()
4
11
74,027,350
2022-10-11
https://stackoverflow.com/questions/74027350/python3-permutations-for-7-digit-number-that-totals-to-a-number
I need to find a solution for the below problem in Python3. I tried itertools.combinations but not clear on how to do it. Prepare a 7-digit number that sums to 5. Each digit can be between 0-4 only. Also, there can be repetitions. Valid example numbers are - [ [2,1,1,0,0,1,0], [3,0,1,0,0,1,0], [0,0,0,4,0,0,1], [1,0,0,3,0,1,0], [1,1,1,1,0,1,0], ...... ] As you can see, numbers may appear more than once in this list. How can I create a list of all combinations meeting the criteria above?
This function will find every combination, with repeated combinations, that sum to N: from itertools import product from typing import List, Tuple def perm_n_digit_total(n_digits, total, choices) -> List[Tuple]: return list(filter( lambda x: sum(x) == total, product(choices, repeat=n_digits) )) Example: perm_n_digit_total(3, 1, range(4)) Out[43]: [(0, 0, 1), (0, 1, 0), (1, 0, 0)] perm_n_digit_total(7, 5, range(4))[::50] Out[49]: [(0, 0, 0, 0, 0, 0, 5), (0, 0, 0, 3, 1, 1, 0), (0, 0, 2, 0, 3, 0, 0), (0, 1, 0, 1, 3, 0, 0), (0, 2, 0, 0, 1, 0, 2), (0, 4, 1, 0, 0, 0, 0), (1, 0, 1, 1, 1, 0, 1), (1, 1, 1, 1, 1, 0, 0), (2, 0, 1, 0, 0, 2, 0), (3, 1, 0, 0, 0, 1, 0)]
3
2
74,025,103
2022-10-11
https://stackoverflow.com/questions/74025103/how-to-make-python-for-loops-faster
I have a list of dictionaries, like this: [{'user': '123456', 'db': 'db1', 'size': '8628'} {'user': '123456', 'db': 'db1', 'size': '7168'} {'user': '123456', 'db': 'db1', 'size': '38160'} {'user': '222345', 'db': 'db3', 'size': '8628'} {'user': '222345', 'db': 'db3', 'size': '8628'} {'user': '222345', 'db': 'db5', 'size': '840'} {'user': '34521', 'db': 'db6', 'size': '12288'} {'user': '34521', 'db': 'db6', 'size': '476'} {'user': '2345156', 'db': 'db7', 'size': '5120'}.....] This list contains millions of entries. Each user can be found in multiple dbs, each user can have multiple entires in the same db. I want to sum up how much is the size occupied by each user, per each db. I don't want to use pandas. At the moment I do it this way: I create 2 lists of unique users and unique dbs Use those lists to iterate through the big list and sum up where user and db are the same result = [] for user in unique_users: for db in unique_dbs: total_size = 0 for i in big_list: if (i['user'] == user and i['db'] == db): total_size += float(i['size']) if(total_size) > 0: row = {} row['user'] = user row['db'] = db row['size'] = total_size result.append(row) The problem is that this triple for loop develops into something very large (hundreds of billions of iterations) which takes forever to sum up the result. If the big_list is small, this works very well. How should I approach this in order to keep it fast and simple? Thanks a lot!
There are two main issues with the current approach: the inefficient algorithm and the inefficient data structure. The first is that the algorithm used is clearly inefficient as it iterates many times over the big list. There is no need to iterate over the whole list to filter a unique user and db. You can iterate over the big list once and aggregate data using a dictionary. The key of the target dictionary is simply a (user, db) tuple. The value of the dictionary is total_size. Here is an untested example:
# Aggregation part
# Note: a default dict can be used instead to make the code possibly simpler
aggregate_dict = dict()
for i in big_list:
    key = (i['user'], i['db'])
    value = float(i['size'])
    if key in aggregate_dict:
        aggregate_dict[key] += value
    else:
        aggregate_dict[key] = value

# Fast creation of `result`
result = []
for user in unique_users:
    for db in unique_dbs:
        total_size = aggregate_dict.get((user, db))
        if total_size is not None and total_size > 0:
            result.append({'user': user, 'db': db, 'size': total_size})
The other issue is the inefficient data structure: for each row, the keys are replicated while tuples can be used instead. In fact, a better data structure is to store a dictionary of (column, items) key-values where items is a list of items for the target column. This way of storing data is called a dataframe. This is roughly what Pandas uses internally (except it is a Numpy array which is even better as it is more compact and generally more efficient than a list for most operations). Using this data structure for both the input and the output should result in a significant speed up (if combined with Numpy) and a lower memory footprint.
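To make the column-oriented idea concrete, here is a small illustrative sketch; the column names user/db/size come from the question, while the sample values are made up:
from collections import defaultdict

# Column-oriented storage: one list per column instead of one dict per row.
big_table = {
    'user': ['123456', '123456', '222345'],
    'db':   ['db1',    'db1',    'db3'],
    'size': [8628.0,   7168.0,   8628.0],
}

# Same single-pass aggregation as above, written against the columnar layout.
aggregate = defaultdict(float)
for user, db, size in zip(big_table['user'], big_table['db'], big_table['size']):
    aggregate[(user, db)] += size

result = [{'user': u, 'db': d, 'size': s} for (u, d), s in aggregate.items()]
print(result)  # [{'user': '123456', 'db': 'db1', 'size': 15796.0}, {'user': '222345', 'db': 'db3', 'size': 8628.0}]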
3
3
74,019,260
2022-10-10
https://stackoverflow.com/questions/74019260/how-to-specify-dependencies-for-the-entire-router
class User(BaseModel): name: str token: str fake_db = [ User(name='foo', token='a1'), User(name='bar', token='a2') ] async def get_user_by_token(token: str = Header()): for user in fake_db: if user.token == token: return user else: raise HTTPException(status_code=401, detail='Invalid token') @router.get(path='/test_a', summary='Test route A') async def test_route_a(user: User = Depends(get_user_by_token)): return {'name': user.name} @router.get(path='/test_b', summary='Test route B') async def test_route_a(user: User = Depends(get_user_by_token)): return {'name': user.name} I would like to avoid code duplication. Is it possible to somehow set the line user: User = Depends(get_user_by_token) for the entire router? At the same time, I need the user object to be available in each method. It is very important that the openapi says that you need to specify a header with a token for the method.
You can use the dependencies parameter to add global dependencies when creating the router instance: router = APIRouter(dependencies=[Depends(get_user_by_token)]) or, when adding the router to the app instance: app.include_router(router, dependencies=[Depends(get_user_by_token)]) Please have a look at FastAPI's documentation on Dependencies for more details. As for getting the return value of a global dependency, you can't really do that. The way around this issue is to store the returned value to request.state (as described here), which is used to store arbitrary state (see the implementation of State as well). Hence, you could have something like this: def get_user_by_token(request: Request, token: str = Header()): for user in fake_db: if user.token == token: request.state.user = user # ... Then, inside your endpoint, you could retrieve the user object using request.state.user, as described in this answer.
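For example, a minimal sketch of an endpoint reading the stored user back; this assumes the User model and the modified get_user_by_token from above are in scope:
from fastapi import APIRouter, Depends, Request

router = APIRouter(dependencies=[Depends(get_user_by_token)])

@router.get(path='/test_a', summary='Test route A')
async def test_route_a(request: Request):
    user = request.state.user  # stored by get_user_by_token
    return {'name': user.name}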
3
6
74,014,379
2022-10-10
https://stackoverflow.com/questions/74014379/how-to-fine-tune-gpt-j-using-huggingface-trainer
I'm attempting to fine-tune gpt-j using the huggingface trainer and failing miserably. I followed the example that references bert, but of course, the gpt-j model isn't exactly like the bert model. The error indicates that the model isn't producing a loss, which is great, except that I have no idea how to make it generate a loss or how to change what the trainer is expecting. I'm using Transformers 4.22.2. I would like to get this working on a CPU before I try to do anything on Paperspace with a GPU. I did make an initial attempt there using a GPU that received the same error, with slightly different code to use cuda. I suspect that my approach is entirely wrong. I found a very old example of fine-tuning gpt-j using 8-bit quantization, but even that repository says it is deprecated. I'm unsure if my mistake is in using the compute_metrics() I found in the bert example or if it is something else. Any advice would be appreciated. Or, maybe it is an issue with the labels I provide the config, but I've tried different permutations. I understand what a loss function is, but I don't know how it is supposed to be configured in this case. My Code: from transformers import Trainer, TrainingArguments, AutoModelForCausalLM from transformers import GPTJForCausalLM, AutoTokenizer from datasets import load_dataset import time import torch import os import numpy as np import evaluate import sklearn start = time.time() GPTJ_FINE_TUNED_FILE = "./fine_tuned_models/gpt-j-6B" print("Loading model") model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", low_cpu_mem_usage=True) model.config.pad_token_id = model.config.eos_token_id print("Loading tokenizer") tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") tokenizer.pad_token = tokenizer.eos_token print("Loading dataset") current_dataset = load_dataset("wikitext", 'wikitext-103-v1') current_dataset['train'] = current_dataset['train'].select(range(1200)) def tokenize_function(examples): current_tokenizer_result = tokenizer(examples["text"], padding="max_length", truncation=True) return current_tokenizer_result print("Splitting and tokenizing dataset") tokenized_datasets = current_dataset.map(tokenize_function, batched=True) small_train_dataset = tokenized_datasets["train"].select(range(100)) print("Preparing training arguments") training_args = TrainingArguments(output_dir=GPTJ_FINE_TUNED_FILE, report_to='all', logging_dir='./logs', per_device_train_batch_size=1, label_names=['input_ids', 'attention_mask'], # 'logits', 'past_key_values' num_train_epochs=1, no_cuda=True ) metric = evaluate.load("accuracy") def compute_metrics(eval_pred): logits, labels = eval_pred predictions = np.argmax(logits, axis=-1) return metric.compute(predictions=predictions, references=labels) trainer = Trainer( model=model, args=training_args, train_dataset=small_train_dataset ) print("Starting training") trainer.train() print(f"Finished fine-tuning in {time.time() - start}") Which leads to the error and stacktrace: File "xxx\ft_v3.py", line 66, in <module> File "xxx\venv\lib\site-packages\transformers\trainer.py", line 1521, in train return inner_training_loop( File "xxx\venv\lib\site-packages\transformers\trainer.py", line 1763, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "xxx\venv\lib\site-packages\transformers\trainer.py", line 2499, in training_step loss = self.compute_loss(model, inputs) File "xxx\venv\lib\site-packages\transformers\trainer.py", line 2544, in compute_loss raise ValueError( ValueError: The model did not 
return a loss from the inputs, only the following keys: logits,past_key_values. For reference, the inputs it received are input_ids,attention_mask.
I found what appears to work, though now I'm running low on memory and working through ways of handling it. The data_collator parameter seems to take care of the exact issue that I was having.
from transformers import DataCollatorForLanguageModeling

data_collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=small_train_dataset,
    eval_dataset=small_eval_dataset,
    compute_metrics=compute_metrics,
    data_collator=data_collator,
)
4
1
74,020,472
2022-10-10
https://stackoverflow.com/questions/74020472/extract-all-phrases-from-a-pandas-dataframe-based-on-multiple-words-in-list
I have a list, L: L = ['top', 'left', 'behind', 'before', 'right', 'after', 'hand', 'side'] I have a pandas DataFrame, DF: Text the objects are both before and after the person the object is behind the person the object in right is next to top left hand side of person I would like to extract all words in L from the DF column 'Text' in such a manner: Text Extracted_Value the objects are both before and after the person before_after the object is behind the person behind the object in right is next to top left hand side of person right_top left hand side For case 1 and 2, my code is working: L = ['top', 'left', 'behind', 'before', 'right', 'after', 'hand', 'side'] pattern = r"(?:^|\s+)(" + "|".join(L) + r")(?:\s+|$)" df["Extracted_Value "] = ( df['Text'].str.findall(pattern).str.join("_").replace({"": None}) ) For CASE 3, I get right_top_hand. As in the third example, If identified words are contiguous, they are to be picked up as a phrase (one extraction). So in the object in right is next to top left hand side of person, there are two extractions - right and top left hand side. Hence, only these two extractions are separated by an _. I am not sure how to get it to work!
Try: df["Extracted_Value"] = ( df.Text.apply( lambda x: "|".join(w if w in L else "" for w in x.split()).strip("|") ) .replace(r"\|{2,}", "_", regex=True) .str.replace("|", " ", regex=False) ) print(df) Prints: Text Extracted_Value 0 the objects are both before and after the person before_after 1 the object is behind the person behind 2 the object in right is next to top left hand side of person right_top left hand side EDIT: Adapting @Wiktor's answer to pandas: pattern = fr"\b((?:{'|'.join(L)})(?:\s+(?:{'|'.join(L)}))*)\b" df["Extracted_Value"] = ( df["Text"].str.extractall(pattern).groupby(level=0).agg("_".join) ) print(df)
3
4
73,991,675
2022-10-7
https://stackoverflow.com/questions/73991675/on-failure-callback-triggered-multiple-times
I want to publish SINGLE Kafka message in case of airflow PARALLEL task failures. my airflow dags are similar to below. from datetime import datetime, timedelta from airflow.models import Variable from airflow import DAG from airflow.operators.dummy import DummyOperator from airflow.operators.python_operator import PythonOperator def task_failure_callback(context): ti = context['task_instance'] print(f"task {ti.task_id } failed in dag { ti.dag_id }, error: {ti.xcom_pull(key='error')} ") #call function to publish kafka message def task_success_callback(context): ti = context['task_instance'] print(f"Task {ti.task_id } has succeeded in dag { ti.dag_id }.") #call function to publish kafka message def dag_success_callback(context): dag_status = f"DAG has succeeded, run_id: {context['run_id']}" print(dag_status) Variable.set("TEST_CALLBACK_DAG_STATUS", dag_status) #call function to publish kafka message def dag_failure_callback(context): ti = context['task_instance'] dag_status = f"DAG has failed, run_id: {context['run_id']}, task id: {ti.task_id}" print(dag_status) Variable.set("TEST_CALLBACK_DAG_STATUS", dag_status) #call function to publish kafka message def user_func1(ti): try: input_val = int(Variable.get("TEST_CALLBACK_INPUT", 0)) if input_val % 10 == 0: raise ValueError("Invalid Input") except Exception as e: ti.xcom_push(key="error", value=str(e)) raise e def user_func2(ti): try: input_val = int(Variable.get("TEST_CALLBACK_INPUT", 0)) if input_val % 2 == 0: raise ValueError("Invalid Input") except Exception as e: ti.xcom_push(key="error", value=str(e)) raise e # pass default_args = { "on_success_callback": None, "on_failure_callback": dag_failure_callback, } with DAG( dag_id="test_callbacks_dag", default_args=default_args, schedule_interval=None, start_date=datetime(2021, 1, 1), dagrun_timeout=timedelta(minutes=60), catchup=False, ) as dag: task1 = PythonOperator(task_id="task1", python_callable=user_func1) task2 = PythonOperator(task_id="task2", python_callable=user_func2) task3 = DummyOperator(task_id="task3", on_success_callback=task_success_callback) [task1, task2] >> task3 Airflow parallel tasks failure logs: [2022-10-08, 00:10:51 IST] {logging_mixin.py:115} INFO - DAG has failed, run_id: manual__2022-10-07T18:40:50.355282+00:00, task id: task1 [2022-10-08, 00:10:51 IST] {logging_mixin.py:115} INFO - DAG has failed, run_id: manual__2022-10-07T18:40:50.355282+00:00, task id: task2 As mentioned above task1 and task2 are parallel tasks. I have used callback functions to trigger respective Kafka messages. for the success scenario, its triggers one success message during the final task. The problem is during the failure tasks mainly when tasks are run in parallel. if task1 & task2 both tasks failed during the parallel run, airflow triggered TWO on_failure_callback for task1 & task2. I agree that this should be the behavior of airflow. But for my requirement, I don't want to trigger multiple on_failure_callback. when its triggers the first on_failure_callback, it should not trigger the next callback, since the receiver side was designed to handle single error scenarios, not multiple/ batch errors. I have written kafka message call function under on_failure_callback function (dag_failure_callback) if my first task1 has failed, it triggered one message to kafka topic, same time if task2 also got failed, it triggered the second message to the same kafka topic, I could not handle it since both are running parallel as well independent. 
I want to stop after the first Kafka message is published on the topic, and not trigger Kafka messages for any further failures. Please suggest how I can restrict the on_failure_callback during parallel task failures.
You can use trigger_rule + PythonOperator to processing failed tasks. Here is an example: import logging import pendulum from airflow import DAG from airflow.models import TaskInstance from airflow.operators.python import PythonOperator from airflow.utils.state import TaskInstanceState from airflow.utils.trigger_rule import TriggerRule dag = DAG( dag_id='test', start_date=pendulum.today('UTC').add(hours=-1), schedule_interval=None, ) def green_task(ti: TaskInstance, **kwargs): logging.info('green') def red_task(ti: TaskInstance, **kwargs): raise Exception('red') def check_tasks(ti: TaskInstance, **kwargs): # find failed tasks. do what you need... for task in ti.get_dagrun().get_task_instances(state=TaskInstanceState.FAILED): # type: TaskInstance logging.info(f'failed dag: {task.dag_id}, task: {task.task_id}. url: {task.log_url}') t1 = PythonOperator( dag=dag, task_id='green_task', python_callable=green_task, provide_context=True, ) t2 = PythonOperator( dag=dag, task_id='red_task1', python_callable=red_task, provide_context=True, ) t3 = PythonOperator( dag=dag, task_id='red_task2', python_callable=red_task, provide_context=True, ) check = PythonOperator( dag=dag, task_id='check', python_callable=check_tasks, provide_context=True, trigger_rule=TriggerRule.NONE_SKIPPED, ) t1 >> check t2 >> check t3 >> check Run task and see check task logs: [2022-10-10, 15:12:39 UTC] {dag_test.py:27} INFO - failed dag: test, task: red_task1. url: http://localhost:8080/log?execution_date=2022-10-10T14%3A49%3A57.530923%2B00%3A00&task_id=red_task1&dag_id=test&map_index=-1 [2022-10-10, 15:12:39 UTC] {dag_test.py:27} INFO - failed dag: test, task: red_task2. url: http://localhost:8080/log?execution_date=2022-10-10T14%3A49%3A57.530923%2B00%3A00&task_id=red_task2&dag_id=test&map_index=-1 Or you can move processing into on_failure_callback: def on_failure_callback(context): ti = context['task_instance'] # type: TaskInstance for task in ti.get_dagrun().get_task_instances(state=TaskInstanceState.FAILED): # type: TaskInstance # blablabla
3
5
73,991,600
2022-10-7
https://stackoverflow.com/questions/73991600/equivalent-of-tf-contrib-legacy-seq2seq-attention-decoder-in-tensorflow-2-after
I have the following code in TensorFlow 1.0. I tried to migrate it to TensorFlow 2.0 using the tf_upgrade_v2 script. However, it didn't find an equivalent function in the TF 2 compat version. I was recommended to use tensorflow_addons. However, I don't see an equivalent attention_decoder in the tf_addons module. Please guide me.
decoder_outputs, decoder_state = tf.contrib.legacy_seq2seq.attention_decoder(
    decoder_inputs = decoder_inputs,
    initial_state = encoder_state,
    attention_states = encoder_outputs,
    cell = cell,
    output_size = word_embedding_dim,
    loop_function = None if mode=='pretrain' else feed_prev_loop,
    scope = scope
)
The link to the TF 1.0 code is here: https://github.com/yaushian/CycleGAN-sentiment-transfer/blob/master/lib/seq2seq.py
While there is no equivalent in Tensorflow 2.x API, the original implementation can be revised to be compatible with the new API. I have made the conversion below, along with a simple test case to verify it runs successfully. # https://github.com/tensorflow/tensorflow/blob/v1.15.5/tensorflow/contrib/legacy_seq2seq/python/ops/seq2seq.py#L537 from tensorflow.python.framework import dtypes from tensorflow.python.ops import math_ops from tensorflow.python.ops import variable_scope from tensorflow.python.ops import array_ops from tensorflow.python.ops import nn_ops from tensorflow.python.util import nest from tensorflow.python.ops import init_ops from tensorflow.python.framework import constant_op import tensorflow as tf class Linear: """Linear map: sum_i(args[i] * W[i]), where W[i] is a variable. Args: args: a 2D Tensor or a list of 2D, batch, n, Tensors. output_size: int, second dimension of weight variable. dtype: data type for variables. build_bias: boolean, whether to build a bias variable. bias_initializer: starting value to initialize the bias (default is all zeros). kernel_initializer: starting value to initialize the weight. Raises: ValueError: if inputs_shape is wrong. """ def __init__(self, args, output_size, build_bias, bias_initializer=None, kernel_initializer=None): self._build_bias = build_bias if args is None or (nest.is_sequence(args) and not args): raise ValueError("`args` must be specified") if not nest.is_sequence(args): args = [args] self._is_sequence = False else: self._is_sequence = True # Calculate the total size of arguments on dimension 1. total_arg_size = 0 shapes = [a.get_shape() for a in args] for shape in shapes: if shape.ndims != 2: raise ValueError("linear is expecting 2D arguments: %s" % shapes) if shape.dims[1].value is None: raise ValueError("linear expects shape[1] to be provided for shape %s, " "but saw %s" % (shape, shape[1])) else: total_arg_size += shape.dims[1].value dtype = [a.dtype for a in args][0] scope = variable_scope.get_variable_scope() with variable_scope.variable_scope(scope) as outer_scope: self._weights = variable_scope.get_variable( 'weights', [total_arg_size, output_size], dtype=dtype, initializer=kernel_initializer) if build_bias: with variable_scope.variable_scope(outer_scope) as inner_scope: inner_scope.set_partitioner(None) if bias_initializer is None: bias_initializer = init_ops.constant_initializer(0.0, dtype=dtype) self._biases = variable_scope.get_variable( 'bias', [output_size], dtype=dtype, initializer=bias_initializer) def __call__(self, args): if not self._is_sequence: args = [args] if len(args) == 1: res = math_ops.matmul(args[0], self._weights) else: # Explicitly creating a one for a minor performance improvement. one = constant_op.constant(1, dtype=dtypes.int32) res = math_ops.matmul(array_ops.concat(args, one), self._weights) if self._build_bias: res = nn_ops.bias_add(res, self._biases) return res def attention_decoder(decoder_inputs, initial_state, attention_states, cell, output_size=None, num_heads=1, loop_function=None, dtype=None, scope=None, initial_state_attention=False): """RNN decoder with attention for the sequence-to-sequence model. In this context "attention" means that, during decoding, the RNN can look up information in the additional tensor attention_states, and it does this by focusing on a few entries from the tensor. This model has proven to yield especially good results in a number of sequence-to-sequence tasks. This implementation is based on http://arxiv.org/abs/1412.7449 (see below for details). 
It is recommended for complex sequence-to-sequence tasks. Args: decoder_inputs: A list of 2D Tensors [batch_size x input_size]. initial_state: 2D Tensor [batch_size x cell.state_size]. attention_states: 3D Tensor [batch_size x attn_length x attn_size]. cell: tf.compat.v1.nn.rnn_cell.RNNCell defining the cell function and size. output_size: Size of the output vectors; if None, we use cell.output_size. num_heads: Number of attention heads that read from attention_states. loop_function: If not None, this function will be applied to i-th output in order to generate i+1-th input, and decoder_inputs will be ignored, except for the first element ("GO" symbol). This can be used for decoding, but also for training to emulate http://arxiv.org/abs/1506.03099. Signature -- loop_function(prev, i) = next * prev is a 2D Tensor of shape [batch_size x output_size], * i is an integer, the step number (when advanced control is needed), * next is a 2D Tensor of shape [batch_size x input_size]. dtype: The dtype to use for the RNN initial state (default: tf.float32). scope: VariableScope for the created subgraph; default: "attention_decoder". initial_state_attention: If False (default), initial attentions are zero. If True, initialize the attentions from the initial state and attention states -- useful when we wish to resume decoding from a previously stored decoder state and attention states. Returns: A tuple of the form (outputs, state), where: outputs: A list of the same length as decoder_inputs of 2D Tensors of shape [batch_size x output_size]. These represent the generated outputs. Output i is computed from input i (which is either the i-th element of decoder_inputs or loop_function(output {i-1}, i)) as follows. First, we run the cell on a combination of the input and previous attention masks: cell_output, new_state = cell(linear(input, prev_attn), prev_state). Then, we calculate new attention masks: new_attn = softmax(V^T * tanh(W * attention_states + U * new_state)) and then we calculate the output: output = linear(cell_output, new_attn). state: The state of each decoder cell the final time-step. It is a 2D Tensor of shape [batch_size x cell.state_size]. Raises: ValueError: when num_heads is not positive, there are no inputs, shapes of attention_states are not set, or input size cannot be inferred from the input. """ if not decoder_inputs: raise ValueError("Must provide at least 1 input to attention decoder.") if num_heads < 1: raise ValueError("With less than 1 heads, use a non-attention decoder.") if attention_states.get_shape()[2] is None: raise ValueError("Shape[2] of attention_states must be known: %s" % attention_states.get_shape()) if output_size is None: output_size = cell.output_size with variable_scope.variable_scope( scope or "attention_decoder", dtype=dtype) as scope: dtype = scope.dtype batch_size = array_ops.shape(decoder_inputs[0])[0] # Needed for reshaping. attn_length = attention_states.get_shape()[1] if attn_length is None: attn_length = array_ops.shape(attention_states)[1] attn_size = attention_states.get_shape()[2] # To calculate W1 * h_t we use a 1-by-1 convolution, need to reshape before. hidden = array_ops.reshape(attention_states, [-1, attn_length, 1, attn_size]) hidden_features = [] v = [] attention_vec_size = attn_size # Size of query vectors for attention. 
for a in range(num_heads): k = variable_scope.get_variable( "AttnW_%d" % a, [1, 1, attn_size, attention_vec_size], dtype=dtype) hidden_features.append(nn_ops.conv2d(hidden, k, [1, 1, 1, 1], "SAME")) v.append( variable_scope.get_variable( "AttnV_%d" % a, [attention_vec_size], dtype=dtype)) state = initial_state def attention(query): """Put attention masks on hidden using hidden_features and query.""" ds = [] # Results of attention reads will be stored here. if nest.is_sequence(query): # If the query is a tuple, flatten it. query_list = nest.flatten(query) for q in query_list: # Check that ndims == 2 if specified. ndims = q.get_shape().ndims if ndims: assert ndims == 2 query = array_ops.concat(query_list, 1) for a in range(num_heads): with variable_scope.variable_scope("Attention_%d" % a): y = Linear(query, attention_vec_size, True)(query) y = array_ops.reshape(y, [-1, 1, 1, attention_vec_size]) y = math_ops.cast(y, dtype) # Attention mask is a softmax of v^T * tanh(...). s = math_ops.reduce_sum(v[a] * math_ops.tanh(hidden_features[a] + y), [2, 3]) a = nn_ops.softmax(math_ops.cast(s, dtype=dtypes.float32)) # Now calculate the attention-weighted vector d. a = math_ops.cast(a, dtype) d = math_ops.reduce_sum( array_ops.reshape(a, [-1, attn_length, 1, 1]) * hidden, [1, 2]) ds.append(array_ops.reshape(d, [-1, attn_size])) return ds outputs = [] prev = None batch_attn_size = array_ops.stack([batch_size, attn_size]) attns = [ array_ops.zeros(batch_attn_size, dtype=dtype) for _ in range(num_heads) ] for a in attns: # Ensure the second shape of attention vectors is set. a.set_shape([None, attn_size]) if initial_state_attention: attns = attention(initial_state) for i, inp in enumerate(decoder_inputs): if i > 0: variable_scope.get_variable_scope().reuse_variables() # If loop_function is set, we use it instead of decoder_inputs. if loop_function is not None and prev is not None: with variable_scope.variable_scope("loop_function", reuse=True): inp = loop_function(prev, i) # Merge input and previous attentions into one vector of the right size. input_size = inp.get_shape().with_rank(2)[1] if input_size is None: raise ValueError("Could not infer input size from input: %s" % inp.name) inputs = [inp] + attns inputs = [math_ops.cast(e, dtype) for e in inputs] x = Linear(inputs, input_size, True)(inputs) # Run the RNN. cell_output, state = cell(x, state) # Run the attention mechanism. if i == 0 and initial_state_attention: with variable_scope.variable_scope( variable_scope.get_variable_scope(), reuse=True): attns = attention(state) else: attns = attention(state) with variable_scope.variable_scope("AttnOutputProjection"): cell_output = math_ops.cast(cell_output, dtype) inputs = [cell_output] + attns output = Linear(inputs, output_size, True)(inputs) if loop_function is not None: prev = output outputs.append(output) return outputs, state if __name__ == "__main__": _outputs, _state = attention_decoder([tf.ones((1, 1))], tf.ones((1, 1)), tf.ones((1, 1, 1)), tf.compat.v1.nn.rnn_cell.BasicRNNCell(1)) print(_outputs, _state) As mentioned in the Github Issue for the same question: There is no direct replacement for this function but there are modules that achieve the same thing. You can extend your RNN cell with an attention mechanism using tfa.seq2seq.AttentionWrapper You can create a decoder with the RNN cell using tfa.seq2seq.BasicDecoder There are small examples in these pages which should get you started with these modules. 
The optimal approach is probably using new RNN and Attention modules introduced in the 2.x API, but for the sake of experimenting with scripts which were written using the 1.x API, similar to the one referenced in the question, this approach may be enough to bridge the gap.
5
3
74,015,663
2022-10-10
https://stackoverflow.com/questions/74015663/how-to-plot-axes-with-arrows-in-matplotlib
I want to achieve the following three things:
add arrows on x and y axes
show only the values that are used
add labels for coordinates
My code at the moment:
import matplotlib.pyplot as plt

x = [9, 8, 11, 11, 14, 13, 16, 14, 14]
y = [9, 16, 15, 11, 10, 11, 10, 8, 8]
fig = plt.figure(figsize=(7,7), dpi=300)
axes = fig.add_axes([0,1,1,1])
axes.set_xlim(0, 17)
axes.set_ylim(0, 17)
axes.invert_yaxis()
axes.scatter(x, y, color='green')
axes.vlines(x, 0, y, linestyle="dashed", color='green')
axes.hlines(y, 0, x, linestyle="dashed", color='green')
axes.spines.right.set_visible(False)
axes.spines.bottom.set_visible(False)
plt.show()
Visually:
And the plot that I want to realize:
You can draw arrows by overlaying triangle shaped points over the ends of your spines. You'll need to leverage some transforms, but you can also create your labels by manually adding text to your Axes objects as well. Labelling each coordinate can be done via axes.annotate, but you'll need to manually specify the location of each annotation to ensure they don't overlap with lines or other annotations. import matplotlib.pyplot as plt from matplotlib.ticker import FixedLocator x = [9, 8, 11, 11, 14, 13, 16, 14, 14] y = [9, 16, 15, 11, 10, 11, 10, 8, 8] fig = plt.figure(figsize=(7,7), dpi=300) axes = fig.add_axes([.05,.05,.9,.9]) # Plots the data axes.scatter(x, y, color='green') axes.vlines(x, 0, y, linestyle="dashed", color='green') axes.hlines(y, 0, x, linestyle="dashed", color='green') axes.set_xlim(0, 17) axes.set_ylim(0, 17) axes.set_xticks(x) axes.set_yticks(y) axes.invert_yaxis() # Move ticks to top side of plot axes.xaxis.set_tick_params( length=0, bottom=False, labelbottom=False, top=True, labeltop=True ) axes.xaxis.set_tick_params(length=0) # Add arrows to the spines by drawing triangle shaped points over them axes.plot(1, 1, '>k', transform=axes.transAxes, clip_on=False) axes.plot(0, 0, 'vk', transform=axes.transAxes, clip_on=False) axes.spines[['bottom', 'right']].set_visible(False) # Add labels for 0, F_1 and F_2 from matplotlib.transforms import offset_copy axes.text( 0, 1, s='0', fontstyle='italic', ha='right', va='bottom', transform=offset_copy(axes.transAxes, x=-5, y=5, fig=fig, units='points'), ) axes.text( 1, 1, s='$F_1$', fontstyle='italic', ha='right', va='bottom', transform=offset_copy(axes.transAxes, x=0, y=5, fig=fig, units='points'), ) axes.text( 0, 0, s='$F_2$', fontstyle='italic', ha='right', transform=offset_copy(axes.transAxes, x=-5, y=0, fig=fig, units='points'), ) # Add labels at each point. Leveraging the alignment of the text # AND padded offset. lc = ('top', 'center', 0, -5) ll = ('top', 'right', -5, -5) lr = ('top', 'left', 5, -5) ur = ('bottom', 'left', 5, 5) alignments = [lc, lc, lc, ll, lc, ll, lc, ur, lr] for i, (xc, yc, (va, ha, padx, pady)) in enumerate(zip(x, y, alignments)): axes.annotate( xy=(xc, yc), xytext=(padx, pady), text=f'$F(x_{i})$', ha=ha, va=va, textcoords='offset points') plt.show()
3
3
74,015,260
2022-10-10
https://stackoverflow.com/questions/74015260/how-to-make-a-list-inside-a-class-static-for-the-entire-program
I'm messing around with classes and data flow and I am having difficulties creating a list of classes inside the class (to give control of the list to the class in itself). class Person: listOfPeople = [] def __init__(self, name, age): self.name = name self.age = age self.listOfPeople = [] def set_age(self, age): if age <= 0: raise ValueError('The age must be positive') self._age = age def get_age(self): return self._age def AppendList(self): self.listOfPeople.append(self) def returnList(self): return self.listOfPeople age = property(fget=get_age, fset=set_age) john = Person('John', 18) barry = Person("Barry", 19) john.AppendList() barry.AppendList() print(Person.listOfPeople) The output is simply [] Let's use this example. I want the class Person to have a list of people. That list of people has instances of the class it's in. I want the entire program to have access to this class, regardless of having an instance initialised. Is it even possible to do what I want in Python? My expected output is a list with the 2 instances I added to the list.
this code automatically adds new instances to Person.listOfPeople class Person: listOfPeople = [] def __init__(self, name, age): self.name = name self.age = age Person.listOfPeople.append(self) def set_age(self, age): if age <= 0: raise ValueError('The age must be positive') self._age = age def get_age(self): return self._age def AppendList(self): self.listOfPeople.append(self) def returnList(self): return self.listOfPeople def __repr__(self): #I've added __repr__ return self.name+' '+str(self.age) age = property(fget=get_age, fset=set_age) john = Person('John', 18) barry = Person("Barry", 19) # john.AppendList() # barry.AppendList() print(Person.listOfPeople) the output: [John 18, Barry 19] is this what you need?
4
1
74,008,953
2022-10-9
https://stackoverflow.com/questions/74008953/trying-to-set-a-superclass-field-in-a-subclass-using-validator
I am trying to set a super-class field in a subclass using validator as follows: Approach 1 from typing import List from pydantic import BaseModel, validator, root_validator class ClassSuper(BaseModel): field1: int = 0 class ClassSub(ClassSuper): field2: List[int] @validator('field1') def validate_field1(cls, v, values): return len(values["field2"]) sub = ClassSub(field2=[1, 2, 3]) print(sub.field1) # It prints 0, but expected it to print 3 If I run the code above it prints 0, but I expected it to print 3 (which is basically len(field2)). However, if I use @root_validator() instead, I get the expected result. Approach 2 from typing import List from pydantic import BaseModel, validator, root_validator class ClassSuper(BaseModel): field1: int = 0 class ClassSub(ClassSuper): field2: List[int] @root_validator() def validate_field1(cls, values): values["field1"] = len(values["field2"]) return values sub = ClassSub(field2=[1, 2, 3]) print(sub.field1) # This prints 3, as expected New to using pydantic and I am bit puzzled what I am doing wrong with the Approach 1. Thank you for your help.
The reason your Approach 1 does not work is that, by default, validators for a field are not called when the value for that field is not supplied (see docs). Your validate_field1 is never even called. If you add always=True to your @validator, the method is called, even if you don't provide a value for field1. However, if you try that, you'll see that it will still not work, but instead throw an error about the key "field2" not being present in values. This in turn is due to the fact that validators are called in the order they were defined. In this case, field1 is defined before field2, which means that field2 is not yet validated by the time validate_field1 is called. And values only contains previously-validated fields (see docs). Thus, at the time validate_field1 is called, values is simply an empty dictionary. Using the @root_validator is the correct approach here because it receives the entire model's data, regardless of whether or not field values were supplied explicitly or by default. And just as a side note: If you don't need to specify any parameters for it, you can use @root_validator without the parentheses. And as another side note: If you are using Python 3.9+, you can use the regular list class as the type annotation. (See standard generic alias types) That means field2: list[int] without the need for typing.List. Hope this helps.
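As a small illustration of the ordering behaviour described above (pydantic v1, reusing the classes from the question), this is what always=True alone gives you:
from typing import List
from pydantic import BaseModel, validator

class ClassSuper(BaseModel):
    field1: int = 0

class ClassSub(ClassSuper):
    field2: List[int]

    @validator('field1', always=True)
    def validate_field1(cls, v, values):
        # field1 (inherited) is validated before field2, so values is still empty here
        print(values)  # -> {}
        return v

ClassSub(field2=[1, 2, 3])  # prints {}: field2 is not available to the field1 validator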
3
3
74,014,499
2022-10-10
https://stackoverflow.com/questions/74014499/python-remove-element-list-element-with-same-value-at-position
Let's assume I have a list, structured like this with approx 1 million elements: a = [["a","a"],["b","a"],["c","a"],["d","a"],["a","a"],["a","a"]] What is the fastest way to remove all elements from a that have the same value at index 0? The result should be b = [["a","a"],["b","a"],["c","a"],["d","a"]] Is there a faster way than this: processed = [] no_duplicates = [] for elem in a: if elem[0] not in processed: no_duplicates.append(elem) processed.append(elem[0]) This works but the appending operations take ages.
You can use a set to keep a record of the first elements and, for each sublist, check whether its first element has already been seen. A set membership check takes O(1) time, compared to the O(n) search through a list in your solution.
>>> a = [["a","a"],["b","a"],["c","a"],["d","a"],["a","a"],["a","a"]]
>>>
>>> seen = set()
>>> new_a = []
>>> for i in a:
...     if i[0] not in seen:
...         new_a.append(i)
...         seen.add(i[0])
...
>>> new_a
[['a', 'a'], ['b', 'a'], ['c', 'a'], ['d', 'a']]
>>>
Space complexity: O(N)
Time complexity: O(N)
Checking whether a first element was already seen: O(1)
If you don't want to declare a new list, you can delete elements from the original list instead, but that will increase the time complexity.
4
6
74,014,203
2022-10-10
https://stackoverflow.com/questions/74014203/filter-rows-where-dates-are-available-across-all-groups-using-pandas
I have the following dataframe (sample): import pandas as pd data = [['A', '2022-09-01'], ['A', '2022-09-03'], ['A', '2022-09-07'], ['A', '2022-09-08'], ['B', '2022-09-03'], ['B', '2022-09-07'], ['B', '2022-09-08'], ['B', '2022-09-09'], ['C', '2022-09-01'], ['C', '2022-09-03'], ['C', '2022-09-07'], ['C', '2022-09-10'], ['D', '2022-09-01'], ['D', '2022-09-03'], ['D', '2022-09-05'], ['D', '2022-09-07']] df = pd.DataFrame(data = data, columns = ['group', 'date']) group date 0 A 2022-09-01 1 A 2022-09-03 2 A 2022-09-07 3 A 2022-09-08 4 B 2022-09-03 5 B 2022-09-07 6 B 2022-09-08 7 B 2022-09-09 8 C 2022-09-01 9 C 2022-09-03 10 C 2022-09-07 11 C 2022-09-10 12 D 2022-09-01 13 D 2022-09-03 14 D 2022-09-05 15 D 2022-09-07 I would like to filter the dates which are available across all groups. For example, the date "2022-09-03" is available in groups: A, B, C and D so all groups. The date "2022-09-01" is only available in groups: A, C, and D which means it is missing in group B. Here is the desired output: data = [['A', '2022-09-03'], ['A', '2022-09-07'], ['B', '2022-09-03'], ['B', '2022-09-07'], ['C', '2022-09-03'], ['C', '2022-09-07'], ['D', '2022-09-03'], ['D', '2022-09-07']] df_desired = pd.DataFrame(data = data, columns = ['group', 'date']) group date 0 A 2022-09-03 1 A 2022-09-07 2 B 2022-09-03 3 B 2022-09-07 4 C 2022-09-03 5 C 2022-09-07 6 D 2022-09-03 7 D 2022-09-07 I know how to filter groups with all the same values within a group, but I want to filter the dates which are available in each group. So I was wondering if anyone knows how to perform this using pandas?
You can use set operations: # which dates are common to all groups? keep = set.intersection(*df.groupby('group')['date'].agg(set)) # {'2022-09-03', '2022-09-07'} # keep only the matching ones out = df[df['date'].isin(keep)] output: group date 1 A 2022-09-03 2 A 2022-09-07 4 B 2022-09-03 5 B 2022-09-07 9 C 2022-09-03 10 C 2022-09-07 13 D 2022-09-03 15 D 2022-09-07 comparison of approaches: # set operations 669 µs ± 13.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) # nunique 750 µs ± 16.3 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) # 2D reshaping (crosstab) 5.45 ms ± 418 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) # on 200k rows (random or like original) # set operations 21.1 ms ± 2.23 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) # nunique 26.7 ms ± 1.48 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) # crosstab 47.8 ms ± 3.69 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
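For reference, the "nunique" timing above corresponds to an approach along these lines (a sketch, not necessarily the exact code that was benchmarked, assuming the same df as in the question):
# keep dates that are present in as many groups as there are groups in total
n_groups = df['group'].nunique()
out = df[df.groupby('date')['group'].transform('nunique') == n_groups]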
4
3
73,998,994
2022-10-8
https://stackoverflow.com/questions/73998994/python-vscode-importing-from-sibling-directory-without-using-os-paths-append
python 3.8 with VScode. I have two sibling directories, and I want to import the first sibling (support_tools) to the second one. this is my project hierarchy: β”œβ”€β”€ support_tools β”‚ β”œβ”€β”€ __init__.py β”‚ └── file_utils.py └── optimizations β”œβ”€β”€.vscode β”‚ β”œβ”€β”€ launch.json β”‚ └── settings.json β”œβ”€β”€ __init__.py └── test1.py I added the parent path to the launch.json: { // Use IntelliSense to learn about possible attributes. // Hover to view descriptions of existing attributes. // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 "version": "0.2.0", "configurations": [ { "name": "Python: Current File", "type": "python", "request": "launch", "env": { "PYTHONPATH": "${workspaceFolder}${pathSeparator}..${pathSeparator}", }, } ] } and to the settings.json: { "python.analysis.extraPaths": [ "${workspaceFolder}/../" ] } The pylance recognize the module support_tools.py, but I cannot import the support_tools module without appending the parent path to the os.paths: sys.path.append("../") In this tutorial: https://k0nze.dev/posts/python-relative-imports-vscode/ they clearly mention that after adding the paths to both file, I should be able to remove the os.path.append line In addition I tried to find an answer in the following pages: VSCode settings for Pylance Import Sibling Packages with __init__.py doesn't work Importing modules from parent folder Thanks for the helpers
A simple way is to open the parent folder of optimizations as the workspace. VS Code searches for files using the workspace as the root directory, so if the file you want to import is not inside the workspace, it will not be found. Tip: you can use an absolute path in "python.analysis.extraPaths". Relative paths are resolved against the workspace root, which is already the top-level directory, so from the workspace's point of view a parent directory does not exist.
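For example, if support_tools lives in /home/me/project/support_tools, the setting could look like this (the absolute path here is only an illustration, adjust it to your machine):
{
    "python.analysis.extraPaths": [
        "/home/me/project"
    ]
}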
4
2
74,000,515
2022-10-8
https://stackoverflow.com/questions/74000515/python-unit-testing-how-to-patch-an-async-call-internal-to-the-method-i-am-tes
Im using unittest.mock for building tests for my python code. I have a method that I am trying to test that contains a async call to another function. I want to patch that async call so that I can just have Mock return a testing value for the asset id, and not actually call the async method. I have tried many things I've found online, but none have worked thus far. Simplified example below: test.py import pytest from app.create.creations import generate_new_asset from app.fakeapi.utils import create_asset from unittest.mock import Mock, patch @patch("app.fakeapi.utils.create_asset") @pytest.mark.anyio async def test_generate_new_asset(mock_create): mock_create.return_value = 12345678 await generate_new_asset() ... creations.py from app.fakeapi.utils import create_asset ... async def generate_new_asset() ... # When I run tests this does not return the 12345678 value, but actually calls the `create_asset` method. return await create_asset(...)
Testing async code is bit tricky. If you are using python3.8 or higher AsyncMock is available. Note: it will work only for Python > 3.8 I think in your case event loop is missing. Here is the code which should work, you may need to do few tweaks. You may also need to install pytest-mock. Having it as fixture will allow you to have mock different values for testing for different scenarios. import asyncio from unittest.mock import AsyncMock, Mock @pytest.fixture(scope="module") def mock_create_asset(mocker): async_mock = AsyncMock() mocker.patch('app.fakeapi.utils.create_asset', side_effect=async_mock) return async_mock @pytest.fixture(scope="module") def event_loop(): return asyncio.get_event_loop() @pytest.mark.asyncio async def test_generate_new_asset(mock_create_asset): mock_create_asset.return_value = 12345678 await generate_new_asset()
7
5
74,009,559
2022-10-10
https://stackoverflow.com/questions/74009559/dont-drop-unique-value-with-dropna-pandas
what's up? I am having a little problem, where I need to use the pandas dropna function to remove rows from my dataframe. However, I need it to not delete the unique values from my dataframe. Let me explain better. I have the following dataframe: id birthday 0102-2 09/03/2020 0103-2 14/03/2020 0104-2 NaN 0105-2 NaN 0105-2 25/03/2020 0108-2 07/04/2020 In the case above, I need to delete the row from my dataframe based on the NaN values in the birthday column. However, as you can see the id "0104-2" is unique unlike the id "0105-2" where it has a NaN value and another with a date. So I would like to keep track of all the lines that have NaN that are unique. Is it feasible to do this with dropna, or would I have to pre-process the information beforehand?
You could sort by the birthday column and then drop duplicates keeping the first out of the two, by doing the following: The complete code would look like this: import pandas as pd import numpy as np data = { "id": ['102-2','103-2','104-2', '105-2', '105-2', '108-2'], "birthday":['09/03/2020', '14/03/2020', np.nan, np.nan, '25/03/2020', '07/04/2020'] } df = pd.DataFrame(data) df.sort_values(['birthday'], inplace=True) df.drop_duplicates(subset="id", keep='first', inplace=True) df.sort_values(['id'], inplace=True) CODE EXPLANATION: Here is the original dataframe: import pandas as pd import numpy as np data = { "id": ['102-2','103-2','104-2', '105-2', '105-2', '108-2'], "birthday":['09/03/2020', '14/03/2020', np.nan, np.nan, '25/03/2020', '07/04/2020'] } df = pd.DataFrame(data) Now sort the dataframe: df.sort_values(['birthday'], inplace=True) Then drop the duplicates based on the id column. Keeping only the first value. df.drop_duplicates(subset="id", keep='first', inplace=True)
4
4
73,992,851
2022-10-7
https://stackoverflow.com/questions/73992851/error-while-opening-ipynb-notebook-in-vscode
I was just writing some Python code and, after switching folders, it threw an error stating:
Read this issue, which can help. Emptying the cache is an effective solution. The simplest step is to run killall code or to restart VS Code.
3
9
74,008,101
2022-10-9
https://stackoverflow.com/questions/74008101/flask-how-to-specify-a-default-value-for-select-tag-html
I have an app that tracks truck appointments. In this app I have a list of carriers in a db table. When the user wants to update an appointment, they can choose a new carrier from the list of carriers in the db using a dropdown menu. How can I set the dropdown default value to be the current carrier selection? Here's what I tried so far (without any luck): app.py: class carriers_db(db.Model): carrier_id = db.Column(db.Integer, primary_key=True) carrier_name = db.Column(db.String(100), nullable=False) class appts_db(db.Model): id = db.Column(db.Integer, primary_key=True) carrier = db.Column(db.String(100), nullable=False) @app.route('/update/<int:id>', methods=['GET', 'POST']) def update(id): appt = appts_db.query.get_or_404(id) carriers = carriers_db.query.order_by(carriers_db.carrier_name).all() update.html: <h4>Current carrier: {{ appt.carrier }}</h4> <label>Option to select a new carrier:</label><br> <select name="carrier"> {% for carrier in carriers %} <option value = "{{ carrier.carrier_name }}" selected = "{{ carrier.carrier_name }}"> {{ carrier.carrier_name }}</option> {% endfor %} </select>
You can add a check if the value is equal to the selected value in the for loop in update.html: update.html: <h4>Current carrier: {{ appt.carrier }}</h4> <label>Option to select a new carrier:</label><br> <select name="carrier"> {% for carrier in carriers %} <option value = "{{ carrier.carrier_name }}" {% if carrier.carrier_name == appt.carrier %} selected {% endif %}> {{ carrier.carrier_name }}</option> {% endfor %} </select> A full example to show selected option value in Jinja template: app.py: from flask import Flask, render_template, flash, url_for, request, redirect app = Flask(__name__) app.secret_key = b'a secret key' @app.route('/update', methods=['GET', 'POST']) def show_update(): current_carrier = "tello" carriers = [{"carrier_name": "mint"}, {"carrier_name": "tmobile"}, {"carrier_name": "tello"}] return render_template('update.html', current_carrier=current_carrier, carriers=carriers) update.html: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Update page</title> </head> <body> <h4>Current carrier: {{ current_carrier }}</h4> <label>Option to select a new carrier:</label><br> <select name="carrier"> {% for carrier in carriers %} <option value="{{ carrier.carrier_name }}" {% if carrier.carrier_name== current_carrier %} selected {% endif %}> {{ carrier.carrier_name }} </option> {% endfor %} </select> </body> </html> Output:
4
4
74,007,673
2022-10-9
https://stackoverflow.com/questions/74007673/dash-rangeslider-automatically-rounds-marks
I am using the RangeSlider in Python Dash. This slider is supposed to allow users to select a range of dates to display, somewhere between the minimum and maximum years in the dataset. The issue that I am having is that each mark shows as 2k due to it being automatically rounded. The years range between 1784 and 2020, with a step of 10 each time. How do I get the marks to show as the actual dates and not just 2k? This is what I have below. dcc.RangeSlider(sun['Year'].min(), sun['Year'].max(), 10, value=[sun['Year'].min(), sun['Year'].max()], id='years')
You can use attribute marks to style the ticks of the sliders as follows: marks={i: '{}'.format(i) for i in range(1784,2021,10)} The full code: from dash import Dash, dcc, html app = Dash(__name__) app.layout = html.Div([ dcc.RangeSlider(1784, 2020, id='non-linear-range-slider', marks={i: '{}'.format(i) for i in range(1784,2021,10)}, value=list(range(1784,2021,10)), dots=False, step=10, updatemode='drag' ), html.Div(id='output-container-range-slider-non-linear', style={'margin-top': 20}) ]) if __name__ == '__main__': app.run_server(debug=True, use_reloader=False) Output
4
2
74,005,380
2022-10-9
https://stackoverflow.com/questions/74005380/how-to-get-all-combinations-of-n-binary-values-where-number-of-1s-are-equal-to
I want to find a list of all possible combinations of 0's and 1's. The only condition is that the number of 1's must be more than or equal to the number of 0's. For example for n = 4 the output should be something like this: [(0, 0, 1, 1), (0, 1, 0, 1), (0, 1, 1, 0), (0, 1, 1, 1), (1, 0, 0, 1), (1, 0, 1, 0), (1, 0, 1, 1), (1, 1, 0, 0), (1, 1, 0, 1), (1, 1, 1, 0), (1, 1, 1, 1)] Is there an elegant way to do this?
You can use distinct_permutations: from more_itertools import distinct_permutations def get_combos(n): for i in range((n+1)//2, n + 1): for permutation in distinct_permutations([1] * i + [0] * (n - i), n): yield permutation print(list(get_combos(4))) # [(0, 0, 1, 1), (0, 1, 0, 1), (0, 1, 1, 0), (1, 0, 0, 1), (1, 0, 1, 0), (1, 1, 0, 0), (0, 1, 1, 1), (1, 0, 1, 1), (1, 1, 0, 1), (1, 1, 1, 0)] Here, we simply consider the permutations of each sublist: [0, 0, 1, 1] [0, 1, 1, 1] [1, 1, 1, 1] Notice that for large n, the yield statement is very useful because you do not generate all permutations at once. We need to use distinct_permutations because you use just 1's and 0's , so regular permutations will give you repeated elements. If you don't want to install another library, you can use: from itertools import permutations def get_combos(n): for i in range(n // 2 if n%2 == 0 else n//2 + 1, n): for permutation in permutations([1] * i + [0] * (n - i), n): yield permutation print(set(get_combos(4))) # {(0, 1, 0, 1), (0, 1, 1, 1), (1, 0, 1, 1), (1, 1, 0, 0), (1, 1, 1, 0), (0, 1, 1, 0), (1, 0, 1, 0), (1, 0, 0, 1), (1, 1, 0, 1), (0, 0, 1, 1)} as set will eliminate the repeated elements, at the cost of needing to process the entire set of permutations at once (ie., by calling set, you will consume the entire generator immediately, rather than drawing elements from it as you need them). More details on distinct_permutations It might not be clear why these are needed. Consider this list: [1, 2] permutations, by default, will tell you that all the permutations of this list are (1, 2) and (2, 1) However, permutations doesn't bother checking what the elements are or if they are repeated, so it simply performs the swap as above and if the list is [1, 1] you'll get back [(1, 1), (1, 1)]
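Another standard-library-only way to think about it, not from the original answer: instead of permuting multisets, choose which positions hold a 1. itertools.combinations never produces duplicates, so no deduplication is needed:

from itertools import combinations

def get_combos(n):
    # k = number of 1's; it must be at least ceil(n / 2)
    for k in range((n + 1) // 2, n + 1):
        for ones in combinations(range(n), k):
            yield tuple(1 if i in ones else 0 for i in range(n))

print(list(get_combos(4)))   # 11 tuples for n = 4, matching the question's expected output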
4
6
74,004,525
2022-10-9
https://stackoverflow.com/questions/74004525/adding-spaces-between-words-evenly-from-left-to-right-untill-a-certain-length
Receiving a string that has 2 or more words and a certain length, I need to insert spaces uniformly between words, adding additional spaces between words from left to right. Let's say I receive "Hello I'm John" and a length of 17, it should return:'Hello I'm John I have tried many different ways and I couldn't do the left-to-right requirement. This is what I have now: if ' ' in string: final_string='' string=string.split() for i in range(len(string)): if string[i]==string[-1]: final_string+=string[i] else: final_string+=string[i]+' ' print(final_string) Output: Hello I'm John which gives me a length greater than what I want...
Probably something like this?

words = cad_carateres.split()
total_nb_of_spaces_to_add = total_string_length - len(cad_carateres)
nb_of_spaces_to_add_list = [total_nb_of_spaces_to_add // (len(words) - 1) + int(i < (total_nb_of_spaces_to_add % (len(words) - 1))) for i in range(len(words) - 1)] + [0]
result = ' '.join([w + ' ' * (nb_of_spaces_to_add) for w, nb_of_spaces_to_add in zip(words, nb_of_spaces_to_add_list)])

First line - split your string into words. Second line - how many spaces, in total, we need to add on top of the single spaces already there (cad_carateres is the input string, total_string_length is the target length). Third line - suppose we need to add 20 spaces and we have 7 words in total (thus 6 gaps where we add extra spaces): 20 // 6 = 3 spaces go into every gap, plus one more into the first 20 % 6 = 2 gaps, i.e. [4, 4, 3, 3, 3, 3], and obviously 0 spaces after the last word, so nb_of_spaces_to_add_list will contain [4, 4, 3, 3, 3, 3, 0]. The last line joins the padded words (their padding comes from nb_of_spaces_to_add_list) into the result string; zip pairs each word with its padding count, which allows a single list comprehension to do the work.
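Packaged into a small helper (the name justify is just illustrative, and it assumes at least two words and a width no smaller than the single-spaced text) and run on the example from the question, so it is easy to test:

def justify(text, width):
    words = text.split()
    gaps = len(words) - 1
    extra = width - (sum(len(w) for w in words) + gaps)   # spaces beyond the single ones
    pads = [extra // gaps + (1 if i < extra % gaps else 0) for i in range(gaps)] + [0]
    return ' '.join(w + ' ' * p for w, p in zip(words, pads))

print(repr(justify("Hello I'm John", 17)))   # "Hello   I'm  John", length 17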
3
4
74,003,752
2022-10-9
https://stackoverflow.com/questions/74003752/expected-type-iterable-matched-generic-type-iterablesupportslessthant
@dataclass(frozen=True, eq=True, order=True) class C: x: int l = [C(1), C(2), C(1)] print(sorted(l)) The above code works but gives a warning: Expected type 'Iterable' (matched generic type 'Iterable[SupportsLessThanT]'), got 'list[C]' instead. I think the order=True param passed to @dataclass should result in generation of __lt__ etc and thus confirm to SupportsLessThanT? An explicit implementation of __lt__ silences the warning, but how do I silence it without?
Apparently a known bug in PyCharm, tracked here
4
7
74,003,276
2022-10-9
https://stackoverflow.com/questions/74003276/python-start-loop-at-row-n-in-a-dataframe
I have this dataframe: a = [0,0,5,0,0,0,0,0,0,7,0,0,0,0,0,0,0,0] b = [0,0,0,0,250,350,500,0,0,0,0,0,0,125,70,95,0,0] df = pd.DataFrame(columns=['a', 'b']) df = pd.DataFrame.assign(df, a=a, b=b) df a b 0 0 0 1 0 0 2 5 0 3 0 0 4 0 250 5 0 350 6 0 500 7 0 0 8 0 0 9 7 0 10 0 0 11 0 0 12 0 0 13 0 125 14 0 70 15 0 95 16 0 0 17 0 0 I wanted to record the first value from column B, following each iteration through column A. I was looking for this result: 5 250 7 125 My first attempt is this Loop below. I tried to extract the row index, so I could pass it to the next for loop, to start the loop at n index, but it's not quite what I expected. for item in df.a: if item > 0: print(item) index = df.iterrows() print(index) for i in df.b: if i > 0: print(i) break which yields: 5 <generator object DataFrame.iterrows at 0x000002C654B0EF20> 250 7 <generator object DataFrame.iterrows at 0x000002C654B01C80> 250 Advice on how to approach this is much appreciated!
Another possible solution: df1 = df.mask(df.eq(0)).dropna(how='all') df1.assign(b = df1['b'].shift(-1)).dropna() Output: a b 2 5.0 250.0 9 7.0 125.0
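For readers who prefer the logic spelled out, here is an explicit-loop sketch of the same idea (it assumes every non-zero value in column a is eventually followed by a non-zero value in column b, as in the sample data):

pairs = []
for i in df.index[df['a'] > 0]:              # rows where column a has a value
    later_b = df.loc[df.index > i, 'b']      # everything below that row
    pairs.append((df.loc[i, 'a'], later_b[later_b > 0].iloc[0]))

print(pairs)   # [(5, 250), (7, 125)]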
3
2
74,000,594
2022-10-8
https://stackoverflow.com/questions/74000594/why-is-super-not-behaving-like-i-expected-when-assigning-to-a-class-variable-o
I am attempting to experiment with classes so I can better understand what they do. I wanted to build a counter which records the number of instances of a class (MyClass): class ObjectCounter: # I want this to count the number of objects in each class myclass_obj_count = 0 class MyClass(ObjectCounter): def __init__(self): super().myclass_obj_count += 1 # AttributeError: 'super' object has no attribute 'myclass_obj_count' m1 = MyClass() m2 = MyClass() m3 = MyClass() print(ObjectCounter.myclass_obj_count) Since that didn't work, I looked online for someone trying to do the same thing. Here is some code I found online. This works as expected, and I feel like I have a basic understanding of how this works. This is a better solution to the task I was attempting, but I'm not satisfied because I want to know how super() works. class geeks: counter = 0 def __init__(self): geeks.counter += 1 g1 = geeks() g2 = geeks() g3 = geeks() print(geeks.counter) # this gives an expected result Therefore, I tried this instead: class ObjectCounter: # I want this to count the number of objects in each class myclass_obj_count = 0 def add_myclass(self): self.myclass_obj_count += 1 class MyClass(ObjectCounter): def __init__(self): super().add_myclass() my_class_1 = MyClass() my_class_2 = MyClass() my_class_3 = MyClass() print(ObjectCounter.myclass_obj_count) # expected output: 3 Instead of getting the expected output of 3, I got an output of 0. Why is this happening?
First, be aware of the += operator; it's implementation is quite subtle: a += b becomes a = a.__iadd__(b) This perhaps strange definition allows python to support it even for immutable types (like strings). Note what happens when used for a class variable that is referred to by the alias self class ObjectCounter: # I want this to count the number of objects in each class myclass_obj_count = 0 def add_myclass(self): self.myclass_obj_count += 1 # effectively becomes: # self.myclass_obj_count = self.myclass_obj_count.__iadd__(1) This will introduce an instance variable of the same name, shadowing the class variable. You don't even need the subclass to test this: >>> x = ObjectCounter() >>> x.add_myclass() >>> x.add_myclass() >>> x.add_myclass() >>> x.myclass_obj_count 3 >>> ObjectCounter.myclass_obj_count 0 Referring to the class variable directly instead of using self fixes this def add_myclass(self): ObjectCounter.myclass_obj_count += 1 I'm hesitant to give definite answers of what happens under the hood when class variables, super() and assignments are used, other than it just doesn't work. Perhaps because it would be quite ambiguous of whether or not we are defining class variables or new instance variables. super() won't let you assign to either; class ObjectCounter: myclass_obj_count = 0 def __init__(self): self.x = 'test' class MyClass(ObjectCounter): def __init__(self): super().__init__() print(super().myclass_obj_count) # reading works just fine print(type(super())) # this isn't actually exactly the same as "ObjectCounter" super().myclass_obj_count = 123 # no good super().x = 'foo' # also no good. All in all, for any assignment to class variables you can use the class name itself.
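To round this off, here is a sketch of one way to get the per-class counter the question was after (the names are illustrative, not taken from the original code):

class ObjectCounter:
    instance_count = 0

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.instance_count = 0                 # each subclass gets its own counter

    def __init__(self):
        type(self).instance_count += 1         # assign on the class, not on self


class MyClass(ObjectCounter):
    pass


for _ in range(3):
    MyClass()

print(MyClass.instance_count)        # 3
print(ObjectCounter.instance_count)  # 0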
4
2
73,993,861
2022-10-8
https://stackoverflow.com/questions/73993861/automatic-custom-constructor-for-python-dataclass
I'm trying to create a custom constructor for my python dataclass that will ideally take in a dict (from request json data) and fill in the attributes of the dataclass. Eg @dataclass class SoldItem: title: str purchase_price: float shipping_price: float order_data: datetime def main(): json = requests.get(URL).json() sold_item = SoldItem(json) So I want SoldItem to have a method that saves the json data in the appropriate attributes of the dataclass instead of having to do SoldItem(title=json['title']... I would also preferably have the class be able to recognise that the data being passed in is a dict and execute the from dict constructor. I have done my best to look up possible solutions but have come up mostly empty. Any help would be greatly appreciated.
For the simplest approach - with no additional libraries - I would personally go with a de-structuring approach via **kwargs. For example: >>> json = {'title': 'test', 'purchase_price': 1.2, 'shipping_price': 42, 'order_data': datetime.min} >>> SoldItem(**json) SoldItem(title='test', purchase_price=1.2, shipping_price=42, order_data=datetime.datetime(1, 1, 1, 0, 0)) In case of a more involved use case, such as: a nested dataclass structure the input dict is the result of an API call keys in the dict are not in snake_case value for a key does not match annotated type for a dataclass field In such cases, I would suggest third-party tools that will automagically handle this data transform for you. For instance, the dataclass-wizard is a (de)serialization library I have come up with, for exactly this use case, i.e. when the input data might be coming from another source, such as the result of an API call or response. It can be installed with pip: pip install dataclass-wizard Usage: from dataclasses import dataclass from datetime import datetime from dataclass_wizard import JSONWizard @dataclass class SoldItem(JSONWizard): title: str purchase_price: float shipping_price: float order_data: datetime def main(): from pprint import pprint # json = requests.get(URL).json() json = {'title': 'test', 'purchasePrice': '1.23', 'shipping-price': 42, 'Order_Data': '2021-01-02T12:34:56Z'} # create a `SoldItem` instance from an input `dict`, # such as from an API response. sold_item = SoldItem.from_dict(json) pprint(sold_item) if __name__ == '__main__': main() Prints: SoldItem(title='test', purchase_price=1.23, shipping_price=42.0, order_data=datetime.datetime(2021, 1, 2, 12, 34, 56, tzinfo=datetime.timezone.utc))
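If you want SoldItem.from_dict(json) semantics without a third-party package, and the API might return extra keys, a minimal hand-rolled sketch looks like this (note it does not coerce types — e.g. a date string stays a string):

from dataclasses import dataclass, fields
from datetime import datetime

@dataclass
class SoldItem:
    title: str
    purchase_price: float
    shipping_price: float
    order_data: datetime

    @classmethod
    def from_dict(cls, data: dict) -> "SoldItem":
        known = {f.name for f in fields(cls)}          # only keep keys that match a field
        return cls(**{k: v for k, v in data.items() if k in known})

item = SoldItem.from_dict({"title": "t", "purchase_price": 1.2,
                           "shipping_price": 4.2, "order_data": datetime.min,
                           "some_extra_api_field": "ignored"})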
4
5
73,997,410
2022-10-8
https://stackoverflow.com/questions/73997410/plotly-python-how-to-change-the-background-color-for-title-text
I'm trying to change the background colour for the title in plotly to something like the below: Plotly Code: from plotly.subplots import make_subplots import plotly.graph_objects as go fig = make_subplots( rows=2, cols=2, subplot_titles=("Plot 1", "Plot 2", "Plot 3", "Plot 4")) fig.add_trace(go.Scatter(x=[1, 2, 3], y=[4, 5, 6]), row=1, col=1) fig.add_trace(go.Scatter(x=[20, 30, 40], y=[50, 60, 70]), row=1, col=2) fig.add_trace(go.Scatter(x=[300, 400, 500], y=[600, 700, 800]), row=2, col=1) fig.add_trace(go.Scatter(x=[4000, 5000, 6000], y=[7000, 8000, 9000]), row=2, col=2) fig.update_layout(height=500, width=700, title_text="Multiple Subplots with Titles") fig.show() Is there a way to change the background colour of the title text using plotly?
From what I can see, there does not seem to be a direct way to set the subplot title backgrounds, however, the closest I could get to what you were looking for was to create an annotation at the top of each chart, and set it's background accordingly. Now you could perhaps generate some form of formula to calculate the best position of each annotation, however, I just did it by eye - as a proof of concept. In general, the x-coord is the middle x-point of each chart, and the y-coord is just above the maximum y-value of each chart. Please let me know what you think: from plotly.subplots import make_subplots import plotly.graph_objects as go fig = make_subplots( rows=2, cols=2) fig.add_trace(go.Scatter(x=[1, 2, 3], y=[4, 5, 6]),row=1, col=1) fig.add_trace(go.Scatter(x=[20, 30, 40], y=[50, 60, 70]), row=1, col=2) fig.add_trace(go.Scatter(x=[300, 400, 500], y=[600, 700, 800]), row=2, col=1) fig.add_trace(go.Scatter(x=[4000, 5000, 6000], y=[7000, 8000, 9000]), row=2, col=2) fig.update_layout(height=500, width=700, title_text="Multiple Subplots with Annotated Titles", annotations=[dict( x=2, y=6.3, # Coordinates of the title (plot 1) xref='x1', yref='y1', text='* Plot 1 Title *', bgcolor = "#2b3e50", # dark background font_color = "#FFFFFF", # white font showarrow=False), dict( x=30, y=73, # Coordinates of the title (plot 2) xref='x2', yref='y2', text='* Plot 2 Title *', bgcolor = "#2b3e50", # dark background font_color = "#FFFFFF", # white font showarrow=False), dict( x=400, y=830, # Coordinates of the title (plot 3) xref='x3', yref='y3', text='* Plot 3 Title *', bgcolor = "#2b3e50", # dark background font_color = "#FFFFFF", # white font showarrow=False), dict( x=5000, y=9290, # Coordinates of the title (plot 4) xref='x4', yref='y4', text='* Plot 4 Title *', bgcolor = "#2b3e50", # dark background font_color = "#FFFFFF", # white font showarrow=False)]) fig.show() OUTPUT:
3
4
73,991,085
2022-10-7
https://stackoverflow.com/questions/73991085/pandas-groupby-headn-where-n-is-a-function-of-group-label
I have a dataframe, and I would like to group by a column and take the head of each group, but I want the depth of the head to be defined by a function of the group label. If it weren't for the variable group sizes, I could easily do df.groupby('label').head(n). I can imagine a solution that involves iterating through df['label'].unique(), slicing the dataframe and building a new one, but I'm in a context where I'm pretty sensitive to performance so I'd like to avoid that kind of iteration if possible. Here's an exmaple dataframe: label values 0 apple 7 1 apple 5 2 apple 4 3 car 9 4 car 6 5 dog 5 6 dog 3 7 dog 2 8 dog 1 and code for my example setup: import pandas as pd df = pd.DataFrame({'label': ['apple', 'apple', 'apple', 'car', 'car', 'dog', 'dog', 'dog', 'dog'], 'values': [7, 5, 4, 9, 6, 5, 3, 2 ,1]}) def depth(label): if label == 'apple': return 1 elif label == 'car': return 2 elif label == 'dog': return 3 my desired output is a dataframe with the number of rows from each group defined by that function: label values 0 apple 7 3 car 9 4 car 6 5 dog 5 6 dog 3 7 dog 2
I would use a dictionary here and using <group>.name in groupby.apply: depth = {'apple': 1, 'car': 2, 'dog': 3} out = (df.groupby('label', group_keys=False) .apply(lambda g: g.head(depth.get(g.name, 0))) ) NB. if you really need a function, you can do the same with a function call. Make sure to return a value in every case. Alternative option with groupby.cumcount and boolean indexing: out = df[df['label'].map(depth).gt(df.groupby('label').cumcount())] output: label values 0 apple 7 3 car 9 4 car 6 5 dog 5 6 dog 3 7 dog 2
4
2
73,990,548
2022-10-7
https://stackoverflow.com/questions/73990548/how-to-provide-c-version-when-extending-python
I want to make c++ code callable from python. https://docs.python.org/3/extending/ explains how to do this, but does not mention how to specify c++ version. By default distutils calls g++ with a bunch of arguments, however does not provide the version argument. Example of setup.py: from distutils.core import setup, Extension MOD = "ext" module = Extension("Hello", sources = ["hello.cpp"]) setup( name="PackageName", version="0.01", description="desc", ext_modules = [module] ) I'm using linux, if that matters.
You can pass compiler arguments as extra_compile_args so for example module = Extension( "Hello", sources = ["hello.cpp"], extra_compile_args = ["-std=c++20"] )
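For completeness — not part of the original answer — the module is then built the usual way, e.g.:

python setup.py build_ext --inplace
# or, from the project root, via pip's build machinery:
pip install .

Also note that -std=c++20 is a GCC/Clang flag; MSVC on Windows uses /std:c++20 instead, so cross-platform setups sometimes choose the flag based on the compiler in use.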
4
4
73,987,319
2022-10-7
https://stackoverflow.com/questions/73987319/how-to-typehint-dynamic-class-instantiation-like-pydantic-and-dataclass
Both Pydantic and Dataclass can typehint the object creation based on the attributes and their typings, like these examples: from pydantic import BaseModel, PrivateAttr, Field from dataclasses import dataclass # Pydantic way class Person(BaseModel): name : str address : str _valid : bool = PrivateAttr(default=False) #dataclass way @dataclass class PersonDataclass(): name : str address : str _valid : bool = False bob = Person(name="Bob", address="New York") bobDataclass = PersonDataclass("Bob", "New York") With this code, I can get typehint on object creation (see screenshots below): pydantic typehint on object creation dataclass typehint on object creation Not only that, but the object's attributes also get documented. I studied the code of pydantic to try to achieve the same result, but I couldn't. The code that I tried was this: class MyBaseModelMeta(type): def __new__(cls, name, bases, dct): def new_init(self : cls, /, name : str, address : str): self.name = name self.address = address self._valid = False dct["__init__"] = new_init dct["__annotations__"] = {"__init__": {"name": str, "address": str, "_valid": bool}} return super().__new__(cls, name, bases, dct) class MyBaseModel(metaclass=MyBaseModelMeta): def __repr__(self) -> str: return f"MyBaseModel: {self.__dict__}" class MyPerson(MyBaseModel): pass myBob = MyPerson("Bob", "New York") My class works (the dynamic init insertion works) but the class and object get no typehint. my class works but it doesn't get typehinted What am I doing wrong? How can I achieve the typehints?
@Daniil Fajnberg is mostly correct, but depending on your type checker you can can use the dataclass_transform(Python 3.11) or __dataclass_transform__ early adopters program decorator. Pylance and Pyright (usually used in VS-Code) at least work with these. You can only mimic the behaviour of dataclasses that way though, I don't think you're able to define that your Metaclass adds extra fields. :/ Edit: At least pydantic uses this decorator for their BaseModel: https://pydantic-docs.helpmanual.io/visual_studio_code/#technical-details If you dig through the code of pydantic you'll find that their ModelMetaclass is decorated with __dataclass_transform__
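A rough sketch of how that decorator is applied to the metaclass from the question (Python 3.11+ has it in typing; older versions can import it from typing_extensions). Keep in mind it only informs the type checker — the metaclass still has to generate __init__ at runtime — and checker support varies (Pyright/Pylance first, mypy later):

from typing import dataclass_transform

@dataclass_transform()
class MyBaseModelMeta(type):
    def __new__(mcs, name, bases, dct):
        # ... generate __init__ from the annotations in dct, as in the question ...
        return super().__new__(mcs, name, bases, dct)

class MyBaseModel(metaclass=MyBaseModelMeta):
    pass

class MyPerson(MyBaseModel):
    name: str       # the checker now synthesizes MyPerson(name=..., address=...)
    address: str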
5
5
73,984,925
2022-10-7
https://stackoverflow.com/questions/73984925/specify-dependency-version-for-git-repository-in-pyproject-toml
I have a python project with all dependencies and versions managed by pyproject.toml file. One of these dependencies is a git reference: [project] name = 'my_package' version = '1.0.0' dependencies = [ 'my_dependency @ git+https://github.com/some_user/some_repo.git' ] In order to improve version management after some time I started to use tags to specify exact "version". Like this: dependencies = [ 'my_dependency @ git+https://github.com/some_user/[email protected]' ] But this is still not enough. Ideally I want something flexible, open for minor or patch version increase. Like this: dependencies = [ 'my_dependency >= 1.2 @ git+https://github.com/some_user/some_repo.git' ] To be clear: I want pip to look for different versions in the entire repo history and take one's that match the condition. According to semver. In this particular case with >= 1.2 it should take 1.2.1, 1.2.2, 1.3, 1.157.256 or any other commit with version in pyproject.toml (or at least git tag) greater or equal to 1.2. Is this possible? Can pip manage versions so well for git repositories?
Unfortunately it's not possible. As specified in https://peps.python.org/pep-0508/ and https://pip.pypa.io/en/stable/reference/requirement-specifiers/, you can't combine version specifiers with URL-based dependencies. Your second approach, pinning a tag in the URL, is the one you need.
9
5
73,983,298
2022-10-7
https://stackoverflow.com/questions/73983298/pydantic-error-wrappers-validationerror-value-is-not-a-valid-list-type-type-e
New to FastAPI Getting a "value is not a valid list (type=type_error.list)" error Whenever I try to return {"posts": post} @router.get('', response_model = List[schemas.PostResponseSchema]) def get_posts(db : Session = Depends(get_db)): print(limit) posts = db.query(models.Post).all() return {"posts" : posts} Although it works if I return posts like this: return posts Here is my response model: class PostResponseSchema(PostBase): user_id: int id: str created_at : datetime user : UserResponseSchema class Config: orm_mode = True And Model: class Post(Base): __tablename__ = "posts" id = Column(Integer, primary_key=True, nullable=False) title = Column(String, nullable=False) content = Column(String, nullable = False) published = Column(Boolean, server_default = 'TRUE' , nullable = False) created_at = Column(TIMESTAMP(timezone=True), nullable = False, server_default = text('now()')) user_id = Column(Integer, ForeignKey("users.id", ondelete = "CASCADE"), nullable = False ) user = relationship("User") what am I missing here?
Your route declares that the response is a list with this line: @router.get('', response_model = List[schemas.PostResponseSchema]) but return {"posts" : posts} returns a dict (an object), not a list, so validation fails. Either return posts directly, because it is the list of objects the response model expects, or, if you want to keep return {"posts": posts}, change the decorator to router.get('') (i.e. drop the list response model); then the response will look like {"posts": [......]}, where the list of posts sits inside the brackets.
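If you would rather keep the {"posts": ...} shape and still have it validated and documented, a wrapper response model is a common pattern — a sketch reusing the names from the question (router, Session, Depends, get_db, models, PostResponseSchema are all from the original code):

from typing import List
from pydantic import BaseModel

class PostListResponse(BaseModel):
    posts: List[PostResponseSchema]

@router.get('', response_model=PostListResponse)
def get_posts(db: Session = Depends(get_db)):
    posts = db.query(models.Post).all()
    return {"posts": posts}

Because PostResponseSchema already sets orm_mode = True, the ORM objects inside the list are converted during response validation.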
8
12
73,978,318
2022-10-6
https://stackoverflow.com/questions/73978318/splitting-a-list-on-non-sequential-numbers
I have an ordered list of entities, numbered in a broken sequence: [1, 2, 3, 6, 7, 11, 17, 18, 19] I'd like to break the list where there's a gap, and collect the results in a new list: [[1, 2, 3], [6, 7], [11], [17, 18, 19]] I have the feeling there's a name for what I want to do and probably a nice library function for it - but I can't think of it. Can anyone shine some light before I possibly reinvent a wheel? edit: Thanks, folks, but I was asking if there's a name for this operation and an existing algorithm, not for implementations - this is what I came up with: def group_adjoining(elements, key=lambda x: x): """Returns list of lists of contiguous elements :key: function to get key integer from list element """ if not elements: return elements result = [[elements[0]]] for a, b in zip(elements, elements[1:]): if key(a) + 1 == key(b): result[-1].append(b) else: result.append([b]) return result
Plain itertools.groupby approach: from itertools import groupby lst = [1, 2, 3, 6, 7, 11, 17, 18, 19] out = [] for _, g in groupby(enumerate(lst), lambda x: x[0] - x[1]): out.append([v for _, v in g]) print(out) Prints: [[1, 2, 3], [6, 7], [11], [17, 18, 19]]
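As for the "is there a name / library function" part of the question: this is usually described as grouping consecutive (contiguous) runs, and more_itertools ships a ready-made helper if a third-party dependency is acceptable:

from more_itertools import consecutive_groups

lst = [1, 2, 3, 6, 7, 11, 17, 18, 19]
out = [list(g) for g in consecutive_groups(lst)]
print(out)   # [[1, 2, 3], [6, 7], [11], [17, 18, 19]]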
4
3
73,968,584
2022-10-6
https://stackoverflow.com/questions/73968584/flask-sqlalchemy-db-create-all-got-an-unexpected-keyword-argument-app
I'm following a tutorial for creating a Flask app with Flask-SQLAlchemy. However, it has started raising an error when creating the database. How do I create the database? from flask import Flask from flask_sqlalchemy import SQLAlchemy db = SQLAlchemy() def create_app(): app = Flask(__name__) app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///project.db" db.init_app(app) from . import models create_database(app) return app def create_database(app): if not path.exists("website/project.db"): db.create_all(app=app) print("created database") The line db.create_all(app=app) gives me this error: SQLAlchemy.create_all() got an unexpected keyword argument 'app'
Flask-SQLAlchemy 3 no longer accepts an app argument to methods like create_all. Instead, it always requires an active Flask application context. db = SQLAlchemy() def create_app(): app = Flask(__name__) app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///project.db" db.init_app(app) from . import models with app.app_context(): db.create_all() return app There is no need for that create_database function. SQLAlchemy will already not overwrite an existing file, and the only time the database wouldn't be created is if it raised an error.
23
49
73,973,332
2022-10-6
https://stackoverflow.com/questions/73973332/check-if-were-in-a-github-action-travis-ci-circle-ci-etc-testing-environme
I would like to programmatically determine if a particular Python script is run a testing environment such as GitHub action Travis CI Circle CI etc. I realize that this will require some heuristics, but that's good enough for me. Are certain environment variables always set? Is the user name always the same? Etc.
Each CI/CD pipeline tool generally sets its own environment variable. The ones I know about: os.getenv("GITHUB_ACTIONS"), os.getenv("TRAVIS"), os.getenv("CIRCLECI"), os.getenv("GITLAB_CI"). Each of these returns the string "true" in a Python script executed in the respective tool's environment, e.g.: os.getenv("GITHUB_ACTIONS") == "true" in a GitHub Actions workflow, os.getenv("CIRCLECI") == "true" in a CircleCI pipeline, and so on. PS: if I'm not mistaken, identifying that the script runs under Jenkins or on a Kubernetes service host does not work the same way.
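One way to fold these checks into a single helper — a best-effort sketch, since the exact variables depend on the provider (most of the big ones also set the generic CI variable):

import os

def running_in_ci() -> bool:
    markers = ("CI", "GITHUB_ACTIONS", "TRAVIS", "CIRCLECI", "GITLAB_CI")
    return any(os.getenv(m, "").lower() in ("true", "1") for m in markers)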
16
30
73,973,736
2022-10-6
https://stackoverflow.com/questions/73973736/how-to-use-column-value-as-parameter-in-aggregation-function-in-pandas
Given a certain table of type A B C t r 1 t r 1 n j 2 n j 2 n j 2 I would like to group on A and B and only take the number of rows specified by C So the desired output would be A B C t r 1 n j 2 n j 2 I am trying to achieve that through this function but with no luck df.groupby(['A', 'B']).agg(lambda x: x.head(df.C))
You can use groupby.cumcount and boolean indexing: out = df[df['C'].gt(df.groupby(['A', 'B']).cumcount())] Or with a classical groupby.apply: (df.groupby(['A', 'B'], sort=False, as_index=False, group_keys=False) .apply(lambda g: g.head(g['C'].iloc[0])) ) output: A B C 0 t r 1 2 n j 2 3 n j 2 Intermediates for the groupby.cumcount approach: A B C cumcount C > cumcount 0 t r 1 0 True 1 t r 1 1 False 2 n j 2 0 True 3 n j 2 1 True 4 n j 2 2 False
3
5
73,955,605
2022-10-5
https://stackoverflow.com/questions/73955605/docker-airflow-error-can-not-perform-a-user-install-user-site-packages-a
GOAL Since 2022 Sept 19 The release of Apache Airflow 2.4.0 Airflow supports ExternalPythonOperator I have asked the main contributors as well and I should be able to add 2 python virtual environments to the base image of Airflow Docker 2.4.1 and be able to rune single tasks inside a DAG. My goal is to use multiple host python virtualenvs that built from a local requirements.txt. using ExternalPythonOperator to run them (Each of my dags just execute a timed python function) CODE Dockerfile FROM apache/airflow:2.4.1-python3.8 RUN python3 -m venv /opt/airflow/venv1 COPY requirements.txt . RUN . /opt/airflow/venv1/bin/activate && pip install -r requirements.txt TERMINAL INPUT docker build -t my-image-apache/airflow:2.4.1 . TERMINAL OUTPUT [+] Building 4.3s (9/9) FINISHED => [internal] load build definition from Dockerfile 0.0s => => transferring dockerfile: 1.55kB 0.0s => [internal] load .dockerignore 0.0s => => transferring context: 2B 0.0s => [internal] load metadata for docker.io/apache/airflow:2.4.1-python3.8 1.2s => [auth] apache/airflow:pull token for registry-1.docker.io 0.0s => CACHED [1/4] FROM docker.io/apache/airflow:2.4.1-python3.8@sha256:5f9f4eff86993e11893f371f591aed73cf2310a96d84ae8fddec11857c6345da 0.0s => [internal] load build context 0.0s => => transferring context: 37B 0.0s => [2/4] RUN python3 -m venv /opt/airflow/venv1 2.2s => [3/4] COPY requirements.txt . 0.0s => ERROR [4/4] RUN . /opt/airflow/venv1/bin/activate && pip install -r requirements.txt 0.8s ------ > [4/4] RUN . /opt/airflow/venv1/bin/activate && pip install -r requirements.txt: #9 0.621 ERROR: Can not perform a '--user' install. User site-packages are not visible in this virtualenv. #9 0.763 WARNING: You are using pip version 22.0.4; however, version 22.2.2 is available. #9 0.763 You should consider upgrading via the '/opt/airflow/venv1/bin/python3 -m pip install --upgrade pip' command. ------ executor failed running [/bin/bash -o pipefail -o errexit -o nounset -o nolog -c . /opt/airflow/venv1/bin/activate && pip install -r requirements.txt]: exit code: 1 Tried Solutions I dont use --user flag, and in my case this is a Dockerfile commands - Pip default behavior conflicts with virtualenv? https://splunktool.com/error-can-not-perform-a-user-install-user-sitepackages-are-not-visible-in-this-virtualenv FROM apache/airflow:2.4.1-python3.8 ENV VIRTUAL_ENV=/opt/airflow/venv RUN python3 -m venv $VIRTUAL_ENV ENV PATH="$VIRTUAL_ENV/bin:$PATH" # Install dependencies: COPY requirements.txt . RUN pip install -r requirements.txt Same error as above FROM apache/airflow:2.4.1-python3.8 ADD . /opt/airflow/ WORKDIR /opt/airflow/ RUN python -m venv venv RUN venv/bin/pip install --upgrade pip RUN venv/bin/pip install -r requirements.txt Same error as above
Dockerfile [CORRECT] FROM apache/airflow:2.4.1-python3.8 # Compulsory to switch parameter ENV PIP_USER=false #python venv setup RUN python3 -m venv /opt/airflow/venv1 # Install dependencies: COPY requirements.txt . # --user <--- WRONG, this is what ENV PIP_USER=false turns off #RUN /opt/airflow/venv1/bin/pip install --user -r requirements.txt <---this is all wrong RUN /opt/airflow/venv1/bin/pip install -r requirements.txt ENV PIP_USER=true
3
7
73,970,233
2022-10-6
https://stackoverflow.com/questions/73970233/reset-pandas-cumsum-when-the-condition-is-not-satisified
I went through different stackoverflow questions and finally posting it because I couldnt solve one of the issues I am facing. I have a dataframe like below A B C group1 group1_c 12 group1 group1_c 12 group1 group1_c 12 group1 group1_c 1 group1 group1_c 12 group1 group1_c 12 I have to match two rows together and whenever the value matches, I cumsum it. To do this, df['cumul'] = df['C'].eq(df.groupby(['A','B'])['C'].shift(1).ffill()).groupby([df['A'],df['B']).cumsum() Once I do this, A B C Cumul group1 group1_c 12 0 group1 group1_c 12 1 group1 group1_c 12 2 group1 group1_c 1 2 group1 group1_c 12 3 group1 group1_c 12 3 Whereas I want to reset if the condition is not met.Expected solution A B C Cumul group1 group1_c 12 0 group1 group1_c 12 1 group1 group1_c 12 2 group1 group1_c 1 0 group1 group1_c 12 0 group1 group1_c 12 1 Please advice Thank you
If need count groups per consecutive values of C column use Series.ne with Series.shift and cumulative sum, last use counter by GroupBy.cumcount: df['cumul'] = df.groupby(df['C'].ne(df['C'].shift()).cumsum()).cumcount() print (df) A B C cumul 0 group1 group1_c 12 0 1 group1 group1_c 12 1 2 group1 group1_c 12 2 3 group1 group1_c 1 0 4 group1 group1_c 12 0 5 group1 group1_c 12 1 If need per A, B groups also add both groups: print (df) A B C 0 group1 group1_c 12 1 group1 group2_c 12 <-changed groups 2 group1 group2_c 12 <-changed groups 3 group1 group1_c 1 4 group1 group1_c 12 5 group1 group1_c 12 s = df['C'].ne(df['C'].shift()).cumsum() df['cumul'] = df.groupby([df['A'],df['B'], s]).cumcount() df['cumul1'] = df.groupby(df['C'].ne(df['C'].shift()).cumsum()).cumcount() print (df) A B C cumul cumul1 0 group1 group1_c 12 0 0 1 group1 group2_c 12 0 1 2 group1 group2_c 12 1 2 3 group1 group1_c 1 0 0 4 group1 group1_c 12 0 0 5 group1 group1_c 12 1 1 Alternative solution: s = df[['A','B','C']].ne(df[['A','B','C']].shift()).any(axis=1).cumsum() df['cumul'] = df.groupby(s).cumcount()
3
4
73,968,566
2022-10-6
https://stackoverflow.com/questions/73968566/with-pydantic-how-can-i-create-my-own-validationerror-reason
it seems impossible to set a regex constraint with a __root__ field like this one: class Cars(BaseModel): __root__: Dict[str, CarData] so, i've resorted to doing it at the endpoint: @app.post("/cars") async def get_cars(cars: Cars = Body(...)): x = cars.json() y = json.loads(x) keys = list(y.keys()) try: if any([re.search(r'^\d+$', i) is None for i in keys]): raise ValidationError except ValidationError as ex: return 'wrong type' return 'works' this works well in that i get wrong type returned if i dont use a digit in the request body. but i'd like to return something similar to what pydantic returns but with a custom message: { "detail": [ { "loc": [ "body", "__root__", ], "msg": "hey there, you can only use digits!", "type": "type_error.???" } ] }
You can pass your own error string by using raise ValidationError("Wrong data type"). Hope it helps.
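In pydantic v1 another common way to get a custom message into the standard error payload is to raise ValueError inside a validator on the __root__ field — pydantic wraps it into a ValidationError, so FastAPI returns the familiar detail list with your text as msg and type value_error. A sketch based on the models from the question (CarData is defined elsewhere in the original code):

import re
from typing import Dict
from pydantic import BaseModel, validator

class Cars(BaseModel):
    __root__: Dict[str, CarData]

    @validator('__root__')
    def keys_must_be_digits(cls, value):
        # same check as the ^\d+$ regex from the question
        if any(re.fullmatch(r'\d+', key) is None for key in value):
            raise ValueError('hey there, you can only use digits!')
        return value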
5
-4
73,969,054
2022-10-6
https://stackoverflow.com/questions/73969054/how-to-create-combobox-with-django-model
I want to create something like a combo box (a dropdown of predefined choices) with a Django model, but I can't find any field type to do that.
You can simply define the choices in models.py: class Student(models.Model): select_gender = ( ('Male', 'Male'), ('Female', 'Female'), ('Other', 'Other'), ) student_name = models.CharField(max_length=100) student_gender = models.CharField(max_length=8, choices=select_gender) Then, in the forms.py file, render that field with a Select widget: class StudentForm(forms.ModelForm): class Meta: model = Student fields = '__all__' widgets = { 'student_name' : forms.TextInput(attrs={'class':'form-control'}), 'student_gender' : forms.Select(attrs={'class':'form-control'}) } That's one way to do it.
3
4
73,962,994
2022-10-5
https://stackoverflow.com/questions/73962994/binding-data-in-type-dict-is-not-supported
I am trying to write json object into a particular column of my sql table variant_str = response.json() print(con.cursor().execute('INSERT INTO TEST_TABLE (JSON_STR) VALUES (?)', (variant_str,))) Here, the variant_str is of type dict. I get an error that: snowflake.connector.errors.ProgrammingError: 252004: Failed processing pyformat-parameters; 255001: Binding data in type (dict) is not supported. The table's ddl where I am trying to load the data is this: create or replace TABLE TEST_TABLE ( JSON_STR VARIANT );
It is possible to use ? as the placeholder for parameter binding: import snowflake.connector snowflake.connector.paramstyle='qmark' Insert using INSERT INTO ... SELECT PARSE_JSON(...) con.cursor().execute('INSERT INTO TEST_TABLE (JSON_STR) SELECT PARSE_JSON(?)' , (variant_str)) where variant_str is a valid JSON string
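Since variant_str in the question is a dict, it has to be serialized to a JSON string before binding; a sketch of the full call (paramstyle is usually set before the connection is created, and the bind values go in a list):

import json
import snowflake.connector

snowflake.connector.paramstyle = 'qmark'   # set this before snowflake.connector.connect(...)

con.cursor().execute(
    'INSERT INTO TEST_TABLE (JSON_STR) SELECT PARSE_JSON(?)',
    [json.dumps(variant_str)],             # dict -> JSON string, passed as a one-element list
)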
3
4
73,958,381
2022-10-5
https://stackoverflow.com/questions/73958381/this-engine-did-not-provide-a-list-of-tried-templates
Wup, I'm trying to deploy a localhost web using Django but I get the following error: TemplateDoesNotExist at / templates/index.html Request Method: GET Request URL: http://127.0.0.1:8000/ Django Version: 4.1.2 Exception Type: TemplateDoesNotExist I was looking tutorials and docs but I didn't found the answer. Here is my settings.py ALLOWED_HOSTS = ['*'] DEBUG = True ROOT_URLCONF = 'FortyTwop.urls' SECRET_KEY = 'This is a secret key' TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': ["templates"], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] also urls.py from django.urls import path from . import views urlpatterns = [ path('', views.index, name='index') ] And views.py from email import message from http.client import HTTPResponse from django.shortcuts import render # Create your views here. def index(request): return render(request, 'index.html') My path tree: . β”œβ”€β”€ FortyTwop β”‚ β”œβ”€β”€ __init__.py β”‚ β”œβ”€β”€ admin.py β”‚ β”œβ”€β”€ apps.py β”‚ β”œβ”€β”€ migrations β”‚ β”‚ └── __init__.py β”‚ β”œβ”€β”€ models.py β”‚ β”œβ”€β”€ templates β”‚ β”‚ └── index.html β”‚ β”œβ”€β”€ tests.py β”‚ β”œβ”€β”€ urls.py β”‚ └── views.py └── manage.py What Im doing bad or what I didn't have in the settings?, this project is just for test.
templates/ is the base path you've configured in the settings, so you'll want def index(request): return render(request, 'index.html') Furthermore, TEMPLATE_DIRS hasn't been a thing since Django 1.8 or so, and you're on Django 4.x. See these instructions.
3
5
73,933,043
2022-10-3
https://stackoverflow.com/questions/73933043/removing-null-values-on-selected-columns-only-in-polars-dataframe
I am trying to remove null values across a list of selected columns. But it seems that I might have got the with_columns operation not right. What's the right approach if you want to operate the removing only on selected columns? df = pl.DataFrame( { "id": ["NY", "TK", "FD"], "eat2000": [1, None, 3], "eat2001": [-2, None, 4], "eat2002": [None, None, None], "eat2003": [-9, None, 8], "eat2004": [None, None, 8] } ); df β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ eat2000 ┆ eat2001 ┆ eat2002 ┆ eat2003 ┆ eat2004 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i64 ┆ f64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════════β•ͺ═════════β•ͺ═════════β•ͺ═════════β•ͺ═════════║ β”‚ NY ┆ 1 ┆ -2 ┆ null ┆ -9 ┆ null β”‚ β”‚ TK ┆ null ┆ null ┆ null ┆ null ┆ null β”‚ β”‚ FD ┆ 3 ┆ 4 ┆ null ┆ 8 ┆ 8 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ col_list = [word for word in df.columns if word.startswith(("eat"))] ( df .with_columns( pl.col(col_list).filter(~pl.fold(True, lambda acc, s: acc & s.is_null(), pl.all())) ) ) # InvalidOperationError: dtype String not supported in 'not' operation Expected output: β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ eat2000 ┆ eat2001 ┆ eat2002 ┆ eat2003 ┆ eat2004 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i64 ┆ f64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════════β•ͺ═════════β•ͺ═════════β•ͺ═════════β•ͺ═════════║ β”‚ NY ┆ 1 ┆ -2 ┆ null ┆ -9 ┆ null β”‚ β”‚ FD ┆ 3 ┆ 4 ┆ null ┆ 8 ┆ 8 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
polars.col also accepts Regular expressions which is one way to select all columns that start with a specific string. polars.all_horizontal combines all results horizontally (i.e., row-wise) to give a single True/False value per row. df.select( ~pl.all_horizontal(pl.col(r'^eat.*$').is_null()) ) shape: (3, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β” β”‚ all β”‚ β”‚ --- β”‚ β”‚ bool β”‚ β•žβ•β•β•β•β•β•β•β•‘ β”‚ true β”‚ β”‚ false β”‚ β”‚ true β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”˜ DataFrame.filter can be used to keep only the true rows: df.filter( ~pl.all_horizontal(pl.col(r'^eat.*$').is_null()) ) shape: (2, 6) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ eat2000 ┆ eat2001 ┆ eat2002 ┆ eat2003 ┆ eat2004 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i64 ┆ f32 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════════β•ͺ═════════β•ͺ═════════β•ͺ═════════β•ͺ═════════║ β”‚ NY ┆ 1 ┆ -2 ┆ null ┆ -9 ┆ null β”‚ β”‚ FD ┆ 3 ┆ 4 ┆ null ┆ 8 ┆ 8 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ The ~ in front of the pl.all_horizontal stands for negation. Notice that we didn't need the col_list. One caution: the regex expression in the pl.col must start with ^ and end with $. These cannot be omitted, even if the resulting regex expression is otherwise valid. Alternately, if you don't like the ~ operator, there is .not_() df.filter( pl.all_horizontal(pl.col(r'^eat.*$').is_null()).not_() ) Or, we can check if there are any non-null values instead: df.filter( pl.any_horizontal(pl.col(r'^eat.*$').is_not_null()) ) Other Notes As an aside, Polars has other dedicated horizontal functions e.g. min_horizontal, max_horizontal, sum_horizontal Edit - using fold FYI, here's how to use the fold method, if that is what you'd prefer. Note the use of pl.col with a regex expression. df.filter( ~pl.fold(True, lambda acc, s: acc & s.is_null(), exprs=pl.col(r'^eat.*$')) ) shape: (2, 6) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ eat2000 ┆ eat2001 ┆ eat2002 ┆ eat2003 ┆ eat2004 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i64 ┆ null ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════════β•ͺ═════════β•ͺ═════════β•ͺ═════════β•ͺ═════════║ β”‚ NY ┆ 1 ┆ -2 ┆ null ┆ -9 ┆ null β”‚ β”‚ FD ┆ 3 ┆ 4 ┆ null ┆ 8 ┆ 8 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
8
11
73,948,502
2022-10-4
https://stackoverflow.com/questions/73948502/take-cumsum-of-each-row-in-polars
E.g. if I have import polars as pl df = pl.DataFrame({'a': [1,2,3], 'b': [4,5,6]}) how would I find the cumulative sum of each row? Expected output: a b 0 1 5 1 2 7 2 3 9 Here's the equivalent in pandas: >>> import pandas as pd >>> pd.DataFrame({'a': [1,2,3], 'b': [4,5,6]}).cumsum(axis=1) a b 0 1 5 1 2 7 2 3 9 but I can't figure out how to do it in polars
cum_sum_horizontal() generates a struct of cum_sum values. df.select(pl.cum_sum_horizontal(pl.all())) shape: (3, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ cum_sum β”‚ β”‚ --- β”‚ β”‚ struct[2] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ {1,5} β”‚ β”‚ {2,7} β”‚ β”‚ {3,9} β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Which you can unnest() df.select(pl.cum_sum_horizontal(pl.all())).unnest('cum_sum') shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a ┆ b β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════║ β”‚ 1 ┆ 5 β”‚ β”‚ 2 ┆ 7 β”‚ β”‚ 3 ┆ 9 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ The code for cum_sum_horizontal is here. As it stands, it just calls cum_fold() df.select(pl.cum_fold(pl.lit(0, pl.UInt32), lambda x, y: x + y, pl.all())) shape: (3, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ cum_fold β”‚ β”‚ --- β”‚ β”‚ struct[2] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ {1,5} β”‚ β”‚ {2,7} β”‚ β”‚ {3,9} β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
3
9
73,917,061
2022-10-1
https://stackoverflow.com/questions/73917061/how-to-do-a-horizontal-forward-fill-in-polars
I am wondering if there's a way to do forward filling by columns in polars. df = pl.DataFrame( { "id": ["NY", "TK", "FD"], "eat2000": [1, 6, 3], "eat2001": [-2, None, 4], "eat2002": [None, None, None], "eat2003": [-9, 3, 8], "eat2004": [None, None, 8] } ); df shape: (3, 6) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ eat2000 ┆ eat2001 ┆ eat2002 ┆ eat2003 ┆ eat2004 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i64 ┆ f64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════════β•ͺ═════════β•ͺ═════════β•ͺ═════════β•ͺ═════════║ β”‚ NY ┆ 1 ┆ -2 ┆ null ┆ -9 ┆ null β”‚ β”‚ TK ┆ 6 ┆ null ┆ null ┆ 3 ┆ null β”‚ β”‚ FD ┆ 3 ┆ 4 ┆ null ┆ 8 ┆ 8 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ I would like to do the equivlanet of .ffill(axis=1) in pandas. pl.from_pandas(df.to_pandas().ffill(axis=1)) shape: (3, 6) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ eat2000 ┆ eat2001 ┆ eat2002 ┆ eat2003 ┆ eat2004 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ f64 ┆ f64 ┆ i64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•ͺ═════════β•ͺ═════════β•ͺ═════════β•ͺ═════════β•ͺ═════════║ β”‚ NY ┆ 1 ┆ -2.0 ┆ -2.0 ┆ -9 ┆ -9.0 β”‚ β”‚ TK ┆ 6 ┆ 6.0 ┆ 6.0 ┆ 3 ┆ 3.0 β”‚ β”‚ FD ┆ 3 ┆ 4.0 ┆ 4.0 ┆ 8 ┆ 8.0 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
You can use the new coalesce Expression to fold columns horizontally. If you place the coalesce expressions in a with_columns context, they will be run in parallel. ( df .with_columns(pl.col("^eat.*$").cast(pl.Int64)) .with_columns( pl.coalesce("eat2004", "eat2003", "eat2002", "eat2001", "eat2000"), pl.coalesce("eat2003", "eat2002", "eat2001", "eat2000"), pl.coalesce("eat2002", "eat2001", "eat2000"), pl.coalesce("eat2001", "eat2000"), ) ) shape: (3, 6) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ eat2000 ┆ eat2001 ┆ eat2002 ┆ eat2003 ┆ eat2004 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i64 ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════════β•ͺ═════════β•ͺ═════════β•ͺ═════════β•ͺ═════════║ β”‚ NY ┆ 1 ┆ -2 ┆ -2 ┆ -9 ┆ -9 β”‚ β”‚ TK ┆ 6 ┆ 6 ┆ 6 ┆ 3 ┆ 3 β”‚ β”‚ FD ┆ 3 ┆ 4 ┆ 4 ┆ 8 ┆ 8 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Couple of notes. I first cast the eatXXXX columns to the same type. (In the DataFrame constructor, eat2002 is of type Float64 because of the way Polars initializes an all-null column that is not supplied with an explicit datatype). I've written out the list of coalesce Expressions for demonstration, but the list of expressions can be generated with a Python list comprehension. eat_cols = [col_nm for col_nm in reversed(df.columns) if col_nm.startswith('eat')] ( df .with_columns(pl.col("^eat.*$").cast(pl.Int64)) .with_columns( pl.coalesce(eat_cols[idx:]) for idx in range(0, len(eat_cols) - 1) ) )
4
3
73,908,734
2022-9-30
https://stackoverflow.com/questions/73908734/how-to-run-uvicorn-fastapi-server-as-a-module-from-another-python-file
I want to run FastAPI server using Uvicorn from A different Python file. uvicornmodule/main.py import uvicorn import webbrowser from fastapi import FastAPI from fastapi.responses import FileResponse from fastapi.staticfiles import StaticFiles app = FastAPI() import os script_dir = os.path.dirname(__file__) st_abs_file_path = os.path.join(script_dir, "static/") app.mount("/static", StaticFiles(directory=st_abs_file_path), name="static") @app.get("/") async def index(): return FileResponse('static/index.html', media_type='text/html') def start_server(): # print('Starting Server...') uvicorn.run( "app", host="0.0.0.0", port=8765, log_level="debug", reload=True, ) # webbrowser.open("http://127.0.0.1:8765") if __name__ == "__main__": start_server() So, I want to run the FastAPI server from the below test.py file: from uvicornmodule import main main.start_server() Then, I run python test.py. But I am getting the below error: RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ... The "freeze_support()" line can be omitted if the program is not going to be frozen to produce an executable. What I am doing wrong? I need to run this module as package.
When spawning new processes from the main process (as this is what happens when uvicorn.run() is called), it is important to protect the entry point to avoid recursive spawning of subprocesses, etc. As described in this article: If the entry point was not protected with an if-statement idiom checking for the top-level environment, then the script would execute again directly, rather than run a new child process as expected. Protecting the entry point ensures that the program is only started once, that the tasks of the main process are only executed by the main process and not the child processes. Basically, your code that creates the new process must be under if __name__ == '__main__':. Hence: from uvicornmodule import main if __name__ == "__main__": main.start_server() Additionally, running uvicorn programmatically and having reload and/or workers flag(s) enabled, you must pass the application as an import string in the format of "<module>:<attribute>". For example: # main.py import uvicorn from fastapi import FastAPI app = FastAPI() if __name__ == "__main__": uvicorn.run("main:app", host="0.0.0.0", port=8000, reload=True) On a sidenote, the below would also work, if reload and/or workers flags were not used: if __name__ == "__main__": uvicorn.run(app, host="0.0.0.0", port=8000) Also, as per FastAPI documentation, when running the server from a terminal in the following way: > uvicorn main:app --reload the command uvicorn main:app refers to: main: the file main.py (the Python "module"). app: the object created inside of main.py with the line app = FastAPI(). --reload: make the server restart after code changes. Only use for development. Note that the default host and port are 127.0.0.1 and 8000, respectively. You could use the --host and/or --port flag(s), in order to change the host and/or port of the server (have a look at all the available Uvicorn command line options, as well as this answer). Example: > uvicorn main:app --host 0.0.0.0 --port 8000
6
8
73,914,566
2022-9-30
https://stackoverflow.com/questions/73914566/how-to-properly-suppress-mypy-error-name-qualname-not-defined
When using __qualname__ as part of the python logger formatter for classes, I am getting mypy error 'Name "qualname" not defined'. I can suppress it with inline # type: ignore, but wonder if there are more proper ways to do this, or if there are ways to have mypy recognize __qualname__.
The issue linked by @SilvioMayolo and its duplicate were closed in October 2023 as completed. __qualname__ (and __module__, for that matter) can now be used as-is in class bodies: (playground) class C: foo: ClassVar[str] = __qualname__ + 'bar' This has been possible since 1.9.0.
4
0
73,879,213
2022-9-28
https://stackoverflow.com/questions/73879213/instance-is-not-bound-to-a-session
Just like many other people I run into the problem of an instance not being bound to a Session. I have read the SQLAlchemy docs and the top-10 questions here on SO. Unfortunatly I have not found an explanation or solution to my error. My guess is that the commit() closes the session rendering the object unbound. Does this mean that In need to create two objects, one to use with SQL Alchemy and one to use with the rest of my code? I create an object something like this: class Mod(Base): __tablename__ = 'mod' insert_timestamp = Column(DateTime, default=datetime.datetime.now()) name = Column(String, nullable=False) Then I add it to the database using this function and afterwards the object is usesless, I cannot do anything with it anymore, always getting the error that it is not bound to a session. I have tried to keep the session open, keep it closed, copy the object, open two sessions, return the object, return a copy of the object. def add(self, dataobjects: list[Base]) -> bool: s = self.sessionmaker() try: s.add_all(dataobjects) except TypeError: s.rollback() raise else: s.commit() return True This is my session setup: self.engine = create_engine(f"sqlite:///{self.config.database['file']}") self.sessionmaker = sessionmaker(bind=self.engine) Base.metadata.bind = self.engine My last resort would be to create every object twice, once for SQL Alchemy and once so I can actually use the object in my code. This defeats the purpose of SQL Alchemy for me.
ORM entities are expired when committed. By default, if an entity attribute is accessed after expiry, the session emits a SELECT to get the current value from the database. In this case, the session only exists in the scope of the add method, so subsequent attribute access raises the "not bound to a session" error. There are a few ways to approach this: Pass expire_on_commit=False when creating the session. This will prevent automatic expiry so attribute values will remain accessible after leaving the add function, but they may be stale. Create the session outside of the add function and pass it as an argument (or return it from add, though that's rather ugly). As long as the session is not garbage collected the entities remain bound to it. Create a new session and do mod = session.merge(mod) for each entity to bind it to the new session. Which option you choose depends on your application.
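For option 1, the flag goes on the session factory; adapting the setup from the question it would look something like:

self.sessionmaker = sessionmaker(bind=self.engine, expire_on_commit=False)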
4
11
73,877,870
2022-9-28
https://stackoverflow.com/questions/73877870/can-i-resolve-environment-variables-with-pathlib-path-without-os-path-expandvars
Is there a clean way to resolve environment variables or %VARIABLES% in general purely with the Path class without having to use a fallback solution like os.path.expandvars or "my/path/%variable%".fortmat(**os.environ)? The question How to use win environment variable "pathlib" to save files? doesn't provide an answer. It's exactly the opposite. It suggests using os.path.expandvars.
This comment from an answer of the linked thread clearly states that pathlib does not provide anything equivalent to os.path.expandvars. Going through the documentation for the module (as of Python 3.10) shows no references to environment variables. This issue on the CPython issue tracker was closed under the same rationale - the comments there boil down to "expandvars() works with strings, not with Path instances", and no such method was added to pathlib. So in short, no.
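So the usual fallback remains going through the string layer first; a minimal sketch of that for completeness (this is exactly the os.path.expandvars approach the question hoped to avoid):

# Not a pure-pathlib solution (there is none): expand variables on the string,
# then wrap the result in Path.
import os
from pathlib import Path

def expand_path(raw: str) -> Path:
    # os.path.expandvars handles $VAR / ${VAR} (and %VAR% on Windows);
    # expanduser() additionally resolves a leading "~".
    return Path(os.path.expandvars(raw)).expanduser()

print(expand_path("$HOME/my/path"))       # POSIX-style variable
print(expand_path("%USERPROFILE%/data"))  # Windows-style variable (expanded on Windows only)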
6
6
73,892,881
2022-9-29
https://stackoverflow.com/questions/73892881/error-fail-to-create-pixmap-with-tk-getpixmap-in-tkimgphotoinstancesetsize-wh
When i executing this function i get this error: Fail to create pixmap with Tk_GetPixmap in TkImgPhotoInstanceSetSize. This error occur in this line: fig, ax = plt.subplots()(line 12). The strange thing is, that this error only occur sometimes (Maybe only 10% of their executes). Here is my whole function: from bs4 import BeautifulSoup from PIL import ImageDraw, Image, ImageFont from colour import Color import base64 from io import BytesIO import matplotlib.pyplot as plt def getCurrentBatterySOCImage(batterySOC): red = Color("red") colors = list(red.range_to(Color("#23b818"), 101)) color = colors[round(batterySOC)].hex fig, ax = plt.subplots(facecolor="black") ax.set_facecolor("black") fig.tight_layout() ax.axis("off") ax.bar([1], batterySOC, color=color) ax.axis([0.6,1.4,0,100]) ax.set_aspect(.02) tmpfile = BytesIO() fig.savefig(tmpfile, format="png", pad_inches=0, bbox_inches="tight") plt.close(fig) bar = Image.open(tmpfile) bar = bar.resize((107, 208)) img = Image.new("RGBA", (136, 248), (255, 0, 0, 0)) img.paste(bar, (14, 26), bar) battery = Image.open("img/battery.png") img.paste(battery, (0, 0), battery) font = ImageFont.truetype('font/RobotoMono.ttf', 40) message = f"{batterySOC}%" draw = ImageDraw.Draw(img) w, h = draw.textsize(message, font=font) draw.text(((136-w)/2, (248-h)/2), f"{batterySOC}%", fill=(255, 255, 255), font=font, stroke_width=2, stroke_fill="black") tmpfile1 = BytesIO() img.save(tmpfile1, format="png") encoded = base64.b64encode(tmpfile1.getvalue()).decode('utf-8') with open("index.html") as fp: soup = BeautifulSoup(fp, "html.parser") soup.find("img", attrs={'id': 'battery'})['src'] = f"data:image/png;base64,{encoded}" with open("index.html", "w") as fp: fp.write(soup.decode()) return Can someone help me to fix this?
This seems to be a memory issue from this Matplotlib issue, new in Matplotlib 3.5 (vs 3.3). The recommended workaround is to switch to the 'Agg' backend if possible. [EDIT] The issue is now resolved with the default backend in newer versions of Matplotlib: 3.6 (Sept 2022), 3.7, ... If possible, as per wisbucky's answer, you may upgrade the lib: pip install --upgrade matplotlib
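A minimal sketch of the 'Agg' workaround applied to an off-screen figure like the one in the question (the bar value and colour below are placeholders):

# Force the non-interactive 'Agg' backend before pyplot is used, which avoids
# the Tk pixmap allocation path entirely.
import matplotlib
matplotlib.use("Agg")          # do this before any Tk-based backend is touched
import matplotlib.pyplot as plt
from io import BytesIO

fig, ax = plt.subplots(facecolor="black")
ax.bar([1], 75, color="#23b818")
ax.axis("off")

buf = BytesIO()
fig.savefig(buf, format="png", bbox_inches="tight")
plt.close(fig)
print(f"rendered {len(buf.getvalue())} bytes without touching Tk")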
4
2
73,893,871
2022-9-29
https://stackoverflow.com/questions/73893871/interactive-brokers-unable-to-fetch-forex-historical-data
I am trying IB very the first time. I am trying to fetch historical data of $EUR but I am getting an error: Error 162, reqId 3: Historical Market Data Service error message:No historical market data for EUR/CASH@FXSUBPIP Last 1800, contract: Contract(secType='CASH', symbol='EUR', exchange='IDEALPRO', currency='USD') Below is my code: import datetime from ib_insync import * if __name__ == '__main__': ib = IB() r = ib.connect('127.0.0.1', port=7497, clientId=1) contract = Contract() contract.symbol = "EUR" contract.secType = "CASH" contract.currency = "USD" contract.exchange = "IDEALPRO" data = ib.reqHistoricalData( contract=contract, endDateTime='', durationStr='100 D', barSizeSetting='30 mins', useRTH=True, whatToShow='ADJUSTED_LAST' )
The issue here is not in the definition of the FOREX contract itself, but in the bar settings requested:

durationStr='100 D', barSizeSetting='30 mins'

The barSizeSetting is not available for the durationStr selected. If you check the TWS GUI and pull up the historical data chart for the FX contract, you'll see the lowest bar size available for a 100 Day duration is 2 hours. You should always check whether the functionality you are demanding from the API is available through the TWS GUI. IBKR's API talks to either TWS or IB Gateway, which in turn sends the request to IBKR. If the functionality is not available through the GUI, it most likely is not available through the API.
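A hedged sketch of a request that fits those constraints, assuming ib_insync and a TWS paper session on port 7497 as in the question; bar-size availability should still be confirmed against the TWS chart, and MIDPOINT is used because a cash pair has no last-trade data:

from ib_insync import IB, Forex

ib = IB()
ib.connect('127.0.0.1', 7497, clientId=1)

contract = Forex('EURUSD')   # equivalent to CASH / EUR / USD on IDEALPRO

bars = ib.reqHistoricalData(
    contract,
    endDateTime='',
    durationStr='100 D',
    barSizeSetting='2 hours',   # smallest bar size TWS offers for a 100 D duration
    whatToShow='MIDPOINT',      # BID/ASK also work for forex; there is no "last"
    useRTH=True,
)
print(len(bars), "bars received")
ib.disconnect()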
3
1
73,948,109
2022-10-4
https://stackoverflow.com/questions/73948109/pandas-read-csv-file-error-unicodedecodeerror-utf-8-codec-cant-decode-byte-0
I would like to open csv data but keep getting the same error, what can I do to succesfully open csv files using Python? #Reading in the files import pandas as pd data1 = pd.read_csv("data1.csv") UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
byte 0xff in position 0 means that your .csv is probably encoded in utf-16. Try this:

data1 = pd.read_csv("data1.csv", encoding="utf-16")
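If you are not sure the file really is UTF-16, a small fallback loop (my addition, not part of the original answer) can try the most common encodings in turn:

import pandas as pd

def read_csv_any(path):
    # Try the usual suspects in order; latin-1 acts as a last-resort catch-all.
    for enc in ("utf-8", "utf-16", "cp1252", "latin-1"):
        try:
            return pd.read_csv(path, encoding=enc)
        except (UnicodeDecodeError, UnicodeError):
            continue
    raise ValueError(f"could not decode {path} with the encodings tried")

data1 = read_csv_any("data1.csv")
print(data1.head())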
10
17
73,951,746
2022-10-4
https://stackoverflow.com/questions/73951746/how-to-use-contextlib-contextmanager-with-a-classmethod
See the below Python 3.10 snippet: import contextlib class Foo: @contextlib.contextmanager @classmethod def state(cls): try: yield finally: pass with Foo.state(): pass It throws a TypeError: Traceback (most recent call last): File "/path/to/code/play/quick_play.py", line 12, in <module> with Foo.state(): File "/path/to/.pyenv/versions/3.10.5/lib/python3.10/contextlib.py", line 281, in helper return _GeneratorContextManager(func, args, kwds) File "/path/to/.pyenv/versions/3.10.5/lib/python3.10/contextlib.py", line 103, in __init__ self.gen = func(*args, **kwds) TypeError: 'classmethod' object is not callable Is it possible to decorate a classmethod with contextlib.contextmanager? And if yes, how can it be done?
This should work: import contextlib class Foo: @classmethod @contextlib.contextmanager def state(cls): try: print("A") yield print("C") finally: pass with Foo.state(): print("B") This prints A B C.
8
11
73,906,943
2022-9-30
https://stackoverflow.com/questions/73906943/dash-app-cannot-find-pages-folder-when-deploying-on-gcp-using-gunicorn
I am trying to deploy my dash app which uses dash_extensions, Dash_proxy and has multiple pages in the pages folder on GCP cloud run using gunicorn but the app cannot find the pages folder. It works perfectly fine when I use the development server but breaks in the production server because it cannot find the folder path. The app (following code is inside the app.py file): app = DashProxy(use_pages=True, pages_folder=pages_folder, external_stylesheets=[dbc.themes.SIMPLEX]) The app.py file and the pages folder are in the same directory I have tried tried to following methods to get the folder path: pages_folder="pages" pages_folder=os.path.join(os.path.dirname(__file__), "pages") for p in Path('.').rglob('*'): if str(p).endswith('pages'): pages_folder = str(p) break None of the above three work in when deploying on gcp using gunicorn through docker: Dockerfile command: CMD ["gunicorn" , "-b", "0.0.0.0:8000", "app:server"] But if I use dev server through docker like following code it works: CMD python app.py Does anyone have any ideas of how to make it work with gunicorn? Thanks for the help! -Rexon
Yes I did. Just had to specify the root folder. This is what I did and it seems to work for me. pages_folder=os.path.join(os.path.dirname(__name__), "pages") app = DashProxy(__name__,use_pages=True, pages_folder=pages_folder, external_stylesheets=[dbc.themes.SIMPLEX]) server=app.server
5
1
73,894,238
2022-9-29
https://stackoverflow.com/questions/73894238/gevent-21-12-0-installation-failing-in-mac-os-monterey
I am trying to install gevent 21.12.0 on Mac OS Monterey (version 12.6) with python 3.9.6 and pip 21.3.1. But it is failing with the below error. Any suggestion? (venv) debrajmanna@debrajmanna-DX6QR261G3 qa % pip install gevent Collecting gevent Using cached gevent-21.12.0.tar.gz (6.2 MB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Collecting greenlet<2.0,>=1.1.0 Using cached greenlet-1.1.3-cp39-cp39-macosx_10_9_universal2.whl Collecting zope.event Using cached zope.event-4.5.0-py2.py3-none-any.whl (6.8 kB) Collecting zope.interface Using cached zope.interface-5.4.0-cp39-cp39-macosx_10_9_universal2.whl Requirement already satisfied: setuptools in /Users/debrajmanna/code/python/github/spotnana/venv/lib/python3.9/site-packages (from gevent) (60.2.0) Building wheels for collected packages: gevent Building wheel for gevent (pyproject.toml) ... error ERROR: Command errored out with exit status 1: command: /Users/debrajmanna/code/python/github/spotnana/venv/bin/python /Users/debrajmanna/code/python/github/spotnana/venv/lib/python3.9/site-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /var/folders/ls/b6mf3_jd17916k8bs8jwy2g80000gn/T/tmpi2i_lqc2 cwd: /private/var/folders/ls/b6mf3_jd17916k8bs8jwy2g80000gn/T/pip-install-qprhzmpd/gevent_54aaef476d2d411ba9ad080d0291a370 Complete output (46 lines): running bdist_wheel running build running build_py running build_ext generating cffi module 'build/temp.macosx-10.9-universal2-cpython-39/gevent.libuv._corecffi.c' Running '(cd "/private/var/folders/ls/b6mf3_jd17916k8bs8jwy2g80000gn/T/pip-install-qprhzmpd/gevent_54aaef476d2d411ba9ad080d0291a370/deps/libev" && sh ./configure -C > configure-output.txt )' in /private/var/folders/ls/b6mf3_jd17916k8bs8jwy2g80000gn/T/pip-install-qprhzmpd/gevent_54aaef476d2d411ba9ad080d0291a370 generating cffi module 'build/temp.macosx-10.9-universal2-cpython-39/gevent.libev._corecffi.c' Not configuring libev, 'config.h' already exists Not configuring libev, 'config.h' already exists building 'gevent.libev.corecext' extension Embedding c-ares <cffi.setuptools_ext._add_c_module.<locals>.build_ext_make_mod object at 0x104f40bb0> <_setuputils.Extension('gevent.resolver.cares') at 0x1048f4640> Inserted build/temp.macosx-10.9-universal2-cpython-39/c-ares/include in include dirs ['build/temp.macosx-10.9-universal2-cpython-39/c-ares/include', '/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/include/python3.9', '/private/var/folders/ls/b6mf3_jd17916k8bs8jwy2g80000gn/T/pip-install-qprhzmpd/gevent_54aaef476d2d411ba9ad080d0291a370/deps', '/private/var/folders/ls/b6mf3_jd17916k8bs8jwy2g80000gn/T/pip-install-qprhzmpd/gevent_54aaef476d2d411ba9ad080d0291a370/deps/c-ares/include', '/private/var/folders/ls/b6mf3_jd17916k8bs8jwy2g80000gn/T/pip-install-qprhzmpd/gevent_54aaef476d2d411ba9ad080d0291a370/deps/c-ares/src/lib', 'src/gevent', 'src/gevent/libev', 'src/gevent/resolver', '.'] Running '(cd "/private/var/folders/ls/b6mf3_jd17916k8bs8jwy2g80000gn/T/pip-install-qprhzmpd/gevent_54aaef476d2d411ba9ad080d0291a370/deps/c-ares" && if [ -r include/ares_build.h ]; then cp include/ares_build.h include/ares_build.h.orig; fi && sh ./configure --disable-dependency-tracking -C CFLAGS="-Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -iwithsysroot/System/Library/Frameworks/System.framework/PrivateHeaders 
-iwithsysroot/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/Headers -arch arm64 -arch x86_64 -Werror=implicit-function-declaration" && cp src/lib/ares_config.h include/ares_build.h "$OLDPWD" && cat include/ares_build.h && if [ -r include/ares_build.h.orig ]; then mv include/ares_build.h.orig include/ares_build.h; fi) > configure-output.txt' in /private/var/folders/ls/b6mf3_jd17916k8bs8jwy2g80000gn/T/pip-install-qprhzmpd/gevent_54aaef476d2d411ba9ad080d0291a370/build/temp.macosx-10.9-universal2-cpython-39/c-ares/include configure: WARNING: Continuing even with errors mentioned immediately above this line. rm: conftest.dSYM: is a directory rm: conftest.dSYM: is a directory configure: WARNING: Continuing even with errors mentioned immediately above this line. building 'gevent.resolver.cares' extension building 'gevent._gevent_c_greenlet_primitives' extension building 'gevent._gevent_c_hub_primitives' extension building 'gevent._gevent_c_hub_local' extension building 'gevent._gevent_c_waiter' extension building 'gevent._gevent_cgreenlet' extension building 'gevent._gevent_c_tracer' extension building 'gevent._gevent_c_abstract_linkable' extension building 'gevent._gevent_c_semaphore' extension building 'gevent._gevent_clocal' extension building 'gevent._gevent_c_ident' extension building 'gevent._gevent_c_imap' extension building 'gevent._gevent_cevent' extension building 'gevent._gevent_cqueue' extension src/gevent/queue.c:7071:12: warning: unused function '__pyx_pw_6gevent_14_gevent_cqueue_5Queue_25__nonzero__' [-Wunused-function] static int __pyx_pw_6gevent_14_gevent_cqueue_5Queue_25__nonzero__(PyObject *__pyx_v_self) { ^ 1 warning generated. src/gevent/queue.c:7071:12: warning: unused function '__pyx_pw_6gevent_14_gevent_cqueue_5Queue_25__nonzero__' [-Wunused-function] static int __pyx_pw_6gevent_14_gevent_cqueue_5Queue_25__nonzero__(PyObject *__pyx_v_self) { ^ 1 warning generated. building 'gevent.libev._corecffi' extension building 'gevent.libuv._corecffi' extension build/temp.macosx-10.9-universal2-cpython-39/gevent.libuv._corecffi.c:50:14: fatal error: 'pyconfig.h' file not found # include <pyconfig.h> ^~~~~~~~~~~~ 1 error generated. error: command '/usr/bin/clang' failed with exit code 1 ---------------------------------------- ERROR: Failed building wheel for gevent Failed to build gevent ERROR: Could not build wheels for gevent, which is required to install pyproject.toml-based projects
Looked all over trying to figure out a solution to this problem until I finally stumbled on this post. I think the issue is specific to the virtual environment. I had the project open with it's own venv in PyCharm, and it seems that the python distribution headers were not findable. To reiterate the solution linked: Find where the Python.h file is defined. I was able to find it using find /usr/local -name Python.h Copy the path to the directory Python.h is defined in Set the C_INCLUDE_PATH environment variable accordingly, for me: export C_INCLUDE_PATH="/usr/local/munki/Python.framework/Versions/3.9/include/python3.9" After this, I was able to run pip3 install gevent with no issues.
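Instead of hunting for Python.h with find, the interpreter can report its own header directory; this little helper is my addition (an assumption about your setup, not part of the original workaround) and only prints the value to export:

# Print the header directory of the interpreter inside the virtual environment,
# which is what C_INCLUDE_PATH should point at when building gevent's C extensions.
import os
import sysconfig

include_dir = sysconfig.get_paths()["include"]
print("Python.h should live in:", include_dir)
print("exists:", os.path.exists(os.path.join(include_dir, "Python.h")))
print(f'export C_INCLUDE_PATH="{include_dir}"')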
6
0
73,947,300
2022-10-4
https://stackoverflow.com/questions/73947300/pycharm-doesnt-recognize-file-or-folder-with-remote-interpreter
In a file whose only content is def test_sanity(): pass I am trying to run the file, named test_something.py. The folder structure is uv_metadata |---uv_metadata |------tests |----------test_something.py Getting the error ssh://noam.s@ML:2202/home/noam.s/src/uv_metadata/venv/bin/python -u /home/noam.s/.pycharm_helpers/pycharm/_jb_pytest_runner.py --target test_something.py::test_sanity Testing started at 14:34 ... sh: 1: cd: can't cd to C:/Users/noam.s/src/uv_metadata/uv_metadata/tests Launching pytest with arguments test_something.py::test_sanity --no-header --no-summary -q in /home/noam.s ============================= test session starts ============================== collected 0 items ============================ no tests ran in 0.00s ============================= ERROR: file or directory not found: test_something.py::test_sanity Process finished with exit code 4 Empty suite I notice the line sh: 1: cd: can't cd to C:/Users/noam.s/src/uv_metadata/uv_metadata/tests, which doesn't make sense. Here is how my remote interpreter is configured: This just stopped working, I don't know what has changed. How to make Pycharm recognize the test folder again?
PyCharm 2021.2.3 has a buggy deployment configuration, which can cause clashes and sometimes not work properly when several mappings are configured separately on the same interpreter, even from different projects. Most likely, I was using the same interpreter with a different mapping from some other PyCharm instance. I had to go to Build, Execution, Deployment -> Deployment and remove everything from the list. Following is the state after everything was cleaned up and I added a new interpreter. I also cleaned the interpreter list and it went back to working. A very inconvenient way to avoid this, but a safe one: my method of working is to have only a single path mapping per interpreter. If I need the same interpreter with different mappings, I create a new docker container, to make PyCharm think it is actually a different interpreter. If I need the same mapping with a different interpreter, there isn't a problem. For accessing the interpreter that runs inside a remote docker, I am tunneling PORT to 22 on the docker, making an ssh server run on the docker on startup, and mapping ~ to ~ in the docker, so the actual same venv is reflected in all dockers.
3
5
73,925,578
2022-10-2
https://stackoverflow.com/questions/73925578/pyautogui-was-unable-to-import-pyscreeze
my code: import pyautogui from time import sleep x, y = pyautogui.locateOnScreen("slorixsh.png", confidence=0.5) error: Traceback (most recent call last): File "<pyshell#2>", line 1, in <module> x, y = pyautogui.locateOnScreen("slorixsh.png", confidence=0.5) File "C:\Users\mrnug\AppData\Local\Programs\Python\Python310\lib\site-packages\pyautogui\__init__.py", line 231, in _couldNotImportPyScreeze raise PyAutoGUIException( pyautogui.PyAutoGUIException: PyAutoGUI was unable to import pyscreeze. (This is likely because you're running a version of Python that Pillow (which pyscreeze depends on) doesn't support currently.) Please install this module to enable the function you tried to call. line 231 in that error: raise PyAutoGUIException( "PyAutoGUI was unable to import pyscreeze. (This is likely because you're running a version of Python that Pillow (which pyscreeze depends on) doesn't support currently.) Please install this module to enable the function you tried to call." ) somehow it doesnt work i just reinstalled and updated that modules: pyscreeze, opencv, pyautogui, pillow
Yeah, you can fix it like this:

pip install pyscreeze

import pyscreeze
import time

time.sleep(5)
# locateCenterOnScreen returns the center Point(x, y) of the match;
# locateOnScreen would return a 4-value Box (left, top, width, height) instead,
# which cannot be unpacked into just x and y.
x, y = pyscreeze.locateCenterOnScreen("slorixsh.png", confidence=0.5)
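A defensive variant (my addition, not from the original answer): depending on the pyautogui/pyscreeze version, a miss either returns None or raises ImageNotFoundException, so guard before unpacking or clicking:

import pyautogui

try:
    location = pyautogui.locateCenterOnScreen("slorixsh.png", confidence=0.5)
except pyautogui.ImageNotFoundException:   # newer versions raise on a miss
    location = None

if location is None:                        # older versions return None instead
    print("image not found on screen")
else:
    x, y = location
    pyautogui.click(x, y)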
4
8
73,890,417
2022-9-29
https://stackoverflow.com/questions/73890417/working-on-new-version-of-psycopg-3-and-while-installing-psycopgc-it-wont-inst
I am working on psycopg 3 and not 2. Here is my code that I am trying to work on: from fastapi import FastAPI, Response, status, HTTPException from fastapi.params import Body from pydantic import BaseModel from typing import Optional from random import randrange import psycopg import psycopg2 from psycopg2.extras import RealDictCursor import time app = FastAPI() while True: try: conn = psycopg.connect(host = 'localhost', database = 'fastapi', user = 'postgres', password = 'islamabad', cursor_factory=RealDictCursor) cursor = conn.cursor() print("Database successfully connected!") break except Exception as error: print("Connection Failed") print("Error: ", error) time.sleep(2) But I am getting the following error: ImportError: no pq wrapper available. Attempts made: - couldn't import psycopg 'c' implementation: No module named 'psycopg_c' - couldn't import psycopg 'binary' implementation: No module named 'psycopg_binary' - couldn't import psycopg 'python' implementation: libpq library not found So I read somewhere to install psycopg[c] and psycopg[binary] Now when I am installing psycopg[c] it is giving the following error: Using cached psycopg-c-3.1.2.tar.gz (616 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... error error: subprocess-exited-with-error Γ— Preparing metadata (pyproject.toml) did not run successfully. β”‚ exit code: 1 ╰─> [6 lines of output] running dist_info writing C:\Users\themr\AppData\Local\Temp\pip-modern-metadata-cr8kpz5q\psycopg_c.egg-info\PKG-INFO writing dependency_links to C:\Users\themr\AppData\Local\Temp\pip-modern-metadata-cr8kpz5q\psycopg_c.egg-info\dependency_links.txt writing top-level names to C:\Users\themr\AppData\Local\Temp\pip-modern-metadata-cr8kpz5q\psycopg_c.egg-info\top_level.txt couldn't run 'pg_config' --includedir: [WinError 2] The system cannot find the file specified error: [WinError 2] The system cannot find the file specified [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed Γ— Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. I can instead work on psycopg2 which is working but I want to shift to the new version. So, help out please!
There are 3 ways of installing psycopg 3:

1) Binary installation
This method installs a self-contained package with all the libraries required to connect Python to your Postgres database. Install it by running:
pip install "psycopg[binary]"

2) Local installation
To use psycopg for a production site, this is the preferred way of installing the psycopg adapter. Install it by running:
pip install "psycopg[c]"

3) Pure python installation
If you want to use psycopg for a test environment and for debugging purposes, use this method of installation. Install it by running:
pip install psycopg
To use the pure python installation, your system must contain the libpq library - the one the PostgreSQL command line client uses to connect to the database. On Debian/Ubuntu it can be installed by running:
sudo apt install libpq5

Based on your code it seems that you are working on the Python API development course. It would be best to use the 3rd method of installation as I assume you are using it for learning purposes. Therefore your code should look something like this:

from typing import Optional
from fastapi import Body, FastAPI, Response, status, HTTPException
from pydantic import BaseModel
from random import randrange
import time

import psycopg
from psycopg.rows import dict_row  # psycopg 3 replacement for psycopg2's RealDictCursor

app = FastAPI()

while True:
    try:
        conn = psycopg.connect(dbname='fastapi', user='postgres',
                               password='islamabad', row_factory=dict_row)
        cursor = conn.cursor()
        print("Database connection successful!")
        break
    except Exception as error:
        print("Connection failed")
        print("Error: ", error)
        time.sleep(2)

For more information kindly check the docs at: https://www.psycopg.org/psycopg3/docs/basic/install.html
4
4
73,900,006
2022-9-29
https://stackoverflow.com/questions/73900006/best-directory-structure-for-a-repository-with-several-python-entry-points-and-i
I'm working on a project with the following directory structure: project/ package1/ module1.py module2.py package2/ module1.py module2.py main1.py main2.py main3.py ... mainN.py where each mainX.py file is an executable Python script that imports modules from either package1, package2, or both. package1 and package2 are subpackages meant to be distributed along with the rest of the project (not independently). The standard thing to do is to put your entry point in the top-level directory. I have N entry points, so I put them all in the top-level directory. The trouble is that N keeps growing, so my top-level directory is getting flooded with entry points. I could move the mainX.py files to a sub-directory (say, project/run), but then all of the package1 and package2 imports would break. I could extract package1 and package2 to a separate repository and just expect it to be installed on the system (i.e., in the system / user python path), but that would complicate installation. I could modify the Python path as a precondition or during runtime, but that's messy and could introduce unintended consequences. I could write a single main.py entry point script with argument subparsers respectively pointing to run/main1.py, ..., run/mainN.py, but that would introduce coupling between main.py and each of the run/mainX.py files. What's the standard, "Pythonic" solution to this issue?
The standard solution is to use console_scripts packaging for your entry points - read about the entry-points specification here. This feature can be used to generate script wrappers like main1.py ... mainN.py at installation time. Since these script wrappers are generated code, they do not exist in the project source directory at all, so that problem of clutter ("top-level directory is getting flooded with entry points") goes away. The actual code for the scripts will be defined somewhere within the package, and the places where the main*.py scripts will actually hook into code within the package is defined in the package metadata. You can hook a console script entry-point up to any callable within the package, provided it can be called without arguments (optional arguments, i.e. args with default values, are fine). project β”œβ”€β”€ package1 β”‚ β”œβ”€β”€ __init__.py β”‚ β”œβ”€β”€ module1.py β”‚ └── module2.py β”œβ”€β”€ package2 β”‚ β”œβ”€β”€ __init__.py β”‚ β”œβ”€β”€ module1.py β”‚ └── module2.py β”œβ”€β”€ pyproject.toml └── scripts └── __init__.py This is the new directory structure. Note the addition of __init__.py files, which indicates that package1 and package2 are packages and not just subdirectories. For the new files added, here's the scripts/__init__.py: # these imports should work # from package1 import ... # from package2.module1 import ... def myscript1(): # put whatever main1.py did here print("hello") def myscript2(): # put whatever main2.py did here print("world") These don't need to be all in the same file, and you can put them wherever you want within the package actually, as long as you update the hooks in the [project.scripts] section of the packaging definition. And here's that packaging definition: [build-system] requires = ["setuptools"] build-backend = "setuptools.build_meta" [project] name = "mypackage" version = "0.0.1" [project.scripts] "main1.py" = "scripts:myscript1" "main2.py" = "scripts:myscript2" [tool.setuptools] packages = ["package1", "package2", "scripts"] Now when the package is installed, the console scripts are generated: $ pip install --editable . ... Successfully installed mypackage-0.0.1 $ main1.py hello $ main2.py world As mentioned, those executables do not live in the project directory, but within the site's scripts directory, which will be present on $PATH. The scripts are generated by pip, using vendored code from distlib's ScriptMaker. If you peek at the generated script files you'll see that they're simple wrappers, they'll just import the callable from within the package and then call it. Any argument parsing, logging configuration, etc must all still be handled within the package code. $ ls mypackage.egg-info package1 package2 pyproject.toml scripts $ which main2.py /tmp/project/.venv/bin/main2.py The exact location of the scripts directory depends on your platform, but it can be checked like this in Python: >>> import sysconfig >>> sysconfig.get_path("scripts") '/tmp/project/.venv/bin'
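As a quick sanity check after pip install --editable . (a side note, not part of the packaging setup itself), the registered console scripts can also be listed from Python via importlib.metadata:

# Lists the console-script entry points the installed distribution registered,
# without having to inspect the generated wrapper files on $PATH.
from importlib.metadata import entry_points

eps = entry_points(group="console_scripts")      # Python 3.10+ selection API
for ep in eps:
    if ep.value.startswith("scripts:"):          # hooks defined in our scripts module
        print(f"{ep.name} -> {ep.value}")
# expected:
# main1.py -> scripts:myscript1
# main2.py -> scripts:myscript2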
6
4
73,917,887
2022-10-1
https://stackoverflow.com/questions/73917887/firebase-credentials-as-python-environment-variables-could-not-deserialize-key
I'm developing a Python web app with a Firestore realtime database using the firebase_admin library. The Firestore key comes in form of a .json file containing 10 variables. However, I want to store some of these variables as environment variables so they are not visible publicly. So, I don't use a Firebase SDK .json file, but I create my own dictionary with the elements of this file. The dictionary looks like this and everything has been directly copied from the .json file: my_credentials = { "type": "service_account", "project_id": "bookclub-b2db5", "private_key_id": os.environ.get("PRIVATE_KEY_ID"), "private_key": os.environ.get("PRIVATE_KEY"), "client_email": os.environ.get("CLIENT_EMAIL"), "client_id": os.environ.get("CLIENT_ID"), "auth_uri": "https://accounts.google.com/o/oauth2/auth", "token_uri": "https://oauth2.googleapis.com/token", "auth_provider_x509_cert_url": os.environ.get("AUTH_PROVIDER_X509_CERT_URL"), "client_x509_cert_url": os.environ.get("AUTH_PROVIDER_X509_CERT_URL") } I have set the private key as a PRIVATE_KEY environment variable. The private key looks roughly like this (the characters here are made up): "-----BEGIN PRIVATE KEY-----\nEW8nYP9T840Sb8tQMi\nhZ(...MORE CHARACTERS HERE...)EW8nYP9T840Sb8tQMi/EW8nYP9T840Sb8tQMi\EW8nYP9T840Sb8tQMi/EW8nYP9T840Sb8tQMi=\n-----END PRIVATE KEY-----\n" Then, I try to create a Firebase Certificate from these credentials and initialize the application cred = firebase_admin.credentials.Certificate(my_credentials) firebase_admin.initialize_app(cred, {'databaseURL': 'https://myapp.firebasedatabase.app/'}) However, it turns out that the PRIVATE_KEY environment variable is not readable (cannot be deserialized), and it throws an error: Traceback (most recent call last): File "C:\Users\jarem\Desktop\data_science\python-working-directory\_MyApps\BookClub2.0\venv\lib\site-packages\firebase_admin\credentials.py", line 96, in __init__ self._g_credential = service_account.Credentials.from_service_account_info( File "C:\Users\jarem\AppData\Local\Programs\Python\Python39\lib\site-packages\google\oauth2\service_account.py", line 221, in from_service_account_info signer = _service_account_info.from_dict( File "C:\Users\jarem\AppData\Local\Programs\Python\Python39\lib\site-packages\google\auth\_service_account_info.py", line 58, in from_dict signer = crypt.RSASigner.from_service_account_info(data) File "C:\Users\jarem\AppData\Local\Programs\Python\Python39\lib\site-packages\google\auth\crypt\base.py", line 113, in from_service_account_info return cls.from_string( File "C:\Users\jarem\AppData\Local\Programs\Python\Python39\lib\site-packages\google\auth\crypt\_cryptography_rsa.py", line 133, in from_string private_key = serialization.load_pem_private_key( File "C:\Users\jarem\Desktop\data_science\python-working-directory\_MyApps\BookClub2.0\venv\lib\site-packages\cryptography\hazmat\primitives\serialization\base.py", line 22, in load_pem_private_key return ossl.load_pem_private_key(data, password) File "C:\Users\jarem\Desktop\data_science\python-working-directory\_MyApps\BookClub2.0\venv\lib\site-packages\cryptography\hazmat\backends\openssl\backend.py", line 921, in load_pem_private_key return self._load_key( File "C:\Users\jarem\Desktop\data_science\python-working-directory\_MyApps\BookClub2.0\venv\lib\site-packages\cryptography\hazmat\backends\openssl\backend.py", line 1189, in _load_key self._handle_key_loading_error() File 
"C:\Users\jarem\Desktop\data_science\python-working-directory\_MyApps\BookClub2.0\venv\lib\site-packages\cryptography\hazmat\backends\openssl\backend.py", line 1248, in _handle_key_loading_error raise ValueError( ValueError: ('Could not deserialize key data. The data may be in an incorrect format, it may be encrypted with an unsupported algorithm, or it may be an unsupported key type (e.g. EC curves with explicit parameters).', [_OpenSSLErrorWithText(code=503841036, lib=60, reason=524556, reason_text=b'error:1E08010C:DECODER routines::unsupported')]) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\jarem\Desktop\data_science\python-working-directory\_MyApps\BookClub2.0\main.py", line 71, in <module> cred = credentials.Certificate(my_credentials) File "C:\Users\jarem\Desktop\data_science\python-working-directory\_MyApps\BookClub2.0\venv\lib\site-packages\firebase_admin\credentials.py", line 99, in __init__ raise ValueError('Failed to initialize a certificate credential. ' ValueError: Failed to initialize a certificate credential. Caused by: "('Could not deserialize key data. The data may be in an incorrect format, it may be encrypted with an unsupported algorithm, or it may be an unsupported key type (e.g. EC curves with explicit parameters).', [_OpenSSLErrorWithText(code=503841036, lib=60, reason=524556, reason_text=b'error:1E08010C:DECODER routines::unsupported')])" All other environment variables seem to be readable. It might have something to do with the formatting of environment variables, but I don't know what exactly. Nothing works and I don't know the cause for this error. I work on Windows 11 and use cryptography version 35.0.0. How do I fix this in order to extract the private key? UPDATE: What I have found is that when I print the private key directly, it is printed with new lines, e.g.: -----BEGIN PRIVATE KEY----- ZGVNvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDtuzZSrBuf4Lv3 DJTPJDj34jfknDJOJjfpitjrpZGVNvgIBADANBgkqhkiG9w0BAQZGVNvgI+DIome ZGVNvgIBADANBgkqhkiG9w0BAQVdFkqJd5j53tZFgX5VLOf7g23/Zvgq+BFIHe34 (...MORE LINES HERE...) -----END PRIVATE KEY----- However, when I print the private key obtained by os.environ.get('PRIVATE KEY'), no newlines appear: -----BEGIN PRIVATE KEY-----ZGVNvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDtuzZSrBuf4Lv3\nDJTPJDj34jfknDJOJjfpitjrpZGVNvgIBADANBgkqhkiG9w0BAQZGVNvgI+DIome\nZGVNvgIBADANBgkqhkiG9w0BAQVdFkqJd5j53tZFgX5VLOf7g23/Zvgq+BFIHe34\n(...MORE CHARACTERS HERE...)-----END PRIVATE KEY----- Seems there's a problem with reading the "\n" by the environment variable system. However, I still don't know how to overcome this bug.
I HAVE SOLVED THE PROBLEM: The environment variable stores the key with literal backslash-n sequences (the two characters \ and n) instead of real newlines, so the fix is to replace each raw r'\n' with an actual newline character '\n'. The solution looks as follows:

my_credentials = {
    "type": "service_account",
    "project_id": "bookclub-b2db5",
    "private_key_id": os.environ.get("PRIVATE_KEY_ID"),
    "private_key": os.environ.get("PRIVATE_KEY").replace(r'\n', '\n'),  # CHANGE HERE
    "client_email": os.environ.get("CLIENT_EMAIL"),
    "client_id": os.environ.get("CLIENT_ID"),
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://oauth2.googleapis.com/token",
    "auth_provider_x509_cert_url": os.environ.get("AUTH_PROVIDER_X509_CERT_URL"),
    "client_x509_cert_url": os.environ.get("AUTH_PROVIDER_X509_CERT_URL")
}

Now, everything works.
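A tiny self-contained demonstration of why the replace is needed (the key material below is a placeholder, not a real key):

import os

# Simulate what ends up in the environment: a single line with literal "\n" pairs.
os.environ["PRIVATE_KEY"] = r"-----BEGIN PRIVATE KEY-----\nABC\n-----END PRIVATE KEY-----\n"

raw = os.environ["PRIVATE_KEY"]
print(repr(raw))                        # backslash + n, one long line -> OpenSSL cannot parse it
print(repr(raw.replace(r"\n", "\n")))   # real newlines -> valid multi-line PEM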
7
14
73,913,522
2022-9-30
https://stackoverflow.com/questions/73913522/why-dont-the-images-align-when-concatenating-two-data-sets-in-pytorch-using-tor
I wanted to concatenate multiple data sets where the labels are disjoint (so don't share labels). I did: class ConcatDataset(Dataset): """ ref: https://discuss.pytorch.org/t/concat-image-datasets-with-different-size-and-number-of-channels/36362/12 """ def __init__(self, datasets: list[Dataset]): """ """ # I think concat is better than passing data to a self.data = x obj since concat likely using the getitem method of the passed dataset and thus if the passed dataset doesnt put all the data in memory concat won't either self.concat_datasets = torch.utils.data.ConcatDataset(datasets) # maps a class label to a list of sample indices with that label. self.labels_to_indices = defaultdict(list) # maps a sample index to its corresponding class label. self.indices_to_labels = defaultdict(None) # - do the relabeling offset: int = 0 new_idx: int = 0 for dataset_idx, dataset in enumerate(datasets): assert len(dataset) == len(self.concat_datasets.datasets[dataset_idx]) assert dataset == self.concat_datasets.datasets[dataset_idx] for x, y in dataset: y = int(y) _x, _y = self.concat_datasets[new_idx] _y = int(_y) # assert y == _y assert torch.equal(x, _x) new_label = y + offset self.indices_to_labels[new_idx] = new_label self.labels_to_indices[new_label] = new_idx num_labels_for_current_dataset: int = max([y for _, y in dataset]) offset += num_labels_for_current_dataset new_idx += 1 assert len(self.indices_to_labels.keys()) == len(self.concat_datasets) # contains the list of labels from 0 - total num labels after concat self.labels = range(offset) self.target_transform = lambda data: torch.tensor(data, dtype=torch.int) def __len__(self): return len(self.concat_datasets) def __getitem__(self, idx: int) -> tuple[Tensor, Tensor]: x = self.concat_datasets[idx] y = self.indices_to_labels[idx] if self.target_transform is not None: y = self.target_transform(y) return x, y but it doesn't even work to align the x images (so never mind if my relabling works!). Why? def check_xs_align_cifar100(): from pathlib import Path root = Path("~/data/").expanduser() # root = Path(".").expanduser() train = torchvision.datasets.CIFAR100(root=root, train=True, download=True) test = torchvision.datasets.CIFAR100(root=root, train=False, download=True) concat = ConcatDataset([train, test]) print(f'{len(concat)=}') print(f'{len(concat.labels)=}') error Files already downloaded and verified Files already downloaded and verified Traceback (most recent call last): File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd.py", line 1491, in _exec pydev_imports.execfile(file, globals, locals) # execute the script File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "/Users/brandomiranda/ultimate-utils/ultimate-utils-proj-src/uutils/torch_uu/dataset/concate_dataset.py", line 405, in <module> check_xs_align() File "/Users/brandomiranda/ultimate-utils/ultimate-utils-proj-src/uutils/torch_uu/dataset/concate_dataset.py", line 391, in check_xs_align concat = ConcatDataset([train, test]) File "/Users/brandomiranda/ultimate-utils/ultimate-utils-proj-src/uutils/torch_uu/dataset/concate_dataset.py", line 71, in __init__ assert torch.equal(x, _x) TypeError: equal(): argument 'input' (position 1) must be Tensor, not Image python-BaseException Bonus: let me know if relabeling is correct please. 
related discussion: https://discuss.pytorch.org/t/concat-image-datasets-with-different-size-and-number-of-channels/36362/12 Edit 1: PIL comparison fails I did a PIL image comparison according to Compare images Python PIL but it failed: Traceback (most recent call last): File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd.py", line 1491, in _exec pydev_imports.execfile(file, globals, locals) # execute the script File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "/Users/brandomiranda/ultimate-utils/ultimate-utils-proj-src/uutils/torch_uu/dataset/concate_dataset.py", line 419, in <module> check_xs_align_cifar100() File "/Users/brandomiranda/ultimate-utils/ultimate-utils-proj-src/uutils/torch_uu/dataset/concate_dataset.py", line 405, in check_xs_align_cifar100 concat = ConcatDataset([train, test]) File "/Users/brandomiranda/ultimate-utils/ultimate-utils-proj-src/uutils/torch_uu/dataset/concate_dataset.py", line 78, in __init__ assert diff.getbbox(), f'comparison of imgs failed: {diff.getbbox()=}' AssertionError: comparison of imgs failed: diff.getbbox()=None python-BaseException diff PyDev console: starting. <PIL.Image.Image image mode=RGB size=32x32 at 0x7FBE897A21C0> code comparison: diff = ImageChops.difference(x, _x) # https://stackoverflow.com/questions/35176639/compare-images-python-pil assert diff.getbbox(), f'comparison of imgs failed: {diff.getbbox()=}' this also failed: assert list(x.getdata()) == list(_x.getdata()), f'\n{list(x.getdata())=}, \n{list(_x.getdata())=}' AssertionError: ...long msg... assert statement was: assert list(x.getdata()) == list(_x.getdata()), f'\n{list(x.getdata())=}, \n{list(_x.getdata())=}' Edit 2: Tensor comparison Fails I tried to convert images to tensors but it still fails: AssertionError: Error for some reason, got: data_idx=1, x.norm()=tensor(45.9401), _x.norm()=tensor(33.9407), x=tensor([[[1.0000, 0.9922, 0.9922, ..., 0.9922, 0.9922, 1.0000], code: class ConcatDataset(Dataset): """ ref: - https://discuss.pytorch.org/t/concat-image-datasets-with-different-size-and-number-of-channels/36362/12 - https://stackoverflow.com/questions/73913522/why-dont-the-images-align-when-concatenating-two-data-sets-in-pytorch-using-tor """ def __init__(self, datasets: list[Dataset]): """ """ # I think concat is better than passing data to a self.data = x obj since concat likely using the getitem method of the passed dataset and thus if the passed dataset doesnt put all the data in memory concat won't either self.concat_datasets = torch.utils.data.ConcatDataset(datasets) # maps a class label to a list of sample indices with that label. self.labels_to_indices = defaultdict(list) # maps a sample index to its corresponding class label. 
self.indices_to_labels = defaultdict(None) # - do the relabeling img2tensor: Callable = torchvision.transforms.ToTensor() offset: int = 0 new_idx: int = 0 for dataset_idx, dataset in enumerate(datasets): assert len(dataset) == len(self.concat_datasets.datasets[dataset_idx]) assert dataset == self.concat_datasets.datasets[dataset_idx] for data_idx, (x, y) in enumerate(dataset): y = int(y) # - get data point from concataned data set (to compare with the data point from the data set list) _x, _y = self.concat_datasets[new_idx] _y = int(_y) # - sanity check concatanted data set aligns with the list of datasets # assert y == _y # from PIL import ImageChops # diff = ImageChops.difference(x, _x) # https://stackoverflow.com/questions/35176639/compare-images-python-pil # assert diff.getbbox(), f'comparison of imgs failed: {diff.getbbox()=}' # assert list(x.getdata()) == list(_x.getdata()), f'\n{list(x.getdata())=}, \n{list(_x.getdata())=}' # tensor comparison x, _x = img2tensor(x), img2tensor(_x) print(f'{data_idx=}, {x.norm()=}, {_x.norm()=}') assert torch.equal(x, _x), f'Error for some reason, got: {data_idx=}, {x.norm()=}, {_x.norm()=}, {x=}, {_x=}' # - relabling new_label = y + offset self.indices_to_labels[new_idx] = new_label self.labels_to_indices[new_label] = new_idx num_labels_for_current_dataset: int = max([y for _, y in dataset]) offset += num_labels_for_current_dataset new_idx += 1 assert len(self.indices_to_labels.keys()) == len(self.concat_datasets) # contains the list of labels from 0 - total num labels after concat self.labels = range(offset) self.target_transform = lambda data: torch.tensor(data, dtype=torch.int) def __len__(self): return len(self.concat_datasets) def __getitem__(self, idx: int) -> tuple[Tensor, Tensor]: x = self.concat_datasets[idx] y = self.indices_to_labels[idx] if self.target_transform is not None: y = self.target_transform(y) return x, y Edit 3, clarification request: My vision of the data set I want is a concatenation of a data sets in question -- where relabeling starting the first label commences. The curicial thing (according to me -- might be wrong on this) is that once concatenated we should verify in some way that the data set indeed behaves the way we want it. One check I thought is to index the data point from the list of data sets and also from the concatenation object of the data set. If the data set was correctly conatenated I'd expect the images to be correspond according to this indexing. So if the first image in the first data set had some unique identifier (e.g. the pixels) then the concatenation of the data sets should have the first image be the same as the first image in the list of data sets and so on...if this doesn't hold, if I start creating new labels -- how do I know I am even doing this correctly? reddit link: https://www.reddit.com/r/pytorch/comments/xurnu9/why_dont_the_images_align_when_concatenating_two/ cross posted pytorch discuss: https://discuss.pytorch.org/t/why-dont-the-images-align-when-concatenating-two-data-sets-in-pytorch-using-torch-utils-data-concatdataset/162801?u=brando_miranda
Corrected code can be found here https://github.com/brando90/ultimate-utils/blob/master/ultimate-utils-proj-src/uutils/torch_uu/dataset/concate_dataset.py you can pip install the library pip install ultimate-utils. Since only links is not a good way to answer I will copy paste the code too with it's test and expected output: """ do checks, loop through all data points, create counts for each label how many data points there are do this for MI only then check union and ur implementation? compare the mappings of one & the other? actually it's easy, just add the cummulative offset and that's it. :D the indices are already -1 indexed. assert every image has a label between 0 --> n1+n2+... and every bin for each class is none empty for it to work with any standard pytorch data set I think the workflow would be: pytorch dataset -> l2l meta data set -> union data set -> .dataset field -> data loader for l2l data sets: l2l meta data set -> union data set -> .dataset field -> data loader but the last one might need to make sure .indices or .labels is created or a get labels function that checking the attribute gets the right .labels or remaps it correctly """ from collections import defaultdict from pathlib import Path from typing import Callable, Optional import torch import torchvision from torch import Tensor from torch.utils.data import Dataset, DataLoader class ConcatDatasetMutuallyExclusiveLabels(Dataset): """ Useful attributes: - self.labels: contains all new USL labels i.e. contains the list of labels from 0 - total num labels after concat. - len(self): gives number of images after all images have been concatenated - self.indices_to_labels: maps the new concat idx to the new label after concat. ref: - https://stackoverflow.com/questions/73913522/why-dont-the-images-align-when-concatenating-two-data-sets-in-pytorch-using-tor - https://discuss.pytorch.org/t/concat-image-datasets-with-different-size-and-number-of-channels/36362/12 """ def __init__(self, datasets: list[Dataset], transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, compare_imgs_directly: bool = False, verify_xs_align: bool = False, ): """ Concatenates different data sets assuming the labels are mutually exclusive in the data sets. compare_imgs_directly: adds the additional test that imgs compare at the PIL imgage level. """ self.datasets = datasets self.transform = transform self.target_transform = target_transform # I think concat is better than passing data to a self.data = x obj since concat likely using the getitem method of the passed dataset and thus if the passed dataset doesnt put all the data in memory concat won't either self.concat_datasets = torch.utils.data.ConcatDataset(datasets) # maps a class label to a list of sample indices with that label. self.labels_to_indices = defaultdict(list) # maps a sample index to its corresponding class label. self.indices_to_labels = defaultdict(None) # - do the relabeling self._re_label_all_dataset(datasets, compare_imgs_directly, verify_xs_align) def __len__(self): return len(self.concat_datasets) def _re_label_all_dataset(self, datasets: list[Dataset], compare_imgs_directly: bool = False, verify_xs_align: bool = False, ): """ Relabels according to a blind (mutually exclusive) assumption. Relabling Algorithm: The zero index of the label starts at the number of labels collected so far. 
So when relabling we do: y = y + total_number_labels total_number_labels += max label for current data set where total_number_labels always has the + 1 to correct for the zero indexing. :param datasets: :param compare_imgs_directly: :parm verify_xs_align: set to false by default in case your transforms aren't deterministic. :return: """ self.img2tensor: Callable = torchvision.transforms.ToTensor() self.int2tensor: Callable = lambda data: torch.tensor(data, dtype=torch.int) total_num_labels_so_far: int = 0 new_idx: int = 0 for dataset_idx, dataset in enumerate(datasets): assert len(dataset) == len(self.concat_datasets.datasets[dataset_idx]) assert dataset == self.concat_datasets.datasets[dataset_idx] for data_idx, (x, y) in enumerate(dataset): y = int(y) # - get data point from concataned data set (to compare with the data point from the data set list) _x, _y = self.concat_datasets[new_idx] _y = int(_y) # - sanity check concatanted data set aligns with the list of datasets assert y == _y if compare_imgs_directly: # from PIL import ImageChops # diff = ImageChops.difference(x, _x) # https://stackoverflow.com/questions/35176639/compare-images-python-pil # assert diff.getbbox(), f'comparison of imgs failed: {diff.getbbox()=}' # doesn't work :/ assert list(x.getdata()) == list(_x.getdata()), f'\n{list(x.getdata())=}, \n{list(_x.getdata())=}' # tensor comparison if not isinstance(x, Tensor): x, _x = self.img2tensor(x), self.img2tensor(_x) if isinstance(y, int): y, _y = self.int2tensor(y), self.int2tensor(_y) if verify_xs_align: # this might fails if there are random ops in the getitem assert torch.equal(x, _x), f'Error for some reason, got: {dataset_idx=},' \ f' {new_idx=}, {data_idx=}, ' \ f'{x.norm()=}, {_x.norm()=}, ' \ f'{x=}, {_x=}' # - relabling new_label = y + total_num_labels_so_far self.indices_to_labels[new_idx] = new_label self.labels_to_indices[new_label].append(new_idx) new_idx += 1 num_labels_for_current_dataset: int = int(max([y for _, y in dataset])) + 1 # - you'd likely resolve unions if you wanted a proper union, the addition assumes mutual exclusivity total_num_labels_so_far += num_labels_for_current_dataset assert len(self.indices_to_labels.keys()) == len(self.concat_datasets) # contains the list of labels from 0 - total num labels after concat, assume mutually exclusive self.labels = range(total_num_labels_so_far) def __getitem__(self, idx: int) -> tuple[Tensor, Tensor]: """ Get's the data point and it's new label according to a mutually exclusive concatenation. For later? to do the relabling on the fly we'd need to figure out which data set idx corresponds to and to compute the total_num_labels_so_far. Something like this: current_data_set_idx = bisect_left(idx) total_num_labels_so_far = sum(max(_, y in dataset)+1 for dataset_idx, dataset in enumerate(self.datasets) if dataset_idx <= current_data_set_idx) new_y = total_num_labels_so_far self.indices_to_labels[idx] = new_y :param idx: :return: """ x, _y = self.concat_datasets[idx] y = self.indices_to_labels[idx] # for the first data set they aren't re-labaled so can't use assert # assert y != _y, f'concat dataset returns x, y so the y is not relabeled, but why are they the same {_y}, {y=}' # idk what this is but could be useful? mnist had this. 
# img = Image.fromarray(img.numpy(), mode="L") if self.transform is not None: x = self.transform(x) if self.target_transform is not None: y = self.target_transform(y) return x, y def assert_dataset_is_pytorch_dataset(datasets: list, verbose: bool = False): """ to do 1 data set wrap it in a list""" for dataset in datasets: if verbose: print(f'{type(dataset)=}') print(f'{type(dataset.dataset)=}') assert isinstance(dataset, Dataset), f'Expect dataset to be of type Dataset but got {type(dataset)=}.' def get_relabling_counts(dataset: Dataset) -> dict: """ counts[new_label] -> counts/number of data points for that new label """ assert isinstance(dataset, Dataset), f'Expect dataset to be of type Dataset but got {type(dataset)=}.' counts: dict = {} iter_dataset = iter(dataset) for datapoint in iter_dataset: x, y = datapoint # assert isinstance(x, torch.Tensor) # assert isinstance(y, int) if y not in counts: counts[y] = 0 else: counts[y] += 1 return counts def assert_relabling_counts(counts: dict, labels: int = 100, counts_per_label: int = 600): """ default values are for MI. - checks each label/class has the right number of expected images per class - checks the relabels start from 0 and increase by 1 - checks the total number of labels after concat is what you expect ref: https://openreview.net/pdf?id=rJY0-Kcll Because the exact splits used in Vinyals et al. (2016) were not released, we create our own version of the Mini-Imagenet dataset by selecting a random 100 classes from ImageNet and picking 600 examples of each class. We use 64, 16, and 20 classes for training, validation and testing, respectively. """ # - check each image has the right number of total images seen_labels: list[int] = [] for label, count in counts.items(): seen_labels.append(label) assert counts[label] == counts_per_label # - check all labels are there and total is correct seen_labels.sort() prev_label = -1 for label in seen_labels: diff = label - prev_label assert diff == 1 assert prev_label < label # - checks the final label is the total number of labels assert label == labels - 1 def check_entire_data_via_the_dataloader(dataloader: DataLoader) -> dict: counts: dict = {} for it, batch in enumerate(dataloader): xs, ys = batch for y in ys: if y not in counts: counts[y] = 0 else: counts[y] += 1 return counts # - tests def check_xs_align_mnist(): root = Path('~/data/').expanduser() import torchvision # - test 1, imgs (not the recommended use) train = torchvision.datasets.MNIST(root=root, train=True, download=True) test = torchvision.datasets.MNIST(root=root, train=False, download=True) concat = ConcatDatasetMutuallyExclusiveLabels([train, test], compare_imgs_directly=True) print(f'{len(concat)=}') print(f'{len(concat.labels)=}') # - test 2, tensor imgs train = torchvision.datasets.MNIST(root=root, train=True, download=True, transform=torchvision.transforms.ToTensor(), target_transform=lambda data: torch.tensor(data, dtype=torch.int)) test = torchvision.datasets.MNIST(root=root, train=False, download=True, transform=torchvision.transforms.ToTensor(), target_transform=lambda data: torch.tensor(data, dtype=torch.int)) concat = ConcatDatasetMutuallyExclusiveLabels([train, test], verify_xs_align=True) print(f'{len(concat)=}') print(f'{len(concat.labels)=}') assert len(concat) == 10 * 7000, f'Err, unexpected number of datapoints {len(concat)=} expected {100 * 700}' assert len( concat.labels) == 20, f'Note it should be 20 (since it is not a true union), but got {len(concat.labels)=}' # - test dataloader loader = DataLoader(concat) 
for batch in loader: x, y = batch assert isinstance(x, torch.Tensor) assert isinstance(y, torch.Tensor) def check_xs_align_cifar100(): from pathlib import Path root = Path('~/data/').expanduser() import torchvision # - test 1, imgs (not the recommended use) train = torchvision.datasets.CIFAR100(root=root, train=True, download=True) test = torchvision.datasets.CIFAR100(root=root, train=False, download=True) concat = ConcatDatasetMutuallyExclusiveLabels([train, test], compare_imgs_directly=True) print(f'{len(concat)=}') print(f'{len(concat.labels)=}') # - test 2, tensor imgs train = torchvision.datasets.CIFAR100(root=root, train=True, download=True, transform=torchvision.transforms.ToTensor(), target_transform=lambda data: torch.tensor(data, dtype=torch.int)) test = torchvision.datasets.CIFAR100(root=root, train=False, download=True, transform=torchvision.transforms.ToTensor(), target_transform=lambda data: torch.tensor(data, dtype=torch.int)) concat = ConcatDatasetMutuallyExclusiveLabels([train, test], verify_xs_align=True) print(f'{len(concat)=}') print(f'{len(concat.labels)=}') assert len(concat) == 100 * 600, f'Err, unexpected number of datapoints {len(concat)=} expected {100 * 600}' assert len( concat.labels) == 200, f'Note it should be 200 (since it is not a true union), but got {len(concat.labels)=}' # details on cifar100: https://www.cs.toronto.edu/~kriz/cifar.html # - test dataloader loader = DataLoader(concat) for batch in loader: x, y = batch assert isinstance(x, torch.Tensor) assert isinstance(y, torch.Tensor) def concat_data_set_mi(): """ note test had to be in MI where train, val, test have disjount/different labels. In cifar100 classic the labels in train, val and test are shared from 0-99 instead of being different/disjoint. :return: """ # - get mi data set from diversity_src.dataloaders.hdb1_mi_omniglot_l2l import get_mi_datasets train_dataset, validation_dataset, test_dataset = get_mi_datasets() assert_dataset_is_pytorch_dataset([train_dataset, validation_dataset, test_dataset]) train_dataset, validation_dataset, test_dataset = train_dataset.dataset, validation_dataset.dataset, test_dataset.dataset # - create usl data set union = ConcatDatasetMutuallyExclusiveLabels([train_dataset, validation_dataset, test_dataset]) # union = ConcatDatasetMutuallyExclusiveLabels([train_dataset, validation_dataset, test_dataset], # compare_imgs_directly=True) assert_dataset_is_pytorch_dataset([union]) assert len(union) == 100 * 600, f'got {len(union)=}' assert len(union.labels) == 100, f'got {len(union.labels)=}' # - create dataloader from uutils.torch_uu.dataloaders.common import get_serial_or_distributed_dataloaders union_loader, _ = get_serial_or_distributed_dataloaders(train_dataset=union, val_dataset=union) for batch in union_loader: x, y = batch assert x is not None assert y is not None if __name__ == '__main__': import time from uutils import report_times start = time.time() # - run experiment check_xs_align_mnist() check_xs_align_cifar100() concat_data_set_mi() # - Done print(f"\nSuccess Done!: {report_times(start)}\a") expected correct output: len(concat)=70000 len(concat.labels)=20 len(concat)=70000 len(concat.labels)=20 Files already downloaded and verified Files already downloaded and verified len(concat)=60000 len(concat.labels)=200 Files already downloaded and verified Files already downloaded and verified len(concat)=60000 len(concat.labels)=200 Success Done!: time passed: hours:0.16719497998555502, minutes=10.0316987991333, seconds=601.901927947998 warning: if you have a 
transform that is random the verification that the data sets align might make it look as if the two data points are not algined. The code is correct so it's not an issue, but perhaps remove the randomness somehow. Note, I actually decided to not force the user to check all the images of their data set and trust my code works from running once my unit tests. Also note that it's slow to construct the data set since I do the re-labling at the beginning. Might be better to relabel on the fly. I outlined the code for it on how to do it but decided against it since we always see all the data set at least once so doing this amortized is the same as doing it on the fly (note the fly pseudo-code saves the labels to avoid recomputations). This is better: # int2tensor: Callable = lambda data: torch.tensor(data, dtype=torch.int) int2tensor: Callable = lambda data: torch.tensor(data, dtype=torch.long) class ConcatDatasetMutuallyExclusiveLabels(Dataset): """ Useful attributes: - self.labels: contains all new USL labels i.e. contains the list of labels from 0 - total num labels after concat. - len(self): gives number of images after all images have been concatenated - self.indices_to_labels: maps the new concat idx to the new label after concat. ref: - https://stackoverflow.com/questions/73913522/why-dont-the-images-align-when-concatenating-two-data-sets-in-pytorch-using-tor - https://discuss.pytorch.org/t/concat-image-datasets-with-different-size-and-number-of-channels/36362/12 """ def __init__(self, datasets: list[Dataset], transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, compare_imgs_directly: bool = False, verify_xs_align: bool = False, ): """ Concatenates different data sets assuming the labels are mutually exclusive in the data sets. compare_imgs_directly: adds the additional test that imgs compare at the PIL imgage level. """ self.datasets = datasets self.transform = transform self.target_transform = target_transform # I think concat is better than passing data to a self.data = x obj since concat likely using the getitem method of the passed dataset and thus if the passed dataset doesnt put all the data in memory concat won't either self.concat_datasets = torch.utils.data.ConcatDataset(datasets) # maps a class label to a list of sample indices with that label. self.labels_to_indices = defaultdict(list) # maps a sample index to its corresponding class label. self.indices_to_labels = defaultdict(None) # - do the relabeling self._re_label_all_dataset(datasets, compare_imgs_directly, verify_xs_align) def __len__(self): return len(self.concat_datasets) def _re_label_all_dataset(self, datasets: list[Dataset], compare_imgs_directly: bool = False, verify_xs_align: bool = False, verbose: bool = False, ): """ Relabels according to a blind (mutually exclusive) assumption. Relabling Algorithm: The zero index of the label starts at the number of labels collected so far. So when relabling we do: y = y + total_number_labels total_number_labels += max label for current data set where total_number_labels always has the + 1 to correct for the zero indexing. assumption: it re-lables the data points to have a concatenation of all the labels. If there are rebeated labels they are treated as different. So if dataset1 and dataset2 both have cats (represented as indices), then they will get unique integers representing these. So the cats are treated as entirely different labels. 
""" print() self.img2tensor: Callable = torchvision.transforms.ToTensor() total_num_labels_so_far: int = 0 global_idx: int = 0 # new_idx assert len(self.indices_to_labels.keys()) == 0 assert len(self.labels_to_indices.keys()) == 0 for dataset_idx, dataset in enumerate(datasets): print(f'{dataset_idx=} \n{len(dataset)=}') if hasattr(dataset, 'labels'): print(f'{len(dataset.labels)=}') assert len(dataset) == len(self.concat_datasets.datasets[dataset_idx]) assert dataset == self.concat_datasets.datasets[dataset_idx] original_label2global_idx: defaultdict = defaultdict(list) for original_data_idx, (x, original_y) in enumerate(dataset): original_y = int(original_y) # - get data point from concataned data set (to compare with the data point from the data set list) _x, _y = self.concat_datasets[global_idx] _y = int(_y) # - sanity check concatanted data set aligns with the list of datasets assert original_y == _y, f'{original_y=}, {_y=}' if compare_imgs_directly: # from PIL import ImageChops # diff = ImageChops.difference(x, _x) # https://stackoverflow.com/questions/35176639/compare-images-python-pil # assert diff.getbbox(), f'comparison of imgs failed: {diff.getbbox()=}' # doesn't work :/ assert list(x.getdata()) == list(_x.getdata()), f'\n{list(x.getdata())=}, \n{list(_x.getdata())=}' # - tensor comparison of raw images if not isinstance(x, Tensor): x, _x = self.img2tensor(x), self.img2tensor(_x) # if isinstance(original_y, int): # original_y, _y = int2tensor(original_y), int2tensor(_y) if verify_xs_align: # checks the data points after doing get item make them match. # this might fails if there are random ops in the getitem assert torch.equal(x, _x), f'Error for some reason, got: {dataset_idx=},' \ f' {global_idx=}, {original_data_idx=}, ' \ f'{x.norm()=}, {_x.norm()=}, ' \ f'{x=}, {_x=}' # - collect original labels in dictionary keys original_label2global_idx[int(original_y)].append(global_idx) global_idx += 1 print(f'{global_idx=}') local_num_dps: int = sum(len(global_indices) for global_indices in original_label2global_idx.values()) assert len(dataset) == local_num_dps, f'Error: \n{local_num_dps=} \n{len(dataset)=}' # - do relabeling - original labeling to new global labels print(f'{total_num_labels_so_far=}') assert total_num_labels_so_far != len(dataset), f'Err:\n{total_num_labels_so_far=}\n{len(dataset)=}' new_local_label2global_indices: dict = {} global_label2global_indices: dict = {} # make sure to sort to avoid random looping of unordered data structures e.g. 
keys in a dict for new_local_label, original_label in enumerate(sorted(original_label2global_idx.keys())): global_indices: list[int] = original_label2global_idx[original_label] new_local_label2global_indices[int(new_local_label)] = global_indices new_global_label: int = total_num_labels_so_far + new_local_label global_label2global_indices[int(new_global_label)] = global_indices local_num_dps: int = sum(len(global_indices) for global_indices in original_label2global_idx.values()) assert len(dataset) == local_num_dps, f'Error: \n{local_num_dps=} \n{len(dataset)=}' local_num_dps: int = sum(len(global_indices) for global_indices in new_local_label2global_indices.values()) assert len(dataset) == local_num_dps, f'Error: \n{local_num_dps=} \n{len(dataset)=}' local_num_dps: int = sum(len(global_indices) for global_indices in global_label2global_indices.values()) assert len(dataset) == local_num_dps, f'Error: \n{local_num_dps=} \n{len(dataset)=}' # - this assumes the integers in each data set is different, if there were unions you'd likely need semantic information about the label e.g. the string cat instead of absolute integers, or know the integers are shared between the two data sets print(f'{total_num_labels_so_far=}') # this is the step where classes are concatenated. Note due to the previous loops assuming each label is uning this should never have intersecting keys. print(f'{list(self.labels_to_indices.keys())=}') print(f'{list(global_label2global_indices.keys())=}') dup: list = get_duplicates(list(self.labels_to_indices.keys()) + list(global_label2global_indices.keys())) print(f'{list(self.labels_to_indices.keys())=}') print(f'{list(global_label2global_indices.keys())=}') assert len(dup) == 0, f'Error:\n{self.labels_to_indices.keys()=}\n{global_label2global_indices.keys()=}\n{dup=}' for global_label, global_indices in global_label2global_indices.items(): # note g_idx might different to global_idx! global_indices: list[int] for g_idx in global_indices: self.labels_to_indices[int(global_label)] = g_idx self.indices_to_labels[g_idx] = int(global_label) # - update number of labels seen so far num_labels_for_current_dataset: int = len(original_label2global_idx.keys()) print(f'{num_labels_for_current_dataset=}') total_num_labels_so_far += num_labels_for_current_dataset assert total_num_labels_so_far == len(self.labels_to_indices.keys()), f'Err:\n{total_num_labels_so_far=}' \ f'\n{len(self.labels_to_indices.keys())=}' assert global_idx == len(self.indices_to_labels.keys()), f'Err:\n{global_idx=}\n{len(self.indices_to_labels.keys())=}' if hasattr(dataset, 'labels'): assert len(dataset.labels) == num_labels_for_current_dataset, f'Err:\n{len(dataset.labels)=}' \ f'\n{num_labels_for_current_dataset=}' # - relabling done assert len(self.indices_to_labels.keys()) == len( self.concat_datasets), f'Err: \n{len(self.indices_to_labels.keys())=}' \ f'\n {len(self.concat_datasets)=}' if all(hasattr(dataset, 'labels') for dataset in datasets): assert sum(len(dataset.labels) for dataset in datasets) == total_num_labels_so_far # contains the list of labels from 0 - total num labels after concat, assume mutually exclusive # - set & validate new labels self.labels = range(total_num_labels_so_far) labels = list(sorted(list(self.labels_to_indices.keys()))) assert labels == list(labels), f'labels should match and be consecutive, but got: \n{labels=}, \n{self.labels=}' def __getitem__(self, idx: int) -> tuple[Tensor, Tensor]: """ Get's the data point and it's new label according to a mutually exclusive concatenation. 
        For later: to do the relabeling on the fly we'd need to figure out which data set the idx corresponds to
        and compute the total_num_labels_so_far for the data sets before it. Something like this:
            current_data_set_idx = bisect_right(self.concat_datasets.cumulative_sizes, idx)
            total_num_labels_so_far = sum(max(y for _, y in dataset) + 1 for dataset_idx, dataset in enumerate(self.datasets) if dataset_idx < current_data_set_idx)
            new_y = total_num_labels_so_far + y
            self.indices_to_labels[idx] = new_y

        :param idx:
        :return:
        """
        x, _y = self.concat_datasets[idx]
        y = self.indices_to_labels[idx]
        # for the first data set the labels aren't relabeled, so we can't assert y != _y here
        # assert y != _y, f'concat dataset returns x, y so the y is not relabeled, but why are they the same {_y}, {y=}'
        # idk what this is but could be useful? MNIST had this:
        # img = Image.fromarray(img.numpy(), mode="L")
        if self.transform is not None:
            x = self.transform(x)
        if self.target_transform is not None:
            y = self.target_transform(y)
        return x, y
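For reference, here is a rough sketch of the on-the-fly relabeling idea mentioned above. It is only a sketch under assumptions not stated in the original code: every child dataset exposes a labels attribute and its labels are already 0..len(labels)-1, so a per-dataset offset is enough; the class name OnTheFlyConcat is made up for illustration.
import bisect
from torch.utils.data import ConcatDataset, Dataset

class OnTheFlyConcat(Dataset):
    def __init__(self, datasets: list):
        self.concat = ConcatDataset(datasets)
        # label_offsets[i] = number of classes contributed by all datasets before dataset i
        # (assumes each child's labels start at 0 and are consecutive)
        self.label_offsets = [0]
        for ds in datasets:
            self.label_offsets.append(self.label_offsets[-1] + len(ds.labels))

    def __len__(self):
        return len(self.concat)

    def __getitem__(self, idx: int):
        x, y = self.concat[idx]
        # which child dataset does this global index fall into?
        ds_idx = bisect.bisect_right(self.concat.cumulative_sizes, idx)
        return x, int(y) + self.label_offsets[ds_idx]
This avoids the upfront pass over all images, at the cost of trusting the children's label ranges instead of verifying them the way the class above does.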
3
1
73,953,744
2022-10-4
https://stackoverflow.com/questions/73953744/how-to-test-kfp-components-with-pytest
I'm trying to test a Kubeflow component from kfp.v2.dsl locally (it works on a pipeline) using pytest, but I'm struggling with the input/output arguments together with fixtures. Here is a code example to illustrate the issue: First, I created a fixture to mock a dataset. This fixture is also a Kubeflow component.
# ./fixtures/
@pytest.fixture
@component()
def sample_df(dataset: Output[Dataset]):
    df = pd.DataFrame(
        {
            'name': ['Ana', 'Maria', 'Josh'],
            'age': [15, 19, 22],
        }
    )
    dataset.path += '.csv'
    df.to_csv(dataset.path, index=False)
    return
Let's suppose the component doubles the ages.
# ./src/
@component()
def double_ages(df_input: Input[Dataset], df_output: Output[Dataset]):
    df = pd.read_csv(df_input.path)
    double_df = df.copy()
    double_df['age'] = double_df['age']*2
    df_output.path += '.csv'
    double_df.to_csv(df_output.path, index=False)
Then, the test:
#./tests/
@pytest.mark.usefixtures("sample_df")
def test_double_ages(sample_df):
    expected_df = pd.DataFrame(
        {
            'name': ['Ana', 'Maria', 'Josh'],
            'age': [30, 38, 44],
        }
    )
    df_component = double_ages(sample_df)  # This is where I call the component, sample_df is an Input[Dataset]
    df_output = df_component.outputs['df_output']
    df = pd.read_csv(df_output.path)
    assert df['age'].tolist() == expected_df['age'].tolist()
But that's where the problem occurs. The Output[Dataset] that should be passed as an output is not, so the component cannot work with it properly, and I get the following error on assert df['age'].tolist() == expected_df['age'].tolist():
AttributeError: 'TaskOutputArgument' object has no attribute 'path'
Apparently, the object is of type TaskOutputArgument instead of Dataset. Does anyone know how to fix this? Or how to properly use pytest with KFP components? I've searched a lot on the internet but couldn't find a clue about it.
After spending my afternoon on this, I finally figured out a way to pytest a python-based KFP component. As I found no other lead on this subject, I hope this can help: Access the function to test The trick is not to directly test the KFP component created by the @component decorator. However you can access the inner decorated Python function through the component attribute python_func. Mock artifacts Regarding the Input and Output artifacts, as you get around KFP to access and call the tested function, you have to create them manually and pass them to the function: input_artifact = Dataset(uri='input_df_previously_saved.csv') output_artifact = Dataset(uri='target_output_path.csv') I had to come up with a workaround for how the Artifact.path property works (which also applies for all KFP Artifact subclasses: Dataset, Model, ...). If you look in KFP source code, you'll find that it uses the _get_path() method that returns None if the uri attribute does not start with one of the defined cloud prefixes: "gs://", "s3://" or "minio://". As we're manually building artifacts with local paths, the tested component that wants to read the path property of an artifact would read a None value. So I made a simple method that builds a subclass of an Artifact (or a Dataset or any other Artifact child class). The built subclass is simply altered to return the uri value instead of None in this specific case of a non-cloud uri. Your example Putting this all together for your test and your fixture, we can get the following code to work: src/double_ages_component.py: your component to test Nothing changes here. I just added the pandas import: from kfp.v2.dsl import component, Input, Dataset, Output @component def double_ages(df_input: Input[Dataset], df_output: Output[Dataset]): import pandas as pd df = pd.read_csv(df_input.path) double_df = df.copy() double_df['age'] = double_df['age'] * 2 df_output.path += '.csv' double_df.to_csv(df_output.path, index=False) tests/utils.py: the Artifact subclass builder import typing def make_test_artifact(artifact_type: typing.Type): class TestArtifact(artifact_type): def _get_path(self): return super()._get_path() or self.uri return TestArtifact I am still not sure it is the most proper workaround. You could also manually create a subclass for each Artifact that you use (Dataset in your example). Or you could directly mock the kfp.v2.dsl.Artifact class using pytest-mock. tests/conftest.py: your fixture I separated the sample dataframe creator component from the fixture. Hence we have a standard KFP component definition + a fixture that builds its output artifact and calls its python function: from kfp.v2.dsl import component, Dataset, Output import pytest from tests.utils import make_test_artifact @component def sample_df_component(dataset: Output[Dataset]): import pandas as pd df = pd.DataFrame({ 'name': ['Ana', 'Maria', 'Josh'], 'age': [15, 19, 22], }) dataset.path += '.csv' df.to_csv(dataset.path, index=False) @pytest.fixture def sample_df(): # define output artifact output_path = 'local_sample_df.csv' # any writable local path. I'd recommend to use pytest `tmp_path` fixture. 
sample_df_artifact = make_test_artifact(Dataset)(uri=output_path) # call component python_func by passing the artifact yourself sample_df_component.python_func(dataset=sample_df_artifact) # the artifact object is now altered with the new path that you define in sample_df_component (".csv" extension added) return sample_df_artifact The fixture returns an artifact object referencing a selected local path where the sample dataframe has been saved to. tests/test_component.py: your actual component test Once again, the idea is to build the I/O artifact(s) and to call the component's python_func: from kfp.v2.dsl import Dataset import pandas as pd from src.double_ages_component import double_ages from tests.utils import make_test_artifact def test_double_ages(sample_df): expected_df = pd.DataFrame({ 'name': ['Ana', 'Maria', 'Josh'], 'age': [30, 38, 44], }) # input artifact is passed in parameter via sample_df fixture # create output artifact output_path = 'local_test_output_df.csv' output_df_artifact = make_test_artifact(Dataset)(uri=output_path) # call component python_func double_ages.python_func(df_input=sample_df, df_output=output_df_artifact) # read output data df = pd.read_csv(output_df_artifact.path) # write your tests assert df['age'].tolist() == expected_df['age'].tolist() Result > pytest ================ test session starts ================ platform linux -- Python 3.8.13, pytest-7.1.3, pluggy-1.0.0 rootdir: /home/USER/code/kfp_tests collected 1 item tests/test_component.py . [100%] ================ 1 passed in 0.28s ================
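If you prefer not to hard-code a local path in the fixture, here is a small variation using pytest's built-in tmp_path fixture, as the comment above suggests. This is only a sketch and assumes it sits in the same tests/conftest.py, so sample_df_component and make_test_artifact are already defined/imported there; the file name sample_df.csv is arbitrary.
import pytest
from kfp.v2.dsl import Dataset

@pytest.fixture
def sample_df(tmp_path):
    # tmp_path is a unique temporary directory per test, cleaned up by pytest
    output_path = str(tmp_path / 'sample_df.csv')
    artifact = make_test_artifact(Dataset)(uri=output_path)
    # call the component's python_func directly, exactly as in the fixture above
    sample_df_component.python_func(dataset=artifact)
    return artifact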
3
12
73,902,642
2022-9-29
https://stackoverflow.com/questions/73902642/office-365-imap-authentication-via-oauth2-and-python-msal-library
I'm trying to upgrade a legacy mail bot to authenticate via Oauth2 instead of Basic authentication, as it's now deprecated two days from now. The document states applications can retain their original logic, while swapping out only the authentication bit Application developers who have built apps that send, read, or otherwise process email using these protocols will be able to keep the same protocol, but need to implement secure, Modern authentication experiences for their users. This functionality is built on top of Microsoft Identity platform v2.0 and supports access to Microsoft 365 email accounts. Note I've explicitly chosen the client credentials flow, because the documentation states This type of grant is commonly used for server-to-server interactions that must run in the background, without immediate interaction with a user. So I've got a python script that retrieves an Access Token using the MSAL python library. Now I'm trying to authenticate with the IMAP server, using that Access Token. There's some existing threads out there showing how to connect to Google, I imagine my case is pretty close to this one, except I'm connecting to a Office 365 IMAP server. Here's my script import imaplib import msal import logging app = msal.ConfidentialClientApplication( 'client-id', authority='https://login.microsoftonline.com/tenant-id', client_credential='secret-key' ) result = app.acquire_token_for_client(scopes=['https://graph.microsoft.com/.default']) def generate_auth_string(user, token): return 'user=%s\1auth=Bearer %s\1\1' % (user, token) # IMAP time! mailserver = 'outlook.office365.com' imapport = 993 M = imaplib.IMAP4_SSL(mailserver,imapport) M.debug = 4 M.authenticate('XOAUTH2', lambda x: generate_auth_string('[email protected]', result['access_token'])) print(result) The IMAP authentication is failing and despite setting M.debug = 4, the output isn't very helpful 22:56.53 > b'DBDH1 AUTHENTICATE XOAUTH2' 22:56.53 < b'+ ' 22:56.53 write literal size 2048 22:57.84 < b'DBDH1 NO AUTHENTICATE failed.' 22:57.84 NO response: b'AUTHENTICATE failed.' Traceback (most recent call last): File "/home/ubuntu/mini-oauth.py", line 21, in <module> M.authenticate("XOAUTH2", lambda x: generate_auth_string('[email protected]', result['access_token'])) File "/usr/lib/python3.10/imaplib.py", line 444, in authenticate raise self.error(dat[-1].decode('utf-8', 'replace')) imaplib.IMAP4.error: AUTHENTICATE failed. Any idea where I might be going wrong, or how to get more robust information from the IMAP server about why the authentication is failing? Things I've looked at Note this answer no longer works as the suggested scopes fail to generate an Access Token. The client credentials flow seems to mandate the https://graph.microsoft.com/.default grant. I'm not sure if that includes the scope required for the IMAP resource https://outlook.office.com/IMAP.AccessAsUser.All? 
Verified the code lifted from the Google thread produces the SASL XOAUTH2 string correctly, per example on the MS docs import base64 user = '[email protected]' token = 'EwBAAl3BAAUFFpUAo7J3Ve0bjLBWZWCclRC3EoAA' xoauth = "user=%s\1auth=Bearer %s\1\1" % (user, token) xoauth = xoauth.encode('ascii') xoauth = base64.b64encode(xoauth) xoauth = xoauth.decode('ascii') xsanity = 'dXNlcj10ZXN0QGNvbnRvc28ub25taWNyb3NvZnQuY29tAWF1dGg9QmVhcmVyIEV3QkFBbDNCQUFVRkZwVUFvN0ozVmUwYmpMQldaV0NjbFJDM0VvQUEBAQ==' print(xoauth == xsanity) # prints True This thread seems to suggest multiple tokens need to be fetched, one for graph, then another for the IMAP connection; could that be what I'm missing?
The imaplib.IMAP4.error: AUTHENTICATE failed error occurred because one point in the documentation is not that clear. When setting up the Service Principal via PowerShell you need to enter the App-ID and an Object-ID. Many people will think it is the Object-ID you see on the overview page of the registered App, but it's not! At this point you need the Object-ID from "Azure Active Directory --> Enterprise Applications --> Your-App --> Object-ID"
New-ServicePrincipal -AppId <APPLICATION_ID> -ServiceId <OBJECT_ID> [-Organization <ORGANIZATION_ID>]
Microsoft says: The OBJECT_ID is the Object ID from the Overview page of the Enterprise Application node (Azure Portal) for the application registration. It is not the Object ID from the Overview of the App Registrations node. Using the incorrect Object ID will cause an authentication failure.
Of course you still need to take care of the API permissions and the other steps, but this was the key point for me. So let's go through it again, as explained on the documentation page Authenticate an IMAP, POP or SMTP connection using OAuth:
Register the Application in your Tenant
Set up a Client-Key for the application
Set up the API permissions: select the "APIs my organization uses" tab, search for "Office 365 Exchange Online" -> Application permissions -> choose IMAP and IMAP.AccessAsApp
Set up the Service Principal and give your Application full access to the mailbox
Check that IMAP is activated for the mailbox
That's the code I use to test it:
import imaplib
import msal
import pprint

conf = {
    "authority": "https://login.microsoftonline.com/XXXXyourtenantIDXXXXX",
    "client_id": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXXX",  # AppID
    "scope": ['https://outlook.office365.com/.default'],
    "secret": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",  # Key-Value
    "secret-id": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",  # Key-ID
}

def generate_auth_string(user, token):
    return f"user={user}\x01auth=Bearer {token}\x01\x01"

if __name__ == "__main__":
    app = msal.ConfidentialClientApplication(conf['client_id'], authority=conf['authority'],
                                             client_credential=conf['secret'])
    result = app.acquire_token_silent(conf['scope'], account=None)
    if not result:
        print("No suitable token in cache. Get new one.")
        result = app.acquire_token_for_client(scopes=conf['scope'])
    if "access_token" in result:
        print(result['token_type'])
        pprint.pprint(result)
    else:
        print(result.get("error"))
        print(result.get("error_description"))
        print(result.get("correlation_id"))

    imap = imaplib.IMAP4('outlook.office365.com')
    imap.starttls()
    imap.authenticate("XOAUTH2", lambda x: generate_auth_string("[email protected]", result['access_token']).encode("utf-8"))
After setting up the Service Principal and giving the App full access on the mailbox, wait 15-30 minutes for the changes to take effect, then test it.
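One extra debugging step, not part of the steps above but often a time saver: before blaming the IMAP server, decode the app-only access token locally and check that its roles claim actually contains IMAP.AccessAsApp. This is just an inspection sketch (no signature verification) using only the standard library:
import base64
import json

def token_roles(access_token: str) -> list:
    payload = access_token.split('.')[1]          # JWT = header.payload.signature
    payload += '=' * (-len(payload) % 4)          # restore stripped base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims.get('roles', [])

# e.g. print(token_roles(result['access_token']))  # expect something like ['IMAP.AccessAsApp']
If the role is missing, the problem is in the app registration/consent, not in the IMAP code.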
22
6
73,950,834
2022-10-4
https://stackoverflow.com/questions/73950834/zx81-basic-to-pygame-conversion-of-dropout-game
I based the code below on this article: http://kevman3d.blogspot.com/2015/07/basic-games-in-python-1982-would-be.html and on the ZX BASIC in this image:
10 LET P=0
20 LET T=P
30 FOR Z=1 TO 10
35 CLS
37 PRINT AT 12,0;T
40 LET R=INT (RND*17)
50 FOR Y=0 TO 10
60 PRINT AT Y,R;"O"
70 LET N=P+(INKEY$="4")-(INKEY$="1")
80 IF N<0 OR N>15 THEN LET N=P
100 PRINT AT 11,P;" ";AT 11,N;"┗┛";AT Y,R;" "
110 LET P=N
120 NEXT Y
130 LET T=T+(P=R OR P+1=R)
150 NEXT Z
160 PRINT AT 12,0;"YOU SCORED ";T;"/10"
170 PAUSE 4E4
180 RUN
I also shared it on Code Review Stack Exchange, and got a very helpful response refactoring it into high-quality Python code complete with type hints. However, for my purposes I want to keep the level of knowledge required to make this work a little less advanced, including avoiding the use of OOP. I basically want to maintain the "spirit of ZX BASIC" but make the code "not awful." The use of functions is fine, as we were allowed GOSUB back in the day. I'm pretty dubious about the approach of using nested FOR loops inside the main game loop to make the game work, but at the same time I'm curious to see how well the BASIC paradigm maps onto the more event-driven approach of Pygame, so I'd welcome any comments on the pros and cons of this approach. More specifically:
Is there somewhere I can put the exit code if event.type == pygame.QUIT where it will work during game rounds, without having to repeat the code elsewhere?
How would this game be implemented if I were to avoid the use of FOR loops / nested FOR loops?
Are there any points of best practice for pygame/Python which I have violated?
What improvements can you suggest, bearing in mind my purpose is to write good Pygame code while maintaining the "spirit" of the ZX81 games?
Any input much appreciated. I'm also curious to see a full listing implementing some of the ideas arising from my initial attempt if anyone is willing to provide one.
import pygame import random import sys # Define colors and other global constants BLACK = (0, 0, 0) WHITE = (255, 255, 255) TEXT_SIZE = 16 SCREEN_SIZE = (16 * TEXT_SIZE, 13 * TEXT_SIZE) NUM_ROUNDS = 5 def print_at_pos(row_num, col_num, item): """Blits text to row, col position.""" screen.blit(item, (col_num * TEXT_SIZE, row_num * TEXT_SIZE)) # Set up stuff pygame.init() screen = pygame.display.set_mode(SCREEN_SIZE) pygame.display.set_caption("Dropout") game_font = pygame.font.SysFont('consolas', TEXT_SIZE) # Create clock to manage how fast the screen updates clock = pygame.time.Clock() # initialize some game variables player_pos, new_player_pos, coin_row, score = 0, 0, 0, 0 # -------- Main Program Loop ----------- while True: score = 0 # Each value of i represents 1 round for i in range(NUM_ROUNDS): coin_col = random.randint(0, 15) # Each value of j represents one step in the coin's fall for j in range(11): pygame.event.get() pressed = pygame.key.get_pressed() if pressed[pygame.K_RIGHT]: new_player_pos = player_pos + 1 elif pressed[pygame.K_LEFT]: new_player_pos = player_pos - 1 if new_player_pos < 0 or new_player_pos > 15: new_player_pos = player_pos # --- Game logic player_pos = new_player_pos coin_row = j if player_pos + 1 == coin_col and j == 10: score += 1 # --- Drawing code # First clear screen screen.fill(WHITE) player_icon = game_font.render("|__|", True, BLACK, WHITE) print_at_pos(10, new_player_pos, player_icon) coin_text = game_font.render("O", True, BLACK, WHITE) print_at_pos(coin_row, coin_col, coin_text) score_text = game_font.render(f"SCORE: {score}", True, BLACK, WHITE) print_at_pos(12, 0, score_text) # --- Update the screen. pygame.display.flip() # --- Limit to 6 frames/sec maximum. Adjust to taste. clock.tick(8) msg_text = game_font.render("PRESS ANY KEY TO PLAY AGAIN", True, BLACK, WHITE) print_at_pos(5, 0, msg_text) pygame.display.flip() waiting = True while waiting: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() sys.exit(0) if event.type == pygame.KEYDOWN: waiting = False
Here's my reorganisation of your code: import pygame import random # Define global constants TEXT_SIZE = 16 SCREEN_SIZE = (16 * TEXT_SIZE, 13 * TEXT_SIZE) NUM_ROUNDS = 5 def print_at_pos(row_num, col_num, item): """Blits text to row, col position.""" screen.blit(item, (col_num * TEXT_SIZE, row_num * TEXT_SIZE)) # Set up stuff pygame.init() screen = pygame.display.set_mode(SCREEN_SIZE) pygame.display.set_caption("Dropout") game_font = pygame.font.SysFont("consolas", TEXT_SIZE) # Create clock to manage how fast the screen updates clock = pygame.time.Clock() # draw the images player_icon = game_font.render("|__|", True, "black", "white") # if we don't specify a background color, it'll be transparent coin_text = game_font.render("O", True, "black") msg_text = game_font.render("PRESS ANY KEY TO PLAY AGAIN", True, "black", "white") # initialize some game variables waiting = False # start in game player_pos = 0 score = 0 game_round = 0 coin_row = 0 coin_col = random.randint(0, 15) running = True # For program exit # -------- Main Program Loop ----------- while running: # event handling for event in pygame.event.get(): if event.type == pygame.QUIT: running = False elif event.type == pygame.KEYDOWN: if waiting: waiting = False score = 0 # reset score elif event.key == pygame.K_LEFT: player_pos -= 1 elif event.key == pygame.K_RIGHT: player_pos += 1 # --- Game logic if waiting: # don't update the game state or redraw screen print_at_pos(5, 0, msg_text) else: coin_row += 1 # TODO: decouple from frame rate if -1 > player_pos: player_pos = -1 # so we can catch a coin at zero elif 15 < player_pos: player_pos = 15 # coin is in scoring position if coin_row == 10: if player_pos + 1 == coin_col: score += 1 elif coin_row > 10: # round is over coin_col = random.randint(0, 15) coin_row = 0 game_round+= 1 if game_round >= NUM_ROUNDS: waiting = True game_round = 0 # reset round counter # --- Drawing code screen.fill("white") # clear screen print_at_pos(10, player_pos, player_icon) print_at_pos(coin_row, coin_col, coin_text) score_text = game_font.render(f"SCORE: {score}", True, "black", "white") print_at_pos(12, 0, score_text) # --- Update the screen. pygame.display.flip() # --- Limit to 6 frames/sec maximum. Adjust to taste. clock.tick(6) pygame.quit() I've used a boolean waiting to allow for common event and game state handling that only moves during gameplay. For more complex interactions, you'll want a state machine. The coin movement is currently coupled to the frame rate, which is easy, but ideally you'd specify a rate/time interval, e.g. 200ms between row drops and then you could have a refresh rate similar to the monitor refresh rate.
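To follow up on the TODO about decoupling the coin drop from the frame rate, here is one possible sketch using pygame.time.get_ticks() with the 200 ms interval mentioned above. Variable names follow the code above; DROP_INTERVAL_MS and last_drop are new names introduced only for this example:
DROP_INTERVAL_MS = 200
last_drop = pygame.time.get_ticks()

# inside the main loop, in place of the unconditional `coin_row += 1`:
now = pygame.time.get_ticks()
if not waiting and now - last_drop >= DROP_INTERVAL_MS:
    coin_row += 1
    last_drop = now
You can then raise clock.tick() to something like 60 so the paddle stays responsive while the coin still falls at a fixed rate.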
3
2