question_id (int64) | creation_date (string) | link (string) | question (string) | accepted_answer (string) | question_vote (int64) | answer_vote (int64)
---|---|---|---|---|---|---
69,031,990 | 2021-9-2 | https://stackoverflow.com/questions/69031990/how-can-i-use-both-required-and-optional-path-parameters-in-a-fastapi-endpoint | I've read through the documentation and this doesn't seem to be working for me. I followed this doc. But I'm not sure if it's related to what I'm trying to do, I think this doc is for passing queries like this - site.com/endpoint?keyword=test Here's my goal: api.site.com/test/(optional_field) So, if someone goes to the /test endpoint then it defaults the optional field to a parameter but if they add something there then it takes that as a input. With that said, here's my code: @app.get("/company/{company_ticker}/model/{financialColumn}", dependencies=[Depends(api_counter)]) async def myendpoint( company_ticker: str, financialColumn: Optional[str] = 'netincome', .. myFunction(company_ticker, financialColumn) what I'm trying to do is if they just go to the endpoint without the optional flag then it defaults to 'netincome' but if they add something then financialColumn is set to that value. Is there something I can do? | As far as I know, it won't work the way you've set it up. Though you can try something like this: @app.get("/company/{company_ticker}/model/", dependencies=[Depends(api_counter)]) @app.get("/company/{company_ticker}/model/{financialColumn}", dependencies=[Depends(api_counter)]) async def myendpoint( company_ticker: str, financialColumn: Optional[str] = 'netincome' ): myFunction(company_ticker, financialColumn) This way, if someone goes to "/company/{company_ticker}/model/" or "/company/{company_ticker}/model/blabla" the function myendpoint will handle the request. Not sure if it works as you wish, but at the moment I cannot test it. Maybe later. Let me know. | 6 | 7 |
69,024,599 | 2021-9-2 | https://stackoverflow.com/questions/69024599/scraping-data-from-zillow-com-using-beautifulsoup | Following this tutorial, I am trying to extract basic property information from zillow.com. More specifically, I want to extract the information pertinent to property cards displayed on the website. The following code is able to extract information of only 3 properties, even though several property cards exist on the first page. Can someone please explain why is the code skipping the remaining properties? import requests import ast from bs4 import BeautifulSoup url = 'https://www.zillow.com/homes/for_sale/house,multifamily,townhouse_type/?searchQueryState=%7B%22pagination%22%3A%7B%7D%2C%22mapBounds%22%3A%7B%22west%22%3A-106.43826441618356%2C%22east%22%3A-103.36483912321481%2C%22south%22%3A38.903882034738686%2C%22north%22%3A40.52008627183672%7D%2C%22mapZoom%22%3A9%2C%22customRegionId%22%3A%22fcac4612c1X1-CR9xde3hldsvpa_v24ah%22%2C%22isMapVisible%22%3Afalse%2C%22filterState%22%3A%7B%22hoa%22%3A%7B%22max%22%3A200%7D%2C%22con%22%3A%7B%22value%22%3Afalse%7D%2C%22apa%22%3A%7B%22value%22%3Afalse%7D%2C%22sch%22%3A%7B%22value%22%3Atrue%7D%2C%22ah%22%3A%7B%22value%22%3Atrue%7D%2C%22sort%22%3A%7B%22value%22%3A%22globalrelevanceex%22%7D%2C%22land%22%3A%7B%22value%22%3Afalse%7D%2C%22schu%22%3A%7B%22value%22%3Afalse%7D%2C%22manu%22%3A%7B%22value%22%3Afalse%7D%2C%22schr%22%3A%7B%22value%22%3Afalse%7D%2C%22apco%22%3A%7B%22value%22%3Afalse%7D%2C%22basf%22%3A%7B%22value%22%3Atrue%7D%2C%22schc%22%3A%7B%22value%22%3Afalse%7D%2C%22schb%22%3A%7B%22min%22%3A%227%22%7D%7D%2C%22isListVisible%22%3Atrue%7D' headers = { 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 'accept-encoding': 'gzip, deflate, br', 'accept-language': 'en-US,en;q=0.9', 'cookie': 'zguid=23|%24ca6368b9-7b92-4d51-ab67-c2be89065efd; _ga=GA1.2.1460486079.1621047110; _pxvid=7fa13d96-b528-11eb-9860-0242ac120012; _gcl_au=1.1.2025797213.1621047113; __gads=ID=66253ab863481044:T=1621047113:S=ALNI_MZr3mehwm2Wjo7NOrmalVtEcJSXag; __pdst=50987f626deb4767a53b5d8ca2ea406a; _fbp=fb.1.1621047115574.1019382068; _pin_unauth=dWlkPU5EVm1PRGRpTVRBdE5UTTFaUzAwWlRBNExUZzJZall0TWpZMU1HWTBNV0ppWlRkbA; G_ENABLED_IDPS=google; userid=X|3|231a9d744e104379%7C3%7CiEt8bkUx9hWaFeyCeAwN9tHl_T0d0Cq-kynGuEvNYr4%3D; loginmemento=1|c2274ba4a4ad76bbe89263d30695c182e9177b9c40a2691f3054987d66a944be; zjs_user_id=%22X1-ZU158jhpb2klds9_4wzn7%22; zgcus_lbut=; zgcus_aeut=189997416; zgcus_ludi=b44a961b-c7ef-11eb-a48f-96824e7eff50-18999; optimizelyEndUserId=oeu1623111792776r0.8778663892923859; _cs_c=1; WRUIDAWS=3326630244368428; visitor_id701843=248614376; visitor_id701843-hash=4be116fbd77089f953bfb6eaf5996ef92662a6ef7d237d3c49f154ffaf4eaa9295c64fb254b106bdff234e183c94498c01af2aab; __stripe_mid=80125db1-17d1-4fc5-ae37-86b12a68709cf3da6d; g_state={"i_p":1627697570928,"i_l":4}; zjs_anonymous_id=%22ca6368b9-7b92-4d51-ab67-c2be89065efd%22; _gac_UA-21174015-56=1.1626042638.Cj0KCQjwraqHBhDsARIsAKuGZeH8gi095UkXfohW-WWvyLosdmTdL8cfJwgAabYF9hS2XU6JlXqpWLcaAq5SEALw_wcB; _gcl_aw=GCL.1626042640.Cj0KCQjwraqHBhDsARIsAKuGZeH8gi095UkXfohW-WWvyLosdmTdL8cfJwgAabYF9hS2XU6JlXqpWLcaAq5SEALw_wcB; zgsession=1|1edd82e6-372a-4546-bc8b-c2bbadfd29b4; DoubleClickSession=true; fbc=fb.1.1626412984774.IwAR2QM6bzrTskAWN5Sk8UnmPlAxb1HRy1h1GRch888QqXfczHZZWb2vDZfIw; _fbc=fb.1.1626413249162.IwAR2QM6bzrTskAWN5Sk8UnmPlAxb1HRy1h1GRch888QqXfczHZZWb2vDZfIw; _csrf=lV2BBFim7Vy2gFTn--PUt0VA; 
_gaexp=GAX1.2.w27igyYtRQaAa8XQM3MjDw.18837.2!VDVoDKTnRcyv8f4FAcJ8PA.18915.2!Khnq27RoQmSe5DEusmh5xA.18913.3; _gid=GA1.2.705011419.1630004829; FSsampler=707279376; __CT_Data=gpv=26&ckp=tld&dm=zillow.com&apv_82_www33=26&cpv_82_www33=26&rpv_82_www33=13; OptanonConsent=isIABGlobal=false&datestamp=Fri+Aug+27+2021+12%3A39%3A52+GMT-0600+(Mountain+Daylight+Time)&version=5.11.0&landingPath=NotLandingPage&groups=1%3A1%2C3%3A1%2C4%3A1&AwaitingReconsent=false; _cs_id=41cbdc9c-bb0b-aad9-9521-b1328a65ff77.1623111795.22.1630089665.1630089591.1.1657275795752; utag_main=v_id:01796deff9e3001a59964343177e03079002907100838$_sn:41$_se:2$_ss:0$_st:1630255637884$dc_visit:38$ses_id:1630253822479%3Bexp-session$_pn:1%3Bexp-session$dcsyncran:1%3Bexp-session$tdsyncran:1%3Bexp-session$dc_event:2%3Bexp-session$dc_region:us-east-1%3Bexp-session$ttd_uuid:7b8796ca-44dd-45c9-97d9-bcb642d04cd1%3Bexp-session; JSESSIONID=6CB8C410E0FE216644E8C3A0D0851618; ZILLOW_SID=1|AAAAAVVbFRIBVVsVEklf443J474nftKzJe5PKLD80sujgHvySB7tGcqZunX3BDDH9VwceMqGMTPC54%2F0q4CH%2BfmwsC6P; KruxPixel=true; _derived_epik=dj0yJnU9ai1PSUp1eHZ2Y3J3d0c2NVU1N3BBOFlHbnRBOGFzT0smbj1vLWRISDFwdUNoblN5MjQ4cTVyN213Jm09MSZ0PUFBQUFBR0VzRjRVJnJtPTEmcnQ9QUFBQUFHRXNGNFU; KruxAddition=true; search=6|1632872450375%7Crect%3D40.241821806991595%252C-103.77545313688668%252C39.18758562803622%252C-106.02765040251168%26disp%3Dmap%26mdm%3Dauto%26type%3Dhouse%252Cmultifamily%252Ctownhouse%26fs%3D1%26fr%3D0%26mmm%3D1%26rs%3D0%26ah%3D0%09%0911093%09%09%09%09%09%09; _uetsid=d5e0465006a011ecbe3bd1a0f1c47d01; _uetvid=987e1c70c40a11ebaed8859af36f82fb; _px3=ba45c3df5d5d63d4d9780a102253cd60b21ab52b04778344e332e05474011c21:oCvapPXE6jD0rCXhSf4UjtEC2U956148EDyiWwRFOF8z5vwK63/hC8OWsk09O61g1spnZw64iXApZu1wOmKpyA==:1000:68UzJ5+ar5XwNm61bm41bhSHp8Zp1PfQQlL/5tcqdUIJ3RmA106//vvYGewCCwmln6acqbDAVKgqfB8Th05yX0Cw0TBW7dhfNdeNRjp9bxeLvKqZ56yuW+aVoYYp/zj6MNKv9c16vKlP771xSdCgUTvZ0CDmh7Ng55sHugOHt/jj+2Zmp2WLnuYR4rf7SEndqWBbAyQhhG4BKeyrZyEMpA==; AWSALB=3BIj2fUDeYgoAcLKaZdMkcyTzWSof62v91DQuCssJMyknlpZWcRcVnUU5Me29AcnFcjg1k9H2ehS6N0rSwxo4w8lmEvFCy6hgQfKm1HH8oVoWtpICS36NoLMMxmZ; AWSALBCORS=3BIj2fUDeYgoAcLKaZdMkcyTzWSof62v91DQuCssJMyknlpZWcRcVnUU5Me29AcnFcjg1k9H2ehS6N0rSwxo4w8lmEvFCy6hgQfKm1HH8oVoWtpICS36NoLMMxmZ', 'referer': 'https://www.google.com/', 'sec-ch-ua': '"Chromium";v="92", " Not A;Brand";v="99", "Google Chrome";v="92"', 'sec-ch-ua-mobile': '?1', 'sec-fetch-dest': 'document', 'sec-fetch-mode': 'navigate', 'sec-fetch-site': 'same-origin', 'upgrade-insecure-requests': '1', 'user-agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Mobile Safari/537.36' } params = { 'searchQueryState': '{"mapBounds":{"west":-106.02765040251168,"east":-103.77545313688668,"south":39.18758562803622,"north":40.241821806991595},"isMapVisible":true,"filterState":{"sort":{"value":"globalrelevanceex"},"ah":{"value":true},"con":{"value":false},"apco":{"value":false},"land":{"value":false},"apa":{"value":false},"manu":{"value":false},"basf":{"value":true},"hoa":{"max":200},"sch":{"value":true},"schb":{"min":"7"},"schc":{"value":false},"schr":{"value":false},"schu":{"value":false}},"isListVisible":true,"mapZoom":9,"customRegionId":"fcac4612c1X1-CR9xde3hldsvpa_v24ah","pagination":{}}' } class ZillowScraper: def __init__(self, url, headers, params): self.headers = headers self.url = url self.params = params def fetch(self): response = requests.get(url=self.url, headers=self.headers, params=self.params) return response def get_cards_info(self, deck_text): urls = 
[] for card in deck_text.contents: script = card.find('script', {'type': 'application/ld+json'}) if script: script_json = ast.literal_eval(str(script.contents[0])) print(script_json) def parse(self, response_text): content = BeautifulSoup(response_text, features="html.parser") deck_text = content.find('ul', {'class': 'photo-cards photo-cards_wow photo-cards_short photo-cards_extra-attribution'}) cards_info = self.get_cards_info(deck_text) def run(self): response = self.fetch() self.parse(response.text) if __name__ == "__main__": scraper = ZillowScraper(url, headers, params) scraper.run() OUTPUT {'@type': 'SingleFamilyResidence', '@context': 'http://schema.org', 'name': '11615 River Run Cir, Henderson, CO 80640', 'floorSize': {'@type': 'QuantitativeValue', '@context': 'http://schema.org', 'value': '2,001'}, 'address': {'@type': 'PostalAddress', '@context': 'http://schema.org', 'streetAddress': '11615 River Run Cir', 'addressLocality': 'Henderson', 'addressRegion': 'CO', 'postalCode': '80640'}, 'geo': {'@type': 'GeoCoordinates', '@context': 'http://schema.org', 'latitude': 39.908753, 'longitude': -104.851576}, 'url': 'https://www.zillow.com/homedetails/11615-River-Run-Cir-Henderson-CO-80640/49457209_zpid/'} {'@type': 'SingleFamilyResidence', '@context': 'http://schema.org', 'name': '5089 Enid Way, Denver, CO 80239', 'floorSize': {'@type': 'QuantitativeValue', '@context': 'http://schema.org', 'value': '1,852'}, 'address': {'@type': 'PostalAddress', '@context': 'http://schema.org', 'streetAddress': '5089 Enid Way', 'addressLocality': 'Denver', 'addressRegion': 'CO', 'postalCode': '80239'}, 'geo': {'@type': 'GeoCoordinates', '@context': 'http://schema.org', 'latitude': 39.784449, 'longitude': -104.815903}, 'url': 'https://www.zillow.com/homedetails/5089-Enid-Way-Denver-CO-80239/13271929_zpid/'} {'@type': 'SingleFamilyResidence', '@context': 'http://schema.org', 'name': '6088 S Pierson Ct, Littleton, CO 80127', 'floorSize': {'@type': 'QuantitativeValue', '@context': 'http://schema.org', 'value': '1,810'}, 'address': {'@type': 'PostalAddress', '@context': 'http://schema.org', 'streetAddress': '6088 S Pierson Ct', 'addressLocality': 'Littleton', 'addressRegion': 'CO', 'postalCode': '80127'}, 'geo': {'@type': 'GeoCoordinates', '@context': 'http://schema.org', 'latitude': 39.605764, 'longitude': -105.123466}, 'url': 'https://www.zillow.com/homedetails/6088-S-Pierson-Ct-Littleton-CO-80127/13818492_zpid/'} | The results are stored in <script> variable inside the page. 
To parse them, you can use next example: import json import requests from bs4 import BeautifulSoup url = "https://www.zillow.com/homes/for_sale/house,multifamily,townhouse_type/?searchQueryState={%22pagination%22%3A{}%2C%22mapBounds%22%3A{%22west%22%3A-106.97384791227731%2C%22east%22%3A-102.82925562712106%2C%22south%22%3A39.18758562803622%2C%22north%22%3A40.241821806991595}%2C%22customRegionId%22%3A%22fcac4612c1X1-CR9xde3hldsvpa_v24ah%22%2C%22isMapVisible%22%3Atrue%2C%22filterState%22%3A{%22hoa%22%3A{%22max%22%3A200}%2C%22con%22%3A{%22value%22%3Afalse}%2C%22apa%22%3A{%22value%22%3Afalse}%2C%22sch%22%3A{%22value%22%3Atrue}%2C%22ah%22%3A{%22value%22%3Atrue}%2C%22sort%22%3A{%22value%22%3A%22globalrelevanceex%22}%2C%22land%22%3A{%22value%22%3Afalse}%2C%22schu%22%3A{%22value%22%3Afalse}%2C%22manu%22%3A{%22value%22%3Afalse}%2C%22schr%22%3A{%22value%22%3Afalse}%2C%22apco%22%3A{%22value%22%3Afalse}%2C%22basf%22%3A{%22value%22%3Atrue}%2C%22schc%22%3A{%22value%22%3Afalse}%2C%22schb%22%3A{%22min%22%3A%227%22}}%2C%22isListVisible%22%3Atrue}" headers = { "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0" } soup = BeautifulSoup(requests.get(url, headers=headers).content, "html.parser") data = json.loads( soup.select_one("script[data-zrr-shared-data-key]") .contents[0] .strip("!<>-") ) # uncomment this to print all data: # print(json.dumps(data, indent=4)) for result in data["cat1"]["searchResults"]["listResults"]: print( "{:<15} {:<50} {:<15}".format( result["statusText"], result["address"], result["price"] ) ) Prints: House for sale 6092 S Marshall Dr, Littleton, CO 80123 $680,000 House for sale 3050 S Roslyn St, Denver, CO 80231 $774,900 House for sale 15538 Greenstone Cir, Parker, CO 80134 $590,000 House for sale 7141 Fenton Cir, Arvada, CO 80003 $549,500 House for sale 7823 S Logan Dr, Littleton, CO 80122 $665,000 House for sale 1825 Clermont St, Denver, CO 80220 $599,900 House for sale 408 S Locust St, Denver, CO 80224 $550,000 House for sale 8660 De Soto St, Denver, CO 80229 $450,000 House for sale 1811 S Humboldt St, Denver, CO 80210 $675,000 House for sale 7329 E Easter Ave, Centennial, CO 80112 $699,900 House for sale 13638 W Montana Pl, Lakewood, CO 80228 $600,000 House for sale 8296 E Hinsdale Dr, Centennial, CO 80112 $699,900 House for sale 10325 Ravenswood Ln, Highlands Ranch, CO 80130 $660,000 House for sale 2833 E 90th Pl, Denver, CO 80229 $445,000 House for sale 5756 W 8th Ave, Lakewood, CO 80214 $600,000 House for sale 6088 S Pierson Ct, Littleton, CO 80127 $509,000 House for sale 2829 S Lowell Blvd, Denver, CO 80236 $475,000 House for sale 604 Eldridge St, Golden, CO 80401 $650,000 House for sale 7171 McIntyre Ct, Arvada, CO 80007 $850,000 House for sale 1301 S Blackhawk Way, Aurora, CO 80012 $500,000 House for sale 215 S Julian St, Denver, CO 80219 $350,000 House for sale 7095 E 67th Ave, Commerce City, CO 80022 $440,000 House for sale 8248 S Yukon St, Littleton, CO 80128 $695,000 House for sale 2846 S Macon Ct, Aurora, CO 80014 $520,000 House for sale 9340 Burgundy Cir, Littleton, CO 80126 $799,000 House for sale 2072 S Cathay Way, Aurora, CO 80013 $560,000 House for sale 1317 W 85th Ave, Federal Heights, CO 80260 $405,000 House for sale 6701 Eagle Shadow Ave, Brighton, CO 80602 $1,145,000 House for sale 2900 Webster St, Wheat Ridge, CO 80033 $660,000 House for sale 3943 S Allison Ct, Lakewood, CO 80235 $799,950 House for sale 511 E Irwin Ave, Littleton, CO 80122 $624,500 House for sale 4700 E Montana Pl, Denver, CO 80222 $600,000 House for 
sale 2344 S Gray Dr, Lakewood, CO 80227 $585,000 House for sale 5546 E 130th Dr, Thornton, CO 80241 $490,000 House for sale 2270 S Joyce St, Lakewood, CO 80228 $1,340,000 House for sale 12171 W Dakota Dr, Lakewood, CO 80228 $600,000 House for sale 6641 Miller St, Arvada, CO 80004 $625,000 House for sale 3220 W Nevada Pl, Denver, CO 80219 $510,000 House for sale 8630 W 64th Pl, Arvada, CO 80004 $447,000 House for sale 5890 Wood Sorrel Dr, Littleton, CO 80123 $975,000 | 4 | 8 |
69,028,920 | 2021-9-2 | https://stackoverflow.com/questions/69028920/why-does-mypy-have-a-hard-time-with-assignment-to-nested-dicts | mypy version 0.910 Consider d = { 'a': 'a', 'b': { 'c': 1 } } d['b']['d'] = 'b' Feeding this to mypy results with error: Unsupported target for indexed assignment ("Collection[str]") Putting a side that mypy inferred the wrong type for d (it is clearly not a collection of strings), adding a very basic explicit type for d fixes this: d: dict = { ... # same as above } Success: no issues found in 1 source file I find this very peculiar. mypy should definitely be able to infer that d is a dict without d: dict. | d is not being inferred as a collection of strings. It is being inferred as a dict, but dicts take two type variables, one for the keys and one for the values. If we use reveal_type: d = { 'a': 'a', 'b': { 'c': 1 } } reveal_type(d) d['b']['d'] = 'b' I get: (py39) jarrivillaga-mbp16-2019:~ jarrivillaga$ mypy --version mypy 0.910 (py39) jarrivillaga-mbp16-2019:~ jarrivillaga$ mypy scratch.py scratch.py:7: note: Revealed type is "builtins.dict[builtins.str*, typing.Collection*[builtins.str]]" scratch.py:8: error: Unsupported target for indexed assignment ("Collection[str]") Found 1 error in 1 file (checked 1 source file) So, it is being inferred as: builtins.dict[builtins.str*, typing.Collection*[builtins.str]] which is a dict mapping strings to collections of strings. This is because for your dict, you have used str and dict[str, int] as values, and I can only surmise that typing.Collection[str] is the least broad type that encompasses str and dict[str, str] ha both. I'm not really sure how mypy is supposed to handle the inference of nested dict literals like yours. Note, annotating with just dict means it is going to use dict[Any, Any], which you probably don't want. You should try to give a more constrained type, but that depends on how you intend to use d. | 12 | 7 |
69,027,829 | 2021-9-2 | https://stackoverflow.com/questions/69027829/how-to-add-row-titles-to-the-following-the-matplotlib-code | I am trying to create a plot containing 8 subplots (4 rows and 2 columns). To do so, I have made this code that reads the x and y data and plots it in the following fashion: fig, axs = plt.subplots(4, 2, figsize=(15,25)) y_labels = ['k0', 'k1'] for x in range(4): for y in range(2): axs[x, y].scatter([i[x] for i in X_vals], [i[y] for i in y_vals]) axs[x, y].set_xlabel('Loss') axs[x, y].set_ylabel(y_labels[y]) This gives me the following result: However, I want to add a title to all the rows (not the plots) in the following way(the titles in yellow text): I found this image and some ways to do that here but I wasn't able to implement this for my use case and got an error. This is what I tried : gridspec = axs[0].get_subplotspec().get_gridspec() subfigs = [fig.add_subfigure(gs) for gs in gridspec] for row, subfig in enumerate(subfigs): subfig.suptitle(f'Subplot row title {row}') which gave me the error : 'numpy.ndarray' object has no attribute 'get_subplotspec' So I changed the code to : gridspec = axs[0, 0].get_subplotspec().get_gridspec() subfigs = [fig.add_subfigure(gs) for gs in gridspec] for row, subfig in enumerate(subfigs): subfig.suptitle(f'Subplot row title {row}') but this returned the error : 'Figure' object has no attribute 'add_subfigure' | The solution in the answer that you linked is the correct one, however it is specific for the 3x3 case as shown there. The following code should be a more general solution for different numbers of subplots. This should work provided your data and y_label arrays/lists are all the correct size. Note that this requires matplotlib 3.4.0 and above to work: import numpy as np import matplotlib.pyplot as plt # random data. Make sure these are the correct size if changing number of subplots x_vals = np.random.rand(4, 10) y_vals = np.random.rand(2, 10) y_labels = ['k0', 'k1'] # change rows/cols accordingly rows = 4 cols = 2 fig = plt.figure(figsize=(15,25), constrained_layout=True) fig.suptitle('Figure title') # create rows x 1 subfigs subfigs = fig.subfigures(nrows=rows, ncols=1) for row, subfig in enumerate(subfigs): subfig.suptitle(f'Subplot row title {row}') # create 1 x cols subplots per subfig axs = subfig.subplots(nrows=1, ncols=cols) for col, ax in enumerate(axs): ax.scatter(x_vals[row], y_vals[col]) ax.set_title("Subplot ax title") ax.set_xlabel('Loss') ax.set_ylabel(y_labels[col]) Which gives: | 5 | 4 |
69,021,077 | 2021-9-1 | https://stackoverflow.com/questions/69021077/start-an-async-background-daemon-in-a-python-fastapi-app | I'm building an async backend for an analytics system using FastAPI. The thing is it has to: a) listen for API calls and be available at all times; b) periodically perform a data-gathering task (parsing data and saving it into the DB). I wrote this function to act as a daemon: async def start_metering_daemon(self) -> None: """sets a never ending task for metering""" while True: delay: int = self._get_delay() # delay in seconds until next execution await asyncio.sleep(delay) await self.gather_meterings() # perfom data gathering What I'm trying to achieve is so that when app starts it also adds this daemon function into the main event loop and execute it when it has time. However, I haven't been able to find a suitable solution which is adequate to the scale of the task (adding Celery and similar stuff is an overkill). I have tried following ways to achieve this but none of them worked: @app.on_event("startup") async def startup_event() -> None: """tasks to do at server startup""" await Gatherer().start_metering_daemon() Result: server can't start up since the thread is blocked @app.on_event("startup") async def startup_event() -> None: """tasks to do at server startup""" fastapi.BackgroundTasks().add_task(Gatherer().start_metering_daemon) Result: task is never executed as observed in logs @app.on_event("startup") async def startup_event() -> None: """tasks to do at server startup""" fastapi.BackgroundTasks().add_task(asyncio.run, Gatherer().start_metering_daemon()) Result: same as previous one @app.on_event("startup") async def startup_event() -> None: """tasks to do at server startup""" threading.Thread(target=asyncio.run, args=(Gatherer().start_metering_daemon(),)).start() Result: this one works but a) makes no sence; b) spawns N identical threads for N Uvicorn workers which all write same data N times into the DB. I am out of solutions by now. I am pretty sure there must be a solution to my problem since is looks pretty trivial to me but I couldn't find one. If you want more context here is the repo of the project I reffer to. | try @app.on_event("startup") async def startup_event() -> None: """tasks to do at server startup""" asyncio.create_task(Gatherer().start_metering_daemon()) | 14 | 7 |
69,021,815 | 2021-9-2 | https://stackoverflow.com/questions/69021815/how-to-read-json-file-with-comments | The comments are causing errors. I have a contents.json file which looks like: { "Fridge": [ ["apples"], ["chips","cake","10"] // This comment here is causing the error ], "car": [ ["engine","tires","fuel"], ] } My Python script is like this: import json jsonfile = open('contents.json','r') jsondata = jsonfile.read() objec = json.loads(jsondata) list_o = objec['Fridge'] for i in (list_o): print(i) In list_o, I am trying to load Fridge from the contents.json file; when the JSON file has that comment, it gives me an error, and when the JSON file doesn't have the comment, the script runs properly. I understand that comments are not proper JSON format, but is there any way to ignore comments in a JSON file? | Read the file line by line and remove the comment part: import json jsondata = "" with open('contents.json', 'r') as jsonfile: for line in jsonfile: jsondata += line.split("//")[0] objec = json.loads(jsondata) list_o = objec['Fridge'] for i in (list_o): print(i) ['apples'] ['chips', 'cake', '10'] Update You can also simply use a library such as commentjson. Just replace objec = json.loads(jsondata) with: import commentjson # python3 -m pip install commentjson objec = commentjson.loads(jsondata) | 5 | 1 |
69,022,873 | 2021-9-2 | https://stackoverflow.com/questions/69022873/is-there-any-straightforward-option-of-unpacking-a-dictionary | If I do something like this some_obj = {"a": 1, "b": 2, "c": 3} first, *rest = some_obj I'll get a list, but I want it in 2 dictionaries: first = {"a": 1} and rest = {"b": 2, "c": 3}. As I understand, I can make a function, but I wonder if I can make it in one line, like in javascript with spread operator. | I don't know if there is a reliable way to achieve this in one line, But here is one method. First unpack the keys and values(.items()). Using some_obj only iterate through the keys. >>> some_obj = {"a":1, "b":2, "c": 3} >>> first, *rest = some_obj.items() But this will return a tuple, >>> first ('a', 1) >>> rest [('b', 2), ('c', 3)] But you can again convert back to dict with just a dict call. >>> dict([first]) {'a': 1} >>> dict(rest) {'b': 2, 'c': 3} | 5 | 8 |
69,016,584 | 2021-9-1 | https://stackoverflow.com/questions/69016584/python-is-there-a-shorthand-for-eg-printftypevar-typevar | Is there a shorthand in Python for (e.g.) print(f'type(var) = {type(var)}'), without having to state the object in the text and the {.}? The short answer may be "no", but I had to ask! E.g. in SAS one may use &= to output a macro variable and its value to the log... %let macrovar = foobar; %put &=macrovar; which returns in the log: MACROVAR = foobar This is my first question, and I found it difficult to search for an answer, so apologies if it's been asked and answered. | Indeed there is. As of python 3.8, you can simply type f'{type(var)=}', and you will get the output you desire: >>> x = {} >>> f'{x=}' 'x={}' >>> f'{type(x)=}' "type(x)=<class 'dict'>" Further reading: The "What's New In Python 3.8" page The documentation for f-strings The discussion on BPO that led to this feature being implemented. | 5 | 7 |
69,015,915 | 2021-9-1 | https://stackoverflow.com/questions/69015915/spliting-a-list-into-n-uneven-buckets-with-all-combinations | I have a list like: lst = [1,2,3,4,5,6,7,8,9,10] and I want to get the combination of all splits for a given n bucket without changing the order of the list. Output exp for n=3: [ [1],[2],[3,4,5,6,7,8,9,10], [1],[2,3],[4,5,6,7,8,9,10], [1],[2,3,4],[5,6,7,8,9,10], . . . [1,2,3,4,5,6,7,8],[9],[10], ] Python is the language I use but if you can direct me to an algorithm that would nice as well. I see this problem is usually apllied on strings. But couldn't figure it out on the list. P.S. this is my first question. Any feedback is appreciated on how to improve the question. | Try: from itertools import product def generate(n, l): for c in product(range(1, l), repeat=n - 1): s = sum(c) if s > l - 1: continue yield *c, l - s lst = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] n = 3 for groups in generate(n, len(lst)): l, out = lst, [] for g in groups: out.append(l[:g]) l = l[g:] print(out) Prints: [[1], [2], [3, 4, 5, 6, 7, 8, 9, 10]] [[1], [2, 3], [4, 5, 6, 7, 8, 9, 10]] [[1], [2, 3, 4], [5, 6, 7, 8, 9, 10]] [[1], [2, 3, 4, 5], [6, 7, 8, 9, 10]] [[1], [2, 3, 4, 5, 6], [7, 8, 9, 10]] [[1], [2, 3, 4, 5, 6, 7], [8, 9, 10]] [[1], [2, 3, 4, 5, 6, 7, 8], [9, 10]] [[1], [2, 3, 4, 5, 6, 7, 8, 9], [10]] [[1, 2], [3], [4, 5, 6, 7, 8, 9, 10]] [[1, 2], [3, 4], [5, 6, 7, 8, 9, 10]] [[1, 2], [3, 4, 5], [6, 7, 8, 9, 10]] [[1, 2], [3, 4, 5, 6], [7, 8, 9, 10]] [[1, 2], [3, 4, 5, 6, 7], [8, 9, 10]] [[1, 2], [3, 4, 5, 6, 7, 8], [9, 10]] [[1, 2], [3, 4, 5, 6, 7, 8, 9], [10]] [[1, 2, 3], [4], [5, 6, 7, 8, 9, 10]] [[1, 2, 3], [4, 5], [6, 7, 8, 9, 10]] [[1, 2, 3], [4, 5, 6], [7, 8, 9, 10]] [[1, 2, 3], [4, 5, 6, 7], [8, 9, 10]] [[1, 2, 3], [4, 5, 6, 7, 8], [9, 10]] [[1, 2, 3], [4, 5, 6, 7, 8, 9], [10]] [[1, 2, 3, 4], [5], [6, 7, 8, 9, 10]] [[1, 2, 3, 4], [5, 6], [7, 8, 9, 10]] [[1, 2, 3, 4], [5, 6, 7], [8, 9, 10]] [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10]] [[1, 2, 3, 4], [5, 6, 7, 8, 9], [10]] [[1, 2, 3, 4, 5], [6], [7, 8, 9, 10]] [[1, 2, 3, 4, 5], [6, 7], [8, 9, 10]] [[1, 2, 3, 4, 5], [6, 7, 8], [9, 10]] [[1, 2, 3, 4, 5], [6, 7, 8, 9], [10]] [[1, 2, 3, 4, 5, 6], [7], [8, 9, 10]] [[1, 2, 3, 4, 5, 6], [7, 8], [9, 10]] [[1, 2, 3, 4, 5, 6], [7, 8, 9], [10]] [[1, 2, 3, 4, 5, 6, 7], [8], [9, 10]] [[1, 2, 3, 4, 5, 6, 7], [8, 9], [10]] [[1, 2, 3, 4, 5, 6, 7, 8], [9], [10]] | 6 | 4 |
69,015,534 | 2021-9-1 | https://stackoverflow.com/questions/69015534/seaborn-heatmap-annotation-valueerror-unknown-format-code-g-for-object-of-typ | I want to draw a seaborn.heatmap and annotate only some rows/columns. Example where all cells have annotation: import seaborn as sns import matplotlib.pyplot as plt import numpy as np n1 = 5 n2 = 10 M = np.random.random((n1, n2)) fig, ax = plt.subplots() sns.heatmap(ax = ax, data = M, annot = True) plt.show() Following these examples (paragraph Adding Value Annotations), it is possible to pass to seaborn.heatmap an array with annotations for each cell as annot parameter: annot : bool or rectangular dataset, optional If True, write the data value in each cell. If an array-like with the same shape as data, then use this to annotate the heatmap instead of the data. Note that DataFrames will match on position, not index. If I try to generate an array of str and pass it as annot parameter to seaborn.heatmap I get the following error: Traceback (most recent call last): File "C:/.../myfile.py", line 16, in <module> sns.heatmap(ax = ax, data = M, annot = A) File "C:\venv\lib\site-packages\seaborn\_decorators.py", line 46, in inner_f return f(**kwargs) File "C:\venv\lib\site-packages\seaborn\matrix.py", line 558, in heatmap plotter.plot(ax, cbar_ax, kwargs) File "C:\venv\lib\site-packages\seaborn\matrix.py", line 353, in plot self._annotate_heatmap(ax, mesh) File "C:\venv\lib\site-packages\seaborn\matrix.py", line 262, in _annotate_heatmap annotation = ("{:" + self.fmt + "}").format(val) ValueError: Unknown format code 'g' for object of type 'numpy.str_' Code which generates the ValueError (in this case I try to remove annotations of the 4th columns as an example): import seaborn as sns import matplotlib.pyplot as plt import numpy as np n1 = 5 n2 = 10 M = np.random.random((n1, n2)) A = np.array([[f'{M[i, j]:.2f}' for j in range(n2)] for i in range(n1)]) A[:, 3] = '' fig, ax = plt.subplots(figsize = (6, 3)) sns.heatmap(ax = ax, data = M, annot = A) plt.show() What is the cause of this error? How can I generate a seaborn.heatmap and annotate only selected rows/columns? | It is a formatting issue. Here the fmt = '' is required if you are using non-numeric labels (defaults to: fmt='.2g') which consider only for numeric values and throw an error for labels with text format. import seaborn as sns import matplotlib.pyplot as plt import numpy as np n1 = 5 n2 = 10 M = np.random.random((n1, n2)) A = np.array([[f'{M[i, j]:.2f}' for j in range(n2)] for i in range(n1)]) A[:, 3] = '' fig, ax = plt.subplots(figsize = (6, 3)) sns.heatmap(ax = ax, data = M, annot = A, fmt='') plt.show() | 12 | 15 |
69,009,440 | 2021-9-1 | https://stackoverflow.com/questions/69009440/bash-how-to-capture-the-version-from-rpm | This is what I get when I try to find the Kafka version: rpm -qa | grep "^kafka_" kafka_2_6_5_0_292-1.0.0.2.6.5.0-292.noarch The Kafka version is 1.0, so I did the following in order to cut out the Kafka version: rpm -qa | grep "^kafka_" | sed s'/-/ /g' | awk '{print $2}' | cut -c 1-3 1.0 <----- result The above CLI seems inelegant and the syntax is long. Can we do it better, maybe with a Perl or Python one-liner? | Refactoring your code rpm -qa | grep "^kafka_" | sed s'/-/ /g' | awk '{print $2}' | cut -c 1-3 1st step: use AWK's FS (Field Separator) instead of preprocessing with sed rpm -qa | grep "^kafka_" | awk 'BEGIN{FS="-"}{print $2}' | cut -c 1-3 2nd step: attach the {print $2} action only to lines matching /^kafka_/ rather than filtering with grep rpm -qa | awk 'BEGIN{FS="-"}/^kafka_/{print $2}' | cut -c 1-3 3rd step: use AWK's substr function in place of cut -c rpm -qa | awk 'BEGIN{FS="-"}/^kafka_/{print substr($2,1,3)}' Disclaimer: my answer assumes you want behavior exactly like your original code, even if possibly unexpected, i.e. it takes the first 3 characters of the version part regardless of how many digits are in its second component, so for example 1.15.0.2.6.5.0-292 yields 1.1 | 6 | 4 |
69,006,887 | 2021-9-1 | https://stackoverflow.com/questions/69006887/return-multiple-values-from-a-pandas-rolling-apply-function | I have a function that needs to return multiple values: def max_dd(ser): ... compute i,j,dd return i,j,dd if I have code like this that calls this function passing in a series: date1, date2, dd = df.rolling(window).apply(max_dd) however, I get an error: pandas.core.base.DataError: No numeric types to aggregate If I return a single value from max_dd, everything is fine. How do I return multiple values from a function that has been "apply"? | Rolling apply can only produce single numeric values. There is no support for multiple returns or even nonnumeric returns (like something as simple as a string) from rolling apply. Any answer to this question will be a work around. That said, a viable workaround is to take advantage of the fact that rolling objects are iterable (as of pandas 1.1.0). What’s new in 1.1.0 (July 28, 2020) Made pandas.core.window.rolling.Rolling and pandas.core.window.expanding.Expanding iterable(GH11704) Meaning that it is possible to take advantage of the faster grouping and indexing operations of the rolling function, but obtain more flexible behaviour with python: def some_fn(df_): """ When iterating over a rolling window it disregards the min_periods argument of rolling and will produce DataFrames for all windows The input is also of type DataFrame not Series You are completely responsible for doing all operations here, including ignoring values if the input is not of the correct shape or format :param df_: A DataFrame produced by rolling :return: a column joined, and the max value within the window """ return ','.join(df_['a']), df_['a'].max() window = 5 results = pd.DataFrame([some_fn(df_) for df_ in df.rolling(window)]) Sample DataFrame and output: df = pd.DataFrame({'a': list('abdesfkm')}) df: a 0 a 1 b 2 d 3 e 4 s 5 f 6 k 7 m result: 0 1 0 a a 1 a,b b 2 a,b,d d 3 a,b,d,e e 4 a,b,d,e,s s 5 b,d,e,s,f s 6 d,e,s,f,k s 7 e,s,f,k,m s | 5 | 14 |
68,919,220 | 2021-8-25 | https://stackoverflow.com/questions/68919220/using-getattr-to-access-built-in-functions | I would like to use getattr() to access Python's built-in functions. Is that possible? For example: getattr(???, 'abs') I know I can just simply do: >>> abs <built-in function abs> But I want to use getattr, because the keyword names are strings. | The builtins module: You could try importing builtins module: >>> import builtins >>> getattr(builtins, 'abs') <built-in function abs> >>> As mentioned in the documentation: This module provides direct access to all ‘built-in’ identifiers of Python; for example, builtins.open is the full name for the built-in function open(). See Built-in Functions and Built-in Constants for documentation. So the above mentions that builtins.open is the open function. So abs is the same builtins.abs is the same thing as abs. But for gettatr, getattr(builtins, 'abs') is also the same as builtins.abs. The original __builtins__ (NOT RECOMMENDED): You could try getting from the __builtins__: >>> getattr(__builtins__, 'abs') <built-in function abs> >>> As mentioned in the documentation: CPython implementation detail: Users should not touch __builtins__; it is strictly an implementation detail. Users wanting to override values in the builtins namespace should import the builtins module and modify its attributes appropriately. The builtins namespace associated with the execution of a code block is actually found by looking up the name __builtins__ in its global namespace; this should be a dictionary or a module (in the latter case the module’s dictionary is used). By default, when in the main module, __builtins__ is the built-in module builtins; when in any other module, __builtins__ is an alias for the dictionary of the builtins module itself. As you can see, it's not recommended, also usually the __builtins__ is a dict, rather than a module. If you write your code in modules, __builtins__ would return dictionary aliases of the builtins module, which would give something like: {..., 'abs': <built-in function abs>, ...}. More on getattr: Just to have more idea about getattr, as mentioned in the documentation: Return the value of the named attribute of object. name must be a string. If the string is the name of one of the object’s attributes, the result is the value of that attribute. For example, getattr(x, 'foobar') is equivalent to x.foobar. If the named attribute does not exist, default is returned if provided, otherwise AttributeError is raised. So: >>> import builtins >>> getattr(builtins, 'abs') <built-in function abs> >>> Is the same as: >>> import builtins >>> builtins.abs <built-in function abs> >>> So you might be wondering since: >>> abs <built-in function abs> Gives the same thing, why we can't just do: getattr(abs) The reason we can't do that is that that getattr is suppose to be calling methods/functions/classes of a classes/modules. The reason using getattr(builtins, 'abs') works is because builtins is a module and abs is a class/method, it stores all the built-in Python keywords as methods in that module. All the keywords are showed on this page of the documentation. | 4 | 20 |
68,916,383 | 2021-8-25 | https://stackoverflow.com/questions/68916383/can-i-disable-type-errors-from-third-party-packages-in-pylance | Some of the packages I use don't type hint their code, so when I use them, Pylance keeps telling me that the functions I use have partially unknown types, which is a problem I can't fix. Is there a way to disable such errors? | If you're absolutely certain of the type you're getting from the external library and you're sure it's not documented through typeshed either, you can always cast it to signal to the type checker it's to be treated as that type. from typing import cast from elsewhere import Ham spam = some_untyped_return() ham = cast(Ham, spam) | 20 | 2 |
68,961,796 | 2021-8-28 | https://stackoverflow.com/questions/68961796/how-do-i-melt-a-pandas-dataframe | On the pandas tag, I often see users asking questions about melting dataframes in pandas. I am going to attempt a canonical Q&A (self-answer) with this topic. I am is going to clarify: What is melt? How do I use melt? When do I use melt? I see some hotter questions about melt, like: Convert columns into rows with Pandas: This one actually could be good, but some more explanation would be better. Pandas Melt Function: A nice question, and the answer is good, but it's a bit too vague and doesn't have much explanation. Melting a pandas dataframe: Also a nice answer! But it's only for that particular situation, which is pretty simple, only pd.melt(df) Pandas dataframe use columns as rows (melt): Very neat! But the problem is that it's only for the specific question the OP asked, which is also required to use pivot_table as well. So I am going to attempt a canonical Q&A for this topic. Dataset: I will have all my answers on this dataset of random grades for random people with random ages (easier to explain for the answers :D): import pandas as pd df = pd.DataFrame({'Name': ['Bob', 'John', 'Foo', 'Bar', 'Alex', 'Tom'], 'Math': ['A+', 'B', 'A', 'F', 'D', 'C'], 'English': ['C', 'B', 'B', 'A+', 'F', 'A'], 'Age': [13, 16, 16, 15, 15, 13]}) >>> df Name Math English Age 0 Bob A+ C 13 1 John B B 16 2 Foo A B 16 3 Bar F A+ 15 4 Alex D F 15 5 Tom C A 13 Problems: Problem 1: How do I melt a dataframe so that the original dataframe becomes the following? Name Age Subject Grade 0 Bob 13 English C 1 John 16 English B 2 Foo 16 English B 3 Bar 15 English A+ 4 Alex 17 English F 5 Tom 12 English A 6 Bob 13 Math A+ 7 John 16 Math B 8 Foo 16 Math A 9 Bar 15 Math F 10 Alex 17 Math D 11 Tom 12 Math C I want to transpose this so that one column would be each subject and the other columns would be the repeated names of the students and their age and score. Problem 2: This is similar to Problem 1, but this time I want to make the Problem 1 output Subject column only have Math, I want to filter out the English column: Name Age Subject Grades 0 Bob 13 Math A+ 1 John 16 Math B 2 Foo 16 Math A 3 Bar 15 Math F 4 Alex 15 Math D 5 Tom 13 Math C I want the output to be like the above. Problem 3: If I was to group the melt and order the students by their scores, how would I be able to do that, to get the desired output like the below: value Name Subjects 0 A Foo, Tom Math, English 1 A+ Bob, Bar Math, English 2 B John, John, Foo Math, English, English 3 C Tom, Bob Math, English 4 D Alex Math 5 F Bar, Alex Math, English I need it to be ordered and the names separated by comma and also the Subjects separated by comma in the same order respectively. Problem 4: How would I unmelt a melted dataframe? Let's say I already melted this dataframe: df = df.melt(id_vars=['Name', 'Age'], var_name='Subject', value_name='Grades') To become: Name Age Subject Grades 0 Bob 13 Math A+ 1 John 16 Math B 2 Foo 16 Math A 3 Bar 15 Math F 4 Alex 15 Math D 5 Tom 13 Math C 6 Bob 13 English C 7 John 16 English B 8 Foo 16 English B 9 Bar 15 English A+ 10 Alex 15 English F 11 Tom 13 English A Then how would I translate this back to the original dataframe, the below? Name Math English Age 0 Bob A+ C 13 1 John B B 16 2 Foo A B 16 3 Bar F A+ 15 4 Alex D F 15 5 Tom C A 13 Problem 5: If I was to group by the names of the students and separate the subjects and grades by comma, how would I do it? 
Name Subject Grades 0 Alex Math, English D, F 1 Bar Math, English F, A+ 2 Bob Math, English A+, C 3 Foo Math, English A, B 4 John Math, English B, B 5 Tom Math, English C, A I want to have a dataframe like above. Problem 6: If I was is going to completely melt my dataframe, all columns as values, how would I do it? Column Value 0 Name Bob 1 Name John 2 Name Foo 3 Name Bar 4 Name Alex 5 Name Tom 6 Math A+ 7 Math B 8 Math A 9 Math F 10 Math D 11 Math C 12 English C 13 English B 14 English B 15 English A+ 16 English F 17 English A 18 Age 13 19 Age 16 20 Age 16 21 Age 15 22 Age 15 23 Age 13 I want to have a dataframe like above. All columns as values. | Note for pandas versions < 0.20.0: I will be using df.melt(...) for my examples, but you will need to use pd.melt(df, ...) instead. Documentation references: Most of the solutions here would be used with melt, so to know the method melt, see the documentation explanation. Unpivot a DataFrame from wide to long format, optionally leaving identifiers set. This function is useful to massage a DataFrame into a format where one or more columns are identifier variables (id_vars), while all other columns, considered measured variables (value_vars), are “unpivoted” to the row axis, leaving just two non-identifier columns, ‘variable’ and ‘value’. Parameters id_vars : tuple, list, or ndarray, optional Column(s) to use as identifier variables. value_vars : tuple, list, or ndarray, optional Column(s) to unpivot. If not specified, uses all columns that are not set as id_vars. var_name : scalar Name to use for the ‘variable’ column. If None it uses frame.columns.name or ‘variable’. value_name : scalar, default ‘value’ Name to use for the ‘value’ column. col_level : int or str, optional If columns are a MultiIndex then use this level to melt. ignore_index : bool, default True If True, original index is ignored. If False, the original index is retained. Index labels will be repeated as necessary. New in version 1.1.0. Logic to melting: Melting merges multiple columns and converts the dataframe from wide to long, for the solution to Problem 1 (see below), the steps are: First we got the original dataframe. Then the melt firstly merges the Math and English columns and makes the dataframe replicated (longer). Then finally it adds the column Subject which is the subject of the Grades columns value, respectively: This is the simple logic to what the melt function does. Solutions: Problem 1: Problem 1 could be solve using pd.DataFrame.melt with the following code: print(df.melt(id_vars=['Name', 'Age'], var_name='Subject', value_name='Grades')) This code passes the id_vars argument to ['Name', 'Age'], then automatically the value_vars would be set to the other columns (['Math', 'English']), which is transposed into that format. You could also solve Problem 1 using stack like the below: print( df.set_index(["Name", "Age"]) .stack() .reset_index(name="Grade") .rename(columns={"level_2": "Subject"}) .sort_values("Subject") .reset_index(drop=True) ) This code sets the Name and Age columns as the index and stacks the rest of the columns Math and English, and resets the index and assigns Grade as the column name, then renames the other column level_2 to Subject and then sorts by the Subject column, then finally resets the index again. 
Both of these solutions output: Name Age Subject Grade 0 Bob 13 English C 1 John 16 English B 2 Foo 16 English B 3 Bar 15 English A+ 4 Alex 17 English F 5 Tom 12 English A 6 Bob 13 Math A+ 7 John 16 Math B 8 Foo 16 Math A 9 Bar 15 Math F 10 Alex 17 Math D 11 Tom 12 Math C Problem 2: This is similar to my first question, but this one I only one to filter in the Math columns, this time the value_vars argument can come into use, like the below: print( df.melt( id_vars=["Name", "Age"], value_vars="Math", var_name="Subject", value_name="Grades", ) ) Or we can also use stack with column specification: print( df.set_index(["Name", "Age"])[["Math"]] .stack() .reset_index(name="Grade") .rename(columns={"level_2": "Subject"}) .sort_values("Subject") .reset_index(drop=True) ) Both of these solutions give: Name Age Subject Grade 0 Bob 13 Math A+ 1 John 16 Math B 2 Foo 16 Math A 3 Bar 15 Math F 4 Alex 15 Math D 5 Tom 13 Math C Problem 3: Problem 3 could be solved with melt and groupby, using the agg function with ', '.join, like the below: print( df.melt(id_vars=["Name", "Age"]) .groupby("value", as_index=False) .agg(", ".join) ) It melts the dataframe then groups by the grades and aggregates them and joins them by a comma. stack could be also used to solve this problem, with stack and groupby like the below: print( df.set_index(["Name", "Age"]) .stack() .reset_index() .rename(columns={"level_2": "Subjects", 0: "Grade"}) .groupby("Grade", as_index=False) .agg(", ".join) ) This stack function just transposes the dataframe in a way that is equivalent to melt, then resets the index, renames the columns and groups and aggregates. Both solutions output: Grade Name Subjects 0 A Foo, Tom Math, English 1 A+ Bob, Bar Math, English 2 B John, John, Foo Math, English, English 3 C Bob, Tom English, Math 4 D Alex Math 5 F Bar, Alex Math, English Problem 4: How would I unmelt a melted dataframe? Let's say I already melted this dataframe: df = df.melt(id_vars=['Name', 'Age'], var_name='Subject', value_name='Grades') This could be solved with pivot_table. We would have to specify the arguments values, index, columns and also aggfunc. We could solve it with the below code: print( df.pivot_table("Grades", ["Name", "Age"], "Subject", aggfunc="first") .reset_index() .rename_axis(columns=None) ) Output: Name Age English Math 0 Alex 15 F D 1 Bar 15 A+ F 2 Bob 13 C A+ 3 Foo 16 B A 4 John 16 B B 5 Tom 13 A C The melted dataframe is converted back to the exact same format as the original dataframe. We first pivot the melted dataframe and then reset the index and remove the column axis name. Problem 5: Problem 5 could be solved with melt and groupby like the following: print( df.melt(id_vars=["Name", "Age"], var_name="Subject", value_name="Grades") .groupby("Name", as_index=False) .agg(", ".join) ) That melts and groups by Name. Or you could stack: print( df.set_index(["Name", "Age"]) .stack() .reset_index() .groupby("Name", as_index=False) .agg(", ".join) .rename({"level_2": "Subjects", 0: "Grades"}, axis=1) ) Both codes output: Name Subjects Grades 0 Alex Math, English D, F 1 Bar Math, English F, A+ 2 Bob Math, English A+, C 3 Foo Math, English A, B 4 John Math, English B, B 5 Tom Math, English C, A Problem 6: Problem 6 could be solved with melt and no column needed to be specified, just specify the expected column names: print(df.melt(var_name='Column', value_name='Value')) That melts the whole dataframe. 
Or you could stack: print( df.stack() .reset_index(level=1) .sort_values("level_1") .reset_index(drop=True) .set_axis(["Column", "Value"], axis=1) ) Both codes output: Column Value 0 Age 16 1 Age 15 2 Age 15 3 Age 16 4 Age 13 5 Age 13 6 English A+ 7 English B 8 English B 9 English A 10 English F 11 English C 12 Math C 13 Math A+ 14 Math D 15 Math B 16 Math F 17 Math A 18 Name Alex 19 Name Bar 20 Name Tom 21 Name Foo 22 Name John 23 Name Bob | 49 | 37 |
68,957,800 | 2021-8-27 | https://stackoverflow.com/questions/68957800/how-to-fix-pylance-syntax-highlighting-showing-wrong-color-for-self-and-cls-pyth | I have encountered this issue when I use Pylance and syntax highlighting is enabled for python in the VSCode with default or the visual studio theme. self and cls parameter are LightSkyBlue color like other parameters It should be like this: | Added the color code inside the settings.json file for the dark themes I use. // correct color self and cls python "editor.semanticTokenColorCustomizations": { "[Default Dark+]": { "rules": { "selfParameter": "#569CD6", "clsParameter": "#569CD6" }, }, "[Visual Studio Dark]": { "rules": { "selfParameter": "#569CD6", "clsParameter": "#569CD6" }, }, "[Default Dark Modern]": { "rules": { "selfParameter": "#569CD6", "clsParameter": "#569CD6" }, } }, Add theme settings based on what theme you are using. for example, I just changed my theme to [Default Dark Modern] and then added the color settings. These two issues on pylance and vscode github repository helped: https://github.com/microsoft/pylance-release/issues/323 https://github.com/microsoft/vscode/issues/118946 | 7 | 11 |
68,924,471 | 2021-8-25 | https://stackoverflow.com/questions/68924471/plotly-express-doesnt-load-and-refuse-to-connect | I have this simple program that should display a pie chart, but whenever I run the program, it opens a page on Chrome and just keeps loading without any display, and sometimes it refuses to connect. How do I solve this? P.S.: I would like to use it offline, and I'm running it using cmd on windows10 import pandas as pd import numpy as np from datetime import datetime import plotly.express as px def graph(dataframe): figure0 = px.pie(dataframe, values=dataframe['POPULATION'], names=dataframe['CONTINENT']) figure0.show() df = pd.DataFrame({'POPULATION': [60, 17, 9, 13, 1], 'CONTINENT': ['Asia', 'Africa', 'Europe', 'Americas', 'Oceania']}) graph(df) | Disclaimer: I extracted this answer from the OPs question. Answers should not be contained in the question itself. Answer provided by g_odim_3: So instead of figure0.show(), I used figure0.write_html('first_figure.html', auto_open=True) and it worked: import pandas as pd import numpy as np from datetime import datetime import plotly.express as px def graph(dataframe): figure0 = px.pie(dataframe, values=dataframe['POPULATION'], names=dataframe['CONTINENT'], title='Global Population') # figure0.show() figure0.write_html('first_figure.html', auto_open=True) df = pd.DataFrame({'POPULATION':[60, 17, 9, 13, 1], 'CONTINENT':['Asia', 'Africa', 'Europe', 'Americas', 'Oceania']}) graph(df) | 5 | 7 |
68,999,178 | 2021-8-31 | https://stackoverflow.com/questions/68999178/pipenv-error-no-python-at-c-python39-python-exe | I installed and added Python3.9 and Pip to the PATH through the installer. python --version # Python 3.9.7 pip --version # pip 21.2.4 from C:\Users\{MyUserName}\AppData\Local\Programs\Python\Python39\lib\site-packages\pip (python 3.9) I installed pipenv with pip install pipenv and pipenv --version outputs pipenv, version 2021.5.29. Although, if I try to install any package with pipenv, or just enter the pipenv shell and then run python --version, I always get No Python at 'C:\Python39\python.exe'. Python sys path is C:\Users\{MyUserName}\AppData\Local\Programs\Python\Python39, so why does pipenv look into another folder? And how can I fix this? I'm running all these commands in git bash. | For anyone running into this error, run the following to delete the virtual environment (built with the previous/future version of Python): cd $project_folder pipenv --rm Then rerun this to build your pipenv virtual environment with your new version of Python: pipenv install | 9 | 24 |
68,916,893 | 2021-8-25 | https://stackoverflow.com/questions/68916893/typeerror-numpy-dtypemeta-object-is-not-subscriptable | I'm trying to type hint a numpy ndarray like this: RGB = numpy.dtype[numpy.uint8] ThreeD = tuple[int, int, int] def load_images(paths: list[str]) -> tuple[list[numpy.ndarray[ThreeD, RGB]], list[str]]: ... but at the first line when I run this, I got the following error: RGB = numpy.dtype[numpy.uint8] TypeError: 'numpy._DTypeMeta' object is not subscriptable How do I type hint a ndarray correctly? | It turns out that strongly type a numpy array is not straightforward at all. I spent a couple of hours to figure out how to do it properly. A simple method that do not add yet another dependency to your project is to use a trick described here. Just wrap numpy types with with ': import numpy import numpy.typing as npt from typing import cast, Type, Sequence import typing RGB: typing.TypeAlias = 'numpy.dtype[numpy.uint8]' ThreeD: typing.TypeAlias = tuple[int, int, int] NDArrayRGB: typing.TypeAlias = 'numpy.ndarray[ThreeD, RGB]' def load_images(paths: list[str]) -> tuple[list[NDArrayRGB], list[str]]: ... The trick is to use single-quotes to avoid the infamous TypeError: 'numpy._DTypeMeta' object is not subscriptable when Python tries to interpret the [] in the expression. This trick is well handled for instance by VSCode Pylance type-checker: Notice that the colors for types are respected and that the execution gives no error. Note about nptyping As suggested by @ddejohn, one can use nptyping. Just install the package: pip install nptyping. However, as of now (16 June 2022), there is no Tuple type defined in nptyping so you won't be able to prefectly type you code that way. I have open a new issue so maybe in the future it will work. edits Turns out there is a different way to express a tuple as a nptyping.Shape as answered by ramonhagenaars, which is also elegant: from nptyping import NDArray, Shape, UInt8 # A 1-dimensional array (i.e. 1 RGB color). RGBArray1D = NDArray[Shape["[r, g, b]"], UInt8] # A 2-dimensional array (i.e. an array of RGB colors). RGBArrayND = NDArray[Shape["*, [r, g, b]"], UInt8] def load_images_trick(paths: list[str]) -> tuple[list[RGBArrayND], list[str]]: ... However, this solution is not well supported by VSCode Pylance, an I get an error suggestion for Shape: Expected class type but received "Literal" "Literal" is not a class "Literal" is not a classPylancereportGeneralTypeIssues Pylance(reportGeneralTypeIssues) | 11 | 5 |
68,929,799 | 2021-8-25 | https://stackoverflow.com/questions/68929799/pysimplegui-right-justify-a-button-in-a-frame | I am building a simple GUI with pysimplegui and want to right-justify a button inside a frame. I have found details on how to do this with text but not with buttons. For example, I would like the button below to snap to the right side of the frame with the groove around it. I want this: To look more like this: But without having to add in a manually adjusted blank text element to get it close as this often doesn't line up correctly (note the commented out sg.Text("", size=(22, 1)) line below). import sys import PySimpleGUI as sg sg.theme("Light Blue 2") layout = [ [ sg.Text("Target folder", size=(9, 1)), sg.InputText(default_text="Choose a folder...", size=(59, 1)), sg.FolderBrowse(), ], [ sg.Frame( layout=[ [ sg.Text("First parameter", size=(15, 1)), sg.InputText(default_text="2", size=(3, 1),), ], [ sg.Text("Second parameter", size=(15, 1)), sg.InputText(default_text="8", size=(3, 1),), # sg.Text("", size=(22, 1)), sg.Submit("A nice button", size=(23, 1)), ], [sg.ProgressBar(1, orientation="h", size=(50, 20))], ], title="Cool subpanel", relief=sg.RELIEF_GROOVE, ) ], ] window = sg.Window("Test window", layout) while True: event, values = window.read() if event == "Cancel" or event is None: sys.exit() | Your question just missed a release of PySimpleGUI that makes this operation trivial. One problem with StackOverflow is - "nothing dies"... including old solutions. It's a genuine problem that I've yet to find a solid solution for. This technique was released in Sept 2021 in version 4.48.0 and uses the, then new, Push element. As the name implies, the Push will push around elements. By putting one between elements, it will push the elements apart. Here's your code with a Push added just before the button you want right justified. import sys import PySimpleGUI as sg sg.theme("Light Blue 2") layout = [ [ sg.Text("Target folder", size=(9, 1)), sg.InputText(default_text="Choose a folder...", size=(59, 1)), sg.FolderBrowse(), ], [ sg.Frame( layout=[ [ sg.Text("First parameter", size=(15, 1)), sg.InputText(default_text="2", size=(3, 1),), ], [ sg.Text("Second parameter", size=(15, 1)), sg.InputText(default_text="8", size=(3, 1),), sg.Push(), sg.Button("A nice button", size=(23, 1)), ], [sg.ProgressBar(1, orientation="h", size=(50, 20))], ], title="Cool subpanel", relief=sg.RELIEF_GROOVE, ) ], ] window = sg.Window("Test window", layout) while True: event, values = window.read() if event == "Cancel" or event is None: sys.exit() And this is the result: Note (that's not super important) - I've replaced Submit with Button as using Submit is perhaps confusing. Submit is nothing more than a function that returns a Button element with the text "Submit". It doesn't do anything special and in fact provides less parameter perhaps than just Button. | 5 | 2 |
68,945,080 | 2021-8-26 | https://stackoverflow.com/questions/68945080/pytube-exceptions-regexmatcherror-get-throttling-function-name-could-not-find | I used to download songs the following way: from pytube import YouTube video = YouTube('https://www.youtube.com/watch?v=AWXvSBHB210') video.streams.get_by_itag(251).download() Since today there is this error: Traceback (most recent call last): File "C:\Users\Me\AppData\Local\Programs\Python\Python39\lib\site-packages\pytube\__main__.py", line 170, in fmt_streams extract.apply_signature(stream_manifest, self.vid_info, self.js) File "C:\Users\Me\AppData\Local\Programs\Python\Python39\lib\site-packages\pytube\extract.py", line 409, in apply_signature cipher = Cipher(js=js) File "C:\Users\Me\AppData\Local\Programs\Python\Python39\lib\site-packages\pytube\cipher.py", line 43, in __init__ self.throttling_plan = get_throttling_plan(js) File "C:\Users\Me\AppData\Local\Programs\Python\Python39\lib\site-packages\pytube\cipher.py", line 387, in get_throttling_plan raw_code = get_throttling_function_code(js) File "C:\Users\Me\AppData\Local\Programs\Python\Python39\lib\site-packages\pytube\cipher.py", line 293, in get_throttling_function_code name = re.escape(get_throttling_function_name(js)) File "C:\Users\Me\AppData\Local\Programs\Python\Python39\lib\site-packages\pytube\cipher.py", line 278, in get_throttling_function_name raise RegexMatchError( pytube.exceptions.RegexMatchError: get_throttling_function_name: could not find match for multiple During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\Me\Documents\YouTubeDownloader.py", line 3, in <module> video.streams.get_by_itag(251).download() File "C:\Users\Me\AppData\Local\Programs\Python\Python39\lib\site-packages\pytube\__main__.py", line 285, in streams return StreamQuery(self.fmt_streams) File "C:\Users\Me\AppData\Local\Programs\Python\Python39\lib\site-packages\pytube\__main__.py", line 177, in fmt_streams extract.apply_signature(stream_manifest, self.vid_info, self.js) File "C:\Users\Me\AppData\Local\Programs\Python\Python39\lib\site-packages\pytube\extract.py", line 409, in apply_signature cipher = Cipher(js=js) File "C:\Users\Me\AppData\Local\Programs\Python\Python39\lib\site-packages\pytube\cipher.py", line 43, in __init__ self.throttling_plan = get_throttling_plan(js) File "C:\Users\Me\AppData\Local\Programs\Python\Python39\lib\site-packages\pytube\cipher.py", line 387, in get_throttling_plan raw_code = get_throttling_function_code(js) File "C:\Users\Me\AppData\Local\Programs\Python\Python39\lib\site-packages\pytube\cipher.py", line 293, in get_throttling_function_code name = re.escape(get_throttling_function_name(js)) File "C:\Users\Me\AppData\Local\Programs\Python\Python39\lib\site-packages\pytube\cipher.py", line 278, in get_throttling_function_name raise RegexMatchError( pytube.exceptions.RegexMatchError: get_throttling_function_name: could not find match for multiple | I had same issue when i was using pytube 11.0.0 so found out that there is a regular expression filter mismatch in pytube library in cipher.py class function_patterns = [ r'a\.C&&\(b=a\.get\("n"\)\)&&\(b=([^(]+)\(b\),a\.set\("n",b\)\)}};', ] Now there is a update of pytube code yesterday to 11.0.1 function_patterns = [ r'a\.[A-Z]&&\(b=a\.get\("n"\)\)&&\(b=([^(]+)\(b\)', ] With this code update now downloading youtube video with pytube works!!! Update your pytube library with this command: python3 -m pip install --upgrade pytube | 28 | 16 |
68,965,072 | 2021-8-28 | https://stackoverflow.com/questions/68965072/pytorch-model-take-too-much-to-load-the-first-time-in-a-new-machine | I have a manual scaling set-up on EC2 where I'm creating instances based on an AMI which already runs my code at boot (using Systemd). I'm facing a fundamental problem: on the main instance (the one I use to create the AMI, the Python code takes 8 seconds to be ready after the image is booted, this includes importing libraries, loading state dicts of models, etc...). Now, on the images I create with the AMI, the code takes 5+ minutes to boot up the first time, it takes especially long to load the state dicts from disk to GPU memory, after the first time the code takes about the same as the main instance to load. The AMI keeps the same pycache folders as the main instance, so it shouldn't take that much time since I think the AMI should include everything, shouldn't it?. So, my question is: Is there any other caching to make CUDA / Python faster that I'm not taking into consideration? I'm only keeping the pycache/ folders, but I don't know if there's anything I could do to make sure it doesn't take that much time to boot everything the first time. This is my main structure: # Import libraries import torch import numpy as np # Import personal models (takes 1 minute) from model1 import model1 from model2 import model2 # Load first model model1_object = model1() model2_object = model2() # Load state dicts (takes 3+ minutes, the first time in new instances, seconds other times) # Note: the models are a bit heavy model1_object.load_state_dict(torch.load("model1.pth")) model2_object.load_state_dict(torch.load("model2.pth")) Note: I'm using g4dn.xlarge instances, for both the main instance and for newer ones in AWS. | This was caused because of the high latencies required while restoring AWS EBS snapshots. At first when you restore a snapshot, the latency is extremely high, explaining why the model takes so much to load in my example when the instance is freshly created. Check the initialization section of this article: https://cloudonaut.io/ebs-snapshot-pitfalls/ The only solution that I've found to use an instance fast when it is first created is to enable Fast Snapshot Restore, which costs around 500$ a month: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-fast-snapshot-restore.html If you have time to spare, you can wait until the maximum performance is achieved, or try to warm the volume up beforehand https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-initialize.html | 6 | 3 |
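A minimal warm-up sketch along the lines of the volume-initialization advice in the answer above (not from the original answer): sequentially reading the checkpoint files once forces their EBS blocks to be restored from the snapshot before `torch.load` touches them. The paths and chunk size are placeholders, and this only warms the model files themselves — a full-volume initialization as described in the linked AWS doc still needs a tool like `dd` or `fio`.

```python
import time

def warm_file(path, chunk_size=64 * 1024 * 1024):
    # Read the file end-to-end and discard the data; the goal is only to make
    # EBS fetch the underlying blocks from the snapshot now rather than later.
    start = time.time()
    with open(path, "rb") as f:
        while f.read(chunk_size):
            pass
    print(f"warmed {path} in {time.time() - start:.1f}s")

for ckpt in ("model1.pth", "model2.pth"):  # placeholder checkpoint paths
    warm_file(ckpt)

# ...then load as usual:
# model1_object.load_state_dict(torch.load("model1.pth"))
# model2_object.load_state_dict(torch.load("model2.pth"))
```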
68,930,093 | 2021-8-25 | https://stackoverflow.com/questions/68930093/modulenotfounderror-no-module-named-ffmpeg-on-spyder-although-ffmpeg-is-insta | ffmpeg is installed on Anaconda Navigator (in base(root) environment), but when I run import ffmpeg, I got this error message: ModuleNotFoundError: No module named 'ffmpeg' Why is this module not found and how can I fix this? | You need to install the ffmpeg-python module to the environment: pip install ffmpeg-python or conda install -c conda-forge ffmpeg-python from there import ffmpeg statements when using the environment should work. | 10 | 18 |
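Once the package is installed, a quick smoke test along these lines should confirm the import works (a sketch with placeholder file names — ffmpeg-python is only a wrapper, so the ffmpeg binary itself still has to be on PATH, which the conda-forge `ffmpeg` package provides):

```python
import ffmpeg  # provided by the ffmpeg-python package

# Extract the audio track from a video; file names are placeholders.
(
    ffmpeg
    .input("input.mp4")
    .output("output.mp3")
    .run(overwrite_output=True)
)
```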
68,967,514 | 2021-8-28 | https://stackoverflow.com/questions/68967514/importing-the-numpy-c-extensions-failed-amplify | Cross posted on GitHub I'm working with AWS Amplify and pipenv for my python 3.9 lambda. I'm attempting to use pandas to create a dataframe, do some processing and write it back to CSV for sagemaker inference. Reproducing code example: import pandas as pd (Code immediately fails after this) Error message: Here's the full error message: [ERROR] Runtime.ImportModuleError: Unable to import module 'index': IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE! Importing the numpy C-extensions failed. This error can happen for many reasons, often due to issues with your setup or how NumPy was installed. We have compiled some common reasons and troubleshooting tips at: https://numpy.org/devdocs/user/troubleshooting-importerror.html Please note and check the following: * The Python version is: Python3.9 from "/var/lang/bin/python3.9" * The NumPy version is: "1.21.2" and make sure that they are the versions you expect. Please carefully study the documentation linked above for further help. Original error was: No module named 'numpy.core._multiarray_umath' I first downloaded with pipenv install pandas which automatically installs numpy To solve I've tried: pipenv install numpy / pipenv uninstall numpy pipenv uninstall pandas / pipenv install pandas pipenv uninstall setuptools / pipenv install setuptools Important to note I'm on Windows 10 | I had ran into a similar issue. After much research it looks like the Lambda Layer AWSLambda-Python38-SciPy1x provided by Amazon is your best bet. More Info is here. You can manually add the Layer via the Console like so: Picture for you Or you can add the layer via the Amplify CLI. I ran the following commands on an existing lambda function: amplify function update Then select the Lambda function (serverless function) you'd like to add the layer to. Go to Lambda layers configuration. Provide the ARN under Provide existing Lambda Layer ARN. I used arn:aws:lambda:us-west-2:420165488524:layer:AWSLambda-Python38-SciPy1x:29 amplify push That should do the trick. | 4 | 3 |
68,924,790 | 2021-8-25 | https://stackoverflow.com/questions/68924790/parenthesized-context-managers-work-in-python-3-9-but-not-3-8 | So I have this simple example of a with statement. It works in Python 3.8 and 3.9: class Foo: def __enter__(self, *args): print("enter") def __exit__(self, *args): print("exit") with Foo() as f, Foo() as b: print("Foo") Output (as expected): enter enter Foo exit exit But if I add parentheses like this it only works in Python 3.9: class Foo: def __enter__(self, *args): print("enter") def __exit__(self, *args): print("exit") with (Foo() as f, Foo() as b): print("Foo") Output in 3.8: File "foo.py", line 8 with (Foo() as f, Foo() as b): ^ SyntaxError: invalid syntax I know, I could just remove the parentheses but I don't understand why it works in Python 3.9 in the first place. I could not find the relevant change log. | Parenthesized context managers are mentioned as a new feature in What’s New In Python 3.10. The changelog states: This new syntax uses the non LL(1) capacities of the new parser. Check PEP 617 for more details. But PEP 617 was already accepted in Python 3.9, as described in its changelog: Python 3.9 uses a new parser, based on PEG instead of LL(1). The new parser’s performance is roughly comparable to that of the old parser, but the PEG formalism is more flexible than LL(1) when it comes to designing new language features. We’ll start using this flexibility in Python 3.10 and later. It was even already part of the Python 3.9 grammar: with_stmt: | 'with' '(' ','.with_item+ ','? ')' ':' block | 'with' ','.with_item+ ':' [TYPE_COMMENT] block | ASYNC 'with' '(' ','.with_item+ ','? ')' ':' block | ASYNC 'with' ','.with_item+ ':' [TYPE_COMMENT] block with_item: | expression 'as' star_target &(',' | ')' | ':') | expression My guess is that because the LL1 parser was officially removed in Python 3.10, only then it was considered a new feature, while still being supported in 3.9. | 6 | 8 |
68,973,827 | 2021-8-29 | https://stackoverflow.com/questions/68973827/how-to-send-a-inlinekeyboardbutton-in-telegram-bot-periodically | I'm trying to send an InlineKeyboardHandler every x second. for that purpose I used updater.job_queue.run_repeating but it acts weird. The keyboard doesn't work unless I have another interaction with the bot first. I've written a simple piece of code that you can test. from telegram import Update, InlineKeyboardButton, InlineKeyboardMarkup from telegram.ext import Updater, CommandHandler, ConversationHandler, CallbackContext, CallbackQueryHandler user_id = '*********' tlg_token = '******************************' SELECTING_COMMAND=1 keyboard = [[InlineKeyboardButton('Button: Print Clicked', callback_data=1)],] reply_markup = InlineKeyboardMarkup(keyboard) def menu(update: Update, context: CallbackContext) -> int: update.message.reply_text('sent by command button:', reply_markup=reply_markup) return SELECTING_COMMAND def InlineKeyboardHandler(update: Update, _: CallbackContext) -> None: print('clicked') return 1 def cancel(update: Update, context: CallbackContext) -> int: return ConversationHandler.END updater = Updater(tlg_token, use_context=True) dispatcher = updater.dispatcher conv_handler = ConversationHandler( entry_points=[CommandHandler('request_button', menu)], states={ SELECTING_COMMAND: [CallbackQueryHandler(InlineKeyboardHandler)], }, fallbacks=[CommandHandler('cancel', cancel)], ) dispatcher.add_handler(conv_handler) j = updater.job_queue def talker(update): update.bot.sendMessage(chat_id=user_id, text='sent by talker:', reply_markup=reply_markup) j.run_repeating(talker, interval=10, first=0) updater.start_polling() updater.bot.sendMessage(chat_id=user_id, text='/request_button') updater.idle() I expect I can see 'clicked' printed after clicking on the button but it's not going to work unless you click on the /request_button first. Why? And how can I fix it? | The problem with your code as a_guest mentioned in the comments, is that InlineKeyboardHandler will start to work only after calling request_button command. 
Here's a working version where InlineKeyboardHandler is registered independently: from telegram import Update, InlineKeyboardButton, InlineKeyboardMarkup from telegram.ext import Updater, CommandHandler, ConversationHandler, CallbackContext, CallbackQueryHandler ################################# user_id = 0 tlg_token = 'bot_token' SELECTING_COMMAND = 1 keyboard = [[InlineKeyboardButton('Button: Print Clicked', callback_data=1)], ] reply_markup = InlineKeyboardMarkup(keyboard) ################################# def menu(update: Update, context: CallbackContext) -> int: update.message.reply_text('sent by command button:', reply_markup=reply_markup) return SELECTING_COMMAND def InlineKeyboardHandler(update: Update, _: CallbackContext) -> None: print('clicked') return 1 def cancel(update: Update, context: CallbackContext) -> int: return ConversationHandler.END updater = Updater(tlg_token, use_context=True) dispatcher = updater.dispatcher updater.dispatcher.add_handler(CallbackQueryHandler(InlineKeyboardHandler)) updater.dispatcher.add_handler(CommandHandler('request_button', menu)) j = updater.job_queue def talker(update): update.bot.sendMessage(chat_id=user_id, text='sent by talker:', reply_markup=reply_markup) j.run_repeating(talker, interval=10, first=0) updater.start_polling() updater.bot.sendMessage(chat_id=user_id, text='/request_button') updater.idle() The other solution for the problem is what OP himself mentioned in the comments where you add the CallbackQueryHandler as an entry point: entry_points=[CommandHandler('request_button', menu), CallbackQueryHandler(InlineKeyboardHandler)] | 7 | 2 |
68,971,787 | 2021-8-29 | https://stackoverflow.com/questions/68971787/unit-test-for-django-update-form | I do not understand how to manage updates on forms and related unit tests and I would really appreciate some advises =) I have a Company model, and related very simple CompanyForm: class Company(models.Model): """ Company informations - Detailed information for display purposes in the application but also used in documents built and sent by the application - Mail information to be able to send emails """ company_name = models.CharField("nom", max_length=200) comp_slug = models.SlugField("slug") logo = models.ImageField(upload_to="img/", null=True, blank=True) use_groups = models.BooleanField("utilise les groupes", default=False) # Company uses groups or not rules = [("MAJ", "Majorité"), ("PROP", "Proportionnelle")] # Default management rule rule = models.CharField( "mode de scrutin", max_length=5, choices=rules, default="MAJ" ) upd_rule = models.BooleanField("choisir la règle de répartition pour chaque événement", default=False) # Event rule might change from one to another or always use default statut = models.CharField("forme juridique", max_length=50) siret = models.CharField("SIRET", max_length=50) street_num = models.IntegerField("N° de rue", null=True, blank=True) street_cplt = models.CharField("complément", max_length=50, null=True, blank=True) address1 = models.CharField("adresse", max_length=300) address2 = models.CharField( "complément d'adresse", max_length=300, null=True, blank=True ) zip_code = models.IntegerField("code postal") city = models.CharField("ville", max_length=200) host = models.CharField("serveur mail", max_length=50, null=True, blank=True) port = models.IntegerField("port du serveur", null=True, blank=True) hname = models.EmailField("utilisateur", max_length=100, null=True, blank=True) fax = models.CharField("mot de passe", max_length=50, null=True, blank=True) use_tls = models.BooleanField("authentification requise", default=True, blank=True) class Meta: verbose_name = "Société" constraints = [ models.UniqueConstraint(fields=["comp_slug"], name="unique_comp_slug") ] def __str__(self): return self.company_name @classmethod def get_company(cls, slug): """ Retreive company from its slug """ return cls.objects.get(comp_slug=slug) class CompanyForm(forms.ModelForm): company_name = forms.CharField(label="Société", disabled=True) class Meta: model = Company exclude = [] The view is very simple too: @user_passes_test(lambda u: u.is_superuser or u.usercomp.is_admin) def adm_options(request, comp_slug): ''' Manage Company options ''' company = Company.get_company(comp_slug) comp_form = CompanyForm(request.POST or None, instance=company) if request.method == "POST": if comp_form.is_valid(): comp_form.save() return render(request, "polls/adm_options.html", locals()) This view works fine, I can update information (it's actually not used for creation, which is done thanks to the Django Admin panel). Unfortunately, I'm not able to build unit tests that will ensure update works! I tried 2 ways, but none of them worked. 
My first try was the following: class TestOptions(TestCase): def setUp(self): self.company = create_dummy_company("Société de test") self.user_staff = create_dummy_user(self.company, "staff", admin=True) self.client.force_login(self.user_staff.user) def test_adm_options_update(self): # Load company options page url = reverse("polls:adm_options", args=[self.company.comp_slug]) response = self.client.get(url) self.assertEqual(response.status_code, 200) self.assertContains(response, "0123456789") self.assertEqual(self.company.siret, "0123456789") # Options update response = self.client.post( reverse("polls:adm_options", args=[self.company.comp_slug]), {"siret": "987654321"} ) self.assertEqual(response.status_code, 200) self.assertContains(response, "987654321") self.assertNotContains(response, "0123456789") self.assertEqual(self.company.siret, "987654321") In this case, everything is OK but the latest assertion. It looks that the update has not been saved, which is actually not the case. I tried to Read the database just before, with the key stored in the context, but it remains the same. I was looking for other information when I found this topic, so I tried another way to test, even if the approach surprised me a bit (I do not see how the view is actually tested). Here is my second try (setUp() remains the same): def test_adm_options_update(self): # Load company options page url = reverse("polls:adm_options", args=[self.company.comp_slug]) response = self.client.get(url) self.assertEqual(response.status_code, 200) self.assertContains(response, "0123456789") # this is the default value in tests for this field self.assertEqual(self.company.siret, "0123456789") # Options update self.company.siret = "987654321" comp_form = CompanyForm(instance=self.company) self.assertTrue(comp_form.is_valid()) comp_form.save() company = Company.get_company(self.company.comp_slug) self.assertEqual(company.siret, "987654321") In this case, the form is just empty! I could consider my view works and go ahead, my problem is that I have a bug in another view and I would like to ensure I can build the test to find out the bug! Many thanks in advance for your answers! EDITS - Aug 30th Following advices, I tried to use self.company.refresh_from_db() but it did not change the result. A try was made to pass all fields in the self.client.post() but it fails as soon as a field is empty ('Cannot encode None as POST data' error message) It also appeared that I created a 'dummy' company for test with empty mandatory fields... and it worked anyway. A matter of testing environment ? I changed this point but I wonder if the problem is not anywhere else... 
EDITS - Sept 15th Looking for someone available to provide me with new ideas, please =) To ensure I understood the latest proposition, here is the complete code for the test: def test_adm_options_update(self): # Load company options page url = reverse("polls:adm_options", args=[self.company.comp_slug]) response = self.client.get(url) self.assertEqual(response.status_code, 200) self.assertContains(response, "0123456789") self.assertEqual(self.company.siret, "0123456789") # Apply changes company_data = copy.deepcopy(CompanyForm(instance=self.company).initial) company_data['siret'] = "987654321" response = self.client.post( reverse("polls:adm_options", args=[self.company.comp_slug]), company_data, ) self.company.refresh_from_db() self.assertEqual(response.status_code, 200) self.assertContains(response, "987654321") self.assertNotContains(response, "0123456789") self.assertEqual(self.company.siret, "987654321") Here is the function that creates the 'dummy' copany for tests: def create_dummy_company(name): return Company.objects.create( company_name=name, comp_slug=slugify(name), logo=SimpleUploadedFile(name='logo.jpg', content=b'content', content_type='image/jpeg'), statut="SARL", siret="0123456789", address1="Rue des fauvettes", zip_code="99456", city='Somewhere', host="smtp.gmail.com", port=587, hname="[email protected]", fax="toto", ) | In this case, you need to use refresh_from_db to "refresh" your object once the view and the form are done updating your object. This means that when you are currently asserting, you are using an "old snapshot" of self.company hence the failure on assertion, so you need to update it: # Options update response = self.client.post( reverse("polls:adm_options", args=[self.company.comp_slug]), {"siret": "987654321"} ) ... self.company.refresh_from_db() self.assertEqual(self.company.siret, "987654321") EDIT: Figured out a way to make this work. Since the form requires that you put in all data, you can just pass the company instance to the same form, and access initial (which will serve as your request data). You can then modify it with the changes you want, in this case for siret and logo: from django.core.files.uploadedfile import SimpleUploadedFile def test(self): company_data = CompanyForm(instance=self.company).initial company_data['logo'] = SimpleUploadedFile(name='somefile', content=b'content', content_type='image/jpeg') company_data['siret'] = "987654321" response = self.client.post( reverse("polls:adm_options", args=[self.company.comp_slug]), company_data, ) self.company.refresh_from_db() self.assertEqual(self.company.siret, "987654321") This works and passes on my end with the same exact model you have. | 4 | 6 |
68,937,783 | 2021-8-26 | https://stackoverflow.com/questions/68937783/why-do-i-get-mysql-server-has-gone-away-after-running-a-telegram-bot-for-some | I'm building a Django (ver. 3.0.5) app that uses mysqlclient (ver. 2.0.3) as the DB backend. Additionally, I've written a Django command that runs a bot written using the python-telegram-bot API, so the mission of this bot is to run indefinitely, as it has to answer to commands anytime. Problem is that approximately 24hrs. after running the bot (not necessarily being idle all the time), I get a django.db.utils.OperationalError: (2006, 'MySQL server has gone away') exception after running any command. I'm absolutely sure the MySQL server has been running all the time and is still running at the time I get this exception. The MySQL server version is 5.7.35. My assumption is that some MySQL threads get aged out and get closed, so after reusing them they won't get renewed. Has anyone bumped into this situation and knows how to solve it? Traceback (most recent call last): File "/opt/django/gip/venv/lib/python3.6/site-packages/telegram/ext/dispatcher.py", line 555, in process_update handler.handle_update(update, self, check, context) File "/opt/django/gip/venv/lib/python3.6/site-packages/telegram/ext/handler.py", line 198, in handle_update return self.callback(update, context) File "/opt/django/gip/gip/hospital/gipcrbot.py", line 114, in ayuda perfil = get_permiso_efectivo(update.message.from_user.id) File "/opt/django/gip/gip/hospital/telegram/funciones.py", line 33, in get_permiso_efectivo u = Telegram.objects.get(idtelegram=userid) File "/opt/django/gip/venv/lib/python3.6/site-packages/django/db/models/manager.py", line 82, in manager_method return getattr(self.get_queryset(), name)(*args, **kwargs) File "/opt/django/gip/venv/lib/python3.6/site-packages/django/db/models/query.py", line 411, in get num = len(clone) File "/opt/django/gip/venv/lib/python3.6/site-packages/django/db/models/query.py", line 258, in __len__ self._fetch_all() File "/opt/django/gip/venv/lib/python3.6/site-packages/django/db/models/query.py", line 1261, in _fetch_all self._result_cache = list(self._iterable_class(self)) File "/opt/django/gip/venv/lib/python3.6/site-packages/django/db/models/query.py", line 57, in __iter__ results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size) File "/opt/django/gip/venv/lib/python3.6/site-packages/django/db/models/sql/compiler.py", line 1151, in execute_sql cursor.execute(sql, params) File "/opt/django/gip/venv/lib/python3.6/site-packages/django/db/backends/utils.py", line 100, in execute return super().execute(sql, params) File "/opt/django/gip/venv/lib/python3.6/site-packages/django/db/backends/utils.py", line 68, in execute return self._execute_with_wrappers(sql, params, many=False, executor=self._execute) File "/opt/django/gip/venv/lib/python3.6/site-packages/django/db/backends/utils.py", line 77, in _execute_with_wrappers return executor(sql, params, many, context) File "/opt/django/gip/venv/lib/python3.6/site-packages/django/db/backends/utils.py", line 86, in _execute return self.cursor.execute(sql, params) File "/opt/django/gip/venv/lib/python3.6/site-packages/django/db/utils.py", line 90, in __exit__ raise dj_exc_value.with_traceback(traceback) from exc_value File "/opt/django/gip/venv/lib/python3.6/site-packages/django/db/backends/utils.py", line 86, in _execute return self.cursor.execute(sql, params) File "/opt/django/gip/venv/lib/python3.6/site-packages/django/db/backends/mysql/base.py", line 
74, in execute return self.cursor.execute(query, args) File "/opt/django/gip/venv/lib/python3.6/site-packages/MySQLdb/cursors.py", line 206, in execute res = self._query(query) File "/opt/django/gip/venv/lib/python3.6/site-packages/MySQLdb/cursors.py", line 319, in _query db.query(q) File "/opt/django/gip/venv/lib/python3.6/site-packages/MySQLdb/connections.py", line 259, in query _mysql.connection.query(self, query) django.db.utils.OperationalError: (2006, 'MySQL server has gone away') Things I have tried I already tried changing the Django settings.py file so I set an explicit value for CONN_MAX_AGE, and I also set a value for the MySQL client wait_timeout parameter, being CONN_MAX_AGE lower than wait_timeout. settings.py: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', 'OPTIONS': { 'read_default_file': '/opt/django/gip/gip/gip/my.cnf', }, 'CONN_MAX_AGE': 3600, } } my.cnf: [client] ... wait_timeout = 28800 Unfortunately, the behavior is exactly the same: I get an exception approximately 24hrs. after running the bot. Setting CONN_MAX_AGE to None won't make any difference either. I installed the mysql-server-has-gone-away python package as proposed by @r-marolahy, but it won't make a difference either. After nearly 24hours after running it the "gone away" message shows again. I also tried the approach of closing old connections: from django.db import close_old_connections try: #do your long running operation here except django.db.utils.OperationalError: close_old_connections() #do your long running operation here Still getting the same result. | I ended up scheduling a DB query every X hours (in this case, 6h) in the bot. The python-telegram-bot has a class called JobQueue which has a method called run_repeating. This will run a task every n seconds. So I declared: def check_db(context): # Do the code for running "SELECT 1" in the DB return updater.job_queue.run_repeating(check_db, interval=21600, first=21600) After this change I haven't had the same problem again. Also, calling the mostly undocumented close_if_unusable_or_obsolete() Django method from time to time works as well in my case. from django.db import connection connection.close_if_unusable_or_obsolete() | 7 | 0 |
68,939,894 | 2021-8-26 | https://stackoverflow.com/questions/68939894/implement-a-python-websocket-listener-without-async-asyncio | I'm running a websocket listener in a separate thread. I'd like to connect to the websocket then do: while True: msg = sock.wait_for_message() f(msg) i.e. no async/asyncio Is this stupid? Is there a way to do this? | In absence of a better answer, I have found https://github.com/websocket-client/websocket-client which prove painless to use. | 4 | 6 |
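For reference, a minimal blocking loop with websocket-client looks roughly like this (a sketch, not from the original answer; the URL and the `print` standing in for `f(msg)` are placeholders):

```python
from websocket import create_connection  # pip install websocket-client

ws = create_connection("wss://example.com/stream")  # placeholder endpoint
try:
    while True:
        msg = ws.recv()          # blocks until a message arrives
        print("received:", msg)  # stand-in for f(msg)
finally:
    ws.close()
```

Because the calls block, this fits naturally inside the separate listener thread described in the question, with no async/asyncio involved.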
68,995,523 | 2021-8-31 | https://stackoverflow.com/questions/68995523/how-to-get-the-first-sheet-of-an-excel-workbook-using-openpyxl | I'm able to get the desired sheet by using wb["sheet_name"] method but I want to get the first, or let's say the nth sheet, regardless of the name. wb = load_workbook(filename = xlsx_dir) # xlsx_dir is the workbook path ws = wb["Details"] # Details is the sheet name | You need to use the worksheets property of the workbook object ws = wb.worksheets[0] | 10 | 32 |
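Combined with the question's loading code, and extended to the n-th sheet (0-based index, in workbook order; the path and index below are placeholders):

```python
from openpyxl import load_workbook

xlsx_dir = "workbook.xlsx"             # placeholder: the workbook path, as in the question
n = 0                                   # any valid 0-based sheet index

wb = load_workbook(filename=xlsx_dir)
ws_first = wb.worksheets[0]             # first sheet, regardless of its name
ws_nth = wb.worksheets[n]               # n-th sheet
print(wb.sheetnames)                    # sheet names, in the same order
```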
68,938,628 | 2021-8-26 | https://stackoverflow.com/questions/68938628/why-is-anytrue-for-if-cond-much-faster-than-anycond-for | Two similar ways to check whether a list contains an odd number: any(x % 2 for x in a) any(True for x in a if x % 2) Timing results with a = [0] * 10000000 (five attempts each, times in seconds): 0.60 0.60 0.60 0.61 0.63 any(x % 2 for x in a) 0.36 0.36 0.36 0.37 0.37 any(True for x in a if x % 2) Why is the second way almost twice as fast? My testing code: from timeit import repeat setup = 'a = [0] * 10000000' expressions = [ 'any(x % 2 for x in a)', 'any(True for x in a if x % 2)', ] for expression in expressions: times = sorted(repeat(expression, setup, number=1)) print(*('%.2f ' % t for t in times), expression) Try it online! | The first method sends everything to any() whilst the second only sends to any() when there's an odd number, so any() has fewer elements to go through. | 98 | 92 |
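One rough way to see the accepted explanation in numbers (an illustrative sketch, not from the answer): wrap each generator so it counts how many items `any()` actually pulls out of it.

```python
a = [0] * 1000  # same shape as the question's all-even input, just smaller

def counted(gen, box):
    # Pass items through unchanged, counting how many any() consumes.
    for item in gen:
        box[0] += 1
        yield item

pulled = [0]
any(counted((x % 2 for x in a), pulled))
print(pulled[0])   # 1000 -> any() had to truth-test every value itself

pulled = [0]
any(counted((True for x in a if x % 2), pulled))
print(pulled[0])   # 0 -> the filtering stayed inside the generator, any() saw nothing
```

With the all-even input from the question, the filtered generator hands `any()` nothing at all; the per-element work happens inside the generator's own loop instead of round-tripping every value through `any()`.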
68,932,099 | 2021-8-26 | https://stackoverflow.com/questions/68932099/how-to-get-alembic-to-recognise-sqlmodel-database-model | Using SQLModel how to get alembic to recognise the below model? from sqlmodel import Field, SQLModel class Hero(SQLModel, table=True): id: int = Field(default=None, primary_key=True) name: str secret_name: str age: Optional[int] = None One approach I've been looking at is to import the SQLalchemy model for Alembic but looking through the source code I can't find how to do that. How to make Alembic work with SQLModel models? | There should be info about that in Advanced user guide soon with better explanation than mine but here is how I made Alimbic migrations work. First of all run alembic init migrations in your console to generate migrations folder. Inside migrations folder should be empty versions subfolder,env.py file, script.py.mako file. In script.py.mako file we should add line import sqlmodel somewhere around these two lines #script.py.mako from alembic import op import sqlalchemy as sa import sqlmodel # added Then we should edit env.py file #env.py from logging.config import fileConfig from sqlalchemy import engine_from_config from sqlalchemy import pool from alembic import context from app.models import * # necessarily to import something from file where your models are stored # this is the Alembic Config object, which provides # access to the values within the .ini file in use. config = context.config # Interpret the config file for Python logging. # This line sets up loggers basically. fileConfig(config.config_file_name) # add your model's MetaData object here # for 'autogenerate' support # from myapp import mymodel # target_metadata = mymodel.Base.metadata target_metadata = None # comment line above and instead of that write target_metadata = SQLModel.metadata While writing came up with an idea that you forgot to import something from your models.py (or anywhere else your models are stored). And that was the main problem Also, an important note would be saving changes in your models by pressing ctrl(CMD) + S - there are some issues with that. Finally,running alembic revision --autogenerate -m "your message" should generate a new .py file in versions folder with your changes. And alembic upgrade head Applies your changes to DB. | 23 | 45 |
68,990,830 | 2021-8-30 | https://stackoverflow.com/questions/68990830/how-to-preserve-axis-aspect-ratio-with-tight-layout | I have a plot with both a colorbar and a legend. I want to place the legend outside of the plot to the right of the colorbar. To accomplish this, I use bbox_to_anchor argument, but this causes the legend to get cut off: import matplotlib.pyplot as plt import numpy as np from scipy.stats import norm _, ax = plt.subplots() extent = np.r_[0, 1, 0, 1] space = np.linspace(0, 1) probs = np.array([[norm.cdf(x + y) for x in space] for y in space]) colormap = ax.imshow(probs, aspect="auto", origin="lower", extent=extent, alpha=0.5) colorbar = plt.colorbar(colormap, ax=ax) colorbar.set_label(f"Probability") ax.scatter( [0.2, 0.4, 0.6], [0.8, 0.6, 0.4], color="r", label="Labeled Points", ) plt.legend(loc="center left", bbox_to_anchor=(1.3, 0.5)) plt.title plt.show() Plot with legend cut off To fix the legend, I insert a call to plt.tight_layout() before plt.show(), but this causes the aspect ratio to get distorted: Plot with distorted aspect ratio How can I show the entire legend and preserve the aspect ratio of the axes? | You can manage the ratio between axis height and width with matplotlib.axes.Axes.set_aspect. Since you want them to be equal: ax.set_aspect(1) Then you can use matplotlib.pyplot.tight_layout to fit the legend within the figure. If you want to adjust margins too, you can use matplotlib.pyplot.subplots_adjust. Complete Code import matplotlib.pyplot as plt import numpy as np from scipy.stats import norm _, ax = plt.subplots() extent = np.r_[0, 1, 0, 1] space = np.linspace(0, 1) probs = np.array([[norm.cdf(x + y) for x in space] for y in space]) colormap = ax.imshow(probs, aspect="auto", origin="lower", extent=extent, alpha=0.5) colorbar = plt.colorbar(colormap, ax=ax) colorbar.set_label(f"Probability") ax.scatter([0.2, 0.4, 0.6], [0.8, 0.6, 0.4], color="r", label="Labeled Points",) plt.legend(loc="center left", bbox_to_anchor=(1.3, 0.5)) ax.set_aspect(1) plt.tight_layout() plt.subplots_adjust(left = 0.1) plt.show() | 6 | 2 |
69,005,034 | 2021-8-31 | https://stackoverflow.com/questions/69005034/multiple-inheritance-metaclass-conflict-involving-enum | I need a double inheritance for a class that is an Enum but also support my own methods. Here's the context: import abc from enum import Enum class MyFirstClass(abc.ABC): @abc.abstractmethod def func(self): pass class MySecondClass(Enum, MyFirstClass): VALUE_1 = 0 VALUE_2 = 1 def func(self): return 42 The declaration of MySecondClass yields the following error: TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases I tried applying this stackoverflow solution by doing: class MyMetaClass(type(Enum), type(MyFirstClass)): pass class MyFinalClass(Enum, MyFirstClass, metaclass=MyMetaClass): VALUE_1 = 0 VALUE_2 = 1 def func(self): return 42 But I get the following error: TypeError: new enumerations should be created as `EnumName([mixin_type, ...] [data_type,] enum_type)` Is this an issue specific to the Enum type, or am I missing something else regarding metaclasses? | The solution to your immediate problem is: class MyFinalClass(MyFirstClass, Enum, metaclass=MyMetaClass): pass Note that Enum is the last regular class listed. For a fully functioning abstract Enum you'll want to use the ABCEnumMeta from this answer -- otherwise missing abstract methods will not be properly flagged. | 10 | 6 |
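Putting the pieces together, a minimal sketch of the pattern (the combined metaclass follows the ABCEnumMeta recipe from the linked answer, minus its extra check that flags missing abstract methods at class-creation time):

```python
from abc import ABC, ABCMeta, abstractmethod
from enum import Enum, EnumMeta

class ABCEnumMeta(ABCMeta, EnumMeta):
    """Combined metaclass so a class can inherit from both an ABC and Enum."""

class MyFirstClass(ABC):
    @abstractmethod
    def func(self): ...

class MyFinalClass(MyFirstClass, Enum, metaclass=ABCEnumMeta):
    VALUE_1 = 0
    VALUE_2 = 1

    def func(self):
        return 42

print(MyFinalClass.VALUE_1.func())  # 42
```

Note that the regular mix-in class must come before `Enum` in the bases, as the accepted answer points out.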
69,005,509 | 2021-8-31 | https://stackoverflow.com/questions/69005509/place-an-order-in-interactive-brokers-using-api-request | First, to begin with, I was successfully able to place an order using TWS API. However, for that, as I understood, I need to run the TWS desktop version in the background. But I need to run this on my remote server. So I used a 3rd party API called IBeam and created a gateway using it, in the remote server. Now it is working well and serving the GET requests that I request from the Interactive Brokers. Now, I want to place an order in Interactive Broker, using an API request and found this doc by IB. However, for me it is not clear what they meant by each argument, so as of now I am stuck. I.e, from docs, I need to send a POST request to https://localhost:5000/v1/api/iserver/account/{accountId}/orders (with IB gateway running in localhost:5000) with the request body { "orders": [ { "acctId": "string", "conid": 0, "secType": "secType = 265598:STK", "cOID": "string", "parentId": "string", "orderType": "string", "listingExchange": "string", "isSingleGroup": true, "outsideRTH": true, "price": 0, "auxPrice": null, "side": "string", "ticker": "string", "tif": "string", "referrer": "QuickTrade", "quantity": 0, "fxQty": 0, "useAdaptive": true, "isCcyConv": true, "allocationMethod": "string", "strategy": "string", "strategyParameters": {} } ] } From what I learn from the TWS API, this was all the information needed to place an order: contract = Contract() contract.symbol = "AAPL" contract.secType = "STK" contract.exchange = "SMART" contract.currency = "USD" contract.primaryExchange = "NASDAQ" order = Order() order.action = "BUY" order.totalQuantity = 10 order.orderType = "MKT" It would be great if you could help me with a sample code to place a similar order using the REST API of Ineteractive Broker | I found this article helpful in the process of placing an order. I.e, this is a sample request that you can use to place an order { "orders": [ { "acctId": "DU4299134", "conid": 8314, "secType": "8314:STK", "cOId": "testAlgoOrder", "orderType": "LMT", "price": 142, "side": "BUY", "tif": "DAY", "quantity": 1, "strategy": "Adaptive", "strategyParameters": {"adaptivePriority": "Normal" } } ] } You can use these URLs to find more info about the strategies, url = f"https://localhost:5000/v1/api/iserver/contract/{conid}/algos" url_more_info = f"https://localhost:5000/v1/api/iserver/contract/{conid}/algos?addDescription=1&addParams=1&algos={algos}" Further, when you place an order like above, IBKR will ask you to confirm the order, which you can do by url = f"https://localhost:5000/v1/api/iserver/reply/{replyid}" data = '''{ "confirmed": true }''' response = requests.post(url, data=data, headers=headers, verify='path to .pem file') Note that you have to use the correct header when you are sending a POST requests to IBKR as mentioned here. | 8 | 4 |
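A rough end-to-end sketch of the flow described above, in Python with `requests` (not an official example — the account id, conid, certificate handling, and especially the shape of the confirmation replies are assumptions/placeholders to check against the Client Portal API docs):

```python
import requests

BASE = "https://localhost:5000/v1/api"   # local IBeam / Client Portal gateway
VERIFY = False                            # or the path to the gateway's .pem file
ACCOUNT = "DU1234567"                     # placeholder paper-trading account id
CONID = 8314                              # placeholder contract id

order = {
    "orders": [{
        "acctId": ACCOUNT,
        "conid": CONID,
        "secType": f"{CONID}:STK",
        "cOID": "my-test-order-1",
        "orderType": "MKT",
        "side": "BUY",
        "tif": "DAY",
        "quantity": 1,
    }]
}

resp = requests.post(f"{BASE}/iserver/account/{ACCOUNT}/orders", json=order, verify=VERIFY)
reply = resp.json()
print(reply)

# The gateway usually answers with one or more confirmation prompts ("replies")
# that must be acknowledged before the order is actually submitted. The exact
# reply structure assumed here (a list of dicts with an "id") is a guess.
while isinstance(reply, list) and reply and "id" in reply[0]:
    reply_id = reply[0]["id"]
    resp = requests.post(f"{BASE}/iserver/reply/{reply_id}", json={"confirmed": True}, verify=VERIFY)
    reply = resp.json()
    print(reply)
```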
68,938,614 | 2021-8-26 | https://stackoverflow.com/questions/68938614/file-pyinstaller-loader-pyimod03-importers-py-line-546-in-exec-module-modul | EDIT I'm trying to import algosec.models in a file inside the algobot package. I've tried to add --hidden-import algosec, I've also tried to add the path before importing, using sys.path.append(./../algosec) this is the error message I get when I try to run the program: Traceback (most recent call last): File "algobot_packer/algobot.py", line 2, in <module> File "PyInstaller/loader/pyimod03_importers.py", line 546, in exec_module File "algobot/cli/cli.py", line 3, in <module> File "PyInstaller/loader/pyimod03_importers.py", line 546, in exec_module File "algobot/microsoft_teams/mainloop.py", line 9, in <module> File "PyInstaller/loader/pyimod03_importers.py", line 546, in exec_module File "algobot/framework/configuration.py", line 34, in <module> File "PyInstaller/loader/pyimod03_importers.py", line 546, in exec_module File "algobot/framework/commands.py", line 22, in <module> File "PyInstaller/loader/pyimod03_importers.py", line 546, in exec_module File "algobot/framework/bot.py", line 4, in <module> File "PyInstaller/loader/pyimod03_importers.py", line 546, in exec_module File "algobot/framework/responses.py", line 9, in <module> ModuleNotFoundError: No module named 'algosec' the folder structure is: algobot algobot algosec algobot-packer pyucwa I'm using pyinstaller version 4.2 I didn't make any change in the code since the last time my executable file ran perfectly fine, but now I'm getting this error every time. the thing is - the folder 'algosec' is a subdirectory in my project, and it is noted in the pipfile and again, I didn't make any change in a while and tested it recently (last tested on July 8th)), therefore I believe that it's a dependency issue but not sure which or how to solve. I've tried multiple changes that somehow worked on one run but when I tried to make these changes again it failed on other builds... | Apparently since I took the highest version of zeep and deprecated without giving a fixed version, it caused issues because of a newer release. I had to add them to setup.py of the algobot package which is the main package of the executable with a fixed version. In addition I had to add a .egg file of the algosec package with --paths in order for pyinstaller to find it. | 6 | 1 |
68,957,147 | 2021-8-27 | https://stackoverflow.com/questions/68957147/aiofiles-take-longer-than-normal-file-operation | I have a question I'm new to the python async world and I write some code to test the power of asyncio, I create 10 files with random content, named file1.txt, file2.txt, ..., file10.txt here is my code: import asyncio import aiofiles import time async def reader(pack, address): async with aiofiles.open(address) as file: pack.append(await file.read()) async def main(): content = [] await asyncio.gather(*(reader(content, f'./file{_+1}.txt') for _ in range(10))) return content def core(): content = [] for number in range(10): with open(f'./file{number+1}.txt') as file: content.append(file.read()) return content if __name__ == '__main__': # Asynchronous s = time.perf_counter() content = asyncio.run(main()) e = time.perf_counter() print(f'Take {e - s: .3f}') # Synchronous s = time.perf_counter() content = core() e = time.perf_counter() print(f'Take {e - s: .3f}') and got this result: Asynchronous: Take 0.011 Synchronous: Take 0.001 why Asynchronous code takes longer than Synchronous code ? where I do it wrong ? | I post an issue #110 on aiofiles's GitHub and the author of aiofiles answer that: You're not doing anything wrong. What aiofiles does is delegate the file reading operations to a thread pool. This approach is going to be slower than just reading the file directly. The benefit is that while the file is being read in a different thread, your application can do something else in the main thread.A true, cross-platform way of reading files asynchronously is not available yet, I'm afraid :) I hope it be helpful to anybody that has the same problem | 5 | 14 |
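A small sketch of the trade-off the maintainer describes (file names assumed to be the same as in the question): each individual read is slower through aiofiles' thread pool, but the event loop stays free to run other coroutines while the files are being read.

```python
import asyncio
import aiofiles

async def read_file(path):
    async with aiofiles.open(path) as f:
        return await f.read()

async def heartbeat():
    # Stands in for "something else" the app keeps doing while the reads
    # run in aiofiles' worker threads.
    for _ in range(5):
        print("event loop is still responsive")
        await asyncio.sleep(0.01)

async def main():
    contents, _ = await asyncio.gather(
        asyncio.gather(*(read_file(f"./file{i + 1}.txt") for i in range(10))),
        heartbeat(),
    )
    return contents

asyncio.run(main())
```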
68,960,005 | 2021-8-27 | https://stackoverflow.com/questions/68960005/saving-an-animated-matplotlib-graph-as-a-gif-file-results-in-a-different-looking | I created an animated plot using FuncAnimation from the Matplotlib Animation class, and I want to save it as a .gif file. When I run the script, the output looks normal, and looks like this (the animation works fine): However, when I try to save the animated plot as a .gif file using ImageMagick or PillowWriter, the plot looks like the following graph: The lines are clearly much thicker, and in general, just looks very bad. The problem is attributed to the points (the purple, and red circles). Thus it seems like the plot is writing over each frame (which I think is the case). I can avoid this by just getting rid of them all together. But I don't want to do that as it would be hard to see the lines. Here is the code: line, = ax.plot([], [], color = 'blue', lw=1) line2, = ax.plot([], [], color = 'red', lw=1) line3, = ax.plot([], [], color = 'purple', lw=1) def animate(i): line.set_data(x1[:i], y1[:i]) line2.set_data(x2[:i], y2[:i]) line3.set_data(x3[:i], y3[:i]) point1, = ax.plot(x1[i], y1[i], marker='.', color='blue') point2, = ax.plot(x2[i], y2[i], marker='.', color='red') point3, = ax.plot(x3[i], y3[i], marker='.', color='purple') return line, line2, line3, point1, point2, point3, ani = animation.FuncAnimation(fig, animate, interval=20, blit=True, repeat=False, frames=1000, save_count=1000) ani.save("TLI.gif", writer='imagemagick',fps=60) The arrays x1, y1, x2, y2, x3, y3 are all 1D arrays that contain the x, y coordinates. So why is this happening? Why is it that the .gif file doesn't show what the plot shows when I run it directly? And also, how can I fix this? I am also aware of this Stack Overflow question: matplotlib animation save is not obeying blit=True but it seems to work just fine in plt.show() which means the problem is definitely attributed to blitting. However, reading the answer of that question did not solve my problem because that only refers to ax.text opposed to a regular point plotted via ax.plot. | See if this works. I don't have Imagemagick so I used Pillow. To prevent the animation showing stacked frames (i.e., dot traces), the trick is to clear the axes to refresh each frame. Then set xlim and ylim for each frame, and plot the incremental lines using ax.plot(x1[0:i], y1[0:i]... To improve the image resolution, set the output dpi to a suitable value. I played around with it and settled on 300 to get sharp lines and axes lines/numbers. import matplotlib.pyplot as plt from matplotlib.animation import FuncAnimation, PillowWriter import numpy as np x1 = np.arange(0, -0.2, -0.002) y1 = np.arange(0, -0.2, -0.002) x2 = np.arange(3.9, 3.7, -0.002) y2 = np.arange(0, 1, 0.01) x3 = np.arange(0, 1.8, 0.018) y3 = np.array(x3**2) fig,ax = plt.subplots() def animate(i): ax.clear() ax.set_xlim(-4,4) ax.set_ylim(-4,4) line, = ax.plot(x1[0:i], y1[0:i], color = 'blue', lw=1) line2, = ax.plot(x2[0:i], y2[0:i], color = 'red', lw=1) line3, = ax.plot(x3[0:i], y3[0:i], color = 'purple', lw=1) point1, = ax.plot(x1[i], y1[i], marker='.', color='blue') point2, = ax.plot(x2[i], y2[i], marker='.', color='red') point3, = ax.plot(x3[i], y3[i], marker='.', color='purple') return line, line2, line3, point1, point2, point3, ani = FuncAnimation(fig, animate, interval=40, blit=True, repeat=True, frames=100) ani.save("TLI.gif", dpi=300, writer=PillowWriter(fps=25)) | 13 | 20 |
68,997,995 | 2021-8-31 | https://stackoverflow.com/questions/68997995/can-i-read-parquet-from-https-octet-stream | Some backend endpoint returns a parquet file as an octet-stream. In Pandas I can do something like this: result = requests.get("https://..../file.parquet") df = pd.read_parquet(io.BytesIO(result.content)) Can I do it in Dask somehow? This code: dd.read_parquet("https://..../file.parquet") raises exception (obviously, because this is bytes-like object): File "to_parquet_dask.py", line 153, in <module> main(*parser.parse_args()) File "to_parquet_dask.py", line 137, in main download_parquet( File "to_parquet_dask.py", line 121, in download_parquet dd.read_parquet( File "/home/bc30138/Documents/CODE/flexydrive/driver_style/.venv/lib/python3.8/site-packages/dask/dataframe/io/parquet/core.py", line 313, in read_parquet read_metadata_result = engine.read_metadata( File "/home/bc30138/Documents/CODE/flexydrive/driver_style/.venv/lib/python3.8/site-packages/dask/dataframe/io/parquet/fastparquet.py", line 733, in read_metadata parts, pf, gather_statistics, base_path = _determine_pf_parts( File "/home/bc30138/Documents/CODE/flexydrive/driver_style/.venv/lib/python3.8/site-packages/dask/dataframe/io/parquet/fastparquet.py", line 148, in _determine_pf_parts elif fs.isdir(paths[0]): File "/home/bc30138/Documents/CODE/flexydrive/driver_style/.venv/lib/python3.8/site-packages/fsspec/asyn.py", line 88, in wrapper return sync(self.loop, func, *args, **kwargs) File "/home/bc30138/Documents/CODE/flexydrive/driver_style/.venv/lib/python3.8/site-packages/fsspec/asyn.py", line 69, in sync raise result[0] File "/home/bc30138/Documents/CODE/flexydrive/driver_style/.venv/lib/python3.8/site-packages/fsspec/asyn.py", line 25, in _runner result[0] = await coro File "/home/bc30138/Documents/CODE/flexydrive/driver_style/.venv/lib/python3.8/site-packages/fsspec/implementations/http.py", line 418, in _isdir return bool(await self._ls(path)) File "/home/bc30138/Documents/CODE/flexydrive/driver_style/.venv/lib/python3.8/site-packages/fsspec/implementations/http.py", line 195, in _ls out = await self._ls_real(url, detail=detail, **kwargs) File "/home/bc30138/Documents/CODE/flexydrive/driver_style/.venv/lib/python3.8/site-packages/fsspec/implementations/http.py", line 150, in _ls_real text = await r.text() File "/home/bc30138/Documents/CODE/flexydrive/driver_style/.venv/lib/python3.8/site-packages/aiohttp/client_reqrep.py", line 1082, in text return self._body.decode(encoding, errors=errors) # type: ignore UnicodeDecodeError: 'utf-8' codec can't decode byte 0x90 in position 7: invalid start byte UPD With changes in fsspec from @mdurant's answer I got this error: ValueError: Cannot seek streaming HTTP file So I put "simplecache::" to my url and I face next: Traceback (most recent call last): File "to_parquet_dask.py", line 161, in <module> main(*parser.parse_args()) File "to_parquet_dask.py", line 145, in main download_parquet( File "to_parquet_dask.py", line 128, in download_parquet dd.read_parquet( File "/home/bc30138/Documents/CODE/flexydrive/driver_style/.venv/lib/python3.8/site-packages/dask/dataframe/io/parquet/core.py", line 313, in read_parquet read_metadata_result = engine.read_metadata( File "/home/bc30138/Documents/CODE/flexydrive/driver_style/.venv/lib/python3.8/site-packages/dask/dataframe/io/parquet/fastparquet.py", line 733, in read_metadata parts, pf, gather_statistics, base_path = _determine_pf_parts( File 
"/home/bc30138/Documents/CODE/flexydrive/driver_style/.venv/lib/python3.8/site-packages/dask/dataframe/io/parquet/fastparquet.py", line 185, in _determine_pf_parts pf = ParquetFile( File "/home/bc30138/Documents/CODE/flexydrive/driver_style/.venv/lib/python3.8/site-packages/fastparquet/api.py", line 127, in __init__ raise ValueError("Opening directories without a _metadata requires" ValueError: Opening directories without a _metadata requiresa filesystem compatible with fsspec Temporary workaround Maybe this way is dirty and not optimal, but some kind of works: @dask.delayed def parquet_from_http(url, token): result = requests.get( url, headers={'Authorization': token} ) return pd.read_parquet(io.BytesIO(result.content)) delayed_download = parquet_from_http(url, token) df = dd.from_delayed(delayed_download, meta=meta) P.S. meta argument in this approach is necessary, because otherwise Dask will use this function twice: to find out meta and than to calculate, so two requests will be made. | This is not an answer, but I believe the following change in fsspec will fix your problem. If you would be willing to try and confirm, we can make this a patch. --- a/fsspec/implementations/http.py +++ b/fsspec/implementations/http.py @@ -472,7 +472,10 @@ class HTTPFileSystem(AsyncFileSystem): async def _isdir(self, path): # override, since all URLs are (also) files - return bool(await self._ls(path)) + try: + return bool(await self._ls(path)) + except (FileNotFoundError, ValueError): + return False (we can put this in a branch, if that makes it easier for you to install) -edit- The second problem (which is the same thing in both parquet engines) stems from the server either not providing the size of the file, or not allowing range-gets. The parquet format requires random access to the data to be able to read. The only way to get around this (short of improving the server) is to copy the whole file locally, e.g., by prepending "simplecache::" to your URL. | 5 | 1 |
68,999,248 | 2021-8-31 | https://stackoverflow.com/questions/68999248/mock-external-api-post-call-in-view-from-test-view-python | I have an external API POST call that is being made from within my views.py as such: class MyView(APIView): def post(self, request): my_headers = { "Content-Type": "application/json" } response = requests.post("https://some-external-api.com", data=json.dumps(request.data), headers=my_headers) return Response(status.response.status_code) As you can see, it is a very simple case of making a POST call to the external API with the same data that is received to the views endpoint. Right now, I am trying to create a unit test for this, while mocking the response from "https://some-external-api.com" so I obviously don't have to make an actual call to it every time this unit test runs. But I am having difficulty as I can't get the mock aspect to work, and everytime the request is sent to the actual external endpoint. I know there are a lot of examples online, but nothing that I've tried seems to work. I've not seen examples whereby the mocked response should come from the view file itself. As of now, I have this: @patch('requests.post') def test_external_api_call(self, mock_post) mock_post.return_value.ok = True response = self.client.post(reverse('my-view'), { //my random dummy json object goes here }, format='json') self.assertEqual(response.status_code, 200) As I mentioned, with the above code, there is an actual call being made to "https://some-external-api.com" rather than it being mocked. | No need to reinvent the wheel, just use the available mockers for the requests library such as requests_mock. import json import pytest import requests import requests_mock # python3 -m pip install requests-mock def post(): my_headers = {"Content-Type": "application/json"} my_data = {"some_key": "some_value"} response = requests.post("https://some-external-api.com", data=json.dumps(my_data), headers=my_headers) print(response.status_code, response.json()) @pytest.fixture def mock_post(): with requests_mock.Mocker() as requests_mocker: def match_data(request): """ This is just optional. Remove if not needed. This will check if the request contains the expected body. """ return request.json() == {"some_key": "some_value"} requests_mocker.post( "https://some-external-api.com", # Match the target URL. additional_matcher=match_data, # Optional. If you want to match the request body too. status_code=200, # The status code of the response. json={"the_result": "was successful!"}, # Optional. The value when .json() is called on the response. ) yield def test_requests(mock_post): post() $ pytest -q -rP ================================================================================================= PASSES ================================================================================================== ______________________________________________________________________________________________ test_requests ______________________________________________________________________________________________ ------------------------------------------------------------------------------------------ Captured stdout call ------------------------------------------------------------------------------------------- 200 {'the_result': 'was successful!'} 1 passed in 0.04s | 4 | 5 |
68,933,195 | 2021-8-26 | https://stackoverflow.com/questions/68933195/how-do-i-pass-multiple-arguments-to-a-pandas-udf-in-pyspark | I'm working with the following snippet: from cape_privacy.pandas.transformations import Tokenizer max_token_len = 5 @pandas_udf("string") def Tokenize(column: pd.Series)-> pd.Series: tokenizer = Tokenizer(max_token_len) return tokenizer(column) spark_df = spark_df.withColumn("name", Tokenize("name")) Since Pandas UDF only uses Pandas series I'm unable to pass the max_token_len argument in the function call Tokenize("name"). Therefore I have to define the max_token_len argument outside the scope of the function. The workarounds provided in this question weren't really helpful. Are there any other possible workarounds or alternatives to this issue? Please Advise | After trying a myriad of approaches, I found an effortless solution as illustrated below: I created a wrapper function (Tokenize_wrapper) to wrap the Pandas UDF (Tokenize_udf) with the wrapper function returning the Pandas UDF's function call. def Tokenize_wrapper(column, max_token_len=10): @pandas_udf("string") def Tokenize_udf(column: pd.Series) -> pd.Series: tokenizer = Tokenizer(max_token_len) return tokenizer(column) return Tokenize_udf(column) df = df.withColumn("Name", Tokenize_wrapper("Name", max_token_len=5)) Using partial functions (@Vaebhav's answer) did actually make this issue's implementation difficult. | 5 | 17 |
69,003,730 | 2021-8-31 | https://stackoverflow.com/questions/69003730/understanding-whats-happening-in-the-kadane-algorithm-python | I'm having a difficult time understanding what's happening in these two examples I found of the Kadane Algorithm. I'm new to Python and I'm hoping understanding this complex algo will help me see/read programs better. Why would one example be better than the other, is it just List vs Range? Is there something else that makes one of the examples more efficient? Also, some questions about what's happening in the calculations. (questions inside the examples) I've used PythonTutor to help me get a visual on what exactly is happening step by step. Example 1: In PythonTuter, when you select next step in the screen shot provided, The value of so_far turns to 1. How is this? Giving the sum, I've thought its adding -2 + 1 which is -1, so when so_far turns to 1, how is this? def max_sub(nums): max_sum = 0 so_far = nums[0] for x in nums[1:]: so_far = max(x, x + so_far) max_sum = max(so_far, max_sum) return max_sum nums = [-2, 1, -3, 4, -1, 2, 1, -5, 4] max_sub(nums) 6 Example 2: Similar question for this one, when I select NEXT step, the max_sum turns from -2 to 4... but how so if it's adding the element in the 2 (which is 4). To me, that would be -2 + 4 = 2 ? def maxSubArraySum(a,size): max_so_far =a[0] curr_max = a[0] for i in range(1,size): curr_max = max(a[i], curr_max + a[i]) max_so_far = max(max_so_far,curr_max) return max_so_far a = [-2, -3, 4, -1, -2, 1, 5, -3] print("Maximum contiguous sum is" , maxSubArraySum(a,len(a))) Maximum contiguous sum is 7 So, this would be a 2 part question than: [1]Based on understandings, why would one be more pythonic and more efficient than the other? [2]How can I better understand the calculations happening in the examples? | Simply watch each step and you could figure out this problem: [Notes] this program seems to work based on the assumption of mixed integer numbers? only positive and negatives. # starting so_far = -2 # init. to nums[0] max_sum = 0 # in the for-loop: x = 1 # starting with nums[1:] so_far = max(1, -1) -> 1 (x is 1, -2 + 1) max_sum = max(0, 1) -> 1 ..... continue .... each step is to find the max accumulated numbers sum, as it's evident in the max( ) statement. *There is no `sum` involved, except it tried to determine the current x is good (not negative) then so add it to the so_far. | 6 | 1 |
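If it helps to see the numbers the answer walks through, here is a small trace sketch of the first example's loop (same nums as in the question), printing so_far and max_sum at every step:

nums = [-2, 1, -3, 4, -1, 2, 1, -5, 4]
so_far, max_sum = nums[0], 0
for x in nums[1:]:
    so_far = max(x, x + so_far)      # either restart at x or extend the running sum ending here
    max_sum = max(so_far, max_sum)
    print(f"x={x:>2}  so_far={so_far:>2}  max_sum={max_sum:>2}")
# first printed line: x= 1  so_far= 1  max_sum= 1   (max(1, -2 + 1) picks 1, not -1)
# final max_sum is 6, matching max_sub(nums)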
68,997,345 | 2021-8-31 | https://stackoverflow.com/questions/68997345/how-to-dict-or-data-check-keys-in-pydantic | class mail(BaseModel): mailid: int email: str class User(BaseModel): id: int name: str mails: List[mail] data1 = { 'id': 123, 'name': 'Jane Doe', 'mails':[ {'mailid':1,'email':'[email protected]'}, {'mailid':2,'email':'[email protected]'} ] } userobj = User(**data1) # Accepted data2 = { 'id': 123, 'name': 'Jane Doe', 'mails':[ {'mailid':1,'email':'[email protected]'}, {'email':'[email protected]'} ] } userobj = User(**data2) # Discarded or not accepted I want to check the keys in the dictionary that we passing to pydantic model so If the key is not present in the given dictionary I want to discard that data. For example in data2 in mails {'email':'[email protected]'} data2 must be discarded | You may use pydantic.validator as @juanpa-arrivillaga said. There are few little tricks: Optional it may be empty when the end of your validation. pre=True whether or not this validator should be called before the standard validators (else after) from pydantic import BaseModel, validator from typing import List, Optional class Mail(BaseModel): mailid: int email: str class User(BaseModel): id: int name: str mails: Optional[List[Mail]] @validator('mails', pre=True) def mail_check(cls, v): mail_att = [i for i in Mail.__fields__.keys()] mail_att_count = 0 for i, x in enumerate(v): for k in dict(x).keys(): if k in mail_att: mail_att_count += 1 if mail_att_count != len(mail_att): v.pop(i) mail_att_count = 0 return v data = { 'id': 123, 'name': 'Jane Doe', 'mails':[ {'mailid':1,'email':'[email protected]'}, {'mailid':2,'email':'[email protected]'}, {'email':'[email protected]'} ] } x = User(**data) # Discarded or not accepted print(x.id) print(x.name) print(x.mails) # Output # >>123 # >>Jane Doe # >>[Mail(mailid=1, email='[email protected]'), Mail(mailid=2, email='[email protected]')] | 7 | 5 |
68,995,862 | 2021-8-31 | https://stackoverflow.com/questions/68995862/how-to-activate-virtual-env-in-vs-code | I can't activate a virtual env in VS Code. The same command works in the cmd console but not in the VS Code terminal.
"D:\python\djangoapp\djangovenv\Scripts\activate.bat"
That is the command I run. I am using Windows 10 Pro. | Yes, it's because of the terminal: VS Code was using PowerShell, and activate.bat only works in cmd, so I switched the integrated terminal to cmd and activation worked. | 8 | 0 |
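If you would rather keep PowerShell as the VS Code terminal, a sketch of the equivalent activation (the venv path is taken from the question; the execution-policy step is only needed if PowerShell blocks the script):

D:\python\djangoapp\djangovenv\Scripts\Activate.ps1
# run this once first if the activation script is blocked:
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser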
68,991,947 | 2021-8-31 | https://stackoverflow.com/questions/68991947/reversing-lists-splices-python-optimization-usaco-february-2020-bronze-question | I am trying to solve a problem that involves reversing list splices, and I am having trouble with the time limit for a test case,, which is 4 seconds. The question: Farmer John's N cows (1≤N≤100) are standing in a line. The ith cow from the left has label i, for each 1≤i≤N. Farmer John has come up with a new morning exercise routine for the cows. He tells them to repeat the following two-step process exactly K (1≤K≤1000000000) times: The sequence of cows currently in positions A1…A2 from the left reverse their order (1≤A1<A2≤N). Then, the sequence of cows currently in positions B1…B2 from the left reverse their order (1≤B1<B2≤N). After the cows have repeated this process exactly K times, please output the label of the ith cow from the left for each 1≤i≤N. SCORING: Test cases 2-3 satisfy K≤100. Test cases 4-13 satisfy no additional constraints. INPUT FORMAT (file swap.in): The first line of input contains N and K. The second line contains A1 and A2, and the third contains B1 and B2. OUTPUT FORMAT (file swap.out): On the ith line of output, print the label of the ith cow from the left at the end of the exercise routine. SAMPLE INPUT: 7 2 2 5 3 7 SAMPLE OUTPUT: 1 2 4 3 5 7 6 Initially, the order of the cows is [1,2,3,4,5,6,7] from left to right. After the first step of the process, the order is [1,5,4,3,2,6,7]. After the second step of the process, the order is [1,5,7,6,2,3,4]. Repeating both steps a second time yields the output of the sample. Theoretically, you could solve this problem by finding the point where the program repeats, and then simulating the reverse k % frequency times, where frequency is the amount of times the simulation is unique. But my problem is that when the input is: 100 1000000000 1 94 2 98 my program takes over 100 seconds to run. This input is particularly time consuming because it runs the maximum number of iterations, and frequency is very high. Current Code: fin = open("swap.in", 'r') line = fin.readline().strip().split() n = int(line[0]) k = int(line[1]) nums = [[int(x)-1 for x in fin.readline().strip().split()]for i in range(2)] fin.close() repeated = [] cows = [i for i in range(1, n+1)] repeat = False while not repeat: for i in nums: cows[i[0]:i[1]+1] = reversed(cows[i[0]:i[1]+1]) if cows[i[0]:i[1]+1] in repeated : frequency = len(repeated)-1 repeat = True repeated.append(cows[i[0]:i[1]+1]) cows = [i for i in range(1, n+1)] for _ in range(k%frequency): for i in nums: cows[i[0]:i[1]+1] = reversed(cows[i[0]:i[1]+1]) fout = open("swap.out", 'w') for i in cows: fout.write(str(i) + "\n") fout.close() If anyone knows a way to solve this issue, please post an answer. Comment if anything isn't clear. | Lets first talk about how we could solve this mathematically, and then work out a solution programmatically. Lets say the cow at each position is represented by the variable Pi.j, where i is the cow index, and j is the the swap iteration. These variables will each contain an integer corresponding to that cow's unique id. Also, for simplicity, we'll only consider a single reversal operation per iteration, and expand to include both later on. Starting with P0.0 (0th position, 0th iteration), we want to start defining some equations to give us the cow at a given position on the next iteration. If the cow is outside of the reversal region, this is trivial; the cow does not change. 
If it is within the reversal region, we'll want to calculate the previous position of the new cow based on the endpoints of the reversal region. Explicitly: if outside reversal region: Pi.(j+1) = Pi.j else: Pi.(j+1) = P(e1+e2-i).j, where e1 and e2 are the region endpoints Now, with our rules, we can actually write out a system of equations: P0.b = 1 * P0.a P1.b = 1 * P1.a ... P4.b = 1 * P7.a // reversal region starts P5.b = 1 * P6.a ... From here, we can transform these system of questions to a transformation matrix MA, which when multiplied by a vector of cow ids, will return a new vector of cow ids post-reversal. This is where things get fancy. We can do the same thing for the other reversal region, making a second matrix MB, and multiply it with MA to get an overall matrix M which does both reversals in one. From here, we can raise the matrix to the power K (the number of iterations), for a single matrix which will calculate the cows at each position after all reversals take place. Now, at this point, you're probably questioning the performance of this approach- afterall, we're raising a 100x100 matrix to some power K up to 109! Well, we've got a few tricks up our sleeves to make this much faster. First, note that a single cow, any cow, we can logically reason through shuffling across all the reversal operations and determine its end position, all without knowing anything about where the other cows are / which other cows are which. This means for any given position at any given time, the cow at that position can be defined by exactly one other position on any other iteration (e.g. P12.7 (position 12 iteration 7) can determined exactly by knowing the cow at the corresponding position in iteration, say, iteration 5- P8.5). This is useful because it means each row of our matrix will have exactly one non-zero element, and that element will have a value of 1 (coefficient of 1 in the system of equations) so we can actually compress our our 100x100 matrix into an array of just 100 values stating which column per row holds the 1. Great, we can trivially multiply matrices in O(n^2) time using this trick. Well, we can actually do a bit better yet- we can actually multiply these in O(n) time. For each "row" R (containing just the column index, I of the 1 value) in the first matrix, look at the Ith row in our second matrix, and get its value C. Assign our corresponding row R' in the output matrix to equal C. So we can multiply these matrices in ~100 logical steps, but we need raise this matrix to the Kth power, which could be up to 109. We just eliminated a factor of 100 from our work, just to add it right back! Well, not quite. There's a well-known method for matrix exponentiation called "exponentiation by squaring", where we cleverly stagger multiplying and squaring the result repeatedly, to calculate M^K in log(K) iterations/steps. I won't go into detail here since its widely known and well-documented. Overall, that puts us at O(N log K) time, not too bad! Update: Here is the functioning code to solve the problem. 
def matmul(p, q): return [q[I] for I in p] def exp(m, e): if e == 1: return m result = exp(matmul(m, m), e // 2) if e % 2: result = matmul(m, result) return result def solve(n, k, a1, a2, b1, b2): a1, a2, b1, b2 = a1-1, a2-1, b1-1, b2-1 cows = list(range(1, n+1)) ma = [a1 + a2 - x if a1 <= x <= a2 else x for x in range(n)] mb = [b1 + b2 - x if b1 <= x <= b2 else x for x in range(n)] m0 = matmul(mb, ma) mk = exp(m0, k) return [cows[i] for i in mk] After doing some benchmarks using timeit for number=100 iterations, these were my findings (timings in seconds, smaller is better): contributor | time (sec) ------------------------------ blhsing | 7.857691 Michael Szczesny | 5.076418 (mine) | 0.013314 So Michael's improves on blhsing's solution by running about 1.5x faster on my machine, and my solution runs about 381x faster than Michael's, taking roughly a ten-thousandth of a second per run on average. Update 2: As mentioned in the comment of this answer, the above solution does still scale with K, so eventually it would become intractable, and this problem seems to emit a repetitive pattern- surely we can exploit that? Well, yes, in fact we can. The trick is that there's only so many ways we can permute these cows before they reach some previous state that we've seen before, and then form an endless cycle from there. In fact- because of the simplicity of the problem at hand, its typically a relatively small number of shuffles / reversals before we arrive back at a previous state (actually our starting state specifically, since to visit any other cow permutation without first visiting our starting state would imply that there's two distinct states that transition to said state, which cannot be- we have our system of equations which told us otherwise). So, what we need to do if find this magic number where the cow shuffling / reversals starts repeating, and use that to reduce the exponent (in our matrix exponentiation) from K to K mod MAGIC. We're actually going to call this number the multiplicative order of the transition matrix. To start, notice that there may be more than one cycle of cows that shuffle positions, each of which repeats with periods. A subset of 4 cows may cycle positions every 4 iterations, while another subset of 3 cows cycles every 3 iterations, meaning together the 7 cows repeat their starting configuration every 12 iterations. More generally, we need to find each independent cycle of cow shuffles, get their lengths, and find the least common multiple (LCM) of these periods. Once we have that, take K mod that, and raise our matrix to that new value. Example code for doing so: from itertools import count from functools import reduce from math import lcm def order(m): cycle_lens = [] unvisited = set(m) while unvisited: start = head = unvisited.pop() for size in count(1): head = m[head] if head == start: cycle_lens.append(size) break unvisited.discard(head) return reduce(lcm, cycle_lens) # And inside solve()- # replace: mk = exp(m0, k) # with: mk = exp(m0, k % order(m0)) | 8 | 8 |
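A quick sanity check of solve() against the sample in the problem statement, plus one edge case worth noting: if K is an exact multiple of the cycle length, k % order(m0) is 0, and exp() as written never terminates for e == 0, so the Update 2 variant needs a small guard (sketch below, identity permutation for e == 0):

print(solve(7, 2, 2, 5, 3, 7))   # expected sample output: [1, 2, 4, 3, 5, 7, 6]

def exp_safe(m, e):
    # after a whole number of cycles the cows are back in their starting order
    return list(range(len(m))) if e == 0 else exp(m, e)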
68,996,444 | 2021-8-31 | https://stackoverflow.com/questions/68996444/in-operator-functionality-in-python | I needed to remove the characters in string1 which are present in string2. Here string1 and string2 have only the lower case characters a-z with given condition that the length of string1 will be greater every time. I was using the in operator: def removeChars (string1, string2): for char in string2: if char in string1: string1 = string1.replace(char, '') return string1 But I read one answer on Stack Overflow which says: For container types such as list, tuple, set, frozenset, dict, or collections.deque, the expression x in y is equivalent to any(x is e or x == e for e in y). Which means the in operator is using a for loop behind the scenes. So my question is, in the for loop in my code, should I consider that nested for loops are being used, as the in operator is using a for loop in the background? If yes, what would be the time complexity of this program? | in does not necessarily use loops behind the scenes. For example: r = range(100000000000) print(333 in r) # prints True immediately without looping If you were to loop r it will take quite a long time, so clearly that doesn't happen. in basically calls (behind the scenes) the object's __contains__ method. For some iterators it will in fact "loop" through everything but that's not always the case. This example is basically the same as calling: r.__contains__(333) As pointed out in comments - the str objects specifically have a smarter algorithm than just plain loops, as you can see here Also see example answer here And see the documentation here Because the real world scenario would probably mean that string1 can be arbitrarily long, but the characters to be removed would be a finite and small set, it will probably be much more efficient to just add up all the characters that aren't in string2. Something like this: def removeChars (string1, string2): result = '' for char in string1: if char not in string2: result += char return result This will involve looping over string1 just once, but multiple checks against string2 using in. This can be further simplified (to avoid += loop over the result): def removeChars (string1, string2): return ''.join(char for char in string1 if char not in string2) | 5 | 7 |
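One small practical addition to the answer's closing point (a sketch, not part of the original answer): since string2 is checked against over and over, converting it to a set once makes each membership test O(1) on average instead of a scan of string2.

def remove_chars(string1, string2):
    banned = set(string2)          # built once
    return ''.join(c for c in string1 if c not in banned)

print(remove_chars("character", "ab"))   # -> "chrcter"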
68,995,170 | 2021-8-31 | https://stackoverflow.com/questions/68995170/pydantic-get-a-fields-type-hint | I want to store metadata for my ML models in pydantic. Is there a proper way to access a fields type? I know you can do BaseModel.__fields__['my_field'].type_ but I assume there's a better way. I want to make it so that if a BaseModel fails to instantiate it is very clear what data is required to create this missing fields and which methods to use. Something like this : from pydantic import BaseModel import pandas as pd # basic model class Metadata(BaseModel): peaks_per_day: float class PeaksPerDayType(float): data_required = pd.Timedelta("180D") data_type = "foo" @classmethod def determine(cls, data): return cls(data) # use our custom float class Metadata(BaseModel): peaks_per_day: PeaksPerDayType def get_data(data_type, required_data): # get enough of the appropriate data type return [1] # Initial data we have metadata_json = {} try: metadata = Metadata(**metadata_json) # peaks per day is missing except Exception as e: error_msg = e missing_fields = error_msg.errors() missing_fields = [missing_field['loc'][0] for missing_field in missing_fields] # For each missing field use its type hint to find what data is required to # determine it and access the method to determine the value new_data = {} for missing_field in missing_fields: req_data = Metadata[missing_field].data_required data_type = Metadata[missing_field].data_type data = get_data(data_type=data_type, required_data=req_data) new_data[missing_field] = Metadata[missing_field].determine(data) metadata = Metadata(**metadata_json, **new_data) | In the case you dont need to handle nested classes, this should work from pydantic import BaseModel, ValidationError import typing class PeaksPerDayType(float): data_required = 123.22 data_type = "foo" @classmethod def determine(cls, data): return cls(data) # use our custom float class Metadata(BaseModel): peaks_per_day: PeaksPerDayType def get_data(data_type, required_data): # get enough of the appropriate data type return required_data metadata_json = {} try: Metadata(**metadata_json) except ValidationError as e: field_to_type = typing.get_type_hints(Metadata) missing_fields = [] for error in e.errors(): if error['type']=='value_error.missing': missing_fields.append(error['loc'][0]) else: raise new_data = {} for field in missing_fields: type_ = field_to_type[field] new_data[field] = get_data(type_.data_type, type_.data_required) print(Metadata(**metadata_json, **new_data)) peaks_per_day=123.22 Im not really sure whats the point of data_type or get_data, but I assume its some internal logic that you want to add | 6 | 6 |
68,986,802 | 2021-8-30 | https://stackoverflow.com/questions/68986802/multiprocessing-process-are-modifying-non-shared-variables-they-should-not-hav | Processes are mutating things they should not be able to mutate. A Workerhas a single state variable (an mp.Value). This value is set to -1, and it (the Worker) changes it to 1 in a loop. However, it seems to be possible to reset that value back to -1 by spawning a second Worker, even though this shares nothing with the original pair. This seems like it should be impossible. Behavior: When the second Worker spins up, the state of the first worker (self.state.value ) gets reset to -1. This gets caught, and we print out that an error was discovered. Code: import multiprocessing as mp import time class Worker: def __init__(self, tag, service_state) -> None: self.tag = tag self.local_state = int(service_state.value) self.state = service_state self.run_work_loop() def run_work_loop(self) -> None: print(f"[{self.tag}] Running... {self.state.value} {self.local_state}") while True: if self.state.value != self.local_state: print(f"[{self.tag}] Illegal change. Shared state: {self.state.value} Local State: {self.local_state}") break elif self.state.value == -1: self.state.value = self.local_state = 1 print(f"[{self.tag}] Set Shared State: {self.state.value} Local State: {self.local_state}.") if __name__ == "__main__": mp.Process(target=Worker, args=("A", mp.Value('i', -1))).start() time.sleep(.03) mp.Process(target=Worker, args=("B", mp.Value('i', -1))).start() Output: [A] Running... -1 -1 [A] Set Shared State: 1 Local State: 1. [A] Illegal change. Shared state: -1 Local State: 1 [B] Running... -1 -1 [B] Set Shared State: 1 Local State: 1. | The issue is that you are creating Value instances that immediately go out of scope in the parent process, which makes them get garbage collected. Because of the way Python allocates memory for multiprocessing.Value objects, the second Value ends up using the exact same shared memory location as the first Value, which means the second ends up stomping on the first. You can do some experiments to see this in action. For example, this does not print the warning: if __name__ == "__main__": mp.Process(target=Worker, args=("A", mp.Value('i', -1))).start() time.sleep(.03) a = mp.Value('i', 1) mp.Process(target=Worker, args=("B", mp.Value('i', -1))).start() The Value we assign to a is initialized to 1, which overwrites the anonymous Value we passed to process "A". Because we overwrite it with 1, no illegal state message is printed. if we instead initialize it to any other value, you will see the warning again. This prints an illegal state message about -2, , for example: if __name__ == "__main__": mp.Process(target=Worker, args=("A", mp.Value('i', -1))).start() time.sleep(.03) a = mp.Value('i', -2) mp.Process(target=Worker, args=("B", mp.Value('i', -1))).start() Your code should really save the Value instances you create as local variables in your parent process, both to avoid this issue, and because it's pointless to create shared values that you don't actually share. Like this: if __name__ == "__main__": a = mp.Value('i', -1) mp.Process(target=Worker, args=("A", a)).start() time.sleep(.03) b = mp.Value('i', -1) mp.Process(target=Worker, args=("B", b)).start() | 5 | 5 |
68,981,780 | 2021-8-30 | https://stackoverflow.com/questions/68981780/rounding-a-number-in-google-sheets-using-gspread-api | I am writing a pandas dataframe to google sheets using gspread: from gspread_formatting import * import gspread from df2gspread import df2gspread as d2g import pandas as pd d2g.upload(data, sheet.id, 'test_name', clean=True, credentials=creds, col_names=True, row_names=False) While the pandas dataframe is rounded to 2 decimal points the value in google sheets sometimes have more than that. df a b 100.56 600.79 Result in google sheets: aa b 100.5616 600.79 And I can't find any information how I can round a value using python gspread API. | If you get the worksheet element you can use format to achieve what you want. sh = gc.open("sheet_name") worksheet = sh.get_worksheet(0) # your sheet number worksheet.format('A', {'numberFormat': {'type' : 'NUMBER', 'pattern': '0.0#'}}) | 4 | 5 |
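A variation of the accepted answer as a sketch: the pattern '0.00' shows exactly two decimals, and 'A:B' limits the formatting to the two columns from the example. Note this only changes how the cells are displayed; the stored value keeps its full precision, so if the sheet itself must contain rounded numbers, round the DataFrame (e.g. data.round(2)) before uploading.

sh = gc.open("sheet_name")
worksheet = sh.get_worksheet(0)   # your sheet number
worksheet.format('A:B', {'numberFormat': {'type': 'NUMBER', 'pattern': '0.00'}})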
68,981,869 | 2021-8-30 | https://stackoverflow.com/questions/68981869/how-to-upload-a-single-file-to-fastapi-server-using-curl | I'm trying to set up a FastAPI server that can receive a single file upload from the command line using curl. I'm following the FastAPI Tutorial here: https://fastapi.tiangolo.com/tutorial/request-files/?h=upload+file from typing import List from fastapi import FastAPI, File, UploadFile from fastapi.responses import HTMLResponse app = FastAPI() @app.post("/file/") async def create_file(file: bytes = File(...)): return {"file_size": len(file)} @app.post("/uploadfile/") async def create_upload_file(file: UploadFile = File(...)): return {"filename": file.filename} @app.post("/files/") async def create_files(files: List[bytes] = File(...)): return {"file_sizes": [len(file) for file in files]} @app.post("/uploadfiles/") async def create_upload_files(files: List[UploadFile] = File(...)): return {"filenames": [file.filename for file in files]} Running this code and then opening "http://127.0.0.1:5094" in a browser gives me a upload form with four ways of selecting files and uploading I followed this tutorial: https://medium.com/@petehouston/upload-files-with-curl-93064dcccc76 I tried uploading a file "1.json" in the current directory like this curl -F "[email protected]" http://127.0.0.1:5094/uploadfiles on the server side I get this result INFO: 127.0.0.1:58772 - "POST /uploadfiles HTTP/1.1" 307 Temporary Redirect I do not understand why a redirect happens. I need help on how to either guess the correct curl syntax or fix this on the FastAPI side. | The solution was to tell curl to follow a redirect. curl -L -F "[email protected]" http://127.0.0.1:5094/uploadfile which then uploads the file. | 8 | 4 |
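For background on why the 307 appears at all (an observation consistent with the code above, not part of the accepted answer): the routes are declared with trailing slashes ("/uploadfile/", "/uploadfiles/"), so Starlette's default slash handling redirects a request for the bare path to the slashed one. Matching the path exactly avoids the redirect, so -L isn't needed:

curl -F "[email protected]" http://127.0.0.1:5094/uploadfile/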
68,979,379 | 2021-8-30 | https://stackoverflow.com/questions/68979379/what-is-the-clientip-in-namecheap-api-request | According to the Namecheap API docs, a request should have this structure:
response_request = f'https://api.namecheap.com/xml.response?ApiUser={ApiUser}&ApiKey={ApiKey}&UserName={ApiUser}&Command=namecheap.domains.check&ClientIp={ClientIp}&DomainList={DomainList}'
But I keep receiving Error Number="1011150" Invalid request IP when I set ClientIp to my IP address (I use a shared IP). | The ClientIp is the public IP address of the system making the request. A Google search for "What is my IP" will turn up several services that report your public IP address. That same public IP address must also be whitelisted in your Namecheap account. This link provides details on whitelisting an IP. | 5 | 5 |
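A rough sketch of putting that together in Python; api.ipify.org is just one example echo service for discovering the public IP, and ApiUser, ApiKey, and DomainList are the placeholders from the question. The IP it returns still has to be added to the whitelist in the Namecheap dashboard before the call will succeed.

import requests

client_ip = requests.get("https://api.ipify.org").text   # your public IP as seen from outside
resp = requests.get(
    "https://api.namecheap.com/xml.response",
    params={
        "ApiUser": ApiUser, "ApiKey": ApiKey, "UserName": ApiUser,
        "Command": "namecheap.domains.check",
        "ClientIp": client_ip, "DomainList": DomainList,
    },
)
print(resp.text)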
68,956,951 | 2021-8-27 | https://stackoverflow.com/questions/68956951/after-installing-django-with-poetry-it-says-no-module-named-django-in-active-v | I'm playing with poetry because I'm thinking about switching from pip. Following the basic usage examples, I'm doing the following: $ poetry new poetry-demo $ cd poetry-demo $ poetry add django $ django-admin #can't find it $ poetry shell #or poetry $(poetry env info --path)/bin/activate $ django-admin Traceback (most recent call last): File "/Users/cjones/Library/Caches/pypoetry/virtualenvs/poetry-demo-Jq168aNm-py3.8/bin/django-admin", line 5, in <module> from django.core.management import execute_from_command_line ModuleNotFoundError: No module named 'django' Also tried in this order: $ poetry new poetry-demo $ cd poetry-demo $ poetry shell #or poetry $(poetry env info --path)/bin/activate $ poetry add django $ django-admin Traceback (most recent call last): File "/Users/cjones/Library/Caches/pypoetry/virtualenvs/poetry-demo-Jq168aNm-py3.8/bin/django-admin", line 5, in <module> from django.core.management import execute_from_command_line ModuleNotFoundError: No module named 'django' Check the pyproject.toml: [tool.poetry] name = "poetry-demo" version = "0.1.0" description = "" authors = ["Your Name <[email protected]>"] [tool.poetry.dependencies] python = "^3.8" Django = "^3.2.6" [tool.poetry.dev-dependencies] pytest = "^5.2" [build-system] requires = ["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api" Check poetry show: asgiref 3.4.1 ASGI specs, helper code, and adapters attrs 21.2.0 Classes Without Boilerplate django 3.2.6 A high-level Python Web framework that encourages rapid development and clean, pragmatic design. more-itertools 8.8.0 More routines for operating on iterables, beyond itertools packaging 21.0 Core utilities for Python packages pluggy 0.13.1 plugin and hook calling mechanisms for python py 1.10.0 library with cross-python path, ini-parsing, io, code, log facilities pyparsing 2.4.7 Python parsing module pytest 5.4.3 pytest: simple powerful testing with Python pytz 2021.1 World timezone definitions, modern and historical sqlparse 0.4.1 A non-validating SQL parser. 
wcwidth 0.2.5 Measures the displayed width of unicode strings in a terminal Check the path where it says it is not and there it is: $ ls -l $(poetry env info --path)/bin total 144 -rw-r--r-- 1 cjones staff 2197 Aug 27 09:39 activate -rw-r--r-- 1 cjones staff 1489 Aug 27 09:39 activate.csh -rw-r--r-- 1 cjones staff 3120 Aug 27 09:39 activate.fish -rw-r--r-- 1 cjones staff 1751 Aug 27 09:39 activate.ps1 -rw-r--r-- 1 cjones staff 1199 Aug 27 09:39 activate_this.py -rwxr-xr-x 1 cjones staff 339 Aug 27 09:39 django-admin -rwxr-xr-x 1 cjones staff 729 Aug 27 09:39 django-admin.py -rwxr-xr-x 1 cjones staff 297 Aug 27 09:39 pip -rwxr-xr-x 1 cjones staff 297 Aug 27 09:39 pip-3.8 -rwxr-xr-x 1 cjones staff 297 Aug 27 09:39 pip3 -rwxr-xr-x 1 cjones staff 297 Aug 27 09:39 pip3.8 -rwxr-xr-x 1 cjones staff 281 Aug 27 09:39 py.test -rwxr-xr-x 1 cjones staff 281 Aug 27 09:39 pytest lrwxr-xr-x 1 cjones staff 128 Aug 27 09:39 python -> /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/Resources/Python.app/Contents/MacOS/Python lrwxr-xr-x 1 cjones staff 6 Aug 27 09:39 python3 -> python lrwxr-xr-x 1 cjones staff 6 Aug 27 09:39 python3.8 -> python -rwxr-xr-x 1 cjones staff 292 Aug 27 09:39 sqlformat -rwxr-xr-x 1 cjones staff 284 Aug 27 09:39 wheel -rwxr-xr-x 1 cjones staff 284 Aug 27 09:39 wheel-3.8 -rwxr-xr-x 1 cjones staff 284 Aug 27 09:39 wheel3 -rwxr-xr-x 1 cjones staff 284 Aug 27 09:39 wheel3 Deactivate and reactive the venv. I've restarted the shell. Still just get: Traceback (most recent call last): File "/Users/cjones/Library/Caches/pypoetry/virtualenvs/poetry-demo-Jq168aNm-py3.8/bin/django-admin", line 5, in <module> from django.core.management import execute_from_command_line ModuleNotFoundError: No module named 'django' Suggestions for what is up here and how to resolve it? | Should have read this more closely: https://python-poetry.org/docs/basic-usage/#using-poetry-run poetry run django-admin | 5 | 2 |
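For reference, a minimal sketch of the suggested workflow (the project name is just an example): poetry run executes console scripts such as django-admin inside the project's virtualenv without activating anything by hand.

poetry add django
poetry run django-admin startproject mysite .
poetry run python manage.py runserver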
68,916,387 | 2021-8-25 | https://stackoverflow.com/questions/68916387/stripe-checkout-session-is-missing-metadata | I have been trying to pass metadata through stripe.checkout.Session.create() like so: stripe.api_key = STRIPE_SECRET_KEY payments_blueprint = Blueprint('payments', __name__, url_prefix='/payments') @payments_blueprint.route('/checkout', methods=['POST']) def create_checkout_session(): try: checkout_session = stripe.checkout.Session.create( metadata=dict(key='val'), payment_method_types=['card'], line_items=request.form.get("lineItems", LINE_ITEMS), success_url=f'{request.environ["HTTP_ORIGIN"]}/success', cancel_url=f'{request.environ["HTTP_ORIGIN"]}/cancel', mode='payment' ) return redirect(checkout_session.url, code=HTTPStatus.SEE_OTHER) except stripe.error.InvalidRequestError as err: return redirect(f'{request.environ["HTTP_ORIGIN"]}/error', code=HTTPStatus.MOVED_PERMANENTLY) and neither the responses from stripe nor the events passing through my webhook contains any metadata, even though the event logs in the stripe console for the request and the response both contain: "metadata": { "key": "val" },... I am listening to all events using stripe listen --forward-to localhost:8000/hooks/ --print-json and all that the endpoint at /hooks does is print the event to stdout. nothing else. I would like for this metadata to be passed through my series of booking validation webhooks. referencing this: https://stripe.com/docs/api/checkout/sessions/create#create_checkout_session-metadata Basically i am following these docs, sending metadata through the call for checkout.Session.create(), and then not seeing this metadata. I have tried using the dict() constructor, using dict syntax instead ({"key":"val"}), creating a variable and setting it to this dict before passing it through the function, and every other way i could think of to pass this metadata dictionary in, but i have not been getting it back from stripe. 
Here is the hook i have set up where these events are being forwarded: class TestHook(Resource): def post(self): event = stripe.Event.construct_from( json.loads(request.data), stripe.api_key ).to_dict() print(event['type']) pprint(event['data']['object']) And the output to stdout: payment_intent.created {'amount': 20000, 'amount_capturable': 0, 'amount_received': 0, 'application': None, 'application_fee_amount': None, 'canceled_at': None, 'cancellation_reason': None, 'capture_method': 'automatic', 'charges': {}, 'client_secret': 'pi_3JTYxxxx7t', 'confirmation_method': 'automatic', 'created': 1630184808, 'currency': 'usd', 'customer': None, 'description': None, 'id': 'pi_3JTYxxxxVm4', 'invoice': None, 'last_payment_error': None, 'livemode': False, 'metadata': <StripeObject at 0x105c061d0> JSON: {}, 'next_action': None, 'object': 'payment_intent', 'on_behalf_of': None, 'payment_method': None, 'payment_method_options': {'card': {'installments': None, 'network': None, 'request_three_d_secure': 'automatic'}}, 'payment_method_types': ['card'], 'receipt_email': None, 'review': None, 'setup_future_usage': None, 'shipping': None, 'source': None, 'statement_descriptor': None, 'statement_descriptor_suffix': None, 'status': 'requires_payment_method', 'transfer_data': None, 'transfer_group': None} checkout.session.completed {'allow_promotion_codes': None, 'amount_subtotal': 20000, 'amount_total': 20000, 'automatic_tax': {'enabled': False, 'status': None}, 'billing_address_collection': None, 'cancel_url': 'http://localhost:9000/#/guides/cozumel-buzos-del-caribe/trips/7-day-dive?cancelpayment=true', 'client_reference_id': None, 'currency': 'usd', 'customer': 'cus_K7oxxxguu', 'customer_details': {'email': '[email protected]', 'tax_exempt': 'none', 'tax_ids': []}, 'customer_email': None, 'id': 'cs_test_b1Yxxx9dM', 'livemode': False, 'locale': None, 'metadata': <StripeObject at 0x103d64a40> JSON: {}, 'mode': 'payment', 'object': 'checkout.session', 'payment_intent': 'pi_3JTYxxxVm4', 'payment_method_options': <StripeObject at 0x103d648b0> JSON: {}, 'payment_method_types': ['card'], 'payment_status': 'paid', 'setup_intent': None, 'shipping': None, 'shipping_address_collection': None, 'submit_type': None, 'subscription': None, 'success_url': 'http://localhost:9000/#/payment/success', 'total_details': {'amount_discount': 0, 'amount_shipping': 0, 'amount_tax': 0}, 'url': None} in all these events metadata is 'metadata': <StripeObject at 0x103d64a40> JSON: {} | What exactly do you mean by "session response" here? Can you provide an example? For the webhook, which exact event type are you subscribed to? If, for example, you're listening to payment_intent.succeeded instead of checkout.session.completed, then it would be expected for the session metadata to not be present. You can optionally provide metadata to the underlying payment intent using payment_intent_data[metadata][key]=val, which would then be included in the payment intent event bodies. https://stripe.com/docs/api/checkout/sessions/create#create_checkout_session-payment_intent_data-metadata | 13 | 2 |
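A sketch of the answer's suggestion applied to the create call from the question (line items and URLs reused from there): metadata set at the session level travels with checkout.session.* events, while payment_intent_data.metadata is copied onto the PaymentIntent and therefore shows up in payment_intent.* events as well.

checkout_session = stripe.checkout.Session.create(
    metadata={"key": "val"},                           # visible on the Checkout Session
    payment_intent_data={"metadata": {"key": "val"}},  # visible on the PaymentIntent
    payment_method_types=["card"],
    line_items=request.form.get("lineItems", LINE_ITEMS),
    mode="payment",
    success_url=f'{request.environ["HTTP_ORIGIN"]}/success',
    cancel_url=f'{request.environ["HTTP_ORIGIN"]}/cancel',
)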
68,960,891 | 2021-8-28 | https://stackoverflow.com/questions/68960891/how-to-run-lambda-application-as-local-api | I've looked all over for some supported library that does this but can't find anything. I just want to run my lambda as a local api (ie localhost:80000/api/get/1) so I can run both my frontend and backend all on my machine for rapid development. I've hacked together a fastapi "gateway" that I run locally and use that to call the lambda_entry locally, the only problem is it's quite slow, no doubt the spinning up of an environment for each request is taxing on performance. I feel like this is something people would use a lot, am I on the right track? | You can use AWS SAM for this. Local Testing and Debugging Use SAM CLI to step-through and debug your code. It provides a Lambda-like execution environment locally and helps you catch issues upfront. You might need to install Docker first as it would be the execution environment used to run the APIs. Setup the sam project first. $ python3 -m pip install aws-sam-cli $ sam init # Just choose the first options e.g. <1 - AWS Quick Start Templates> Now, you would have a file structure like this: $ tree . └── sam-app ├── events │ └── event.json ├── hello_world │ ├── app.py │ ├── __init__.py │ └── requirements.txt ├── __init__.py ├── README.md ├── template.yaml └── tests ├── __init__.py ├── integration │ ├── __init__.py │ └── test_api_gateway.py ├── requirements.txt └── unit ├── __init__.py └── test_handler.py 6 directories, 13 files 2 files that you would be interested at are: sam-app/hello_world/app.py This is where you would put the codes of your Lambda function. import json def lambda_handler(event, context): """Sample pure Lambda function""" return { "statusCode": 200, "body": json.dumps({ "message": "hello world", }), } sam-app/template.yaml This is where you would configure the API e.g. HTTP Method, URL, the location of the code to run, etc. ... HelloWorldFunction: Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction Properties: CodeUri: hello_world/ Handler: app.lambda_handler Runtime: python3.9 Events: HelloWorld: Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api Properties: Path: /hello Method: get ... Build the app and then you can run your local API already $ cd sam-app/ $ sam build $ sam local start-api Access your local API Note that running this the first time might take some time as this is when the environment for the API is setup e.g. creating the image and running the container. Any subsequent calls should be fast already. $ curl http://127.0.0.1:3000/hello {"message": "hello world"} Complete reference for your guidance: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-getting-started-hello-world.html | 6 | 7 |
68,960,171 | 2021-8-27 | https://stackoverflow.com/questions/68960171/python-error-importerror-attempted-relative-import-with-no-known-parent-packa | So, my files/folders structure is the following: project/ ├─ utils/ │ ├─ module.py ├─ server/ │ ├─ main.py Inside project/server/main.py I'm trying to import project/utils/module.py using this syntax: from ..utils.module import my_function. I'm using VSCode, and it even autocomplete for me as I type the module path. But when I run the file project/server/main.py, I get the error in the title. I've read dozens of answers here on stack overflow about this topic but none of them used an example like this. | Here is a reference that explains this problem well. Basically, the problem is that __package__ is not set when running standalone scripts. File structure . └── project ├── server │ └── main.py └── utils └── module.py project/server/main.py if __name__ == '__main__': print(__package__) Output $ python3 project/server/main.py None As we can see, the value of __package__ is None. This is a problem because it is the basis of relative imports as stated here: __package__ ... This attribute is used instead of __name__ to calculate explicit relative imports for main modules, as defined in PEP 366... Where PEP 366 explains this further: The major proposed change is the introduction of a new module level attribute, __package__. When it is present, relative imports will be based on this attribute rather than the module __name__ attribute. To resolve this, you can run it as a module via -m flag instead of a standalone script. Output $ python3 -m project.server.main # This can be <python3 -m project.server> if the file was named project/server/__main__.py project.server project/server/main.py from ..utils.module import my_function if __name__ == '__main__': print(__package__) print("Main") my_function() Output $ python3 -m project.server.main project.server Main My function Now, __package__ is set, which means it can now resolve the explicit relative imports as documented above. | 27 | 44 |
68,951,594 | 2021-8-27 | https://stackoverflow.com/questions/68951594/python-lru-cache-how-can-currsize-misses-maxsize | I have a class with a method that is annotated with the lru_cache annotation: CACHE_SIZE=16384 class MyClass: [...] @lru_cache(maxsize=CACHE_SIZE) def _my_method(self, texts: Tuple[str]): <some heavy text processing> def cache_info(self): return self._my_method.cache_info() After running for a while, I look at the cache statistics through the cache_info() method: c = MyClass() [...] c.cache_info() { "hits":9348, "misses":4312, "maxsize":16384, "currsize":2588 } My question is: how can currsize be smaller than misses AND smaller than maxsize? My understanding was: for each miss, the result is added to the cache, hence increasing the current size. Only when the current size has reached the maximum size, cached results are removed. Since the maximum size is not reached here yet, each miss should be cached, so currsize should equal misses at this point. However, that does not seem to be the way this works. | If your program is either multi-threaded, or recursive - basically, any sort of condition where _my_method() might be called again while another call is partially completed - then it's possible to see the behavior you're experiencing. lru_cache() is thread-aware and uses the following set of steps for size-limited caching: make a hash key out of the wrapped function's arguments lock the cache in a with block: look up the key in the cache if the key is in the cache, return the cached value else, if the key isn't in the cache, increase misses by 1 call the wrapped function lock the cache again if the result is in the cache now, return it if the result still isn't in the cache, add it, possibly removing older entries, etc. etc. In other words, the cached value may have been added by another thread while the wrapped function was called, but it's still counted as a miss. If you had multiple calls to _my_method() that looked up the same missing key, causing misses to be incremented but then resulting in the key appearing in the cache by the time _my_method() completes, misses will be higher than currsize. | 6 | 4 |
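A minimal sketch reproducing the effect with two threads (the same thing happens with recursion): both calls miss before either result is stored, so misses ends up at 2 while currsize is only 1.

import time
from functools import lru_cache
from threading import Thread

@lru_cache(maxsize=128)
def slow(x):
    time.sleep(0.5)        # long enough for both threads to miss before either stores
    return x * x

threads = [Thread(target=slow, args=(7,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(slow.cache_info())   # typically: hits=0, misses=2, maxsize=128, currsize=1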
68,957,686 | 2021-8-27 | https://stackoverflow.com/questions/68957686/pillow-how-to-binarize-an-image-with-threshold | I would like to binarize a png image. I would like to use Pillow if possible. I've seen two methods used: image_file = Image.open("convert_image.png") # open colour image image_file = image_file.convert('1') # convert image to black and white This method appears to handle a region filled with a light colour by dithering the image. I don't want this behaviour. If there is, for example, a light yellow circle, I want that to become a black circle. More generally, if a pixel's RGB is (x,y,z) the I want the pixel to become black if x<=t OR y<=t OR z<=t for some threshold 0<t<255 I can covert the image to greyscale or RGB and then manually apply a threshold test but this seems inefficient. The second method I've seen is this: threshold = 100 im = im2.point(lambda p: p > threshold and 255) from here I don't know how this works though or what the threshold is or does here and what "and 255" does. I am looking for either an explanation of how to apply method 2 or an alternative method using Pillow. | I think you need to convert to grayscale, apply the threshold, then convert to monochrome. image_file = Image.open("convert_iamge.png") # Grayscale image_file = image_file.convert('L') # Threshold image_file = image_file.point( lambda p: 255 if p > threshold else 0 ) # To mono image_file = image_file.convert('1') The expression "p > threshhold and 255" is a Python trick. The definition of "a and b" is "a if a is false, otherwise b". So that will produce either "False" or "255" for each pixel, and the "False" will be evaluated as 0. My if/else does the same thing in what might be a more readable way. | 8 | 20 |
68,917,844 | 2021-8-25 | https://stackoverflow.com/questions/68917844/why-do-i-get-a-futurewarning-with-pandas-concat | Does anyone meet this similar FutureWarning? I got this when I was using Tiingo+pandas_datareader? The warning is like: python3.8/site-packages/pandas_datareader/tiingo.py:234: FutureWarning: In a future version of pandas all arguments of concat except for the argument 'objs' will be keyword-only return pd.concat(dfs, self._concat_axis) I think this warning does not impact my accessing to pandas data(in my case, I fetch from tiingo api), I can get all the data I want with no problem. I just want to understand if there is any risk with my current enviroment: my python3 - 3.8.5, Python 3.8.5, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 pandas_datareader version - 0.10.0 pandas version - 1.3.2 I then tested my code with a 'futureVersion' of python: 3.9.6 (comparing with python 3.8.5). To my suprise, I no longer get any warning or error, everything works fine: bellow are details updated platform win32 - Python 3.9.6, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 Any advice is appreciated. | Most function parameters in python are "positional or keyword" arguments. I.e. if I have this function: def do_something(x, y): pass Then I can either call it like this, using positional arguments: do_something(1, 2) Or like this, using keyword arguments: do_something(x=1, y=2) Or like this, using a mixture of the two (but note that you're not allowed to have a positional argument after a keyword argument): do_something(1, y=2) But you can also define functions with positional-only or keyword-only parameters Say I have this other function: def do_something_else(x, /, y, *, z): pass In this function, I've marked x as being positional-only, because it comes before the /. And I've marked z as being keyword-only, because it comes after the *. y is a positional-or-keyword parameter, as it comes after the / but before the *. This means that these two attempts to call the function will fail: the first one because z is being called as a positional argument, and the second because x is being called as a keyword argument: do_something_else(1, 2, 3) # will fail! do_something_else(x=1, y=2, z=3) # will fail! These two attempts, however, will both succeed — y is still a positional-or-keyword parameter. do_something_else(1, 2, z=3) # fine do_something_else(1, y=2, z=3) # fine The `FutureWarning` message. The FutureWarning message has nothing to do with the version of python you're using, but has everything to do with the version of pandas that you're using. Pandas is a third-party library, not a part of the python core, so the pandas version you're using is a completely different thing to the python version you're using. The warning is letting you know that currently, you're fine to write pd.concat(dfs, self._concat_axis), but that they're planning on changing the definition of the function in a future version of pandas so that all arguments except for objs will be keyword-only. I.e., after they make this change, pd.concat(dfs, self._concat_axis) will raise an error, and you will have to write pd.concat(dfs, axis=self._concat_axis) instead. They are most likely considering making this change because calling functions with keyword arguments is often clearer and more readable for other people. | 6 | 17 |
68,947,934 | 2021-8-27 | https://stackoverflow.com/questions/68947934/read-a-text-file-line-by-line-and-check-for-a-substring-on-2-of-the-lines | I want to read a text file and check for strings with open(my_file,'r') as f: for line in f: if 'text1' in line: f.next() if 'text2' in line: # do some processing I want to first find the text 'text1' at the beginning of the line then if found I want to check the next line for 'text2' if found then I will do some other processing. Appears that the f.next() is not moving to the next line. | The variable line does not magically get updated when you call f.next() (or next(f) in Python 3). You would instead have to assign the line returned by next to a variable and test against it: with open(my_file,'r') as f: for line in f: if 'text1' in line: try: next_line = next(f) except StopIteration: break # in case we are already at the end of file if 'text2' in next_line: # do some processing with line and next_line If there can be cases where there are two consecutive lines with text1 followed by a line of text2, however, calling next(f) would make it consume the second line of text1, making it unable to match text1 in the next iteration. To remedy that, you can use the pairwise recipe in the itertools documentation to iterate 2 lines at a time over the file instead: import itertools def pairwise(iterable): a, b = itertools.tee(iterable) next(b, None) return zip(a, b) with open(my_file,'r') as f: for line, next_line in pairwise(f): if 'text1' in line and 'text2' in next_line: # do some processing with line and next_line | 5 | 3 |
68,947,752 | 2021-8-27 | https://stackoverflow.com/questions/68947752/is-it-possible-to-override-just-one-column-type-when-using-pyspark-to-read-in-a | I'm trying to use PySpark to read in a CSV file with many columns. The inferschema option is great at inferring majority of the columns' data types. If I want to override just one of the columns types that were inferred incorrectly, what is the best way to do this? I have this code working, but it makes PySpark import only the one column that is specified in the schema, which is not want I want. schema = StructType() \ .add("column_one_of_many", StringType(), True) spark.read.format('com.databricks.spark.csv') \ .option('delimited',',') \ .option('header','true') \ .option('inferschema', 'true') \ .schema(self.schema) \ .load('dbfs:/FileStore/some.csv') Is what I'm asking for even possible? Thank you for your time and guidance :) | Easier way would be using .withColumn and casting column_one_of_many as string. Example from pyspark.sql.types import * spark.read.format('com.databricks.spark.csv') \ .option('delimited',',') \ .option('header','true') \ .option('inferschema', 'true') \ .load('dbfs:/FileStore/some.csv')\ .withColumn("column_one_of_many",col("column_one_of_many").cast("string")) Other way would be defining all the columns in schema then exclude the inferschema just use .schema option to read the csv file. | 5 | 3 |
68,939,963 | 2021-8-26 | https://stackoverflow.com/questions/68939963/efficiently-insert-multiple-elements-in-a-list-or-another-data-structure-keepi | I have a list of items that should be inserted in a list-like data structure one after the other, and I have the indexes at which each item should be inserted. For example: items = ['itemX', 'itemY', 'itemZ'] indexes = [0, 0, 1] The expected result is to have a list like this: result = ['itemY', 'itemZ', 'itemX']. I'm able to get this result with this simple approach: result = [] for index, item in zip(indexes, items): result.insert(index, item) However, this is a very slow approach once lists become huge (the complexity is O(n^2)). Is there any (relatively simple to implement) way to improve my basic approach? I guess I have to look at other data structures while I insert elements and finally transform that data structure into my result list. Are trees a good option? Insert could be done maybe in O(log(n)) (instead of O(n)), but which specific tree-like structure should I use? Or maybe something good can be achieved by just looking at all the indexes together (instead of using them one by one). This is probably the worst case for my slow approach (always insert items at the beginning of the list): n = 10**6 # some large number items = list(range(n)) indexes = [0] * n | Here's python code for a treap with a size decoration that allows insertion at specific indexes, and reordering of whole contiguous sections. It was adapted from C++ code, Kimiyuki Onaka's solution to the Hackerrank problem, "Give Me the Order." (I cannot guarantee that this adaptation is bug free -- a copy of the original code is available in the description of this question.) import random class Treap: def __init__(self, value=None): self.value = value self.key = random.random() self.size = 1 self.left = None self.right = None def size(t): return t.size if t else 0 def update(t): if t: t.size = 1 + size(t.left) + size(t.right) return t def merge(a, b): if not a: return b if not b: return a if a.key > b.key: a.right = merge(a.right, b) return update(a) else: b.left = merge(a, b.left) return update(b) def split(t, i): if not t: return None, None if i <= size(t.left): u, t.left = split(t.left, i) return u, update(t) else: t.right, u = split(t.right, i - size(t.left) - 1) return update(t), u def insert(t, i, value): left, right = split(t, i) u = Treap(value) return merge(merge(left, u), right) def inorder(treap): if not treap: return if treap.left: inorder(treap.left) print(treap.value) if treap.right: inorder(treap.right) Output: lst = ['itemX', 'itemY', 'itemZ'] idxs = [0, 0, 1] t = None for i in range(len(lst)): t = insert(t, idxs[i], lst[i]) inorder(t) """ itemY itemZ itemX """ | 7 | 1 |
68,941,232 | 2021-8-26 | https://stackoverflow.com/questions/68941232/pandas-how-to-explode-data-frame-with-json-arrays | How to explode pandas data frame? Input df: Required output df: +----------------+------+-----+------+ |level_2 | date | val | num | +----------------+------+-----+------+ | name_1a | 2020 | 1 | null | | name_1b | 2019 | 2 | null | | name_1b | 2020 | 3 | null | | name_10000_xyz | 2018 | 4 | str | | name_10000_xyz | 2019 | 5 | null | | name_10000_xyz | 2020 | 6 | str | +------------------------------------+ To reproduce input df: import pandas as pd pd.set_option('display.max_colwidth', None) data={'level_2':{1:'name_1a',3:'name_1b',5:'name_10000_xyz'},'value':{1:[{'date':'2020','val':1}],3:[{'date':'2019','val':2},{'date':'2020','val':3}],5:[{'date':'2018','val':4,'num':'str'},{'date':'2019','val':5},{'date':'2020','val':6,'num':'str'}]}} df = pd.DataFrame(data) | Explode the dataframe on value column, then pop the value column and create a new dataframe from it then join the new frame with the exploded frame. s = df.explode('value', ignore_index=True) s.join(pd.DataFrame([*s.pop('value')], index=s.index)) level_2 date val num 0 name_1a 2020 1 NaN 1 name_1b 2019 2 NaN 2 name_1b 2020 3 NaN 3 name_10000_xyz 2018 4 str 4 name_10000_xyz 2019 5 NaN 5 name_10000_xyz 2020 6 str | 4 | 7 |
68,929,785 | 2021-8-25 | https://stackoverflow.com/questions/68929785/how-to-apply-mask-to-image-tensors-in-pytorch | Applying mask with NumPy or OpenCV is a relatively straightforward process. However, if I need to use masked image in loss calculations of my optimization algorithm, I need to employ exclusively PyTorch, as doing otherwise interferes with gradient computations. Assuming that I have an image tensor [1, 512, 512, 3] (batch, height, width, channels) and a mask tensor [1, 20, 512, 512] (batch, channels, height, width) where every channel corresponds to one of 20 segmentation classes, I want to get a masked image tensor that fills every pixel with black (0, 0, 0), except for those belonging to one or more specified segmentation classes. Here is how it can be done with numpy: import numpy as np import torch # Create dummy image and mask image_tensor = torch.randn([1, 512, 512, 3]) mask_tensor = torch.randn([1, 20, 512, 512]) # Apply argmax to mask mask_tensor = torch.max(mask_tensor, 1)[1] # -> 1, 512, 512 # Define mask function def selective_mask(image_src, mask, dims=[]): h, w = mask.shape background = np.zeros([h, w, 3], dtype=np.uint8) for j_, j in enumerate(mask[:, :]): for k_, k in enumerate(j): if k in dims: background[j_, k_] = image_src[j_, k_] output = background return output # Convert tensors to numpy: image = image_tensor.squeeze(0).cpu().numpy() mask = mask_tensor.squeeze(0).cpu().nmpy() # Apply mask function for several classes image_masked = selective_mask(image, mask, dims=[5, 6, 8]) How should my code be changed to bring it in line with the PyTorch requirements? | First of all, the definition of the function selective_mask is far for what You may call 'straightforward'. The key point in using numpy (and torch, which is designed to be mostly compatible) is to take advantage of the vectorization of operations and to avoid using loops, which are not parallelizable. If You rewrite the said function in this manner: def selective_mask(image_src, mask, channels=[]): mask = mask[np.array(channels).astype(int)] return np.sign(np.sum(mask, axis=0), dtype=image_src.dtype) * image_src it will turn out that You can actually do the same with pytorch tensors (here no need to squeeze the batch (first) dimension): def selective_mask_t(image_src, mask, channels=[]): mask = mask[:, torch.tensor(channels).long()] mask = torch.sgn(torch.sum(mask, dim=1)).to(dtype=image_src.dtype).unsqueeze(-1) return mask * image_src Also, You probably want to produce the mask itself this way: (BTW using a combination of max and sgn here should actually work faster than setting elements indexed by argmax) # Create dummy image and mask image_tensor = torch.randn([1, 512, 512, 3]) mask_tensor = torch.randn([1, 20, 512, 512]) # Discreticize the mask (set to one in the channel with the highest value) -> 1, 20, 512, 512 mask_tensor = torch.sgn(mask_tensor - torch.max(mask_tensor, 1)[0].unsqueeze(1)) + 1. Then it should work just fine: print(selective_mask_t(image_tensor, mask_tensor, [5, 6, 8])) | 6 | 5 |
68,931,854 | 2021-8-26 | https://stackoverflow.com/questions/68931854/pandas-infer-freq-returns-none | I have a pandas frame where the index is a DateTimeIndex and I am trying to infer its frequency and it is coming up as None. df.index DatetimeIndex(['2020-08-24 00:00:00', '2020-08-24 00:01:00', '2020-08-24 00:02:00', '2020-08-24 00:03:00', '2020-08-24 00:04:00', '2020-08-24 00:05:00', '2020-08-24 00:06:00', '2020-08-24 00:07:00', '2020-08-24 00:08:00', '2020-08-24 00:09:00', ... '2021-08-22 23:51:00', '2021-08-22 23:52:00', '2021-08-22 23:53:00', '2021-08-22 23:54:00', '2021-08-22 23:55:00', '2021-08-22 23:56:00', '2021-08-22 23:57:00', '2021-08-22 23:58:00', '2021-08-22 23:59:00', '2021-08-23 00:00:00'], dtype='datetime64[ns]', name='timestamp', length=307668, freq=None As you can see the data is coming at a 1 minute frequency but when I do something like: pd.infer_freq(df.index) # returns None Is there some other way to figure out the input frequency? Is there a method that would be robust to some missing data? | freq is already None in this case, so you should try: >>> pd.to_timedelta(np.diff(df.index).min()) Timedelta('0 days 00:01:00') >>> Or just: >>> np.diff(df.index).min() numpy.timedelta64(60000000000,'ns') | 5 | 4 |
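A small sketch illustrating why infer_freq gives None as soon as the spacing has gaps, and how the dominant spacing can still be recovered from the pairwise differences; the index here is synthetic.

import pandas as pd

# Minute data with one missing timestamp: infer_freq cannot name a single frequency.
idx = pd.date_range('2020-08-24', periods=10, freq='T').delete(4)
print(pd.infer_freq(idx))                        # None

# The most common spacing is still recoverable and is robust to a few gaps.
step = pd.Series(idx).diff().value_counts().idxmax()
print(step)                                      # 0 days 00:01:00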
68,926,935 | 2021-8-25 | https://stackoverflow.com/questions/68926935/importerror-cannot-import-name-dtypearg-from-pandas | I'm using Pandas 1.3.2 in a Conda environment. When importing pandas on a Jupyter Notebook: import pandas as pd I get the error: ImportError: cannot import name 'DtypeArg' from 'pandas._typing' (C:\Users\tone_\anaconda3\envs\spyder\lib\site-packages\pandas\_typing.py) I've seen similar questions, but so far no solution. Can anyone help? | According to the answer provided in this post it is a bug in pandas==1.3.1. A possible solution is to downgrade it to some earlier version, e.g pip install pandas==1.3.0 | 5 | 1 |
68,925,966 | 2021-8-25 | https://stackoverflow.com/questions/68925966/how-to-plot-each-pandas-row-as-a-line-plot | I have a pandas dataframe where the column names are frequencies in 1 Hz steps, each row is a participant id, and the values are an amplitude^2 value for the participant in each respective frequency. I am trying to plot a time-series of the data where the x axis are the frequencies, and the y axis is the amplitude^2 value, in "spaghetti plot" style, i.e. there is one line plotted for each row of my dataframe: Here is a small snippet of my data: data = [['1', 9.45e-09, 9.85e-09, 8.33e-09, 6.06e-09, 4.80e-09, 4.08e-09], ['2', 1.30e-08, 1.25e-08, 8.99e-09, 6.25e-09, 4.44e-09, 3.45e-09], ['3', 9.32e-09, 8.60e-09, 5.67e-09, 3.68e-09, 2.53e-09, 1.75e-09]] fft_df = df = pd.DataFrame(data, columns = ['id', '1','2','3','4','5','6']).set_index('id') # display(fft_df) 1 2 3 4 5 6 id 1 9.450000e-09 9.850000e-09 8.330000e-09 6.060000e-09 4.800000e-09 4.080000e-09 2 1.300000e-08 1.250000e-08 8.990000e-09 6.250000e-09 4.440000e-09 3.450000e-09 3 9.320000e-09 8.600000e-09 5.670000e-09 3.680000e-09 2.530000e-09 1.750000e-09 Using matplotlib, if I use the fft_df column names as the x argument, and the fft_df column mean as the y argument, matplotlib will return a lineplot. However if I remove the .mean() from the y input it will return an error. I cannot seem to figure out how to plot one line for each row in fft_df: plt.figure(figsize=(10, 10)) plt.ylabel('Absolute Power (log)',fontsize=12) plt.xlabel('Frequencies',fontsize=12) plt.plot(fft_df.columns,fft_df.mean()) | I think, the easiest solution would be to transpose your DataFrame and then use pandas' plotting method. This is somewhat based on this answer. The code would look like this: import pandas as pd import matplotlib.pyplot as plt data = [['1', 9.45e-09, 9.85e-09, 8.33e-09, 6.06e-09, 4.80e-09, 4.08e-09], ['2', 1.30e-08, 1.25e-08, 8.99e-09, 6.25e-09, 4.44e-09, 3.45e-09], ['3', 9.32e-09, 8.60e-09, 5.67e-09, 3.68e-09, 2.53e-09, 1.75e-09]] df = pd.DataFrame(data, columns = ['id', '1','2','3','4','5','6']).set_index('id') # create figure and axis fig, ax = plt.subplots(figsize=(10, 10)) # setting the axis' labels ax.set_ylabel('Absolute Power (log)',fontsize=12) ax.set_xlabel('Frequencies',fontsize=12) # transposing (switchung rows and columns) of DataFrame df and # plot a line for each column on the axis ax, which was created previously df.T.plot(ax=ax) The result looks like this: | 6 | 1 |
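An equivalent sketch with plain matplotlib that loops over the rows, in case per-line control (labels, log scale) is needed; fft_df is assumed to be the frame built in the question.

import matplotlib.pyplot as plt

# One plot() call per participant (row) of fft_df.
fig, ax = plt.subplots(figsize=(10, 10))
for participant_id, row in fft_df.iterrows():
    ax.plot(fft_df.columns, row.values, label=participant_id)
ax.set_xlabel('Frequencies', fontsize=12)
ax.set_ylabel('Absolute Power (log)', fontsize=12)
ax.set_yscale('log')
ax.legend(title='id')
plt.show()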
68,925,951 | 2021-8-25 | https://stackoverflow.com/questions/68925951/scikit-learn-attributeerror-in-custom-transformer | I'm trying to create a transformer that changes types of columns from "object" to "category", so I created a custom class for that: from sklearn.base import BaseEstimator, TransformerMixin class ChangeToCategory(BaseEstimator, TransformerMixin): def __init__(self, to_categories = None): self.to_categories_ = to_categories def fit(self, X, y=None): return self def transform(self, X, y=None): X_ = X.copy() for cat in self.to_categories_: X_[cat] = X_[cat].astype("category") return X_ But I'm getting an error when I try to create an object of this class: ChangeToCategory(to_categories=["Sex", "Embarked"]) AttributeError: 'ChangeToCategory' object has no attribute 'to_categories' What am I doing wrong? Based on the error, I assume that something tries to access the attribute "to_categories", but I only set the attribute "to_categories_" - with an underscore; the name without the underscore is just the __init__ parameter, which I don't reference anywhere. | So, I found where I was wrong. From the sklearn documentation: you must initialize all estimator parameters as attributes of the class. In addition, every keyword argument accepted by __init__ should correspond to an attribute on the instance. Scikit-learn relies on this to find the relevant attributes to set on an estimator when doing model selection. | 7 | 8 |
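To make the accepted answer concrete, here is a sketch of the same transformer with the attribute named exactly like the __init__ parameter, which is what scikit-learn's get_params/clone machinery expects.

from sklearn.base import BaseEstimator, TransformerMixin

class ChangeToCategory(BaseEstimator, TransformerMixin):
    def __init__(self, to_categories=None):
        # Keep the attribute name identical to the __init__ parameter name.
        self.to_categories = to_categories

    def fit(self, X, y=None):
        return self

    def transform(self, X, y=None):
        X_ = X.copy()
        for col in (self.to_categories or []):
            X_[col] = X_[col].astype("category")
        return X_

ChangeToCategory(to_categories=["Sex", "Embarked"])  # no AttributeError now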
68,926,132 | 2021-8-25 | https://stackoverflow.com/questions/68926132/creation-of-a-class-wrapper-in-python | I would like to do the following: given an instance of a Base class, create an object of a Wrapper class that has all the methods and attributes of the Base class + some additional functionality. class Base: def __init__(self, *args, **kwargs): self.base_param_1 = ... # some stuff def base_method_1(self, *args, **kwargs): # some stuff class Wrapper(...): def __init__(self, cls_instance, *args, **kwargs): self.wrapper_param_1 = ... # some stuff def wrapper_method_1(self, *args, **kwargs): # some stuff The use case is like the following: wrapper_obj = Wrapper(Base(*base_args, **base_kwargs), *wrapper_args, **wrapper_kwargs) The expected behavior is that one can access both the wrapper's members (wrapper_param_1, wrapper_method_1) and the base object's members (base_param_1, base_method_1). It is important that the fields of the Base class are the same in Wrapper (no copying). I've seen that new functionality can be added in this way Adding a Method to an Existing Object Instance, but this approach has caveats and is not recommended. Inheritance seems not to be an option here, since I am given an already constructed object and this Wrapper can take different classes, as long as they share a common Base. EDIT: It is also required that a Wrapper object is identified as a Base instance. | You can override __getattr__. That way, Wrapper-specific attributes are looked up first, then the wrapped object's attributes are tried. class Wrapper: def __init__(self, base_obj, *args, **kwargs): self.base_obj = base_obj # some stuff def wrapper_method(self): return "new stuff" def __getattr__(self, name): return getattr(self.base_obj, name) w = Wrapper("abc") w.wrapper_method() # 'new stuff' w.upper() # 'ABC' | 4 | 7 |
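A sketch extending the accepted answer to also cover the EDIT requirement; overriding __class__ as a property is a known trick (used, for example, by unittest.mock) to make isinstance checks report the wrapped object's type, though it can be considered fragile. The Base/Wrapper bodies here are toy stand-ins for the question's classes.

class Base:
    def __init__(self):
        self.base_param_1 = 1
    def base_method_1(self):
        return "from base"

class Wrapper:
    def __init__(self, base_obj, extra):
        self.base_obj = base_obj
        self.wrapper_param_1 = extra
    def wrapper_method_1(self):
        return "from wrapper"
    def __getattr__(self, name):
        # Fall back to the wrapped object's attributes (no copying involved).
        return getattr(self.base_obj, name)
    @property
    def __class__(self):
        # Report the wrapped object's class so isinstance(w, Base) is True.
        return type(self.base_obj)

w = Wrapper(Base(), extra=2)
print(w.base_param_1, w.wrapper_param_1, w.base_method_1(), w.wrapper_method_1())
print(isinstance(w, Base))  # True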
68,926,007 | 2021-8-25 | https://stackoverflow.com/questions/68926007/make-number-of-rows-based-on-column-values-pandas-python | I want to expand my data frame based on numeric values in two columns (index_start and index_end). My df looks like this: item index_start index_end A 1 3 B 4 7 I want this to expand to create rows for A from 1 to 3 and rows for B from 4 to 7 like so. item index_start index_end index A 1 3 1 A 1 3 2 A 1 3 3 B 4 7 4 B 4 7 5 B 4 7 6 B 4 7 7 Unsure how to implement this in Python/pandas. | You could use .explode() df['index'] = df.apply(lambda row: list(range(row['index_start'], row['index_end']+1)), axis=1) df.explode('index') item index_start index_end index 0 A 1 3 1 0 A 1 3 2 0 A 1 3 3 1 B 4 7 4 1 B 4 7 5 1 B 4 7 6 1 B 4 7 7 | 4 | 6 |
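An alternative vectorised sketch for the same expansion: repeat each row by its range length and build the running index with numpy, avoiding the Python-level apply.

import numpy as np
import pandas as pd

df = pd.DataFrame({'item': ['A', 'B'], 'index_start': [1, 4], 'index_end': [3, 7]})

# Repeat each row by the size of its range, then attach the running index.
lengths = df['index_end'] - df['index_start'] + 1
out = df.loc[df.index.repeat(lengths)].reset_index(drop=True)
out['index'] = np.concatenate([np.arange(s, e + 1)
                               for s, e in zip(df['index_start'], df['index_end'])])
print(out)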
68,915,407 | 2021-8-25 | https://stackoverflow.com/questions/68915407/celery-retry-with-updated-arguments | Considering a task takes a list as arguments and process each element in the list, which may succeed or fail. In this case, how to "retry" with the failed elements only? Example: @app.task(bind=True) def my_test(self, my_list:list): new_list = [] for ele in my_list: try: do_something_may_fail(ele) except: new_list.append(ele) # how to retry with the new list? # like # self.retry(my_list=new_list, countdown=5) # or # self.apply_async(new_list, countdown=5) | Solution 1 Use Task.retry with its args and kwargs input. retry(args=None, kwargs=None, exc=None, throw=True, eta=None, countdown=None, max_retries=None, **options) Retry the task, adding it to the back of the queue. Parameters args (Tuple) – Positional arguments to retry with. kwargs (Dict) – Keyword arguments to retry with. Be aware when passing arguments because having a value just for the same parameter both in args and kwargs would result to failure. Below, I chose to only use args=(<values here>) and empty out kwargs={}. You may also opt to do the other way around where you would use kwargs={<values here>} and empty out args=(). tasks.py from celery import Celery app = Celery('tasks') @app.task( bind=True, default_retry_delay=0.1, retry_backoff=False, max_retries=None, ) def my_test(self, some_arg_1: int, my_list: list, some_arg_2: str): print(f"my_test {some_arg_1} {my_list} {some_arg_2}") # Filter the failed items. Here, let's say only the last item is successful. new_list = my_list[:-1] if new_list: self.retry( args=( some_arg_1 + 1, # some_arg_1 increments per retry new_list, # Failed items some_arg_2 * 2, # some_arg_2's length doubles per retry ), kwargs={}, # Empty it out to avoid having multiple values for the arguments whether we initially called it with args or kwargs or both. 
) Logs (Producer) >>> from tasks import * >>> my_test.apply_async(args=(0, [1,2,3,4,5], "a")) <AsyncResult: 121090c6-6b77-4cbd-b1d1-790005e8b18c> >>> >>> # The above command is just equivalent to the following (just the same result): >>> # my_test.apply_async(kwargs={'some_arg_1': 0, 'my_list': [1,2,3,4,5], 'some_arg_2': "a"}) >>> # my_test.apply_async(args=(0,), kwargs={'my_list': [1,2,3,4,5], 'some_arg_2': "a"}) Logs (Consumer) [2021-08-25 21:32:06,433: INFO/MainProcess] Task tasks.my_test[121090c6-6b77-4cbd-b1d1-790005e8b18c] received [2021-08-25 21:32:06,434: WARNING/MainProcess] my_test 0 [1, 2, 3, 4, 5] a [2021-08-25 21:32:06,434: WARNING/MainProcess] [2021-08-25 21:32:06,438: INFO/MainProcess] Task tasks.my_test[121090c6-6b77-4cbd-b1d1-790005e8b18c] retry: Retry in 0.1s [2021-08-25 21:32:06,439: INFO/MainProcess] Task tasks.my_test[121090c6-6b77-4cbd-b1d1-790005e8b18c] received [2021-08-25 21:32:06,539: WARNING/MainProcess] my_test 1 [1, 2, 3, 4] aa [2021-08-25 21:32:06,539: WARNING/MainProcess] [2021-08-25 21:32:06,541: INFO/MainProcess] Task tasks.my_test[121090c6-6b77-4cbd-b1d1-790005e8b18c] retry: Retry in 0.1s [2021-08-25 21:32:06,542: INFO/MainProcess] Task tasks.my_test[121090c6-6b77-4cbd-b1d1-790005e8b18c] received [2021-08-25 21:32:06,640: WARNING/MainProcess] my_test 2 [1, 2, 3] aaaa [2021-08-25 21:32:06,640: WARNING/MainProcess] [2021-08-25 21:32:06,642: INFO/MainProcess] Task tasks.my_test[121090c6-6b77-4cbd-b1d1-790005e8b18c] retry: Retry in 0.1s [2021-08-25 21:32:06,643: INFO/MainProcess] Task tasks.my_test[121090c6-6b77-4cbd-b1d1-790005e8b18c] received [2021-08-25 21:32:06,742: WARNING/MainProcess] my_test 3 [1, 2] aaaaaaaa [2021-08-25 21:32:06,743: WARNING/MainProcess] [2021-08-25 21:32:06,745: INFO/MainProcess] Task tasks.my_test[121090c6-6b77-4cbd-b1d1-790005e8b18c] retry: Retry in 0.1s [2021-08-25 21:32:06,747: INFO/MainProcess] Task tasks.my_test[121090c6-6b77-4cbd-b1d1-790005e8b18c] received [2021-08-25 21:32:06,844: WARNING/MainProcess] my_test 4 [1] aaaaaaaaaaaaaaaa [2021-08-25 21:32:06,844: WARNING/MainProcess] [2021-08-25 21:32:06,844: INFO/MainProcess] Task tasks.my_test[121090c6-6b77-4cbd-b1d1-790005e8b18c] succeeded in 0.0005442450019472744s: None All task arguments are updated per retry: some_arg_1 increments by 1 per retry, from the starting value of 0 to the last value of 4 my_list loses 1 item per retry, from the starting value of [1, 2, 3, 4, 5] to the last value of [1] some_arg_2 doubles it's size per retry, from the starting value of "a" to the last value of "aaaaaaaaaaaaaaaa" Solution 2 Just recall the same task from within the task itself, somewhat recursion-like. tasks.py from celery import Celery app = Celery('tasks') @app.task def my_test(some_arg_1: int, my_list: list, some_arg_2: str): print(f"my_test {some_arg_1} {my_list} {some_arg_2}") # Filter the failed items. Here, let's say only the last item is successful. new_list = my_list[:-1] if new_list: my_test.apply_async( args=( some_arg_1 + 1, # some_arg_1 increments per retry new_list, # Failed items some_arg_2 * 2, # some_arg_2's length doubles per retry ), kwargs={}, # Empty it out to avoid having multiple values for the arguments whether we initially called it with args or kwargs or both. ) Logs (Producer and Consumer) Same as Solution 1 | 7 | 5 |
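A condensed sketch tying the answer back to the question's original loop: collect the elements that failed and retry with only those. The do_something_may_fail body and the retry limits are placeholders, not part of the original post.

import random
from celery import Celery

app = Celery('tasks')

def do_something_may_fail(ele):
    # Stand-in for the question's flaky per-element work.
    if random.random() < 0.3:
        raise RuntimeError(f"failed on {ele}")

@app.task(bind=True, max_retries=3, default_retry_delay=5)
def my_test(self, my_list: list):
    failed = []
    for ele in my_list:
        try:
            do_something_may_fail(ele)
        except Exception:
            failed.append(ele)
    if failed:
        # Re-enqueue this task with only the elements that failed.
        raise self.retry(args=(failed,), kwargs={})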
68,915,672 | 2021-8-25 | https://stackoverflow.com/questions/68915672/specify-metaclass-for-dynamic-type | In Python, you can create types dynamically using the function my_type = type(name, bases, dict). How would you specify a metaclass for this type my_type? (Ideally other than defining a throwaway class object that simply binds the metaclass to instantiated subclasses) | For Dynamic Type Creation where you need to provide keywords in the class statement (including, but not limited to, the keyword "metaclass"), you would use types.new_class. The following class definition: class A(B, C, metaclass=AMeta): pass Can be created dynamically like: A = types.new_class( name="A", bases=(B, C), kwds={"metaclass": AMeta}, exec_body=None, ) This is preferable to directly calling type, or any other metaclass, since it resolves the metaclass bases - depending on the types of B and C, the type(A) here might not necessarily be AMeta. | 4 | 10 |
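A runnable sketch of the answer's snippet with a toy metaclass, showing that the resulting class really gets AMeta as its metaclass; the created_by attribute is only there for demonstration.

import types

class AMeta(type):
    def __new__(mcls, name, bases, namespace, **kwargs):
        cls = super().__new__(mcls, name, bases, namespace)
        cls.created_by = mcls.__name__   # demonstration only
        return cls

class B: pass
class C: pass

A = types.new_class("A", (B, C), kwds={"metaclass": AMeta})
print(type(A), A.created_by)   # <class '__main__.AMeta'> AMeta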
68,870,009 | 2021-8-21 | https://stackoverflow.com/questions/68870009/equivalent-of-python-walrus-operator-in-c11 | Recently I have been using the := operator in python quite a bit, in this way: if my_object := SomeClass.function_that_returns_object(): # do something with this object if it exists print(my_object.some_attribute) The question Is there any way to do this in c++11 without the use of stdlib? for example in an arduino sketch if I wanted to use a method that may potentially return zero, such as: if(char * data = myFile.readBytes(data, dataLen)) { // do something } else { // do something else } | Python's := assignment expression operator (aka, the "walrus" operator) returns the value of an assignment. C++'s = assignment operator (both copy assignment and move assignment, as well as other assignment operators) does essentially the same thing, but in a different way. The result of an assignment is a reference to the object that was assigned to, allowing that object to be evaluated in further expressions. So, the equivalent of: if my_object := SomeClass.function_that_returns_object(): # do something with this object if it exists print(my_object.some_attribute) Would be just like you showed: SomeType *my_object; if ((my_object = SomeClass.function_that_returns_object())) { // do something with this object if it exists print(my_object->some_attribute); } If function_that_returns_object() returns a null pointer, the if evaluates my_object as false, otherwise it evaluates as true. The same can be done with other types, eg: int value; if ((value = SomeClass.function_that_returns_int()) == 12345) { // do something with this value if it matches } | 5 | 9 |
68,848,991 | 2021-8-19 | https://stackoverflow.com/questions/68848991/in-python-how-are-triple-quotes-considered-comments-by-the-ide | My CS teacher told me that """ triple quotations are used as comments, yet I learned them as strings with line-breaks and indentations. This got me thinking - does Python completely ignore triple-quoted lines outside of relevant statements? """is this completely ignored like a comment""" or, is the computer actually considering this? | Triple-quoted strings are used as comments by many developers, but they are actually not comments; they are similar to regular strings in Python, except that they allow the string to span multiple lines. You will find no official reference for triple-quoted strings being a comment. In Python, there is only one type of comment: it starts with a hash # and can contain only a single line of text. According to PEP 257, a triple-quoted string can however be used as a docstring, which is again not really a comment. def foo(): """ Developer friendly text for describing the purpose of function Some test cases used by different unit testing libraries """ ... # body of the function You can just assign them to a variable as you do with single quoted strings: x = """a multi-line text enclosed by triple quotes """ Furthermore, if you try it in the REPL, a triple-quoted string gets printed; had it really been a comment, would it have been printed?: >>> #comment >>> """triple quoted""" 'triple quoted' | 21 | 22 |
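A tiny sketch contrasting a docstring with a bare triple-quoted string inside a function body: both are real string objects, but only the first statement is kept as __doc__.

def f():
    """I become f.__doc__ because I am the first statement."""
    """I am an ordinary string expression: evaluated, then discarded."""
    return 1

print(f.__doc__)   # only the first string is retained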
68,912,915 | 2021-8-24 | https://stackoverflow.com/questions/68912915/vscode-python-automatically-implementing-abstract-methods | Is there any support for automatically implementing all abstract methods of an abstract class in VSCode with a Python environment? from abc import ABC, abstractmethod class AbstractClass(ABC): @abstractmethod def abstract_method(): pass class NonAbstractClass(AbstractClass): # shortcut in vscode to implement all abstract methods # it works if I start writing methods and then it autocompletes, but that is not what I am looking for | Seems like it is now (2024-04-17) in pre-release. Check out the release notes. | 9 | 1 |
68,885,950 | 2021-8-22 | https://stackoverflow.com/questions/68885950/how-to-pass-the-script-path-to-run-magic-command-as-a-variable-in-databricks-no | I want to run a notebook in databricks from another notebook using %run. Also I want to be able to send the path of the notebook that I'm running to the main notebook as a parameter. The reason for not using dbutils.notebook.run is that I'm storing nested dictionaries in the notebook that's called and I wanna use them in the main notebook. I'm looking for Something like: path = "/References/parameterDefinition/schemaRepository" %run <path variable> | Unfortunately it's impossible to pass the path in %run as variable. You can pass variable as parameter only, and it's possible only in combination with with widgets - you can see the example in this answer. In this case you can have all your definitions in one notebook, and depending on the passed variable you can redefine the dictionary. There will be a new functionality coming in the next months (approximately, see public roadmap webinar for more details) that will allow to import notebooks as libraries using the import statement. Potentially you can emulate the same functionality by exporting the notebook into the file on disk using the Export command of Workspace API, decoding the data & importing file's content, for example, if you have notebook called module1 with content my_cool_dict = {"key1": "abc", "key2": 123} then you can import it as following: import requests import base64 import os api_url = dbutils.notebook.entry_point.getDbutils().notebook().getContext().apiUrl().get() host_token = "your_PAT_token" path = "/Users/..../module1" # fetch notebook response = requests.get(f"{api_url}/api/2.0/workspace/export", json = {"format": "SOURCE", "path": path}, headers={"Authorization": f"Bearer {host_token}"} ).json() # decode base64 encoded content data = base64.b64decode(response["content"].encode("ascii")) # write the file & __init__.py, so directory will considered a module dir = os.path.join("/tmp","my_modules") if not os.path.exists(dir): os.mkdir(dir) with open(os.path.join(dir, os.path.split(path)[-1]+".py"), "wb") as f: f.write(data) with open(os.path.join(dir, "__init__.py"), "wb") as f: f.write("\n".encode("ascii")) # add our directory into system path import sys sys.path.append(dir) # import notebook from module1 import my_cool_dict and see that we got our variable: | 14 | 1 |
68,901,049 | 2021-8-24 | https://stackoverflow.com/questions/68901049/copying-the-docstring-of-function-onto-another-function-by-name | I'm looking to copy the docstring of a function in the same file by name (with a decorator). I can easily do it with a function that is out of the current module, but I'm a bit confused when it comes to the same module (or the same class more specifically) Here's what I have so far: import inspect def copy_doc(func_name: str): def wrapper(func): doc = ... # get doc from function that has the name as func_name func.__doc__ = doc return func retun wrapper I'm looking for something that can do two of these examples: Ex 1: def this() -> None: """Fun doc string""" return @copy_doc('this') def that() -> None: return print(that.__doc__) Ex 2: class This: def foo(self) -> None: """Fun doc string""" return None @copy_doc('foo') def bar(self) -> None: return None print(This().bar.__doc__) Any fun ideas? | After some testing and experimentation, I learned you could directly reference the function in a given class. * Note ParamSpec and TypeVar are to keep the correct signature of the wrapped function, you can remove all the annotations if you do not need them. from typing import Callable, TypeVar, Any, TypeAlias from typing_extensions import ParamSpec T = TypeVar('T') P = ParamSpec('P') WrappedFuncDeco: TypeAlias = Callable[[Callable[P, T]], Callable[P, T]] def copy_doc(copy_func: Callable[..., Any]) -> WrappedFuncDeco[P, T]: """Copies the doc string of the given function to another. This function is intended to be used as a decorator. .. code-block:: python3 def foo(): '''This is a foo doc string''' ... @copy_doc(foo) def bar(): ... """ def wrapped(func: Callable[P, T]) -> Callable[P, T]: func.__doc__ = copy_func.__doc__ return func return wrapped class Cas: def foo(self) -> None: """This is the foo doc string.""" return @copy_doc(foo) def bar(self) -> None: return print(Cas.bar.__doc__) # >>> This is the foo doc string. | 8 | 11 |
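The accepted answer passes the function object itself; if the lookup really has to happen by name, as the question asks, one fragile option is to inspect the caller's namespace at decoration time. A sketch only, with the caveat that frame inspection is implementation-dependent and not part of the original answer:

import inspect

def copy_doc(func_name: str):
    # Snapshot of the namespace where the decorator is applied: module globals
    # for the module-level example, the class body under construction for the class example.
    caller_ns = inspect.stack()[1].frame.f_locals

    def wrapper(func):
        source = caller_ns.get(func_name) or func.__globals__.get(func_name)
        if source is not None:
            func.__doc__ = source.__doc__
        return func
    return wrapper

def this() -> None:
    """Fun doc string"""

@copy_doc('this')
def that() -> None:
    pass

print(that.__doc__)   # Fun doc string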
68,826,941 | 2021-8-18 | https://stackoverflow.com/questions/68826941/python-coverage-for-async-methods | I use aiohttp, pytest and pytest-cov to get coverage of my code. I want to get better coverage, but now I am a little bit stuck, because even simple code does not show 100% coverage. For example this piece of code: @session_decorator() async def healthcheck(session, request): await session.scalar("select pg_is_in_recovery();") return web.json_response({"status": "ok"}) In the coverage report it shows that the line with return is not covered. I have read this link Code coverage for async methods. But this is for C# and I cannot understand this part: This can happen most commonly if the operation you're awaiting is completed before it's awaited. My code to run tests: python3.9 -m pytest -vv --cov --cov-report xml --cov-report term-missing My test for the code above: async def test_healthz_page(test_client): response = await test_client.get("/healthz") assert response.status == HTTPStatus.OK | I had the same issue when testing FastAPI code using asyncio. The fix is to create or edit a .coveragerc at the root of your project with the following content: [run] concurrency = gevent If you use a pyproject.toml you can also include this section in that file instead: [tool.coverage.run] concurrency = ["gevent"] In addition, you have to pip install gevent. If using Poetry, run poetry add --group dev gevent. If the above doesn't work, try using concurrency = thread,gevent instead (see this comment. Note it says to use --concurrency but this option is not available when you use pytest-cov; you must use .coveragerc). | 12 | 7 |
68,888,941 | 2021-8-23 | https://stackoverflow.com/questions/68888941/keyerror-received-unregistered-task-of-type-on-celery-while-task-is-registere | I'm a bit new to celery configs. I have a task named myapp.tasks.my_task for example. I can see myapp.tasks.my_task in the registered tasks of celery when I use celery inspect registered. Doesn't that mean the task is successfully registered? Why does it raise the following error for it: KeyError celery.worker.consumer.consumer in on_task_received Received unregistered task of type 'my_app.tasks.my_task'. The message has been ignored and discarded. Did you remember to import the module containing this task? Or maybe you're using relative imports? Please see http://docs.celeryq.org/en/latest/internals/protocol.html for more information. The full contents of the message body was: '[[], {}, {"callbacks": null, "errbacks": null, "chain": null, "chord": null}]' (77b) There are also other tasks in my_app.tasks and they work correctly, but only this task does not work and gets the KeyError: @shared_task(queue='celery') def other_task(): """ WORKS """ ... @shared_task(queue='celery') def my_task(): """ DOES NOT WORK """ ... | Back in the day, when I faced the problem, a senior solved it for me in a mushroom management way unfortunately (see more about this anti-pattern here). I came back to this problem recently to figure out the solution in our own project domain. As Niel pointed out in his/her solution, we were using celery_app.autodiscover_tasks() in our project, and in that case we should import my_task in the __init__.py of the tasks package like below. from .some_tasks_file import my_task Also, we used celery beat, and the task defined inside app.conf.beat_schedule must have the exact path to the function like below (even though the function is imported in __init__.py of the tasks package). app.conf.beat_schedule = { 'MY_TASK': { 'task': 'myapp.tasks.some_tasks_file.my_task', 'schedule': 60, # every minute }, } Hope this helps people with the same celery configuration and problem. | 10 | 5 |
68,836,551 | 2021-8-18 | https://stackoverflow.com/questions/68836551/keras-attributeerror-sequential-object-has-no-attribute-predict-classes | I'm attempting to find model performance metrics (F1 score, accuracy, recall) following this guide https://machinelearningmastery.com/how-to-calculate-precision-recall-f1-and-more-for-deep-learning-models/ This exact code was working a few months ago but is now returning all sorts of errors, which is very confusing since I haven't changed one character of this code. Maybe a package update has changed things? I fit the sequential model with model.fit, then used model.evaluate to find test accuracy. Now I am attempting to use model.predict_classes to make class predictions (the model is a multi-class classifier). Code shown below: model = Sequential() model.add(Dense(24, input_dim=13, activation='relu')) model.add(Dense(18, activation='relu')) model.add(Dense(6, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) - history = model.fit(X_train, y_train, batch_size = 256, epochs = 10, verbose = 2, validation_split = 0.2) - score, acc = model.evaluate(X_test, y_test, verbose=2, batch_size= 256) print('test accuracy:', acc) - yhat_classes = model.predict_classes(X_test) The last line returns the error "AttributeError: 'Sequential' object has no attribute 'predict_classes'". This exact code was working not long ago, so I'm struggling a bit; thanks for any help. | This function was removed in TensorFlow version 2.6. According to the keras in rstudio reference, update to predict_x=model.predict(X_test) classes_x=np.argmax(predict_x,axis=1) Or use TensorFlow 2.5.x. If you are using TensorFlow version 2.5, you will receive the following warning: tensorflow\python\keras\engine\sequential.py:455: UserWarning: model.predict_classes() is deprecated and will be removed after 2021-01-01. Please use instead: * np.argmax(model.predict(x), axis=-1), if your model does multi-class classification (e.g. if it uses a softmax last-layer activation). * (model.predict(x) > 0.5).astype("int32"), if your model does binary classification (e.g. if it uses a sigmoid last-layer activation). | 78 | 124 |
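Since the linked guide is ultimately about precision/recall/F1, here is a sketch of how those metrics can be computed with the replacement API; model, X_test and y_test are the objects from the question, and y_test is assumed to be one-hot encoded, as categorical_crossentropy suggests.

import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

probs = model.predict(X_test)              # class probabilities
yhat_classes = np.argmax(probs, axis=1)    # replacement for predict_classes
y_true = np.argmax(y_test, axis=1)         # back from one-hot to integer labels

print('accuracy :', accuracy_score(y_true, yhat_classes))
print('precision:', precision_score(y_true, yhat_classes, average='macro'))
print('recall   :', recall_score(y_true, yhat_classes, average='macro'))
print('f1 score :', f1_score(y_true, yhat_classes, average='macro'))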
68,882,603 | 2021-8-22 | https://stackoverflow.com/questions/68882603/using-python-poetry-to-publish-to-test-pypi-org | I have been investigating the use of Poetry to publish Python projects. I wanted to test the publishing process using a trivial project similar to the Python Packaging Authority tutorial. Since this is a trivial project, I want to publish it to the test instance of pypi rather than the real instance. Test.pypi requires a token to publish, but I can't figure out how to make Poetry use my test pypi token. All the documentation I can find uses HTTP basic authentication for test-pypi which no longer works. I added the repository using this command: poetry config.repositories.test-pypi https://test.pypi.org I have tried creating tokens using both the following commands: poetry config pypi-token.test-pypi my-token poetry config test-pypi-token.test-pypi my-token I don't find a good explanation of the syntax for adding tokens in the poetry documentation, so any help will be appreciated. | I've successfully used tokens and poetry to upload to PyPI and TestPyPI. I believe you just need to change the TestPyPI URL you are configuring by appending /legacy/: poetry config repositories.test-pypi https://test.pypi.org/legacy/ You can then create your token as you were doing previously: poetry config pypi-token.test-pypi <your-token> https://test.pypi.org/legacy/ is the API endpoint for uploading packages. It's a bit hidden in the documentation but it is mentioned here that that is the URL you should use. Also note that the name succeeding the period in repositories. and pypi-token. is what needs to match which is why we have specified: repositories.test-pypi and pypi-token.test-pypi | 25 | 37 |
68,844,666 | 2021-8-19 | https://stackoverflow.com/questions/68844666/github-action-is-being-killed | I'm running a little Python project to collect data. It's being triggered by a scheduled GitHub Action script (every midnight). As part of expanding the project I've added the pycaret library to the project. So currently installing the requirements for the project takes about 15 minutes, plus running the Python project is another 10 minutes. But the interesting part is that now the action/job is being killed with: /home/runner/work/_temp/bad86621-8542-4ea5-ae93-6f59b7ee2463.sh: line 1: 4382 Killed python main.py Error: Process completed with exit code 137. Now I've tried looking up the reason for the process being killed, but I have found nothing in GitHub Actions. I'm running the job on an ubuntu-latest machine in GitHub Actions. I've set the job timeout to 60 minutes, so I don't think that is the issue. | Error 137 indicates that the container (runner/build agent) that builds your project received SIGKILL and terminated. It can be initiated manually or by the host machine when the runner exceeds its allocated memory limit. In your case, since it is initiated by GitHub itself, it is generally due to running out of memory. (P.S. This is a very late answer, but it may help some folks) | 9 | 9 |
68,896,173 | 2021-8-23 | https://stackoverflow.com/questions/68896173/issue-caching-python-dependencies-in-github-actions | I have the following steps in a github action: steps: - name: Check out repository code uses: actions/checkout@v2 - name: Cache dependencies id: pip-cache uses: actions/cache@v2 with: path: ~.cache/pip key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }} restore-keys: | ${{ runner.os }}-pip- - name: Install dependencies if: steps.pip-cache.outputs.cache-hit != 'true' run: pip install -r requirements.txt - name: run mypy run: mypy . The caching works fine, but when a cache hit occurs, and I try to run mypy, it fails with: Run mypy . /home/runner/work/_temp/9887df5b-d5cc-46d7-90e1-b884d8c49272.sh: line 1: mypy: command not found Error: Process completed with exit code 127. The whole point of caching dependencies is so I don't have to install them every time I run the workflow. How do I use the cached dependencies? | You're only caching source tarballs and binary wheels downloaded by pip. You're not caching: Installed Python packages (i.e., the site-packages/ subdirectory of the active Python interpreter). Installed entry points (i.e., executable commands residing in the current ${PATH}). That isn't necessarily a bad thing. Merely downloading assets tends to consume a disproportionate share of scarce GitHub Actions (GA) minutes; caching assets trivially alleviates that issue. In other words, remove the if: steps.pip-cache.outputs.cache-hit != 'true' line to restore your GitHub Actions (GA) workflow to sanity. But... I Want to Cache Installed Packages! Challenge accepted. This is feasible – albeit more fragile. I'd advise just caching pip downloads unless you've profiled the pip install command to be a significant installation bottleneck. Let's assume that you still want to do this. In this case, something resembling the following snippet should get you where you want to go: - uses: 'actions/setup-python@v2' with: # CAUTION: Replace this hardcoded "3.7" string with the # major and minor version of your desired Python interpreter. python-version: "3.7" - uses: 'actions/cache@v2' id: cache with: # CAUTION: Replace this hardcoded "python3.7" dirname with # the dirname providing your desired Python interpreter. path: ${{ env.pythonLocation }}/lib/python3.7/site-packages/* key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }} restore-keys: | ${{ runner.os }}-pip- ${{ runner.os }}- As noted, you'll want to manually replace the hardcoded 3.7 and python3.7 substrings above with something specific to your use case. That's one of several reasons why this is dangerously fragile. Another is that the ${{ env.pythonLocation }} environment variable set by the setup-python GitHub Action has been infamously undocumented for several years. </gulp> (env.pythonLocation documentation was added from v4.0.0.) In theory, adding the above directly under your existing uses: actions/cache@v2 list item should suffice. Good luck and Godspeed as you travel into the terrifying unknown. | 5 | 8 |
68,860,879 | 2021-8-20 | https://stackoverflow.com/questions/68860879/vscode-keras-intellisensesuggestion-not-working-properly | IntelliSense works fine on the import statement, but when it comes to chained methods and attributes, it shows different suggestions. The Python & Pylance extensions are installed. | From this issue on GitHub: try adding this to the bottom of your tensorflow/__init__.py (in .venv/Lib/site-packages/tensorflow for me) # Explicitly import lazy-loaded modules to support autocompletion. # pylint: disable=g-import-not-at-top if _typing.TYPE_CHECKING: from tensorflow_estimator.python.estimator.api._v2 import estimator as estimator from keras.api._v2 import keras from keras.api._v2.keras import losses from keras.api._v2.keras import metrics from keras.api._v2.keras import optimizers from keras.api._v2.keras import initializers # pylint: enable=g-import-not-at-top The problem is that keras is a special class that enables lazy loading and not a normal module. Edit: With updates to tf, vscode, or something else I'm not having this issue and don't need to use the above fix anymore. I just have to use keras = tf.keras instead of from tensorflow import keras and I have IntelliSense working now. | 7 | 14 |
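A minimal illustration of the workaround mentioned in the edit: bind keras explicitly from the imported tensorflow module instead of relying on the lazy-loaded from tensorflow import keras form.

import tensorflow as tf

keras = tf.keras   # instead of: from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(10, activation='relu'),
    keras.layers.Dense(1),
])
print(type(model))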
68,900,763 | 2021-8-24 | https://stackoverflow.com/questions/68900763/how-to-update-pandas-dataframe-drop-for-future-warning-all-arguments-of-data | The following code: df = df.drop('market', 1) generates the warning: FutureWarning: In a future version of pandas all arguments of DataFrame.drop except for the argument 'labels' will be keyword-only market is the column we want to drop, and we pass the 1 as a second parameter for axis (0 for index, 1 for columns, so we pass 1). How can we change this line of code now so that it is not a problem in the future version of pandas / to resolve the warning message now? | From the documentation, pandas.DataFrame.drop has the following parameters: Parameters labels: single label or list-like Index or column labels to drop. axis: {0 or ‘index’, 1 or ‘columns’}, default 0 Whether to drop labels from the index (0 or ‘index’) or columns (1 or ‘columns’). index: single label or list-like Alternative to specifying axis (labels, axis=0 is equivalent to index=labels). columns: single label or list-like Alternative to specifying axis (labels, axis=1 is equivalent to columns=labels). level: int or level name, optional For MultiIndex, level from which the labels will be removed. inplace: bool, default False If False, return a copy. Otherwise, do operation inplace and return None. errors: {‘ignore’, ‘raise’}, default ‘raise’ If ‘ignore’, suppress error and only existing labels are dropped. Moving forward, only labels (the first parameter) can be positional. So, for this example, the drop code should be as follows: df = df.drop('market', axis=1) or (more legibly) with columns: df = df.drop(columns='market') | 42 | 47 |
68,849,673 | 2021-8-19 | https://stackoverflow.com/questions/68849673/importing-numpy-shows-warning-when-running-in-mod-wsgi | I am running a Flask application in Apache using mod_wsgi. When I try to import numpy, I get the following warning: /usr/local/lib/python3.8/dist-packages/scipy/__init__.py:67: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. Should I do anything to address this warning? | Following the information here, you can eliminate the warning (and prevent potential problems) by adding WSGIApplicationGroup %{GLOBAL} to your httpd.conf. | 11 | 16 |
68,826,091 | 2021-8-18 | https://stackoverflow.com/questions/68826091/the-specified-device-is-not-open-or-is-not-recognized-by-mci | I was programming a game using Python and a sound effect needed to be played, so I used the playsound module: from playsound import playsound playsound("Typing.wav", False) And when I attempted to run the program, this error was returned: Error 263 for command: open Typing.wav The specified device is not open or is not recognized by MCI. I did some research and some sources indicated that it was an issue with my sound drivers. I updated & reinstalled them, but the issue persists. Is there any way to solve this? | I faced this problem too. At first, as mentioned in the previous comments, I downgraded my Python version from 3.10 to 3.7, and yet the problem persisted. What actually worked: the recent versions of playsound are giving such errors, so to fix this, run the following commands in cmd as admin pip uninstall playsound pip install playsound==1.2.2 and this should do the trick. Just in case that doesn't work, try downgrading your Python version to 3.7, run these commands again, and that should be good. | 24 | 63 |
68,893,521 | 2021-8-23 | https://stackoverflow.com/questions/68893521/simple-example-of-pandas-extensionarray | It seems to me that Pandas ExtensionArrays would be one of the cases where a simple example to get one started would really help. However, I have not found a simple enough example anywhere. Creating an ExtensionArray To create an ExtensionArray, you need to Create an ExtensionDtype and register it Create an ExtensionArray by implementing the required methods. There is also a section in the Pandas documentation with a brief overview. Example implementations There are many examples of implementations: Pandas' own internal extension arrays Geopandas' GeometryArray Pandas documentation has a list of projects with extension data types e.g. CyberPandas' IPArray Many others around the web, for example Fletcher's StringSupportingExtensionArray Question Despite having studied all of the above, I still find extension arrays difficult to understand. All of the examples have a lot of specifics and custom functionality that makes it difficult to work out what is actually necessary. I suspect many have faced a similar problem. I am thus asking for a simple and minimal example of a working ExtensionArray. The class should pass all the tests Pandas have provided to check that the ExtensionArray behaves as expected. I've provided an example implementation of the tests below. To have a concrete example, let's say I want to extend ExtensionArray to obtain an integer array that is able to hold NA values. That is essentially IntegerArray, but stripped of any actual functionality beyond the basics of ExtensionArray. Testing the solution I have used the following fixtures & tests to test the validity of the solution. These are based on the directions in the Pandas documentation import operator import numpy as np from pandas import Series import pytest from pandas.tests.extension.base.casting import BaseCastingTests # noqa from pandas.tests.extension.base.constructors import BaseConstructorsTests # noqa from pandas.tests.extension.base.dtype import BaseDtypeTests # noqa from pandas.tests.extension.base.getitem import BaseGetitemTests # noqa from pandas.tests.extension.base.groupby import BaseGroupbyTests # noqa from pandas.tests.extension.base.interface import BaseInterfaceTests # noqa from pandas.tests.extension.base.io import BaseParsingTests # noqa from pandas.tests.extension.base.methods import BaseMethodsTests # noqa from pandas.tests.extension.base.missing import BaseMissingTests # noqa from pandas.tests.extension.base.ops import ( # noqa BaseArithmeticOpsTests, BaseComparisonOpsTests, BaseOpsUtil, BaseUnaryOpsTests, ) from pandas.tests.extension.base.printing import BasePrintingTests # noqa from pandas.tests.extension.base.reduce import ( # noqa BaseBooleanReduceTests, BaseNoReduceTests, BaseNumericReduceTests, ) from pandas.tests.extension.base.reshaping import BaseReshapingTests # noqa from pandas.tests.extension.base.setitem import BaseSetitemTests # noqa from .extension import NullableIntArray @pytest.fixture def dtype(): """A fixture providing the ExtensionDtype to validate.""" return 'NullableInt' @pytest.fixture def data(): """ Length-100 array for this type. 
* data[0] and data[1] should both be non missing * data[0] and data[1] should not be equal """ return NullableIntArray(np.array(list(range(100)))) @pytest.fixture def data_for_twos(): """Length-100 array in which all the elements are two.""" return NullableIntArray(np.array([2] * 2)) @pytest.fixture def data_missing(): """Length-2 array with [NA, Valid]""" return NullableIntArray(np.array([np.nan, 2])) @pytest.fixture(params=["data", "data_missing"]) def all_data(request, data, data_missing): """Parametrized fixture giving 'data' and 'data_missing'""" if request.param == "data": return data elif request.param == "data_missing": return data_missing @pytest.fixture def data_repeated(data): """ Generate many datasets. Parameters ---------- data : fixture implementing `data` Returns ------- Callable[[int], Generator]: A callable that takes a `count` argument and returns a generator yielding `count` datasets. """ def gen(count): for _ in range(count): yield data return gen @pytest.fixture def data_for_sorting(): """ Length-3 array with a known sort order. This should be three items [B, C, A] with A < B < C """ return NullableIntArray(np.array([2, 3, 1])) @pytest.fixture def data_missing_for_sorting(): """ Length-3 array with a known sort order. This should be three items [B, NA, A] with A < B and NA missing. """ return NullableIntArray(np.array([2, np.nan, 1])) @pytest.fixture def na_cmp(): """ Binary operator for comparing NA values. Should return a function of two arguments that returns True if both arguments are (scalar) NA for your type. By default, uses ``operator.is_`` """ return operator.is_ @pytest.fixture def na_value(): """The scalar missing value for this type. Default 'None'""" return np.nan @pytest.fixture def data_for_grouping(): """ Data for factorization, grouping, and unique tests. Expected to be like [B, B, NA, NA, A, A, B, C] Where A < B < C and NA is missing """ return NullableIntArray(np.array([2, 2, np.nan, np.nan, 1, 1, 2, 3])) @pytest.fixture(params=[True, False]) def box_in_series(request): """Whether to box the data in a Series""" return request.param @pytest.fixture( params=[ lambda x: 1, lambda x: [1] * len(x), lambda x: Series([1] * len(x)), lambda x: x, ], ids=["scalar", "list", "series", "object"], ) def groupby_apply_op(request): """ Functions to test groupby.apply(). """ return request.param @pytest.fixture(params=[True, False]) def as_frame(request): """ Boolean fixture to support Series and Series.to_frame() comparison testing. """ return request.param @pytest.fixture(params=[True, False]) def as_series(request): """ Boolean fixture to support arr and Series(arr) comparison testing. """ return request.param @pytest.fixture(params=[True, False]) def use_numpy(request): """ Boolean fixture to support comparison testing of ExtensionDtype array and numpy array. """ return request.param @pytest.fixture(params=["ffill", "bfill"]) def fillna_method(request): """ Parametrized fixture giving method parameters 'ffill' and 'bfill' for Series.fillna(method=<method>) testing. """ return request.param @pytest.fixture(params=[True, False]) def as_array(request): """ Boolean fixture to support ExtensionDtype _from_sequence method testing. 
""" return request.param class TestCastingTests(BaseCastingTests): pass class TestConstructorsTests(BaseConstructorsTests): pass class TestDtypeTests(BaseDtypeTests): pass class TestGetitemTests(BaseGetitemTests): pass class TestGroupbyTests(BaseGroupbyTests): pass class TestInterfaceTests(BaseInterfaceTests): pass class TestParsingTests(BaseParsingTests): pass class TestMethodsTests(BaseMethodsTests): pass class TestMissingTests(BaseMissingTests): pass class TestArithmeticOpsTests(BaseArithmeticOpsTests): pass class TestComparisonOpsTests(BaseComparisonOpsTests): pass class TestOpsUtil(BaseOpsUtil): pass class TestUnaryOpsTests(BaseUnaryOpsTests): pass class TestPrintingTests(BasePrintingTests): pass class TestBooleanReduceTests(BaseBooleanReduceTests): pass class TestNoReduceTests(BaseNoReduceTests): pass class TestNumericReduceTests(BaseNumericReduceTests): pass class TestReshapingTests(BaseReshapingTests): pass class TestSetitemTests(BaseSetitemTests): pass | Update 2021-09-19 There were too many issues trying to get NullableIntArray to pass the test suite, so I've created a new example (AngleDtype + AngleArray) that currently passes 398 tests (fails 2). 0. Usage (pandas 1.3.2, numpy 1.20.2, python 3.9.2) AngleArray stores either radians or degrees depending on its unit (represented by AngleDtype): thetas = [0, np.pi, 2 * np.pi] a = AngleArray(thetas, unit='rad') # <AngleArray> # [0.0, 3.141592653589793, 6.283185307179586] # Length: 3, dtype: angle[rad] a = a.asunit('deg') # <AngleArray> # [0.0, 180.0, 360.0] # Length: 3, dtype: angle[deg] AngleArray can be stored in a Series or DataFrame: s = pd.Series(a) # 0 0.0 # 1 180.0 # 2 360.0 # dtype: angle[deg] df = pd.DataFrame({'a': s, 'b': AngleArray(thetas[::-1])}) # a b # 0 0.0 6.283185307179586 # 1 180.0 3.141592653589793 # 2 360.0 0.0 df['a'] # 0 0.0 # 1 180.0 # 2 360.0 # Name: a, dtype: angle[deg] df['b'] # 0 6.283185307179586 # 1 3.141592653589793 # 2 0.0 # Name: b, dtype: angle[rad] AngleArray computations are unit-aware: df['a + b'] = df['a'] + df['b'] # a b a + b # 0 0.0 6.283185307179586 360.0 # 1 180.0 3.141592653589793 360.0 # 2 360.0 0.0 360.0 df['a + b'] # 0 360.0 # 1 360.0 # 2 360.0 # Name: a + b, dtype: angle[deg] 1. AngleDtype For every ExtensionDtype, 3 methods must be implemented concretely: type name construct_array_type For a parameterized ExtensionDtype (e.g., AngleDtype.unit or PeriodDtype.freq): _metadata is required construct_from_string is recommended For the test suite: __hash__ __eq__ __setstate__ from __future__ import annotations import operator import re from typing import Any, Sequence import numpy as np import pandas as pd @pd.api.extensions.register_extension_dtype class AngleDtype(pd.core.dtypes.dtypes.PandasExtensionDtype): """ An ExtensionDtype for unit-aware angular data. 
""" # Required for all parameterized dtypes _metadata = ('unit',) _match = re.compile(r'(A|a)ngle\[(?P<unit>.+)\]') def __init__(self, unit=None): if unit is None: unit = 'rad' if unit not in ['rad', 'deg']: msg = f"'{type(self).__name__}' only supports 'rad' and 'deg' units" raise ValueError(msg) self._unit = unit def __str__(self) -> str: return f'angle[{self.unit}]' # TestDtypeTests def __hash__(self) -> int: return hash(str(self)) # TestDtypeTests def __eq__(self, other: Any) -> bool: if isinstance(other, str): return self.name == other else: return isinstance(other, type(self)) and self.unit == other.unit # Required for pickle compat (see GH26067) def __setstate__(self, state) -> None: self._unit = state['unit'] # Required for all ExtensionDtype subclasses @classmethod def construct_array_type(cls): """ Return the array type associated with this dtype. """ return AngleArray # Recommended for parameterized dtypes @classmethod def construct_from_string(cls, string: str) -> AngleDtype: """ Construct an AngleDtype from a string. Example ------- >>> AngleDtype.construct_from_string('angle[deg]') angle['deg'] """ if not isinstance(string, str): msg = f"'construct_from_string' expects a string, got {type(string)}" raise TypeError(msg) msg = f"Cannot construct a '{cls.__name__}' from '{string}'" match = cls._match.match(string) if match: d = match.groupdict() try: return cls(unit=d['unit']) except (KeyError, TypeError, ValueError) as err: raise TypeError(msg) from err else: raise TypeError(msg) # Required for all ExtensionDtype subclasses @property def type(self): """ The scalar type for the array (e.g., int). """ return np.generic # Required for all ExtensionDtype subclasses @property def name(self) -> str: """ A string representation of the dtype. """ return str(self) @property def unit(self) -> str: """ The angle unit. """ return self._unit 2. AngleArray For every ExtensionArray, 11 methods must be implemented concretely: _from_sequence _from_factorized __getitem__ __len__ __eq__ dtype nbytes isna take copy _concat_same_type For the test suite: Many more concrete methods are needed Whenever a test prompted me to add a new method, I marked it with a comment (though this is not a comprehensive mapping since most methods are required by multiple tests) class AngleArray(pd.api.extensions.ExtensionArray): """ An ExtensionArray for unit-aware angular data. """ # Include `copy` param for TestInterfaceTests def __init__(self, data, unit='rad', copy: bool=False): self._data = np.array(data, copy=copy) self._unit = unit # Required for all ExtensionArray subclasses def __getitem__(self, index: int) -> AngleArray | Any: """ Select a subset of self. """ if isinstance(index, int): return self._data[index] else: # Check index for TestGetitemTests index = pd.core.indexers.check_array_indexer(self, index) return type(self)(self._data[index]) # TestSetitemTests def __setitem__(self, index: int, value: np.generic) -> None: """ Set one or more values in-place. """ # Check index for TestSetitemTests index = pd.core.indexers.check_array_indexer(self, index) # Upcast to value's type (if needed) for TestMethodsTests if self._data.dtype < type(value): self._data = self._data.astype(type(value)) # TODO: Validate value for TestSetitemTests # value = self._validate_setitem_value(value) self._data[index] = value # Required for all ExtensionArray subclasses def __len__(self) -> int: """ Length of this array. 
""" return len(self._data) # TestUnaryOpsTests def __invert__(self) -> AngleArray: """ Element-wise inverse of this array. """ data = ~self._data return type(self)(data, unit=self.dtype.unit) def _ensure_same_units(self, other) -> AngleArray: """ Helper method to ensure `self` and `other` have the same units. """ if isinstance(other, type(self)) and self.dtype.unit != other.dtype.unit: return other.asunit(self.dtype.unit) else: return other def _apply_operator(self, op, other, recast=False) -> np.ndarray | AngleArray: """ Helper method to apply an operator `op` between `self` and `other`. Some ops require the result to be recast into AngleArray: * Comparison ops: recast=False * Arithmetic ops: recast=True """ f = operator.attrgetter(op) data, other = np.array(self), np.array(self._ensure_same_units(other)) result = f(data)(other) return result if not recast else type(self)(result, unit=self.dtype.unit) def _apply_operator_if_not_series(self, op, other, recast=False) -> np.ndarray | AngleArray: """ Wraps _apply_operator only if `other` is not Series/DataFrame. Some ops should return NotImplemented if `other` is a Series/DataFrame: https://github.com/pandas-dev/pandas/blob/e7e7b40722e421ef7e519c645d851452c70a7b7c/pandas/tests/extension/base/ops.py#L115 """ if isinstance(other, (pd.Series, pd.DataFrame)): return NotImplemented else: return self._apply_operator(op, other, recast=recast) # Required for all ExtensionArray subclasses @pd.core.ops.unpack_zerodim_and_defer('__eq__') def __eq__(self, other): return self._apply_operator('__eq__', other, recast=False) # TestComparisonOpsTests @pd.core.ops.unpack_zerodim_and_defer('__ne__') def __ne__(self, other): return self._apply_operator('__ne__', other, recast=False) # TestComparisonOpsTests @pd.core.ops.unpack_zerodim_and_defer('__lt__') def __lt__(self, other): return self._apply_operator('__lt__', other, recast=False) # TestComparisonOpsTests @pd.core.ops.unpack_zerodim_and_defer('__gt__') def __gt__(self, other): return self._apply_operator('__gt__', other, recast=False) # TestComparisonOpsTests @pd.core.ops.unpack_zerodim_and_defer('__le__') def __le__(self, other): return self._apply_operator('__le__', other, recast=False) # TestComparisonOpsTests @pd.core.ops.unpack_zerodim_and_defer('__ge__') def __ge__(self, other): return self._apply_operator('__ge__', other, recast=False) # TestArithmeticOpsTests @pd.core.ops.unpack_zerodim_and_defer('__add__') def __add__(self, other) -> AngleArray: return self._apply_operator_if_not_series('__add__', other, recast=True) # TestArithmeticOpsTests @pd.core.ops.unpack_zerodim_and_defer('__sub__') def __sub__(self, other) -> AngleArray: return self._apply_operator_if_not_series('__sub__', other, recast=True) # TestArithmeticOpsTests @pd.core.ops.unpack_zerodim_and_defer('__mul__') def __mul__(self, other) -> AngleArray: return self._apply_operator_if_not_series('__mul__', other, recast=True) # TestArithmeticOpsTests @pd.core.ops.unpack_zerodim_and_defer('__truediv__') def __truediv__(self, other) -> AngleArray: return self._apply_operator_if_not_series('__truediv__', other, recast=True) # Required for all ExtensionArray subclasses @classmethod def _from_sequence(cls, data, dtype=None, copy: bool=False): """ Construct a new AngleArray from a sequence of scalars. 
""" if dtype is None: dtype = AngleDtype() if not isinstance(dtype, AngleDtype): msg = f"'{cls.__name__}' only supports 'AngleDtype' dtype" raise ValueError(msg) else: return cls(data, unit=dtype.unit, copy=copy) # TestParsingTests @classmethod def _from_sequence_of_strings(cls, strings, *, dtype=None, copy: bool=False) -> AngleArray: """ Construct a new AngleArray from a sequence of strings. """ scalars = pd.to_numeric(strings, errors='raise') return cls._from_sequence(scalars, dtype=dtype, copy=copy) # Required for all ExtensionArray subclasses @classmethod def _from_factorized(cls, uniques: np.ndarray, original: AngleArray): """ Reconstruct an AngleArray after factorization. """ return cls(uniques, unit=original.dtype.unit) # Required for all ExtensionArray subclasses @classmethod def _concat_same_type(cls, to_concat: Sequence[AngleArray]) -> AngleArray: """ Concatenate multiple AngleArrays. """ # ensure same units counts = pd.value_counts([array.dtype.unit for array in to_concat]) unit = counts.index[0] if counts.size > 1: to_concat = [a.asunit(unit) for a in to_concat] return cls(np.concatenate(to_concat), unit=unit) # Required for all ExtensionArray subclasses @property def dtype(self): """ An instance of AngleDtype. """ return AngleDtype(self._unit) # Required for all ExtensionArray subclasses @property def nbytes(self) -> int: """ The number of bytes needed to store this object in memory. """ return self._data.nbytes @property def unit(self): return self.dtype.unit # Test*ReduceTests def all(self) -> bool: return all(self) def any(self) -> bool: # Test*ReduceTests return any(self) def sum(self) -> np.generic: # Test*ReduceTests return self._data.sum() def mean(self) -> np.generic: # Test*ReduceTests return self._data.mean() def max(self) -> np.generic: # Test*ReduceTests return self._data.max() def min(self) -> np.generic: # Test*ReduceTests return self._data.min() def prod(self) -> np.generic: # Test*ReduceTests return self._data.prod() def std(self) -> np.generic: # Test*ReduceTests return pd.Series(self._data).std() def var(self) -> np.generic: # Test*ReduceTests return pd.Series(self._data).var() def median(self) -> np.generic: # Test*ReduceTests return np.median(self._data) def skew(self) -> np.generic: # Test*ReduceTests return pd.Series(self._data).skew() def kurt(self) -> np.generic: # Test*ReduceTests return pd.Series(self._data).kurt() # Test*ReduceTests def _reduce(self, name: str, *, skipna: bool=True, **kwargs): """ Return a scalar result of performing the reduction operation. """ f = operator.attrgetter(name) return f(self)() # Required for all ExtensionArray subclasses def isna(self): """ A 1-D array indicating if each value is missing. """ return pd.isnull(self._data) # Required for all ExtensionArray subclasses def copy(self): """ Return a copy of the array. """ copied = self._data.copy() return type(self)(copied, unit=self.unit) # Required for all ExtensionArray subclasses def take(self, indices, allow_fill=False, fill_value=None): """ Take elements from an array. """ if allow_fill and fill_value is None: fill_value = self.dtype.na_value result = pd.core.algorithms.take(self._data, indices, allow_fill=allow_fill, fill_value=fill_value) return self._from_sequence(result) # TestMethodsTests def value_counts(self, dropna: bool=True): """ Return a Series containing descending counts of unique values (excludes NA values by default). 
""" return pd.core.algorithms.value_counts(self._data, dropna=dropna) def asunit(self, unit: str) -> AngleArray: """ Cast to an AngleDtype unit. """ if unit not in ['rad', 'deg']: msg = f"'{type(self.dtype).__name__}' only supports 'rad' and 'deg' units" raise ValueError(msg) elif self.dtype.unit == unit: return self else: rad2deg = self.dtype.unit == 'rad' and unit == 'deg' data = np.rad2deg(self._data) if rad2deg else np.deg2rad(self._data) return type(self)(data, unit) 3. pytest $ pytest tests.py ... 2 failed, 398 passed, 1 skipped, 1 xfailed in 3.95s There are two remaining test failures: TestMethodsTests.test_combine_le Currently this returns an AngleDtype Series of boolean values, but pandas wants the Series itself to be boolean (not sure how to resolve this without breaking other tests): pd.Series(a).combine(pd.Series(a), lambda x1, x2: x1 <= x2) TestSetitemTests.test_setitem_scalar_key_sequence_raise Currently this puts a[[0, 1]] into index 0, but pandas expects an error: a[0] = a[[0, 1]] Several of the pandas extension arrays use convoluted validation methods to catch these edge cases, e.g.: DatetimeLikeArrayMixin._validate_setitem_value DatetimeLikeArrayMixin._validate_listlike DatetimeLikeArrayMixin._validate_scalar import operator import numpy as np from pandas import Series import pytest from pandas.tests.extension.base.casting import BaseCastingTests # noqa from pandas.tests.extension.base.constructors import BaseConstructorsTests # noqa from pandas.tests.extension.base.dtype import BaseDtypeTests # noqa from pandas.tests.extension.base.getitem import BaseGetitemTests # noqa from pandas.tests.extension.base.groupby import BaseGroupbyTests # noqa from pandas.tests.extension.base.interface import BaseInterfaceTests # noqa from pandas.tests.extension.base.io import BaseParsingTests # noqa from pandas.tests.extension.base.methods import BaseMethodsTests # noqa from pandas.tests.extension.base.missing import BaseMissingTests # noqa from pandas.tests.extension.base.ops import ( # noqa BaseArithmeticOpsTests, BaseComparisonOpsTests, BaseOpsUtil, BaseUnaryOpsTests, ) from pandas.tests.extension.base.printing import BasePrintingTests # noqa from pandas.tests.extension.base.reduce import ( # noqa BaseBooleanReduceTests, BaseNoReduceTests, BaseNumericReduceTests, ) from pandas.tests.extension.base.reshaping import BaseReshapingTests # noqa from pandas.tests.extension.base.setitem import BaseSetitemTests # noqa from extension import AngleDtype, AngleArray @pytest.fixture def dtype(): """ A fixture providing the ExtensionDtype to validate. """ return AngleDtype() @pytest.fixture def data(): """ Length-100 array for this type. * data[0] and data[1] should both be non missing * data[0] and data[1] should not be equal """ return AngleArray(np.arange(100)) @pytest.fixture def data_for_twos(): """ Length-100 array in which all the elements are two. """ return AngleArray(np.array([2] * 100)) @pytest.fixture def data_missing(): """ Length-2 array with [NA, Valid]. """ return AngleArray(np.array([np.nan, 2])) @pytest.fixture(params=['data', 'data_missing']) def all_data(request, data, data_missing): """ Parameterized fixture giving 'data' and 'data_missing'. """ if request.param == 'data': return data elif request.param == 'data_missing': return data_missing @pytest.fixture def data_repeated(data): """ Generate many datasets. 
Parameters ---------- data : fixture implementing `data` Returns ------- Callable[[int], Generator]: A callable that takes a `count` argument and returns a generator yielding `count` datasets. """ def gen(count): for _ in range(count): yield data return gen @pytest.fixture def data_for_sorting(): """ Length-3 array with a known sort order. This should be three items [B, C, A] with A < B < C. """ return AngleArray(np.array([2, 3, 1])) @pytest.fixture def data_missing_for_sorting(): """ Length-3 array with a known sort order. This should be three items [B, NA, A] with A < B and NA missing. """ return AngleArray(np.array([2, np.nan, 1])) @pytest.fixture def na_cmp(): """ Binary operator for comparing NA values. Should return a function of two arguments that returns True if both arguments are (scalar) NA for your type. By default, uses ``operator.is_``. """ return lambda a, b: np.array_equal(a, b, equal_nan=True) @pytest.fixture def na_value(): """ The scalar missing value for this type. Default 'None'. """ return np.nan @pytest.fixture def data_for_grouping(): """ Data for factorization, grouping, and unique tests. Expected to be like [B, B, NA, NA, A, A, B, C] where A < B < C and NA is missing. """ return AngleArray(np.array([2, 2, np.nan, np.nan, 1, 1, 2, 3])) @pytest.fixture(params=[True, False]) def box_in_series(request): """ Whether to box the data in a Series. """ return request.param @pytest.fixture( params=[ lambda x: 1, lambda x: [1] * len(x), lambda x: Series([1] * len(x)), lambda x: x, ], ids=['scalar', 'list', 'series', 'object'], ) def groupby_apply_operator(request): """ Functions to test groupby.apply(). """ return request.param @pytest.fixture(params=[True, False]) def as_frame(request): """ Boolean fixture to support Series and Series.to_frame() comparison testing. """ return request.param @pytest.fixture(params=[True, False]) def as_series(request): """ Boolean fixture to support arr and Series(arr) comparison testing. """ return request.param @pytest.fixture(params=[True, False]) def use_numpy(request): """ Boolean fixture to support comparison testing of ExtensionDtype array and numpy array. """ return request.param @pytest.fixture(params=['ffill', 'bfill']) def fillna_method(request): """ Parameterized fixture giving method parameters 'ffill' and 'bfill' for Series.fillna(method=<method>) testing. """ return request.param @pytest.fixture(params=[True, False]) def as_array(request): """ Boolean fixture to support ExtensionDtype _from_sequence method testing. """ return request.param @pytest.fixture(params=[None, lambda x: x]) def sort_by_key(request): """ Simple fixture for testing keys in sorting methods. Tests None (no key) and the identity key. """ return request.param # TODO: Finish implementing all operators _all_arithmetic_operators = [ '__add__', # '__radd__', '__sub__', # '__rsub__', '__mul__', # '__rmul__', # '__floordiv__', # '__rfloordiv__', '__truediv__', # '__rtruediv__', # '__pow__', # '__rpow__', # '__mod__', # '__rmod__', ] @pytest.fixture(params=_all_arithmetic_operators) def all_arithmetic_operators(request): """ Fixture for dunder names for common arithmetic operations. """ return request.param _all_numeric_reductions = [ 'sum', 'max', 'min', 'mean', 'prod', 'std', 'var', 'median', 'kurt', 'skew', ] @pytest.fixture(params=_all_numeric_reductions) def all_numeric_reductions(request): """ Fixture for numeric reduction names. 
""" return request.param _all_boolean_reductions = ['all', 'any'] @pytest.fixture(params=_all_boolean_reductions) def all_boolean_reductions(request): """ Fixture for boolean reduction names. """ return request.param _all_reductions = _all_numeric_reductions + _all_boolean_reductions @pytest.fixture(params=_all_reductions) def all_reductions(request): """ Fixture for all (boolean + numeric) reduction names. """ return request.param _all_compare_operators = [ '__eq__', '__ne__', '__le__', '__lt__', '__ge__', '__gt__', ] @pytest.fixture(params=_all_compare_operators) def all_compare_operators(request): """ Fixture for dunder names for common compare operations: * >= * > * == * != * < * <= """ return request.param class TestCastingTests(BaseCastingTests): pass class TestConstructorsTests(BaseConstructorsTests): pass class TestDtypeTests(BaseDtypeTests): pass class TestGetitemTests(BaseGetitemTests): pass class TestGroupbyTests(BaseGroupbyTests): pass class TestInterfaceTests(BaseInterfaceTests): pass class TestParsingTests(BaseParsingTests): pass class TestMethodsTests(BaseMethodsTests): pass class TestMissingTests(BaseMissingTests): pass class TestArithmeticOpsTests(BaseArithmeticOpsTests): series_scalar_exc = None frame_scalar_exc = None series_array_exc = None divmod_exc = TypeError # TODO: Implement divmod class TestComparisonOpsTests(BaseComparisonOpsTests): # See pint-pandas test suite def _compare_other(self, s, data, op_name, other): op = self.get_op_from_name(op_name) result = op(s, other) expected = op(s.to_numpy(), other) assert (result == expected).all() class TestOpsUtil(BaseOpsUtil): pass class TestUnaryOpsTests(BaseUnaryOpsTests): pass class TestPrintingTests(BasePrintingTests): pass class TestBooleanReduceTests(BaseBooleanReduceTests): pass class TestNumericReduceTests(BaseNumericReduceTests): pass # AFAICT NoReduce and Boolean+NumericReduce are mutually exclusive # class TestNoReduceTests(BaseNoReduceTests): # pass class TestReshapingTests(BaseReshapingTests): pass class TestSetitemTests(BaseSetitemTests): pass | 20 | 34 |
68,906,112 | 2021-8-24 | https://stackoverflow.com/questions/68906112/how-to-get-an-exact-representation-of-floats-during-dataframe-to-json | I observed the following behavior with DataFrame.to_json: >>> df = pd.DataFrame([[eval(f'1.12345e-{i}') for i in range(8, 20)]]) >>> df 0 1 2 3 4 5 6 7 8 9 10 11 0 1.123450e-08 1.123450e-09 1.123450e-10 1.123450e-11 1.123450e-12 1.123450e-13 1.123450e-14 1.123450e-15 1.123450e-16 1.123450e-17 1.123450e-18 1.123450e-19 >>> print(df.to_json(indent=2, orient='index')) { "0":{ "0":0.0000000112, "1":0.0000000011, "2":0.0000000001, "3":0.0, "4":0.0, "5":0.0, "6":0.0, "7":0.0, "8":1.12345e-16, "9":1.12345e-17, "10":1.12345e-18, "11":1.12345e-19 } } So all numbers down to 1e-16 seem to be rounded to 10 decimal places (in agreement with the default value for double_precision) but all smaller values are represented exactly. Why is this the case and how can I turn off decimal rounding for the larger values too (i.e. using scientific notation instead)? >>> pd.__version__ '1.3.1' For reference, the standard library's json module doesn't do this: >>> import json >>> print(json.dumps([eval(f'1.12345e-{i}') for i in range(8, 20)], indent=2)) [ 1.12345e-08, 1.12345e-09, 1.12345e-10, 1.12345e-11, 1.12345e-12, 1.12345e-13, 1.12345e-14, 1.12345e-15, 1.12345e-16, 1.12345e-17, 1.12345e-18, 1.12345e-19 ] | I'm not sure on achieving this with pd.DataFrame.to_json, but we can use pd.DataFrame.to_dict, json, and pd.read_json to achieve a full precision json representation from a pandas dataframe. json_df = json.dumps(df.to_dict('index'), indent=2) >>> print(json_df) { "0": { "0": 1.12345e-08, "1": 1.12345e-09, "2": 1.12345e-10, "3": 1.12345e-11, "4": 1.12345e-12, "5": 1.12345e-13, "6": 1.12345e-14, "7": 1.12345e-15, "8": 1.12345e-16, "9": 1.12345e-17, "10": 1.12345e-18, "11": 1.12345e-19 } } To read it back in, we can then do: >>> pd.read_json(json_df, orient='index') 0 1 2 ... 9 10 11 0 1.123450e-08 1.123450e-09 1.123450e-10 ... 1.123450e-17 1.123450e-18 1.123450e-19 [1 rows x 12 columns] | 7 | 1 |
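A related option that is not in the answer above: if an exact round-trip is not strictly required, to_json's own double_precision argument can be raised (pandas caps it at 15), which at least avoids the aggressive default rounding to 10 decimal places. A small sketch:

import pandas as pd

df = pd.DataFrame([[float(f'1.12345e-{i}') for i in range(8, 20)]])
# double_precision defaults to 10 and is capped at 15, so this is still not
# an exact representation, but far fewer values collapse to 0.0.
print(df.to_json(indent=2, orient='index', double_precision=15))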
68,895,380 | 2021-8-23 | https://stackoverflow.com/questions/68895380/automated-legend-creation-for-3d-plot | I'm trying to update below function to report the clusters info via legend: color_names = ["red", "blue", "yellow", "black", "pink", "purple", "orange"] def plot_3d_transformed_data(df, title, colors="red"): ax = plt.figure(figsize=(12,10)).gca(projection='3d') #fig = plt.figure(figsize=(8, 8)) #ax = fig.add_subplot(111, projection='3d') if type(colors) is np.ndarray: for cname, class_label in zip(color_names, np.unique(colors)): X_color = df[colors == class_label] ax.scatter(X_color[:, 0], X_color[:, 1], X_color[:, 2], marker="x", c=cname, label=f"Cluster {class_label}" if type(colors) is np.ndarray else None) else: ax.scatter(df.Type, df.Length, df.Freq, alpha=0.6, c=colors, marker="x", label=str(clusterSizes) ) ax.set_xlabel("PC1: Type") ax.set_ylabel("PC2: Length") ax.set_zlabel("PC3: Frequency") ax.set_title(title) if type(colors) is np.ndarray: #ax.legend() plt.gca().legend() plt.legend(bbox_to_anchor=(1.04,1), loc="upper left") plt.show() So I call my function to visualize the clusters patterns by: plot_3d_transformed_data(pdf_km_pred, f'Clustering rare URL parameters for data of date: {DATE_FROM} \nMethod: KMeans over PCA \nn_clusters={n_clusters} , Distance_Measure={DistanceMeasure}', colors=pdf_km_pred.prediction_km) print(clusterSizes) Sadly I can't show the legend, and I have to print clusters members manually under the 3D plot. This is the output without legend with the following error: No handles with labels found to put in legend. I check this post, but I couldn't figure out what is the mistake in function to pass the cluster label list properly. I want to update the function so that I can demonstrate cluster labels via clusterSizes.index and their scale via clusterSizes.size Expected output: As here suggests better using legend_elements() to determine a useful number of legend entries to be shown and return a tuple of handles and labels automatically. Update: As I mentioned in the expected output should contain one legend for cluster labels and the other legend for cluster size (number of instances in each cluster). It might report this info via single legend too. Please see below example for 2D: | You need to save the reference to the first legend and add it to your ax as a separate artist before creating the second legend. That way, the second call to ax.legend(...) does not erase the first legend. For the second legend, I simply created a circle for each unique color and added it in. I forgot how to draw real circles, so instead I use a Line2D with lw=0, marker="o" which results in a circle. Play around with the legend's bbox_to_anchor and loc keywords to get a result that satisfies you. I got rid of everything relying on plt.<something> because it's the best way to forget which method is attached to which object. Now everything is in ax.<something> or fig.<something>. It's also the right approach for when you have several axes, or when you want to embed your canvas in a PyQt app. plt will not do what you expect there. The initial code is the one provided by @r-beginners and I simply built upon it. # Imports. import matplotlib as mpl import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D import pandas as pd import numpy as np # Figure. 
figure = plt.figure(figsize=(12, 10)) ax = figure.add_subplot(projection="3d") ax.set_xlabel("PC1: Type") ax.set_ylabel("PC2: Length") ax.set_zlabel("PC3: Frequency") ax.set_title("scatter 3D legend") # Data and 3D scatter. colors = ["red", "blue", "yellow", "black", "pink", "purple", "orange", "black", "red" ,"blue"] df = pd.DataFrame({"type": np.random.randint(0, 5, 10), "length": np.random.randint(0, 20, 10), "freq": np.random.randint(0, 10, 10), "size": np.random.randint(20, 200, 10), "colors": np.random.choice(colors, 10)}) sc = ax.scatter(df.type, df.length, df.freq, alpha=0.6, c=colors, s=df["size"], marker="o") # Legend 1. handles, labels = sc.legend_elements(prop="sizes", alpha=0.6) legend1 = ax.legend(handles, labels, bbox_to_anchor=(1, 1), loc="upper right", title="Sizes") ax.add_artist(legend1) # <- this is important. # Legend 2. unique_colors = set(colors) handles = [] labels = [] for n, color in enumerate(unique_colors, start=1): artist = mpl.lines.Line2D([], [], color=color, lw=0, marker="o") handles.append(artist) labels.append(str(n)) legend2 = ax.legend(handles, labels, bbox_to_anchor=(0.05, 0.05), loc="lower left", title="Classes") figure.show() Not related to the question: because of how markersize works for circles, one could use s = df["size"]**2 instead of s = df["size"]. | 5 | 0 |
68,887,729 | 2021-8-23 | https://stackoverflow.com/questions/68887729/vs-pylance-warning-import-module-could-not-be-resolved | Hi, I am getting the following warning (a squiggly line underneath imports): import "numpy" could not be resolved Pylance(reportMissingModuleSource). There are no issues with executing the code - it works fine, just the warning (squiggly line). In the following GitHub page, it states to change settings.json with the following line: "python.analysis.extraPaths": ["./sources"]. https://github.com/microsoft/pylance-release/blob/main/TROUBLESHOOTING.md#unresolved-import-warnings However, this didn't work. I also tried adding the path to the current directory followed by "sources" as shown in the image, but it didn't work either. I am opening VS Code from the entry point /home/imantha/workspace/python using bash with the code . command. Could anyone tell me how to add the correct path? | If I understand your problem correctly, your Python environment is properly set up (since you are able to run your code) but your IDE (VS Code) reports import errors. That is probably because your IDE does not know which Python environment to use for your current project (which seems to live somewhere in /home/imantha/workspace/python). You need to set it to get rid of this warning: https://code.visualstudio.com/docs/python/environments | 15 | 18 |
68,850,403 | 2021-8-19 | https://stackoverflow.com/questions/68850403/best-way-to-flatten-and-remap-orm-to-pydantic-model | I am using Pydantic with FastApi to output ORM data into JSON. I would like to flatten and remap the ORM model to eliminate an unnecessary level in the JSON. Here's a simplified example to illustrate the problem. original output: {"id": 1, "billing": [ {"id": 1, "order_id": 1, "first_name": "foo"}, {"id": 2, "order_id": 1, "first_name": "bar"} ] } desired output: {"id": 1, "name": ["foo", "bar"]} How to map values from nested dict to Pydantic Model? provides a solution that works for dictionaries by using the init function in the Pydantic model class. This example shows how that works with dictionaries: from pydantic import BaseModel # The following approach works with a dictionary as the input order_dict = {"id": 1, "billing": {"first_name": "foo"}} # desired output: {"id": 1, "name": "foo"} class Order_Model_For_Dict(BaseModel): id: int name: str = None class Config: orm_mode = True def __init__(self, **kwargs): print( "kwargs for dictionary:", kwargs ) # kwargs for dictionary: {'id': 1, 'billing': {'first_name': 'foo'}} kwargs["name"] = kwargs["billing"]["first_name"] super().__init__(**kwargs) print(Order_Model_For_Dict.parse_obj(order_dict)) # id=1 name='foo' (This script is complete, it should run "as is") However, when working with ORM objects, this approach does not work. It appears that the init function is not called. Here's an example which will not provide the desired output. from pydantic import BaseModel, root_validator from typing import List from sqlalchemy.orm import relationship from sqlalchemy import Column, Integer, String, ForeignKey from sqlalchemy.dialects.postgresql import ARRAY from sqlalchemy.ext.declarative import declarative_base Base = declarative_base() from pydantic.utils import GetterDict class BillingOrm(Base): __tablename__ = "billing" id = Column(Integer, primary_key=True, nullable=False) order_id = Column(ForeignKey("orders.id", ondelete="CASCADE"), nullable=False) first_name = Column(String(20)) class OrderOrm(Base): __tablename__ = "orders" id = Column(Integer, primary_key=True, nullable=False) billing = relationship("BillingOrm") class Billing(BaseModel): id: int order_id: int first_name: str class Config: orm_mode = True class Order(BaseModel): id: int name: List[str] = None # billing: List[Billing] # uncomment to verify the relationship is working class Config: orm_mode = True def __init__(self, **kwargs): # This __init__ function does not run when using from_orm to parse ORM object print("kwargs for orm:", kwargs) kwargs["name"] = kwargs["billing"]["first_name"] super().__init__(**kwargs) billing_orm_1 = BillingOrm(id=1, order_id=1, first_name="foo") billing_orm_2 = BillingOrm(id=2, order_id=1, first_name="bar") order_orm = OrderOrm(id=1) order_orm.billing.append(billing_orm_1) order_orm.billing.append(billing_orm_2) order_model = Order.from_orm(order_orm) # Output returns 'None' for name instead of ['foo','bar'] print(order_model) # id=1 name=None (This script is complete, it should run "as is") The output returns name=None instead of the desired list of names. In the above example, I am using Order.from_orm to create the Pydantic model. This approach seems to be the same that is used by FastApi when specifying a response model. 
The desired solution should support use in the FastApi response model as shown in this example: @router.get("/orders", response_model=List[schemas.Order]) async def list_orders(db: Session = Depends(get_db)): return get_orders(db) Update: Regarding MatsLindh comment to try validators, I replaced the init function with a root validator, however, I'm unable to mutate the return values to include a new attribute. I suspect this issue is because it is a ORM object and not a true dictionary. The following code will extract the names and print them in the desired list. However, I can't see how to include this updated result in the model response: @root_validator(pre=True) def flatten(cls, values): if isinstance(values, GetterDict): names = [ billing_entry.first_name for billing_entry in values.get("billing") ] print(names) # values["name"] = names # error: 'GetterDict' object does not support item assignment return values I also found a couple other discussions on this problem that led me to try this approach: https://github.com/samuelcolvin/pydantic/issues/717 https://gitmemory.com/issue/samuelcolvin/pydantic/821/744047672 | What if you override the from_orm class method? class Order(BaseModel): id: int name: List[str] = None billing: List[Billing] class Config: orm_mode = True @classmethod def from_orm(cls, obj: Any) -> 'Order': # `obj` is the orm model instance if hasattr(obj, 'billing'): obj.name = obj.billing.first_name return super().from_orm(obj) | 10 | 14 |
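One detail worth flagging about the accepted answer above: in the question's models, billing is a list relationship, so obj.billing.first_name presumably needs to become a comprehension to produce the flattened name list. A trimmed sketch under that assumption (the billing field is omitted here for brevity; pydantic v1 orm_mode is assumed):

from typing import Any, List, Optional
from pydantic import BaseModel

class Order(BaseModel):
    id: int
    name: Optional[List[str]] = None

    class Config:
        orm_mode = True

    @classmethod
    def from_orm(cls, obj: Any) -> "Order":
        # Collect first_name from every related BillingOrm row before the
        # normal orm_mode parsing runs.
        if hasattr(obj, "billing"):
            obj.name = [entry.first_name for entry in obj.billing]
        return super().from_orm(obj)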
68,848,055 | 2021-8-19 | https://stackoverflow.com/questions/68848055/pip-installing-a-whl-file-from-a-private-github-repository | How can one install a .whl (Python library) from a private GitHub repo? I have set up a personal access token and can install the library if it is not a .whl by using the following command: pip install git+https://{token}@github.com/{org_name}/{repo_name}.git However, if there is a .whl in the repo and I want to install from it using: pip install git+https://{token}@github.com/{org_name}/{repo_name}/blob/master/{name.whl} then I get the following error: TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType I am stumped! You can pip install {name.whl} if the file is local, but not from the private GitHub repo. Question: how do you pip install name.whl from a private GitHub repo? | You should be able to do pip install https://{token}@raw.githubusercontent.com/{user}/{repo}/master/{name.whl} | 12 | 9 |
68,876,869 | 2021-8-21 | https://stackoverflow.com/questions/68876869/sort-and-concatenate-the-dataframes | I have following two dataframes: >>> df1 c1 c2 v1 v2 0 A NaN 9 2 1 B NaN 2 5 2 C NaN 3 5 3 D NaN 4 2 >>> df2 c1 c2 v1 v2 0 A P 4 1 1 A T 3 1 2 A Y 2 0 3 B P 0 1 4 B T 2 2 5 B Y 0 2 6 C P 1 2 7 C T 1 2 8 C Y 1 1 9 D P 1 1 10 D T 2 0 11 D Y 1 1 I need to concatenate the dataframes and sort them or vice versa. The first dataframe needs to be sorted on v1 column, then the second dataframe needs to be sorted based on the order of the values from c1 column after sorting the first dataframe, and the v2 column from the second dataframe. A working version is something like this: sorting first dataframe on v1, then iterating the rows, and filtering the second dataframe for the value of c2 column, and sorting the filtered second dataframe on v2, finally concatenating all the frames. result = [] for i,row in df1.sort_values('v1').iterrows(): result.append(row.to_frame().T) result.append(df2[df2['c1'].eq(row['c1'])].sort_values('v2')) The resulting dataframe after sorting: >>> pd.concat(result, ignore_index=True) c1 c2 v1 v2 0 B NaN 2 5 1 B P 0 1 2 B T 2 2 3 B Y 0 2 4 C NaN 3 5 5 C Y 1 1 6 C P 1 2 7 C T 1 2 8 D NaN 4 2 9 D T 2 0 10 D P 1 1 11 D Y 1 1 12 A NaN 9 2 13 A Y 2 0 14 A P 4 1 15 A T 3 1 The problem with above approach is its iterative, and not so efficient when the number of dataframes increases and/or the number of rows increases in these dataframes. The real use-case scenario has from 2 to 6 dataframes, where number of rows ranges from few thousands to hundred thousands. UPDATE: Either of sorting the dataframes first then concatenating them, or concatenating the datframes first then sorting, will be fine, that is why I just included both the dataframes instead of just concatenating them and presenting a single dataframe. 
EDIT: Here is 4 dataframes from actual use-case scenario: from math import nan import pandas as pd df4 = pd.DataFrame({'c1': ['BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT'], 'c2': ['D1', 'D1', 'D1', 'D1', 'D1', 'D1', 'D1', 'D1', 'D1', 'D1', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'w1', 'w1', 'w1', 'w1', 'w1', 'w1', 'w1', 'w1', 'w1', 'w1', 'w2', 'w2', 'w2', 'w2', 'w2', 'w2', 'w2', 'w2', 'w2', 'w2', 'D1', 'D1', 'D1', 'D1', 'D1', 'D1', 'D1', 'D1', 'D1', 'D1', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'w1', 'w1', 'w1', 'w1', 'w1', 'w1', 'w1', 'w1', 'w1', 'w1', 'w2', 'w2', 'w2', 'w2', 'w2', 'w2', 'w2', 'w2', 'w2', 'w2', 'D1', 'D1', 'D1', 'D1', 'D1', 'D1', 'D1', 'D1', 'D1', 'D1', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'w1', 'w1', 'w1', 'w1', 'w1', 'w1', 'w1', 'w1', 'w1', 'w1', 'w2', 'w2', 'w2', 'w2', 'w2', 'w2', 'w2', 'w2', 'w2', 'w2', 'D1', 'D1', 'D1', 'D1', 'D1', 'D1', 'D1', 'D1', 'D1', 'D1', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'w1', 'w1', 'w1', 'w1', 'w1', 'w1', 'w1', 'w1', 'w1', 'w1', 'w2', 'w2', 'w2', 'w2', 'w2', 'w2', 'w2', 'w2', 'w2', 'w2', 'D1', 'D1', 'D1', 'D1', 'D1', 'D1', 'D1', 'D1', 'D1', 'D1', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'Sc', 'w1', 'w1', 'w1', 'w1', 'w1', 'w1', 'w1', 'w1', 'w1', 'w1', 'w2', 'w2', 'w2', 'w2', 'w2', 'w2', 'w2', 'w2', 'w2', 'w2'], 'c3': ['BAF', 'BAF', 'BAF', 'BAF', 'BAF', 'WH', 'WH', 'WH', 'WH', 'WH', 'BAF', 'BAF', 'BAF', 'BAF', 'BAF', 'WH', 'WH', 'WH', 'WH', 'WH', 'BAF', 'BAF', 'BAF', 'BAF', 'BAF', 'WH', 'WH', 'WH', 'WH', 'WH', 'BAF', 'BAF', 'BAF', 'BAF', 'BAF', 'WH', 'WH', 'WH', 'WH', 'WH', 'BAF', 'BAF', 'BAF', 'BAF', 'BAF', 'WH', 'WH', 'WH', 'WH', 'WH', 'BAF', 'BAF', 'BAF', 'BAF', 'BAF', 'WH', 'WH', 'WH', 'WH', 'WH', 'BAF', 'BAF', 'BAF', 
'BAF', 'BAF', 'WH', 'WH', 'WH', 'WH', 'WH', 'BAF', 'BAF', 'BAF', 'BAF', 'BAF', 'WH', 'WH', 'WH', 'WH', 'WH', 'BAF', 'BAF', 'BAF', 'BAF', 'BAF', 'WH', 'WH', 'WH', 'WH', 'WH', 'BAF', 'BAF', 'BAF', 'BAF', 'BAF', 'WH', 'WH', 'WH', 'WH', 'WH', 'BAF', 'BAF', 'BAF', 'BAF', 'BAF', 'WH', 'WH', 'WH', 'WH', 'WH', 'BAF', 'BAF', 'BAF', 'BAF', 'BAF', 'WH', 'WH', 'WH', 'WH', 'WH', 'BAF', 'BAF', 'BAF', 'BAF', 'BAF', 'WH', 'WH', 'WH', 'WH', 'WH', 'BAF', 'BAF', 'BAF', 'BAF', 'BAF', 'WH', 'WH', 'WH', 'WH', 'WH', 'BAF', 'BAF', 'BAF', 'BAF', 'BAF', 'WH', 'WH', 'WH', 'WH', 'WH', 'BAF', 'BAF', 'BAF', 'BAF', 'BAF', 'WH', 'WH', 'WH', 'WH', 'WH', 'BAF', 'BAF', 'BAF', 'BAF', 'BAF', 'WH', 'WH', 'WH', 'WH', 'WH', 'BAF', 'BAF', 'BAF', 'BAF', 'BAF', 'WH', 'WH', 'WH', 'WH', 'WH', 'BAF', 'BAF', 'BAF', 'BAF', 'BAF', 'WH', 'WH', 'WH', 'WH', 'WH', 'BAF', 'BAF', 'BAF', 'BAF', 'BAF', 'WH', 'WH', 'WH', 'WH', 'WH'], 'c4': ['001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss', '001', '002', '003', '004', 'mss'], 'v1': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 0, 2, 0, 2, 4, 6, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 0, 2, 0, 2, 4, 6, 4, 0, 2, 2, 0, 2, 0, 2, 4, 6, 4, 0, 2, 2, 0, 1, 0, 2, 3, 6, 2, 0, 2, 2, 0, 1, 0, 1, 3, 5, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 0, 2, 0, 2, 4, 6, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 0, 2, 0, 2, 4, 6, 4, 0, 2, 2, 0, 2, 0, 2, 4, 6, 4, 0, 2, 2, 0, 1, 0, 2, 3, 6, 2, 0, 2, 2, 0, 1, 0, 1, 3, 5, 1, 0, 2, 2, 0, 2, 0, 2, 4, 6, 4, 0, 2, 2, 0, 2, 0, 2, 4, 6, 4, 0, 2, 2, 0, 1, 0, 2, 3, 6, 2, 0, 2, 2, 0, 1, 0, 1, 3, 5, 1, 0], 'v2': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 2, 4, 6, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 2, 4, 6, 5, 0, 0, 1, 0, 1, 0, 2, 4, 6, 5, 0, 0, 0, 0, 1, 0, 2, 3, 5, 4, 0, 0, 0, 0, 1, 0, 1, 3, 5, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 2, 4, 6, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 2, 4, 6, 5, 0, 0, 1, 0, 1, 0, 2, 4, 6, 5, 0, 0, 0, 0, 1, 0, 2, 3, 5, 4, 0, 0, 0, 0, 1, 0, 1, 3, 5, 3, 0, 0, 1, 0, 1, 0, 2, 4, 6, 5, 0, 0, 1, 0, 1, 0, 2, 4, 6, 5, 0, 0, 0, 0, 1, 0, 2, 3, 5, 4, 0, 0, 0, 0, 1, 0, 1, 3, 5, 3, 0], 'v3': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 
2, 1, 0, 0, 1, 5, 9, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 1, 0, 0, 1, 5, 9, 7, 0, 1, 2, 1, 0, 0, 1, 5, 9, 7, 0, 1, 2, 1, 0, 0, 0, 4, 6, 4, 0, 1, 2, 1, 0, 0, 0, 2, 6, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 1, 0, 0, 1, 5, 9, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 1, 0, 0, 1, 5, 9, 7, 0, 1, 2, 1, 0, 0, 1, 5, 9, 7, 0, 1, 2, 1, 0, 0, 0, 4, 6, 4, 0, 1, 2, 1, 0, 0, 0, 2, 6, 3, 0, 1, 2, 1, 0, 0, 1, 5, 9, 7, 0, 1, 2, 1, 0, 0, 1, 5, 9, 7, 0, 1, 2, 1, 0, 0, 0, 4, 6, 4, 0, 1, 2, 1, 0, 0, 0, 2, 6, 3, 0], 'v4': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 5, 1, 3, 0, 5, 13, 21, 16, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 5, 1, 3, 0, 5, 13, 21, 16, 0, 3, 5, 1, 3, 0, 5, 13, 21, 16, 0, 3, 4, 1, 2, 0, 4, 10, 17, 10, 0, 3, 4, 1, 2, 0, 2, 8, 16, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 5, 1, 3, 0, 5, 13, 21, 16, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 5, 1, 3, 0, 5, 13, 21, 16, 0, 3, 5, 1, 3, 0, 5, 13, 21, 16, 0, 3, 4, 1, 2, 0, 4, 10, 17, 10, 0, 3, 4, 1, 2, 0, 2, 8, 16, 7, 0, 3, 5, 1, 3, 0, 5, 13, 21, 16, 0, 3, 5, 1, 3, 0, 5, 13, 21, 16, 0, 3, 4, 1, 2, 0, 4, 10, 17, 10, 0, 3, 4, 1, 2, 0, 2, 8, 16, 7, 0]}) df3 = pd.DataFrame({'c1': ['BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT'], 'c2': ['D1', 'D1', 'D1', 'Sc', 'Sc', 'Sc', 'w1', 'w1', 'w1', 'w2', 'w2', 'w2', 'D1', 'D1', 'D1', 'Sc', 'Sc', 'Sc', 'w1', 'w1', 'w1', 'w2', 'w2', 'w2', 'D1', 'D1', 'D1', 'Sc', 'Sc', 'Sc', 'w1', 'w1', 'w1', 'w2', 'w2', 'w2', 'D1', 'D1', 'D1', 'Sc', 'Sc', 'Sc', 'w1', 'w1', 'w1', 'w2', 'w2', 'w2', 'D1', 'D1', 'D1', 'Sc', 'Sc', 'Sc', 'w1', 'w1', 'w1', 'w2', 'w2', 'w2'], 'c3': ['BAF', 'WH', 'mss', 'BAF', 'WH', 'mss', 'BAF', 'WH', 'mss', 'BAF', 'WH', 'mss', 'BAF', 'WH', 'mss', 'BAF', 'WH', 'mss', 'BAF', 'WH', 'mss', 'BAF', 'WH', 'mss', 'BAF', 'WH', 'mss', 'BAF', 'WH', 'mss', 'BAF', 'WH', 'mss', 'BAF', 'WH', 'mss', 'BAF', 'WH', 'mss', 'BAF', 'WH', 'mss', 'BAF', 'WH', 'mss', 'BAF', 'WH', 'mss', 'BAF', 'WH', 'mss', 'BAF', 'WH', 'mss', 'BAF', 'WH', 'mss', 'BAF', 'WH', 'mss'], 'c4': [nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan], 'v1': [0, 0, 0, 6, 16, 0, 0, 0, 0, 0, 0, 0, 6, 16, 0, 6, 16, 0, 5, 13, 0, 5, 10, 0, 0, 0, 0, 6, 16, 0, 0, 0, 0, 0, 0, 0, 6, 16, 0, 6, 16, 0, 5, 13, 0, 5, 10, 0, 6, 16, 0, 6, 16, 0, 5, 13, 0, 5, 10, 0], 'v2': [0, 0, 0, 2, 17, 0, 0, 0, 0, 0, 0, 0, 2, 17, 0, 2, 17, 0, 1, 14, 0, 1, 12, 0, 0, 0, 0, 2, 17, 0, 0, 0, 0, 0, 0, 0, 2, 17, 0, 2, 17, 0, 1, 14, 0, 1, 12, 0, 2, 17, 0, 2, 17, 0, 1, 14, 0, 1, 12, 0], 'v3': [0, 0, 0, 4, 22, 0, 0, 0, 0, 0, 0, 0, 4, 22, 0, 4, 22, 0, 4, 14, 0, 4, 11, 0, 0, 0, 0, 4, 22, 0, 0, 0, 0, 0, 0, 0, 4, 22, 0, 4, 22, 0, 4, 14, 0, 4, 11, 0, 4, 22, 0, 4, 22, 0, 4, 14, 0, 4, 11, 0], 'v4': [0, 0, 0, 12, 55, 0, 0, 0, 0, 0, 0, 0, 12, 55, 0, 12, 55, 0, 10, 41, 0, 10, 
33, 0, 0, 0, 0, 12, 55, 0, 0, 0, 0, 0, 0, 0, 12, 55, 0, 12, 55, 0, 10, 41, 0, 10, 33, 0, 12, 55, 0, 12, 55, 0, 10, 41, 0, 10, 33, 0]}) df2 = pd.DataFrame({'c1': ['BMI', 'BMI', 'BMI', 'BMI', 'BMI', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'DIABP', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'HEIGHT', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'SYSBP', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT', 'WEIGHT'], 'c2': ['D1', 'Sc', 'w1', 'w2', 'mss', 'D1', 'Sc', 'w1', 'w2', 'mss', 'D1', 'Sc', 'w1', 'w2', 'mss', 'D1', 'Sc', 'w1', 'w2', 'mss', 'D1', 'Sc', 'w1', 'w2', 'mss'], 'c3': [nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan], 'c4': [nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan], 'v1': [0, 22, 0, 0, 0, 22, 22, 18, 15, 0, 0, 22, 0, 0, 0, 22, 22, 18, 15, 0, 22, 22, 18, 15, 0], 'v2': [0, 19, 0, 0, 0, 19, 19, 15, 13, 0, 0, 19, 0, 0, 0, 19, 19, 15, 13, 0, 19, 19, 15, 13, 0], 'v3': [0, 26, 0, 0, 0, 26, 26, 18, 15, 0, 0, 26, 0, 0, 0, 26, 26, 18, 15, 0, 26, 26, 18, 15, 0], 'v4': [0, 67, 0, 0, 0, 67, 67, 51, 43, 0, 0, 67, 0, 0, 0, 67, 67, 51, 43, 0, 67, 67, 51, 43, 0]}) df1 = pd.DataFrame({'c1': ['BMI', 'DIABP', 'HEIGHT', 'SYSBP', 'WEIGHT', 'mss'], 'c2': [nan, nan, nan, nan, nan, nan], 'c3': [nan, nan, nan, nan, nan, nan], 'c4': [nan, nan, nan, nan, nan, nan], 'v1': [22, 22, 22, 22, 22, 0], 'v2': [19, 19, 19, 19, 19, 0], 'v3': [26, 26, 26, 26, 26, 0], 'v4': [67, 67, 67, 67, 67, 0]}) # Comment for easy code selection Even for above four dataframes, sorting and merging criteria is still the same Sorting df1 on v1 Sorting c2 in df2 on v2, maintaining the order of c1 from df1 Sorting c3 in df3 on v3, maintaining the order of c1 from df1, and c2 from df2 Sorting c4 in df3 on v4, maintaining the order of c1 from df1, c2 from df2, and c3 from df3 And in such cases when the number of dataframe to sort and merge grows, the solution I have used above is becoming really inefficient. | Another solution using groupby without sorting groups: import itertools out = pd.concat([df1.sort_values('v1'), df2.sort_values('v2')], ignore_index=True) # Original answer # >>> out.reindex(out.groupby('c1', sort=False) # .apply(lambda x: x.index) # .explode()) # Faster alternative >>> out.loc[itertools.chain.from_iterable(out.groupby('c1', sort=False) .groups.values())] >>> out c1 c2 v1 v2 0 B NaN 2 5 8 B P 0 1 12 B T 2 2 13 B Y 0 2 1 C NaN 3 5 9 C Y 1 1 14 C P 1 2 15 C T 1 2 2 D NaN 4 2 5 D T 2 0 10 D P 1 1 11 D Y 1 1 3 A NaN 9 2 4 A Y 2 0 6 A P 4 1 7 A T 3 1 | 7 | 4 |
68,914,523 | 2021-8-24 | https://stackoverflow.com/questions/68914523/fastapi-pydantic-value-error-raises-internal-server-error | I am using FastAPI with Pydantic. My problem - I need to raise ValueError using Pydantic from fastapi import FastAPI from pydantic import BaseModel, validator from fastapi import Depends, HTTPException app = FastAPI() class RankInput(BaseModel): rank: int @validator('rank') def check_if_value_in_range(cls, v): """ check if input rank is within range """ if not 0 < v < 1000001: raise ValueError("Rank Value Must be within range (0,1000000)") #raise HTTPException(status_code=400, detail="Rank Value Error") - this works But I am looking for a solution using ValueError return v def get_info_by_rank(rank): return rank @app.get('/rank/{rank}') async def get_rank(value: RankInput = Depends()): result = get_info_by_rank(value.rank) return result this piece of code gives Internal Server Error when a ValueError is raised INFO: 127.0.0.1:59427 - "GET /info/?rank=-1 HTTP/1.1" 500 Internal Server Error ERROR: Exception in ASGI application Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/uvicorn/protocols/http/h11_impl.py", line 396, in run_asgi result = await app(self.scope, self.receive, self.send) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__ return await self.app(scope, receive, send) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fastapi/applications.py", line 199, in __call__ await super().__call__(scope, receive, send) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/starlette/applications.py", line 111, in __call__ await self.middleware_stack(scope, receive, send) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/starlette/middleware/errors.py", line 181, in __call__ raise exc from None File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/starlette/middleware/errors.py", line 159, in __call__ await self.app(scope, receive, _send) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/starlette/exceptions.py", line 82, in __call__ raise exc from None File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/starlette/exceptions.py", line 71, in __call__ await self.app(scope, receive, sender) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/starlette/routing.py", line 566, in __call__ await route.handle(scope, receive, send) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/starlette/routing.py", line 227, in handle await self.app(scope, receive, send) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/starlette/routing.py", line 41, in app response = await func(request) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fastapi/routing.py", line 195, in app dependency_overrides_provider=dependency_overrides_provider, File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fastapi/dependencies/utils.py", line 550, in solve_dependencies solved = await run_in_threadpool(call, **sub_values) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/starlette/concurrency.py", line 34, in run_in_threadpool return await 
loop.run_in_executor(None, func, *args) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/concurrent/futures/thread.py", line 57, in run result = self.fn(*self.args, **self.kwargs) File "pydantic/main.py", line 400, in pydantic.main.BaseModel.__init__ pydantic.error_wrappers.ValidationError: 1 validation error for GetInput rank ValueError() takes no keyword arguments (type=type_error) ERROR:uvicorn.error:Exception in ASGI application Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/uvicorn/protocols/http/h11_impl.py", line 396, in run_asgi result = await app(self.scope, self.receive, self.send) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__ return await self.app(scope, receive, send) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fastapi/applications.py", line 199, in __call__ await super().__call__(scope, receive, send) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/starlette/applications.py", line 111, in __call__ await self.middleware_stack(scope, receive, send) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/starlette/middleware/errors.py", line 181, in __call__ raise exc from None File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/starlette/middleware/errors.py", line 159, in __call__ await self.app(scope, receive, _send) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/starlette/exceptions.py", line 82, in __call__ raise exc from None File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/starlette/exceptions.py", line 71, in __call__ await self.app(scope, receive, sender) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/starlette/routing.py", line 566, in __call__ await route.handle(scope, receive, send) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/starlette/routing.py", line 227, in handle await self.app(scope, receive, send) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/starlette/routing.py", line 41, in app response = await func(request) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fastapi/routing.py", line 195, in app dependency_overrides_provider=dependency_overrides_provider, File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fastapi/dependencies/utils.py", line 550, in solve_dependencies solved = await run_in_threadpool(call, **sub_values) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/starlette/concurrency.py", line 34, in run_in_threadpool return await loop.run_in_executor(None, func, *args) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/concurrent/futures/thread.py", line 57, in run result = self.fn(*self.args, **self.kwargs) File "pydantic/main.py", line 400, in pydantic.main.BaseModel.__init__ pydantic.error_wrappers.ValidationError: 1 validation error for GetInput rank ValueError() takes no keyword arguments (type=type_error) I also checked https://github.com/tiangolo/fastapi/issues/2180. But I was not able to figure out a solution. What I need to do is Raise ValueError with a Custom Status Code. 
Note - I know I can get the job done by raising HTTPException, but I am looking for a solution using ValueError. Could you tell me where I am going wrong? I have also posted this issue on GitHub - https://github.com/tiangolo/fastapi/issues/3761 | If you're not raising an HTTPException then normally any other uncaught exception will generate a 500 response (an Internal Server Error). If your intent is to respond with some other custom error message and HTTP status when raising a particular exception - say, ValueError - then you can add a global exception handler to your app: from fastapi import FastAPI, Request from fastapi.responses import JSONResponse @app.exception_handler(ValueError) async def value_error_exception_handler(request: Request, exc: ValueError): return JSONResponse( status_code=400, content={"message": str(exc)}, ) This will give a 400 response (or you can change the status code to whatever you like) like this: { "message": "Value Must be within range (0,1000000)" } | 18 | 15 |
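A small addition to the accepted answer above (mine, not the original author's): the same handler can also be registered without the decorator via add_exception_handler, which is convenient when the handler lives in a separate module.

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()

async def value_error_exception_handler(request: Request, exc: ValueError):
    return JSONResponse(status_code=400, content={"message": str(exc)})

# Equivalent to decorating the function with @app.exception_handler(ValueError).
app.add_exception_handler(ValueError, value_error_exception_handler)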
68,913,379 | 2021-8-24 | https://stackoverflow.com/questions/68913379/how-to-create-the-custom-loss-function-by-adding-negative-entropy-to-the-cross-e | I recently read a paper entitled "Regularizing Neural Networks by Penalizing Confident Output Distributions" (https://arxiv.org/abs/1701.06548). The authors discuss regularizing neural networks by penalizing low-entropy output distributions, which they do by adding a negative entropy term to the negative log-likelihood and creating a custom loss function for model training. The value β controls the strength of the confidence penalty. I have written a custom function for categorical cross-entropy as shown below, but the negative entropy term still needs to be added to the loss function. import tensorflow as tf def custom_loss(y_true, y_pred): cce = tf.keras.losses.CategoricalCrossentropy() cce_loss = cce(y_true, y_pred) return cce_loss | The entropy of y_pred is essentially the categorical cross entropy between y_pred and itself: def custom_loss(y_true, y_pred, beta): cce = tf.keras.losses.CategoricalCrossentropy() return cce(y_true, y_pred) - beta*cce(y_pred, y_pred) | 5 | 1 |
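To actually train with the beta-parameterized loss from the answer above, Keras needs a callable that takes only (y_true, y_pred), so one option is a small closure factory. The model architecture and the beta value below are placeholders of my own, not from the question:

import tensorflow as tf

def custom_loss(y_true, y_pred, beta):
    # Loss from the answer above: cross-entropy minus beta times the entropy of y_pred.
    cce = tf.keras.losses.CategoricalCrossentropy()
    return cce(y_true, y_pred) - beta * cce(y_pred, y_pred)

def make_confidence_penalty_loss(beta):
    # Close over beta so the returned function matches Keras' (y_true, y_pred) signature.
    def loss(y_true, y_pred):
        return custom_loss(y_true, y_pred, beta)
    return loss

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss=make_confidence_penalty_loss(0.1), metrics=["accuracy"])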
68,913,649 | 2021-8-24 | https://stackoverflow.com/questions/68913649/python3-dataframe-mutiple-separators | I'm trying to take my df.to_csv output, which uses sep="\t", and turn that tab into two spaces instead. This question is similar but the solution isn't working: Pandas to_csv with multiple separators. \s+ won't work, as Python will complain that it's not a single-character separator. This works, as it's a tab: df2.to_csv('test.csv', index=False, sep='\t', quoting=csv.QUOTE_NONE, quotechar="", escapechar=None) This throws TypeError: "delimiter" must be a 1-character string: df2.to_csv('test.csv', index=False, sep='\s+', quoting=csv.QUOTE_NONE, quotechar="", escapechar=None) | Let's look at using to_markdown instead of to_csv: df = pd.DataFrame({'col1':'aaa bbb ccc'.split(), 'col2':[1, 10, 1000], 'col3': [True, False, True]}) df.to_markdown('a.txt', tablefmt='plain', index=False) !type a.txt File: col1 col2 col3 aaa 1 True bbb 10 False ccc 1000 True | 5 | 1 |
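If the requirement is literally "a two-space separator", another route (my sketch, not part of the accepted answer above) is to render with a single-character separator and post-process the text before writing it out:

import pandas as pd

df2 = pd.DataFrame({'col1': ['aaa', 'bbb', 'ccc'], 'col2': [1, 10, 1000]})
# to_csv only accepts a 1-character sep, so write tabs and swap them afterwards.
text = df2.to_csv(index=False, sep='\t')
with open('test.csv', 'w') as f:
    f.write(text.replace('\t', '  '))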
68,848,853 | 2021-8-19 | https://stackoverflow.com/questions/68848853/how-can-i-set-a-number-of-default-values-for-many-fastapi-endpoints | I am using FastAPI and I have a number of endpoints that look like this: @app.get("/REDS/") def query_REDS( request: Request, lighter: Optional[bool] = False, darker: Optional[bool] = False, inverse: Optional[bool] = False, amount: Optional[int] = 10): pass # Work done here @app.get("/BLUES/") def query_BLUES( request: Request, lighter: Optional[bool] = False, darker: Optional[bool] = False, inverse: Optional[bool] = False, amount: Optional[int] = 10): pass # Work done here @app.get("/GREENS/") def query_GREENS( request: Request, lighter: Optional[bool] = False, darker: Optional[bool] = False, inverse: Optional[bool] = False, amount: Optional[int] = 10): pass # Work done here This looks like this in the swagger UI: The real config is passed in the request and parsed manually. Whenever I need to update the signature of these endpoints, I need to update it in like 20 different places. Is there a way to define those specific default arguments in one place? I tried using the pydantic BaseModel to define an input model: class Arguments(BaseModel): lighter: Optional[bool] = False darker: Optional[bool] = False inverse: Optional[bool] = False amount: Optional[int] = 10 @app.get("/REDS/") def query_REDS( request: Request, arguments: Arguments): pass # Work done here @app.get("/BLUES/") def query_BLUES( request: Request, arguments: Arguments): pass # Work done here @app.get("/GREENS/") def query_GREENS( request: Request, arguments: Arguments): pass # Work done here. But this is not what I am after, first of all because using a body in a get request is not recommended and not supported everywhere and second of all because it is not that useful in the swagger UI: Is there a way to define a sort of default signature to a number of different enpoints? | To do what you want, you can use regular classes or pydantic models as class dependencies: class CommonParams: def __init__(self, request: Request, lighter: Optional[bool] = False, darker: Optional[bool] = False, inverse: Optional[bool] = False, amount: Optional[int] = 10): self.request = request self.lighter = lighter self.darker = darker self.inverse = inverse self.amount = amount class Arguments(BaseModel): lighter: Optional[bool] = False darker: Optional[bool] = False inverse: Optional[bool] = False amount: Optional[int] = 10 @app.get("/REDS/") def query_REDS(params=Depends(CommonParams)): pass # Work done here @app.get("/BLUES/") def query_BLUES(params=Depends(Arguments)): pass # Work done here | 5 | 7 |
68,869,110 | 2021-8-20 | https://stackoverflow.com/questions/68869110/python-static-type-hint-check-mismatch-between-iterableanystr-vs-iterablestr | I'm running into this static type hint mismatch (with Pyright): from __future__ import annotations from typing import AnyStr, Iterable def foo(i: Iterable[AnyStr]): return i def bar(i: Iterable[str] | Iterable[bytes]): return i def baz(i: Iterable[str | bytes]): return i def main(): s = ['a'] # makes sense to me baz(foo(s)) # allowed foo(baz(s)) # not allowed # makes sense to me baz(bar(s)) # allowed bar(baz(s)) # not allowed bar(foo(s)) # allowed foo(bar(s)) # nope -- why? What's the difference between Iterable[AnyStr] and Iterable[str] | Iterable[bytes]? Shouldn't they be "equivalent"? (save for AnyStr referring to a single consistent type within a context) More concretely: what is the right way to type-hint the following? import random from typing import Iterable, AnyStr def foo(i: Iterable[AnyStr]): return i def exclusive_bytes_or_str(): # type-inferred to be Iterator[bytes] | Iterator[str] if random.randrange(2) == 0: return iter([b'bytes']) else: return iter(['str']) foo(iter([b'bytes'])) # fine foo(iter(['str'])) # fine foo(exclusive_bytes_or_str()) # same error | Paraphrased answer from erictraut@github: This isn't really the intended use for a constrained TypeVar. I recommend using an @overload instead: @overload def foo(i: Iterable[str]) -> Iterable[str]: ... @overload def foo(i: Iterable[bytes]) -> Iterable[bytes]: ... def foo(i: Iterable[AnyStr]) -> Iterable[AnyStr]: return i Because: The type Iterable[str] | Iterable[bytes] is not assignable to type Iterable[AnyStr]. A constrained type variable needs to be matched against one of its contraints, not multiple constraints. When a type variable is "solved", it needs to be replaced by another (typically concrete) type. If foo(bar(s)) were allowed, what type would the AnyType@foo type variable resolve to? If it were resolved to type str | bytes, then the concrete return type of foo would be Iterable[str | bytes]. That's clearly wrong. | 5 | 2 |
68,909,283 | 2021-8-24 | https://stackoverflow.com/questions/68909283/how-to-customize-pandas-pie-plot-with-labels-and-legend | Tried plotting a pie chart using: import pandas as pd import numpy as np data = {'City': ['KUMASI', 'ACCRA', 'ACCRA', 'ACCRA', 'KUMASI', 'ACCRA', 'ACCRA', 'ACCRA', 'ACCRA'], 'Building': ['Commercial', 'Commercial', 'Industrial', 'Commercial', 'Industrial', 'Commercial', 'Commercial', 'Commercial', 'Commercial'], 'LPL': ['NC', 'C', 'C', 'C', 'NC', 'C', 'NC', 'NC', 'NC'], 'Lgfd': ['NC', 'C', 'C', 'C', 'NC', 'C', 'NC', 'NC', 'C'], 'Location': ['NC', 'C', 'C', 'C', 'NC', 'C', 'C', 'NC', 'NC'], 'Hazard': ['NC', 'C', 'C', 'C', 'NC', 'C', 'C', 'NC', 'NC'], 'Inspection': ['NC', np.nan, np.nan, np.nan, 'NC', 'NC', 'C', 'C', 'C'], 'Name': ['Zonal', 'In Prog', 'Tullow Oil', 'XGI', 'Food Factory', 'MOH', 'EV', 'CSD', 'Electroland'], 'Air Termination System': ['Vertical Air Termination', 'Vertical Air Termination', 'Vertical Air Termination', 'Early Streamer Emission', 'Vertical Air Termination', 'Vertical Air Termination', 'Vertical Air Termination', 'Vertical Air Termination', 'Early Streamer Emission'], 'Positioned Using': ['Highest Points', 'Software', 'Software', 'Software', 'Highest Points', np.nan, np.nan, 'Rolling Sphere Method', 'Software']} df = pd.DataFrame(data) colors = ['#ff9999','#66b3ff','#99ff99','#ffcc99'] data = df["Air Termination System"].value_counts().plot(kind="pie",autopct='%1.1f%%', radius=1.5, shadow=True, explode=[0.05, 0.05], colors=colors) Present chart looks like: How do I bring the title "Air Termination Systems" outside the chart and also create a legend at the top right using the colors? | legend=True adds the legend title='Air Termination System' puts a title at the top ylabel='' removes 'Air Termination System' from inside the plot. The label inside the plot was a result of radius=1.5 labeldistance=None removes the other labels since there is a legend. If necessary, specify figsize=(width, height) inside data.plot(...) colors = ['#ff9999','#66b3ff','#99ff99','#ffcc99'] data = df["Air Termination System"].value_counts() ax = data.plot(kind="pie", autopct='%1.1f%%', shadow=True, explode=[0.05, 0.05], colors=colors, legend=True, title='Air Termination System', ylabel='', labeldistance=None) ax.legend(bbox_to_anchor=(1, 1.02), loc='upper left') plt.show() | 5 | 12 |
68,905,848 | 2021-8-24 | https://stackoverflow.com/questions/68905848/how-to-correctly-specify-type-hints-with-asyncgenerator-and-asynccontextmanager | Consider the following code import contextlib import abc import asyncio from typing import AsyncContextManager, AsyncGenerator, AsyncIterator class Base: @abc.abstractmethod async def subscribe(self) -> AsyncContextManager[AsyncGenerator[int, None]]: pass class Impl1(Base): @contextlib.asynccontextmanager async def subscribe(self) -> AsyncIterator[ AsyncGenerator[int, None] ]: <-- mypy error here async def _generator(): for i in range(5): await asyncio.sleep(1) yield i yield _generator() For Impl1.subscribe mypy gives the error Signature of "subscribe" incompatible with supertype "Base" What is the correct way to specify type hints in the above case? Or is mypy wrong here? | I just happened to come up with the same problem and found this question on the very same day, but also figured out the answer quickly. You need to remove async from the abstract method. To explain why, I'll simplify the case to a simple async iterator: @abc.abstractmethod async def foo(self) -> AsyncIterator[int]: pass async def v1(self) -> AsyncIterator[int]: yield 0 async def v2(self) -> AsyncIterator[int]: return v1() If you compare v1 and v2, you'll see that the function signature looks the same, but they actually do very different things. v2 is compatible with the abstract method, v1 is not. When you add the async keyword, mypy infers the return type of the function to be a Coroutine. But, if you also put a yield in, it then infers the return type to be AsyncIterator: reveal_type(foo) # -> typing.Coroutine[Any, Any, typing.AsyncIterator[builtins.int]] reveal_type(v1) # -> typing.AsyncIterator[builtins.int] reveal_type(v2) # -> typing.Coroutine[Any, Any, typing.AsyncIterator[builtins.int]] As you can see, the lack of a yield in the abstract method means that this is inferred as a Coroutine[..., AsyncIterator[int]]. In other words, a function used like async for i in await v2():. By removing the async: @abc.abstractmethod def foo(self) -> AsyncIterator[int]: pass reveal_type(foo) # -> typing.AsyncIterator[builtins.int] We see that the return type is now AsyncIterator and is now compatible with v1, rather than v2. In other words, a function used like async for i in v1(): You can also see that this is fundamentally the same thing as v1: def v3(self) -> AsyncIterator[int]: return v1() While the syntax is different, both v3 and v1 are functions which will return an AsyncIterator when called, which should be obvious given that we are literally returning the result of v1(). | 27 | 32 |
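Applying that explanation back to the original classes would presumably look like the sketch below (my reconstruction, not code from the answer): the abstract method drops async and is annotated with the type the decorated implementation actually returns.

import abc
import asyncio
import contextlib
from typing import AsyncContextManager, AsyncGenerator, AsyncIterator

class Base:
    @abc.abstractmethod
    def subscribe(self) -> AsyncContextManager[AsyncGenerator[int, None]]:
        ...

class Impl1(Base):
    @contextlib.asynccontextmanager
    async def subscribe(self) -> AsyncIterator[AsyncGenerator[int, None]]:
        async def _generator():
            for i in range(5):
                await asyncio.sleep(1)
                yield i
        # The decorator turns this async generator into a context manager,
        # so callers use: async with Impl1().subscribe() as gen: ...
        yield _generator()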