Q: Use Python to get value from element in XML file

I'm writing a program in Python that looks at an XML file that I get from an API and should return a list of users' initials to a list for later use. My XML file looks like this, with about 60 users:

<ArrayOfuser xmlns="WebsiteWhereDataComesFrom.com" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
  <user>
    <active>true</active>
    <datelastlogin>8/21/2019 9:16:30 PM</datelastlogin>
    <dept>3</dept>
    <email>useremail</email>
    <firstname>userfirstname</firstname>
    <lastname>userlastname</lastname>
    <lastupdated>2/6/2019 11:10:29 PM</lastupdated>
    <lastupdatedby>lastupdateduserinitials</lastupdatedby>
    <loginemail>userloginemail</loginemail>
    <phone1>userphone</phone1>
    <phone2/>
    <rep>userinitials1</rep>
  </user>
  <user>
    <active>true</active>
    <datelastlogin>12/1/2022 3:31:25 PM</datelastlogin>
    <dept>5</dept>
    <email>useremail</email>
    <firstname>userfirstname</firstname>
    <lastname>userlastname</lastname>
    <lastupdated>4/8/2020 3:02:08 PM</lastupdated>
    <lastupdatedby>lastupdateduserinitials</lastupdatedby>
    <loginemail>userloginemail</loginemail>
    <phone1>userphone</phone1>
    <phone2/>
    <rep>userinitials2</rep>
  </user>
  ...
</ArrayOfuser>

I'm trying to use an XML parser to return the text in the <rep> tag for each user to a list. I would also love to have it sorted by date of last login, but that's not something I need, and I'll just alphabetize the list if sorting by date overcomplicates this process. The code below shows my attempt at just printing the data without saving it to a list, but the output is unexpected, as shown below as well.

Code I tried:

#load file
activeusers = etree.parse("activeusers.xml")

#declare namespaces
ns = {'xx': 'http://schemas.datacontract.org/2004/07/IQWebAPI.Users'}

#locate rep tag and print (saving to list once printing shows expected output)
targets = activeusers.xpath('//xx:user[xx:rep]', namespaces=ns)
for target in targets:
    print(target.attrib)

Output:

{}
{}

I'm expecting the output to look like the below codeblock. Once it looks something like that, I should be able to change the print statement to instead save to a list.

{userinitials1}
{userinitials2}

I think my issue comes from what's inside my print statement with printing the attribute. I tried this with variations of target.getparent() with keys(), items(), and get() as well, and they all seem to show the same empty output when printed.

EDIT: I found a post from someone with a similar problem that had been solved, and the solution was to use this code (I changed the filenames to suit my needs):

root = etree.parse("activeusers.xml")
values = [s.find('rep').text for s in root.findall('.//user') if s.find('rep') is not None]
print(values)

Again, the expected output was a populated list, but when printed the list is empty. I think now my issue may have to do with the fact that my document contains namespaces. For my use, I may just delete them since I don't think these will end up being required, so please correct me if namespaces are more important than I realize.

SECOND EDIT: I also realized the API can send me this data in a JSON format and not just XML, so that file would look like the below codeblock. Any solution that can append the text in the "rep" child of each user to a list, in JSON format or XML, is perfect and would be greatly appreciated, since once I have this list I will not need to use the XML or JSON file for any other purpose.
[ { "active": true, "datelastlogin": "8/21/2019 9:16:30 PM", "dept": 3, "email": "useremail", "firstname": "userfirstname", "lastname": "userlastname", "lastupdated": "2/6/2019 11:10:29 PM", "lastupdatedby": "lastupdateduserinitials", "loginemail": "userloginemail", "phone1": "userphone", "phone2": "", "rep": "userinitials1" }, { "active": true, "datelastlogin": "12/1/2022 3:31:25 PM", "dept": 5, "email": "useremail", "firstname": "userfirstname", "lastname": "userlastname", "lastupdated": "4/8/2020 3:02:08 PM", "lastupdatedby": "lastupdateduserinitials", "loginemail": "userloginemail", "phone1": "userphone", "phone2": "", "rep": "userinitials2" } ] A: As this is xml with namespace, you can have like import xml.etree.ElementTree as ET root = ET.fromstring(xml_in_qes) my_ns = {'root': 'WebsiteWhereDataComesFrom.com'} myUser=[] for eachUser in root.findall('root:user',my_ns): rep=eachUser.find("root:rep",my_ns) print(rep.text) myUser.append(rep.text) note: xml_in_qes is the XML attached in this question. ('root:user',my_ns): search user in my_ns which has key root i.e WebsiteWhereDataComesFrom.com A: XML data implementation: import xml.etree.ElementTree as ET xmlstring = ''' <ArrayOfuser> <user> <active>true</active> <datelastlogin>8/21/2019 9:16:30 PM</datelastlogin> <dept>3</dept> <email>useremail</email> <firstname>userfirstname</firstname> <lastname>userlastname</lastname> <lastupdated>2/6/2019 11:10:29 PM</lastupdated> <lastupdatedby>lastupdateduserinitials</lastupdatedby> <loginemail>userloginemail</loginemail> <phone1>userphone</phone1> <phone2/> <rep>userinitials1</rep> </user> <user> <active>true</active> <datelastlogin>8/21/2019 9:16:30 PM</datelastlogin> <dept>3</dept> <email>useremail</email> <firstname>userfirstname</firstname> <lastname>userlastname</lastname> <lastupdated>2/6/2019 11:10:29 PM</lastupdated> <lastupdatedby>lastupdateduserinitials</lastupdatedby> <loginemail>userloginemail</loginemail> <phone1>userphone</phone1> <phone2/> <rep>userinitials2</rep> </user> <user> <active>true</active> <datelastlogin>8/21/2019 9:16:30 PM</datelastlogin> <dept>3</dept> <email>useremail</email> <firstname>userfirstname</firstname> <lastname>userlastname</lastname> <lastupdated>2/6/2019 11:10:29 PM</lastupdated> <lastupdatedby>lastupdateduserinitials</lastupdatedby> <loginemail>userloginemail</loginemail> <phone1>userphone</phone1> <phone2/> <rep>userinitials3</rep> </user> </ArrayOfuser> ''' user_array = ET.fromstring(xmlstring) replist = [] for users in user_array.findall('user'): replist.append((users.find('rep').text)) print(replist) Output: ['userinitials1', 'userinitials2', 'userinitials3'] JSON data implementation: userlist = [ { "active": "true", "datelastlogin": "8/21/2019 9:16:30 PM", "dept": 3, "email": "useremail", "firstname": "userfirstname", "lastname": "userlastname", "lastupdated": "2/6/2019 11:10:29 PM", "lastupdatedby": "lastupdateduserinitials", "loginemail": "userloginemail", "phone1": "userphone", "phone2": "", "rep": "userinitials1" }, { "active": "true", "datelastlogin": "12/1/2022 3:31:25 PM", "dept": 5, "email": "useremail", "firstname": "userfirstname", "lastname": "userlastname", "lastupdated": "4/8/2020 3:02:08 PM", "lastupdatedby": "lastupdateduserinitials", "loginemail": "userloginemail", "phone1": "userphone", "phone2": "", "rep": "userinitials2" }, { "active": "true", "datelastlogin": "12/1/2022 3:31:25 PM", "dept": 5, "email": "useremail", "firstname": "userfirstname", "lastname": "userlastname", "lastupdated": "4/8/2020 3:02:08 PM", "lastupdatedby": 
"lastupdateduserinitials", "loginemail": "userloginemail", "phone1": "userphone", "phone2": "", "rep": "userinitials3" } ] replist = [] for user in userlist: replist.append(user["rep"]) print(replist) Output: ['userinitials1', 'userinitials2', 'userinitials3'] A: If you like a sorted tabel of users who have last logged on you can put the parsed values into pandas: import xml.etree.ElementTree as ET import pandas as pd tree = ET.parse("activeusers.xml") root = tree.getroot() namespaces = {"xmlns":"WebsiteWhereDataComesFrom.com" , "xmlns:i":"http://www.w3.org/2001/XMLSchema-instance"} columns =["rep", "datelastlogin"] login = [] usr = [] for user in root.findall("xmlns:user", namespaces): for lastlog in user.findall("xmlns:datelastlogin", namespaces): login.append(lastlog.text) for activ in user.findall("xmlns:rep", namespaces): usr.append(activ.text) data = list(zip(usr, login)) df = pd.DataFrame(data, columns=columns) df["datelastlogin"] = df["datelastlogin"].astype('datetime64[ns]') df = df.sort_values(by='datelastlogin', ascending = False) print(df.to_string()) Output: rep datelastlogin 1 userinitials2 2022-12-01 15:31:25 0 userinitials1 2019-08-21 21:16:30
Q: Why does Python installed via Homebrew not include Tkinter

I've installed Python via Homebrew on my Mac:

brew install python

After that I checked my Python version as 2.7.11, then I tried to perform

import Tkinter

and I got the following error message:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk/Tkinter.py", line 39, in <module>
    import _tkinter # If this fails your Python may not be configured for Tk
ImportError: No module named _tkinter

A: I am running MacOS Big Sur (11.2.3). With python2, I have Tkinter built in. With python3, it has to be installed manually, and it's very simple; just run:

$ brew install python-tk

To run python2 in a terminal, execute python file.py. To run python3 in a terminal, execute python3 file.py.

A: Based on the comments from above and the fact that Python must be linked to the Tcl/Tk framework:

If you don't have the Xcode command line tools, install those:

xcode-select --install

If you don't have a Tcl/Tk brew installation (check brew list), install that:

brew install tcl-tk

Then run "brew uninstall python" if that was not installed with the option --with-tcl-tk (the current official option). Then install Python again, linking it to the brew-installed Tcl/Tk:

brew install python --with-tcl-tk

A: UPDATE: Other answers have found workarounds, so this answer is now outdated.

12/18 Update: No longer possible for various reasons. Below is now outdated. You'll have to install Python directly from python.org if you want to remove those warnings.

2018 Update:

brew reinstall python --with-tcl-tk

Note: Homebrew now uses Python 3 by default - Homebrew Blog. Docs.

Testing: python should bring up the system's Python 2, and python3 should bring up Python 3. idle points to the system Python/tcl-tk; it will show an out-dated tcl-tk error (unless you brew install python@2 --with-tcl-tk). idle3 should bring up Python 3 with no warnings.

Caveat: --with-tcl-tk will install python directly from python.org, which you'll see when you run brew info python. More info here.

A: With brew and python3 you have to install Tkinter separately. brew message while installing python:

tkinter is no longer included with this formula, but it is available separately:
brew install [email protected]

A: If you're using pyenv, you can try installing tcl-tk via homebrew and then activating the env. vars. mentioned in its caveats section, as detailed in this answer. Activating those env. vars. prior to installing python via homebrew may work for you:

※ export PATH="/usr/local/opt/tcl-tk/bin:$PATH"
※ export LDFLAGS="-L/usr/local/opt/tcl-tk/lib"
※ export CPPFLAGS="-I/usr/local/opt/tcl-tk/include"
※ export PKG_CONFIG_PATH="/usr/local/opt/tcl-tk/lib/pkgconfig"
※ export PYTHON_CONFIGURE_OPTS="--with-tcltk-includes='-I$(brew --prefix tcl-tk)/include' \
      --with-tcltk-libs='-L$(brew --prefix tcl-tk)/lib -ltcl8.6 -ltk8.6'"
※ brew reinstall python

A: On Mac OS X you must install TCL separately. You will find instructions and downloadables here: https://www.tcl.tk/software/tcltk/ and there: http://wiki.tcl.tk/1013. It requires a little bit of effort, but it is neither complicated nor difficult.

A: It may be because you don't have the latest Xcode command line tools, so brew built python from source instead of from a bottle. Try:

xcode-select --install
brew uninstall python
brew install python --use-brewed-tk

A: It is a bit more complicated now; true, you still need to have the Xcode command line tools and homebrew as a start. But the procedure changes constantly. Homebrew took out tcl-tk support long ago, and Apple still only supplies v8.5 of tcl-tk. Anyway, it is possible, and I maintain a github gist personally to fix these issues. The latest update is using python 3.8.1 (it will probably be usable on the 3.8.x branch later too); see here and just follow the steps outlined: github gist link to install tcl-tk with python

A: On MacOS 11.13.1, using

brew install python
brew install python-tk

I can now select TkAgg in matplotlib, but when I use it in ipython I get an error message:

%pylab
matplotlib.use('tkagg')
plot([0,1])

results in

2021-05-07 21:51:02.954 Python[10773:71016] -[NSApplication macOSVersion]: unrecognized selector sent to instance 0x11779f8c0
2021-05-07 21:51:02.956 Python[10773:71016] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[NSApplication macOSVersion]: unrecognized selector sent to instance 0x11779f8c0'
*** First throw call stack:
(
  0   CoreFoundation                 0x00000001a0d97db8 __exceptionPreprocess + 240
  1   libobjc.A.dylib                0x00000001a0ac10a8 objc_exception_throw + 60
  2   CoreFoundation                 0x00000001a0e28ba0 -[NSObject(NSObject) __retain_OA] + 0
  3   CoreFoundation                 0x00000001a0cf91e4 ___forwarding___ + 1444
  4   CoreFoundation                 0x00000001a0cf8b80 _CF_forwarding_prep_0 + 96
  5   libtk8.6.dylib                 0x000000012754a844 GetRGBA + 308
  6   libtk8.6.dylib                 0x000000012754a208 SetCGColorComponents + 132
  7   libtk8.6.dylib                 0x000000012754a65c TkpGetColor + 572
  8   libtk8.6.dylib                 0x00000001274ac714 Tk_GetColor + 220
  9   libtk8.6.dylib                 0x000000012749fea0 Tk_Get3DBorder + 204
  10  libtk8.6.dylib                 0x000000012749fcac Tk_Alloc3DBorderFromObj + 144
  11  libtk8.6.dylib                 0x00000001274adadc DoObjConfig + 840
  12  libtk8.6.dylib                 0x00000001274ad690 Tk_InitOptions + 348
  13  libtk8.6.dylib                 0x00000001274ad58c Tk_InitOptions + 88
  14  libtk8.6.dylib                 0x00000001274d4cb4 CreateFrame + 1448
  15  libtk8.6.dylib                 0x00000001274d4fac TkListCreateFrame + 156
  16  libtk8.6.dylib                 0x00000001274cde80 Initialize + 1848
  17  _tkinter.cpython-39-darwin.so  0x000000012059a31c Tcl_AppInit + 80
  18  _tkinter.cpython-39-darwin.so  0x000000012059487c Tkapp_New + 592
  19  _tkinter.cpython-39-darwin.so  0x000000012059410c _tkinter_create + 580
  20  Python                         0x00000001007150c4 cfunction_vectorcall_FASTCALL + 88
  21  Python                         0x00000001007bac4c call_function + 128
  22  Python                         0x00000001007b8640 _PyEval_EvalFrameDefault + 39844
  23  Python                         0x00000001007ada9c _PyEval_EvalCode + 444
  24  Python                         0x00000001006c62b4 _PyFunction_Vectorcall + 364
  25  Python                         0x00000001006c58ac _PyObject_FastCallDictTstate + 208
  26  Python                         0x0000000100739bf4 slot_tp_init + 188
  27  Python                         0x000000010073f850 type_call + 300
  28  Python                         0x00000001006c5590 _PyObject_MakeTpCall + 132
  29  Python                         0x00000001007bacd8 call_function + 268
  30  Python                         0x00000001007b86e8 _PyEval_EvalFrameDefault + 40012
  31  Python                         0x00000001006c61fc _PyFunction_Vectorcall + 180
  32  Python                         0x00000001006c8c98 method_vectorcall + 124
  33  Python                         0x00000001007bac4c call_function + 128
  34  Python                         0x00000001007b8640 _PyEval_EvalFrameDefault + 39844
  35  Python                         0x00000001007ada9c _PyEval_EvalCode + 444
  36  Python                         0x00000001006c62b4 _PyFunction_Vectorcall + 364
  37  Python                         0x00000001006c8c98 method_vectorcall + 124
  38  Python                         0x00000001006c5e40 PyVectorcall_Call + 184
  39  Python                         0x00000001007b880c _PyEval_EvalFrameDefault + 40304
  40  Python                         0x00000001007ada9c _PyEval_EvalCode + 444
  41  Python                         0x00000001006c62b4 _PyFunction_Vectorcall + 364
  42  Python                         0x00000001006c5e40 PyVectorcall_Call + 184
  43  Python                         0x00000001007b880c _PyEval_EvalFrameDefault + 40304
  44  Python                         0x00000001007ada9c _PyEval_EvalCode + 444
  45  Python                         0x00000001006c62b4 _PyFunction_Vectorcall + 364
  46  Python                         0x00000001007bac4c call_function + 128
  47  Python                         0x00000001007b8664 _PyEval_EvalFrameDefault + 39880
  48  Python                         0x00000001006c61fc _PyFunction_Vectorcall + 180
  49  Python                         0x00000001007bac4c call_function + 128
  50  Python                         0x00000001007b8664 _PyEval_EvalFrameDefault + 39880
  51  Python                         0x00000001007ada9c _PyEval_EvalCode + 444
  52  Python                         0x00000001006c62b4 _PyFunction_Vectorcall + 364
  53  Python                         0x00000001007bac4c call_function + 128
  54  Python                         0x00000001007b8664 _PyEval_EvalFrameDefault + 39880
  55  Python                         0x00000001007ada9c _PyEval_EvalCode + 444
  56  Python                         0x00000001006c62b4 _PyFunction_Vectorcall + 364
  57  Python                         0x00000001007bac4c call_function + 128
  58  Python                         0x00000001007b8664 _PyEval_EvalFrameDefault + 39880
  59  Python                         0x00000001007ada9c _PyEval_EvalCode + 444
  60  Python                         0x00000001007a86a0 builtin_exec + 356
  61  Python                         0x00000001007150c4 cfunction_vectorcall_FASTCALL + 88
  62  Python                         0x00000001007bac4c call_function + 128
  63  Python                         0x00000001007b8664 _PyEval_EvalFrameDefault + 39880
  64  Python                         0x00000001006da678 gen_send_ex + 192
  65  Python                         0x00000001007b35b4 _PyEval_EvalFrameDefault + 19224
  66  Python                         0x00000001006da678 gen_send_ex + 192
  67  Python                         0x00000001007b35b4 _PyEval_EvalFrameDefault + 19224
  68  Python                         0x00000001006da678 gen_send_ex + 192
  69  Python                         0x00000001006d1cb0 method_vectorcall_O + 108
  70  Python                         0x00000001007bac4c call_function + 128
  71  Python                         0x00000001007b85c4 _PyEval_EvalFrameDefault + 39720
  72  Python                         0x00000001006c61fc _PyFunction_Vectorcall + 180
  73  Python                         0x00000001007bac4c call_function + 128
  74  Python                         0x00000001007b8664 _PyEval_EvalFrameDefault + 39880
  75  Python                         0x00000001006c61fc _PyFunction_Vectorcall + 180
  76  Python                         0x00000001007bac4c call_function + 128
  77  Python                         0x00000001007b85c4 _PyEval_EvalFrameDefault + 39720
  78  Python                         0x00000001007ada9c _PyEval_EvalCode + 444
  79  Python                         0x00000001006c62b4 _PyFunction_Vectorcall + 364
  80  Python                         0x00000001006c8c98 method_vectorcall + 124
  81  Python                         0x00000001007bac4c call_function + 128
  82  Python                         0x00000001007b86e8 _PyEval_EvalFrameDefault + 40012
  83  Python                         0x00000001007ada9c _PyEval_EvalCode + 444
  84  Python                         0x00000001006c62b4 _PyFunction_Vectorcall + 364
  85  Python                         0x00000001007bac4c call_function + 128
  86  Python                         0x00000001007b85c4 _PyEval_EvalFrameDefault + 39720
  87  Python                         0x00000001007ada9c _PyEval_EvalCode + 444
  88  Python                         0x00000001006c62b4 _PyFunction_Vectorcall + 364
  89  Python                         0x00000001007bac4c call_function + 128
  90  Python                         0x00000001007b85c4 _PyEval_EvalFrameDefault + 39720
  91  Python                         0x00000001006c61fc _PyFunction_Vectorcall + 180
  92  Python                         0x00000001007bac4c call_function + 128
  93  Python                         0x00000001007b85c4 _PyEval_EvalFrameDefault + 39720
  94  Python                         0x00000001007ada9c _PyEval_EvalCode + 444
  95  Python                         0x00000001006c62b4 _PyFunction_Vectorcall + 364
  96  Python                         0x00000001006c8c98 method_vectorcall + 124
  97  Python                         0x00000001006c5e40 PyVectorcall_Call + 184
  98  Python                         0x00000001007b880c _PyEval_EvalFrameDefault + 40304
  99  Python                         0x00000001007ada9c _PyEval_EvalCode + 444
  100 Python                         0x00000001006c62b4 _PyFunction_Vectorcall + 364
  101 Python                         0x00000001007bac4c call_function + 128
  102 Python                         0x00000001007b8664 _PyEval_EvalFrameDefault + 39880
  103 Python                         0x00000001007ada9c _PyEval_EvalCode + 444
  104 Python                         0x0000000100805498 run_eval_code_obj + 136
  105 Python                         0x00000001008053ac run_mod + 112
  106 Python                         0x0000000100802be8 pyrun_file + 168
  107 Python                         0x000000010080250c pyrun_simple_file + 276
  108 Python                         0x00000001008023b8 PyRun_SimpleFileExFlags + 80
  109 Python                         0x0000000100822560 pymain_run_file + 320
  110 Python                         0x0000000100821b2c pymain_run_python + 412
  111 Python                         0x000000010082194c Py_RunMain + 24
  112 Python                         0x0000000100822f50 pymain_main + 36
  113 Python                         0x00000001008231c8 Py_BytesMain + 40
  114 libdyld.dylib                  0x00000001a0c38420 start + 4
)
libc++abi: terminating with uncaught exception of type NSException
Abort trap: 6

A: This is what worked for me with macOS Monterey:

brew install [email protected]

or

brew install [email protected]

depending on which Python version you're running.

A: It doesn't work in any OS whatsoever that doesn't have the TCL Toolkit already installed. While it's either already installed in many Linux distributions and/or bundled with the Python bundles downloaded from python.org for Windows and Linux - and as a consequence it is generally, and wrongly, assumed to be part of Python - that's not the case for macOS. There are official reasons for this described in the appropriate document:

  If you are using macOS 12 Monterey or later, you may see problems with file open and save dialogs when using IDLE or other tkinter-based applications. The most recent versions of python.org installers (for 3.10.0 and 3.9.8) have patched versions of Tk to avoid these problems. They should be fixed in an upcoming Tk 8.6.12 release.

  If you are using a Python from any current python.org Python installer for macOS (3.10.0+ or 3.9.0+), no further action is needed to use IDLE or tkinter. A built-in version of Tcl/Tk 8.6 will be used.

  If you are using macOS 10.6 or later, the Apple-supplied Tcl/Tk 8.5 has serious bugs that can cause application crashes. If you wish to use IDLE or Tkinter, do not use the Apple-supplied Pythons. Instead, install and use a newer version of Python from python.org or a third-party distributor that supplies or links with a newer version of Tcl/Tk.

  Python's integrated development environment, IDLE, and the tkinter GUI toolkit it uses, depend on the Tk GUI toolkit which is not part of Python itself. For best results, it is important that the proper release of Tcl/Tk is installed on your machine. For recent Python installers for macOS downloadable from this website, here is a summary of current recommendations followed by more detailed information.

It has been mentioned already, but the most popular way to do it is:

$ brew install python-tk

It will work because the python-tk formula depends on the other two: python and tcl-tk (hence you don't need to additionally do brew install python).

If you had already installed python with homebrew

$ brew install python

you can have tkinter with

$ brew install tcl-tk
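Whichever of the installs above is used, a quick smoke test confirms that _tkinter is present and which Tcl/Tk it is linked against; this is a minimal sketch using only the standard library, with nothing Homebrew-specific assumed:

import tkinter

# 8.6 or newer means a modern Tcl/Tk (e.g. the brewed one) is linked,
# rather than Apple's buggy 8.5.
print(tkinter.TkVersion)

root = tkinter.Tk()  # fails at import time, or crashes here, if _tkinter is missing or mislinked
tkinter.Label(root, text="Tk works").pack()
root.after(2000, root.destroy)  # close the test window automatically after 2 seconds
root.mainloop()

Running python3 -m tkinter from a terminal performs a similar built-in check, opening a small window that reports the Tcl/Tk version.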
Q: Dynamically Edit PDF File

I have a PDF template containing some text. The PDF file has a name repeated many times. I want to write code that takes the {name} as input, dynamically changes every appearance of the name in the PDF to the value I have entered, and then outputs the file after the changes. I have tried to build a PDF from the beginning with Python, but I couldn't achieve the alignment I want.

A: PyFPDF has a template designer for alignment etc.; however, it is not the same as Acrobat or other FDF editors, where a field can be copy-pasted multiple times as numbered increments. It uses a different CSV methodology, where you would need to add each repeated name as a line entry in the data file.

For the tutorial sample see https://pyfpdf.readthedocs.io/en/latest/Templates/index.html

However, there is the newer version 2, which uses much the same features, so it is unknown if the designer is fully compatible, as it was dropped from that forked version. Perhaps compare the output from V1 with the inputs for V2.

More relevantly, it has a newer methodology that may do what you require by using a more modular flextemplate.
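For illustration, a minimal sketch of the flextemplate idea with fpdf2 is shown below. The element names, coordinates, fonts, and file name are all assumptions for the example, not values taken from the question's template, so treat this as a starting point rather than a drop-in implementation:

from fpdf import FPDF, FlexTemplate

pdf = FPDF()
pdf.add_page()

# Each dict places one text element on the page; coordinates are in mm
# and purely illustrative. Every element that should repeat the name
# gets its own entry here.
elements = [
    {"name": "name_header", "type": "T", "x1": 20, "y1": 20, "x2": 120, "y2": 28,
     "font": "helvetica", "size": 16, "text": ""},
    {"name": "name_body", "type": "T", "x1": 20, "y1": 40, "x2": 120, "y2": 48,
     "font": "helvetica", "size": 11, "text": ""},
]
template = FlexTemplate(pdf, elements=elements)

name = "Jane Doe"  # the {name} input from the question
template["name_header"] = name
template["name_body"] = f"Prepared for {name}"

template.render()
pdf.output("output.pdf")

Because every occurrence of the name is a named element filled from the same input variable, changing the input regenerates the whole document consistently, which sidesteps the need to edit an existing PDF in place.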
Q: Is there any Gson similar libraries for Python

I am new to Python. I was trying to create JSON responses for my Android app, and I was wondering if there is any library similar to GSON for Python. At http://nullege.com/codes/search/com.google.gson.Gson I saw the Gson usage. Can anyone please tell me if there is a GSON library for Python, or any other similar library? Also, if there is, please guide me on integrating it into the code.

A: You can use Pykson, a JSON serializer and deserializer for Python which is somewhat like Gson. It supports lists of objects and serialization names. Simply define your object model as a JsonObject, and use Pykson to convert it back and forth to JSON.

class Student(JsonObject):
    first_name = StringField(serialized_name="fn")
    last_name = StringField(serialized_name="ln")
    age = IntegerField(serialized_name="a")

json_text = '{"fn":"John", "ln":"Smith", "a": 25}'
student = Pykson.from_json(json_text, Student)

student_json = Pykson.to_json(student)
assert (json_text == student_json)

A: You can use the Jsonic library. Jsonic is a lightweight utility for serializing/deserializing Python objects to/from JSON.

Example:

from jsonic import serialize, deserialize

class User(Serializable):
    def __init__(self, user_id: str, birth_time: datetime):
        super().__init__()
        self.user_id = user_id
        self.birth_time = birth_time

user = User('id1', datetime(2020, 10, 11))
obj = serialize(user)  # {'user_id': 'id1', 'birth_time': {'datetime': '2020-10-11 00:00:00', '_serialized_type': 'datetime'}, '_serialized_type': 'User'}
new_user: User = deserialize(obj)  # new_user is a new instance of user with same attributes

Jsonic has some nifty features:

- You can serialize objects of types that do not extend Serializable. This can come in handy when you need to serialize objects of third-party library classes.
- Support for custom serializers and deserializers.
- Serializing into a JSON string or a Python dict.
- Transient class attributes.
- Supports both serializing private fields and leaving them out of the serialization process.

Full disclosure: I'm the creator of Jsonic.

A: You can use BSON: https://pymongo.readthedocs.io/en/stable/api/bson/index.html

Think of BSON as "binary JSON", meaning both:

- Its output is not simple ASCII (as JSON's is).
- It has support for arbitrary byte strings (aka "Binary" is a native datatype).

In addition, it natively supports datetime objects and ObjectId objects.

BSON is the "native" object marshaling technique of MongoDB, so it is an object type that you get access to with the "pymongo" library. You can load pymongo and not use it for MongoDB, and just use the BSON portion.

We use BSON to marshal long arrays of floating point numbers, for which JSON is an awful choice. This is useful over raw TCP sockets, or other transports that send "byte blobs" as their payload, such as RabbitMQ messages.
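To make the BSON suggestion concrete, here is a minimal round-trip sketch. It assumes pymongo is installed (its bson package ships with the pymongo distribution), and the document contents are made up for the example:

import bson  # provided by the pymongo distribution

doc = {
    "name": "sensor-7",             # ordinary JSON-compatible field
    "samples": [1.5, 2.25, 3.125],  # floats marshal compactly in BSON
    "raw": b"\x00\x01\x02",         # byte strings are a native BSON type, unlike JSON
}

payload = bson.encode(doc)  # bytes, ready for a socket or a message queue
restored = bson.decode(payload)
assert restored["samples"] == doc["samples"]

The byte-string field is the part plain JSON cannot represent without base64 overhead, which is the main reason to reach for BSON in the scenarios the answer describes.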
Q: How to calculate distance from a player to a dynamic collision point

I'm trying to create sensors for a car to keep track of the distances from the car to the borders of the track. My goal is to have 5 sensors (see image below) and use them to train a machine learning algorithm. But I can't figure out a way to calculate these distances. For now, I just need a sample of code and a logical explanation of how to implement this with PyGame. But a mathematical and geometrical explanation would be really nice as well, for further reading.

I'm using this code from a YouTuber tutorial series. My biggest issue is how to get the points in blue (last picture). I need them to create the red lines from the car to the points and to calculate the length of these lines. These points take the car's position and rotation into account, and they have a specific angle at which they leave the car. I've managed to create the lines, but I could not get the point where each line collides with the track.

What I want to accomplish: I've tried different approaches to this problem, but for now my biggest problem is how to get the position of the blue dots.

--- Edit from the feedback ---

I added a new paragraph to better explain the problem. This way I hope it is clearer why this problem is different from those said to be related to it. In the other problem we have the desired final position (mouse or enemy); in this one we have to figure out which point we are going to use to create the line, and this is my issue.

My GitHub repo of the project: https://github.com/pedromello/ml-pygame/blob/main/main.py

The part of the code where I'm trying to implement this:

class AbstractCar:
    def __init__(self, max_vel, rotation_vel):
        self.img = self.IMG
        self.max_vel = max_vel
        self.vel = 0
        self.rotation_vel = rotation_vel
        self.angle = 0
        self.x, self.y = self.START_POS
        self.acceleration = 0.1

    def rotate(self, left=False, right=False):
        if left:
            self.angle += self.rotation_vel
        elif right:
            self.angle -= self.rotation_vel

    def draw(self, win):
        blit_rotate_center(win, self.img, (self.x, self.y), self.angle)

    def move_forward(self):
        self.vel = min(self.vel + self.acceleration, self.max_vel)
        self.move()

    def move_backward(self):
        self.vel = max(self.vel - self.acceleration, -self.max_vel/2)
        self.move()

    def move(self):
        radians = math.radians(self.angle)
        vertical = math.cos(radians) * self.vel
        horizontal = math.sin(radians) * self.vel

        self.y -= vertical
        self.x -= horizontal

    def collide(self, mask, x=0, y=0):
        car_mask = pygame.mask.from_surface(self.img)
        offset = (int(self.x - x), int(self.y - y))
        poi = mask.overlap(car_mask, offset)
        return poi

    def reset(self):
        self.x, self.y = self.START_POS
        self.angle = 0
        self.vel = 0


class PlayerCar(AbstractCar):
    IMG = RED_CAR
    START_POS = (180, 200)

    def reduce_speed(self):
        self.vel = max(self.vel - self.acceleration / 2, 0)
        self.move()

    def bounce(self):
        self.vel = -self.vel
        self.move()

    def drawSensors(self):
        radians = math.radians(self.angle)
        vertical = -math.cos(radians)
        horizontal = math.sin(radians)

        car_center = pygame.math.Vector2(self.x + CAR_WIDTH/2, self.y + CAR_HEIGHT/2)
        pivot_sensor = pygame.math.Vector2(car_center.x + horizontal * -100, car_center.y - vertical * -100)

        #sensor1 = Vector(30, 0).rotate(self.angle)  #+ self.pos  # update sensor 1 position
        #sensor2 = Vector(30, 0).rotate((self.angle+30)%360)  #+ self.pos  # update sensor 2 position
        #sensor3 = Vector(30, 0).rotate((self.angle-30)%360)  #+ self.pos  # update sensor 3 position

        # rotate pivot sensor around car center
        sensor_2 = pivot_sensor.rotate((self.angle+30) % 360)

        # Sensor 1
        pygame.draw.line(WIN, (255, 0, 0), car_center, pivot_sensor, 2)

        # Sensor 2
        pygame.draw.line(WIN, (255, 0, 0), car_center, sensor_2, 2)

        # Sensor 3
        #pygame.draw.line(WIN, (255, 0, 0), (self.x, self.y), (self.x + horizontal * 100, self.y - vertical * 100), 2)

A: Thank you for the comments. I solved my problem using the idea of firing sensors, so I can get the point on the wall when the "bullet" hits it. As we can see, when the bullet hits the wall we can create a line that connects that point to the car.

This is not the best solution, as it takes time for the bullet to hit the wall, and in the meantime the car is "blind". As Rabbid76 commented, using raycasting may be the solution I was looking for.

Code for reference:

Sensor Bullet class:

class SensorBullet:
    def __init__(self, car, base_angle, vel, color):
        self.x = car.x + CAR_WIDTH/2
        self.y = car.y + CAR_HEIGHT/2
        self.angle = car.angle
        self.base_angle = base_angle
        self.vel = vel
        self.color = color
        self.img = pygame.Surface((4, 4))
        self.fired = False
        self.hit = False
        self.last_poi = None

    def draw(self, win):
        pygame.draw.circle(win, self.color, (self.x, self.y), 2)

    def fire(self, car):
        self.angle = car.angle + self.base_angle
        self.x = car.x + CAR_WIDTH/2
        self.y = car.y + CAR_HEIGHT/2
        self.fired = True
        self.hit = False

    def move(self):
        if self.fired:
            radians = math.radians(self.angle)
            vertical = math.cos(radians) * self.vel
            horizontal = math.sin(radians) * self.vel

            self.y -= vertical
            self.x -= horizontal

    def collide(self, x=0, y=0):
        bullet_mask = pygame.mask.from_surface(self.img)
        offset = (int(self.x - x), int(self.y - y))
        poi = TRACK_BORDER_MASK.overlap(bullet_mask, offset)
        if poi:
            self.fired = False
            self.hit = True
            self.last_poi = poi
        return poi

    def draw_line(self, win, car):
        if self.hit:
            pygame.draw.line(win, self.color, (car.x + CAR_WIDTH/2, car.y + CAR_HEIGHT/2), (self.x, self.y), 1)
            pygame.display.update()

    def get_distance_from_poi(self, car):
        if self.last_poi is None:
            return -1
        return math.sqrt((car.x - self.last_poi[0])**2 + (car.y - self.last_poi[1])**2)

Methods the car must perform to use the sensors:

# Inside the car's __init__ method
self.sensors = [SensorBullet(self, 25, 12, (100, 0, 255)),
                SensorBullet(self, 10, 12, (200, 0, 255)),
                SensorBullet(self, 0, 12, (0, 255, 0)),
                SensorBullet(self, -10, 12, (0, 0, 255)),
                SensorBullet(self, -25, 12, (0, 0, 255))]

# ------

# Car's methods
def fireSensors(self):
    for bullet in self.sensors:
        bullet.fire(self)

def sensorControl(self):
    #print(contains(self.sensors, lambda x: x.hit))
    for bullet in self.sensors:
        if not bullet.fired:
            bullet.fire(self)

    for bullet in self.sensors:
        bullet.move()

def get_distance_array(self):
    return [bullet.get_distance_from_poi(self) for bullet in self.sensors]
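Since raycasting was mentioned as the better approach, here is a minimal sketch of a mask-based ray march that removes the "blind" delay: instead of waiting for a bullet to travel, it samples points along the sensor direction immediately, every frame. It assumes the same math/pygame imports, TRACK_BORDER_MASK, and angle convention as the code above, and the step size and range are illustrative:

import math

def cast_ray(mask, origin, angle_deg, max_dist=300, step=2):
    # March outward from the origin until the border mask reports a set pixel.
    # Returns (point, distance), or (None, max_dist) if nothing is hit in range.
    radians = math.radians(angle_deg)
    direction = (-math.sin(radians), -math.cos(radians))  # same convention as move() above
    width, height = mask.get_size()
    for dist in range(0, max_dist, step):
        x = int(origin[0] + direction[0] * dist)
        y = int(origin[1] + direction[1] * dist)
        if not (0 <= x < width and 0 <= y < height):
            break  # ray left the playing field
        if mask.get_at((x, y)):
            return (x, y), dist
    return None, max_dist

# Hypothetical usage for the five sensors, reusing the bullet base angles:
# distances = [cast_ray(TRACK_BORDER_MASK, car_center, car.angle + a)[1]
#              for a in (25, 10, 0, -10, -25)]

The hit point returned is exactly the blue dot the question asks for, and the distance is the red line's length; a smaller step trades speed for precision.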
How to calculate distance from a player to a dynamic collision point
I'm trying to create sensors for a car to keep track of the distances from the car to the borders of the track. My goal is to have 5 sensors (see image below) and use them to train a machine learning algorithm. But I can't figure out a way to calculate these distances. For now, I just need a sample of code and a logical explanation of how to implement this with PyGame. But a mathematical and geometrical explanation would be really nice as well for further reading. I'm using this code from a YouTuber tutorial series. My biggest issue is how to get the points in blue. (last picture) I need them to create the red lines from the car to the points and to calculate the length of these lines. These points are taking the car's position and rotation into account and they have a specific angle at which they get out of the car. I've managed to create the lines, but could not get the point the line would collide with the track. What I want to accomplish: I've tried different approaches to this problem, but for now, my biggest problem is how to get the position of the blue dots: --- Edit from the feedback ------ I added a new paragraph to better explain the problem. This way I hope it is clearer why this problem is different from those said to be related to it. The other problem we have the desired final position (mouse or enemy) in this one we have to figure out which point is the one we are going to use to create the line, and this is my issue. My GitHub repo of the project https://github.com/pedromello/ml-pygame/blob/main/main.py The part of the code where I'm trying to implement this: class AbstractCar: def __init__(self, max_vel, rotation_vel): self.img = self.IMG self.max_vel = max_vel self.vel = 0 self.rotation_vel = rotation_vel self.angle = 0 self.x, self.y = self.START_POS self.acceleration = 0.1 def rotate(self, left=False, right=False): if left: self.angle += self.rotation_vel elif right: self.angle -= self.rotation_vel def draw(self, win): blit_rotate_center(win, self.img, (self.x, self.y), self.angle) def move_forward(self): self.vel = min(self.vel + self.acceleration, self.max_vel) self.move() def move_backward(self): self.vel = max(self.vel - self.acceleration, -self.max_vel/2) self.move() def move(self): radians = math.radians(self.angle) vertical = math.cos(radians) * self.vel horizontal = math.sin(radians) * self.vel self.y -= vertical self.x -= horizontal def collide(self, mask, x=0, y=0): car_mask = pygame.mask.from_surface(self.img) offset = (int(self.x - x), int(self.y - y)) poi = mask.overlap(car_mask, offset) return poi def reset(self): self.x, self.y = self.START_POS self.angle = 0 self.vel = 0 class PlayerCar(AbstractCar): IMG = RED_CAR START_POS = (180, 200) def reduce_speed(self): self.vel = max(self.vel - self.acceleration / 2, 0) self.move() def bounce(self): self.vel = -self.vel self.move() def drawSensors(self): radians = math.radians(self.angle) vertical = -math.cos(radians) horizontal = math.sin(radians) car_center = pygame.math.Vector2(self.x + CAR_WIDTH/2, self.y + CAR_HEIGHT/2) pivot_sensor = pygame.math.Vector2(car_center.x + horizontal * -100, car_center.y - vertical * -100) #sensor1 = Vector(30, 0).rotate(self.angle) #+ self.pos # atualiza a posição do sensor 1 #sensor2 = Vector(30, 0).rotate((self.angle+30)%360) #+ self.pos # atualiza a posição do sensor 2 #sensor3 = Vector(30, 0).rotate((self.angle-30)%360) #+ self.pos # atualiza a posição do sensor 3 #rotate pivot sensor around car center sensor_2 = pivot_sensor.rotate((self.angle+30)%360) # Sensor 1 
pygame.draw.line(WIN, (255, 0, 0), car_center, pivot_sensor, 2) # Sensor 2 pygame.draw.line(WIN, (255, 0, 0), car_center, sensor_2, 2) # Sensor 3 #pygame.draw.line(WIN, (255, 0, 0), (self.x, self.y), (self.x + horizontal * 100, self.y - vertical * 100), 2)
[ "Thank you for the comments, I solved my problem using the idea of firing sensors so I can get the point on the wall when the \"bullet\" hits it.\n\nAs we can see when the bullet hits the wall we can create a line that connects the point to the car. This is not the best solution, as it takes time for the bullet to hit the wall and in the meantime, the car is \"blind\".\nAs Rabbid76 commented, using raycasting may be the solution I was looking for.\nCode for reference:\nSensor Bullet class\nclass SensorBullet:\n def __init__(self, car, base_angle, vel, color):\n self.x = car.x + CAR_WIDTH/2\n self.y = car.y + CAR_HEIGHT/2\n self.angle = car.angle\n self.base_angle = base_angle\n self.vel = vel\n self.color = color\n self.img = pygame.Surface((4, 4))\n self.fired = False\n self.hit = False\n self.last_poi = None\n\n def draw(self, win):\n pygame.draw.circle(win, self.color, (self.x, self.y), 2)\n\n def fire(self, car):\n self.angle = car.angle + self.base_angle\n self.x = car.x + CAR_WIDTH/2\n self.y = car.y + CAR_HEIGHT/2\n self.fired = True\n self.hit = False\n\n def move(self):\n if(self.fired):\n radians = math.radians(self.angle)\n vertical = math.cos(radians) * self.vel\n horizontal = math.sin(radians) * self.vel\n\n self.y -= vertical\n self.x -= horizontal\n\n def collide(self, x=0, y=0):\n bullet_mask = pygame.mask.from_surface(self.img)\n offset = (int(self.x - x), int(self.y - y))\n poi = TRACK_BORDER_MASK.overlap(bullet_mask, offset)\n if poi:\n self.fired = False\n self.hit = True\n self.last_poi = poi\n return poi\n\n def draw_line(self, win, car):\n if self.hit:\n pygame.draw.line(win, self.color, (car.x + CAR_WIDTH/2, car.y + CAR_HEIGHT/2), (self.x, self.y), 1)\n pygame.display.update()\n\n def get_distance_from_poi(self, car):\n if self.last_poi is None:\n return -1\n return math.sqrt((car.x - self.last_poi[0])**2 + (car.y - self.last_poi[1])**2)\n\nMethods the car must perform to use the sensor\n# Inside car's __init__ method\nself.sensors = [SensorBullet(self, 25, 12, (100, 0, 255)), SensorBullet(self, 10, 12, (200, 0, 255)), SensorBullet(self, 0, 12, (0, 255, 0)), SensorBullet(self, -10, 12, (0, 0, 255)), SensorBullet(self, -25, 12, (0, 0, 255))]\n# ------\n\n# Cars methods\ndef fireSensors(self): \n for bullet in self.sensors:\n bullet.fire(self)\n\ndef sensorControl(self):\n #print(contains(self.sensors, lambda x: x.hit))\n\n for bullet in self.sensors:\n if not bullet.fired:\n bullet.fire(self)\n\n for bullet in self.sensors:\n bullet.move()\n\ndef get_distance_array(self):\n return [bullet.get_distance_from_poi(self) for bullet in self.sensors]\n\n\n" ]
[ 0 ]
[]
[]
[ "euclidean_distance", "geometry", "math", "python", "raycasting" ]
stackoverflow_0074616569_euclidean_distance_geometry_math_python_raycasting.txt
Q: numpy.ndarray.data attribute buffer object I create different numpy arrays as follows: import numpy as np a = np.array([[1,2,3],[1,2,3]]) # 2d array of integers b = np.array([[1,2,3],[1,2,5.0]]) # 2d array of floats c = np.array([1,2,3,4,5,6,7,8,9]) # 1d array of integers d = np.array([10,20,30]) # different 1d array of integers # python buffer object pointing to the start of the arrays data. print(a.data) print(b.data) print(c.data) print(d.data) According to the numpy docs i expect to get a "python buffer object pointing to the start of the arrays data". Here is the official doc (from the numpy website): https://numpy.org/doc/stable/reference/generated/numpy.ndarray.data.html So i would expect a different memory address for each array. but i get this: <memory at 0x000001EAE8B7CAD0> <memory at 0x000001EAE8B7CAD0> <memory at 0x000001EAE98E8A00> <memory at 0x000001EAE98E8A00> The first two have the same memory buffer object pointer. And the second two have the same memory buffer object pointer. Is numpy inferring the memory buffer object pointer based on the dimension of the array or if not, what is the rule that generates similar pointer addresses ? A: They aren't sharing the same memory. .data creates a memoryview object every time the attribute is accessed. You can see from this session that it's a different address every time: >>> d.data <memory at 0x6ffff70bddc0> >>> d.data <memory at 0x6ffff70bdb80> >>> d.data <memory at 0x6ffff70bd1c0> In your case, the object is reclaimed immediately after print() and the next one created happens to have the same address. A: In an ipython session: In [38]: type(a.data) Out[38]: memoryview and the docs for that object: In [39]: a.data? Type: memoryview String form: <memory at 0x000002AE38BAF5F0> Length: 2 Docstring: Create a new memoryview object which references the given object. The print string of a memory view doesn't tell us anything about the databuffer address. It can be used to make a view of the the array: In [45]: aa = np.ndarray(a.shape, a.dtype, buffer=a.data) In [46]: aa Out[46]: array([[1, 2, 3], [1, 2, 3]]) In [47]: aa.base is a Out[47]: True The data of __array_interface__ is closer to being a numeric address of the underlying data buffer. I don't think it can be used in code, but I find it useful when checking whether an array is a view or copy: In [48]: a.__array_interface__ Out[48]: {'data': (2947257325392, False), 'strides': None, 'descr': [('', '<i4')], 'typestr': '<i4', 'shape': (2, 3), 'version': 3} In [49]: aa.__array_interface__['data'] Out[49]: (2947257325392, False)
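To make the distinction concrete, here is a small check one can run; the numeric buffer address is available through arr.ctypes.data, and np.shares_memory answers the sharing question directly:
import numpy as np

a = np.array([[1, 2, 3], [1, 2, 3]])
b = a[:1]        # a view into a's buffer
c = a.copy()     # a fresh, independent buffer

print(a.ctypes.data)                     # numeric address of a's data
print(b.__array_interface__['data'][0])  # same address: b shares a's buffer
print(c.ctypes.data)                     # different address: c owns its data
print(np.shares_memory(a, b), np.shares_memory(a, c))  # True False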
numpy.ndarray.data attribute buffer object
I create different numpy arrays as follows: import numpy as np a = np.array([[1,2,3],[1,2,3]]) # 2d array of integers b = np.array([[1,2,3],[1,2,5.0]]) # 2d array of floats c = np.array([1,2,3,4,5,6,7,8,9]) # 1d array of integers d = np.array([10,20,30]) # different 1d array of integers # python buffer object pointing to the start of the arrays data. print(a.data) print(b.data) print(c.data) print(d.data) According to the numpy docs i expect to get a "python buffer object pointing to the start of the arrays data". Here is the official doc (from the numpy website): https://numpy.org/doc/stable/reference/generated/numpy.ndarray.data.html So i would expect a different memory address for each array. but i get this: <memory at 0x000001EAE8B7CAD0> <memory at 0x000001EAE8B7CAD0> <memory at 0x000001EAE98E8A00> <memory at 0x000001EAE98E8A00> The first two have the same memory buffer object pointer. And the second two have the same memory buffer object pointer. Is numpy inferring the memory buffer object pointer based on the dimension of the array or if not, what is the rule that generates similar pointer addresses ?
[ "They aren't sharing the same memory. .data creates a memoryview object every time the attribute is accessed.\nYou can see from this session that it's a different address every time:\n>>> d.data\n<memory at 0x6ffff70bddc0>\n>>> d.data\n<memory at 0x6ffff70bdb80>\n>>> d.data\n<memory at 0x6ffff70bd1c0>\n\nIn your case, the object is reclaimed immediately after print() and the next one created happens to have the same address.\n", "In an ipython session:\nIn [38]: type(a.data)\nOut[38]: memoryview\n\nand the docs for that object:\nIn [39]: a.data?\nType: memoryview\nString form: <memory at 0x000002AE38BAF5F0>\nLength: 2\nDocstring: Create a new memoryview object which references the given object.\n\nThe print string of a memory view doesn't tell us anything about the databuffer address.\nIt can be used to make a view of the the array:\nIn [45]: aa = np.ndarray(a.shape, a.dtype, buffer=a.data) \nIn [46]: aa\nOut[46]: \narray([[1, 2, 3],\n [1, 2, 3]]) \nIn [47]: aa.base is a\nOut[47]: True\n\nThe data of __array_interface__ is closer to being a numeric address of the underlying data buffer. I don't think it can be used in code, but I find it useful when checking whether an array is a view or copy:\nIn [48]: a.__array_interface__\nOut[48]: \n{'data': (2947257325392, False),\n 'strides': None,\n 'descr': [('', '<i4')],\n 'typestr': '<i4',\n 'shape': (2, 3),\n 'version': 3}\nIn [49]: aa.__array_interface__['data']\nOut[49]: (2947257325392, False)\n\n" ]
[ 2, 1 ]
[]
[]
[ "numpy", "numpy_ndarray", "python" ]
stackoverflow_0074661127_numpy_numpy_ndarray_python.txt
Q: Run code for every subset starting by filtering data with df.loc I am trying to run some experiments with my Python code. The input of my code is based on a DataFrame. To filter my DataFrame I use df.loc. Before running my code I filter the DataFrame for the instance I want to run my code. I have the following list of instances:
instance = ['A', 'B', 'C', 'D']

(These instances are also contained in a column in my DataFrame named df[Instance]). When I want to run my code for instance 'A' only, I first filter my dataframe for instance 'A':
df = df.loc[(df['Instance'] == 'A')]

When I want to run my code for instance 'B'
df = df.loc[(df['Instance'] == 'B')]

When I want to run my code for instance 'A' and 'B' I do the following:
df = df.loc[(df['Instance'] == 'A') | (df['Instance'] == 'B')]

Now I want to run my code for all the subsets between 'A', 'B', 'C', 'D'. I can make subsets with the following function
from itertools import chain, combinations

def powerset(iterable):
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(1, len(s)+1))

subsets = list(powerset(instance))

Giving the following output
[('A',), ('B',), ('C',), ('D',), ('A', 'B'), ('A', 'C'), ('A', 'D'), ('B', 'C'), ('B', 'D'), ('C', 'D'), ('A', 'B', 'C'), ('A', 'B', 'D'), ('A', 'C', 'D'), ('B', 'C', 'D'), ('A', 'B', 'C', 'D')]

Now I want to run my code for all the subsets, starting with filtering the DataFrame for the items in a subset. At the moment, I filter my DataFrames manually. What I want to achieve is that my code runs for every subset. Now I filter every subset by hand using df.loc. Does anyone have a tip on how to do this automatically? Expecting: Iterate through all the subsets.
Run code for A (subset 1)
df = df.loc[(df['Instance'] == 'A')]

Run code for B (subset 2)
df = df.loc[(df['Instance'] == 'B')]

Run code for C (subset 3)
df = df.loc[(df['Instance'] == 'C')]

Run code for D (subset 4)
df = df.loc[(df['Instance'] == 'D')]

Run code for A, B (subset 5)
df = df.loc[(df['Instance'] == 'A') | (df['Instance'] == 'B')]

Etc.
A: I think you want to use pandas.Series.apply which
Invoke[s] function on values of Series.

It takes each value from the series, in your case df["Instance"], and passes it through a function. Your function only needs to check whether the instance is in the element of subsets you're currently working on:
for subset in subsets:
    selected_rows = df["Instance"].apply(lambda i: i in subset)
    # do things with selected rows
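As an alternative to apply, note that pandas.Series.isin accepts any iterable, including the tuples produced by powerset, so each subset can be turned into a boolean mask directly:
for subset in subsets:
    sub_df = df.loc[df['Instance'].isin(subset)]
    # run your experiment on sub_df here
    print(subset, len(sub_df))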
Run code for every subset starting by filtering data with df.loc
I am trying to run some experiments with my Python code. The input of my code is based on a DataFrame. To filter my DataFrame I use df.loc. Before running my code I filter the DataFrame for the instance I want to run my code. I have the following list of instances:
instance = ['A', 'B', 'C', 'D']

(These instances are also contained in a column in my DataFrame named df[Instance]). When I want to run my code for instance 'A' only, I first filter my dataframe for instance 'A':
df = df.loc[(df['Instance'] == 'A')]

When I want to run my code for instance 'B'
df = df.loc[(df['Instance'] == 'B')]

When I want to run my code for instance 'A' and 'B' I do the following:
df = df.loc[(df['Instance'] == 'A') | (df['Instance'] == 'B')]

Now I want to run my code for all the subsets between 'A', 'B', 'C', 'D'. I can make subsets with the following function
from itertools import chain, combinations

def powerset(iterable):
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(1, len(s)+1))

subsets = list(powerset(instance))

Giving the following output
[('A',), ('B',), ('C',), ('D',), ('A', 'B'), ('A', 'C'), ('A', 'D'), ('B', 'C'), ('B', 'D'), ('C', 'D'), ('A', 'B', 'C'), ('A', 'B', 'D'), ('A', 'C', 'D'), ('B', 'C', 'D'), ('A', 'B', 'C', 'D')]

Now I want to run my code for all the subsets, starting with filtering the DataFrame for the items in a subset. At the moment, I filter my DataFrames manually. What I want to achieve is that my code runs for every subset. Now I filter every subset by hand using df.loc. Does anyone have a tip on how to do this automatically? Expecting: Iterate through all the subsets.
Run code for A (subset 1)
df = df.loc[(df['Instance'] == 'A')]

Run code for B (subset 2)
df = df.loc[(df['Instance'] == 'B')]

Run code for C (subset 3)
df = df.loc[(df['Instance'] == 'C')]

Run code for D (subset 4)
df = df.loc[(df['Instance'] == 'D')]

Run code for A, B (subset 5)
df = df.loc[(df['Instance'] == 'A') | (df['Instance'] == 'B')]

Etc.
[ "I think you want to use pandas.Series.apply which\n\nInvoke[s] function on values of Series.\n\nIt takes each value from the series, in your case df[\"Instance\"] and passes it through a function. Your function only needs to check whether the instance is in the element of subsets you're currently working on:\nfor subset in subsets:\n selected_rows = df[\"Instance\"].apply(lambda i: i in subset)\n # do things with selected rows\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074661188_dataframe_pandas_python.txt
Q: Pandas groupby two columns get earliest date This is the dataset: ` data = {'id': ['1','1','1','1','2','2','2','2','2','3','3','3','3','3','3','3'], 'status': ['Active','Active','Active','Pending Action','Pending Action','Pending Action','Active','Pending Action','Active','Draft','Active','Draft','Draft','Draft','Active','Draft'], 'calc_date_id':['05/07/2022','07/06/2022','31/08/2021','01/07/2021','20/11/2022','25/10/2022','02/04/2022','28/02/2022','01/07/2021','23/06/2022','15/06/2022','07/04/2022','09/11/2022','18/08/2020','19/03/2020','17/01/202'] } df = pd.DataFrame(data) #to datetime df['calc_date_id'] = pd.to_datetime(df['calc_date_id']) ` How do I get the first date in the last time the status change by id? I tried sorting by date and groupby with id and status and keep="first" but I got: Groupbing by status Also tried df_mt_date.loc[df_mt_date.groupby(['id',' status'])['calc_date_id'].idxmin()] Instead of that I'd like to preserve the order by date obtaining only the first time where the id has changed status for the last time (not all of the history). This is the desired output I'm running out of ideas, I'll appreciate any suggestion Thank you A: Try: df["desired_output"] = df.groupby("id")["status"].transform( lambda x: df.loc[x.index, "calc_date_id"][(x != x.shift(-1)).idxmax()] ) print(df) Prints: id status calc_date_id desired_output 0 1 Active 2022-07-05 2021-08-31 1 1 Active 2022-06-07 2021-08-31 2 1 Active 2021-08-31 2021-08-31 3 1 Pending Action 2021-07-01 2021-08-31 4 2 Pending Action 2022-11-20 2022-10-25 5 2 Pending Action 2022-10-25 2022-10-25 6 2 Active 2022-04-02 2022-10-25 7 2 Pending Action 2022-02-28 2022-10-25 8 2 Active 2021-07-01 2022-10-25 9 3 Draft 2022-06-23 2022-06-23 10 3 Active 2022-06-15 2022-06-23 11 3 Draft 2022-04-07 2022-06-23 12 3 Draft 2022-11-09 2022-06-23 13 3 Draft 2020-08-18 2022-06-23 14 3 Active 2020-03-19 2022-06-23 15 3 Draft 2020-01-17 2022-06-23 A: From your desired output I see, that the group "boundaries" are points where particular value of status column occurs for the first time, regardless of id column. To indicate first occurrences of values in status column, run: wrk = df.groupby('status', group_keys=False).apply( lambda grp: grp.assign(isFirst=grp.index[0] == grp.index)) wrk.isFirst = wrk.isFirst.cumsum() To see the result, print wrk and look at isFirst column. Then, to generate the result, run: result = wrk.groupby('isFirst', group_keys=False).apply( lambda grp: grp.assign(desired_output=grp.calc_date_id.min()))\ .drop(columns='isFirst') Note the terminating drop to drop now unnecessary isFirst column. The result, for your data sample, is: id status calc_date_id desired_output 0 1 Active 2022-07-05 2021-08-31 1 1 Active 2022-06-07 2021-08-31 2 1 Active 2021-08-31 2021-08-31 3 1 Pending Action 2021-07-01 2021-07-01 4 2 Pending Action 2022-11-20 2021-07-01 5 2 Pending Action 2022-10-25 2021-07-01 6 2 Active 2022-04-02 2021-07-01 7 2 Pending Action 2022-02-28 2021-07-01 8 2 Active 2021-07-01 2021-07-01 9 3 Draft 2022-06-23 2020-03-19 10 3 Active 2022-06-15 2020-03-19 11 3 Draft 2022-04-07 2020-03-19 12 3 Draft 2022-11-09 2020-03-19 13 3 Draft 2020-08-18 2020-03-19 14 3 Active 2020-03-19 2020-03-19 15 3 Draft 2022-01-17 2020-03-19
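A side note on the idiom behind the first answer: comparing a column with a shifted copy of itself, then taking a cumulative sum, is the standard way to label consecutive runs of equal values. A tiny sketch on toy data:
import pandas as pd

s = pd.Series(['Active', 'Active', 'Pending Action', 'Active', 'Active'])
runs = s.ne(s.shift()).cumsum()         # new label wherever the value changes
print(runs.tolist())                    # [1, 1, 2, 3, 3]
print(s.groupby(runs).size().tolist())  # run lengths: [2, 1, 2]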
Pandas groupby two columns get earliest date
This is the dataset: ` data = {'id': ['1','1','1','1','2','2','2','2','2','3','3','3','3','3','3','3'], 'status': ['Active','Active','Active','Pending Action','Pending Action','Pending Action','Active','Pending Action','Active','Draft','Active','Draft','Draft','Draft','Active','Draft'], 'calc_date_id':['05/07/2022','07/06/2022','31/08/2021','01/07/2021','20/11/2022','25/10/2022','02/04/2022','28/02/2022','01/07/2021','23/06/2022','15/06/2022','07/04/2022','09/11/2022','18/08/2020','19/03/2020','17/01/202'] } df = pd.DataFrame(data) #to datetime df['calc_date_id'] = pd.to_datetime(df['calc_date_id']) ` How do I get the first date in the last time the status change by id? I tried sorting by date and groupby with id and status and keep="first" but I got: Groupbing by status Also tried df_mt_date.loc[df_mt_date.groupby(['id',' status'])['calc_date_id'].idxmin()] Instead of that I'd like to preserve the order by date obtaining only the first time where the id has changed status for the last time (not all of the history). This is the desired output I'm running out of ideas, I'll appreciate any suggestion Thank you
[ "Try:\ndf[\"desired_output\"] = df.groupby(\"id\")[\"status\"].transform(\n lambda x: df.loc[x.index, \"calc_date_id\"][(x != x.shift(-1)).idxmax()]\n)\nprint(df)\n\nPrints:\n id status calc_date_id desired_output\n0 1 Active 2022-07-05 2021-08-31\n1 1 Active 2022-06-07 2021-08-31\n2 1 Active 2021-08-31 2021-08-31\n3 1 Pending Action 2021-07-01 2021-08-31\n4 2 Pending Action 2022-11-20 2022-10-25\n5 2 Pending Action 2022-10-25 2022-10-25\n6 2 Active 2022-04-02 2022-10-25\n7 2 Pending Action 2022-02-28 2022-10-25\n8 2 Active 2021-07-01 2022-10-25\n9 3 Draft 2022-06-23 2022-06-23\n10 3 Active 2022-06-15 2022-06-23\n11 3 Draft 2022-04-07 2022-06-23\n12 3 Draft 2022-11-09 2022-06-23\n13 3 Draft 2020-08-18 2022-06-23\n14 3 Active 2020-03-19 2022-06-23\n15 3 Draft 2020-01-17 2022-06-23\n\n", "From your desired output I see, that the group \"boundaries\" are\npoints where particular value of status column occurs for the\nfirst time, regardless of id column.\nTo indicate first occurrences of values in status column, run:\nwrk = df.groupby('status', group_keys=False).apply(\n lambda grp: grp.assign(isFirst=grp.index[0] == grp.index))\nwrk.isFirst = wrk.isFirst.cumsum()\n\nTo see the result, print wrk and look at isFirst column.\nThen, to generate the result, run:\nresult = wrk.groupby('isFirst', group_keys=False).apply(\n lambda grp: grp.assign(desired_output=grp.calc_date_id.min()))\\\n .drop(columns='isFirst')\n\nNote the terminating drop to drop now unnecessary isFirst column.\nThe result, for your data sample, is:\n id status calc_date_id desired_output\n0 1 Active 2022-07-05 2021-08-31\n1 1 Active 2022-06-07 2021-08-31\n2 1 Active 2021-08-31 2021-08-31\n3 1 Pending Action 2021-07-01 2021-07-01\n4 2 Pending Action 2022-11-20 2021-07-01\n5 2 Pending Action 2022-10-25 2021-07-01\n6 2 Active 2022-04-02 2021-07-01\n7 2 Pending Action 2022-02-28 2021-07-01\n8 2 Active 2021-07-01 2021-07-01\n9 3 Draft 2022-06-23 2020-03-19\n10 3 Active 2022-06-15 2020-03-19\n11 3 Draft 2022-04-07 2020-03-19\n12 3 Draft 2022-11-09 2020-03-19\n13 3 Draft 2020-08-18 2020-03-19\n14 3 Active 2020-03-19 2020-03-19\n15 3 Draft 2022-01-17 2020-03-19\n\n" ]
[ 0, 0 ]
[]
[]
[ "dataframe", "group_by", "pandas", "python", "sorting" ]
stackoverflow_0074660690_dataframe_group_by_pandas_python_sorting.txt
Q: Newbie question about return keyword in Python functions I am currently working in codecademy on a Python course and while trying to define a function that takes in a list and returns a list with the length of that same list added to the list, I realized I kept getting "None" instead of a full list and was wondering why. I was able to figure out the correct solution but for my own education, I'm curious why my original code didn't work as intended.
#This is the first one I tried
def append_size(lst):
  return lst.append(len(lst))

#Uncomment the line below when your function is done
print(append_size([23, 42, 108]))
# returns None instead of [23, 42, 108]

#This is the correct function
def append_size(lst):
  lst.append(len(lst))
  return lst

A: lst.append always returns None. It modifies lst in place, so all you need to do is return lst itself.
def append_size(lst):
    lst.append(len(lst))
    return lst

This is a violation, though, of the usual convention (followed by list.append itself) that a function or method should either modify an argument in-place and return None or return a new value based on the unchanged argument.
There's no particular need to return lst, since presumably the caller already has a reference to the list, as they were able to pass it as an argument in the first place.
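If you prefer the convention described above and want the argument left untouched, a version that builds and returns a new list is just as short:
def append_size(lst):
    # Returns a new list; the argument is left unchanged.
    return lst + [len(lst)]

nums = [23, 42, 108]
print(append_size(nums))  # [23, 42, 108, 3]
print(nums)               # [23, 42, 108] -- original untouched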
Newbie question about return keyword in Python functions
I am currently working in codecademy on a Python course and while trying to define a function that takes in a list and returns a list with the length of that same list added to the list, I realized I kept getting "None" instead of a full list and was wondering why. I was able to figure out the correct solution but for my own education, I'm curious why my original code didn't work as intended.
#This is the first one I tried
def append_size(lst):
  return lst.append(len(lst))

#Uncomment the line below when your function is done
print(append_size([23, 42, 108]))
# returns None instead of [23, 42, 108]

#This is the correct function
def append_size(lst):
  lst.append(len(lst))
  return lst
[ "lst.append always returns None. It modifies lst in place, so all you need to do is return lst itself.\ndef append_size(lst):\n lst.append(len(lst))\n return lst\n\n\nThis is a violation, though, of the usual convention (followed by list.append itself) that a function or method should either modify an argument in-place and return None or return a new value based on the unchanged argument.\nThere's no particular need to return lst, since presumably the caller already has a reference to the list, as they were able to pass it as an argument in the first place.\n" ]
[ 0 ]
[]
[]
[ "function", "python", "return" ]
stackoverflow_0074661327_function_python_return.txt
Q: How to convert text into structured data, taking into account missing fields, in Python? First, apologies if this sounds too basic. I have the following semi-structured data in text format, I need to parse these into a structured format: example: Name Alex Address 14 high street London Color blue red Name Bob Color black **Note that Alex has two colors, while Bob does not have an address. ** I want something that looks like this: example output I think the right way is using regular expressions, but I'm struggling to split the text properly since some fields may be missing. What's a proper clean way to do this? text='Name\nAlex\n\nAddress\n14 high street\nLondon\n\nColor\nblue\nred\n\nName\nBob\nColor\nblack' profiles=re.split('(Name\n)', text, flags=re.IGNORECASE) for profile in profiles: #get name name=re.split('(Name\n)|(Address\n)|(Color\n)', profile.strip(), flags=re.IGNORECASE)[0] print(name) #get address #get color A: Try: s = """\ Name Alex Address 14 high street London Color blue red Name Bob Color black""" import pandas as pd from itertools import groupby colnames = ["Name", "Address", "Color"] col1, col2 = [], [] for k, g in groupby( (l for l in s.splitlines() if l.strip()), lambda l: l in colnames ): (col2, col1)[k].append(" ".join(g)) df = pd.DataFrame({"col1": col1, "col2": col2}) df = df.assign(col3=df.col1.eq("Name").cumsum()).pivot( index="col3", columns="col1", values="col2" ) df.index.name, df.columns.name = None, None df["Color"] = df["Color"].str.split() df = df.explode("Color").fillna("") print(df[colnames]) Prints: Name Address Color 1 Alex 14 high street London blue 1 Alex 14 high street London red 2 Bob black A: Here's a vanilla python way to tackle the problem (rather than using pandas) Load your data (you'll probably read the data from your file, but we'll use this placeholder instead) s = """ Name Alex Address 14 high street London Color blue red Name Bob Color black """ Split up each entry by lines with 'Name' entities = [e for e in s.split('Name')] produces [ '\n', '\nAlex\n\nAddress\n14 high street\nLondon\n\nColor\nblue\nred\n\n', '\n\nBob\nColor\nblack\n' ] Replace the newlines with spaces, then clean up duplicate spaces entities = [e.replace('\n',' ').replace(' ',' ') for e in entities] produces [ ' ', ' Alex Address 14 high street London Color blue red ', ' Bob Color black ' ] Split each entry by space and toss any empty list entries entities = [ [x for x in e.split(' ') if x != ''] for e in entities ] produces [ [], ['Alex', 'Address', '14', 'high', 'street', 'London', 'Color', 'blue', 'red'], ['Bob', 'Color', 'black'] ] Get rid of any empty entities entities = [e for e in entities if len(e) > 0] produces [ ['Alex', 'Address', '14', 'high', 'street', 'London', 'Color', 'blue', 'red'], ['Bob', 'Color', 'black'] ] Establish our tokens, then for the index of each token, capture the list elements that appear before the next token # We'll store our findings here. # We'll use the name, which is the first element of our 'entity' list, # as the key for this dict. 
properties = {}

# the field labels we scan for ('Name' is handled separately via entity[0])
tokens = ['Address', 'Color']

for entity in entities:
    # We'll figure out where each token shows up in our list
    token_indices = []
    for token in tokens:
        # we could use
        # token_indices.append(entity.index(token))
        # if we knew that each token would only show up once
        token_indices += [i for i,t in enumerate(entity) if t==token]

    # now we'll sort the list of indices so that we can be sure
    # we're dealing with them in order
    token_indices = sorted(token_indices)

    # since we haven't seen this person before, we'll establish a
    # dict for their properties.
    individual_properties = {}

    for k,_ in enumerate(token_indices):
        this_tkn_name = entity[token_indices[k]]
        this_tkn_idx = token_indices[k]

        # We'll iterate over each token's index
        if k+1 < len(token_indices):
            # this isn't the last token, so it is safe to look up the next index
            next_tkn_idx = token_indices[k+1]
            individual_properties[
                this_tkn_name
            ] = ' '.join(entity[this_tkn_idx+1:next_tkn_idx])
        else:
            # this is the last token
            individual_properties[
                this_tkn_name
            ] = ' '.join(entity[this_tkn_idx+1:])

    # the first element in the entity list is their name, so we can
    # find that with entity[0]
    properties[entity[0]] = individual_properties

produces
{
    'Alex': {
        'Address': '14 high street London',
        'Color': 'blue red'
    },
    'Bob': {
        'Color': 'black'
    }
}

NOTE: Depending on what you want to do with this, you may need further processing. Maybe you know the colors are a list, so you could use split(' ') to get a list instead of a single string.
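For comparison, the same parse can also be written as a small state machine; this sketch assumes the field names are known up front and that s is the sample string used in the answers above:
FIELDS = {"Name", "Address", "Color"}

records, current, field = [], None, None
for line in s.splitlines():
    line = line.strip()
    if not line:
        continue
    if line in FIELDS:
        if line == "Name":          # a new Name starts a new record
            current = {}
            records.append(current)
        field = line
    elif current is not None:
        current.setdefault(field, []).append(line)

print(records)
# [{'Name': ['Alex'], 'Address': ['14 high street', 'London'], 'Color': ['blue', 'red']},
#  {'Name': ['Bob'], 'Color': ['black']}]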
How to convert text into structured data, taking into account missing fields, in Python?
First, apologies if this sounds too basic. I have the following semi-structured data in text format, I need to parse these into a structured format: example: Name Alex Address 14 high street London Color blue red Name Bob Color black **Note that Alex has two colors, while Bob does not have an address. ** I want something that looks like this: example output I think the right way is using regular expressions, but I'm struggling to split the text properly since some fields may be missing. What's a proper clean way to do this? text='Name\nAlex\n\nAddress\n14 high street\nLondon\n\nColor\nblue\nred\n\nName\nBob\nColor\nblack' profiles=re.split('(Name\n)', text, flags=re.IGNORECASE) for profile in profiles: #get name name=re.split('(Name\n)|(Address\n)|(Color\n)', profile.strip(), flags=re.IGNORECASE)[0] print(name) #get address #get color
[ "Try:\ns = \"\"\"\\\nName\nAlex\n\nAddress\n14 high street\nLondon\n\nColor\nblue\nred\n\nName\n\nBob\nColor\nblack\"\"\"\n\n\nimport pandas as pd\nfrom itertools import groupby\n\ncolnames = [\"Name\", \"Address\", \"Color\"]\n\n\ncol1, col2 = [], []\nfor k, g in groupby(\n (l for l in s.splitlines() if l.strip()), lambda l: l in colnames\n):\n (col2, col1)[k].append(\" \".join(g))\n\ndf = pd.DataFrame({\"col1\": col1, \"col2\": col2})\ndf = df.assign(col3=df.col1.eq(\"Name\").cumsum()).pivot(\n index=\"col3\", columns=\"col1\", values=\"col2\"\n)\ndf.index.name, df.columns.name = None, None\n\n\ndf[\"Color\"] = df[\"Color\"].str.split()\ndf = df.explode(\"Color\").fillna(\"\")\n\nprint(df[colnames])\n\nPrints:\n Name Address Color\n1 Alex 14 high street London blue\n1 Alex 14 high street London red\n2 Bob black\n\n", "Here's a vanilla python way to tackle the problem (rather than using pandas)\n\nLoad your data (you'll probably read the data from your file, but we'll use this placeholder instead)\n\ns = \"\"\"\nName\nAlex\n\nAddress\n14 high street\nLondon\n\nColor\nblue\nred\n\nName\n\nBob\nColor\nblack\n\"\"\"\n\n\nSplit up each entry by lines with 'Name'\n\nentities = [e for e in s.split('Name')]\n\nproduces\n[\n '\\n',\n '\\nAlex\\n\\nAddress\\n14 high street\\nLondon\\n\\nColor\\nblue\\nred\\n\\n',\n '\\n\\nBob\\nColor\\nblack\\n'\n]\n\n\nReplace the newlines with spaces, then clean up duplicate spaces\n\nentities = [e.replace('\\n',' ').replace(' ',' ') for e in entities]\n\nproduces\n[\n ' ',\n ' Alex Address 14 high street London Color blue red ',\n ' Bob Color black '\n]\n\n\nSplit each entry by space and toss any empty list entries\n\nentities = [\n [x for x in e.split(' ') if x != '']\n for e in entities\n]\n\nproduces\n[\n [],\n ['Alex', 'Address', '14', 'high', 'street', 'London', 'Color', 'blue', 'red'],\n ['Bob', 'Color', 'black']\n]\n\n\nGet rid of any empty entities\n\nentities = [e for e in entities if len(e) > 0]\n\nproduces\n[\n ['Alex', 'Address', '14', 'high', 'street', 'London', 'Color', 'blue', 'red'],\n ['Bob', 'Color', 'black']\n]\n\n\nEstablish our tokens, then for the index of each token, capture the list elements that appear before the next token\n\n# We'll store our findings here. 
\n# We'll use the name, which is the first element of our 'entity' list, \n# as the key for this dict.\n\nproperties = {}\n\n# the field labels we scan for ('Name' is handled separately via entity[0])\ntokens = ['Address', 'Color']\n\nfor entity in entities:\n # We'll figure out where each token shows up in our list\n token_indices = []\n for token in tokens:\n # we could use \n # token_indices.append(entity.index(token)) \n # if we knew that each token would only show up once\n token_indices += [i for i,t in enumerate(entity) if t==token]\n\n # now we'll sort the list of indices so that we can be sure\n # we're dealing with them in order\n token_indices = sorted(token_indices)\n \n # since we haven't seen this person before, we'll establish a \n # dict for their properties.\n\n\n\n individual_properties = {}\n \n for k,_ in enumerate(token_indices):\n this_tkn_name = entity[token_indices[k]]\n this_tkn_idx = token_indices[k]\n\n # We'll iterate over each token's index\n if k+1 < len(token_indices):\n # this isn't the last token, so it is safe to look up the next index\n next_tkn_idx = token_indices[k+1]\n individual_properties[\n this_tkn_name\n ] = ' '.join(entity[this_tkn_idx+1:next_tkn_idx])\n\n else:\n # this is the last token\n individual_properties[\n this_tkn_name\n ] = ' '.join(entity[this_tkn_idx+1:])\n \n # the first element in the entity list is their name, so we can\n # find that with entity[0]\n properties[entity[0]] = individual_properties\n\nproduces\n{\n 'Alex': {\n 'Address': '14 high street London', \n 'Color': 'blue red'\n },\n 'Bob': {\n 'Color': 'black'\n }\n}\n\nNOTE: Depending on what you want to do with this, you may need further processing. Maybe you know the colors are a list, so you could use split(' ') to get a list instead of a single string.\n" ]
[ 2, 0 ]
[]
[]
[ "dataframe", "python", "python_re", "string" ]
stackoverflow_0074661015_dataframe_python_python_re_string.txt
Q: How to install telegram api 'aiogram' I just tried to install telegram api 'aiogram' and it didn't work building 'yarl._quoting_c' extension error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for yarl Failed to build multidict yarl ERROR: Could not build wheels for multidict, yarl, which is required to install pyproject.toml-based projects A: It looks like you are trying to install the aiogram library for Python, but you are encountering an error related to Microsoft Visual C++ 14.0 or greater. This error is occurring because the aiogram library has a dependency on the yarl library, which requires Microsoft Visual C++ 14.0 or greater to be installed on your system in order to build. To fix this error, you will need to install Microsoft Visual C++ 14.0 or greater on your system. The error message provides a link to the Microsoft C++ Build Tools page, where you can download and install the necessary tools. Once you have installed Microsoft Visual C++ 14.0 or greater, you should be able to install the aiogram library successfully. Here are the steps to install Microsoft Visual C++ 14.0 or greater and fix the error: Open the following link in your web browser: https://visualstudio.microsoft.com/visual-cpp-build-tools/ On the Microsoft C++ Build Tools page, click the "Download" button to download the installer for the build tools. Run the downloaded installer and follow the on-screen instructions to install Microsoft Visual C++ 14.0 or greater on your system. Once the installation is complete, try installing the aiogram library again using pip. It should now install successfully without any errors.
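Before installing the compiler toolchain, it can be worth upgrading pip first; a newer pip will prefer prebuilt binary wheels for multidict and yarl where they exist for your platform and Python version, which skips the C build entirely:
python -m pip install --upgrade pip setuptools wheel
python -m pip install aiogram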
How to install telegram api 'aiogram'
I just tried to install telegram api 'aiogram' and it didn't work building 'yarl._quoting_c' extension error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for yarl Failed to build multidict yarl ERROR: Could not build wheels for multidict, yarl, which is required to install pyproject.toml-based projects
[ "It looks like you are trying to install the aiogram library for Python, but you are encountering an error related to Microsoft Visual C++ 14.0 or greater. This error is occurring because the aiogram library has a dependency on the yarl library, which requires Microsoft Visual C++ 14.0 or greater to be installed on your system in order to build.\nTo fix this error, you will need to install Microsoft Visual C++ 14.0 or greater on your system. The error message provides a link to the Microsoft C++ Build Tools page, where you can download and install the necessary tools. Once you have installed Microsoft Visual C++ 14.0 or greater, you should be able to install the aiogram library successfully.\nHere are the steps to install Microsoft Visual C++ 14.0 or greater and fix the error:\n\nOpen the following link in your web browser: https://visualstudio.microsoft.com/visual-cpp-build-tools/\n\nOn the Microsoft C++ Build Tools page, click the \"Download\" button to download the installer for the build tools.\n\nRun the downloaded installer and follow the on-screen instructions to install Microsoft Visual C++ 14.0 or greater on your system.\n\nOnce the installation is complete, try installing the aiogram library again using pip. It should now install successfully without any errors.\n\n\n" ]
[ 0 ]
[]
[]
[ "aiogram", "python", "telegram_bot" ]
stackoverflow_0074661367_aiogram_python_telegram_bot.txt
Q: checking a variable against a record in a sqlite3 database to see if data entered is unique so I am trying to create a function that allows a user to create a profile with personal information, in this they will enter a username that will act as primary key and requires to be unique, so when entering this username I am trying to check the data entered to see if it already exists in the sqlite3 database, if it does, the user is asked to try another username, if not the function will continue. I was certain this should work because i used similar code to check values entered when a user logs in in a login function i coded, so i am quite stumped. Any help would be greatly appreciated... the code in question: def signupInfo(): #takes input for username and checks it is alphanumeric, username will also act as primary key so a validation for it being unique will occur username = input("Choose username: ") #check for if the username is unique uniqueUserCheck = '''SELECT * FROM users WHERE `username` = ?''' cursor.execute(uniqueUserCheck, [username]) user = cursor.fetchone() if user is not None: username = input("Choose a unique user: ") else: #check if username is alphanumeric or has enough characters while username.isalpha() == False or len(username) <= 3: username = input("invalid username, try again: ") A: It looks like you're trying to check if the username already exists in the database. To do this, you can use an SQL SELECT query to check if the username exists in the users table. If the query returns a result, then the username already exists and you can prompt the user to enter a different username. Here's one way you could implement this: def signupInfo(): #takes input for username and checks it is alphanumeric, username will also act as primary key so a validation for it being unique will occur username = input("Choose username: ") #check if username is alphanumeric or has enough characters while username.isalpha() == False or len(username) <= 3: username = input("invalid username, try again: ") #check for if the username is unique uniqueUserCheck = '''SELECT * FROM users WHERE `username` = ?''' cursor.execute(uniqueUserCheck, [username]) user = cursor.fetchone() if user is not None: username = input("Choose a unique user: ") Note that in your current implementation, if the username is not unique, you will only prompt the user to enter a new username once, but the code does not check if the new username is unique. To fix this, you can use a while loop to keep prompting the user until they enter a unique username. Here's one way you could do this: def signupInfo(): #takes input for username and checks it is alphanumeric, username will also act as primary key so a validation for it being unique will occur username = input("Choose username: ") #check if username is alphanumeric or has enough characters while username.isalpha() == False or len(username) <= 3: username = input("invalid username, try again: ") #check for if the username is unique uniqueUserCheck = '''SELECT * FROM users WHERE `username` = ?''' cursor.execute(uniqueUserCheck, [username]) user = cursor.fetchone() # keep prompting the user until they enter a unique username while user is not None: username = input("Choose a unique user: ") cursor.execute(uniqueUserCheck, [username]) user = cursor.fetchone()
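Two refinements worth considering, sketched below with illustrative function names: SELECT EXISTS avoids fetching the whole row, and because username is the primary key, catching sqlite3.IntegrityError on INSERT is the race-proof way to enforce uniqueness:
import sqlite3

def username_taken(cursor, username):
    # EXISTS returns 0 or 1 without materialising the matching row.
    cursor.execute("SELECT EXISTS(SELECT 1 FROM users WHERE username = ?)", (username,))
    return cursor.fetchone()[0] == 1

def try_create_user(connection, username):
    # The PRIMARY KEY constraint is the real guarantee of uniqueness;
    # the with-block commits on success and rolls back on error.
    try:
        with connection:
            connection.execute("INSERT INTO users (username) VALUES (?)", (username,))
        return True
    except sqlite3.IntegrityError:
        return False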
checking a variable against a record in a sqlite3 database to see if data entered is unique
so I am trying to create a function that allows a user to create a profile with personal information, in this they will enter a username that will act as primary key and requires to be unique, so when entering this username I am trying to check the data entered to see if it already exists in the sqlite3 database, if it does, the user is asked to try another username, if not the function will continue. I was certain this should work because i used similar code to check values entered when a user logs in in a login function i coded, so i am quite stumped. Any help would be greatly appreciated... the code in question: def signupInfo(): #takes input for username and checks it is alphanumeric, username will also act as primary key so a validation for it being unique will occur username = input("Choose username: ") #check for if the username is unique uniqueUserCheck = '''SELECT * FROM users WHERE `username` = ?''' cursor.execute(uniqueUserCheck, [username]) user = cursor.fetchone() if user is not None: username = input("Choose a unique user: ") else: #check if username is alphanumeric or has enough characters while username.isalpha() == False or len(username) <= 3: username = input("invalid username, try again: ")
[ "It looks like you're trying to check if the username already exists in the database. To do this, you can use an SQL SELECT query to check if the username exists in the users table. If the query returns a result, then the username already exists and you can prompt the user to enter a different username.\nHere's one way you could implement this:\ndef signupInfo():\n #takes input for username and checks it is alphanumeric, username will also act as primary key so a validation for it being unique will occur\n username = input(\"Choose username: \")\n\n #check if username is alphanumeric or has enough characters\n while username.isalpha() == False or len(username) <= 3:\n username = input(\"invalid username, try again: \")\n\n #check for if the username is unique\n uniqueUserCheck = '''SELECT * FROM users WHERE `username` = ?'''\n cursor.execute(uniqueUserCheck, [username])\n user = cursor.fetchone()\n if user is not None:\n username = input(\"Choose a unique user: \")\n\n\nNote that in your current implementation, if the username is not unique, you will only prompt the user to enter a new username once, but the code does not check if the new username is unique. To fix this, you can use a while loop to keep prompting the user until they enter a unique username.\nHere's one way you could do this:\ndef signupInfo():\n #takes input for username and checks it is alphanumeric, username will also act as primary key so a validation for it being unique will occur\n username = input(\"Choose username: \")\n\n #check if username is alphanumeric or has enough characters\n while username.isalpha() == False or len(username) <= 3:\n username = input(\"invalid username, try again: \")\n\n #check for if the username is unique\n uniqueUserCheck = '''SELECT * FROM users WHERE `username` = ?'''\n cursor.execute(uniqueUserCheck, [username])\n user = cursor.fetchone()\n\n # keep prompting the user until they enter a unique username\n while user is not None:\n username = input(\"Choose a unique user: \")\n cursor.execute(uniqueUserCheck, [username])\n user = cursor.fetchone()\n\n\n" ]
[ 1 ]
[]
[]
[ "python", "sql", "validation" ]
stackoverflow_0074661361_python_sql_validation.txt
Q: How to define different layers in neural network with MLPRegressor I am trying to set up a neural network model using MLPRegressor, I have been told to do so using the following structure:
The network must have two different hidden layer node layouts: the first with one hidden layer with 100 nodes, the second with three hidden layers with 100 nodes each. Use the neural network fitting with two activation functions: 'identity' and 'relu'.
I have looked around online, but I couldn't really make much sense of the documentation. What I tried so far took the following form:
model = MLPRegressor(hidden_layer_sizes=((100),(100,100,100)), activation='relu', solver = 'lbfgs').fit(X,Y)

But that doesn't consider the two activation functions, and it throws the following error:
TypeError: '<=' not supported between instances of 'tuple' and 'int'

Any suggestions on how to implement this?
[Edit] I have been asked to clarify the question. The task that I have to complete is to fit experimental data (X and Y) by using different techniques, for example: interpolation, regression... And now, a neural network. The structure of the neural network must be as given above (what I have written there is quite literally what I have been asked to do).
A: scikit-learn's MLPRegressor takes a single activation function that is applied to every hidden layer, and hidden_layer_sizes must be one tuple of layer widths, so a single model cannot combine the two layouts or the two activation functions. The TypeError in your code comes from passing the nested tuple ((100),(100,100,100)) as hidden_layer_sizes; note that (100) is just the integer 100, so a one-hidden-layer layout is written (100,). To cover all four required combinations, fit a separate model per configuration:
from sklearn.neural_network import MLPRegressor

layouts = [(100,), (100, 100, 100)]  # one hidden layer / three hidden layers, 100 nodes each
activations = ['identity', 'relu']

models = {}
for hidden_layer_sizes in layouts:
    for activation in activations:
        model = MLPRegressor(hidden_layer_sizes=hidden_layer_sizes,
                             activation=activation,
                             solver='lbfgs').fit(X, Y)
        models[(hidden_layer_sizes, activation)] = model

You may also want to consider using a different solver than 'lbfgs'. It can be slow on larger problems and may not always converge; 'adam' and 'sgd' are common alternatives with the MLPRegressor class.
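With the four models fitted as above, and assuming held-out X_test and y_test exist, MLPRegressor.score (the R^2 of the prediction) makes it easy to report which layout/activation pair works best:
best_config, best_model = max(models.items(), key=lambda kv: kv[1].score(X_test, y_test))
print(best_config, best_model.score(X_test, y_test))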
How to define different layers in neural network with MLPRegressor
I am trying to set up a neural network model using MLPRegressor, I have been told to do so using the following structure:
The network must have two different hidden layer node layouts: the first with one hidden layer with 100 nodes, the second with three hidden layers with 100 nodes each. Use the neural network fitting with two activation functions: 'identity' and 'relu'.
I have looked around online, but I couldn't really make much sense of the documentation. What I tried so far took the following form:
model = MLPRegressor(hidden_layer_sizes=((100),(100,100,100)), activation='relu', solver = 'lbfgs').fit(X,Y)

But that doesn't consider the two activation functions, and it throws the following error:
TypeError: '<=' not supported between instances of 'tuple' and 'int'

Any suggestions on how to implement this?
[Edit] I have been asked to clarify the question. The task that I have to complete is to fit experimental data (X and Y) by using different techniques, for example: interpolation, regression... And now, a neural network. The structure of the neural network must be as given above (what I have written there is quite literally what I have been asked to do).
[ "scikit-learn's MLPRegressor takes a single activation function that is applied to every hidden layer, and hidden_layer_sizes must be one tuple of layer widths, so a single model cannot combine the two layouts or the two activation functions. The TypeError in your code comes from passing the nested tuple ((100),(100,100,100)) as hidden_layer_sizes; note that (100) is just the integer 100, so a one-hidden-layer layout is written (100,). To cover all four required combinations, fit a separate model per configuration:\nfrom sklearn.neural_network import MLPRegressor\n\nlayouts = [(100,), (100, 100, 100)] # one hidden layer / three hidden layers, 100 nodes each\nactivations = ['identity', 'relu']\n\nmodels = {}\nfor hidden_layer_sizes in layouts:\n for activation in activations:\n model = MLPRegressor(hidden_layer_sizes=hidden_layer_sizes,\n activation=activation,\n solver='lbfgs').fit(X, Y)\n models[(hidden_layer_sizes, activation)] = model\n\nYou may also want to consider using a different solver than 'lbfgs'. It can be slow on larger problems and may not always converge; 'adam' and 'sgd' are common alternatives with the MLPRegressor class.\n" ]
[ 1 ]
[]
[]
[ "artificial_intelligence", "deep_learning", "neural_network", "python", "scikit_learn" ]
stackoverflow_0074661342_artificial_intelligence_deep_learning_neural_network_python_scikit_learn.txt
Q: Extract date from string in date format, add n number of days. to then replace with that modified data another substring within the original string import re, datetime, time input_text = "tras la aparicion del objeto misterioso el 2022-12-30 visitamos ese sitio nuevamente revisando detras de los arboles pero recien tras 3 dias ese objeto aparecio de nuevo tras 2 arboles" #example 1 input_text = "luego el 2022-11-15 fuimos nuevamente a dicho lugar pero nada ocurrio y 3 dias despues ese objeto aparecio en el cielo durante el atardecer de aquel dia de verano" #example 2 input_text = "Me entere del misterioso suceso ese 2022-11-01 y el 2022-11-15 fuimos al monte pero nada ocurrio, pero luego de 13 dias ese objeto aparecio en el cielo" #example 3 identified_referencial_date = r"(?P<year>\d*)-(?P<month>\d{2})-(?P<startDay>\d{2})" #obtained with regex capture groups # r"(?:luego[\s|]*de[\s|]*unos|luego[\s|]*de|pasados[\s|]*ya[\s|]*unos|pasados[\s|]*unos|pasados[\s|]*ya|pasados|tras[\s|]*ya|tras)[\s|]*\d*[\s|]*(?:días|dias|día|dia)" # r"\d*[\s|]*(?:días|dias|día|dia)[\s|]*(?:despues|luego)" n = #the number of days that in this case should increase indicated_date_relative_to_another = str(identified_date_in_date_format - datetime.timedelta(days = int(n) )) input_text = re.sub(identified_referencial_date, indicated_date_relative_to_another, input_text) print(repr(input_text)) # --> output The objective is that if a day is indicated first in the format year-month-day (are integers separated by hyphens in that order) \d*-\d{2}-\d{2} and then it says that n amount of days have passed, so you would have to replace that sentence with year-month-day+n luego de unos 3 dias ---> add 3 days to a previous date luego de 6 dias ---> add 6 days to a previous date pasados ya 13 dias ---> add 13 days to a previous date pasados ya unos 48 dias ---> add 48 days to a previous date pasados unos 36 dias ---> add 36 days to a previous date pasados 9 dias ---> add 9 days to a previous date tras ya 2 dias ---> add 2 days to a previous date tras 32 dias ---> add 32 days to a previous date 3 dias despues ---> add 3 days to a previous date 3 dias luego ---> add 3 days to a previous date Keep in mind that in certain cases, increasing the number of days could also change the number of the month or even the year, as in example 1. Outputs that I need obtain in each case: "tras la aparicion del objeto misterioso el 2022-12-30 visitamos ese sitio nuevamente revisando detras de los arboles pero recien 2023-01-02 ese objeto aparecio de nuevo tras 2 arboles" #for the example 1 "luego el 2022-11-15 fuimos nuevamente a dicho lugar pero nada ocurrio y 2022-11-18 ese objeto aparecio en el cielo durante el atardecer de aquel dia de verano" #for the example 2 "Me entere del misterioso suceso ese 2022-11-01 y el 2022-11-15 fuimos al monte pero nada ocurrio, pero 2022-11-28 ese objeto aparecio en el cielo" #for the example 3 A: Here is a regex solution you could use: ([12]\d{3}-[01]\d-[0-3]\d)(\D*?)(?:(?:luego de|pasados|tras)(?: ya)?(?: unos)? (\d+) dias|(\d+) dias (?:despues|luego)) This regex requires that there are no other digits between the date and the days. It also is a bit loose on grammar. It would also match "luego de ya 3 dias". You can of course make it more precise with a longer regex, but you get the picture. 
In a program: from datetime import datetime, timedelta import re def add(datestr, days): return (datetime.strptime(datestr, "%Y-%m-%d") + timedelta(days=int(days))).strftime('%Y-%m-%d') input_texts = [ "tras la aparicion del objeto misterioso el 2022-12-30 visitamos ese sitio nuevamente revisando detras de los arboles pero recien tras 3 dias ese objeto aparecio de nuevo tras 2 arboles", "luego el 2022-11-15 fuimos nuevamente a dicho lugar pero nada ocurrio y 3 dias despues ese objeto aparecio en el cielo durante el atardecer de aquel dia de verano", "Me entere del misterioso suceso ese 2022-11-01 y el 2022-11-15 fuimos al monte pero nada ocurrio, pero luego de 13 dias ese objeto aparecio en el cielo" ] for input_text in input_texts: result = re.sub(r"([12]\d{3}-[01]\d-[0-3]\d)(\D*?)(?:(?:luego de|pasados|tras)(?: ya)?(?: unos)? (\d+) dias|(\d+) dias (?:despues|luego))", lambda m: m[1] + m[2] + add(m[1], m[3] or m[4]), input_text) print(result)
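As a quick check of the rollover behaviour from example 1, the add helper defined above can be exercised on its own; dates near a month or year boundary are the cases worth testing:
print(add("2022-12-30", 3))   # 2023-01-02 -- timedelta rolls over month and year
print(add("2022-11-15", 3))   # 2022-11-18
print(add("2022-11-15", 13))  # 2022-11-28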
Extract date from string in date format, add n number of days. to then replace with that modified data another substring within the original string
import re, datetime, time input_text = "tras la aparicion del objeto misterioso el 2022-12-30 visitamos ese sitio nuevamente revisando detras de los arboles pero recien tras 3 dias ese objeto aparecio de nuevo tras 2 arboles" #example 1 input_text = "luego el 2022-11-15 fuimos nuevamente a dicho lugar pero nada ocurrio y 3 dias despues ese objeto aparecio en el cielo durante el atardecer de aquel dia de verano" #example 2 input_text = "Me entere del misterioso suceso ese 2022-11-01 y el 2022-11-15 fuimos al monte pero nada ocurrio, pero luego de 13 dias ese objeto aparecio en el cielo" #example 3 identified_referencial_date = r"(?P<year>\d*)-(?P<month>\d{2})-(?P<startDay>\d{2})" #obtained with regex capture groups # r"(?:luego[\s|]*de[\s|]*unos|luego[\s|]*de|pasados[\s|]*ya[\s|]*unos|pasados[\s|]*unos|pasados[\s|]*ya|pasados|tras[\s|]*ya|tras)[\s|]*\d*[\s|]*(?:días|dias|día|dia)" # r"\d*[\s|]*(?:días|dias|día|dia)[\s|]*(?:despues|luego)" n = #the number of days that in this case should increase indicated_date_relative_to_another = str(identified_date_in_date_format - datetime.timedelta(days = int(n) )) input_text = re.sub(identified_referencial_date, indicated_date_relative_to_another, input_text) print(repr(input_text)) # --> output The objective is that if a day is indicated first in the format year-month-day (are integers separated by hyphens in that order) \d*-\d{2}-\d{2} and then it says that n amount of days have passed, so you would have to replace that sentence with year-month-day+n luego de unos 3 dias ---> add 3 days to a previous date luego de 6 dias ---> add 6 days to a previous date pasados ya 13 dias ---> add 13 days to a previous date pasados ya unos 48 dias ---> add 48 days to a previous date pasados unos 36 dias ---> add 36 days to a previous date pasados 9 dias ---> add 9 days to a previous date tras ya 2 dias ---> add 2 days to a previous date tras 32 dias ---> add 32 days to a previous date 3 dias despues ---> add 3 days to a previous date 3 dias luego ---> add 3 days to a previous date Keep in mind that in certain cases, increasing the number of days could also change the number of the month or even the year, as in example 1. Outputs that I need obtain in each case: "tras la aparicion del objeto misterioso el 2022-12-30 visitamos ese sitio nuevamente revisando detras de los arboles pero recien 2023-01-02 ese objeto aparecio de nuevo tras 2 arboles" #for the example 1 "luego el 2022-11-15 fuimos nuevamente a dicho lugar pero nada ocurrio y 2022-11-18 ese objeto aparecio en el cielo durante el atardecer de aquel dia de verano" #for the example 2 "Me entere del misterioso suceso ese 2022-11-01 y el 2022-11-15 fuimos al monte pero nada ocurrio, pero 2022-11-28 ese objeto aparecio en el cielo" #for the example 3
[ "Here is a regex solution you could use:\n([12]\\d{3}-[01]\\d-[0-3]\\d)(\\D*?)(?:(?:luego de|pasados|tras)(?: ya)?(?: unos)? (\\d+) dias|(\\d+) dias (?:despues|luego))\n\nThis regex requires that there are no other digits between the date and the days. It also is a bit loose on grammar. It would also match \"luego de ya 3 dias\". You can of course make it more precise with a longer regex, but you get the picture.\nIn a program:\nfrom datetime import datetime, timedelta\nimport re\n\ndef add(datestr, days):\n return (datetime.strptime(datestr, \"%Y-%m-%d\") \n + timedelta(days=int(days))).strftime('%Y-%m-%d')\n\ninput_texts = [\n \"tras la aparicion del objeto misterioso el 2022-12-30 visitamos ese sitio nuevamente revisando detras de los arboles pero recien tras 3 dias ese objeto aparecio de nuevo tras 2 arboles\",\n \"luego el 2022-11-15 fuimos nuevamente a dicho lugar pero nada ocurrio y 3 dias despues ese objeto aparecio en el cielo durante el atardecer de aquel dia de verano\",\n \"Me entere del misterioso suceso ese 2022-11-01 y el 2022-11-15 fuimos al monte pero nada ocurrio, pero luego de 13 dias ese objeto aparecio en el cielo\"\n]\n\nfor input_text in input_texts:\n result = re.sub(r\"([12]\\d{3}-[01]\\d-[0-3]\\d)(\\D*?)(?:(?:luego de|pasados|tras)(?: ya)?(?: unos)? (\\d+) dias|(\\d+) dias (?:despues|luego))\",\n lambda m: m[1] + m[2] + add(m[1], m[3] or m[4]), \n input_text)\n print(result)\n\n" ]
[ 1 ]
[]
[]
[ "datetime", "python", "python_3.x", "regex", "regex_group" ]
stackoverflow_0074660456_datetime_python_python_3.x_regex_regex_group.txt
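A quick way to verify the regex answer above is to run it on the first example sentence and compare with the expected output; this minimal harness reuses the answer's add helper and pattern unchanged, so nothing here goes beyond what the answer already states.

from datetime import datetime, timedelta
import re

def add(datestr, days):
    # parse the ISO-style date, shift it by n days, format it back
    return (datetime.strptime(datestr, "%Y-%m-%d")
            + timedelta(days=int(days))).strftime("%Y-%m-%d")

pattern = r"([12]\d{3}-[01]\d-[0-3]\d)(\D*?)(?:(?:luego de|pasados|tras)(?: ya)?(?: unos)? (\d+) dias|(\d+) dias (?:despues|luego))"

text = "el 2022-12-30 visitamos ese sitio pero recien tras 3 dias ese objeto aparecio"
print(re.sub(pattern, lambda m: m[1] + m[2] + add(m[1], m[3] or m[4]), text))
# el 2022-12-30 visitamos ese sitio pero recien 2023-01-02 ese objeto aparecio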
Q: Merge 2 dataframes and update a column of lists using conditions

I have 2 dataframes with the same columns and indexes.

   a                 a
1  []             1  [5,2,7]
2  [1,2,3]        2  [1,2,3,4]
3  [7]            3  [7,5]

I want to merge them using a condition: when the length of the list is <=1, take the value from the 2nd dataframe and put it in the 1st dataframe, otherwise keep the old value. So after that the result is:

   a
1  [5,2,7]
2  [1,2,3]
3  [7,5]

What is the best way to do this?

A:
for idx, (x, y) in zip(dfa.index, zip(dfa['a'], dfb['a'])):
    # apply your logic - 'when length of list is <=1 then take value...' and save it in dfa
    if len(x) <= 1:
        dfa.at[idx, 'a'] = y

A: Here is an approach using pandas.DataFrame.mask.
First, make sure that the values of each dataframe/column are lists :

df1["a"] = df1["a"].str.strip("[]").str.split(",")  #skip if already a list
df2["a"] = df2["a"].str.strip("[]").str.split(",")  #skip if already a list

Then, use pandas.Series.str.len :

out = df1.mask(df1["a"].str.len().le(1), other=df2["a"], axis=0)

Or use pandas.Series.transform :

out = df1.mask(df1["a"].transform(len).le(1), other=df2["a"], axis=0)

# Output :
print(out)
           a
0  [5, 2, 7]
1  [1, 2, 3]
2     [7, 5]

A: You can use numpy.where() to evaluate the criteria and return which version you want.
In this solution, I combine the two lists, convert to a set to remove duplicates, and then convert back to a list because that is what you want at the end, I believe. Note that this does change the order of the elements (you can get [2,5,7] instead of [5,2,7]).

new_df = np.where(
    df1['a'].apply(len) <= 1,
    (df1['a'] + df2['a']).apply(set).apply(list),
    df1['a'])
Merge 2 dataframes and update a column of lists using conditions
I have 2 dataframes with the same columns and indexes.

   a                 a
1  []             1  [5,2,7]
2  [1,2,3]        2  [1,2,3,4]
3  [7]            3  [7,5]

I want to merge them using a condition: when the length of the list is <=1, take the value from the 2nd dataframe and put it in the 1st dataframe, otherwise keep the old value. So after that the result is:

   a
1  [5,2,7]
2  [1,2,3]
3  [7,5]

What is the best way to do this?
[ "for i, (x,y) in enumerate(zip(dfa['a'], dfb['b'])):\n # apply your logic - 'when length of list is <=1 then take value...' and save it in dfa['a'][i]\n if len(x) <= 1:\n dfa.loc[i]['a'] = y\n\n", "Here is an approach using pandas.DataFrame.mask.\nFirst, make sure that the values of each dataframe/column are lists :\ndf1[\"a\"]= df1[\"a\"].str.strip(\"[]\").str.split(\",\") #skip if already a list\ndf2[\"a\"]= df2[\"a\"].str.strip(\"[]\").str.split(\",\") #skip if already a list\n\nThen, use pandas.Series.str.len :\nout = df1.mask(df1[\"a\"].str.len().le(1), other=df2[\"a\"], axis=0)\n\nOr use pandas.Series.transform :\nout = df1.mask(df1[\"a\"].transform(len).le(1), other=df2[\"a\"], axis=0)\n\n# Output :\nprint(out)\n a\n0 [5, 2, 7]\n1 [1, 2, 3]\n2 [7, 5]\n\n", "You can use numpy.where() to evaluate the criteria and return which version you want.\nIn this solution, I combine the two lists and then convert to a set to remove duplicates, and then convert back to a list because that is what you want at the end, I believe. Note that this does change the elements (you can [2,5,7] instead of [5,2,7])\nnew_df = np.where(\n df1['a'].apply(len)<=1,\n (df1['a'] + df2['a']).apply(set).apply(list),\n df1['a'])\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074661219_pandas_python.txt
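As a side note, the same condition can also be expressed with Series.where, which keeps the row where the mask holds and takes the other frame's value elsewhere. A minimal sketch with the question's data (pandas' .str.len() works on list elements as well as strings):

import pandas as pd

df1 = pd.DataFrame({'a': [[], [1, 2, 3], [7]]}, index=[1, 2, 3])
df2 = pd.DataFrame({'a': [[5, 2, 7], [1, 2, 3, 4], [7, 5]]}, index=[1, 2, 3])

# keep df1's list where it has more than one element, otherwise take df2's
df1['a'] = df1['a'].where(df1['a'].str.len() > 1, df2['a'])
print(df1)
#            a
# 1  [5, 2, 7]
# 2  [1, 2, 3]
# 3     [7, 5]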
Q: Unable to convert a pandas Dataframe to a list using literal_eval

I have been trying to convert a pandas Dataframe column to a list, as the data in the column is being read as a str by default.
Sample data in the dataframe 'movie', column 'genres':

[{"id": 28, "name": "Action"}, {"id": 12, "name": "Adventure"}, {"id": 14, "name": "Fantasy"}, {"id": 878, "name": "Science Fiction"}]

The code I am writing:

import ast
import pandas as pd
movie = pd.read_csv("tmdb_5000_movies.csv")
movie['genres'] = movie['genres'].apply(lambda x : ast.literal_eval(str(x)))
print(type(movie['genres']))

The output I am getting is

<class 'pandas.core.series.Series'>

Really can't wrap my head around where I am going wrong.

A: pandas.DataFrames are composed of Series objects (where a Series is simply a column). Series are container objects similar to Python lists and can actually be converted into a list by using their Series.tolist method.
ast.literal_eval is being applied to each element inside of your Series, converting each string into a Python object (here, a list of dictionaries); those objects are then stored back into a Series.
So pretty much your code is working - but if you want a list instead of a Series, you'll need to do the following:

import ast 
import pandas as pd
movie = pd.read_csv("tmdb_5000_movies.csv")
movie['genres'] = movie['genres'].apply(lambda x : ast.literal_eval(str(x)))

genres = movie['genres'].tolist()
print(genres)
Unable to convert a pandas Dataframe to a list using literal_eval
I have been trying to convert a pandas Dataframe column to a list, as the data in the column is being read as a str by default.
Sample data in the dataframe 'movie', column 'genres':

[{"id": 28, "name": "Action"}, {"id": 12, "name": "Adventure"}, {"id": 14, "name": "Fantasy"}, {"id": 878, "name": "Science Fiction"}]

The code I am writing:

import ast
import pandas as pd
movie = pd.read_csv("tmdb_5000_movies.csv")
movie['genres'] = movie['genres'].apply(lambda x : ast.literal_eval(str(x)))
print(type(movie['genres']))

The output I am getting is

<class 'pandas.core.series.Series'>

Really can't wrap my head around where I am going wrong.
[ "pandas.DataFrames are composed of Series objects (where a Series is simply a column. Series are container objects similar to Python lists and can actually be converted into a list by using their Series.tolist method.\nast.literal_eval is being applied on each element inside of your Series, converting them a string into dictionary, those dictionaries as then stored back into a Series.\nSo pretty much your code is working- but if you want a list of dictionaries instead of a Series of dictionaries, you'll need to the following:\nimport ast \nimport pandas as pd\nmovie = pd.read_csv(\"tmdb_5000_movies.csv\")\nmovie['genres'] = movie['genres'].apply(lambda x : ast.literal_eval(str(x)))\n\ngenres = movie['genres'].tolist()\nprint(genres)\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074661236_pandas_python.txt
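To see the distinction from the answer without the CSV file, here is a self-contained sketch; the inline string stands in for one cell of the real 'genres' column. The column container stays a Series, while each element becomes a Python list after literal_eval.

import ast
import pandas as pd

# one-row stand-in for the real tmdb CSV column
movie = pd.DataFrame({'genres': ['[{"id": 28, "name": "Action"}, {"id": 12, "name": "Adventure"}]']})
movie['genres'] = movie['genres'].apply(ast.literal_eval)

print(type(movie['genres']))               # <class 'pandas.core.series.Series'> -- the column itself
print(type(movie['genres'].iloc[0]))       # <class 'list'> -- each cell was converted
print(movie['genres'].iloc[0][0]['name'])  # Action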
Q: WatchDog Library is only running once

I am new to coding and python, and I am struggling to use this WatchDog library to run this data_analysis function when a file is added to a folder. While it runs, I notice that pasting this function in makes the watchdog only detect an added file once. Without it, it will keep running. Anyone know why? I have tried searching online but I am endlessly confused lol. Also, I pasted my whole function to make it easier to read; if you condense it in your IDE it should be easier to see the rest of the py file.

from tkinter import *
from tkinter import filedialog
from watchdog.observers import Observer
from watchdog.events import PatternMatchingEventHandler
import pandas as pd
import numpy as np

class Watchdog(PatternMatchingEventHandler, Observer):
    def __init__(self, path='.', patterns='*', logfunc=print):
        PatternMatchingEventHandler.__init__(self, patterns)
        Observer.__init__(self)
        self.schedule(self, path=path, recursive=False)
        self.log = logfunc

    def on_created(self, event):
        # This function is called when a file is created
        self.log(f"hey, {event.src_path} has been created!")

        def data_analysis(src_path):
            readdata = pd.read_csv(event.src_path, delimiter='\t', encoding="latin1", skiprows=24)
            df = pd.DataFrame(readdata)
            df = df.drop(labels=0, axis=0)
            df['Station'] = df['Station'].astype(float)
            df['Station'] = df['Station'].astype(int)
            df["Axial Force Occurences"] = 0
            df["Axial Force Actual Value"] = pd.NaT
            df["Flexion Occurences"] = 0
            df["Flexion Actual Value"] = pd.NaT
            df["IE Occurences"] = 0
            df["IE Actual Value"] = pd.NaT
            df["AP Occurences"] = 0
            df["AP Actual Value"] = pd.NaT
            df['Fz 1'] = df['Fz 1'].astype(float)
            df['Fz 1'] = df['Fz 1'].astype(int)
            df['VLWf'] = df['VLWf'].astype(float)
            df['VLWf'] = df['VLWf'].astype(int)
            df['FLPt'] = df['FLPt'].astype(float)
            # df['FLPt'] = df['FLPt'].astype(int)
            df['FLWf'] = df['FLWf'].astype(float)
            # df['FLWf'] = df['FLWf'].astype(int)
            df['IEPt'] = df['IEPt'].astype(float)
            # df['IEPt'] = df['IEPt'].astype(int)
            df['IEWf'] = df['IEWf'].astype(float)
            # df['IEWf'] = df['IEWf'].astype(int)
            df['APPt'] = df['APPt'].astype(float)
            # df['APPt'] = df['APPt'].astype(int)
            df['APWf'] = df['APWf'].astype(float)
            # df['APWf'] = df['APWf'].astype(int)
            data = df.loc[df['Station'] == 1, ['VLWf', 'Fz 1', "Axial Force Occurences", "Axial Force Actual Value",
                                               'FLPt', 'FLWf', "Flexion Occurences", "Flexion Actual Value",
                                               'IEPt', 'IEWf', "IE Occurences", "IE Actual Value",
                                               'APPt', 'APWf', "AP Occurences", "AP Actual Value"]]
            tol = 3
            y = int(len(data.index))
            num = int(y * (3/100))
            ## Extract first and last rows based on tolerance, and append the first rows to the end, and the last rows to the beginning
            first_rows = data.iloc[0:num]
            last_rows = data.iloc[y-num:y]
            ## Add the last_rows to the beginning, and the first_rows to the end, all one df
            data = last_rows.append(data)
            data = data.append(first_rows)
            ## This keeps the indexing from appending, which is nice to see, but we need to change it to use for loops
            z = int(len(data.index))
            new_index = np.linspace(start=1, stop=z, num=z)
            new_index2 = new_index.astype(int)
            data2 = data.set_index(new_index2)
            # To test if the tables are correct, you can call specific values in console eg: 'data['VLWf'].iloc[1]'

            axoccur = []
            ## AXIAL FORCE OOT
            for i in range(num, z-num):
                val = data2['Fz 1'].iloc[i]
                extract_data = data2.iloc[1:z, 0]
                xval = data2.iloc[i-num: i+num, 0] - 0.5*2600
                if np.any(val >= ((data2.iloc[i-num: i+num, 0]) - 0.05*2600)) and np.any(val <= ((data2.iloc[i-num: i+num, 0]) + 0.05*2600)):
                    data2.at[i, 'Axial Force Occurences'] = 0
                else:
                    data2.at[i, 'Axial Force Occurences'] = 1
                    data2.at[i, 'Axial Force Actual Value'] = val
                    axoccur.append(i)
            # print(apoccur)
            ## After reading the data, we need to sum the
            totalaxial = data2['Axial Force Occurences'].sum()
            print('The number of Axial Force values outside of the tolerance is: ' + str(totalaxial))

            flexionoccur = []
            ## FLEXION OOT
            for i in range(num, z-num):
                val = data2['FLPt'].iloc[i]
                extract_data = data2.iloc[1:z, 0]
                xval = data2.iloc[i-num: i+num, 0] - 0.5*2600
                if np.any(val >= ((data2.iloc[i-num: i+num, 5]) - 0.05*58)) and np.any(val <= ((data2.iloc[i-num: i+num, 5]) + 0.05*58)):
                    data2.at[i, 'Flexion Occurences'] = 0
                else:
                    data2.at[i, 'Flexion Occurences'] = 1
                    data2.at[i, 'Flexion Actual Value'] = val
                    flexionoccur.append(i)
            ## After reading the data, we need to sum the
            totalflexion = data2['Flexion Occurences'].sum()
            print('The number of Flexion values outside of the tolerance is: ' + str(totalflexion))

            ieoccur = []
            ## IE OOT
            for i in range(num, z-num):
                val = data2['IEPt'].iloc[i]
                extract_data = data2.iloc[1:z, 0]
                xval = data2.iloc[i-num: i+num, 0] - 0.5*2600
                if np.any(val >= ((data2.iloc[i-num: i+num, 9]) - 0.05*5.7)) and np.any(val <= ((data2.iloc[i-num: i+num, 9]) + 0.05*5.7)):
                    data2.at[i, 'IE Occurences'] = 0
                else:
                    data2.at[i, 'IE Occurences'] = 1
                    data2.at[i, 'IE Actual Value'] = val
                    ieoccur.append(i)
            ## After reading the data, we need to sum the
            totalie = data2['IE Occurences'].sum()
            print('The number of IE values outside of the tolerance is: ' + str(totalie))

            apoccur = []
            ## AP OOT
            for i in range(num, z-num):
                val = data2['APPt'].iloc[i]
                extract_data = data2.iloc[1:z, 0]
                xval = data2.iloc[i-num: i+num, 0] - 0.5*2600
                if np.any(val >= ((data2.iloc[i-num: i+num, 13]) - 0.05*5.2)) and np.any(val <= ((data2.iloc[i-num: i+num, 13]) + 0.05*5.2)):
                    data2.at[i, 'IE Occurences'] = 0
                else:
                    data2.at[i, 'AP Occurences'] = 1
                    data2.at[i, 'AP Actual Value'] = val
                    apoccur.append(i)
            ## After reading the data, we need to sum the
            totalap = data2['AP Occurences'].sum()
            print('The number of AP values outside of the tolerance is: ' + str(totalap))

        data_analysis(event.src_path)

    def on_deleted(self, event):
        # This function is called when a file is deleted
        self.log(f"what the f**k! Someone deleted {event.src_path}!")

    def on_modified(self, event):
        # This function is called when a file is modified
        self.log(f"hey buddy, {event.src_path} has been modified")

    def on_moved(self, event):
        # This function is called when a file is moved
        self.log(f"ok ok ok, someone moved {event.src_path} to {event.dest_path}")


class GUI:
    def __init__(self):
        self.watchdog = None
        self.watch_path = '.'
        self.root = Tk()
        self.messagebox = Text(width=80, height=10)
        self.messagebox.pack()
        frm = Frame(self.root)
        Button(frm, text='Browse', command=self.select_path).pack(side=LEFT)
        Button(frm, text='Start Watchdog', command=self.start_watchdog).pack(side=RIGHT)
        Button(frm, text='Stop Watchdog', command=self.stop_watchdog).pack(side=RIGHT)
        # Button(frm, text='Excel', command=self.excelexport)pack(side=LEFT)
        frm.pack(fill=X, expand=1)
        self.root.mainloop()

    def start_watchdog(self):
        if self.watchdog is None:
            self.watchdog = Watchdog(path=self.watch_path, logfunc=self.log)
            self.watchdog.start()
            self.log('Watchdog started')
        else:
            self.log('Watchdog already started')

    def stop_watchdog(self):
        if self.watchdog:
            self.watchdog.stop()
            self.watchdog = None
            self.log('Watchdog stopped')
        else:
            self.log('Watchdog is not running')

    def select_path(self):
        path = filedialog.askdirectory()
        if path:
            self.watch_path = path
            self.log(f'Selected path: {path}')

    def log(self, message):
        self.messagebox.insert(END, f'{message}\n')
        self.messagebox.see(END)


if __name__ == '__main__':
    GUI()

A:
class Handler(watchdog.events.PatternMatchingEventHandler):
    def __init__(self):
        watchdog.events.PatternMatchingEventHandler.__init__(self, patterns=['*.pdf'],
                                                             ignore_patterns = None,
                                                             ignore_directories = False,
                                                             case_sensitive = False)
    def on_created(self, event):
        print(f"File was created at {event.src_path}")
        OCRscript(self, event)
    def on_deleted(self, event):
        print(f"File was deleted at {event.src_path}")

event_handler = Handler()
observer = watchdog.observers.Observer()
observer.schedule(event_handler, "C://Users//Installer//Desktop//tesseract test",
                  recursive = False)
observer.start()
observer.join()

This is the code I have been using to have a continually running watchdog. I call my other functions from within the class to ensure that the watchdog continues running. I have the functions I'm calling defined outside of the class just for debugging purposes. It helped me figure out which step was broken.
WatchDog Library is only running once
I am new to coding and python, and I am struggling to use this WatchDog library to run this data_analysis function when a file is added to a folder. While it runs, i notice that pasting this function makes the watchdog only detect an added file once. Without, it will keep running. Anyone know why? I have tried searching online but I am endlessly confused lol Also, I tried to pasting my whole function to make it easier to read, but if you can condense it in your IDE, then it should be easier to see the rest of the py file. from tkinter import * from tkinter import filedialog from watchdog.observers import Observer from watchdog.events import PatternMatchingEventHandler import pandas as pd import numpy as np class Watchdog(PatternMatchingEventHandler, Observer): def __init__(self, path='.', patterns='*', logfunc=print): PatternMatchingEventHandler.__init__(self, patterns) Observer.__init__(self) self.schedule(self, path=path, recursive=False) self.log = logfunc def on_created(self, event): # This function is called when a file is created self.log(f"hey, {event.src_path} has been created!") def data_analysis(src_path): readdata = pd.read_csv(event.src_path, delimiter='\t', encoding="latin1", skiprows=24) df = pd.DataFrame(readdata) df = df.drop(labels=0, axis=0) df['Station']=df['Station'].astype(float) df['Station']=df['Station'].astype(int) df["Axial Force Occurences"] = 0 df["Axial Force Actual Value"] = pd.NaT df["Flexion Occurences"] = 0 df["Flexion Actual Value"] = pd.NaT df["IE Occurences"] = 0 df["IE Actual Value"] = pd.NaT df["AP Occurences"] = 0 df["AP Actual Value"] = pd.NaT df['Fz 1']=df['Fz 1'].astype(float) df['Fz 1']=df['Fz 1'].astype(int) df['VLWf']=df['VLWf'].astype(float) df['VLWf']=df['VLWf'].astype(int) df['FLPt']=df['FLPt'].astype(float) # df['FLPt']=df['FLPt'].astype(int) df['FLWf']=df['FLWf'].astype(float) # df['FLWf']=df['FLWf'].astype(int) df['IEPt']=df['IEPt'].astype(float) # df['IEPt']=df['IEPt'].astype(int) df['IEWf']=df['IEWf'].astype(float) # df['IEWf']=df['IEWf'].astype(int) df['APPt']=df['APPt'].astype(float) # df['APPt']=df['APPt'].astype(int) df['APWf']=df['APWf'].astype(float) # df['APWf']=df['APWf'].astype(int) data = df.loc[df['Station'] == 1, ['VLWf','Fz 1', "Axial Force Occurences", "Axial Force Actual Value", 'FLPt', 'FLWf', "Flexion Occurences", "Flexion Actual Value", 'IEPt', 'IEWf', "IE Occurences", "IE Actual Value", 'APPt', 'APWf', "AP Occurences", "AP Actual Value", ]] tol = 3 y = int(len(data.index)) num = int(y * (3/100)) ##Extract first and last rows based on tolerance, and append the first rows to the end, and the last rows to the beginning first_rows = data.iloc[0: num] last_rows = data.iloc[y-num: y] ##Add the last_rows to the beginning, and the first_rows to the end, all one df data = last_rows.append(data) data = data.append(first_rows) ##This keeps the indexing from appending, which is nice to see, but we need to change it use for loops z = int(len(data.index)) new_index = np.linspace(start = 1, stop = z, num = z) new_index2 = new_index.astype(int) data2 = data.set_index(new_index2) # To test if the tables are correct, you can call specific values in console eg: 'data['VLWf'].iloc[1]' axoccur = [] ##AXIAL FORCE OOT for i in range(num, z-num): val = data2['Fz 1'].iloc[i] extract_data = data2.iloc[1:z, 0] xval = data2.iloc[i-num: i+num,0]-0.5*2600 if np.any(val >= ((data2.iloc[i-num: i+num,0])-0.05*2600)) and np.any(val <= ((data2.iloc[i-num: i+num,0])+0.05*2600)): data2.at[i,'Axial Force Occurences'] = 0 else: data2.at[i,'Axial Force 
Occurences'] = 1 data2.at[i,'Axial Force Actual Value'] = val axoccur.append(i) # print(apoccur) ##After reading the data, we need to sum the totalaxial = data2['Axial Force Occurences'].sum() print('The number of Axial Force values outside of the tolerance is: ' + str(totalaxial)) flexionoccur = [] ##FLEXION OOT for i in range(num, z-num): val = data2['FLPt'].iloc[i] extract_data = data2.iloc[1:z, 0] xval = data2.iloc[i-num: i+num,0]-0.5*2600 if np.any(val >= ((data2.iloc[i-num: i+num,5])-0.05*58)) and np.any(val <= ((data2.iloc[i-num: i+num,5])+0.05*58)): data2.at[i,'Flexion Occurences'] = 0 else: data2.at[i,'Flexion Occurences'] = 1 data2.at[i,'Flexion Actual Value'] = val flexionoccur.append(i) ##After reading the data, we need to sum the totalflexion = data2['Flexion Occurences'].sum() print('The number of Flexion values outside of the tolerance is: ' + str(totalflexion)) ieoccur = [] ##IE OOT for i in range(num, z-num): val = data2['IEPt'].iloc[i] extract_data = data2.iloc[1:z, 0] xval = data2.iloc[i-num: i+num,0]-0.5*2600 if np.any(val >= ((data2.iloc[i-num: i+num,9])-0.05*5.7)) and np.any(val <= ((data2.iloc[i-num: i+num,9])+0.05*5.7)): data2.at[i,'IE Occurences'] = 0 else: data2.at[i,'IE Occurences'] = 1 data2.at[i,'IE Actual Value'] = val ieoccur.append(i) ##After reading the data, we need to sum the totalie = data2['IE Occurences'].sum() print('The number of IE values outside of the tolerance is: ' + str(totalie)) apoccur = [] ##AP OOT for i in range(num, z-num): val = data2['APPt'].iloc[i] extract_data = data2.iloc[1:z, 0] xval = data2.iloc[i-num: i+num,0]-0.5*2600 if np.any(val >= ((data2.iloc[i-num: i+num,13])-0.05*5.2)) and np.any(val <= ((data2.iloc[i-num: i+num,13])+0.05*5.2)): data2.at[i,'IE Occurences'] = 0 else: data2.at[i,'AP Occurences'] = 1 data2.at[i,'AP Actual Value'] = val apoccur.append(i) ##After reading the data, we need to sum the totalap = data2['AP Occurences'].sum() print('The number of AP values outside of the tolerance is: ' + str(totalap)) data_analysis(event.src_path) def on_deleted(self, event): # This function is called when a file is deleted self.log(f"what the f**k! Someone deleted {event.src_path}!") def on_modified(self, event): # This function is called when a file is modified self.log(f"hey buddy, {event.src_path} has been modified") def on_moved(self, event): # This function is called when a file is moved self.log(f"ok ok ok, someone moved {event.src_path} to {event.dest_path}") class GUI: def __init__(self): self.watchdog = None self.watch_path = '.' 
self.root = Tk() self.messagebox = Text(width=80, height=10) self.messagebox.pack() frm = Frame(self.root) Button(frm, text='Browse', command=self.select_path).pack(side=LEFT) Button(frm, text='Start Watchdog', command=self.start_watchdog).pack(side=RIGHT) Button(frm, text='Stop Watchdog', command=self.stop_watchdog).pack(side=RIGHT) # Button(frm, text='Excel', command=self.excelexport)pack(side=LEFT) frm.pack(fill=X, expand=1) self.root.mainloop() def start_watchdog(self): if self.watchdog is None: self.watchdog = Watchdog(path=self.watch_path, logfunc=self.log) self.watchdog.start() self.log('Watchdog started') else: self.log('Watchdog already started') def stop_watchdog(self): if self.watchdog: self.watchdog.stop() self.watchdog = None self.log('Watchdog stopped') else: self.log('Watchdog is not running') def select_path(self): path = filedialog.askdirectory() if path: self.watch_path = path self.log(f'Selected path: {path}') def log(self, message): self.messagebox.insert(END, f'{message}\n') self.messagebox.see(END) if __name__ == '__main__': GUI()
[ "class Handler(watchdog.events.PatternMatchingEventHandler):\n def __init__(self):\n watchdog.events.PatternMatchingEventHandler.__init__(self, patterns=['*.pdf'],\n ignore_patterns = None,\n ignore_directories = False,\n case_sensitive = False)\n def on_created(self, event):\n print(f\"File was created at {event.src_path}\")\n OCRscript(self, event)\n def on_deleted(self, event):\n print(f\"File was deleted at {event.src_path}\")\n\nevent_handler = Handler()\nobserver = watchdog.observers.Observer()\nobserver.schedule(event_handler, \"C://Users//Installer//Desktop//tesseract test\",\n recursive = False)\nobserver.start()\nobserver.join()\n\nThis is the code I have been using to have a continually running watchdog. I call my other funcitions from within the class to ensure that the watchdog continues running. I have the functions I'm calling defined outside of the class just for debugging purposes. It helped me figure out which step was broken.\n" ]
[ 0 ]
[]
[]
[ "python", "python_watchdog", "tkinter" ]
stackoverflow_0074480624_python_python_watchdog_tkinter.txt
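One pattern worth noting alongside the answer above (a sketch, not taken from either post): watchdog dispatches events from its observer thread, so a long-running or crashing analysis inside on_created can keep further events from being handled. Handing the work to a separate thread keeps the observer responsive; data_analysis here is a hypothetical stand-in for the pandas routine in the question.

import threading
from watchdog.observers import Observer
from watchdog.events import PatternMatchingEventHandler

def data_analysis(path):
    print(f"analysing {path}")  # stand-in for the long pandas routine

class Handler(PatternMatchingEventHandler):
    def on_created(self, event):
        # run the heavy work off the observer thread so event
        # dispatching continues while the analysis is running
        threading.Thread(target=data_analysis,
                         args=(event.src_path,), daemon=True).start()

observer = Observer()
observer.schedule(Handler(patterns=['*']), path='.', recursive=False)
observer.start()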
Q: How do I make a post request work after using get_queryset?

I would like to have a list of a user's devices with a checkbox next to each one. The user can select the devices they want to view on a map by clicking on the corresponding checkboxes and then clicking a submit button. I am not including the mapping portion in this question, because I plan to work that out later. The step that is causing problems right now is trying to use a post request.
To have the user only be able to see the devices that are assigned to them I am using the get_queryset method. I have seen a couple of questions regarding using a post request along with the get_queryset method, but they do not seem to work for me. Using the view below, when I select a checkbox and then click submit, it looks like a post request happens followed immediately by a get request, and the result is that my table is empty when the page loads.

Portion of my views.py file:

class DeviceListView(LoginRequiredMixin, ListView):
    model = Device
    template_name = 'tracking/home.html'
    context_object_name = 'devices'

    def get_queryset(self, *args, **kwargs):
        return super().get_queryset(*args, **kwargs).filter(who_added=self.request.user)

    def post(self, request):
        form = SelectForm(request.POST)
        return render(request, self.template_name, {'form': form})

Portions of my template:

<div class="table-responsive">
  <form action="" method="post" name="devices_to_check"> {% csrf_token %}
    <table id="registered_devices" class="table table-striped table-bordered table-hover table-sm" style="width:100%; border: 1px solid black; font-size: 10px">
      <thead class="table-primary" style="text-align:center; border: 1px solid black">
        <tr>
          <th>IMEI</th>
          <th>Label</th>
          <th>Device Type</th>
          <th>Group</th>
          <th>Subgroup</th>
          <th>Description</th>
          <th>Display</th>
        </tr>
      </thead>
      <tbody>
        {% for device in devices %}
        <tr>
          <td>{{device.imei}}</td>
          <td>{{device.label}}</td>
          <td style="text-transform:uppercase">{{device.device_type}}</td>
          <td>{{device.main_group}}</td>
          <td>{{device.subgroup}}</td>
          <td>{{device.description}}</td>
          <td style="text-align:center">
            <a href="{% url 'device-detail' device.id %}">i </a>
            <input type="checkbox" id="{{device.imei}}" name="chk" value="{{device.imei}}" onclick="show_info_icon()" class="chckvalues"/>
          </td>
        </tr>
        {% endfor %}
      </tbody>
    </table>
    <input type="button" class="btn btn-outline-info" onclick='selects()' value="Select All"/>
    <input type="button" class="btn btn-outline-info" onclick='deSelect()' value="Deselect All"/>
    <button type="submit" class="btn btn-outline-info">Show on Map</button>
  </form>
</div>

A: I think you're better off using a function based view with the typical "if request.method == 'POST'" logic; this isn't really what the generic list view is for.

@login_required
def device_list_view(request):
    context = {}
    if request.method == 'POST':
        form = SelectForm(request.POST)
        if form.is_valid():
            # if a request was posted and is valid, do your thing:
            # maybe your thing is to give your map view the device id
            # and render it
            # assuming your form has a field called device_id:
            context['device_id'] = form.cleaned_data['device_id']
            return render(request, 'tracking/device_map.html', context)

    # either request method is not post or the form wasn't valid
    user_devices = Device.objects.filter(who_added=request.user)
    device_forms = []
    for i, device in enumerate(user_devices):
        form = SelectForm(instance=device, prefix=i)
        device_forms.append(form)
    context['device_forms'] = device_forms
    return render(request, 'tracking/home.html', context)

The key part here is the if-else logic of checking whether the request was POST or not; you'll need to tweak it based on exactly what you want to happen. It sounds like what you're really doing is creating one form from which you're pulling multiple device_ids. This is a good reference for what's going on with the prefix bit and should help you decide whether to do things that way or with one form like you're trying.
How do I make a post request work after using get_queryset?
I would like to have a list of a user's devices with a checkbox next to each one. The user can select the devices they want to view on a map by clicking on the corresponding checkboxes then clicking a submit button. I am not including the mapping portion in this question, because I plan to work that out later. The step that is causing problems right now is trying to use a post request. To have the user only be able to see the devices that are assigned to them I am using the get_queryset method. I have seen a couple questions regarding using a post request along with the get_queryset method, but they do not seem to work for me. Using the view below, when I select a checkbox and then click submit, it looks like a post request happens followed immediately by a get request and the result is my table is empty when the page loads. Portion of my views.py file: class DeviceListView(LoginRequiredMixin, ListView): model = Device template_name = 'tracking/home.html' context_object_name = 'devices' def get_queryset(self, *args, **kwargs): return super().get_queryset(*args, **kwargs).filter(who_added=self.request.user) def post(self, request): form = SelectForm(request.POST) return render(request, self.template_name, {'form': form}) Portions of my template: <div class="table-responsive"> <form action="" method="post" name="devices_to_check"> {% csrf_token %} <table id="registered_devices" class="table table-striped table-bordered table-hover table-sm" style="width:100%; border: 1px solid black; font-size: 10px"> <thead class="table-primary" style="text-align:center; border: 1px solid black"> <tr> <th>IMEI</th> <th>Label</th> <th>Device Type</th> <th>Group</th> <th>Subgroup</th> <th>Description</th> <th>Display</th> </tr> </thead> <tbody> {% for device in devices %} <tr> <td>{{device.imei}}</td> <td>{{device.label}}</td> <td style="text-transform:uppercase">{{device.device_type}}</td> <td>{{device.main_group}}</td> <td>{{device.subgroup}}</td> <td>{{device.description}}</td> <td style="text-align:center"> <a href="{% url 'device-detail' device.id %}">i </a> <input type="checkbox" id="{{device.imei}}" name="chk" value="{{device.imei}}" onclick="show_info_icon()" class="chckvalues"/> </td> </tr> {% endfor %} </tbody> </table> <input type="button" class="btn btn-outline-info" onclick='selects()' value="Select All"/> <input type="button" class="btn btn-outline-info" onclick='deSelect()' value="Deselect All"/> <button type="submit" class="btn btn-outline-info">Show on Map</button> </form> </div>
[ "I think you're better off using a function based view with the typical \"if request.method = POST\" logic, this isn't really what the generic list view is for.\n@loginrequired\ndef device_list_view(request):\n context = {}\n if request.method = POST:\n form = SelectForm(request.POST)\n if form.is_valid():\n # if a request was posted and is valid, do your thing:\n # maybe your thing is to give your map view the device id\n # and render it\n # assuming your form has a field called device_id:\n context['device_id'] = form.device_id \n return render(request, 'tracking/device_map.html', context)\n \n # either request method is not post or the form wasn't valid\n user_devices = Device.objects.filter(who_added=request.user)\n device_forms = []\n for i, device in enumerate(user_devices):\n form = SelectForm(instance=device, prefix=i)\n device_forms.append(form)\n context['device_forms'] = device_forms\n return render(request, 'tracking/home.html', context)\n \n\nThe key part here is the if-else logic of checking whether the request was POST or not, you'll need to tweak it based on exactly what you want to happen. It sounds like what you're really doing is creating one form from which you're pulling multiple device_ids. This is a good reference for what's going on with the prefix bit and should help you decide whether to do things that way or with one form like you're trying.\n" ]
[ 0 ]
[]
[]
[ "django", "post", "python" ]
stackoverflow_0074659563_django_post_python.txt
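Since the template above posts every checked device under the same name="chk", the POST branch can read all of them with request.POST.getlist. A minimal sketch, assuming the Device model and template from the question:

# inside the request.method == 'POST' branch
selected_imeis = request.POST.getlist('chk')  # one entry per checked box
devices = Device.objects.filter(who_added=request.user,
                                imei__in=selected_imeis)
# pass `devices` on to the map template via the context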
Q: Does the BERT model need pre-processed text?

Do BERT models need pre-processed text (like removing special characters, stopwords, etc.), or can I directly pass my text as it is to BERT models? (HuggingFace libraries.)
Note: follow-up question to: String cleaning/preprocessing for BERT

A: Cleaning the input text for transformer models is not required. Removing stop words (which are considered noise in conventional text representations like bag-of-words or tf-idf) can and probably will worsen the predictions of your BERT model.
Since BERT makes use of the self-attention mechanism, these 'stop words' are valuable information for BERT.
Consider the following example:
Python's NLTK library considers words like 'her' or 'him' as stop words. Let's say we want to process a text like: 'I told her about the best restaurants in town'.
Removing stop words with NLTK would give us: 'I told best restaurants town'. As you can see, a lot of information is being discarded. Sure, we could try to train a classic ML classifier (i.e. topic classification, here food), but BERT captures a lot more semantic information based on the surroundings of words.

A: You need to tokenize your text first. The BertTokenizer class handles everything you need from raw text to tokens. See this:

from transformers import BertTokenizer, BertModel
import torch

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)

last_hidden_states = outputs.last_hidden_state

A: In my view, pre-processing is not required for training or for inference with BERT. I can explain it with a few examples:

To continue @Arthuro's answer, the stop words actually are valuable, and BERT internally maps relations between different words.
We should not even clean things like hyperlinks or Twitter handle mentions (e.g. @someones_twitter_handle). The reason is subword tokenization! BERT uses a special subword tokenization called WordPiece Tokenization. The WordPiece tokenizer breaks words into subwords. HuggingFace has a really nice article that explains how this works.
Does the BERT model need pre-processed text?
Does Bert models need pre-processed text (Like removing special characters, stopwords, etc.) or I can directly pass my text as it is to Bert models. (HuggigFace libraries). note: Follow up question to: String cleaning/preprocessing for BERT
[ "Cleaning the input text for transformer models is not required. Removing stop words (which are considered as noise in conventional text representation like bag-of-words or tf-idf) can and probably will worsen the predictions of your BERT model.\nSince BERT is making use of the self-attention mechanism these 'stop words' are valuable information for BERT.\nConsider the following example:\nPython's NLTK library considers words like 'her' or 'him' as stop words. Let's say we want to process a text like: 'I told her about the best restaurants in town'.\nRemoving stop words with NLTK would give us: 'I told best restaurants town'. As you can see a lot of information is being discarded. Sure, we could try and train a classic ML classifier (i.e. topic classification, here food) but BERT captures a lot more semantic information based on the surroundings of words.\n", "You need to tokenize your text first. The BertTokenizer class handles everything you need from raw text to tokens. See this:\nfrom transformers import BertTokenizer, BertModel\nimport torch\n\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\nmodel = BertModel.from_pretrained('bert-base-uncased')\n\ninputs = tokenizer(\"Hello, my dog is cute\", return_tensors=\"pt\")\noutputs = model(**inputs)\n\nlast_hidden_states = outputs.last_hidden_state\n\n", "According to me, pre-processing is not required while training as well as inferring from BERT. I can explain it with a few examples:\n\nSo as to continue to @Arthuro's answer, the stop words actually are valuable and BERT internally maps relations between different words.\nWe even should not clean things like a hyperlink or things like Twitter handle mentions (eg. @someones_twitter_handle). The reason is subword tokenization! BERT uses a special subword tokenization called WordPiece Tokenization. WordPiece tokenizer breaks the words into subwords. HuggingFace has a really nice article that explains how this works.\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "bert_language_model", "data_preprocessing", "nlp", "python", "text_classification" ]
stackoverflow_0070649831_bert_language_model_data_preprocessing_nlp_python_text_classification.txt
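A small sketch of the WordPiece point from the last answer: stop words survive tokenization unchanged, while an out-of-vocabulary token such as a Twitter handle is split into subwords rather than lost. The handle below is made up and the exact subword splits may differ by tokenizer version.

from transformers import BertTokenizer

tok = BertTokenizer.from_pretrained('bert-base-uncased')

print(tok.tokenize("I told her about the best restaurants in town"))
# roughly: ['i', 'told', 'her', 'about', 'the', 'best', 'restaurants', 'in', 'town']

print(tok.tokenize("@someones_twitter_handle"))
# the unknown handle is broken into WordPiece pieces (subwords prefixed with '##'),
# so the information is kept rather than discarded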
Q: Python program that launches when Windows starts

I know there are a lot of similar questions, but I don't understand how I can make a Python program launch when the PC starts, so please teach me. I want Python code, or an explanation, of how to create a program that starts when the PC is launched.

A: This type of execution of programs/scripts is usually done through the Task Scheduler.
A simple tutorial would be to follow these steps:
1: In the Windows search box, type: task scheduler
2: Open Task Scheduler
3: From the Action menu select Create Task.
4: On the General tab, type a name for the task, e.g. "StartPythonScript", and select Run with highest privileges.
5.1: On the Triggers tab, click New.
5.2: Select to begin the task: At system startup, and click OK.
6.1: On the Actions tab, click New.
6.2: In the New Action window, select the option Start a program and then click Browse.
6.3: Choose the script that you want to run at startup and click Open.
6.4: Click OK.
7: On the Conditions tab, clear the "Start the task only if the computer is on AC power" checkbox and click OK.
8: Restart your PC to apply the change.
Hope this solution helps you; you can learn more about the Task Scheduler here: https://learn.microsoft.com/en-us/windows/win32/taskschd/task-scheduler-start-page
Python program that launches when Windows starts
I know, it's a lot of similar questions, but i don't understand how i can make a python program what launch when you start pc, so please learn me that. I want to get a code in python what explain me how to create a program what start when you lanch the pc
[ "This type of execution in programs/script is usually done through the task scheduler\nA simple tutorial would be following the next steps:\n1: At the windows search box, type: task scheduler\n2: Open Task scheduler\n3: From Action menu select Create Task.\n4: At General tab, type a name for the task. e.g. \"StartPythonScript\" and select Run with highest privileges.\n5.1: At Triggers tab, click New.\n5.2: Select to begin the task: At system startup and click OK.\n6.1: At Actions tab, click New.\n6.2: At New Action window, select the option start a program and then click Browse.\n6.3: Choose the script that you want to run at startup and click Open.\n6.4: Click OK.\n7: At Conditions tab, clear the Start the task only if the computer is on AC Power checkbox and click OK.\n8: Restart your PC to apply the change.\nHope this solution will help you, anyway you can learn more about task scheduler here: https://learn.microsoft.com/en-us/windows/win32/taskschd/task-scheduler-start-page\n" ]
[ 0 ]
[]
[]
[ "app_startup", "python" ]
stackoverflow_0074661287_app_startup_python.txt
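The same task can also be registered from code instead of clicking through the UI, using Windows' schtasks command. A sketch only: it must be run from an elevated prompt, and the interpreter and script paths are placeholders to replace with your own.

import subprocess

# register a task that runs the script at system startup with highest privileges
subprocess.run([
    "schtasks", "/Create",
    "/TN", "StartPythonScript",                                     # task name (step 4)
    "/TR", r'"C:\Python311\python.exe" "C:\scripts\my_script.py"',  # what to run (step 6)
    "/SC", "ONSTART",                                               # trigger: at system startup (step 5.2)
    "/RL", "HIGHEST",                                               # run with highest privileges
], check=True)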
Q: Pandas equivalent of pyspark reduce and add?

I have a dataframe where Day_1, Day_2, Day_3 are the number of impressions in the past 3 days.

df = pd.DataFrame({'Day_1': [2, 4, 8, 0],
                   'Day_2': [2, 0, 0, 0],
                   'Day_3': [1, 1, 0, 0]},
                  index=['user1', 'user2', 'user3', 'user4'])
df
       Day_1  Day_2  Day_3
user1      2      2      1
user2      4      0      1
user3      8      0      0
user4      0      0      0

Now, I need to check if a user had any impression in the past n days. For example, if num_days = 2, I need to add a new column, impression, where it gets 1 if the sum of Day_1 and Day_2 is greater than zero, and 0 otherwise. Here is what I expect to see:

       Day_1  Day_2  Day_3  impression
user1      2      2      1           1
user2      4      0      1           1
user3      8      0      0           1
user4      0      0      0           0

It is a straightforward process in pyspark and I use something like this:

imp_cols = ['Day_'+str(i) for i in range(1, num_days+1)]
df = df.withColumn("impression", reduce(add, [F.col(x) for x in imp_cols]))

A: IIUC, you can use numpy.where with pandas.DataFrame.sum.
Try this :

df["impression"] = np.where(df.sum(axis=1).gt(0), 1, 0)

# Output :
print(df)

       Day_1  Day_2  Day_3  impression
user1      2      2      1           1
user2      4      0      1           1
user3      8      0      0           1
user4      0      0      0           0

If you want to select specific columns/days, you can use pandas.DataFrame.filter :

num_days = 2
l = list(range(1, num_days+1))
pat = "|".join([str(x) for x in l])

sub_df = df.filter(regex="Day_[{}]".format(pat))

df["impression"] = np.where(sub_df.sum(axis=1).gt(0), 1, 0)

A: You can use the DataFrame.loc method to select the columns you want to sum, then use the DataFrame.sum method to compute the sum of these columns, use the DataFrame.clip method to cap the summed counts at 1, and finally use the DataFrame.assign method to add the new impression column to the dataframe.

import pandas as pd

df = pd.DataFrame({'Day_1': [2, 4, 8, 0],
                   'Day_2': [2, 0, 0, 0],
                   'Day_3': [1, 1, 0, 0]},
                  index=['user1', 'user2', 'user3', 'user4'])

num_days = 2
imp_cols = ['Day_'+str(i) for i in range(1, num_days+1)]

impression = df.loc[:, imp_cols].sum(axis=1).clip(0, 1)
df = df.assign(impression=impression)
Pandas equivalent of pyspark reduce and add?
I have a dataframe where Day_1, Day_2, Day_3 are the number of impressions in the past 3 days.

df = pd.DataFrame({'Day_1': [2, 4, 8, 0],
                   'Day_2': [2, 0, 0, 0],
                   'Day_3': [1, 1, 0, 0]},
                  index=['user1', 'user2', 'user3', 'user4'])
df
       Day_1  Day_2  Day_3
user1      2      2      1
user2      4      0      1
user3      8      0      0
user4      0      0      0

Now, I need to check if a user had any impression in the past n days. For example, if num_days = 2, I need to add a new column, impression, where it gets 1 if the sum of Day_1 and Day_2 is greater than zero, and 0 otherwise. Here is what I expect to see:

       Day_1  Day_2  Day_3  impression
user1      2      2      1           1
user2      4      0      1           1
user3      8      0      0           1
user4      0      0      0           0

It is a straightforward process in pyspark and I use something like this:

imp_cols = ['Day_'+str(i) for i in range(1, num_days+1)]
df = df.withColumn("impression", reduce(add, [F.col(x) for x in imp_cols]))
[ "IIUC, you can use numpy.where with pandas.DataFrame.sum.\nTry this :\ndf[\"impression\"] = np.where(df.sum(axis=1).gt(0), 1, 0)\n\n# Output :\nprint(df)\n​\n Day_1 Day_2 Day_3 impression\nuser1 2 2 1 1\nuser2 4 0 1 1\nuser3 8 0 0 1\nuser4 0 0 0 0\n\nIf you want to select a specific columns/days, you can use pandas.DataFrame.filter :\nnum_days = 2\nl = list(range(1, num_days+1))\npat= \"|\".join([str(x) for x in l])\n\nsub_df = df.filter(regex=\"Day_[{}]\".format(pat))\n\ndf[\"impression\"] = np.where(sub_df.sum(axis=1).gt(0), 1, 0)\n\n", "You can use the DataFrame.loc method to select the columns you want to sum, and then use the DataFrame.sum method to compute the sum of these columns. You can then use the DataFrame.clip method to set values less than 1 to 0 and values greater than or equal to 1 to 1. Finally, you can use the DataFrame.assign method to add the new impression column to the dataframe.\nimport pandas as pd\n\ndf = pd.DataFrame({'Day_1': [2, 4, 8, 0],\n 'Day_2': [2, 0, 0, 0],\n 'Day_3': [1, 1, 0, 0],\n index=['user1', 'user2', 'user3', 'user4'])\n\nnum_days = 2\nimp_cols = ['Day_'+str(i) for i in range(1, num_days+1)]\n\ndf = df.loc[:, imp_cols].sum(axis=1).clip(0, 1).to_frame(\"impression\")\n\ndf = df.assign(impression=impression)\n\n" ]
[ 1, 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074661465_dataframe_pandas_python.txt
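For completeness, the closest pandas translation of the pyspark snippet in the question is a plain row-wise sum over the selected columns, with the greater-than-zero check folded into a boolean cast. A sketch using the question's data:

import pandas as pd

df = pd.DataFrame({'Day_1': [2, 4, 8, 0],
                   'Day_2': [2, 0, 0, 0],
                   'Day_3': [1, 1, 0, 0]},
                  index=['user1', 'user2', 'user3', 'user4'])

num_days = 2
imp_cols = ['Day_' + str(i) for i in range(1, num_days + 1)]

# sum only the first num_days columns, then 1 if any impression, else 0
df['impression'] = df[imp_cols].sum(axis=1).gt(0).astype(int)
print(df)
#        Day_1  Day_2  Day_3  impression
# user1      2      2      1           1
# user2      4      0      1           1
# user3      8      0      0           1
# user4      0      0      0           0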
Q: Background color of bokeh layout

I'm playing around with the Bokeh sliders demo (source code here), and I'm trying to change the background color of the entire page. Though changing the color of the figure is easy using background_fill_color and border_fill_color, the rest of the layout still appears on top of a white background. Is there an attribute I can add to the theme that will allow me to set the color via curdoc().theme?

A: There's not currently any Python property that would control the HTML background color. HTML and CSS is vast territory, so instead of trying to make a corresponding Python property for every possible style option, Bokeh provides a general mechanism for supplying your own HTML templates so that any standard familiar CSS can be applied.
This is most easily accomplished by adding a templates/index.html file to a Directory-style Bokeh App. The template should be a Jinja2 template. There are two substitutions required to be defined in the <head>:

{{ bokeh_css }}
{{ bokeh_js }}

as well as two required in <body>:

{{ plot_div }}
{{ plot_script }}

The app will appear wherever the plot_script appears in the template. Apart from this, you can apply whatever HTML and CSS you need. You can see a concrete example here:
https://github.com/bokeh/bokeh/blob/master/examples/app/crossfilter
A boiled down template that changes the page background might look like this:

<!DOCTYPE html>
<html lang="en">
  <head>
    <style>
      body { background: #2F2F2F; }
    </style>
    <meta charset="utf-8">
    {{ bokeh_css }}
    {{ bokeh_js }}
  </head>
  <body>
    {{ plot_div|indent(8) }}
    {{ plot_script|indent(8) }}
  </body>
</html>

A: Changing the .bk-root style worked for me:

from bokeh.resources import Resources
from bokeh.io.state import curstate
from bokeh.io import curdoc, output_file, save
from bokeh.util.browser import view
from bokeh.models.widgets import Panel, Tabs
from bokeh.plotting import figure 

class MyResources(Resources):
    @property
    def css_raw(self):
        return super().css_raw + [
            """.bk-root {
                background-color: #000000;
                border-color: #000000;
            }
            """
        ]

f = figure(height=200, width=200)
f.line([1,2,3], [1,2,3])

tabs = Tabs( tabs=[ Panel( child=f, title="TabTitle" ) ], height=500 )

output_file("BlackBG.html")
curstate().file['resources'] = MyResources(mode='cdn')
save(tabs)
view("./BlackBG.html")

A: If you are using row or column for displaying several figures in the document, a workaround is setting the background attribute like this:

curdoc().add_root(row(fig1, fig2, background="beige"))

A: I know it is not the cleanest way to do it, but a workaround would be to modify file.html inside the bokeh template folder. (The file path and code snippet were posted as images and are not reproduced here.)
Background color of bokeh layout
I'm playing around with the Bokeh sliders demo (source code here), and I'm trying to change the background color of the entire page. Though changing the color of the figure is easy using background_fill_color and border_fill_color, the rest of the layout still appears on top of a white background. Is there an attribute I can add to the theme that will allow me to set the color via curdoc().theme?
[ "There's not currently any Python property that would control the HTML background color. HTML and CSS is vast territory, so instead of trying to make a corresponding Python property for every possible style option, Bokeh provides a general mechanism for supplying your own HMTL templates so that any standard familiar CSS can be applied. \nThis is most easily accomplished by adding a templates/index.html file to a Directory-style Bokeh App. The template should be Jinja2 template. There are two substitutions required to be defined in the <head>:\n\n{{ bokeh_css }}\n{{ bokeh_js }}\n\nas well as two required in <body>:\n\n{{ plot_div }}\n{{ plot_script }}\n\nThe app will appear wherever the plot_script appears in the template. Apart from this, you can apply whatever HTML and CSS you need. You can see a concrete example here:\nhttps://github.com/bokeh/bokeh/blob/master/examples/app/crossfilter\nA boiled down template that changes the page background might look like this:\n<!DOCTYPE html>\n<html lang=\"en\">\n <head>\n <style>\n body { background: #2F2F2F; }\n </style>\n <meta charset=\"utf-8\">\n {{ bokeh_css }}\n {{ bokeh_js }}\n </head>\n <body>\n {{ plot_div|indent(8) }}\n {{ plot_script|indent(8) }}\n </body>\n</html>\n\n", "Changing the .bk-root style worked for me:\nfrom bokeh.resources import Resources\nfrom bokeh.io.state import curstate\nfrom bokeh.io import curdoc, output_file, save\nfrom bokeh.util.browser import view\nfrom bokeh.models.widgets import Panel, Tabs\nfrom bokeh.plotting import figure \n\nclass MyResources(Resources):\n @property\n def css_raw(self):\n return super().css_raw + [\n \"\"\".bk-root {\n background-color: #000000;\n border-color: #000000;\n }\n \"\"\"\n ]\n\nf = figure(height=200, width=200)\nf.line([1,2,3], [1,2,3])\n\ntabs = Tabs( tabs=[ Panel( child=f, title=\"TabTitle\" ) ], height=500 )\n\noutput_file(\"BlackBG.html\")\ncurstate().file['resources'] = MyResources(mode='cdn')\nsave(tabs)\nview(\"./BlackBG.html\")\n\n", "If you are using row or column for displaying several figures in the document, a workaround is setting the background attribute like this:\ncurdoc().add_root(row(fig1, fig2, background=\"beige\"))\n", "I know it is not the cleanest way to do it, but a workaround would be to modify file.html inside bokeh template folder\nFILE PATH\nCODE SNIPPET\n" ]
[ 6, 3, 1, 0 ]
[ "From Bokeh documentation:\n\nThe background fill style is controlled by the background_fill_color\n and background_fill_alpha properties of the Plot object:\nfrom bokeh.plotting import figure, output_file, show\n\noutput_file(\"background.html\")\n\np = figure(plot_width=400, plot_height=400)\np.background_fill_color = \"beige\"\np.background_fill_alpha = 0.5\n\np.circle([1, 2, 3, 4, 5], [2, 5, 8, 2, 7], size=10)\n\nshow(p)\n\n\n\n" ]
[ -1 ]
[ "bokeh", "python" ]
stackoverflow_0044607084_bokeh_python.txt
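For standalone (non-server) output, the same Jinja template idea from the first answer can be passed straight to bokeh.embed.file_html, which avoids hand-editing files inside the bokeh package. A sketch, reusing the boiled-down template from that answer:

from bokeh.embed import file_html
from bokeh.plotting import figure
from bokeh.resources import CDN
from jinja2 import Template

# the template from the answer above, as an inline Jinja2 Template
template = Template("""
<!DOCTYPE html>
<html lang="en">
  <head>
    <style> body { background: #2F2F2F; } </style>
    <meta charset="utf-8">
    {{ bokeh_css }}
    {{ bokeh_js }}
  </head>
  <body>
    {{ plot_div }}
    {{ plot_script }}
  </body>
</html>
""")

p = figure(width=300, height=300)
p.line([1, 2, 3], [1, 2, 3])

html = file_html(p, CDN, "dark page", template=template)
with open("dark.html", "w") as f:
    f.write(html)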
Q: How do I remove buttons off a message?

button = Button(style = discord.ButtonStyle.green, emoji = ":arrow_backward:", custom_id = "button1")
button2 = Button(style = discord.ButtonStyle.green, emoji = ":arrow_up_small:", custom_id = "button2")
button3 = Button(style = discord.ButtonStyle.green, emoji = ":arrow_forward:", custom_id = "button3")

view = View()
view.add_item(button)
view.add_item(button2)
view.add_item(button3)

async def button_callback(interaction):
    if number != ("⠀⠀1"):
        await message.edit(content="**response 1**")
    else:
        await message.edit(content="**response 2**")

async def button_callback2(interaction):
    if number != ("⠀⠀⠀⠀⠀⠀⠀⠀⠀2"):
        await message.edit(content="**response 1**")
    else:
        await message.edit(content="**response 2**")

async def button_callback3(interaction):
    if number != ("⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀3"):
        await message.edit(content="**response 1**")
    else:
        await message.edit(content="**response 2**")

button.callback = button_callback
button2.callback = button_callback2
button3.callback = button_callback3

await message.edit(content= f"⠀⠀:watermelon:⠀⠀⠀⠀⠀:watermelon:⠀⠀⠀⠀⠀:watermelon:\n{number}", view=view)

This code creates and sends a message with buttons on it. Once you press a button it'll edit the message to say "response 1" or "response 2", depending on whether the button had the 1, 2, or 3 over it (if it didn't have the number over it, it prints "response 1"; if it did, it prints "response 2"). I would like it so that when it edits to either of the responses it also removes the buttons, as it currently doesn't.

A: Set view=None in your message.edit function call to remove all of the buttons.
How do I remove buttons off a message?
button = Button(style = discord.ButtonStyle.green, emoji = ":arrow_backward:", custom_id = "button1")
button2 = Button(style = discord.ButtonStyle.green, emoji = ":arrow_up_small:", custom_id = "button2")
button3 = Button(style = discord.ButtonStyle.green, emoji = ":arrow_forward:", custom_id = "button3")

view = View()
view.add_item(button)
view.add_item(button2)
view.add_item(button3)

async def button_callback(interaction):
    if number != ("⠀⠀1"):
        await message.edit(content="**response 1**")
    else:
        await message.edit(content="**response 2**")

async def button_callback2(interaction):
    if number != ("⠀⠀⠀⠀⠀⠀⠀⠀⠀2"):
        await message.edit(content="**response 1**")
    else:
        await message.edit(content="**response 2**")

async def button_callback3(interaction):
    if number != ("⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀3"):
        await message.edit(content="**response 1**")
    else:
        await message.edit(content="**response 2**")

button.callback = button_callback
button2.callback = button_callback2
button3.callback = button_callback3

await message.edit(content= f"⠀⠀:watermelon:⠀⠀⠀⠀⠀:watermelon:⠀⠀⠀⠀⠀:watermelon:\n{number}", view=view)

This code creates and sends a message with buttons on it. Once you press a button it'll edit the message to say "response 1" or "response 2", depending on whether the button had the 1, 2, or 3 over it (if it didn't have the number over it, it prints "response 1"; if it did, it prints "response 2"). I would like it so that when it edits to either of the responses it also removes the buttons, as it currently doesn't.
[ "Set view=None in your message.edit function call to remove all of the buttons.\n" ]
[ 0 ]
[]
[]
[ "discord", "pycord", "python" ]
stackoverflow_0074607302_discord_pycord_python.txt
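Applied to the code in the question, the answer's fix is a one-line change inside each callback. Alternatively, the buttons can be kept visible but greyed out by disabling them and re-sending the same view. A sketch only; the message and view names follow the question's code and are assumed to be in scope.

async def button_callback(interaction):
    # drop the buttons entirely
    await message.edit(content="**response 1**", view=None)

async def button_callback2(interaction):
    # or keep them visible but unclickable
    for child in view.children:
        child.disabled = True
    await message.edit(content="**response 2**", view=view)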
Q: Calculate co-occurrences without any overlap in pandas

I have the following dataframe

import pandas as pd
df = pd.DataFrame({'TFD' : ['AA', 'SL', 'BB', 'D0', 'Dk', 'FF'],
                   'Snack' : [1, 0, 1, 1, 0, 0],
                   'Trans' : [1, 1, 1, 0, 0, 1],
                   'Dop' : [1, 0, 1, 0, 1, 1]}).set_index('TFD')
df
     Snack  Trans  Dop
TFD
AA       1      1    1
SL       0      1    0
BB       1      1    1
D0       1      0    0
Dk       0      0    1
FF       0      1    1

By using this I can calculate the following co-occurrence matrix:

df_asint = df.astype(int)
coocc = df_asint.T.dot(df_asint)
coocc
       Snack  Trans  Dop
Snack      3      2    2
Trans      2      4    3
Dop        2      3    4

Though, I want the occurrences to not overlap. What I mean is this: in the original df there is only 1 TFD that has only Snack, so the [Snack, Snack] value in the coocc table should be 1. Moreover:

[Dop, Trans] should be equal to 1 and not 3 (the above calculation gives 3 because it takes into account the [Dop, Snack, Trans] combination, which is what I want to avoid)
The order shouldn't matter -> [Dop, Trans] is the same as [Trans, Dop]
There should be an ['all', 'all'] [row, column] which would indicate how many times an occurrence contains all elements

My solution contains the following steps:
First, for each row of the df get the list of columns for which the column has value equal to 1:

llist = []
for k, v in df.iterrows():
    llist.append((list(v[v==1].index)))
llist

[['Snack', 'Trans', 'Dop'], ['Trans'], ['Snack', 'Trans', 'Dop'], ['Snack'], ['Dop'], ['Trans', 'Dop']]

Then I duplicate the lists (inside the list) which have only 1 element:

llist2 = llist.copy()
for i, l in enumerate(llist2):
    if len(l) == 1:
        llist2[i] = l + l
    if len(l) == 3:
        llist2[i] = ['all', 'all']  # this is to see how many triple elements I have in the list
llist2.append(['Dop', 'Trans'])  # This is to test that the order of the elements of the sublists doesn't matter
llist2

[['all', 'all'], ['Trans', 'Trans'], ['all', 'all'], ['Snack', 'Snack'], ['Dop', 'Dop'], ['Trans', 'Dop'], ['Dop', 'Trans']]

Later I create an empty dataframe with the indexes and columns of interest:

elements = ['Trans', 'Dop', 'Snack', 'all']
foo = pd.DataFrame(columns=elements, index=elements)
foo.fillna(0, inplace=True)
foo
       Trans  Dop  Snack  all
Trans      0    0      0    0
Dop        0    0      0    0
Snack      0    0      0    0
all        0    0      0    0

Then I check and count which combination is included in the original llist2:

from itertools import combinations_with_replacement
import collections

comb = combinations_with_replacement(elements, 2)
for l in comb:
    val = foo.loc[l[0], l[1]]
    foo.loc[l[0], l[1]] = val + llist2.count(list(l))
    if (set(l).__len__() != 1) and (list(reversed(list(l))) in llist2):
        # check if the reversed element exists as well, but do not double count the diagonal elements
        val = foo.loc[l[0], l[1]]
        foo.loc[l[0], l[1]] = val + llist2.count(list(reversed(list(l))))
foo
       Trans  Dop  Snack  all
Trans      1    2      0    0
Dop        0    1      0    0
Snack      0    0      1    0
all        0    0      0    2

Last step would be to make foo symmetrical:

import numpy as np
foo = np.maximum(foo, foo.transpose())
foo
       Trans  Dop  Snack  all
Trans      1    2      0    0
Dop        2    1      0    0
Snack      0    0      1    0
all        0    0      0    2

Looking for a more efficient/faster (avoiding all these loops) solution.

A: Managed to shrink it to one "for" loop. I am using "any" and "all" in combination with a "mask".

import pandas as pd
import itertools


df = pd.DataFrame({'TFD': ['AA', 'SL', 'BB', 'D0', 'Dk', 'FF'],
                   'Snack': [1, 0, 1, 1, 0, 0],
                   'Trans': [1, 1, 1, 0, 0, 1],
                   'Dop': [1, 0, 1, 0, 1, 1]}).set_index('TFD')

df["all"] = 0  # adding artificial column so the result contains "all"
list_of_columns = list(df.columns)
my_result_list = []  # empty list where we put the results
comb = itertools.combinations_with_replacement(list_of_columns, 2)
for item in comb:
    temp_list = list_of_columns[:]  # temp_list holds columns of interest
    if item[0] == item[1]:
        temp_list.remove(item[0])
        my_col_list = [item[0]]  # my_col_list holds which occurrence we count
    else:
        temp_list.remove(item[0])
        temp_list.remove(item[1])
        my_col_list = [item[0], item[1]]

    mask = df.loc[:, temp_list].any(axis=1)  # creating mask so we know which rows to look at
    distance = df.loc[~mask, my_col_list].all(axis=1).sum()  # calculating occurrence
    my_result_list.append([item[0], item[1], distance])  # occurrence info recorded in the list
    my_result_list.append([item[1], item[0], distance])  # occurrence put in reverse so we get square form in the end

result = pd.DataFrame(my_result_list).drop_duplicates().pivot(index=1, columns=0, values=2)  # construct DataFrame in square form
list_of_columns.remove("all")
result.loc["all", "all"] = df.loc[:, list_of_columns].all(axis=1).sum()  # fill in all/all occurrences
print(result)
Calculate co-occurrences without any overlap in pandas
I have the following dataframe import pandas as pd df = pd.DataFrame({'TFD' : ['AA', 'SL', 'BB', 'D0', 'Dk', 'FF'], 'Snack' : [1, 0, 1, 1, 0, 0], 'Trans' : [1, 1, 1, 0, 0, 1], 'Dop' : [1, 0, 1, 0, 1, 1]}).set_index('TFD') df Snack Trans Dop TFD AA 1 1 1 SL 0 1 0 BB 1 1 1 D0 1 0 0 Dk 0 0 1 FF 0 1 1 By using this I can calculate the following co-occurrence matrix: df_asint = df.astype(int) coocc = df_asint.T.dot(df_asint) coocc Snack Trans Dop Snack 3 2 2 Trans 2 4 3 Dop 2 3 4 Though, I want the occurrences to not overlap. What I mean is this: in the original df there is only 1 TFD that has only Snack, so the [Snack, Snack] value at the cooc table should be 1. Moreover the [Dop, Trans] should be equal to 1 and not equal to 3(the above calculation gives as output 3 because it takes into account the [Dop, Snack, Trans] combination, which is what I want to avoid) Moreover the order shouldnt matter -> [Dop, Trans] is the same as [Trans, Dop] Having an ['all', 'all'] [row, column] which would indicate how many times an occurrence contains all elements My solution contains the following steps: First, for each row of the df get the list of columns for which the column has value equal to 1: llist = [] for k,v in df.iterrows(): llist.append((list(v[v==1].index))) llist [['Snack', 'Trans', 'Dop'], ['Trans'], ['Snack', 'Trans', 'Dop'], ['Snack'], ['Dop'], ['Trans', 'Dop']] Then I duplicate the lists (inside the list) which have only 1 element: llist2 = llist.copy() for i,l in enumerate(llist2): if len(l) == 1: llist2[i] = l + l if len(l) == 3: llist2[i] = ['all', 'all'] # this is to see how many triple elements I have in the list llist2.append(['Dop', 'Trans']) # This is to test that the order of the elements of the sublists doesnt matter llist2 [['all', 'all'], ['Trans', 'Trans'], ['all', 'all'], ['Snack', 'Snack'], ['Dop', 'Dop'], ['Trans', 'Dop'], ['Dop', 'Trans']] Later I create an empty dataframe with the indexes and columns of interest: elements = ['Trans', 'Dop', 'Snack', 'all'] foo = pd.DataFrame(columns=elements, index=elements) foo.fillna(0,inplace=True) foo Trans Dop Snack all Trans 0 0 0 0 Dop 0 0 0 0 Snack 0 0 0 0 all 0 0 0 0 Then I check and count, which combination is included in the original llist2 from itertools import combinations_with_replacement import collections comb = combinations_with_replacement(elements, 2) for l in comb: val = foo.loc[l[0],l[1]] foo.loc[l[0],l[1]] = val + llist2.count(list(l)) if (set(l).__len__() != 1) and (list(reversed(list(l))) in llist2): # check if the reversed element exists as well, but do not double count the diagonal elements val = foo.loc[l[0],l[1]] foo.loc[l[0],l[1]] = val + llist2.count(list(reversed(list(l)))) foo Trans Dop Snack all Trans 1 2 0 0 Dop 0 1 0 0 Snack 0 0 1 0 all 0 0 0 2 Last step would be to make foo symmetrical: import numpy as np foo = np.maximum( foo, foo.transpose() ) foo Trans Dop Snack all Trans 1 2 0 0 Dop 2 1 0 0 Snack 0 0 1 0 all 0 0 0 2 Looking for a more efficient/faster (avoiding all these loops) solution
[ "Managed to shrink it to one \"for\" loop. I am using \"any\" and \"all\" in combination with \"mask\".\nimport pandas as pd\nimport itertools\n\n\ndf = pd.DataFrame({'TFD': ['AA', 'SL', 'BB', 'D0', 'Dk', 'FF'],\n 'Snack': [1, 0, 1, 1, 0, 0],\n 'Trans': [1, 1, 1, 0, 0, 1],\n 'Dop': [1, 0, 1, 0, 1, 1]}).set_index('TFD')\n\ndf[\"all\"] = 0 # adding artifical columns so the results contains \"all\"\nlist_of_columns = list(df.columns)\nmy_result_list = [] # empty list where we put the results\ncomb = itertools.combinations_with_replacement(list_of_columns, 2)\nfor item in comb:\n temp_list = list_of_columns[:] # temp_list holds columns of interest\n if item[0] == item[1]:\n temp_list.remove(item[0])\n my_col_list = [item[0]] # my_col_list holds which occurance we count\n else:\n temp_list.remove(item[0])\n temp_list.remove(item[1])\n my_col_list = [item[0], item[1]]\n\n mask = df.loc[:, temp_list].any(axis=1) # creating mask so we know which rows to look at\n distance = df.loc[~mask, my_col_list].all(axis=1).sum() # calculating ocurrance\n my_result_list.append([item[0], item[1], distance]) # occurance info recorded in the list\n my_result_list.append([item[1], item[0], distance]) # occurance put in reverse so we get square form in the end\n\nresult = pd.DataFrame(my_result_list).drop_duplicates().pivot(index=1, columns=0, values=2) # construc DataFrame in squareform\nlist_of_columns.remove(\"all\")\nresult.loc[\"all\", \"all\"] = df.loc[:, list_of_columns].all(axis=1).sum() # fill in all/all occurances\nprint(result)\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074641437_pandas_python.txt
Q: Virtualenv not compatible with this system or executable Simply trying to create a virtual environment on my mac OSX 10.10.05 Running from the Terminal, already successfully made VirtualEnv on linux and windows OS on other computers. Tried troubleshooting this by adding a WORK_ON path to my bash profile, did not resolve. Online forums doesn't seem to address this, suggestions are to use mkvirtualenv which does not seem to be a downloadable package per pip, conda and easy_install... Anyways, if you're able to help that would be super appreciated. here's the terminal output: joshua ~ $ pip install --upgrade virtualenv Requirement already up-to-date: virtualenv in ./anaconda/lib/python3.5/site-packages joshua ~ $ virtualenv -p python3 test Running virtualenv with interpreter /Users/joshua/anaconda/bin/python3 Using base prefix '/Users/joshua/anaconda' New python executable in /Users/joshua/test/bin/python3 Also creating executable in /Users/joshua/test/bin/python ERROR: The executable /Users/joshua/test/bin/python3 is not functioning ERROR: It thinks sys.prefix is '/Users/joshua' (should be '/Users/joshua/test') ERROR: virtualenv is not compatible with this system or executable ...tried uninstalling virtualenv Successfully uninstalled virtualenv-15.1.0 joshua ~ $ pip install virtualenv Collecting virtualenv Using cached virtualenv-15.1.0-py2.py3-none-any.whl Installing collected packages: virtualenv Successfully installed virtualenv-15.1.0 joshua ~ $ virtualenv test -v Using base prefix '/Users/joshua/anaconda' Creating /Users/joshua/test/lib/python3.5 Symlinking Python bootstrap modules Symlinking /Users/joshua/test/lib/python3.5/config-3.5m Symlinking /Users/joshua/test/lib/python3.5/lib-dynload Symlinking /Users/joshua/test/lib/python3.5/plat-darwin Symlinking /Users/joshua/test/lib/python3.5/os.py Ignoring built-in bootstrap module: posix Symlinking /Users/joshua/test/lib/python3.5/posixpath.py Cannot import bootstrap module: nt Symlinking /Users/joshua/test/lib/python3.5/ntpath.py Symlinking /Users/joshua/test/lib/python3.5/genericpath.py Symlinking /Users/joshua/test/lib/python3.5/fnmatch.py Symlinking /Users/joshua/test/lib/python3.5/locale.py Symlinking /Users/joshua/test/lib/python3.5/encodings Symlinking /Users/joshua/test/lib/python3.5/codecs.py Symlinking /Users/joshua/test/lib/python3.5/stat.py Cannot import bootstrap module: UserDict Cannot import bootstrap module: copy_reg Symlinking /Users/joshua/test/lib/python3.5/types.py Symlinking /Users/joshua/test/lib/python3.5/re.py Cannot import bootstrap module: sre Symlinking /Users/joshua/test/lib/python3.5/sre_parse.py Symlinking /Users/joshua/test/lib/python3.5/sre_constants.py Symlinking /Users/joshua/test/lib/python3.5/sre_compile.py Cannot import bootstrap module: _abcoll Symlinking /Users/joshua/test/lib/python3.5/warnings.py Symlinking /Users/joshua/test/lib/python3.5/linecache.py Symlinking /Users/joshua/test/lib/python3.5/abc.py Symlinking /Users/joshua/test/lib/python3.5/io.py Symlinking /Users/joshua/test/lib/python3.5/_weakrefset.py Symlinking /Users/joshua/test/lib/python3.5/copyreg.py Symlinking /Users/joshua/test/lib/python3.5/tempfile.py Symlinking /Users/joshua/test/lib/python3.5/random.py Symlinking /Users/joshua/test/lib/python3.5/__future__.py Symlinking /Users/joshua/test/lib/python3.5/collections Symlinking /Users/joshua/test/lib/python3.5/keyword.py Symlinking /Users/joshua/test/lib/python3.5/tarfile.py Symlinking /Users/joshua/test/lib/python3.5/shutil.py Symlinking 
/Users/joshua/test/lib/python3.5/struct.py Symlinking /Users/joshua/test/lib/python3.5/copy.py Symlinking /Users/joshua/test/lib/python3.5/tokenize.py Symlinking /Users/joshua/test/lib/python3.5/token.py Symlinking /Users/joshua/test/lib/python3.5/functools.py Symlinking /Users/joshua/test/lib/python3.5/heapq.py Symlinking /Users/joshua/test/lib/python3.5/bisect.py Symlinking /Users/joshua/test/lib/python3.5/weakref.py Symlinking /Users/joshua/test/lib/python3.5/reprlib.py Symlinking /Users/joshua/test/lib/python3.5/base64.py Symlinking /Users/joshua/test/lib/python3.5/_dummy_thread.py Symlinking /Users/joshua/test/lib/python3.5/hashlib.py Symlinking /Users/joshua/test/lib/python3.5/hmac.py Symlinking /Users/joshua/test/lib/python3.5/imp.py Symlinking /Users/joshua/test/lib/python3.5/importlib Symlinking /Users/joshua/test/lib/python3.5/rlcompleter.py Symlinking /Users/joshua/test/lib/python3.5/operator.py Symlinking /Users/joshua/test/lib/python3.5/_collections_abc.py Symlinking /Users/joshua/test/lib/python3.5/_bootlocale.py Creating /Users/joshua/test/lib/python3.5/site-packages Writing /Users/joshua/test/lib/python3.5/site.py Writing /Users/joshua/test/lib/python3.5/orig-prefix.txt Writing /Users/joshua/test/lib/python3.5/no-global-site-packages.txt Creating parent directories for /Users/joshua/test/include Symlinking /Users/joshua/test/include/python3.5m Creating /Users/joshua/test/bin New python executable in /Users/joshua/test/bin/python Changed mode of /Users/joshua/test/bin/python to 0o755 Testing executable with /Users/joshua/test/bin/python -c "import sys;out=sys.stdout;getattr(out, "buffer", out).write(sys.prefix.encode("utf-8"))" ERROR: The executable /Users/joshua/test/bin/python is not functioning ERROR: It thinks sys.prefix is '/Users/joshua' (should be '/Users/joshua/test') ERROR: virtualenv is not compatible with this system or executable here's my current bash_profile: # Enable tab completion source ~/git-completion.bash # colors! green="\[\033[0;32m\]" blue="\[\033[0;34m\]" purple="\[\033[0;35m\]" reset="\[\033[0m\]" # Change command prompt source ~/git-prompt.sh export GIT_PS1_SHOWDIRTYSTATE=1 # '\u' adds the name of the current user to the prompt # '\$(__git_ps1)' adds git-related stuff # '\W' adds the name of the current directory export PS1="$purple\u$green\$(__git_ps1)$blue \W $ $reset" alias subl="/Applications/Sublime\ Text.app/Contents/SharedSupport/bin/subl" # Add Path export PATH="$HOME/anaconda/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:$PATH" # export PATH=$PATH:/users/Joshua/anaconda/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin # Locale $ export LC_ALL=en_US.UTF-8 $ export LANG=en_US.UTF-8 A: My limited understanding is that my Python interpreter and packages are managed under Anaconda using the Conda package manager, while my virtualenv was originally installed using pip. Uninstalling virtualenv with pip and re-installing it with conda fixed the issue: pip uninstall virtualenv conda install virtualenv A: I had this same issue when trying to install py2.7 on a newer system.
The root issue was that virtualenv was part of py3.7 and thus was not compatible: $ virtualenv -p python2.7 env Running virtualenv with interpreter /usr/local/bin/python2.7 New python executable in /Users/blah/env/bin/python ERROR: The executable /Users/blah/env/bin/python is not functioning ERROR: It thinks sys.prefix is u'/Library/Frameworks/Python.framework/Versions/2.7' (should be u'/Users/blah/env') ERROR: virtualenv is not compatible with this system or executable $ which virtualenv /Library/Frameworks/Python.framework/Versions/3.7/bin/virtualenv # install proper version of virtualenv $ pip2.7 install virtualenv $ /Library/Frameworks/Python.framework/Versions/2.7/bin/virtualenv -p python2.7 env $ . ./env/bin/activate (env) $
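Both fixes come down to making virtualenv belong to the interpreter you expect. Before reinstalling, a quick diagnostic sketch (assuming a Unix shell; the python3 -m venv fallback is the stdlib equivalent and sidesteps the mismatch entirely):

$ which -a virtualenv              # list every virtualenv script on PATH
$ head -1 "$(which virtualenv)"    # the shebang shows which Python owns it
$ python3 -m venv test             # stdlib alternative, no virtualenv package needed
$ source test/bin/activate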
Virtualenv not compatible with this system or executable
Simply trying to create a virtual environment on my mac OSX 10.10.05 Running from the Terminal, already successfully made VirtualEnv on linux and windows OS on other computers. Tried troubleshooting this by adding a WORK_ON path to my bash profile, did not resolve. Online forums doesn't seem to address this, suggestions are to use mkvirtualenv which does not seem to be a downloadable package per pip, conda and easy_install... Anyways, if you're able to help that would be super appreciated. here's the terminal output: joshua ~ $ pip install --upgrade virtualenv Requirement already up-to-date: virtualenv in ./anaconda/lib/python3.5/site-packages joshua ~ $ virtualenv -p python3 test Running virtualenv with interpreter /Users/joshua/anaconda/bin/python3 Using base prefix '/Users/joshua/anaconda' New python executable in /Users/joshua/test/bin/python3 Also creating executable in /Users/joshua/test/bin/python ERROR: The executable /Users/joshua/test/bin/python3 is not functioning ERROR: It thinks sys.prefix is '/Users/joshua' (should be '/Users/joshua/test') ERROR: virtualenv is not compatible with this system or executable ...tried uninstalling virtualenv Successfully uninstalled virtualenv-15.1.0 joshua ~ $ pip install virtualenv Collecting virtualenv Using cached virtualenv-15.1.0-py2.py3-none-any.whl Installing collected packages: virtualenv Successfully installed virtualenv-15.1.0 joshua ~ $ virtualenv test -v Using base prefix '/Users/joshua/anaconda' Creating /Users/joshua/test/lib/python3.5 Symlinking Python bootstrap modules Symlinking /Users/joshua/test/lib/python3.5/config-3.5m Symlinking /Users/joshua/test/lib/python3.5/lib-dynload Symlinking /Users/joshua/test/lib/python3.5/plat-darwin Symlinking /Users/joshua/test/lib/python3.5/os.py Ignoring built-in bootstrap module: posix Symlinking /Users/joshua/test/lib/python3.5/posixpath.py Cannot import bootstrap module: nt Symlinking /Users/joshua/test/lib/python3.5/ntpath.py Symlinking /Users/joshua/test/lib/python3.5/genericpath.py Symlinking /Users/joshua/test/lib/python3.5/fnmatch.py Symlinking /Users/joshua/test/lib/python3.5/locale.py Symlinking /Users/joshua/test/lib/python3.5/encodings Symlinking /Users/joshua/test/lib/python3.5/codecs.py Symlinking /Users/joshua/test/lib/python3.5/stat.py Cannot import bootstrap module: UserDict Cannot import bootstrap module: copy_reg Symlinking /Users/joshua/test/lib/python3.5/types.py Symlinking /Users/joshua/test/lib/python3.5/re.py Cannot import bootstrap module: sre Symlinking /Users/joshua/test/lib/python3.5/sre_parse.py Symlinking /Users/joshua/test/lib/python3.5/sre_constants.py Symlinking /Users/joshua/test/lib/python3.5/sre_compile.py Cannot import bootstrap module: _abcoll Symlinking /Users/joshua/test/lib/python3.5/warnings.py Symlinking /Users/joshua/test/lib/python3.5/linecache.py Symlinking /Users/joshua/test/lib/python3.5/abc.py Symlinking /Users/joshua/test/lib/python3.5/io.py Symlinking /Users/joshua/test/lib/python3.5/_weakrefset.py Symlinking /Users/joshua/test/lib/python3.5/copyreg.py Symlinking /Users/joshua/test/lib/python3.5/tempfile.py Symlinking /Users/joshua/test/lib/python3.5/random.py Symlinking /Users/joshua/test/lib/python3.5/__future__.py Symlinking /Users/joshua/test/lib/python3.5/collections Symlinking /Users/joshua/test/lib/python3.5/keyword.py Symlinking /Users/joshua/test/lib/python3.5/tarfile.py Symlinking /Users/joshua/test/lib/python3.5/shutil.py Symlinking /Users/joshua/test/lib/python3.5/struct.py Symlinking /Users/joshua/test/lib/python3.5/copy.py 
Symlinking /Users/joshua/test/lib/python3.5/tokenize.py Symlinking /Users/joshua/test/lib/python3.5/token.py Symlinking /Users/joshua/test/lib/python3.5/functools.py Symlinking /Users/joshua/test/lib/python3.5/heapq.py Symlinking /Users/joshua/test/lib/python3.5/bisect.py Symlinking /Users/joshua/test/lib/python3.5/weakref.py Symlinking /Users/joshua/test/lib/python3.5/reprlib.py Symlinking /Users/joshua/test/lib/python3.5/base64.py Symlinking /Users/joshua/test/lib/python3.5/_dummy_thread.py Symlinking /Users/joshua/test/lib/python3.5/hashlib.py Symlinking /Users/joshua/test/lib/python3.5/hmac.py Symlinking /Users/joshua/test/lib/python3.5/imp.py Symlinking /Users/joshua/test/lib/python3.5/importlib Symlinking /Users/joshua/test/lib/python3.5/rlcompleter.py Symlinking /Users/joshua/test/lib/python3.5/operator.py Symlinking /Users/joshua/test/lib/python3.5/_collections_abc.py Symlinking /Users/joshua/test/lib/python3.5/_bootlocale.py Creating /Users/joshua/test/lib/python3.5/site-packages Writing /Users/joshua/test/lib/python3.5/site.py Writing /Users/joshua/test/lib/python3.5/orig-prefix.txt Writing /Users/joshua/test/lib/python3.5/no-global-site-packages.txt Creating parent directories for /Users/joshua/test/include Symlinking /Users/joshua/test/include/python3.5m Creating /Users/joshua/test/bin New python executable in /Users/joshua/test/bin/python Changed mode of /Users/joshua/test/bin/python to 0o755 Testing executable with /Users/joshua/test/bin/python -c "import sys;out=sys.stdout;getattr(out, "buffer", out).write(sys.prefix.encode("utf-8"))" ERROR: The executable /Users/joshua/test/bin/python is not functioning ERROR: It thinks sys.prefix is '/Users/joshua' (should be '/Users/joshua/test') ERROR: virtualenv is not compatible with this system or executable here's current bash_profile: # Enable tab completion source ~/git-completion.bash # colors! green="\[\033[0;32m\]" blue="\[\033[0;34m\]" purple="\[\033[0;35m\]" reset="\[\033[0m\]" # Change command prompt source ~/git-prompt.sh export GIT_PS1_SHOWDIRTYSTATE=1 # '\u' adds the name of the current user to the prompt # '\$(__git_ps1)' adds git-related stuff # '\W' adds the name of the current directory export PS1="$purple\u$green\$(__git_ps1)$blue \W $ $reset" alias subl="/Applications/Sublime\ Text.app/Contents/SharedSupport/bin/subl" # Add Path export PATH="$HOME/anaconda/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:$PATH" # export PATH=$PATH:/users/Joshua/anaconda/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin # Locale $ export LC_ALL=en_US.UTF-8 $ export LANG=en_US.UTF-8
[ "My limited undestanding is that my python interpreter and packages are managed under Anaconda using Conda package manager, and my virtualenv was originally installed using pip..\nuninstalling virtualenv with pip and re-installing with conda fixed the issue\npip uninstall virtualenv\n\nconda install virtualenv\n\n", "I had this same issue when trying to install py2.7 on a newer system.\nThe root issue was that virtualenv was part of py3.7 and thus was not compatible:\n$ virtualenv -p python2.7 env\nRunning virtualenv with interpreter /usr/local/bin/python2.7\nNew python executable in /Users/blah/env/bin/python\nERROR: The executable /Users/blah/env/bin/python is not functioning\nERROR: It thinks sys.prefix is u'/Library/Frameworks/Python.framework/Versions/2.7' (should be u'/Users/blah/env')\nERROR: virtualenv is not compatible with this system or executable\n\n$ which virtualenv\n/Library/Frameworks/Python.framework/Versions/3.7/bin/virtualenv\n\n# install proper version of virtualenv \n$ pip2.7 install virtualenv\n\n$ /Library/Frameworks/Python.framework/Versions/2.7/bin/virtualenv -p python2.7 env\n\n$ . ./env/bin/activate\n(env) $ \n\n" ]
[ 34, 0 ]
[]
[]
[ "bash", "python", "virtualenv" ]
stackoverflow_0044575994_bash_python_virtualenv.txt
Q: Keep observations with two or more consecutive years of data by group I have a dataset consisting of directorid, match_id, and calyear. I would like to keep only observations by director_id and match_id that have at least 2 consecutive years of data. I have tried a few different ways to do this, and haven't been able to get it quite right. The few different things I have tried have also required multiple steps and weren't particularly clean. Here is what I have: director_id match_id calyear 282 1111 2006 282 1111 2007 356 2222 2005 356 2222 2007 600 3333 2010 600 3333 2011 600 3333 2012 600 3355 2013 600 3355 2015 600 3355 2016 753 4444 2005 753 4444 2008 753 4444 2009 Here is what I want: director_id match_id calyear 282 1111 2006 282 1111 2007 600 3333 2010 600 3333 2011 600 3333 2012 600 3355 2015 600 3355 2016 753 4444 2008 753 4444 2009 I started by creating a variable equal to one: df['tosum'] = 1 And then count the number of observations where the difference in calyear by group is equal to 1. df['num_years'] = ( df.groupby(['directorid','match_id'])['tosum'].transform('sum').where(df.groupby(['match_id'])['calyear'].diff()==1, np.nan) ) And then I keep all observations with 'num_years' greater than 1. However, the first observation per director_id match_id gets set equal to NaN. In general, I think I am going about this in a convoluted way...it feels like there should be a simpler way to achieve my goal. Any help is greatly appreciated! A: Yes you need to groupby 'director_id', 'match_id' and then do a transform but the transform just needs to look at the difference between next element in both directions. In one direction you need to see if it equals 1 and in another -1 and then subset using the resulting True/False values. df = df[ df.groupby(["director_id", "match_id"])["calyear"].transform( lambda x: (x.diff().eq(1)) | (x[::-1].diff().eq(-1)) ) ] print(df): director_id match_id calyear 0 282 1111 2006 1 282 1111 2007 4 600 3333 2010 5 600 3333 2011 6 600 3333 2012 8 600 3355 2015 9 600 3355 2016 11 753 4444 2008 12 753 4444 2009
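To make the transform's logic concrete, here is the same forward/backward test on a single group of years — a small illustrative sketch, not part of the original answer:

import pandas as pd

s = pd.Series([2005, 2007, 2008, 2009])
fwd = s.diff().eq(1)         # True where the previous year is exactly one less
bwd = s[::-1].diff().eq(-1)  # True where the next year is exactly one more
print(fwd | bwd)             # 2005 -> False; 2007, 2008, 2009 -> True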
Keep observations with two or more consecutive years of data by group
I have a dataset consisting of directorid, match_id, and calyear. I would like to keep only observations by director_id and match_id that have at least 2 consecutive years of data. I have tried a few different ways to do this, and haven't been able to get it quite right. The few different things I have tried have also required multiple steps and weren't particularly clean. Here is what I have: director_id match_id calyear 282 1111 2006 282 1111 2007 356 2222 2005 356 2222 2007 600 3333 2010 600 3333 2011 600 3333 2012 600 3355 2013 600 3355 2015 600 3355 2016 753 4444 2005 753 4444 2008 753 4444 2009 Here is what I want: director_id match_id calyear 282 1111 2006 282 1111 2007 600 3333 2010 600 3333 2011 600 3333 2012 600 3355 2015 600 3355 2016 753 4444 2008 753 4444 2009 I started by creating a variable equal to one: df['tosum'] = 1 And then count the number of observations where the difference in calyear by group is equal to 1. df['num_years'] = ( df.groupby(['directorid','match_id'])['tosum'].transform('sum').where(df.groupby(['match_id'])['calyear'].diff()==1, np.nan) ) And then I keep all observations with 'num_years' greater than 1. However, the first observation per director_id match_id gets set equal to NaN. In general, I think I am going about this in a convoluted way...it feels like there should be a simpler way to achieve my goal. Any help is greatly appreciated!
[ "Yes you need to groupby 'director_id', 'match_id' and then do a transform but the transform just needs to look at the difference between next element in both directions. In one direction you need to see if it equals 1 and in another -1 and then subset using the resulting True/False values.\ndf = df[\n df.groupby([\"director_id\", \"match_id\"])[\"calyear\"].transform(\n lambda x: (x.diff().eq(1)) | (x[::-1].diff().eq(-1))\n )\n]\n\nprint(df):\n director_id match_id calyear\n0 282 1111 2006\n1 282 1111 2007\n4 600 3333 2010\n5 600 3333 2011\n6 600 3333 2012\n8 600 3355 2015\n9 600 3355 2016\n11 753 4444 2008\n12 753 4444 2009\n\n" ]
[ 2 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074661298_pandas_python.txt
Q: Print output includes 'None' def backwards_alphabet(curr_letter): if curr_letter == 'a': print(curr_letter) else: print(curr_letter) prev_letter = chr(ord(curr_letter) - 1) backwards_alphabet(prev_letter) starting_letter = input() print (backwards_alphabet(starting_letter)) #this is the code I wrote The output includes "None" but I have no idea why. Image of output A: The function print takes a parameter - you are giving it the result of backwards_alphabet(starting_letter). Since you aren't explicit about what backwards_alphabet() returns - which you would do by including return 'this is what I am returning' - it will return None by default. So, you are calling print(None) and it prints 'None'. Since your function backwards_alphabet() already has all of the printing within it, you don't want to do print(backwards_alphabet(...)); you just want to call backwards_alphabet(...) by itself.
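A tiny sketch of the underlying rule — a function with no return statement hands back None, so wrapping the call in print() prints that None:

def greet():
    print("hi")    # prints, but the function still returns None

value = greet()    # prints "hi"
print(value)       # prints "None"

greet()            # the fix: just call it, without wrapping it in print()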
Print output includes 'None'
def backwards_alphabet(curr_letter): if curr_letter == 'a': print(curr_letter) else: print(curr_letter) prev_letter = chr(ord(curr_letter) - 1) backwards_alphabet(prev_letter) starting_letter = input() print (backwards_alphabet(starting_letter)) #this is the code i wrote The output includes "None" but I have no idea why. Image of output
[ "The function print takes a parameter - you are giving it the result of backwards_alphabet(starting_letter).\nSince you aren't explicit about what backwards_alphabet() returns - which you do would with by including return 'this is what I am returning', it will return None by default.\nSo, you are calling print(None) and it prints 'None'.\nSince your function backwards_alphabet() already has all of the printing within it, you don't want to do print(backwards_alphabet(...)), you just want to call backwards_alphabet(...) by itself.\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074661626_python.txt
Q: Simplify Python array retrieval of values I have the following code at the moment which works perfectly:- my_array = [ ['var1', 1], ['var2', 2], ['var3', 3], ['var4', 4], ['var5', 5] ] for i in range(len(my_array)): if my_array[i][0] == "var1": var_a = my_array[i][1] elif my_array[i][0] == "var2": var_b = my_array[i][1] elif my_array[i][0] == "var3": var_c = my_array[i][1] elif my_array[i][0] == "var4": var_d = my_array[i][1] elif my_array[i][0] == "var5": var_e = my_array[i][1] print(var_a) print(var_b) print(var_c) print(var_d) print(var_e) Is there a way I can simplify the way I get the values, instead of using multiple "elif's"? I tried something like this:- var_f = my_array[i]["var1"] print(var_f) but I get an error:- TypeError: list indices must be integer, not str Any help would be very much appreciated! Thank you A: You can convert my_array to dict to simplify the retrieval of values: my_array = [["var1", 1], ["var2", 2], ["var3", 3], ["var4", 4], ["var5", 5]] dct = dict(my_array) # print var1 print(dct["var1"]) Prints: 1
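One detail worth adding to the dict approach: indexing with a missing key raises KeyError, while .get lets you supply a default (the key "var6" below is hypothetical, purely for illustration):

dct = dict(my_array)
print(dct["var1"])         # 1
print(dct.get("var6", 0))  # 0 instead of a KeyError for a missing key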
Simplify Python array retrieval of values
I have the following code at the moment which works perfectly:- my_array = [ ['var1', 1], ['var2', 2], ['var3', 3], ['var4', 4], ['var5', 5] ] for i in range(len(my_array)): if my_array[i][0] == "var1": var_a = my_array[i][1] elif my_array[i][0] == "var2": var_b = my_array[i][1] elif my_array[i][0] == "var3": var_c = my_array[i][1] elif my_array[i][0] == "var4": var_d = my_array[i][1] elif my_array[i][0] == "var5": var_e = my_array[i][1] print(var_a) print(var_b) print(var_c) print(var_d) print(var_e) Is there a way I can simplify the way I get the values, instead of using multiple "elif's"? I tried something like this:- var_f = my_array[i]["var1"] print(var_f) but I get an error:- TypeError: list indices must be integer, not str Any help would be very much appreciated! Thank you
[ "You can convert my_array to dict to simplify the retrieval of values:\nmy_array = [[\"var1\", 1], [\"var2\", 2], [\"var3\", 3], [\"var4\", 4], [\"var5\", 5]]\n\ndct = dict(my_array)\n\n# print var1\nprint(dct[\"var1\"])\n\nPrints:\n1\n\n" ]
[ 0 ]
[]
[]
[ "list", "python", "python_3.x" ]
stackoverflow_0074661638_list_python_python_3.x.txt
Q: Dynamic Importing with Pyinstaller Executable I’m trying to write a script that dynamically imports and uses any modules a user places in a folder. The dynamic importing works fine when I’m running it via python, but when I try to compile it into a Pyinstaller executable, it breaks down and throws me a ModuleNotFoundError, saying it can't find a module with the same name as the folder the modules are placed in. The executable sits alongside this folder, which contains all the modules to be dynamically imported, so my import statements look like __import__("FOLDERNAME.MODULENAME"). The script must be able to run the modules dropped in this folder without being recompiled. What's strange is that the ModuleNotFoundError says No module named 'FOLDERNAME', despite that just being the name of the folder containing the modules; I'd expect it to complain about No module named 'FOLDERNAME.MODULENAME' instead. In my googling, I found this question (pyinstaller: adding dynamically loaded modules), which is pretty similar, but the answer they provided from the docs doesn’t really help. How do I give additional files on the command line if I don’t know what files are going to be in the folder in the first place? That kind of defeats the purpose of dynamic importing. I've attempted to use the hidden-import command line flag, but the compiler output said Hidden import '[X]' not found. Maybe I'm just using it wrong? And I have no idea how to modify the spec file or write a hook file to do what I need. Any help would be greatly appreciated. A: I was working on a similar functionality to implement a Plugin Architecture and ran into the same issue. Quoting @Gao Yuan from a similar question :- Pyinstaller (currently v 3.4) can't detect imports like importlib.import_module(). The issue and solutions are detailed in Pyinstaller's documentation, which I pasted below as an entry point. But of course there is always a way. Instead you can use importlib.util.spec_from_file_location to load and then compile the module. Minimal working example interface.py # from dependency import VARIABLE # from PySide6.QtCore import Qt def hello_world(): print(f"this is a plugin calling QT {Qt.AlignmentFlag.AlignAbsolute}") print(f"this is a plugin calling DEPENDENCY {VARIABLE}") cli.py import sys import types from pprint import pprint import importlib.util import sys if __name__ == "__main__": module_name = "dependency" module_file = "plugins/abcplugin/dependency.py" if spec:=importlib.util.spec_from_file_location(module_name, module_file): dependency = importlib.util.module_from_spec(spec) sys.modules[module_name] = dependency spec.loader.exec_module(dependency) module_name = "interface" module_file = "plugins/abcplugin/interface.py" if spec:=importlib.util.spec_from_file_location(module_name, module_file): interface = importlib.util.module_from_spec(spec) sys.modules[module_name] = interface spec.loader.exec_module(interface) sys.modules[module_name].hello_world() project structure cli.exe plugins abcplugin __init__.py interface.py dependency.py complus __init__.py ... Thumb Rules Plugin must always be relative to .exe As you can notice I commented out # from dependency import VARIABLE in line one of interface.py. If your scripts depend on scripts in the same plugin, then you must load dependency.py before loading interface.py. You can then un-comment the line.
In the pyinstaller .spec file you need to add hiddenimports, in this case PySide6, and then un-comment # from PySide6.QtCore import Qt Always use absolute imports when designing a plugin, in reference to your project root folder. You can then set the module names to plugins.abcplugin.interface and plugins.abcplugin.dependency, and also update from dependency import VARIABLE to from plugins.abcplugin.dependency import VARIABLE Hope people find this useful, cheers!!
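Coming back to the original requirement — loading whatever modules a user drops into the folder — the same spec_from_file_location trick can be wrapped in a directory scan. A minimal sketch, assuming the plugins folder sits next to the frozen executable (PyInstaller sets sys.frozen on the frozen process) and that the file names are hypothetical:

import importlib.util
import pathlib
import sys

# next to the frozen .exe; falls back to this script's folder when not frozen
base = pathlib.Path(sys.executable if getattr(sys, "frozen", False) else __file__).parent
plugin_dir = base / "plugins"

loaded = {}
for path in sorted(plugin_dir.glob("*.py")):
    name = path.stem
    if spec := importlib.util.spec_from_file_location(name, path):
        module = importlib.util.module_from_spec(spec)
        sys.modules[name] = module     # register so later imports resolve
        spec.loader.exec_module(module)
        loaded[name] = module

# each module is now usable by name, e.g. loaded["interface"].hello_world()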
Dynamic Importing with Pyinstaller Executable
I’m trying to write a script that dynamically imports and uses any modules a user places in a folder. The dynamic importing works fine when I’m running it via python, but when I try to compile it into a Pyinstaller executable, it breaks down and throws me a ModuleNotFoundError, saying it can't find a module with the same name as the folder the modules are placed in. The executable sits alongside this folder, which contains all the modules to be dynamically imported, so my import statements look like __import__("FOLDERNAME.MODULENAME"). The script must be able to run the modules dropped in this folder without being recompiled. What's strange is that the ModuleNotFoundError says No module named 'FOLDERNAME', despite that just being the name of the folder containing the modules, I'd expect it to complain about No module named 'FOLDERNAME.MODULENAME' instead. In my googling, I found this question (pyinstaller: adding dynamically loaded modules), which is pretty similar, but the answer they provided from the docs doesn’t really help. How do I give additional files on the command line if I don’t know what files are going to be in the folder in the first place? That kind of beats the purpose of dynamic importing. I've attempted to use the hidden-import command line flag, but the compiler output said Hidden import '[X]' not found. Maybe I'm just using it wrong? And I have no idea how to modify the spec file or write a hook file to do what I need. Any help would be greatly appreciated.
[ "I was working on a similar functionality to implement a Plugin Architecture and ran into the same issue. Quoting @Gao Yuan from a similar question :-\n\nPyinstaller (currently v 3.4) can't detect imports like importlib.import_module(). The issue and solutions are detailed in Pyinstaller's documentation, which I pasted below as an entry point.\n\nBut of-course there is always a way. Instead you can use importlib.util.spec_from_file_location to load and then compile the module\nMinimum wokring example\niterface.py\n# from dependency import VARIABLE\n# from PySide6.QtCore import Qt\n\ndef hello_world():\n print(f\"this is a plugin calling QT {Qt.AlignmentFlag.AlignAbsolute}\")\n print(f\"this is a plugin calling DEPENDENCY {VARIABLE}\")\n\ncli.py\nimport sys\nimport types\nfrom pprint import pprint\nimport importlib.util\nimport sys\nif __name__ == \"__main__\":\n module_name = \"dependency\"\n module_file = \"plugins/abcplugin/dependency.py\"\n if spec:=importlib.util.spec_from_file_location(module_name, module_file):\n dependency = importlib.util.module_from_spec(spec)\n sys.modules[module_name] = dependency\n spec.loader.exec_module(dependency)\n\n module_name = \"interface\"\n module_file = \"plugins/abcplugin/interface.py\"\n if spec:=importlib.util.spec_from_file_location(module_name, module_file):\n interface = importlib.util.module_from_spec(spec)\n sys.modules[module_name] = interface\n spec.loader.exec_module(interface)\n\n sys.modules[module_name].hello_world()\n\nproject structure\ncli.exe\nplugins\n abcplugin\n __init__.py\n interface.py\n dependency.py\n\n complus\n __init__.py\n ...\n\nThumb Rules\n\nPlugin must always be relative to .exe\nAs you can notice I commented out # from dependency import VARIABLE in line one of interface.py. If you scripts depend on scripts in the same plugin, then you must load dependency.py before loading interface.py. You can then un-comment the line.\nIn pyinstaller.spec file you need to add hiddenimports in this case PySide6 and then un-comment # from PySide6.QtCore import Qt\nAlways use absolute imports when designing a plugin in reference to your project root folder. You can then set the module name to plugins.abcplugin.interface and plugins.abcplugin.dependency and also update from dependency import VARIABLE to from plugins.abcplugin.dependency import VARIABLE \n\nHope people find this usefull, cheers!!\n" ]
[ 0 ]
[]
[]
[ "dynamic_import", "pyinstaller", "python" ]
stackoverflow_0071162951_dynamic_import_pyinstaller_python.txt
Q: Merge lists within dictionaries with the same keys I have the following three dictionaries within a list like so: dict1 = {'key1':'x', 'key2':['one', 'two', 'three']} dict2 = {'key1':'x', 'key2':['four', 'five', 'six']} dict3 = {'key1':'y', 'key2':['one', 'two', 'three']} list = [dict1, dict2, dict3] I'd like to merge the dictionaries that have the same value for key1 into a single dictionary with merged values (lists in this case) for key2 like so: new_dict = {'key1':'x', 'key2':['one', 'two', 'three', 'four', 'five', 'six']} list = [new_dict, dict3] I've come up with a very brutish solution riddled with hard codes and loops. I'd like to employ some higher-order functions but I'm new to those. A: With the help of itertools.groupby and itertools.chain, your goal can be achieved in a single line: from itertools import groupby from itertools import chain dict1 = {'key1':'x', 'key2':['one', 'two', 'three']} dict2 = {'key1':'x', 'key2':['four', 'five', 'six']} dict3 = {'key1':'y', 'key2':['one', 'two', 'three']} list_of_dicts = [dict1, dict2, dict3] result = [{'key1': k, 'key2': list(chain(*[x['key2'] for x in v]))} for k, v in groupby(list_of_dicts, lambda x: x['key1'])] print(result) [{'key1': 'x', 'key2': ['one', 'two', 'three', 'four', 'five', 'six']}, {'key1': 'y', 'key2': ['one', 'two', 'three']}] A: Build an intermediate dict that uses key1 as a key to aggregate the key2 lists, and then build the final list of dicts out of that: >>> my_dicts = [ ... {'key1':'x', 'key2':['one', 'two', 'three']}, ... {'key1':'x', 'key2':['four', 'five', 'six']}, ... {'key1':'y', 'key2':['one', 'two', 'three']}, ... ] >>> agg_dict = {} >>> for d in my_dicts: ... agg_dict.setdefault(d['key1'], []).extend(d['key2']) ... >>> [{'key1': key1, 'key2': key2} for key1, key2 in agg_dict.items()] [{'key1': 'x', 'key2': ['one', 'two', 'three', 'four', 'five', 'six']}, {'key1': 'y', 'key2': ['one', 'two', 'three']}]
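One caveat on the groupby answer: itertools.groupby only merges adjacent items, so if dicts sharing a key1 are not already next to each other, the groups split. Sorting first makes it safe — a small sketch, not part of the original answer:

from itertools import groupby, chain

list_of_dicts.sort(key=lambda d: d['key1'])  # groupby needs equal keys adjacent
result = [{'key1': k, 'key2': list(chain.from_iterable(d['key2'] for d in g))}
          for k, g in groupby(list_of_dicts, key=lambda d: d['key1'])]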
Merge lists within dictionaries with the same keys
I have the following three dictionaries within a list like so: dict1 = {'key1':'x', 'key2':['one', 'two', 'three']} dict2 = {'key1':'x', 'key2':['four', 'five', 'six']} dict3 = {'key1':'y', 'key2':['one', 'two', 'three']} list = [dict1, dict2, dict3] I'd like to merge the dictionaries that have the same value for key1 into a single dictionary with merged values (lists in this case) for key2 like so: new_dict = {'key1':'x', 'key2':['one', 'two', 'three', 'four', 'five', 'six']} list = [new_dict, dict3] I've come up with a very brutish solution riddled with hard codes and loops. I'd like to employ some higher-order functions but I'm new to those.
[ "With the help of itertools.groupby and itertools.chain, your goal can be achieved in a single line:\nfrom itertools import groupby\nfrom itertools import chain\n\ndict1 = {'key1':'x', 'key2':['one', 'two', 'three']}\ndict2 = {'key1':'x', 'key2':['four', 'five', 'six']}\ndict3 = {'key1':'y', 'key2':['one', 'two', 'three']}\nlist_of_dicts = [dict1, dict2, dict3]\n\nresult = [{'key1': k, 'key2': list(chain(*[x['key2'] for x in v]))} for k, v in groupby(list_of_dicts, lambda x: x['key1'])]\n\nprint(result)\n[{'key1': 'x', 'key2': ['one', 'two', 'three', 'four', 'five', 'six']}, {'key1': 'y', 'key2': ['one', 'two', 'three']}]\n\n", "Build an intermediate dict that uses key1 as a key to aggregate the key2 lists, and then build the final list of dicts out of that:\n>>> my_dicts = [\n... {'key1':'x', 'key2':['one', 'two', 'three']},\n... {'key1':'x', 'key2':['four', 'five', 'six']},\n... {'key1':'y', 'key2':['one', 'two', 'three']},\n... ]\n>>> agg_dict = {}\n>>> for d in my_dicts:\n... agg_dict.setdefault(d['key1'], []).extend(d['key2'])\n...\n>>> [{'key1': key1, 'key2': key2} for key1, key2 in agg_dict.items()]\n[{'key1': 'x', 'key2': ['one', 'two', 'three', 'four', 'five', 'six']}, {'key1': 'y', 'key2': ['one', 'two', 'three']}]\n\n" ]
[ 2, 0 ]
[]
[]
[ "dictionary", "list", "python" ]
stackoverflow_0074661388_dictionary_list_python.txt
Q: Can FastAPI guarantee a sync handler will never block the main application thread? I have the following FastAPI application: from fastapi import FastAPI import socket app = FastAPI() @app.get("/") async def root(): return {"message": "Hello World"} @app.get("/healthcheck") def health_check(): result = some_network_operation() return result def some_network_operation(): HOST = "192.168.30.12" # This host does not exist so the connection will time out PORT = 4567 with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: s.settimeout(10) s.connect((HOST, PORT)) s.sendall(b"Are you ok?") data = s.recv(1024) print(data) This is a simple application with two routes: / handler that is async /healthcheck handler that is sync With this particular example, if you call /healthcheck, it won't complete until after 10 seconds because the socket connection will timeout. However, if you make a call to / in the meantime, it will return the response right away because FastAPI's main thread is not blocked. This makes sense because according to the docs, FastAPI runs sync handlers on an external threadpool. My question is, if it is at all possible for us to block the application (block FastAPI's main thread) by doing something inside the health_check method. Perhaps by acquiring the global interpreter lock? Some other kind of lock? A: Yes, if you try to do sync work in a async method it will block FastAPI, something like this: @router.get("/healthcheck") async def health_check(): result = some_network_operation() return result Where some_network_operation() is blocking the event loop because it is a synchronous method. A: I think I may have an answer to my question, which is that there are some weird edge cases where a sync endpoint handler can block FastAPI. For instance, if we adjust the some_network_operation in my example to the following, it will block the entire application. def some_network_operation(): """ No, this is not a network operation, but it illustrates the point """ block = pow (363,100000000000000) I reached this conclusion based on this question: pow function blocking all threads with ThreadPoolExecutor. So, it looks like the GIL maybe the culprit here. That SO question suggests using the multiprocessing module (which will get around GIL). However, I tried this, and it still resulted in the same behavior. So my root problem remains unsolved. Either way, here is the entire example in the question edited to reproduce the problem: from fastapi import FastAPI app = FastAPI() @app.get("/") async def root(): return {"message": "Hello World"} @app.get("/healthcheck") def health_check(): result = some_network_operation() return result def some_network_operation(): block = pow(363,100000000000000)
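For the CPU-bound case the second answer runs into, one hedged workaround is to push the work onto a process pool, so the GIL-holding computation happens in a different process while the event loop stays free. A sketch, not the only pattern — in production you would typically create and shut down the pool in a startup/lifespan hook:

import asyncio
from concurrent.futures import ProcessPoolExecutor
from fastapi import FastAPI

app = FastAPI()
pool = ProcessPoolExecutor()

def cpu_heavy() -> int:
    return pow(363, 1_000_000)  # stand-in for the blocking computation

@app.get("/healthcheck")
async def health_check():
    loop = asyncio.get_running_loop()
    # the GIL is held in the worker process, not in the server process
    result = await loop.run_in_executor(pool, cpu_heavy)
    return {"digits": len(str(result))}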
Can FastAPI guarantee a sync handler will never block the main application thread?
I have the following FastAPI application: from fastapi import FastAPI import socket app = FastAPI() @app.get("/") async def root(): return {"message": "Hello World"} @app.get("/healthcheck") def health_check(): result = some_network_operation() return result def some_network_operation(): HOST = "192.168.30.12" # This host does not exist so the connection will time out PORT = 4567 with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: s.settimeout(10) s.connect((HOST, PORT)) s.sendall(b"Are you ok?") data = s.recv(1024) print(data) This is a simple application with two routes: / handler that is async /healthcheck handler that is sync With this particular example, if you call /healthcheck, it won't complete until after 10 seconds because the socket connection will timeout. However, if you make a call to / in the meantime, it will return the response right away because FastAPI's main thread is not blocked. This makes sense because according to the docs, FastAPI runs sync handlers on an external threadpool. My question is, if it is at all possible for us to block the application (block FastAPI's main thread) by doing something inside the health_check method. Perhaps by acquiring the global interpreter lock? Some other kind of lock?
[ "Yes, if you try to do sync work in a async method it will block FastAPI, something like this:\[email protected](\"/healthcheck\")\nasync def health_check():\n result = some_network_operation()\n return result\n\nWhere some_network_operation() is blocking the event loop because it is a synchronous method.\n", "I think I may have an answer to my question, which is that there are some weird edge cases where a sync endpoint handler can block FastAPI.\nFor instance, if we adjust the some_network_operation in my example to the following, it will block the entire application.\ndef some_network_operation():\n \"\"\" No, this is not a network operation, but it illustrates the point \"\"\"\n block = pow (363,100000000000000)\n\nI reached this conclusion based on this question: pow function blocking all threads with ThreadPoolExecutor.\nSo, it looks like the GIL maybe the culprit here.\nThat SO question suggests using the multiprocessing module (which will get around GIL). However, I tried this, and it still resulted in the same behavior. So my root problem remains unsolved.\nEither way, here is the entire example in the question edited to reproduce the problem:\nfrom fastapi import FastAPI\n\napp = FastAPI()\n\n\[email protected](\"/\")\nasync def root():\n return {\"message\": \"Hello World\"}\n\n\[email protected](\"/healthcheck\")\ndef health_check():\n result = some_network_operation()\n return result\n\n\ndef some_network_operation():\n block = pow(363,100000000000000)\n\n" ]
[ 0, 0 ]
[]
[]
[ "fastapi", "python", "sockets", "tcp" ]
stackoverflow_0074636003_fastapi_python_sockets_tcp.txt
Q: import custom python module in azure ml deployment environment I have an sklearn k-means model. I am training the model and saving it in a pickle file so I can deploy it later using azure ml library. The model that I am training uses a custom Feature Encoder called MultiColumnLabelEncoder. The pipeline model is defined as follows: # Pipeline kmeans = KMeans(n_clusters=3, random_state=0) pipe = Pipeline([ ("encoder", MultiColumnLabelEncoder()), ('k-means', kmeans), ]) #Training the pipeline model = pipe.fit(visitors_df) prediction = model.predict(visitors_df) #save the model in pickle/joblib format filename = 'k_means_model.pkl' joblib.dump(model, filename) The model saving works fine. The Deployment steps are the same as the steps in this link : https://notebooks.azure.com/azureml/projects/azureml-getting-started/html/how-to-use-azureml/deploy-to-cloud/model-register-and-deploy.ipynb However the deployment always fails with this error : File "/var/azureml-server/create_app.py", line 3, in <module> from app import main File "/var/azureml-server/app.py", line 27, in <module> import main as user_main File "/var/azureml-app/main.py", line 19, in <module> driver_module_spec.loader.exec_module(driver_module) File "/structure/azureml-app/score.py", line 22, in <module> importlib.import_module("multilabelencoder") File "/azureml-envs/azureml_b707e8c15a41fd316cf6c660941cf3d5/lib/python3.6/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ModuleNotFoundError: No module named 'multilabelencoder' I understand that pickle/joblib has some problems unpickling the custom function MultiLabelEncoder. That's why I defined this class in a separate python script (which I also executed). I called this custom function in the training python script, in the deployment script and in the scoring python file (score.py). The importing in the score.py file is not successful. So my question is how can I import a custom python module into the azure ml deployment environment? Thank you in advance. EDIT: This is my .yml file name: project_environment dependencies: # The python interpreter version. # Currently Azure ML only supports 3.5.2 and later. - python=3.6.2 - pip: - multilabelencoder==1.0.4 - scikit-learn - azureml-defaults==1.0.74.* - pandas channels: - conda-forge A: In fact, the solution was to import my customized class MultiColumnLabelEncoder as a pip package (you can find it through pip install multilabelencoder==1.0.5). Then I passed the pip package to the .yml file or in the InferenceConfig of the azure ml environment. In the score.py file, I imported the class as follows : from multilabelencoder import multilabelencoder def init(): global model # Call the custom encoder to be used for unpickling the model encoder = multilabelencoder.MultiColumnLabelEncoder() # Get the path where the deployed model can be found. model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'k_means_model_45.pkl') model = joblib.load(model_path) Then the deployment was successful.
One more important thing is I had to use the same pip package (multilabelencoder) in the training pipeline, as here : from multilabelencoder import multilabelencoder pipe = Pipeline([ ("encoder", multilabelencoder.MultiColumnLabelEncoder(columns)), ('k-means', kmeans), ]) #Training the pipeline trainedModel = pipe.fit(df) A: I am facing the same problem, trying to deploy a model that depends on some of my own scripts, and got the error message: ModuleNotFoundError: No module named 'my-own-module-name' Found this "Private wheel files" solution in MS documentation and it works. The difference from the solution above is that now I do not need to publish my scripts to pip. I think many people might face the same situation where for some reason you cannot or do not want to publish your scripts. Instead, your own wheel file is saved under your own blob storage. Following the documentation, I did the following steps and it worked for me. Now I can deploy my model that depends on my own scripts. Package your own scripts that the model depends on into a wheel file, and save the wheel file locally. "your_path/your-wheel-file-name.whl" Follow the instructions in the "Private wheel files" solution in MS documentation. Below is the code that worked for me. from azureml.core.environment import Environment from azureml.core.conda_dependencies import CondaDependencies whl_url = Environment.add_private_pip_wheel(workspace=ws,file_path = "your_path/your-wheel-file-name.whl") myenv = CondaDependencies() myenv.add_pip_package("scikit-learn==0.22.1") myenv.add_pip_package("azureml-defaults") myenv.add_pip_package(whl_url) with open("myenv.yml","w") as f: f.write(myenv.serialize_to_string()) My environment file now looks like: name: project_environment dependencies: # The python interpreter version. # Currently Azure ML only supports 3.5.2 and later. - python=3.6.2 - pip: - scikit-learn==0.22.1 - azureml-defaults - https://myworkspaceid.blob.core/azureml/Environment/azureml-private-packages/my-wheel-file-name.whl channels: - conda-forge I'm new to Azure ml. Learning by doing and communicating with the community. This solution works fine for me, hope that it helps. A: An alternative method that works for me is to register a "model_src"-directory containing both the pickled model and a custom module, instead of registering only the pickled model. Then, specify the custom module in the scoring script during deployment, e.g., using python's os module.
Example below using sdk-v1: Example of "model_src"-directory model_src │ ├─ utils # your custom module │ └─ multilabelencoder.py │ └─ models # your pickled files └─ k_means_model_45.pkl Register "model_src" in sdk-v1 model = Model.register(model_path="./model_src", model_name="kmeans", description="model registered as a directory", workspace=ws ) Correspondingly, when defining the inference config deployment_folder = './model_src' script_file = 'models/score.py' service_env = Environment.from_conda_specification('kmeans-service', './environment.yml' # wherever yml is located locally ) inference_config = InferenceConfig(source_directory=deployment_folder, entry_script=script_file, environment=service_env ) Content of scoring script, e.g., score.py # Specify model_src as your parent import os deploy_dir = os.path.join(os.getenv('AZUREML_MODEL_DIR'),'model_src') # Import custom module import sys sys.path.append("{0}/utils".format(deploy_dir)) from multilabelencoder import MultiColumnLabelEncoder import joblib def init(): global model # Call the custom encoder to be used for unpickling the model encoder = MultiColumnLabelEncoder() # Use as intended downstream # Get the path where the deployed model can be found. model = joblib.load('{}/models/k_means_model_45.pkl'.format(deploy_dir)) This method provides flexibility in importing various custom scripts in my scoring script.
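The second answer assumes you already have a wheel file. For completeness, a hypothetical way to produce one from a local package — this assumes a standard setuptools layout with a setup.py or pyproject.toml, and the file name is a placeholder:

# from the folder containing your package and its setup.py / pyproject.toml
python -m pip install build
python -m build --wheel    # writes dist/your-wheel-file-name.whl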
import custom python module in azure ml deployment environment
I have an sklearn k-means model. I am training the model and saving it in a pickle file so I can deploy it later using azure ml library. The model that I am training uses a custom Feature Encoder called MultiColumnLabelEncoder. The pipeline model is defined as follow : # Pipeline kmeans = KMeans(n_clusters=3, random_state=0) pipe = Pipeline([ ("encoder", MultiColumnLabelEncoder()), ('k-means', kmeans), ]) #Training the pipeline model = pipe.fit(visitors_df) prediction = model.predict(visitors_df) #save the model in pickle/joblib format filename = 'k_means_model.pkl' joblib.dump(model, filename) The model saving works fine. The Deployment steps are the same as the steps in this link : https://notebooks.azure.com/azureml/projects/azureml-getting-started/html/how-to-use-azureml/deploy-to-cloud/model-register-and-deploy.ipynb However the deployment always fails with this error : File "/var/azureml-server/create_app.py", line 3, in <module> from app import main File "/var/azureml-server/app.py", line 27, in <module> import main as user_main File "/var/azureml-app/main.py", line 19, in <module> driver_module_spec.loader.exec_module(driver_module) File "/structure/azureml-app/score.py", line 22, in <module> importlib.import_module("multilabelencoder") File "/azureml-envs/azureml_b707e8c15a41fd316cf6c660941cf3d5/lib/python3.6/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ModuleNotFoundError: No module named 'multilabelencoder' I understand that pickle/joblib has some problems unpickling the custom function MultiLabelEncoder. That's why I defined this class in a separate python script (which I executed also). I called this custom function in the training python script, in the deployment script and in the scoring python file (score.py). The importing in the score.py file is not successful. So my question is how can I import custom python module to azure ml deployment environment ? Thank you in advance. EDIT: This is my .yml file name: project_environment dependencies: # The python interpreter version. # Currently Azure ML only supports 3.5.2 and later. - python=3.6.2 - pip: - multilabelencoder==1.0.4 - scikit-learn - azureml-defaults==1.0.74.* - pandas channels: - conda-forge
[ "In fact, the solution was to import my customized class MultiColumnLabelEncoder as a pip package (You can find it through pip install multilllabelencoder==1.0.5).\nThen I passed the pip package to the .yml file or in the InferenceConfig of the azure ml environment.\nIn the score.py file, I imported the class as follows :\nfrom multilabelencoder import multilabelencoder\ndef init():\n global model\n\n # Call the custom encoder to be used dfor unpickling the model\n encoder = multilabelencoder.MultiColumnLabelEncoder() \n # Get the path where the deployed model can be found.\n model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'k_means_model_45.pkl')\n model = joblib.load(model_path)\n\nThen the deployment was successful. \nOne more important thing is I had to use the same pip package (multilabelencoder) in the training pipeline as here :\nfrom multilabelencoder import multilabelencoder \npipe = Pipeline([\n (\"encoder\", multilabelencoder.MultiColumnLabelEncoder(columns)),\n ('k-means', kmeans),\n])\n#Training the pipeline\ntrainedModel = pipe.fit(df)\n\n", "I am facing the same problem, trying to deploy a model that has dependency on some of my own scripts and got the error message:\n ModuleNotFoundError: No module named 'my-own-module-name'\n\nFound this \"Private wheel files\" solution in MS documentation and it works. The difference from the solution above is now I do not need to publish my scripts to pip. I think many people might face the same situation that for some reason you cannot or do not want to publish your scripts. Instead, your own wheel file is saved under your own blob storage.\nFollowing the documentation, I did the following steps and it worked for me. Now I can deploy my model that has dependency in my own scripts.\n\nPackage your own scripts that the model is dependent on into wheel file, and the wheel file is saved locally.\n\"your_path/your-wheel-file-name.whl\"\n\nFollow the instructions in the \"Private wheel files\" solution in MS documentation. Below is the code that worked for me.\n\n\n\nfrom azureml.core.environment import Environment\nfrom azureml.core.conda_dependencies import CondaDependencies\n\nwhl_url = Environment.add_private_pip_wheel(workspace=ws,file_path = \"your_pathpath/your-wheel-file-name.whl\")\n\nmyenv = CondaDependencies()\nmyenv.add_pip_package(\"scikit-learn==0.22.1\")\nmyenv.add_pip_package(\"azureml-defaults\")\nmyenv.add_pip_package(whl_url)\n\nwith open(\"myenv.yml\",\"w\") as f:\n f.write(myenv.serialize_to_string())\n\nMy environment file now looks like:\nname: project_environment\ndependencies:\n # The python interpreter version.\n\n # Currently Azure ML only supports 3.5.2 and later.\n\n- python=3.6.2\n\n- pip:\n - scikit-learn==0.22.1\n - azureml-defaults\n - https://myworkspaceid.blob.core/azureml/Environment/azureml-private-packages/my-wheel-file-name.whl\nchannels:\n- conda-forge\n\nI'm new to Azure ml. Learning by doing and communicating with the community. This solution works fine for me, hope that it helps.\n", "An alternative method that works for me is to register a \"model_src\"-directory containing both the pickled model and a custom module, instead of registering only the pickled model. Then, specify the custom module in the scoring script during deployment, e.g., using python's os module. 
Example below using sdk-v1:\nExample of \"model_src\"-directory\nmodel_src\n │\n ├─ utils # your custom module\n │ └─ multilabelencoder.py\n │\n └─ models # your pickled files\n └─ k_means_model_45.pkl \n\nRegister \"model_src\" in sdk-v1\nmodel = Model.register(model_path=\"./model_src\",\n model_name=\"kmeans\", \n description=\"model registered as a directory\",\n workspace=ws\n)\n\nCorrespondingly, when defining the inference config\ndeployment_folder = './model_src'\nscript_file = 'models/score.py'\nservice_env = Environment.from_conda_specification(\"kmeans-service\",\n './environment.yml' # wherever yml is located locally\n)\ninference_config = InferenceConfig(source_directory=deployment_folder,\n entry_script=script_file,\n environment=service_env\n)\n\nContent of scoring script, e.g., score.py\n# Specify model_src as your parent\nimport os\ndeploy_dir = os.path.join(os.getenv('AZUREML_MODEL_DIR'),'model_src')\n\n# Import custom module\nimport sys\nsys.path.append(\"{0}/utils\".format(deploy_dir)) \nfrom multilabelencoder import MultiColumnLabelEncoder\n\nimport joblib\n\ndef init():\n global model\n\n # Call the custom encoder to be used for unpickling the model\n encoder = MultiColumnLabelEncoder() # Use as intended downstream \n \n # Get the path where the deployed model can be found.\n model = joblib.load('{}/models/k_means_model_45.pkl'.format(deploy_dir))\n\n\nThis method provides flexibility in importing various custom scripts in my scoring script.\n" ]
[ 4, 4, 0 ]
[]
[]
[ "azure_machine_learning_service", "azure_machine_learning_studio", "pickle", "python" ]
stackoverflow_0059176241_azure_machine_learning_service_azure_machine_learning_studio_pickle_python.txt
Q: How can I count the occurrences greater than a value for each year of a data frame? I have a data frame with daily precipitation values. I would like to do a sort of resample, so that instead of day-by-day rows the data is aggregated year by year, and each year holds the number of times it rained more than a certain value. Date Precipitation 2000-01-01 1 2000-01-03 6 2000-01-03 5 2001-01-01 3 2001-01-02 1 2001-01-03 0 2002-01-01 10 2002-01-02 8 2002-01-03 12 What I want is to count, for every year, how many times Precipitation > 2 Date Count 2000 2 2001 1 2002 3 I tried using resample() but without results. A: @Tatthew you can do this with GroupBy.apply: import pandas as pd df = pd.DataFrame({'Date': ['2000-01-01', '2000-01-03', '2000-01-03', '2001-01-01', '2001-01-02', '2001-01-03', '2002-01-01', '2002-01-02', '2002-01-03'], 'Precipitation': [1, 6, 5, 3, 1, 0, 10, 8, 12]}) df = df.astype({'Date': 'datetime64[ns]'}) df.groupby(df.Date.dt.year).apply(lambda df: df.Precipitation[df.Precipitation > 2].count()) A: You can use this bit of code: # convert "Precipitation" and "Date" values to proper types df['Precipitation'] = df['Precipitation'].astype(int) df["Date"] = pd.to_datetime(df["Date"]) # find rows that have "Precipitation" > 2 df['Count'] = df.apply(lambda x: x["Precipitation"] > 2, axis=1) # group df by year and drop the "Precipitation" column df.groupby(df['Date'].dt.year).sum().drop(columns=['Precipitation']) A: @Tatthew you can do this with query and GroupBy.size too. import pandas as pd df = pd.DataFrame({'Date': ['2000-01-01', '2000-01-03', '2000-01-03', '2001-01-01', '2001-01-02', '2001-01-03', '2002-01-01', '2002-01-02', '2002-01-03'], 'Precipitation': [1, 6, 5, 3, 1, 0, 10, 8, 12]}) df = df.astype({'Date': 'datetime64[ns]'}) above_threshold = df.query('Precipitation > 2') above_threshold.groupby(above_threshold.Date.dt.year).size()
How can I count the occurrences greater than a value for each year of a data frame?
I have a data frame with daily precipitation values. I would like to do a sort of resample, so that instead of day-by-day rows the data is aggregated year by year, and each year holds the number of times it rained more than a certain value. Date Precipitation 2000-01-01 1 2000-01-03 6 2000-01-03 5 2001-01-01 3 2001-01-02 1 2001-01-03 0 2002-01-01 10 2002-01-02 8 2002-01-03 12 What I want is to count, for every year, how many times Precipitation > 2 Date Count 2000 2 2001 1 2002 3 I tried using resample() but without results.
[ "@Tatthew you can do this with GroupBy.apply:\nimport pandas as pd\ndf = pd.DataFrame({'Date': ['2000-01-01', '2000-01-03',\n '2000-01-03', '2001-01-01',\n '2001-01-02', '2001-01-03',\n '2002-01-01', '2002-01-02',\n '2002-01-03'],\n 'Precipitation': [1, 6, 5, 3, 1, 0,\n 10, 8, 12]})\ndf = df.astype({'Date': datetime64})\ndf.groupby(df.Date.dt.year).apply(lambda df: df.Precipitation[df.Precipitation > 2].count())\n\n", "You can use this bit of code:\n# convert \"Precipitation\" and \"date\" values to proper types\ndf['Precipitation'] = df['Precipitation'].astype(int)\ndf[\"date\"] = pd.to_datetime(df[\"date\"])\n\n# find rows that have \"Precipitation\" > 2\ndf['Count']= df.apply(lambda x: x[\"Precipitation\"] > 2, axis=1)\n\n# group df by year and drop the \"Precipitation\" column\ndf.groupby(df['date'].dt.year).sum().drop(columns=['Precipitation'])\n\n", "@Tatthew you can do this with query and Groupby.size too.\nimport pandas as pd\ndf = pd.DataFrame({'Date': ['2000-01-01', '2000-01-03',\n '2000-01-03', '2001-01-01',\n '2001-01-02', '2001-01-03',\n '2002-01-01', '2002-01-02',\n '2002-01-03'],\n 'Precipitation': [1, 6, 5, 3, 1, 0,\n 10, 8, 12]})\ndf = df.astype({'Date': datetime64})\nabove_threshold = df.query('Precipitation > 2')\nabove_threshold.groupby(above_threshold.Date.dt.year).size()\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074659756_dataframe_pandas_python.txt
Q: I am having trouble trying to fix TypeError: string indices must be integers Grades.txt file I am currently trying to finish an assignment but I am confused about how to fix this error. I am creating a program that analyzes grades from a file and should calculate the average score for each distinct section (given). I receive the error for sections[sec]["total"] = grade[grade] grades = {'A': 100, 'B': 89, 'C': 79, 'D': 74, 'F': 69} # this section reads the file def calculate_average(): file = open("grades.txt", "r") sections = {} for line in file: [_, sec, grade] = line.split("\t") grade = grade.strip() if sec in sections: sections[sec]["count"] += 1 sections[sec]["total"] += grade[grade] else: sections[sec] = {} sections[sec]["count"] = 1 sections[sec]["total"] = grade[grade] file.close() # This section calculates the average data based on file for sec, secdata in sections.items(): avg = secdata[" total "] / secdata[" count"] print(" {0} : {1}".format(sec, round(avg, 2))) if __name__ == "__main__": calculate_average() A: The problem is that grade is a string (e.g. 'A', 'B', 'C'), and grade[grade] tries to index that string with another string; string indices must be integers, which is exactly the TypeError you are seeing. What you meant is to look up the grade in the grades dictionary. To fix this error, use the grade variable as a key into the grades dictionary, like this: sections[sec]["total"] += grades[grade] This will add the value associated with the grade string in the grades dictionary to the total field in the sections dictionary. Here is the updated code with this change (note that the keys "total" and "count" must also be spelled without extra spaces when computing the average): grades = {'A': 100, 'B': 89, 'C': 79, 'D': 74, 'F': 69} # this section reads the file def calculate_average(): file = open("grades.txt", "r") sections = {} for line in file: [_, sec, grade] = line.split("\t") grade = grade.strip() if sec in sections: sections[sec]["count"] += 1 sections[sec]["total"] += grades[grade] else: sections[sec] = {} sections[sec]["count"] = 1 sections[sec]["total"] = grades[grade] file.close() # This section calculates the average data based on file for sec, secdata in sections.items(): avg = secdata["total"] / secdata["count"] print(" {0} : {1}".format(sec, round(avg, 2))) if __name__ == "__main__": calculate_average() A: You probably mean: sections[sec]["total"] = grades[grade]
I am having trouble trying to fix TypeError: string indices must be integers
Grades.txt file I am currently trying to finish an assignment but I am confused about how to fix this error. I am creating a program that analyzes grades from a file and should calculate the average score for each distinct section (given). I receive the error for sections[sec]["total"] = grade[grade] grades = {'A': 100, 'B': 89, 'C': 79, 'D': 74, 'F': 69} # this section reads the file def calculate_average(): file = open("grades.txt", "r") sections = {} for line in file: [_, sec, grade] = line.split("\t") grade = grade.strip() if sec in sections: sections[sec]["count"] += 1 sections[sec]["total"] += grade[grade] else: sections[sec] = {} sections[sec]["count"] = 1 sections[sec]["total"] = grade[grade] file.close() # This section calculates the average data based on file for sec, secdata in sections.items(): avg = secdata[" total "] / secdata[" count"] print(" {0} : {1}".format(sec, round(avg, 2))) if __name__ == "__main__": calculate_average()
[ "It looks like you are trying to access the value of the grade dictionary by using the value of the grade variable as a key. This won't work because the keys of the grade dictionary are strings (e.g. 'A', 'B', 'C'), but the value of the grade variable is also a string (e.g. 'A', 'B', 'C'), so you are trying to use a string as an index for a dictionary.\nTo fix this error, you should use the grade variable directly to access the value in the grade dictionary, like this:\nsections[sec][\"total\"] += grades[grade]\n\nThis will add the value associated with the grade string in the grades dictionary to the total field in the sections dictionary.\nHere is the updated code with this change:\ngrades = {'A': 100, 'B': 89, 'C': 79, 'D': 74, 'F': 69}\n\n# this section reads the file\n\n\ndef calculate_average():\n file = open(\"grades.txt\", \"r\")\n sections = {}\n for line in file:\n [_, sec, grade] = line.split(\"\\t\")\n grade = grade.strip()\n if sec in sections:\n sections[sec][\"count\"] += 1\n sections[sec][\"total\"] += grades[grade]\n else:\n sections[sec] = {}\n sections[sec][\"count\"] = 1\n sections[sec][\"total\"] = grades[grade]\n file.close()\n\n# This section calculates the average data based on file\n\n for sec, secdata in sections.items():\n avg = secdata[\" total \"] / secdata[\" count\"]\n print(\" {0} : {1}\".format(sec, round(avg, 2)))\n\n\nif __name__ == \"__main__\":\n calculate_average()\n\n", "You probably mean:\nsections[sec][\"total\"] = grades[grade]\n\n" ]
[ 0, 0 ]
[ "Welcome to SO\nThe error message you are getting is because you are trying to use the grade as a key to access the value in the grades dictionary. However, the grade variable contains the actual grade (e.g. 'A', 'B', etc.), not the key. To fix this, you need to use the grade variable to access the corresponding value in the grades dictionary, like this:\nsections[sec][\"total\"] = grades[grade]\n\nHere is the complete calculate_average function with this change applied:\ndef calculate_average():\n file = open(\"grades.txt\", \"r\")\n sections = {}\n for line in file:\n [_, sec, grade] = line.split(\"\\t\")\n grade = grade.strip()\n if sec in sections:\n sections[sec][\"count\"] += 1\n sections[sec][\"total\"] += grades[grade]\n else:\n sections[sec] = {}\n sections[sec][\"count\"] = 1\n sections[sec][\"total\"] = grades[grade]\n file.close()\n\n for sec, secdata in sections.items():\n avg = secdata[\" total \"] / secdata[\" count\"]\n print(\" {0} : {1}\".format(sec, round(avg, 2)))\n\nHowever, I think you can optimise this function a bit better.\nIt is good practice to use the with statement when working with files, as it ensures that the file is properly closed even if an error occurs. This ensures that the file is automatically closed when the with block is exited, even if an error occurs.\nSecondly, you can eliminate the need for the if statement inside the for loop that processes the lines in the file. Instead, the if statement is moved outside the for loop and only executed once for each section.\nA third improvement would be within the format of the string. You can use the below code to round which is more pythonic.\nprint(\" {sec} : {avg:.2f}\".format(sec=sec, avg=avg))\n\nFinally, you don't need to unpack your variables into a list you can remove this.\nI also think your variables could be renamed better to give you a final result of:\ndef calculate_average():\n with open(\"grades.txt\", \"r\") as file:\n sections = {}\n for line in file:\n _, section, grade = line.split(\"\\t\")\n grade = grade.strip()\n if section not in sections:\n sections[section] = {\"count\": 0, \"total\": 0}\n sections[section][\"count\"] += 1\n sections[section][\"total\"] += grades[grade]\n\n for section, section_data in sections.items():\n avg = section_data[\"total\"] / section_data[\"count\"]\n print(\" {section} : {avg:.2f}\".format(section=section, avg=avg))\n\nAnother possible improvement could be to use:\nsections[section][\"total\"] += grades.get(grade, 0)\n# rather than\nsections[section][\"total\"] += grades[grade]\n\nThis could stop exceptions being raised if the grade is not in the dictionary but it depends on your desired behaviour.\n" ]
[ -1 ]
[ "python" ]
stackoverflow_0074661176_python.txt
Q: How to automatically change the IP obtained from a proxy API for use with Selenium from selenium import webdriver from selenium.webdriver.common.proxy import * from selenium.webdriver.common.by import By from time import sleep import requests response = requests.get("http://proxy.tinsoftsv.com/api/changeProxy.php?key=mykey_apiG&location=0") print(response.json()) proxy_url = "127.0.0.1:9009" proxy = Proxy({ 'proxyType': ProxyType.MANUAL, 'httpProxy': proxy_url, 'sslProxy': proxy_url, 'noProxy': ''}) capabilities = webdriver.DesiredCapabilities.CHROME proxy.add_to_capabilities(capabilities) driver = webdriver.Chrome(desired_capabilities=capabilities) driver.get("https://whoer.net/zh") I am having a problem. I have rented a rotating proxy from a web service and I want to change the IP continuously via the API. I used the requests module to get a new IP, so how can I automatically take that new IP and use it in place of the old one? I'm a newbie and really don't know much; I hope someone can help me. Thanks very much! I read the instructions on the service's site but they don't have a tutorial for Python. A: The code correctly imports the necessary modules and uses them to create a Proxy object and a webdriver.Chrome object. However, there are a few issues that may cause it to not work as expected: The proxy_url variable is set to "127.0.0.1:9009", which is a localhost address and port. That is not the proxy address the service handed back, so the webdriver.Chrome object will not route traffic through the rented proxy. You should replace it with the proxy server IP address and port returned by the API. The response variable is not being used in the code. The requests.get() call fetches a response from the proxy API, but the response is not saved or used in any way. You should use the response to build the proxy server address, or remove the requests.get() call from the code. The sleep() function is imported but not used in the code. This is not causing any errors, but it is unnecessary and can be removed. 
Here is the corrected code (assuming the API's JSON contains "ip" and "port" fields): from selenium import webdriver from selenium.webdriver.common.proxy import * from selenium.webdriver.common.by import By import requests response = requests.get("http://proxy.tinsoftsv.com/api/changeProxy.php?key=mykey_apiG&location=0") proxy_url = f'{response.json()["ip"]}:{response.json()["port"]}' proxy = Proxy({ 'proxyType': ProxyType.MANUAL, 'httpProxy': proxy_url, 'sslProxy': proxy_url, 'noProxy': ''}) capabilities = webdriver.DesiredCapabilities.CHROME proxy.add_to_capabilities(capabilities) driver = webdriver.Chrome(desired_capabilities=capabilities) driver.get("https://whoer.net/zh") A: To change the IP address used by your Selenium webdriver, you can do the following: from selenium import webdriver from selenium.webdriver.common.proxy import * from selenium.webdriver.common.by import By from time import sleep import requests # Get the latest IP address from the proxy service response = requests.get("http://proxy.tinsoftsv.com/api/changeProxy.php?key=mykey_apiG&location=0") ip = response.json()["ip"] port = response.json()["port"] # Update the proxy URL with the new IP address proxy_url = f"{ip}:{port}" # Create a proxy object with the updated proxy URL proxy = Proxy({ 'proxyType': ProxyType.MANUAL, 'httpProxy': proxy_url, 'sslProxy': proxy_url, 'noProxy': ''}) # Update the capabilities object to use the new proxy capabilities = webdriver.DesiredCapabilities.CHROME proxy.add_to_capabilities(capabilities) # Create a new instance of the webdriver with the updated capabilities driver = webdriver.Chrome(desired_capabilities=capabilities) driver.get("https://whoer.net/zh")
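A practical rotation pattern, sketched: wrap driver creation in a helper that fetches a fresh IP from the API each time, then rebuild the driver whenever a new address is wanted. The "ip" and "port" field names are assumptions about the service's JSON; --proxy-server is a standard Chrome flag:

import requests
from selenium import webdriver

API_URL = "http://proxy.tinsoftsv.com/api/changeProxy.php?key=mykey_apiG&location=0"

def make_driver():
    data = requests.get(API_URL).json()  # assumed to contain "ip" and "port"
    options = webdriver.ChromeOptions()
    options.add_argument(f"--proxy-server={data['ip']}:{data['port']}")
    return webdriver.Chrome(options=options)

driver = make_driver()
driver.get("https://whoer.net/zh")
driver.quit()

driver = make_driver()  # each call picks up a fresh IP from the API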
How to automatically change the IP obtained from a proxy API for use with Selenium
from selenium import webdriver from selenium.webdriver.common.proxy import * from selenium.webdriver.common.by import By from time import sleep import requests response = requests.get("http://proxy.tinsoftsv.com/api/changeProxy.php?key=mykey_apiG&location=0") print(response.json()) proxy_url = "127.0.0.1:9009" proxy = Proxy({ 'proxyType': ProxyType.MANUAL, 'httpProxy': proxy_url, 'sslProxy': proxy_url, 'noProxy': ''}) capabilities = webdriver.DesiredCapabilities.CHROME proxy.add_to_capabilities(capabilities) driver = webdriver.Chrome(desired_capabilities=capabilities) driver.get("https://whoer.net/zh") I am having a problem. I have rented a rotating proxy from a web service and I want to change the IP continuously via the API. I used the requests module to get a new IP, so how can I automatically take that new IP and use it in place of the old one? I'm a newbie and really don't know much; I hope someone can help me. Thanks very much! I read the instructions on the service's site but they don't have a tutorial for Python.
[ "The code appears to be correctly importing the necessary modules and using them to create a Proxy object and a webdriver.Chrome object.\nHowever, there are a few issues with the code that may cause it to not work as expected:\nThe proxy_url variable is set to \"127.0.0.1:9009\", which is the localhost IP address and port number. This is not a valid proxy server, and it will not allow the webdriver.Chrome object to access the internet. You should replace this with a valid proxy server IP address and port number.\nThe response variable is not being used in the code. The requests.get() method is used to get a response from the proxy API, but the response is not being saved or used in any way. You should either use the response to get a valid proxy server IP address and port number, or remove the requests.get() method from the code.\nThe sleep() method is imported but not used in the code. This is not causing any errors, but it is unnecessary and can be removed from the code.\nHere is the corrected code:\nfrom selenium import webdriver\nfrom selenium.webdriver.common.proxy import *\nfrom selenium.webdriver.common.by import By\nimport requests\n\nresponse = requests.get(\"http://proxy.tinsoftsv.com/api/changeProxy.php?key=mykey_apiG&location=0\")\nproxy_url = response.json()[\"ip\"]\n\nproxy = Proxy({\n 'proxyType': ProxyType.MANUAL,\n 'httpProxy': proxy_url,\n 'sslProxy': proxy_url,\n 'noProxy': ''})\n\ncapabilities = webdriver.DesiredCapabilities.CHROME\nproxy.add_to_capabilities(capabilities)\n\ndriver = webdriver.Chrome(desired_capabilities=capabilities)\ndriver.get(\"https://whoer.net/zh\")\n\n", "To change the IP address used by your Selenium webdriver, you can do the following:\nfrom selenium import webdriver\nfrom selenium.webdriver.common.proxy import *\nfrom selenium.webdriver.common.by import By\nfrom time import sleep\nimport requests\n\n# Get the latest IP address from the proxy service\nresponse = requests.get(\"http://proxy.tinsoftsv.com/api/changeProxy.php?key=mykey_apiG&location=0\")\nip = response.json()[\"ip\"]\nport = response.json()[\"port\"]\n\n# Update the proxy URL with the new IP address\nproxy_url = f\"{ip}:{port}\"\n\n# Create a proxy object with the updated proxy URL\nproxy = Proxy({\n 'proxyType': ProxyType.MANUAL,\n 'httpProxy': proxy_url,\n 'sslProxy': proxy_url,\n 'noProxy': ''})\n\n# Update the capabilities object to use the new proxy\ncapabilities = webdriver.DesiredCapabilities.CHROME\nproxy.add_to_capabilities(capabilities)\n\n# Create a new instance of the webdriver with the updated capabilities\ndriver = webdriver.Chrome(desired_capabilities=capabilities)\ndriver.get(\"https://whoer.net/zh\")\n\n" ]
[ 0, 0 ]
[]
[]
[ "api", "python", "python_3.x", "selenium", "selenium_webdriver" ]
stackoverflow_0074661524_api_python_python_3.x_selenium_selenium_webdriver.txt
Q: 'DataFrame' object does not support item assignment I imported a df into Databricks as a pyspark.sql.dataframe.DataFrame. Within this df I have 3 columns (which I have verified to be strings) that I wish to concatenate. I have tried to use a simple "+" function first, eg. df["fullname"] = df["firstname"] + df["middlename"] + df["lastname"] But I keep receiving the error "'DataFrame' object does not support item assignment". So I tried to add .astype(str) after every column, to no avail. Finally I tried to simply add another column full of the number 5: df['new_col'] = 5 and received the same error. So now I'm thinking maybe this dataframe is immutable. But I even tried to make a copy of the original df hoping I could modify it df2 = df.select('*') But once again I could not concatenate or modify the new dataframe. Any help is greatly appreciated! A: The error message you are getting means that the DataFrame object is immutable: Spark DataFrames cannot be modified in place, so item assignment is not supported. To solve this problem, you need to create a new DataFrame object that contains the concatenated column. You can do this using the withColumn method, which creates a new DataFrame by adding a new column to the existing DataFrame. Here is an example of how you can use withColumn to concatenate the three columns in your DataFrame and create a new DataFrame: from pyspark.sql.functions import concat # Concatenate the columns and create a new DataFrame df2 = df.withColumn("fullname", concat(df["firstname"], df["middlename"], df["lastname"])) This will create a new DataFrame called df2 that contains the concatenated column. You can then use this new DataFrame for any further operations you need to perform on your data. Alternatively, if you don't need to keep the original DataFrame, you can reassign the result of withColumn back to df: from pyspark.sql.functions import concat # Concatenate the columns and rebind df to the result df = df.withColumn("fullname", concat(df["firstname"], df["middlename"], df["lastname"])) This rebinds df to a new DataFrame that includes the column, and you can then use the updated DataFrame for any further operations you need to perform.
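One wrinkle worth noting with this approach: concat returns null if any input column is null. concat_ws skips nulls and joins with a separator, which is usually what you want for names. A minimal sketch:

from pyspark.sql.functions import concat_ws

# concat_ws skips null columns instead of nulling the whole result,
# and inserts a space between the parts
df = df.withColumn("fullname", concat_ws(" ", df["firstname"], df["middlename"], df["lastname"]))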
'DataFrame' object does not support item assignment
I imported a df into Databricks as a pyspark.sql.dataframe.DataFrame. Within this df I have 3 columns (which I have verified to be strings) that I wish to concatenate. I have tried to use a simple "+" function first, eg. df["fullname"] = df["firstname"] + df["middlename"] + df["lastname"] But I keep receiving the error "'DataFrame' object does not support item assignment". So I tried to add .astype(str) after every column, to no avail. Finally I tried to simply add another column full of the number 5: df['new_col'] = 5 and received the same error. So now I'm thinking maybe this dataframe is immutable. But I even tried to make a copy of the original df hoping I could modify it df2 = df.select('*') But once again I could not concatenate or modify the new dataframe. Any help is greatly appreciated!
[ "The error message you are getting suggests that the DataFrame object you are trying to modify is immutable, which means that it cannot be changed. To solve this problem, you will need to create a new DataFrame object that contains the concatenated column. You can do this using the withColumn method, which creates a new DataFrame by adding a new column to the existing DataFrame. Here is an example of how you can use withColumn to concatenate the three columns in your DataFrame and create a new DataFrame:\nfrom pyspark.sql.functions import concat\n\n# Concatenate the columns and create a new DataFrame\ndf2 = df.withColumn(\"fullname\", concat(df[\"firstname\"], df[\"middlename\"], df[\"lastname\"]))\n\nThis will create a new DataFrame called df2 that contains the concatenated column. You can then use this new DataFrame for any further operations you need to perform on your data.\nAlternatively, if you don't need to keep the original DataFrame and want to modify the existing DataFrame, you can use the withColumn method in place of the assignment operator (=) to add the new column to the existing DataFrame:\nfrom pyspark.sql.functions import concat\n\n# Concatenate the columns and add the new column to the existing DataFrame\ndf = df.withColumn(\"fullname\", concat(df[\"firstname\"], df[\"middlename\"], df[\"lastname\"]))\n\nThis will add the new column to the existing DataFrame and you can then use the updated DataFrame for any further operations you need to perform.\n" ]
[ 1 ]
[]
[]
[ "databricks", "dataframe", "pandas", "pyspark", "python" ]
stackoverflow_0074661704_databricks_dataframe_pandas_pyspark_python.txt
Q: BeautifulSoup find partial string in section I am trying to use BeautifulSoup to scrape a particular download URL from a web page, based on a partial text match. There are many links on the page, and it changes frequently. The html I'm scraping is full of sections that look something like this: <section class="onecol habonecol"> <a href="https://longGibberishDownloadURL" title="Download"> <img src="\azure_storage_blob\includes\download_for_windows.png"/> </a> sentinel-3.2022335.1201.1507_1608C.ab.L3.FL3.v951T202211_1_3.CIcyano.LakeOkee.tif </section> The second to last line (sentinel-3.2022335...LakeOkee.tif) is the part I need to search using a partial string to pull out the correct download url. The code I have attempted so far looks something like this: import requests, re from bs4 import BeautifulSoup reqs = requests.get(url) soup = BeautifulSoup(reqs.text, 'html.parser') result = soup.find('section', attrs={'class':'onecol habonecol'}, string=re.compile(?)) I've been searching StackOverflow a long time now and while there are similar questions and answers, none of the proposed solutions have worked for me so far (re.compile, lambdas, etc.). I am able to pull up a section if I remove the string argument, but when I try to include a partial matching string I get None for my result. I'm unsure what to put for the string argument (? above) to find a match based on partial text, say if I wanted to find the filename that has "CIcyano" somewhere in it (see second to last line of html example at top). I've tried multiple methods using re.compile and lambdas, but I don't quite understand how either of those functions really work. I was able to pull up other sections from the html using these solutions, but something about this filename string with all the periods seems to be preventing it from working. Or maybe it's the way it is positioned within the section? Perhaps I'm going about this the wrong way entirely. Is this perhaps considered part of the section id, and so the string argument can't find it? An example of a section on the page that I AM able to find has html like the one below, and I'm easily able to find it using the string argument and re.compile using "Name", "^N", etc. <section class="onecol habonecol"> <h3> Name </h3> </section> Appreciate any advice on how to go about this! Once I get the correct section, I know how to pull out the URL via the a tag. Here is the full html of the page I'm scraping, if that helps clarify the structure I'm working against. A: I believe you are overthinking. Just remove the regular expression part, take the text and you will be fine. import requests from bs4 import BeautifulSoup reqs = requests.get(url) soup = BeautifulSoup(reqs.text, 'html.parser') result = soup.find('section', attrs={'class':'onecol habonecol'}).text print(result) A: You can query inside every section for the string you want. Like so: soup.find('section', attrs={'class':'onecol habonecol'}).find(string=re.compile(r'.sentinel.*')) Using this regular expression you will match any text that has sentinel in it. Be careful: you will have to match some characters like spaces (that's why there is a . at the beginning of the regex), and you might want a more robust regex, which you can test here: https://regex101.com/ A: I ended up finding another method not using the string argument in find(), instead using something like the code below, which pulls the first instance of a section that contains a partial text match. 
sections = soup.find_all('section', attrs={'class':'onecol habonecol'}) for s in sections: text = s.text if 'CIcyano' in text: print(s) break links = s.find('a') dwn_url = links.get('href') This works for my purposes and fetches the first instance of the matching filename, and grabs the URL.
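For reference, the string= argument fails on this page because the matching section has child tags, and BeautifulSoup only matches string= against tags whose entire content is a single string. A filter over each section's full text keeps the one-liner spirit; a sketch, assuming soup was built as in the question:

import re

sections = soup.find_all('section', attrs={'class': 'onecol habonecol'})

# get_text() flattens the section, so mixed content (tags + text) still matches
match = next((s for s in sections if re.search(r'CIcyano', s.get_text())), None)
if match is not None:
    dwn_url = match.find('a').get('href')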
BeautifulSoup find partial string in section
I am trying to use BeautifulSoup to scrape a particular download URL from a web page, based on a partial text match. There are many links on the page, and it changes frequently. The html I'm scraping is full of sections that look something like this: <section class="onecol habonecol"> <a href="https://longGibberishDownloadURL" title="Download"> <img src="\azure_storage_blob\includes\download_for_windows.png"/> </a> sentinel-3.2022335.1201.1507_1608C.ab.L3.FL3.v951T202211_1_3.CIcyano.LakeOkee.tif </section> The second to last line (sentinel-3.2022335...LakeOkee.tif) is the part I need to search using a partial string to pull out the correct download url. The code I have attempted so far looks something like this: import requests, re from bs4 import BeautifulSoup reqs = requests.get(url) soup = BeautifulSoup(reqs.text, 'html.parser') result = soup.find('section', attrs={'class':'onecol habonecol'}, string=re.compile(?)) I've been searching StackOverflow a long time now and while there are similar questions and answers, none of the proposed solutions have worked for me so far (re.compile, lambdas, etc.). I am able to pull up a section if I remove the string argument, but when I try to include a partial matching string I get None for my result. I'm unsure what to put for the string argument (? above) to find a match based on partial text, say if I wanted to find the filename that has "CIcyano" somewhere in it (see second to last line of html example at top). I've tried multiple methods using re.compile and lambdas, but I don't quite understand how either of those functions really work. I was able to pull up other sections from the html using these solutions, but something about this filename string with all the periods seems to be preventing it from working. Or maybe it's the way it is positioned within the section? Perhaps I'm going about this the wrong way entirely. Is this perhaps considered part of the section id, and so the string argument can't find it?? An example of a section on the page that I AM able to find has html like the one below, and I'm easily able to find it using the string argument and re.compile using "Name", "^N", etc. <section class="onecol habonecol"> <h3> Name </h3> </section> Appreciate any advice on how to go about this! Once I get the correct section, I know how to pull out the URL via the a tag. Here is the full html of the page I'm scraping, if that helps clarify the structure I'm working against.
[ "I believe you are overthinking. Just remove the regular expression part, take the text and you will be fine.\nimport requests\nfrom bs4 import BeautifulSoup\n\nreqs = requests.get(url)\nsoup = BeautifulSoup(reqs.text, 'html.parser')\nresult = soup.find('section', attrs={'class':'onecol habonecol'}).text\nprint(result)\n\n", "You can query inside every section for the string you want. Like so:\ns.find('section', attrs={'class':'onecol habonecol'}).find(string=re.compile(r'.sentinel.*'))\n\nUsing this regular expression you will match any text that has sentinel in it, be careful that you will have to match some characters like spaces, that's why there is a . at beginning of the regex, you might want a more robust regex which you can test here:\nhttps://regex101.com/\n", "I ended up finding another method not using the string argument in find(), instead using something like the code below, which pulls the first instance of a section that contains a partial text match.\nsections = soup.find_all('section', attrs={'class':'onecol habonecol'})\n\n\nfor s in sections:\n text = s.text\n if 'CIcyano' in text:\n print(s)\n break\n\nlinks = s.find('a')\ndwn_url = links.get('href')\n\nThis works for my purposes and fetches the first instance of the matching filename, and grabs the URL.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "beautifulsoup", "html", "partial", "python" ]
stackoverflow_0074648666_beautifulsoup_html_partial_python.txt
Q: My code is not doing what I want it to do and I can't get it out of the while loop. Please explain why it's like that val = [*range(1,51)] print("Now, I need aaato know how many state Capitals you would like to practice") user = input("chose a number from 1 to 50") while user not in val: print("There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type \"EXIT\"") user = input("I needbbb to know how many state Capitals you would like to practice") if user.capitalize() == "EXIT": break if user == 0: print("There are more than zero States in the United Sts That means that you do not want to play today") user = input("I needccc to know how many state Capitals you would like to practice. If you want to exit the game, type \"EXIT\"") print("Hello") output: There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT" I needbbb to know how many state Capitals you would like to practice0 There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT" I needbbb to know how many state Capitals you would like to practice5 There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT" I needbbb to know how many state Capitals you would like to practice123 There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT" I needbbb to know how many state Capitals you would like to practice5 There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT" I needbbb to know how many state Capitals you would like to practice0 There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT" I needbbb to know how many state Capitals you would like to practiceexit There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT" I needbbb to know how many state Capitals you would like to practice I created a list with ints between 1 and 50. I want the user to pick a number from the list (val). If it's not there, I want the user to keep trying. Unless the user wants to quit with "EXIT". It just keeps getting stuck in my user input print statement and I don't understand why. A: You want user.upper(), not user.capitalize(). From the help-text: >>> help(str.capitalize) Help on method_descriptor: capitalize(self, /) Return a capitalized version of the string. More specifically, make the first character have upper case and the rest lower case. >>> help(str.upper) Help on method_descriptor: upper(self, /) Return a copy of the string converted to uppercase.
My code is not doing what I want it to do and I can't get it out of the while loop. Please explain why it's like that
val = [*range(1,51)] print("Now, I need aaato know how many state Capitals you would like to practice") user = input("chose a number from 1 to 50") while user not in val: print("There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type \"EXIT\"") user = input("I needbbb to know how many state Capitals you would like to practice") if user.capitalize() == "EXIT": break if user == 0: print("There are more than zero States in the United Sts That means that you do not want to play today") user = input("I needccc to know how many state Capitals you would like to practice. If you want to exit the game, type \"EXIT\"") print("Hello") output: There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT" I needbbb to know how many state Capitals you would like to practice0 There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT" I needbbb to know how many state Capitals you would like to practice5 There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT" I needbbb to know how many state Capitals you would like to practice123 There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT" I needbbb to know how many state Capitals you would like to practice5 There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT" I needbbb to know how many state Capitals you would like to practice0 There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT" I needbbb to know how many state Capitals you would like to practiceexit There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT" I needbbb to know how many state Capitals you would like to practice I created a list with ints between 1 and 50. I want the user to pick a number from the list (val). If it's not there, I want the user to keep trying. Unless the user wants to quit with "EXIT". It just keeps getting stuck in my user input print statement and I don't understand why.
[ "You want user.upper(), not user.capitalize().\nFrom the help-text:\n>>> help(str.capitalize)\nHelp on method_descriptor:\n\ncapitalize(self, /)\n Return a capitalized version of the string.\n\n More specifically, make the first character have upper case and the rest lower\n case.\n\n>>> help(str.upper)\nHelp on method_descriptor:\n\nupper(self, /)\n Return a copy of the string converted to uppercase.\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074661763_python.txt
Q: Pass whole row to DB function as an argument SQLAlchemy I need to implement the following SQL expression using SQLAlchemy 1.4.41, Postgres 13.6 SELECT book.name, my_func(book) AS func_result FROM book WHERE book.name = 'The Adventures of Tom Sawyer'; Is there a way to implement such a SQL expression? The function is the following and I'm not supposed to change it: create function my_func(table_row anyelement) returns json I assume that passing Book to func.my_func is not correct, as SQLAlchemy unpacks it to a list of Book attributes (ex. book.id, book.name, book.total_pages) from db.models import Book from sqlalchemy import func, select function = func.my_func(Book) query = select(Book.name, function).where(Book.name == 'The Adventures of Tom Sawyer') A: In PostgreSQL, you would do this by passing a row object to the function. For example, row_to_json is a function that accepts a row and returns JSON, so given this table Table "public.users" Column │ Type │ Collation │ Nullable │ Default ═══════════════════╪═══════════════════╪═══════════╪══════════╪══════════════════════════════ id │ integer │ │ not null │ generated always as identity name │ character varying │ │ │ registration_date │ date │ │ │ this query select name, row_to_json(users) from users; could return name │ row_to_json ═══════╪══════════════════════════════════════════════════════════ Alice │ {"id":1,"name":"Alice","registration_date":"2022-10-27"} Bob │ {"id":2,"name":"Bob","registration_date":"2022-10-27"} Translating this to SQLAlchemy, if you are only passing the columns from the table underlying the model (so no relationships), you can use FromClause.table_valued as a shortcut. q = sa.select(User.name, sa.func.row_to_json(User.__table__.table_valued())) If you do require values from relationships, or only a subset of the table's fields, you need to use a subquery to specify them*: subq = sa.select(User.id, User.name).subquery() q = sa.select(sa.func.row_to_json(subq.table_valued())) * This part of the answer is inspired by this answer by Anatoly Ressin.
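Applied back to the question's Book model, the table_valued shortcut would look roughly like this; the sa alias and an existing session are assumptions:

import sqlalchemy as sa
from db.models import Book

# Pass the whole book row to my_func, mirroring SELECT my_func(book) in SQL
q = sa.select(
    Book.name,
    sa.func.my_func(Book.__table__.table_valued()).label("func_result"),
).where(Book.name == 'The Adventures of Tom Sawyer')

rows = session.execute(q).all()  # session assumed to be configured elsewhere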
Pass whole row to DB function as an argument SQLAlchemy
I need to implement the following SQL expression using SQLAlchemy 1.4.41, Postgres 13.6 SELECT book.name, my_func(book) AS func_result FROM book WHERE book.name = 'The Adventures of Tom Sawyer'; Is there a way to implement such a SQL expression? The function is the following and I'm not supposed to change it: create function my_func(table_row anyelement) returns json I assume that passing Book to func.my_func is not correct, as SQLAlchemy unpacks it to a list of Book attributes (ex. book.id, book.name, book.total_pages) from db.models import Book from sqlalchemy import func, select function = func.my_func(Book) query = select(Book.name, function).where(Book.name == 'The Adventures of Tom Sawyer')
[ "In PostgreSQL, you would do this by passing a row object to the function. For example, row_to_json is a function that accepts a row and returns JSON, so given this table\n Table \"public.users\"\n Column │ Type │ Collation │ Nullable │ Default \n═══════════════════╪═══════════════════╪═══════════╪══════════╪══════════════════════════════\n id │ integer │ │ not null │ generated always as identity\n name │ character varying │ │ │ \n registration_date │ date │ │ │ \n\nthis query\nselect name, row_to_json(users) from users;\n\ncould return\n name │ row_to_json \n═══════╪══════════════════════════════════════════════════════════\n Alice │ {\"id\":1,\"name\":\"Alice\",\"registration_date\":\"2022-10-27\"}\n Bob │ {\"id\":2,\"name\":\"Bob\",\"registration_date\":\"2022-10-27\"}\n\nTranslating this to SQLAlchemy, if you are only passing the columns from the table underlying the model (so no relationships), you can use FromClause.table_valued as a shortcut.\nq = sa.select(User.name, sa.func.row_to_json(User.__table__.table_valued()))\n\nIf you do require values from relationships, or only a subset of the table's fields, you need to use a subquery to specify the them*:\nsubq = sa.select(User.id, User.name).subquery() \nq = sa.select(sa.func.row_to_json(subq.table_valued()))\n\n\n* This part of the answer is inspired by this answer by Anatoly Ressin.\n" ]
[ 1 ]
[]
[]
[ "python", "sql", "sqlalchemy" ]
stackoverflow_0074654755_python_sql_sqlalchemy.txt
Q: AWS Python Glue Job Not Importing Numeric Columns into RDS I have a glue job that takes a csv file from an s3 bucket and imports the data into a postgres rds table. It connects to the db with a jdbc connection. The string/varchar columns are being imported, but the numeric columns are not. Here are the Postgres RDS column types (screenshot not included), and here is the Python Glue script: def __step_mapping_columns(self): # Script generated for node S3 bucket dynamicFrame_dept_summary = self.glueContext.create_dynamic_frame.from_options( format_options={"quoteChar": '"', "withHeader": True, "separator": ","}, connection_type="s3", format="csv", connection_options={ "paths": [ "" ], "recurse": True, }, transformation_ctx="dynamicFrame_dept_summary", ) # Script generated for node ApplyMapping applyMapping_dept_summary = ApplyMapping.apply( frame=dynamicFrame_dept_summary, mappings=[("PROCESS_MAIN", "string", "process_main", "string"), ("PROCESS_CORE", "string", "process_core", "string"), ("DC", "string", "dc", "string"), ("BAG_SIZE", "string", "bag_size", "string"), ("EVENT_30_LOC", "string", "start_time_utc", "string"), ("VOLUME", "long", "box_volume", "long"), ("MINUTES", "long", "minutes", "long"), ("PLAN_MINUTES", "long", "plan_minutes", "long"), ("PLAN_RATE", "long", "plan_rate", "long")], transformation_ctx="applyMapping_dept_summary", ) logger.info(mappings) return applyMapping_dept_summary Does anyone know what the issue might be? A: Figured it out. I needed to typecast those columns to the long type first, because the DynamicFrame is unsure about the data type. dynamicFrame_dept_summary = dynamicFrame_dept_summary.resolveChoice(specs=[('VOLUME', 'cast:long')]).resolveChoice(specs=[('MINUTES', 'cast:long')]).resolveChoice(specs=[('PLAN_MINUTES', 'cast:long')]).resolveChoice(specs=[('PLAN_RATE', 'cast:long')])
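Since resolveChoice accepts a list of specs, the four chained calls can be collapsed into one; a sketch:

dynamicFrame_dept_summary = dynamicFrame_dept_summary.resolveChoice(
    specs=[
        ('VOLUME', 'cast:long'),
        ('MINUTES', 'cast:long'),
        ('PLAN_MINUTES', 'cast:long'),
        ('PLAN_RATE', 'cast:long'),
    ]
)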
AWS Python Glue Job Not Importing Numeric Columns into RDS
I have a glue job that takes a csv file from an s3 bucket and imports the data into a postgres rds table. It connects to the db with a jdbc connection. The string/varchar columns are being imported, but the numeric columns are not. Here are the Postgres RDS column types (screenshot not included), and here is the Python Glue script: def __step_mapping_columns(self): # Script generated for node S3 bucket dynamicFrame_dept_summary = self.glueContext.create_dynamic_frame.from_options( format_options={"quoteChar": '"', "withHeader": True, "separator": ","}, connection_type="s3", format="csv", connection_options={ "paths": [ "" ], "recurse": True, }, transformation_ctx="dynamicFrame_dept_summary", ) # Script generated for node ApplyMapping applyMapping_dept_summary = ApplyMapping.apply( frame=dynamicFrame_dept_summary, mappings=[("PROCESS_MAIN", "string", "process_main", "string"), ("PROCESS_CORE", "string", "process_core", "string"), ("DC", "string", "dc", "string"), ("BAG_SIZE", "string", "bag_size", "string"), ("EVENT_30_LOC", "string", "start_time_utc", "string"), ("VOLUME", "long", "box_volume", "long"), ("MINUTES", "long", "minutes", "long"), ("PLAN_MINUTES", "long", "plan_minutes", "long"), ("PLAN_RATE", "long", "plan_rate", "long")], transformation_ctx="applyMapping_dept_summary", ) logger.info(mappings) return applyMapping_dept_summary Does anyone know what the issue might be?
[ "Figured it out. I needed to typecast those columns to the long type first because the Dynamic frame is unsure about the data type.\ndynamicFrame_dept_summary = dynamicFrame_dept_summary.resolveChoice( specs =[('VOLUME','cast:long')]).resolveChoice( specs = [('MINUTES','cast:long')]).resolveChoice( specs = [('PLAN_MINUTES','cast:long')]).resolveChoice( specs = [('PLAN_RATE','cast:long')])\n" ]
[ 0 ]
[]
[]
[ "amazon_s3", "amazon_web_services", "aws_glue", "postgresql", "python" ]
stackoverflow_0074659315_amazon_s3_amazon_web_services_aws_glue_postgresql_python.txt
Q: Telethon New Message Event Handler waits a minute The Telethon event handler waits 1 minute before sending out a burst of messages at the same time. I tried removing functions from other sources, as I thought that could be the cause, but it did not work. Code: ` from telethon import TelegramClient, events import logging import time #from main import add logging.basicConfig(format='[%(levelname) 5s/%(asctime)s] %(name)s: %(message)s', level=logging.WARNING) api_id = api_hash = client = TelegramClient('anon', api_id, api_hash) @client.on(events.NewMessage) async def my_event_handler(event): print(event.raw_text) #add(event.raw_text) client.start() client.run_until_disconnected() ` A: Try uninstalling and reinstalling Telethon. I also can't log in! A: For me it works OK; I have tested it. The Python process needs to keep running the whole time. #exit() import sys from telethon import TelegramClient, events import logging import time import telethon.tl.functions as _fn logging.basicConfig(format='[%(levelname) 5s/%(asctime)s] %(name)s: %(message)s', level=logging.WARNING) api_id = row[2] api_hash = row[3] client = TelegramClient(path+str(api_id), api_id, api_hash) @client.on(events.NewMessage) async def my_event_handler(event): print(event.raw_text) print(event) #add(event.raw_text) client.start() client.run_until_disconnected() print('Finish...')
Telethon New Message Event Handler waits a minute
The Telethon event handler waits 1 minute before sending out a burst of messages at the same time. I tried removing functions from other sources, as I thought that could be the cause, but it did not work. Code: ` from telethon import TelegramClient, events import logging import time #from main import add logging.basicConfig(format='[%(levelname) 5s/%(asctime)s] %(name)s: %(message)s', level=logging.WARNING) api_id = api_hash = client = TelegramClient('anon', api_id, api_hash) @client.on(events.NewMessage) async def my_event_handler(event): print(event.raw_text) #add(event.raw_text) client.start() client.run_until_disconnected() `
[ "Try uninstall and reinstall Telethon again\nI also Can't login!\n", "form me work ok and i have testet\npython need to run all time\n\n\n#exit()\nimport sys\nfrom telethon import TelegramClient, events\nimport logging\nimport time\nimport telethon.tl.functions as _fn \n\n\nlogging.basicConfig(format='[%(levelname) 5s/%(asctime)s] %(name)s: %(message)s', level=logging.WARNING)\n\napi_id = row[2]\napi_hash = row[3] \nclient = TelegramClient(path+str(api_id), api_id, api_hash)\n\[email protected](events.NewMessage)\n\nasync def my_event_handler(event):\n print(event.raw_text)\n print(event)\n #add(event.raw_text)\n\nclient.start()\nclient.run_until_disconnected()\n\n\nprint('Finish...')\n\n\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "telegram", "telethon" ]
stackoverflow_0074257571_python_telegram_telethon.txt
Q: Linking multiple lists with a variable I'm trying to link multiple lists with a variable, with the output being an item from one of the 'multiple' lists. The variable needs to have the name of the list, so that the index of the item in one list is the same as the index of the item in one of the others. Sorry if it's a duplicate, but I couldn't find anything that I can understand. creature_type = "easy" creature_type = "medium" creature_type = "hard" list1 = ['slime', 'dog', 'chicken'] list2 = ['orc', 'wolf'] list3 = ['dragon', 'golem', 'vampire'] attack4 = ['spits juice', 'bites', 'pecks'] attack5 = ['slams', 'howls'] attack6 = ['breaths fire', 'throws rocks', 'transforms'] The output should be the attack for the creature in the first three lists. A: Use dictionaries and zip: creatures = {'easy': ['slime', 'dog', 'chicken'], 'medium': ['orc', 'wolf'], 'hard': ['dragon', 'golem', 'vampire']} attacks = {'easy': ['spits juice', 'bites', 'pecks'], 'medium': ['slams', 'howls'], 'hard': ['breaths fire', 'throws rocks', 'transforms']} choice = 'medium' linked = list(zip(creatures[choice], attacks[choice])) Output: [('orc', 'slams'), ('wolf', 'howls')]
Linking multiple lists with a variable
I'm trying to link multiple lists with a variable, with the output being an item from one of the 'multiple' lists. The variable needs to have the name of the list, so that the index of the item in one list is the same as the index of the item in one of the others. Sorry if it's a duplicate, but I couldn't find anything that I can understand. creature_type = "easy" creature_type = "medium" creature_type = "hard" list1 = ['slime', 'dog', 'chicken'] list2 = ['orc', 'wolf'] list3 = ['dragon', 'golem', 'vampire'] attack4 = ['spits juice', 'bites', 'pecks'] attack5 = ['slams', 'howls'] attack6 = ['breaths fire', 'throws rocks', 'transforms'] The output should be the attack for the creature in the first three lists.
[ "Use dictionaries and zip:\ncreatures = {'easy': ['slime', 'dog', 'chicken'],\n 'medium': ['orc', 'wolf'],\n 'hard': ['dragon', 'golem', 'vampire']}\n\nattacks = {'easy': ['spits juice', 'bites', 'pecks'],\n 'medium': ['slams', 'howls'],\n 'hard': ['breaths fire', 'throws rocks', 'transforms']}\n\nchoice = 'medium'\n\nlinked = list(zip(creatures[choice], attacks[choice]))\n\nOutput:\n[('orc', 'slams'), ('wolf', 'howls')]\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074661760_python.txt
Q: What is the fastest way to check if a substring is in a string as an entire word or term, like RegEx with boundaries? I am trying to find the fastest way to check if a substring is in a string as an entire word or term. Currently, I'm using RegEx, but I need to perform thousands of verifications and RegEx is being VERY slow. There are many ways to respond to this. The easiest way to verify is substring in string: substring = "programming" string = "Python is a high-level programming language" substring in string >>> True On the other hand, it's a naive solution when we need to find the substring as an entire word or term: substring = "program" string = "Python is a high-level programming language" substring in string >>> True Another solution is to split the string into a list of words and verify if the substring is in that list: substring = "program" string = "Python is a high-level programming language" substring in string.split() >>> False Nevertheless, it doesn't work if the substring is a term. To resolve this, another solution would be to use RegEx: import re substring = "high-level program" string = "Python is a high-level programming language" re.search(r"\b{}\b".format(substring), string) != None >>> False However, my biggest problem is that the solution is REALLY slow if you need to perform thousands of verifications. To mitigate this issue, I created some approaches that, although they are faster than RegEx (for the use I need), still are a lot slower than substring in string: substring = "high-level program" string = "Python is a high-level programming language" all([word in string.split() for word in substring.split()]) >>> False Although simple, the above approach didn't fit because it ignores substring word order, returning True if the substring was "programming high-level", unlike the solution in RegEx. So, I created another approach verifying if the substring is in an ngram list where each ngram has the same number of words as the substring: from nltk import ngrams substring = "high-level program" string = "Python is a high-level programming language" ngram = list(ngrams(string.split(), len(substring.split()))) substring in [" ".join(tuples) for tuples in ngram] >>> False EDIT: Here is a less slow version, working with the same principle, but using only built-in functions: substring = "high-level program" string = "Python is a high-level programming language" length = len(substring.split()) words = string.split() ngrams = [" ".join(words[i:i+length]) for i in range(len(words) - length + 1)] substring in ngrams >>> False Does anyone know a faster approach to find a substring inside a string as an entire word or term? A: Simply loop through the string, slice it according to the substring length, and compare the slice with the substring; if they are equal, return True. Illustration* strs = "Coding" substr = "ding" slen = 4 i = 0 check = strs[i:slen+i]==substr # 1st iteration strs[0:4+0] == ding codi == ding # False # 2nd iteration i=1 strs[1:4+1] == ding odin == ding # False # 3rd iteration i=2 strs[2:4+2] == ding ding == ding # True Solution def str_exist(string, substring, slen): for i in range(len(string)): if string[i:slen+i] == substring: return True return False substring = "high-level program" string = "Python is a high-level programming language" slen = len(substring) print(str_exist(string, substring, slen)) OUTPUT True A: Check this out. I've added comments in my code for better understanding of what this algorithm is doing.
def check_substr(S: str, sub_str: str) -> bool: """ This function tells whether the given sub-string in a string is present or not. Parameters S: str: The original string sub_str: str: The sub-string to be checked Returns result: boolean: Whether the string is present or not """ i = 0 pointer = 0 while (i < len(S)): # This means that we are already in that word # whose sub-part is already matched. For eg: # `program` in `programming`. Therefore we are # going to skip the rest of the word and check # the next word instead. if (S[i] != ' ' and pointer == len(sub_str)): while (i < len(S) and S[i] != ' '): i += 1 i += 1 pointer = 0 if (i >= len(S)): break # If we encounter a space, we check whether we # have already found the sub-string or not. elif (S[i] == ' ' and pointer == len(sub_str)): break if (S[i] == sub_str[pointer]): pointer += 1 else: # If the current element of the original # string matched with the first element of # the sub-string then we increment the # pointer by 1. Otherwise we set it to 0. pointer = 1 if (S[i] == sub_str[0]) else 0 i += 1 return pointer == len(sub_str) S = "Python is a high-level programming" print(check_substr(S, "high-level program")) print(check_substr(S, "programming language")) Output False False Time Complexity O(n) Edits: As @PGHE pointed out in the comments, we can also do the checking on punctuation characters and not only on spaces. Since the OP hasn't mentioned anything about punctuation, I'm keeping this answer as it is. A: Add spaces on both sides of the substring and string, then test 'substring in string'
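The last answer's space-padding idea, written out as code: it stays close to plain substring-in-string speed, under the assumption that words are separated only by spaces (punctuation would need normalizing first):

substring = "high-level program"
string = "Python is a high-level programming language"

# Pad both sides so boundary matches also work at the start and end of the string
padded = f" {string} "
found = f" {substring} " in padded  # False: "program" is not a whole word here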
What is the fastest way to check if a substring is in a string as an entire word or term, like RegEx with boundaries?
I have a problem to find the fastest way to check if a substring is in a string as an entire word or term. Currently, I'm using RegEx, but I need to perform thousands of verifications and RegEx is being VERY slow. There are many ways to respond to this. The easier way to verify is substring in string: substring = "programming" string = "Python is a high-level programming language" substring in string >>> True In other hand, it's a naivy solution when we need to find the substring as an entire word or term: substring = "program" string = "Python is a high-level programming language" substring in string >>> True Another solution is to split the string into a list of words and verify if the substring is in that list: substring = "program" string = "Python is a high-level programming language" substring in string.split() >>> False Nevertheless, it doesn't work if the substring is a term. To resolve this, another solution would be to use RegEx: import re substring = "high-level program" string = "Python is a high-level programming language" re.search(r"\b{}\b".format(substring), string) != None >>> False However, my biggest problem is that the solution is REALLY slow if you need to perform thousands of verifications. To mitigate this issue, I created some approaches that, although they are faster than RegEx (for the use I need), still are a lot slower than substring in string: substring = "high-level program" string = "Python is a high-level programming language" all([word in string.split() for word in substring.split()]) >>> False Although simple, the above approach didn't fit because it ignores substring word order, returning True if the substring was "programming high-level", unlike the solution in RegEx. So, I created another approach verifying if the substring is in a ngram list where each ngram has the same number of words as the substring: from nltk import ngrams substring = "high-level program" string = "Python is a high-level programming language" ngram = list(ngrams(string.split(), len(substring.split()))) substring in [" ".join(tuples) for tuples in ngram] >>> False EDIT: Here is a less slow version, working with the same principle, but using only built-in functions: substring = "high-level program" string = "Python is a high-level programming language" length = len(substring.split()) words = string.split() ngrams = [" ".join(words[i:i+length]) for i in range(len(words) - length)] substring in ngrams >>> False Someone knows some a faster approach to find a substring inside a string as an entire word or term?
[ "Simply loop through the string and splice the string according to the substring length and compare the splice string with the substring if it is equal return True.\nIllustration*\nstrs = \"Coding\"\nsubstr = \"ding\"\nslen = 4\ni = 0\n\ncheck = strs[i:slen+i]==substr\n\n# 1st iteration\nstrs[0:4+0] == ding\ncodi == ding # False\n\n# 2nd iteration\ni=1\nstrs[1:4+1] == ding\nodin == ding # False\n\n# 3rd iteration\ni=2\nstrs [2:4+2] == ding\nding == ding # True\n\n\nSolution\ndef str_exist(string, substring, slen):\n for i in range(len(string)):\n if string[i:slen+i] == substring:\n return True\n return False\n\nsubstring = \"high-level program\"\nstring = \"Python is a high-level programming language\"\nslen = len(substring)\n\nprint(str_exist(string, substring, slen))\n\n\nOUTPUT\nTrue\n\n", "Check this out. I've added comments in my code for better understanding of what this algorithm is doing.\ndef check_substr(S: str, sub_str: str) -> bool:\n \"\"\"\n This function tells whether the given sub-string \n in a string is present or not.\n \n Parameters\n S: str: The original string\n sub_str: str: The sub-string to be checked\n \n Returns\n result: boolean: Whether the string is present or not\n \"\"\"\n i = 0\n pointer = 0\n \n while (i < len(S)):\n # This means that we are already in that word\n # whose sub-part is already matched. For eg:\n # `program` in `programming`. Therefore we are\n # going to skip the rest of the word and check\n # the next word instead.\n if (S[i] != ' ' and pointer == len(sub_str)):\n while (i < len(S) and S[i] != ' '):\n i += 1\n i += 1\n pointer = 0\n \n if (i >= len(S)):\n break\n \n # If we encounter a space, we check whether we\n # have already found the sub-string or not.\n elif (S[i] == ' ' and pointer == len(sub_str)):\n break\n \n if (S[i] == sub_str[pointer]):\n pointer += 1\n \n else:\n # If the current element of the original \n # string matched with the first element of\n # the sub-string then we increment the \n # pointer by 1. Otherwise we set it to 0.\n pointer = 1 if (S[i] == sub_str[0]) else 0\n \n i += 1\n \n return pointer == len(sub_str)\n \nS = \"Python is a high-level programming\"\nprint(check_substr(S, \"high-level program\"))\nprint(check_substr(S, \"programming language\"))\n\nOutput\nFalse\nFalse\n\nTime Complexity\nO(n)\n\nEdits:\nAs @PGHE pointed out in the comments, we can also do the checking in punctuation characters and not only in spaces. Since the OP hasn't mentioned anything about the punctuation, I'm keeping this answer as it is.\n", "Add spaces on both sides of the substring and string, then test 'substring in string'\n" ]
[ 1, 1, 1 ]
[]
[]
[ "contains", "python", "regex", "string", "substring" ]
stackoverflow_0074426371_contains_python_regex_string_substring.txt
Q: Problem with virtualenv in Mac OS X I've installed virtualenv via pip and get this error after creating a new environment:

selenium:~ auser$ virtualenv new
New python executable in new/bin/python
ERROR: The executable new/bin/python is not functioning
ERROR: It thinks sys.prefix is u'/System/Library/Frameworks/Python.framework/Versions/2.6' (should be '/Users/user/new')
ERROR: virtualenv is not compatible with this system or executable

In my environment:

PYTHONPATH=/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages
PATH=/System/Library/Frameworks/Python.framework/Versions/2.6/bin:/Library/Frameworks/Python.framework/Versions/2.6/bin:/Library/Frameworks/Python.framework/Versions/2.6/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin

How can I repair this? Thanks.
A: Just in case there's someone still seeking the answer. I ran into this same problem just today and realized that since I already have Anaconda installed, I should not have used pip install virtualenv to install virtualenv, as this gives the error message when trying to initiate it later. Instead, I tried conda install virtualenv, then entered virtualenv env_mysite and problem solved.
A: Like @RyanWilcox mentioned, you might be inadvertently pointing virtualenv to the wrong Python installation. Virtualenv comes with a -p flag to let you specify which interpreter to use. In my case,

virtualenv test_env

threw the same error as yours, while

virtualenv -p python test_env

worked perfectly. If you call virtualenv -h, the documentation for the -p flag will tell you which python it thinks it should be using; if it looks wonky, try passing -p python. For reference, I'm on virtualenv 1.11.6.
A: In case anyone in the future runs into this problem - this is caused by your default Python distribution being conda. Conda has its own virtual env setup process, but if you have the conda distribution of Python and still wish to use virtualenv, here's how:

Find the other python distribution on your machine: ls -ls /usr/bin/python*
Take note of the available python version that is not conda and run the code below (note: for python 3 and above you have to upgrade virtualenv first): virtualenv -p python2.7 (or your python version) flaskapp

A: I've run across this problem myself. I wrote down the instructions in a README, which I have pasted below....
I have found there are two things that work:

Make sure you're running the latest virtualenv (1.5.1, as of this writing)
If you're using a non-system Python as your standard Python (which python to check), forcefully use the system-supplied one. Instead of virtualenv thing use /usr/bin/python2.6 PATH/TO/VIRTUALENV thing (or whatever which python returned to you - this is what it did for me when I ran into this issue)

A: I had the same problem and as I see it now, it was caused by a messy Python installation. I have had OS X installed for over a year since I bought a new laptop and I have already installed and reinstalled Python several times using different sources (official binaries, homebrew, official binaries + hand-made adjustments as described here). Don't ask me why I did that, I'm just a miserable newbie believing everything will fix itself after being re-installed. So, I had a number of different Pythons installed here and there as well as many hardlinks pointing at them inconsistently. Eventually I got sick of all of them, reinstalled OS X, and carefully cleaned the system of all the Pythons I found using the find utility.
Also, I have unlinked all the links pointing to whatever Python from everywhere. Then I've installed a fresh Python using homebrew, installed virtualenv and everything works like a charm now. So, my recipe is:

sudo find / -iname "python*" > python.log

Then analyze this file, remove and unlink everything related to the version of Python you need, reinstall it (I did it with homebrew; maybe the official installation will also work) and enjoy. Make sure you unlink everything python-related from /usr/bin and /usr/local/bin as well as remove all the instances of Frameworks/Python.framework/Versions/<Your.Version> in /Library and /System/Library. It may be a dirty hack, but it worked for me. I prefer not to keep any system-wide Python libraries except pip and virtualenv and create virtual environments for all of my projects, so I do not care about removing the important libraries. If you don't want to remove everything, still try to understand where your Pythons are, what links point to them and from where. Then think what may cause the problem and fix it.
A: I ran into a variation of this "not functioning" error. I was trying to create an environment in a folder whose path included ".../Programming/Developing..." which is actually "/Users/eric/Documents/Programming:Developing/" and got this error:

ImportError: No module named site
ERROR: The executable env/bin/python2.7 is not functioning
ERROR: It thinks sys.prefix is u'/Users/eric/Documents/Programming:Developing/heroku' (should be u'/Users/eric/Documents/Programming:Developing/heroku/env')
ERROR: virtualenv is not compatible with this system or executable

I tried the same in a different folder and it worked fine, no errors, and env/bin has what I expect (activate, etc.).
A: I got the same problem and I found that it happens when you do not specify the python executable name properly. So for python 2x, for example:

virtualenv --system-site-packages -p python mysite

But for python 3.6 you need to specify the executable name like python3.6:

virtualenv --system-site-packages -p python3.6 mysite

A: On OSX 10.6.8 Leopard, after having "upgraded" to Lion, then downgrading again (ouch - AVOID!), I went through the Wolf Paulus method a few months ago, completely ignorant of python. Deleted python 2.7 altogether and "replaced" it with 3.something. My FTP program stopped working (Fetch) and who knows what else relies on Python 2.7. So at that point I downloaded the latest version of 2.7 from python.org and its installer got me up and running - until I tried to use virtualenv. What seems to have worked for me this time was totally deleting Python 2.7 with this code:

sudo rm -R /System/Library/Frameworks/Python.framework/Versions/2.7

removing all the links with this code:

sudo rm /usr/bin/pydoc
sudo rm /usr/bin/python
sudo rm /usr/bin/pythonw
sudo rm /usr/bin/python-config

I had tried to install python with homebrew, but apparently it will not work unless all of XTools is installed, which I have been avoiding, since the version of XTools compatible with 10.6 is ancient and 4GB and mostly all I need is GCC, the compiler, which you can get here. So I just installed with the latest download from python.org. Then had to reinstall easy_install, pip, virtualenv. Definitely wondering when it will be time for a new laptop, but there's a lot to be said for buying fewer pieces of hardware (slave labor, unethical mining, etc).
A: The above solutions failed for me, but the following worked:

python3 -m venv --without-pip <ENVIRONMENT_NAME>
. <ENVIRONMENT_NAME>/bin/activate
curl https://bootstrap.pypa.io/get-pip.py | python
deactivate

It's hacky, but yes, the core problem really did just seem to be pip.
A: I did the following steps to get virtualenv working:
Update virtualenv as follows:

==> sudo pip install --upgrade virtualenv

Initialize the python3 virtualenv:

==> virtualenv -p python3 venv

A: I had this same issue, and I can confirm that the problem was with an outdated virtualenv.py file. It was not necessary to do a whole install --upgrade. Replacing the virtualenv.py file with the most recent version sufficed.
A: I also had this problem, and I tried the following method, which worked for me:

conda install virtualenv
virtualenv --system-site-packages /anaconda/envs/tensorflow (here envs keeps all the virtual environments made by the user)
source /anaconda/envs/tensorflow/bin/activate

Hope it's helpful.
A: I had this same issue when trying to install py2.7 on a newer system. The root issue was that virtualenv was part of py3.7 and thus was not compatible:

$ virtualenv -p python2.7 env
Running virtualenv with interpreter /usr/local/bin/python2.7
New python executable in /Users/blah/env/bin/python
ERROR: The executable /Users/blah/env/bin/python is not functioning
ERROR: It thinks sys.prefix is u'/Library/Frameworks/Python.framework/Versions/2.7' (should be u'/Users/blah/env')
ERROR: virtualenv is not compatible with this system or executable

$ which virtualenv
/Library/Frameworks/Python.framework/Versions/3.7/bin/virtualenv

# install proper version of virtualenv
$ pip2.7 install virtualenv

$ /Library/Frameworks/Python.framework/Versions/2.7/bin/virtualenv -p python2.7 env

$ . ./env/bin/activate
(env) $

A: If you continue to have trouble with virtualenv, you might try pythonbrew instead. It's an alternate solution to the same problem. It works more like Ruby's rvm: it builds and creates an entire instance of Python, under $HOME/.pythonbrew, and then sets up some bash functions that allow you to switch easily between versions. Where virtualenv shadows the system version of Python, using symbolic links as part of its solution, pythonbrew builds entirely self-contained installations of Python. I used virtualenv for years. It's a decent solution, but I've switched to pythonbrew lately. Having completely self-contained Python instances means that installing a new one takes a while (since pythonbrew actually compiles Python from scratch), but the self-contained nature of each installation appeals to me. And disk is cheap.
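For anyone hitting this on a modern Python, a minimal sanity-check sequence (a sketch; the environment name is illustrative) that sidesteps most of the interpreter mix-ups above is to ask explicitly which binaries are involved and use the built-in venv module already mentioned in the answers:

# see which python and virtualenv are actually on PATH
which python3
which virtualenv

# create and activate an environment with the stdlib venv module
python3 -m venv myenv
. myenv/bin/activate
python -c "import sys; print(sys.prefix)"   # should print a path inside myenv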
Problem with virtualenv in Mac OS X
I've installed virtualenv via pip and get this error after creating a new environment: selenium:~ auser$ virtualenv new New python executable in new/bin/python ERROR: The executable new/bin/python is not functioning ERROR: It thinks sys.prefix is u'/System/Library/Frameworks/Python.framework/ Versions/2.6' (should be '/Users/user/new') ERROR: virtualenv is not compatible with this system or executable In my environment: PYTHONPATH=/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages PATH=/System/Library/Frameworks/Python.framework/Versions/2.6/bin:/Library/Frameworks/Python.framework/Versions/2.6/bin:/Library/Frameworks/Python.framework/Versions/2.6/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin How can I repair this? Thanks.
[ "Just in case there's someone still seeking for the answer.\nI ran into this same problem just today and realized since I already have Anaconda installed, I should not have used pip install virtualenv to install virtual environment as this would give me the error message when trying to initiate it later. Instead, I tried conda install virtualenv then entered virtualenv env_mysite and problem solved.\n", "Like @RyanWilcox mentioned, you might be inadvertently pointing virtualenv to the wrong Python installation. Virtualenv comes with a -p flag to let you specify which interpreter to use.\nIn my case,\nvirtualenv test_env\n\nthrew the same error as yours, while\nvirtualenv -p python test_env\n\nworked perfectly.\nIf you call virtualenv -h, the documentation for the -p flag will tell you which python it thinks it should be using; if it looks wonky, try passing -p python. For reference, I'm on virtualenv 1.11.6.\n", "In case anyone in the future runs into this problem - this is caused by your default Python distribution being conda. Conda has it's own virtual env set up process but if you have the conda distribution of python and still wish to use virtualenv here's how:\n\nFind the other python distribution on your machine: ls -ls /usr/bin/python*\nTake note of the availble python version that is not conda and run the code below (note for python 3 and above you have to upgrade virtualenv first): virtualenv -p python2.7(or your python version) flaskapp\n\n", "I've run across this problem myself. I wrote down the instructions in a README, which I have pasted below....\nI have found there are two things that work:\n\nMake sure you're running the latest virtualenv (1.5.1, of this writting)\nIf you're using a non system Python as your standard Python (which python to check) Forcefully use the System supplied one.\nInstead of virtualenv thing use /usr/bin/python2.6 PATH/TO/VIRTUALENV thing (or whatever which \npython returned to you - this is what it did for me when I ran into this issue)\n\n", "I had the same problem and as I see it now, it was caused by a messy Python installation. I have OS X installed for over a year since I bought a new laptop and I have already installed and reinstalled Python for several times using different sources (official binaries, homebrew, official binaries + hand-made adjustments as described here). Don't ask me why I did that, I'm just a miserable newbie believing everything will fix itself after being re-installed. \nSo, I had a number of different Pythons installed here and there as well as many hardlinks pointing at them inconsistently. Eventually I got sick of all of them and reinstalled OS X carefully cleaned the system from all the Pythons I found using find utility. Also, I have unlinked all the links pointing to whatever Python from everywhere. Then I've installed a fresh Python using homebrew, installed virtualenv and everything works as a charm now.\nSo, my recipe is:\nsudo find / -iname \"python*\" > python.log\nThen analyze this file, remove and unlink everything related to the version of Python you need, reinstall it (I did it with homebrew, maybe official installation will also work) and enjoy. Make sure you unlink everything python-related from /usr/bin and /usr/local/bin as well as remove all the instances of Frameworks/Python.framework/Versions/<Your.Version> in /Library and /System/Library. \nIt may be a dirty hack, but it worked for me. 
I prefer not to keep any system-wide Python libraries except pip and virtualenv and create virtual environments for all of my projects, so I do not care about removing the important libraries. If you don't want to remove everything, still try to understand whether your Pythons are, what links point to them and from where. Then think what may cause the problem and fix it.\n", "I ran into a variation of this \"not functioning\" error.\nI was trying to create an environment in a folder that included the path \".../Programming/Developing...\" which is actually \"/Users/eric/Documents/Programming:Developing/\"\nand got this error:\nImportError: No module named site\nERROR: The executable env/bin/python2.7 is not functioning\nERROR: It thinks sys.prefix is u'/Users/eric/Documents/Programming:Developing/heroku' (should be u'/Users/eric/Documents/Programming:Developing/heroku/env')\nERROR: virtualenv is not compatible with this system or executable\n\nI tried the same in a different folder and it worked fine, no errors and env/bin has what I expect (activate, etc.).\n", "I got the same problem and I found that it happens when you do not specify the python executable name properly. So for python 2x, for example:\nvirtualenv --system-site-packages -p python mysite\nBut for python 3.6 you need to specify the executable name like python3.6\nvirtualenv --system-site-packages -p python3.6 mysite \n", "On on OSX 10.6.8 leopard, after having \"upgraded\" to Lion, then downgrading again (ouch - AVOID!), I went through the Wolf Paulus method a few months ago, completely ignorant of python. Deleted python 2.7 altogether and \"replaced\" it with 3.something. My FTP program stopped working (Fetch) and who knows what else relies on Python 2.7. So at that point I downloaded the latest version of 2.7 from python.org and it's installer got me up and running - until i tried to use virtualenv.\nWhat seems to have worked for me this time was totally deleting Python 2.7 with this code:\nsudo rm -R /System/Library/Frameworks/Python.framework/Versions/2.7\n\nremoving all the links with this code: \nsudo rm /usr/bin/pydoc\nsudo rm /usr/bin/python\nsudo rm /usr/bin/pythonw\nsudo rm /usr/bin/python-config\n\nI had tried to install python with homebrew, but apparently it will not work unless all of XTools is installed, which I have been avoiding, since the version of XTools compatible with 10.6 is ancient and 4GB and mostly all I need is GCC, the compiler, which you can get here.\nSo I just installed with the latest download from python.org.\nThen had to reinstall easy_install, pip, virtualenv.\nDefinitely wondering when it will be time for a new laptop, but there's a lot to be said for buying fewer pieces of hardware (slave labor, unethical mining, etc).\n", "The above solutions failed for me, but the following worked:\npython3 -m venv --without-pip <ENVIRONMENT_NAME>\n. <ENVIRONMENT_NAME>/bin/activate\ncurl https://bootstrap.pypa.io/get-pip.py | python\ndeactivate\n\nIt's hacky, but yes, the core problem really did just seem to be pip.\n", "I did the following steps to get virtualenv working : \nUpdate virtualenv as follows : \n==> sudo pip install --upgrade virtualenv\n\nInitialize python3 virtualenv :\n==> virtualenv -p python3 venv\n\n", "I had this same issue, and I can confirm that the problem was with an outdated virtualenv.py file. \nIt was not necessary to do a whole install --upgrade. \nReplacing the virtualenv.py file with the most recent version sufficed. 
\n", "I also had this problem, and I tried the following method which worked for me:\nconda install virtualenv\n\nvirtualenv --system-site-packages /anaconda/envs/tensorflow (here envs keeps all the virtual environments made by user)\nsource /anaconda/envs/tensorflow/bin/activate\n\nHope it's helpful.\n", "I had this same issue when trying to install py2.7 on a newer system. The root issue was that virtualenv was part of py3.7 and thus was not compatible:\n$ virtualenv -p python2.7 env\nRunning virtualenv with interpreter /usr/local/bin/python2.7\nNew python executable in /Users/blah/env/bin/python\nERROR: The executable /Users/blah/env/bin/python is not functioning\nERROR: It thinks sys.prefix is u'/Library/Frameworks/Python.framework/Versions/2.7' (should be u'/Users/blah/env')\nERROR: virtualenv is not compatible with this system or executable\n\n$ which virtualenv\n/Library/Frameworks/Python.framework/Versions/3.7/bin/virtualenv\n\n# install proper version of virtualenv \n$ pip2.7 install virtualenv\n\n$ /Library/Frameworks/Python.framework/Versions/2.7/bin/virtualenv -p python2.7 env\n\n$ . ./env/bin/activate\n(env) $ \n\n", "If you continue to have trouble with virtualenv, you might try pythonbrew, instead. It's an alternate solution to the same problem. It works more like Ruby's rvm: It builds and creates an entire instance of Python, under $HOME/.pythonbrew, and then sets up some bash functions that allow you to switch easily between versions. Where virtualenv shadows the system version of Python, using symbolic links as part of its solution, pythonbrew builds entirely self-contained installations of Python.\nI used virtualenv for years. It's a decent solution, but I've switched to pythonbrew lately. Having completely self-contained Python instances means that installing a new one takes awhile (since pythonbrew actually compiles Python from scratch), but the self-contained nature of each installation appeals to me. And disk is cheap.\n" ]
[ 109, 6, 5, 4, 3, 1, 1, 0, 0, 0, 0, 0, 0, -3 ]
[ "Open terminal and type /Library/Frameworks/Python.framework/Versions/\nthen type ls /Library/Frameworks/Python.framework/Versions/2.7/bin/\n if you are using Python2(or any other else).\nEdit ~/.bash_profile and add the following line:\nexport PATH=$PATH:/Library/Frameworks/Python.framework/Versions/2.7/bin/\ncat ~/.bash_profile\n\nIn my case the content of ~/.bash_profile is as follows:\nexport PATH=$PATH:/Library/Frameworks/Python.framework/Versions/2.7/bin/\nNow the virtualenv command should work.\n" ]
[ -1 ]
[ "macos", "operating_system", "python", "virtualenv" ]
stackoverflow_0005904319_macos_operating_system_python_virtualenv.txt
Q: pybind11: How to organize pybind module under a namespace package In the below example from the pybind tutorial, a dynamic library is built. setup.py in https://github.com/pybind/python_example:

ext_modules = [
    Pybind11Extension("python_example",
        ["src/main.cpp"],
        ...
    ),
]
setup(
    ext_modules=ext_modules,
    ...
)

It can be imported like this:

import python_example

But this lives in the global namespace and I would like to organize it under a namespace package like this:

import mypackage.python_example

It seems that regardless of where I put main.cpp, it will always be accessible in the global namespace. I am thinking of e.g. numpy, where everything is used as np.somefunction and I never import from another namespace.
A: One can add a namespace in front of the module name:

Pybind11Extension("mypackage.python_example",
    ["src/main.cpp"],
    ...
)

But the name in PYBIND11_MODULE should stay as it is:

PYBIND11_MODULE(python_example, m) {

This will add a folder during the build: mypackage/python_example.cpython-38-x86_64-linux-gnu.so
This way you can import it like this:

import mypackage.python_example

Thanks to Marc Gliss for his answer in the comments.
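Putting the pieces together, a condensed setup.py sketch (the mypackage name is illustrative; Pybind11Extension and build_ext come from pybind11's setup helpers):

from setuptools import setup
from pybind11.setup_helpers import Pybind11Extension, build_ext

ext_modules = [
    # the dotted name places the built .so under mypackage/
    Pybind11Extension("mypackage.python_example", ["src/main.cpp"]),
]

setup(
    name="mypackage",
    ext_modules=ext_modules,
    cmdclass={"build_ext": build_ext},
)

Note that the PYBIND11_MODULE(python_example, m) macro in main.cpp keeps only the last component of the dotted name, exactly as the answer describes.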
pybind11: How to organize pybind module under a namespace package
In the below example of the pybind tutorial, a dynamic library is build. setup.py in https://github.com/pybind/python_example: ext_modules = [ Pybind11Extension("python_example", ["src/main.cpp"], ... ), ] setup( ext_modules=ext_modules, ... ) It can be imported like this: import python_example But this lives in the global namespace and I would like to organize it under a namepsace package like this: import mypackage.python_example It seems that regardless of where I put the main.cpp it will be always accessible under the global namespace. I am thinking of e.g. numpy, where everything is used as np.somefunction and never do I import from an other namespace.
[ "One can add a namespace in front of the module name.\nPybind11Extension(\"mypackage.python_example\",\n [\"src/main.cpp\"],\n ...\n)\n\nBut the name in PYBIND11_MODULE should stay as it is.\nPYBIND11_MODULE(python_example, m) {\n\nThis will add a folder during the build: mypackage/python_example.cpython-38-x86_64-linux-gnu.so\nThis way you can import it like this:\nimport mypackage.python_example\n\nThanks to Marc Gliss for his answer in the comments.\n" ]
[ 0 ]
[]
[]
[ "c++", "packaging", "pybind11", "python" ]
stackoverflow_0074660906_c++_packaging_pybind11_python.txt
Q: Trying to create a sliding window that checks for repeats in a DNA sequence I'm trying to write a bioinformatics program that will check for certain repeats in a given string of nucleotides. The user inputs a certain pattern, and the program outputs how many times it is repeated, or even highlights where the repeats are. I've gotten a good start on it, but could use some help. Below is my code so far.

while True:
    text = 'AGACGCCTGGGAACTGCGGCCGCGGGCTCGCGCTCCTCGCCAGGCCCTGCCGCCGGGCTGCCATCCTTGCCCTGCCATGTCTCGCCGGAAGCCTGCGTCGGGCGGCCTCGCTGCCTCCAGCTCAGCCCCTGCGAGGCAAGCGGTTTTGAGCCGATTCTTCCAGTCTACGGGAAGCCTGAAATCCACCTCCTCCTCCACAGGTGCAGCCGACCAGGTGGACCCTGGCGCTgcagcggctgcagcggccgcagcggccgcagcgCCCCCAGCGCCCCCAGCTCCCGCCTTCCCGCCCCAGCTGCCGCCGCACATA'
    print ("Input Pattern:")
    pattern = input("")

    def pattern_count(text, pattern):
        count = 0
        for i in range(len(text) - len(pattern) + 1):
            if text[i: i + len(pattern)] == pattern:
                count = count + 1
            return count

    print(pattern_count(text, pattern))

The issue lies in the fact that I can only get an output when the input pattern is at the very beginning (e.g. AGA or AGAC). Any help or recommendations would be greatly appreciated. Thank you so much!
A: One possibility is to use re.findall:

import re
text = 'AGACGCCTGGGAACTGCGGCCGCGGGCTCGCGCTCCTCGCCAGGCCCTGCCGCCGGGCTGCCATCCTTGCCCTGCCATGTCTCGCCGGAAGCCTGCGTCGGGCGGCCTCGCTGCCTCCAGCTCAGCCCCTGCGAGGCAAGCGGTTTTGAGCCGATTCTTCCAGTCTACGGGAAGCCTGAAATCCACCTCCTCCTCCACAGGTGCAGCCGACCAGGTGGACCCTGGCGCTgcagcggctgcagcggccgcagcggccgcagcgCCCCCAGCGCCCCCAGCTCCCGCCTTCCCGCCCCAGCTGCCGCCGCACATA'
pattern = "CCT"
count = sum(1 for _ in re.findall(pattern, text))

The sum(1 for ...) is a common pattern to count the number of items a generator returns. See e.g. this answer.
A: Here is a modified version of your code that will allow the user to input a string of nucleotides and a pattern to search for. It will then output the number of times the pattern appears in the string. Note that this code is case sensitive, so "AGC" and "agc" will be treated as different patterns.

def pattern_count(text, pattern):
    count = 0
    for i in range(len(text) - len(pattern) + 1):
        if text[i: i + len(pattern)] == pattern:
            count = count + 1
    return count

while True:
    print("Input the string of nucleotides:")
    text = input()

    print("Input the pattern to search for:")
    pattern = input()

    count = pattern_count(text, pattern)
    print("The pattern appears {} times in the string.".format(count))

One potential optimization you could make to your code is to use the built-in count() method to count the number of times a pattern appears in a string. This would avoid the need to loop over the string and check each substring manually. Here is how you could modify your code to use this method:

def pattern_count(text, pattern):
    return text.count(pattern)

while True:
    print("Input the string of nucleotides:")
    text = input()

    print("Input the pattern to search for:")
    pattern = input()

    count = pattern_count(text, pattern)
    print("The pattern appears {} times in the string.".format(count))

A: Here's a fixed version of your code:

def pattern_count(text, pattern):
    count = 0
    for i in range(len(text) - len(pattern) + 1):
        if text[i: i + len(pattern)] == pattern:
            count += 1
    return count


while True:
    text = 'AGACGCCTGGGAACTGCGGCCGCGGGCTCGCGCTCCTCGCCAGGCCCTGCCGCCGGGCTGCCATCCTTGCCCTGCCATGTCTCGCCGGAAGCCTGCGTCGGGCGGCCTCGCTGCCTCCAGCTCAGCCCCTGCGAGGCAAGCGGTTTTGAGCCGATTCTTCCAGTCTACGGGAAGCCTGAAATCCACCTCCTCCTCCACAGGTGCAGCCGACCAGGTGGACCCTGGCGCTgcagcggctgcagcggccgcagcggccgcagcgCCCCCAGCGCCCCCAGCTCCCGCCTTCCCGCCCCAGCTGCCGCCGCACATA'
    print("Input Pattern:")
    pattern = input("")

    print(pattern_count(text, pattern))

The issue with your code was that the return statement was indented inside the for loop, which caused it to execute after the first iteration instead of after all iterations. I also used the += operator as a more concise way to increment the count. Finally, moving the return statement outside the for loop means it returns the count only after all iterations of the loop have completed.
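Two details worth flagging about the count() shortcut: str.count() only counts non-overlapping occurrences, which matters for repeat-finding in DNA, and if you also want to highlight where the repeats are, re.finditer with a zero-width lookahead yields the overlapping start positions. A small illustration (the toy sequence here is just for demonstration):

import re

text = "AAAA"
print(text.count("AA"))   # 2 - non-overlapping occurrences only

# a lookahead matches at every position, so overlaps are found too
positions = [m.start() for m in re.finditer("(?=AA)", text)]
print(positions)           # [0, 1, 2] - where each repeat starts
print(len(positions))      # 3 - overlapping count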
Trying to create a sliding window that checks for repeats in a DNA sequence
I'm trying to write a bioinformatics code that will check for certain repeats in a given string of nucleotides. The user inputs a certain patter, and the program outputs how many times something is repeated, or even highlights where they are. I've gotten a good start on it, but could use some help. Below is my code so far. while True: text = 'AGACGCCTGGGAACTGCGGCCGCGGGCTCGCGCTCCTCGCCAGGCCCTGCCGCCGGGCTGCCATCCTTGCCCTGCCATGTCTCGCCGGAAGCCTGCGTCGGGCGGCCTCGCTGCCTCCAGCTCAGCCCCTGCGAGGCAAGCGGTTTTGAGCCGATTCTTCCAGTCTACGGGAAGCCTGAAATCCACCTCCTCCTCCACAGGTGCAGCCGACCAGGTGGACCCTGGCGCTgcagcggctgcagcggccgcagcggccgcagcgCCCCCAGCGCCCCCAGCTCCCGCCTTCCCGCCCCAGCTGCCGCCGCACATA' print ("Input Pattern:") pattern = input("") def pattern_count(text, pattern): count = 0 for i in range(len(text) - len(pattern) + 1): if text[i: i + len(pattern)] == pattern: count = count + 1 return count print(pattern_count(text, pattern)) The issue lies in in the fact that I can only put the input from the beginning (ex. AGA or AGAC) to get an output. Any help or recommendations would be greatly appreciated. Thank you so much!
[ "One possibility is to use re.findall:\nimport re\ntext = 'AGACGCCTGGGAACTGCGGCCGCGGGCTCGCGCTCCTCGCCAGGCCCTGCCGCCGGGCTGCCATCCTTGCCCTGCCATGTCTCGCCGGAAGCCTGCGTCGGGCGGCCTCGCTGCCTCCAGCTCAGCCCCTGCGAGGCAAGCGGTTTTGAGCCGATTCTTCCAGTCTACGGGAAGCCTGAAATCCACCTCCTCCTCCACAGGTGCAGCCGACCAGGTGGACCCTGGCGCTgcagcggctgcagcggccgcagcggccgcagcgCCCCCAGCGCCCCCAGCTCCCGCCTTCCCGCCCCAGCTGCCGCCGCACATA'\npattern = \"CCT\"\ncount = sum(1 for _ in re.findall(pattern, text))\n\nThe sum(1 for ...) is a common pattern to count the number of items, a generator returns. See e.g. this answer.\n", "Here is a modified version of your code that will allow the user to input a string of nucleotides and a pattern to search for. It will then output the number of times the pattern appears in the string. Note that this code is case sensitive, so \"AGC\" and \"agc\" will be treated as different patterns.\ndef pattern_count(text, pattern):\n count = 0\n for i in range(len(text) - len(pattern) + 1):\n if text[i: i + len(pattern)] == pattern:\n count = count + 1\n return count\n\nwhile True:\n print(\"Input the string of nucleotides:\")\n text = input()\n\n print(\"Input the pattern to search for:\")\n pattern = input()\n\n count = pattern_count(text, pattern)\n print(\"The pattern appears {} times in the string.\".format(count))\n\nOne potential optimization you could make to your code is to use the built-in count() method to count the number of times a pattern appears in a string. This would avoid the need to loop over the string and check each substring manually. Here is how you could modify your code to use this method:\ndef pattern_count(text, pattern):\n return text.count(pattern)\n\nwhile True:\n print(\"Input the string of nucleotides:\")\n text = input()\n\n print(\"Input the pattern to search for:\")\n pattern = input()\n\n count = pattern_count(text, pattern)\n print(\"The pattern appears {} times in the string.\".format(count))\n\n", "Here's a fixed version of your code:\ndef pattern_count(text, pattern):\n count = 0\n for i in range(len(text) - len(pattern) + 1):\n if text[i: i + len(pattern)] == pattern:\n count += 1\n return count\n\n\nwhile True:\n text = 'AGACGCCTGGGAACTGCGGCCGCGGGCTCGCGCTCCTCGCCAGGCCCTGCCGCCGGGCTGCCATCCTTGCCCTGCCATGTCTCGCCGGAAGCCTGCGTCGGGCGGCCTCGCTGCCTCCAGCTCAGCCCCTGCGAGGCAAGCGGTTTTGAGCCGATTCTTCCAGTCTACGGGAAGCCTGAAATCCACCTCCTCCTCCACAGGTGCAGCCGACCAGGTGGACCCTGGCGCTgcagcggctgcagcggccgcagcggccgcagcgCCCCCAGCGCCCCCAGCTCCCGCCTTCCCGCCCCAGCTGCCGCCGCACATA'\n print(\"Input Pattern:\")\n pattern = input(\"\")\n\n print(pattern_count(text, pattern))\n\nThe issues with your code were that you had an extra indentation in the for loop, which caused the return statement to be executed after the first iteration of the loop, instead of after all iterations. I also added a += operator to increase the count, instead of overwriting the count with the result of count + 1. Finally, I moved the return statement outside the for loop, so that it returns the count after all iterations of the loop have been completed.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "bioinformatics", "biopython", "dna_sequence", "python", "repeat" ]
stackoverflow_0074659092_bioinformatics_biopython_dna_sequence_python_repeat.txt
Q: Command not found - installing ganache-cli with yarn on Visual Studio I've installed the following in the Visual Studio terminal:

NodeJS v16.13.1
Yarn 1.22.17
Ganache-cli

MacBook:web3_py_simple_storage myName$ yarn global add ganache-cli
warning ../package.json: No license field
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...

success Installed "[email protected]" with binaries:
- ganache-cli
✨ Done in 1.10s.

But when I try to run `ganache-cli --version`, I always receive the same message: `bash: ganache-cli: command not found`... I think it may be a path problem, but I've tried a lot of solutions and still nothing. Thanks a lot in advance to anyone who can help!
A: I encountered the same issue earlier today and got it solved by typing this command in my Visual Studio terminal:

npm install -g ganache-cli

(Note: you must have Node.js already installed.) After the installation, simply run the command below to check if it was installed properly:

ganache-cli --version

A: I used npm install -g ganache and it solved my problem, but to start ganache-cli, run it with npx ganache-cli.
A: You might have to add it to your PATH:

C:\Users\userName\AppData\Local\Yarn\bin
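Since the asker is on macOS, a hedged sketch of the PATH fix (the directory comes from yarn global bin, which prints where Yarn installs global binaries):

# append Yarn's global bin directory to PATH for the current shell
export PATH="$PATH:$(yarn global bin)"

# make it permanent for bash
echo 'export PATH="$PATH:$(yarn global bin)"' >> ~/.bash_profile

ganache-cli --version   # should now resolve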
Command not found - installing ganache-cli with yarn on Visual Studio
I've installed nodeJS in the terminal of Visual Studio version : v16.13.1 Yarn 1.22.17 Ganache-cli MacBook:web3_py_simple_storage myName$ yarn global add ganache-cli warning ../package.json: No license field [1/4] Resolving packages... [2/4] Fetching packages... [3/4] Linking dependencies... [4/4] Building fresh packages... success Installed "[email protected]" with binaries: - ganache-cli ✨ Done in 1.10s.``` BUT when I try to do `ganache-cli --version`, I always received the same msg `bash: ganache-cli: command not found`... I maybe think it's a path problem but I tried a lot and a lot of solution and still nothing.. I really thanks a lot in advance the guy who will help me !
[ "I encountered the same issue earlier today, got it solved by typing this command in my visual studio terminal\nnpm install -g ganache-cli\n(Note: You must have Nodejs already installed)\nAfter the installation simply run the command below\nganache-cli --version\nto check if it was installed properly\n", "I used npm install -g ganache and it solved my problem, but when I want to start Ganache-cli, run with npx ganache-cli.\n", "you might have to add it to path\nC:\\Users\\userName\\AppData\\Local\\Yarn\\bin\n\n" ]
[ 2, 0, 0 ]
[]
[]
[ "ganache", "installation", "python", "terminal", "visual_studio" ]
stackoverflow_0070599723_ganache_installation_python_terminal_visual_studio.txt
Q: I want to read this csv file with pandas and display the first 5 records but I keep getting this error I keep getting an error when I use df.head() on the dataframe I read in. When I read in my CSV file and attempt to display the first 5 records, I use these lines:

df = pd.read_csv('US_Accidents_Dec21.csv')
df.head()

But I get the following error and I want to know how to fix it.

File ~\anaconda3\lib\site-packages\IPython\core\formatters.py:707, in PlainTextFormatter.__call__(self, obj)
    700 stream = StringIO()
    701 printer = pretty.RepresentationPrinter(stream, self.verbose,
    702     self.max_width, self.newline,
    703     max_seq_length=self.max_seq_length,
    704     singleton_pprinters=self.singleton_printers,
    705     type_pprinters=self.type_printers,
    706     deferred_pprinters=self.deferred_printers)
--> 707 printer.pretty(obj)
    708 printer.flush()
    709 return stream.getvalue()

File ~\anaconda3\lib\site-packages\IPython\lib\pretty.py:410, in RepresentationPrinter.pretty(self, obj)
    407 return meth(obj, self, cycle)
    408 if cls is not object \
    409     and callable(cls.__dict__.get('__repr__')):
--> 410 return _repr_pprint(obj, self, cycle)
    412 return _default_pprint(obj, self, cycle)
    413 finally:

File ~\anaconda3\lib\site-packages\IPython\lib\pretty.py:778, in _repr_pprint(obj, p, cycle)
    776 """A pprint that just redirects to the normal repr function."""
    777 # Find newlines and replace them with p.break_()
--> 778 output = repr(obj)
    779 lines = output.splitlines()
    780 with p.group():

File ~\anaconda3\lib\site-packages\pandas\core\frame.py:1011, in DataFrame.__repr__(self)
   1008 return buf.getvalue()
   1010 repr_params = fmt.get_dataframe_repr_params()
--> 1011 return self.to_string(**repr_params)

File ~\anaconda3\lib\site-packages\pandas\core\frame.py:1192, in DataFrame.to_string(self, buf, columns, col_space, header, index, na_rep, formatters, float_format, sparsify, index_names, justify, max_rows, max_cols, show_dimensions, decimal, line_width, min_rows, max_colwidth, encoding)
   1173 with option_context("display.max_colwidth", max_colwidth):
   1174     formatter = fmt.DataFrameFormatter(
   1175         self,
   1176         columns=columns,
(...)
   1190         decimal=decimal,
   1191     )
--> 1192 return fmt.DataFrameRenderer(formatter).to_string(
   1193     buf=buf,
   1194     encoding=encoding,
   1195     line_width=line_width,
   1196 )

File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1128, in DataFrameRenderer.to_string(self, buf, encoding, line_width)
   1125 from pandas.io.formats.string import StringFormatter
   1127 string_formatter = StringFormatter(self.fmt, line_width=line_width)
--> 1128 string = string_formatter.to_string()
   1129 return save_to_buffer(string, buf=buf, encoding=encoding)

File ~\anaconda3\lib\site-packages\pandas\io\formats\string.py:25, in StringFormatter.to_string(self)
    24 def to_string(self) -> str:
---> 25 text = self._get_string_representation()
    26 if self.fmt.should_show_dimensions:
    27     text = "".join([text, self.fmt.dimensions_info])

File ~\anaconda3\lib\site-packages\pandas\io\formats\string.py:40, in StringFormatter._get_string_representation(self)
    37 if self.fmt.frame.empty:
    38     return self._empty_info_line
---> 40 strcols = self._get_strcols()
    42 if self.line_width is None:
    43     # no need to wrap around just print the whole frame
    44     return self.adj.adjoin(1, *strcols)

File ~\anaconda3\lib\site-packages\pandas\io\formats\string.py:31, in StringFormatter._get_strcols(self)
    30 def _get_strcols(self) -> list[list[str]]:
---> 31 strcols = self.fmt.get_strcols()
    32 if self.fmt.is_truncated:
    33     strcols = self._insert_dot_separators(strcols)

File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:611, in DataFrameFormatter.get_strcols(self)
    607 def get_strcols(self) -> list[list[str]]:
    608     """
    609     Render a DataFrame to a list of columns (as lists of strings).
    610     """
--> 611 strcols = self._get_strcols_without_index()
    613 if self.index:
    614     str_index = self._get_formatted_index(self.tr_frame)

File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:875, in DataFrameFormatter._get_strcols_without_index(self)
    871 cheader = str_columns[i]
    872 header_colwidth = max(
    873     int(self.col_space.get(c, 0)), *(self.adj.len(x) for x in cheader)
    874 )
--> 875 fmt_values = self.format_col(i)
    876 fmt_values = _make_fixed_width(
    877     fmt_values, self.justify, minimum=header_colwidth, adj=self.adj
    878 )
    880 max_len = max(max(self.adj.len(x) for x in fmt_values), header_colwidth)

File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:889, in DataFrameFormatter.format_col(self, i)
    887 frame = self.tr_frame
    888 formatter = self._get_formatter(i)
--> 889 return format_array(
    890     frame.iloc[:, i]._values,
    891     formatter,
    892     float_format=self.float_format,
    893     na_rep=self.na_rep,
    894     space=self.col_space.get(frame.columns[i]),
    895     decimal=self.decimal,
    896     leading_space=self.index,
    897 )

File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1316, in format_array(values, formatter, float_format, na_rep, digits, space, justify, decimal, leading_space, quoting)
   1301 digits = get_option("display.precision")
   1303 fmt_obj = fmt_klass(
   1304     values,
   1305     digits=digits,
(...)
   1313     quoting=quoting,
   1314 )
--> 1316 return fmt_obj.get_result()

File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1347, in GenericArrayFormatter.get_result(self)
   1346 def get_result(self) -> list[str]:
--> 1347 fmt_values = self._format_strings()
   1348 return _make_fixed_width(fmt_values, self.justify)

File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1594, in FloatArrayFormatter._format_strings(self)
   1593 def _format_strings(self) -> list[str]:
--> 1594 return list(self.get_result_as_array())

File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1511, in FloatArrayFormatter.get_result_as_array(self)
   1508 return formatted
   1510 if self.formatter is not None:
--> 1511 return format_with_na_rep(self.values, self.formatter, self.na_rep)
   1513 if self.fixed_width:
   1514     threshold = get_option("display.chop_threshold")

File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1503, in FloatArrayFormatter.get_result_as_array.<locals>.format_with_na_rep(values, formatter, na_rep)
   1500 def format_with_na_rep(values: ArrayLike, formatter: Callable, na_rep: str):
   1501     mask = isna(values)
   1502     formatted = np.array(
--> 1503     [
   1504         formatter(val) if not m else na_rep
   1505         for val, m in zip(values.ravel(), mask.ravel())
   1506     ]
   1507     ).reshape(values.shape)
   1508     return formatted

File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1504, in <listcomp>(.0)
   1500 def format_with_na_rep(values: ArrayLike, formatter: Callable, na_rep: str):
   1501     mask = isna(values)
   1502     formatted = np.array(
   1503     [
--> 1504         formatter(val) if not m else na_rep
   1505         for val, m in zip(values.ravel(), mask.ravel())
   1506     ]
   1507     ).reshape(values.shape)
   1508     return formatted

KeyError: ';,'

It's a lot to paste here and I don't know exactly what to detail because I'm a beginner with Python.
A: The error message gives the following exception: KeyError: ';,'. I suggest verifying that your CSV file doesn't contain any errors first. Are you able to open it in e.g. Excel? If yes: are you using the correct separator and delimiter? (See the sep and delimiter parameters in the documentation.)
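As a concrete illustration, if inspection shows the file uses semicolons (an assumption here — check the first raw line of your file to confirm the actual separator):

import pandas as pd

# peek at the raw first line to see what separates the columns
with open('US_Accidents_Dec21.csv') as f:
    print(f.readline())

# pass the separator explicitly if it isn't a comma
df = pd.read_csv('US_Accidents_Dec21.csv', sep=';')
df.head()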
I want to read this csv file with pandas and display the first 5 records but I keep getting this error
I keep getting an error when i use df.head() on my dataframe I read in. When I read in my CSV file and attempt to display The first 5 records, I use these lines df=pd.read_csv('US_Accidents_Dec21.csv') df.head() But I Get the following error and I want to know how to fix it. File ~\anaconda3\lib\site-packages\IPython\core\formatters.py:707, in PlainTextFormatter.__call__(self, obj) 700 stream = StringIO() 701 printer = pretty.RepresentationPrinter(stream, self.verbose, 702 self.max_width, self.newline, 703 max_seq_length=self.max_seq_length, 704 singleton_pprinters=self.singleton_printers, 705 type_pprinters=self.type_printers, 706 deferred_pprinters=self.deferred_printers) --> 707 printer.pretty(obj) 708 printer.flush() 709 return stream.getvalue() File ~\anaconda3\lib\site-packages\IPython\lib\pretty.py:410, in RepresentationPrinter.pretty(self, obj) 407 return meth(obj, self, cycle) 408 if cls is not object \ 409 and callable(cls.__dict__.get('__repr__')): --> 410 return _repr_pprint(obj, self, cycle) 412 return _default_pprint(obj, self, cycle) 413 finally: File ~\anaconda3\lib\site-packages\IPython\lib\pretty.py:778, in _repr_pprint(obj, p, cycle) 776 """A pprint that just redirects to the normal repr function.""" 777 # Find newlines and replace them with p.break_() --> 778 output = repr(obj) 779 lines = output.splitlines() 780 with p.group(): File ~\anaconda3\lib\site-packages\pandas\core\frame.py:1011, in DataFrame.__repr__(self) 1008 return buf.getvalue() 1010 repr_params = fmt.get_dataframe_repr_params() -> 1011 return self.to_string(**repr_params) File ~\anaconda3\lib\site-packages\pandas\core\frame.py:1192, in DataFrame.to_string(self, buf, columns, col_space, header, index, na_rep, formatters, float_format, sparsify, index_names, justify, max_rows, max_cols, show_dimensions, decimal, line_width, min_rows, max_colwidth, encoding) 1173 with option_context("display.max_colwidth", max_colwidth): 1174 formatter = fmt.DataFrameFormatter( 1175 self, 1176 columns=columns, (...) 
1190 decimal=decimal, 1191 ) -> 1192 return fmt.DataFrameRenderer(formatter).to_string( 1193 buf=buf, 1194 encoding=encoding, 1195 line_width=line_width, 1196 ) File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1128, in DataFrameRenderer.to_string(self, buf, encoding, line_width) 1125 from pandas.io.formats.string import StringFormatter 1127 string_formatter = StringFormatter(self.fmt, line_width=line_width) -> 1128 string = string_formatter.to_string() 1129 return save_to_buffer(string, buf=buf, encoding=encoding) File ~\anaconda3\lib\site-packages\pandas\io\formats\string.py:25, in StringFormatter.to_string(self) 24 def to_string(self) -> str: ---> 25 text = self._get_string_representation() 26 if self.fmt.should_show_dimensions: 27 text = "".join([text, self.fmt.dimensions_info]) File ~\anaconda3\lib\site-packages\pandas\io\formats\string.py:40, in StringFormatter._get_string_representation(self) 37 if self.fmt.frame.empty: 38 return self._empty_info_line ---> 40 strcols = self._get_strcols() 42 if self.line_width is None: 43 # no need to wrap around just print the whole frame 44 return self.adj.adjoin(1, *strcols) File ~\anaconda3\lib\site-packages\pandas\io\formats\string.py:31, in StringFormatter._get_strcols(self) 30 def _get_strcols(self) -> list[list[str]]: ---> 31 strcols = self.fmt.get_strcols() 32 if self.fmt.is_truncated: 33 strcols = self._insert_dot_separators(strcols) File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:611, in DataFrameFormatter.get_strcols(self) 607 def get_strcols(self) -> list[list[str]]: 608 """ 609 Render a DataFrame to a list of columns (as lists of strings). 610 """ --> 611 strcols = self._get_strcols_without_index() 613 if self.index: 614 str_index = self._get_formatted_index(self.tr_frame) File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:875, in DataFrameFormatter._get_strcols_without_index(self) 871 cheader = str_columns[i] 872 header_colwidth = max( 873 int(self.col_space.get(c, 0)), *(self.adj.len(x) for x in cheader) 874 ) --> 875 fmt_values = self.format_col(i) 876 fmt_values = _make_fixed_width( 877 fmt_values, self.justify, minimum=header_colwidth, adj=self.adj 878 ) 880 max_len = max(max(self.adj.len(x) for x in fmt_values), header_colwidth) File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:889, in DataFrameFormatter.format_col(self, i) 887 frame = self.tr_frame 888 formatter = self._get_formatter(i) --> 889 return format_array( 890 frame.iloc[:, i]._values, 891 formatter, 892 float_format=self.float_format, 893 na_rep=self.na_rep, 894 space=self.col_space.get(frame.columns[i]), 895 decimal=self.decimal, 896 leading_space=self.index, 897 ) File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1316, in format_array(values, formatter, float_format, na_rep, digits, space, justify, decimal, leading_space, quoting) 1301 digits = get_option("display.precision") 1303 fmt_obj = fmt_klass( 1304 values, 1305 digits=digits, (...) 
1313 quoting=quoting, 1314 ) -> 1316 return fmt_obj.get_result() File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1347, in GenericArrayFormatter.get_result(self) 1346 def get_result(self) -> list[str]: -> 1347 fmt_values = self._format_strings() 1348 return _make_fixed_width(fmt_values, self.justify) File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1594, in FloatArrayFormatter._format_strings(self) 1593 def _format_strings(self) -> list[str]: -> 1594 return list(self.get_result_as_array()) File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1511, in FloatArrayFormatter.get_result_as_array(self) 1508 return formatted 1510 if self.formatter is not None: -> 1511 return format_with_na_rep(self.values, self.formatter, self.na_rep) 1513 if self.fixed_width: 1514 threshold = get_option("display.chop_threshold") File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1503, in FloatArrayFormatter.get_result_as_array.<locals>.format_with_na_rep(values, formatter, na_rep) 1500 def format_with_na_rep(values: ArrayLike, formatter: Callable, na_rep: str): 1501 mask = isna(values) 1502 formatted = np.array( -> 1503 [ 1504 formatter(val) if not m else na_rep 1505 for val, m in zip(values.ravel(), mask.ravel()) 1506 ] 1507 ).reshape(values.shape) 1508 return formatted File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1504, in <listcomp>(.0) 1500 def format_with_na_rep(values: ArrayLike, formatter: Callable, na_rep: str): 1501 mask = isna(values) 1502 formatted = np.array( 1503 [ -> 1504 formatter(val) if not m else na_rep 1505 for val, m in zip(values.ravel(), mask.ravel()) 1506 ] 1507 ).reshape(values.shape) 1508 return formatted KeyError: ';,' Its a lot to paste here and I dont know exactly what to detail because Im a beginner with using Python.
[ "The error message gives the following exception: KeyError: ';,'.\nI suggest verifying that your CSV-file doesn't contain any errors first. Are you able to open it in e.g. Excel? If yes: are you using the correct separator and delimiter? (See the sep and delimiter parameters in the documentation)\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074661778_pandas_python.txt
Q: What is the difference between a = ListNode(-1), b = ListNode(-1) and a = b = ListNode(-1) in python In Python: what's the difference between a = ListNode(-1), b = ListNode(-1) and a = b = ListNode(-1)?
A: In Python, when you create a new instance of an object, you are creating a new object in memory. For example:

a = ListNode(-1)
b = ListNode(-1)

In this case, you are creating two separate ListNode objects, a and b, which are stored in different locations in memory. On the other hand, when you use the assignment operator = in the following way:

a = b = ListNode(-1)

you are creating a single ListNode object and assigning it to two different variables, a and b. This means that a and b will both reference the same object in memory. Any changes made to the object through either a or b will be reflected in the other variable. In other words, the difference between the two examples is that in the first case you are creating two separate objects, while in the second case you are creating a single object and assigning it to multiple variables.
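A quick demonstration of the aliasing (ListNode here is a stand-in class, since the original definition isn't shown in the question):

class ListNode:
    def __init__(self, val):
        self.val = val

a = ListNode(-1)
b = ListNode(-1)
print(a is b)   # False - two distinct objects

a = b = ListNode(-1)
a.val = 5
print(b.val)    # 5 - both names point at the same object
print(a is b)   # True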
What is the difference between a = ListNode(-1), b = ListNode(-1) and a = b = ListNode(-1) in python
In python: what's the difference between a = ListNode(-1), b = ListNode(-1) and a = b = ListNode(-1)
[ "In python, when you create a new instance of an object, you are creating a new object in memory. For example:\na = ListNode(-1)\nb = ListNode(-1)\n\nIn this case, you are creating two separate ListNode objects, a and b, which are stored in different locations in memory.\nOn the other hand, when you use the assignment operator = in the following way:\na = b = ListNode(-1)\n\nYou are creating a single ListNode object and assigning it to two different variables, a and b. This means that a and b will both reference the same object in memory. Any changes made to the object through either a or b will be reflected in the other variable.\nIn other words, the difference between the two examples is that in the first case, you are creating two separate objects, while in the second case, you are creating a single object and assigning it to multiple variables.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074661831_python.txt
Q: Flask server timeouts at 30 sec with gunicorn Here is a minimal example of the code. The curl request

curl http://127.0.0.1:5000/get_zip/my_zip.zip -o my_zip.zip

should send the user the file archive/my_zip.zip. It works correctly without gunicorn and disconnects after 30 seconds when the server is launched with gunicorn.

from os import path
from flask import Flask, request, jsonify, json, send_file

app = Flask(__name__)

@app.route('/get_zip/<file_path>', methods=['GET'])
def get_zip(file_path):
    # file_path: path to a large zip file
    return send_file(path.join('archive', file_path), as_attachment=True)

if __name__ == '__main__':
    app.run(host="0.0.0.0", port="5000", debug=False, use_reloader=False)

What is the correct way to fix this disconnect when run under gunicorn?
A: 30 seconds is the default timeout value for gunicorn. To increase it, use the --timeout <seconds> parameter in your gunicorn config. Also, if you run gunicorn under nginx, don't forget to manage nginx's settings:

proxy_connect_timeout <seconds>s;
proxy_read_timeout <seconds>s;

UPDATE: it's better and safer to send files from Flask by using send_from_directory.
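For example, a sketch of raising the limit to two minutes on the command line (app:app assumes the module is app.py exposing the app object; the worker count is illustrative):

gunicorn --timeout 120 --workers 4 --bind 0.0.0.0:5000 app:app

and the matching nginx directives:

proxy_connect_timeout 120s;
proxy_read_timeout 120s;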
Flask server timeouts at 30 sec with gunicorn
Here is a minimal example of the code. The curl request curl http://127.0.0.1:5000/get_zip/my_zip.zip -o my_zip.zip should send the user the file archive/my_zip.zip. It works correctly without gunicorn but disconnects after 30 seconds when the server is launched with gunicorn. from os import path from flask import Flask, request, jsonify, json, send_file app = Flask(__name__) @app.route('/get_zip/<file_path>', methods=['GET']) def get_zip(file_path): # file_path: path to a large zip file return send_file(path.join('archive', file_path), as_attachment=True) if __name__ == '__main__': app.run(host="0.0.0.0", port="5000", debug=False, use_reloader=False) What is the correct way to fix this disconnect when run under gunicorn?
[ "30 seconds is default timeout value for gunicorn.\nTo increase it use --timeout <seconds> parameter on your gunicorn config.\nAlso if you run gunicorn under nginx, don't forget to manage nginx's settings:\nproxy_connect_timeout <seconds>s;\nproxy_read_timeout <seconds>s;\n\nUPDATE:\nit's better and safer to send files from flask by using send_from_directory\n" ]
[ 3 ]
[]
[]
[ "flask", "gunicorn", "python" ]
stackoverflow_0074661482_flask_gunicorn_python.txt
Q: List of quarter hours between two timestamps in python I have two timestamps in python and I need to get all quarter hours between those timestamps. Any idea how to do this? A: To get a list of all quarter hours between two timestamps in Python, you can use the dateutil.rrule module to create a dateutil.rrule.rrule object with the freq argument set to dateutil.rrule.MINUTELY and the interval argument set to 15 to generate a list of datetime objects separated by 15 minute intervals. You can then iterate over the rrule object and extract the time from each datetime object to create a list of quarter-hour timestamps. Here's an example: from datetime import datetime from dateutil.rrule import rrule, MINUTELY # Timestamps for start and end timestamp1 = 1605653932 timestamp2 = 1605656932 # Create a datetime object for the start timestamp start = datetime.fromtimestamp(timestamp1) # Create a datetime object for the end timestamp end = datetime.fromtimestamp(timestamp2) # Create a list of quarter-hour timestamps between the start and end timestamps timestamps = [] for dt in rrule(freq=MINUTELY, interval=15, dtstart=start, until=end): timestamps.append(dt.strftime('%H:%M')) # Print the list of quarter-hour timestamps print(timestamps) This code will generate a list of quarter-hour timestamps in the format 'HH:MM', where HH is the hour and MM is the minute, for all quarter hours between the start and end timestamps. For the timestamps 1605653932 and 1605656932, the output would be: ['23:58', '00:13', '00:28', '00:43'] If you want to start with the next quarter-hour after the start timestamp, you can change the code as follows: from datetime import datetime, timedelta from dateutil.rrule import rrule, MINUTELY # Timestamps for start and end timestamp1 = 1605653932 timestamp2 = 1605656932 # Create a datetime object for the start timestamp start = datetime.fromtimestamp(timestamp1) # Create a datetime object for the end timestamp end = datetime.fromtimestamp(timestamp2) # Find the next quarter-hour after the start timestamp start_minutes = start.minute start_next_quarter_hour = start + timedelta(minutes=15 - start_minutes % 15) # Create a list of quarter-hour timestamps between the start and end timestamps timestamps = [] for dt in rrule(freq=MINUTELY, interval=15, dtstart=start_next_quarter_hour, until=end): timestamps.append(dt.strftime('%H:%M')) # Print the list of quarter-hour timestamps print(timestamps) This will give the following output: ['00:00', '00:15', '00:30', '00:45']
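If adding the dateutil dependency is undesirable, here is a standard-library-only sketch of the same idea, rounding the start up to the next quarter-hour boundary and stepping in 15-minute increments:
from datetime import datetime, timedelta

def quarter_hours_between(start, end):
    # Advance to the next quarter-hour boundary at or after start
    remainder = (start.minute % 15) * 60 + start.second + start.microsecond / 1e6
    if remainder:
        start = start.replace(second=0, microsecond=0) + timedelta(minutes=15 - start.minute % 15)
    stamps = []
    while start <= end:
        stamps.append(start.strftime('%H:%M'))
        start += timedelta(minutes=15)
    return stamps

print(quarter_hours_between(datetime.fromtimestamp(1605653932),
                            datetime.fromtimestamp(1605656932)))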
List of quarter hours between two timestamps in python
I have two timestamps in python and I need to get all quarter hours between those timestamps. Any idea how to do this?
[ "To get a list of all quarter hours between two timestamps in Python, you can use the dateutil.rrule module to create a dateutil.rrule.rrule object with the freq argument set to dateutil.rrule.MINUTELY and the interval argument set to 15 to generate a list of datetime objects separated by 15 minute intervals. You can then iterate over the rrule object and extract the time from each datetime object to create a list of quarter-hour timestamps.\nHere's an example:\nfrom datetime import datetime\nfrom dateutil.rrule import rrule, MINUTELY\n\n# Timestamps for start and end\ntimestamp1 = 1605653932\ntimestamp2 = 1605656932\n\n# Create a datetime object for the start timestamp\nstart = datetime.fromtimestamp(timestamp1)\n\n# Create a datetime object for the end timestamp\nend = datetime.fromtimestamp(timestamp2)\n\n# Create a list of quarter-hour timestamps between the start and end timestamps\ntimestamps = []\nfor dt in rrule(freq=MINUTELY, interval=15, dtstart=start, until=end):\n timestamps.append(dt.strftime('%H:%M'))\n\n# Print the list of quarter-hour timestamps\nprint(timestamps)\n\nThis code will generate a list of quarter-hour timestamps in the format 'HH:MM', where HH is the hour and MM is the minute, for all quarter hours between the start and end timestamps. For the timestamps 1605653932 and 1605656932, the output would be:\n['23:58', '00:13', '00:28', '00:43']\n\nIf you want to start with the next quarter-hour after the start timestamp, you can change the code as follows:\nfrom datetime import datetime, timedelta\nfrom dateutil.rrule import rrule, MINUTELY\n\n# Timestamps for start and end\ntimestamp1 = 1605653932\ntimestamp2 = 1605656932\n\n# Create a datetime object for the start timestamp\nstart = datetime.fromtimestamp(timestamp1)\n\n# Create a datetime object for the end timestamp\nend = datetime.fromtimestamp(timestamp2)\n\n# Find the next quarter-hour after the start timestamp\nstart_minutes = start.minute\nstart_next_quarter_hour = start + timedelta(minutes=15 - start_minutes % 15)\n\n# Create a list of quarter-hour timestamps between the start and end timestamps\ntimestamps = []\nfor dt in rrule(freq=MINUTELY, interval=15, dtstart=start_next_quarter_hour, until=end):\n timestamps.append(dt.strftime('%H:%M'))\n\n# Print the list of quarter-hour timestamps\nprint(timestamps)\n\nThis will give the following output:\n['00:00', '00:15', '00:30', '00:45']\n\n" ]
[ 0 ]
[]
[]
[ "python", "timestamp" ]
stackoverflow_0074661759_python_timestamp.txt
Q: Reading the last line of an empty file on python I have this function on my code that is supposed to read a files last line, and if there is no file create one. My issue is when it creates the files and tries to read the last line it comes up as an error. with open(HIGH_SCORES_FILE_PATH, "w+") as file: last_line = file.readlines()[-1] if last_line == '\n': with open(HIGH_SCORES_FILE_PATH, 'a') as file: file.write('Jogo:') file.write('\n') file.write(str(0)) file.write('\n') I have tried multiple ways of reading the last line but all of the ones I've tried ends in an error. A: Opening a file in "w+" erases any content in the file. readlines() returns an empty list and trying to get value results in an IndexError. You can test for a file's existence with os.path.exists or os.path.isfile, or you could use an exception handler to deal with that case. Start with last_line set to a sentinel value. If the open fails, or if no lines are read, last_line will not be updated and you can base file creation on that. last_line = None try: with open(HIGH_SCORES_FILE_PATH) as file: for last_line in file: pass except OSError: pass if last_line is None: with open(HIGH_SCORES_FILE_PATH, "w") as file: file.write('Jogo:\n0\n') last_line = '0\n' A: To read the last line of a file, you can use the seek method and set the position to the beginning of the file, then move the file cursor to the end of the file. Then, you can use the readline method to read the last line. with open(HIGH_SCORES_FILE_PATH, "w+") as file: file.seek(0, 2) # Move cursor to the end of the file last_line = file.readline() if last_line == '\n': with open(HIGH_SCORES_FILE_PATH, 'a') as file: file.write('Jogo:') file.write('\n') file.write(str(0)) file.write('\n') Note that if the file is empty, readline will return an empty string, so you should check for that case as well. with open(HIGH_SCORES_FILE_PATH, "w+") as file: file.seek(0, 2) # Move cursor to the end of the file last_line = file.readline() if last_line == '' or last_line == '\n': with open(HIGH_SCORES_FILE_PATH, 'a') as file: file.write('Jogo:') file.write('\n') file.write(str(0)) file.write('\n')
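A defensive standard-library sketch that avoids both pitfalls above (the "w+" truncation and reading from an empty file), assuming the goal is simply to read the last line if the file exists and to create the file otherwise:
import os

# HIGH_SCORES_FILE_PATH is the path used in the question
if os.path.exists(HIGH_SCORES_FILE_PATH):
    with open(HIGH_SCORES_FILE_PATH) as file:
        lines = file.readlines()
    last_line = lines[-1] if lines else ''
else:
    with open(HIGH_SCORES_FILE_PATH, 'w') as file:
        file.write('Jogo:\n0\n')
    last_line = '0\n'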
Reading the last line of an empty file on python
I have this function in my code that is supposed to read a file's last line and, if there is no file, create one. My issue is that when it creates the file and tries to read the last line, it comes up with an error. with open(HIGH_SCORES_FILE_PATH, "w+") as file: last_line = file.readlines()[-1] if last_line == '\n': with open(HIGH_SCORES_FILE_PATH, 'a') as file: file.write('Jogo:') file.write('\n') file.write(str(0)) file.write('\n') I have tried multiple ways of reading the last line, but all of the ones I've tried end in an error.
[ "Opening a file in \"w+\" erases any content in the file. readlines() returns an empty list and trying to get value results in an IndexError. You can test for a file's existence with os.path.exists or os.path.isfile, or you could use an exception handler to deal with that case.\nStart with last_line set to a sentinel value. If the open fails, or if no lines are read, last_line will not be updated and you can base file creation on that.\nlast_line = None\ntry:\n with open(HIGH_SCORES_FILE_PATH) as file:\n for last_line in file:\n pass \nexcept OSError:\n pass\n\nif last_line is None:\n with open(HIGH_SCORES_FILE_PATH, \"w\") as file:\n file.write('Jogo:\\n0\\n')\n last_line = '0\\n'\n\n", "To read the last line of a file, you can use the seek method and set the position to the beginning of the file, then move the file cursor to the end of the file. Then, you can use the readline method to read the last line.\nwith open(HIGH_SCORES_FILE_PATH, \"w+\") as file:\n file.seek(0, 2) # Move cursor to the end of the file\n last_line = file.readline()\n if last_line == '\\n':\n with open(HIGH_SCORES_FILE_PATH, 'a') as file:\n file.write('Jogo:')\n file.write('\\n')\n file.write(str(0))\n file.write('\\n')\n\nNote that if the file is empty, readline will return an empty string, so you should check for that case as well.\nwith open(HIGH_SCORES_FILE_PATH, \"w+\") as file:\n file.seek(0, 2) # Move cursor to the end of the file\n last_line = file.readline()\n if last_line == '' or last_line == '\\n':\n with open(HIGH_SCORES_FILE_PATH, 'a') as file:\n file.write('Jogo:')\n file.write('\\n')\n file.write(str(0))\n file.write('\\n')\n\n" ]
[ 1, 0 ]
[]
[]
[ "file", "python" ]
stackoverflow_0074661682_file_python.txt
Q: Why is my exit statement not working properly? the goal is to move between rooms in this simplified version of a text base game. The code works exactly as planned except for if you try and input 'exit' directly after inputting 'instructions'. after inputting 'instructions' the first 'exit' get ran in the else invalid statement then the second 'exit' input exits the game as intended. If you continue at least one input after 'instructions' than exit works properly as well. rooms = { 'Great Hall': {'South': 'Bedroom'}, 'Bedroom': {'North': 'Great Hall', 'East': 'Cellar'}, 'Cellar': {'West': 'Bedroom'} } def instruction(): """Function to give instructions on how to play the game""" print('Welcome to Module 6 Milestone') print('Move commands are go North, go South, go East, go West') print('Typing exit will exit the game') print('Inputting instructions will remind you of the game instructions') print('Good luck may the odds be in your favor') def invalid(): """Function for if an invalid input is entered""" print('------------') print('Whoops invalid command, try again') print('------------') def main(): """Main function that runs the movement between rooms""" current_room = 'Great Hall' print('\nYou are starting in the', current_room) move = input('What will you do next?\n>').split() directions = ['North', 'South', 'East', 'West'] # directions in the dictionary while True: if len(move) < 2: # for one word inputs if 'exit' in move: # exit the game print('\nThanks for playing!') break elif 'instructions' in move: # reprint instructions instruction() print('------------') print('\nYou are in the', current_room) else: invalid() print('You are still in the', current_room) move = input('\nWhat will you do next?\n>').split() # next move input if len(move) == 2: # 2 word inputs if move[1] in directions: # checks if move is a valid direction if move[1] in rooms[current_room]: # if move is a valid direction in current room current_room = rooms[current_room][move[1]] # changes current room if valid print('------------') print('You have found the', current_room) elif move[1] not in rooms[current_room]: # if move in directions but not a valid move in current room print('------------') print('Oh no it seems to be a dead in') print('You are still in the', current_room) else: # not a valid move command invalid() print('You are still in the', current_room) move = input('What will you do next?\n>').split() # next move input else: # invalid move command invalid() print('You are still in the', current_room) move = input('What will you do next?\n>').split() instruction() # prints instructions function when game runs print('------------') if __name__ == '__main__': # if code is not imported than main() will run main() A: After your line move = input('\nWhat will you do next?\n>').split() # next move input You should jump back to the beginning of the loop, using continue. A: while True: if len(move) < 2: # for one word inputs # handle the input in various ways move = input('\nWhat will you do next?\n>').split() # next move input if len(move) == 2: # 2 word inputs When a single-word command is entered, it is handled in the first if block, and then a new move is entered. The problem is that since the second block is an if and not an elif, it processes the new move immediately. Generally it's better to ask for user input in only one place, near the top of the loop: while True: move = input("...").split() if len(move) < 2: # handle it elif len(move) == 2: # handle it else: # handle it
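A compact sketch of that restructuring applied to the game's loop, reusing the question's rooms, directions, instruction() and invalid() (movement details abbreviated):
while True:
    move = input('\nWhat will you do next?\n>').split()
    if len(move) < 2:
        if 'exit' in move:
            print('\nThanks for playing!')
            break
        elif 'instructions' in move:
            instruction()
        else:
            invalid()
    elif len(move) == 2 and move[1] in rooms[current_room]:
        current_room = rooms[current_room][move[1]]   # valid move: change rooms
        print('You have found the', current_room)
    elif len(move) == 2 and move[1] in directions:
        print('Oh no, it seems to be a dead end')     # valid direction, but no exit that way
    else:
        invalid()
Because input is read in only one place, at the top of the loop, a one-word command like exit is never accidentally re-processed by the two-word branch.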
Why is my exit statement not working properly?
the goal is to move between rooms in this simplified version of a text base game. The code works exactly as planned except for if you try and input 'exit' directly after inputting 'instructions'. after inputting 'instructions' the first 'exit' get ran in the else invalid statement then the second 'exit' input exits the game as intended. If you continue at least one input after 'instructions' than exit works properly as well. rooms = { 'Great Hall': {'South': 'Bedroom'}, 'Bedroom': {'North': 'Great Hall', 'East': 'Cellar'}, 'Cellar': {'West': 'Bedroom'} } def instruction(): """Function to give instructions on how to play the game""" print('Welcome to Module 6 Milestone') print('Move commands are go North, go South, go East, go West') print('Typing exit will exit the game') print('Inputting instructions will remind you of the game instructions') print('Good luck may the odds be in your favor') def invalid(): """Function for if an invalid input is entered""" print('------------') print('Whoops invalid command, try again') print('------------') def main(): """Main function that runs the movement between rooms""" current_room = 'Great Hall' print('\nYou are starting in the', current_room) move = input('What will you do next?\n>').split() directions = ['North', 'South', 'East', 'West'] # directions in the dictionary while True: if len(move) < 2: # for one word inputs if 'exit' in move: # exit the game print('\nThanks for playing!') break elif 'instructions' in move: # reprint instructions instruction() print('------------') print('\nYou are in the', current_room) else: invalid() print('You are still in the', current_room) move = input('\nWhat will you do next?\n>').split() # next move input if len(move) == 2: # 2 word inputs if move[1] in directions: # checks if move is a valid direction if move[1] in rooms[current_room]: # if move is a valid direction in current room current_room = rooms[current_room][move[1]] # changes current room if valid print('------------') print('You have found the', current_room) elif move[1] not in rooms[current_room]: # if move in directions but not a valid move in current room print('------------') print('Oh no it seems to be a dead in') print('You are still in the', current_room) else: # not a valid move command invalid() print('You are still in the', current_room) move = input('What will you do next?\n>').split() # next move input else: # invalid move command invalid() print('You are still in the', current_room) move = input('What will you do next?\n>').split() instruction() # prints instructions function when game runs print('------------') if __name__ == '__main__': # if code is not imported than main() will run main()
[ "After your line\nmove = input('\\nWhat will you do next?\\n>').split() # next move input\n\nYou should jump back to the beginning of the loop, using continue.\n", "while True:\n if len(move) < 2: # for one word inputs\n # handle the input in various ways\n move = input('\\nWhat will you do next?\\n>').split() # next move input\n if len(move) == 2: # 2 word inputs\n\nWhen a single-word command is entered, it is handled in the first if block, and then a new move is entered.\nThe problem is that since the second block is an if and not an elif, it processes the new move immediately.\nGenerally it's better to ask for user input in only one place, near the top of the loop:\nwhile True:\n move = input(\"...\").split()\n if len(move) < 2:\n # handle it\n elif len(move) == 2:\n # handle it\n else:\n # handle it\n \n\n" ]
[ 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074661877_python.txt
Q: How to combine streams in anyio? How to iterate over multiple steams at once in anyio, interleaving the items as they appear? Let's say, I want a simple equivalent of annotate-output. The simplest I could make is #!/usr/bin/env python3 import dataclasses from collections.abc import Sequence from typing import TypeVar import anyio import anyio.abc import anyio.streams.text SCRIPT = r""" for idx in $(seq 1 5); do printf "%s " "$idx" date -Ins sleep 0.08 done echo "." """ CMD = ["bash", "-x", "-c", SCRIPT] def print_data(data: str, is_stderr: bool) -> None: print(f"{int(is_stderr)}: {data!r}") T_Item = TypeVar("T_Item") # TODO: covariant=True? @dataclasses.dataclass(eq=False) class CombinedReceiveStream(anyio.abc.ObjectReceiveStream[tuple[int, T_Item]]): """Combines multiple streams into a single one, annotating each item with position index of the origin stream""" streams: Sequence[anyio.abc.ObjectReceiveStream[T_Item]] max_buffer_size_items: int = 32 def __post_init__(self) -> None: self._queue_send, self._queue_receive = anyio.create_memory_object_stream( max_buffer_size=self.max_buffer_size_items, # Should be: `item_type=tuple[int, T_Item] | None` ) self._pending = set(range(len(self.streams))) self._started = False self._task_group = anyio.create_task_group() async def _copier(self, idx: int) -> None: assert idx in self._pending stream = self.streams[idx] async for item in stream: await self._queue_send.send((idx, item)) assert idx in self._pending self._pending.remove(idx) await self._queue_send.send(None) # Wake up the `receive` waiters, if any. async def _start(self) -> None: assert not self._started await self._task_group.__aenter__() for idx in range(len(self.streams)): self._task_group.start_soon(self._copier, idx, name=f"_combined_receive_copier_{idx}") self._started = True async def receive(self) -> tuple[int, T_Item]: if not self._started: await self._start() # Non-blocking pre-check. # Gathers items that are in the queue when `self._pending` is empty. try: item = self._queue_receive.receive_nowait() except anyio.WouldBlock: pass else: if item is not None: return item while True: if not self._pending: raise anyio.EndOfStream item = await self._queue_receive.receive() if item is not None: return item async def aclose(self) -> None: if self._started: self._task_group.cancel_scope.cancel() self._started = False await self._task_group.__aexit__(None, None, None) async def amain(max_buffer_size_items: int = 32) -> None: async with await anyio.open_process(CMD) as proc: assert proc.stdout is not None assert proc.stderr is not None raw_streams = [proc.stdout, proc.stderr] idx_to_is_stderr = {0: False, 1: True} # just making it explicit streams = [anyio.streams.text.TextReceiveStream(stream) for stream in raw_streams] async with CombinedReceiveStream(streams) as outputs: async for idx, data in outputs: is_stderr = idx_to_is_stderr[idx] print_data(data, is_stderr=is_stderr) def main(): anyio.run(amain) if __name__ == "__main__": main() However, this CombinedReceiveStream solution is somewhat ugly, and I would some solution should already exist. What am I overlooking? A: This should be more safe and idiomatic. class CtxObj: """ Add an async context manager that calls `_ctx` to run the context. 
Usage:: class Foo(CtxObj): @asynccontextmanager async def _ctx(self): yield self # or whatever async with Foo() as self_or_whatever: pass """ async def __aenter__(self): self.__ctx = ctx = self._ctx() # pylint: disable=E1101,W0201 return await ctx.__aenter__() def __aexit__(self, *tb): return self.__ctx.__aexit__(*tb) @dataclasses.dataclass(eq=False) class CombinedReceiveStream(CtxObj): """Combines multiple streams into a single one, annotating each item with position index of the origin stream""" streams: Sequence[anyio.abc.ObjectReceiveStream[T_Item]] max_buffer_size_items: int = 32 def __post_init__(self) -> None: self._queue_send, self._queue_receive = anyio.create_memory_object_stream( max_buffer_size=self.max_buffer_size_items, # Should be: `item_type=tuple[int, T_Item] | None` ) self._pending = set(range(len(self.streams))) @asynccontextmanager async def _ctx(self): async with anyio.create_task_group() as tg: for i in self._pending: tg.start_soon(self._copier, i) yield self tg.cancel_scope.cancel() async def _copier(self, idx: int) -> None: stream = self.streams[idx] async for item in stream: await self._queue_send.send((idx, item)) self._pending.remove(idx) if not self._pending: await self._queue_send.aclose() async def receive(self) -> tuple[int, T_Item]: return await self._queue_receive.receive() def __aiter__(self): return self async def __anext__(self): try: return await self.receive() except anyio.EndOfStream: raise StopAsyncIteration() from None
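Note that the sketch above assumes from contextlib import asynccontextmanager is in scope. Usage then mirrors the question's amain:
async with CombinedReceiveStream(streams) as outputs:
    async for idx, data in outputs:
        print_data(data, is_stderr=bool(idx))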
How to combine streams in anyio?
How to iterate over multiple steams at once in anyio, interleaving the items as they appear? Let's say, I want a simple equivalent of annotate-output. The simplest I could make is #!/usr/bin/env python3 import dataclasses from collections.abc import Sequence from typing import TypeVar import anyio import anyio.abc import anyio.streams.text SCRIPT = r""" for idx in $(seq 1 5); do printf "%s " "$idx" date -Ins sleep 0.08 done echo "." """ CMD = ["bash", "-x", "-c", SCRIPT] def print_data(data: str, is_stderr: bool) -> None: print(f"{int(is_stderr)}: {data!r}") T_Item = TypeVar("T_Item") # TODO: covariant=True? @dataclasses.dataclass(eq=False) class CombinedReceiveStream(anyio.abc.ObjectReceiveStream[tuple[int, T_Item]]): """Combines multiple streams into a single one, annotating each item with position index of the origin stream""" streams: Sequence[anyio.abc.ObjectReceiveStream[T_Item]] max_buffer_size_items: int = 32 def __post_init__(self) -> None: self._queue_send, self._queue_receive = anyio.create_memory_object_stream( max_buffer_size=self.max_buffer_size_items, # Should be: `item_type=tuple[int, T_Item] | None` ) self._pending = set(range(len(self.streams))) self._started = False self._task_group = anyio.create_task_group() async def _copier(self, idx: int) -> None: assert idx in self._pending stream = self.streams[idx] async for item in stream: await self._queue_send.send((idx, item)) assert idx in self._pending self._pending.remove(idx) await self._queue_send.send(None) # Wake up the `receive` waiters, if any. async def _start(self) -> None: assert not self._started await self._task_group.__aenter__() for idx in range(len(self.streams)): self._task_group.start_soon(self._copier, idx, name=f"_combined_receive_copier_{idx}") self._started = True async def receive(self) -> tuple[int, T_Item]: if not self._started: await self._start() # Non-blocking pre-check. # Gathers items that are in the queue when `self._pending` is empty. try: item = self._queue_receive.receive_nowait() except anyio.WouldBlock: pass else: if item is not None: return item while True: if not self._pending: raise anyio.EndOfStream item = await self._queue_receive.receive() if item is not None: return item async def aclose(self) -> None: if self._started: self._task_group.cancel_scope.cancel() self._started = False await self._task_group.__aexit__(None, None, None) async def amain(max_buffer_size_items: int = 32) -> None: async with await anyio.open_process(CMD) as proc: assert proc.stdout is not None assert proc.stderr is not None raw_streams = [proc.stdout, proc.stderr] idx_to_is_stderr = {0: False, 1: True} # just making it explicit streams = [anyio.streams.text.TextReceiveStream(stream) for stream in raw_streams] async with CombinedReceiveStream(streams) as outputs: async for idx, data in outputs: is_stderr = idx_to_is_stderr[idx] print_data(data, is_stderr=is_stderr) def main(): anyio.run(amain) if __name__ == "__main__": main() However, this CombinedReceiveStream solution is somewhat ugly, and I would some solution should already exist. What am I overlooking?
[ "This should be more safe and idiomatic.\nclass CtxObj:\n \"\"\"\n Add an async context manager that calls `_ctx` to run the context.\n\n Usage::\n class Foo(CtxObj):\n @asynccontextmanager\n async def _ctx(self):\n yield self # or whatever\n\n async with Foo() as self_or_whatever:\n pass\n \"\"\"\n\n async def __aenter__(self):\n self.__ctx = ctx = self._ctx() # pylint: disable=E1101,W0201\n return await ctx.__aenter__()\n\n def __aexit__(self, *tb):\n return self.__ctx.__aexit__(*tb)\n\n\[email protected](eq=False)\nclass CombinedReceiveStream(CtxObj):\n \"\"\"Combines multiple streams into a single one, annotating each item with position index of the origin stream\"\"\"\n\n streams: Sequence[anyio.abc.ObjectReceiveStream[T_Item]]\n max_buffer_size_items: int = 32\n\n def __post_init__(self) -> None:\n self._queue_send, self._queue_receive = anyio.create_memory_object_stream(\n max_buffer_size=self.max_buffer_size_items,\n # Should be: `item_type=tuple[int, T_Item] | None`\n )\n self._pending = set(range(len(self.streams)))\n\n @asynccontextmanager\n async def _ctx(self):\n async with anyio.create_task_group() as tg:\n for i in self._pending:\n tg.start_soon(self._copier, i)\n\n yield self\n tg.cancel_scope.cancel()\n\n\n async def _copier(self, idx: int) -> None:\n stream = self.streams[idx]\n async for item in stream:\n await self._queue_send.send((idx, item))\n self._pending.remove(idx)\n if not self._pending:\n await self._queue_send.aclose()\n\n\n async def receive(self) -> tuple[int, T_Item]:\n return await self._queue_receive.receive()\n\n def __aiter__(self):\n return self\n\n async def __anext__(self):\n try:\n return await self.receive()\n except anyio.EndOfStream:\n raise StopAsyncIteration() from None\n\n" ]
[ 1 ]
[]
[]
[ "anyio", "python", "python_trio" ]
stackoverflow_0074661106_anyio_python_python_trio.txt
Q: Tell pylint that a given decorator is a classmethod How can I modify my pylintrc so that a given decorator is interpreted as a classmethod. pydantic defines a validator decorator to allow for attribute validation of model classes and operates as a class method. pylint throws a E0213: Method 'has_risk_assigned' should have "self" as first argument (no-self-argument) for a validator method declared as: from pydantic import BaseModel, validator class RiskyRecord(BaseModel): # ... attributes ... @validator('risk') def has_risk_assigned(cls, v): # ... make sure that risk is properly assigned ... How can I configure my pylintrc such that it views this decorator as defining a class (instead of instance) method? Note: I want a solution in terms of pylintrc since there are multiple classes in this module that each use multiple validators; managing this warning in one place is more desirable. I only see two classmethod related features in my current pylintrc; both only relate to the valid name(s) for the first argument. A: To configure pylint to interpret a decorator as defining a class method, you can add the following to your pylintrc file: [TYPECHECK] ignored-decorators=validator This will tell pylint to ignore the validator decorator when checking for the first argument of a method. Note that this will not affect other checks or warnings related to the validator decorator, such as the use of cls instead of self. If you want to suppress those as well, you can add the cls-is-class option to your pylintrc file: [TYPECHECK] ignored-decorators=validator cls-is-class=true This will tell pylint to interpret the cls argument in methods decorated with validator as referring to the class itself, rather than an instance of the class.
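If you would rather not touch pylintrc at all, a per-class alternative is to make the classmethod explicit (a sketch; pydantic v1 accepts an explicit classmethod under its validator decorator, and pylint then raises no complaint — the risk field here is a hypothetical stand-in):
from pydantic import BaseModel, validator

class RiskyRecord(BaseModel):
    risk: int   # hypothetical field for illustration

    @validator('risk')
    @classmethod
    def has_risk_assigned(cls, v):
        # ... make sure that risk is properly assigned ...
        return v
There is also a dedicated plugin, pylint-pydantic, that can be loaded via load-plugins to teach pylint about these decorators.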
Tell pylint that a given decorator is a classmethod
How can I modify my pylintrc so that a given decorator is interpreted as a classmethod. pydantic defines a validator decorator to allow for attribute validation of model classes and operates as a class method. pylint throws a E0213: Method 'has_risk_assigned' should have "self" as first argument (no-self-argument) for a validator method declared as: from pydantic import BaseModel, validator class RiskyRecord(BaseModel): # ... attributes ... @validator('risk') def has_risk_assigned(cls, v): # ... make sure that risk is properly assigned ... How can I configure my pylintrc such that it views this decorator as defining a class (instead of instance) method? Note: I want a solution in terms of pylintrc since there are multiple classes in this module that each use multiple validators; managing this warning in one place is more desirable. I only see two classmethod related features in my current pylintrc; both only relate to the valid name(s) for the first argument.
[ "To configure pylint to interpret a decorator as defining a class method, you can add the following to your pylintrc file:\n[TYPECHECK]\nignored-decorators=validator\n\nThis will tell pylint to ignore the validator decorator when checking for the first argument of a method. Note that this will not affect other checks or warnings related to the validator decorator, such as the use of cls instead of self. If you want to suppress those as well, you can add the cls-is-class option to your pylintrc file:\n[TYPECHECK]\nignored-decorators=validator\ncls-is-class=true\n\nThis will tell pylint to interpret the cls argument in methods decorated with validator as referring to the class itself, rather than an instance of the class.\n" ]
[ 1 ]
[]
[]
[ "class_method", "pylint", "pylintrc", "python" ]
stackoverflow_0074661891_class_method_pylint_pylintrc_python.txt
Q: PermissionError: [Errno 13] Permission denied when trying to play mp3 with python I'm trying to play an mp3 with pydub, and I keep getting the error File "c:\Users\ryanc\Desktop\codefiles\python\audio player.py", line 5, in <module> play(song) File "C:\Users\ryanc\AppData\Local\Programs\Python\Python39\lib\site-packages\pydub\playback.py", line 71, in play _play_with_ffplay(audio_segment) File "C:\Users\ryanc\AppData\Local\Programs\Python\Python39\lib\site-packages\pydub\playback.py", line 15, in _play_with_ffplay seg.export(f.name, "wav") File "C:\Users\ryanc\AppData\Local\Programs\Python\Python39\lib\site-packages\pydub\audio_segment.py", line 867, in export out_f, _ = _fd_or_path_or_tempfile(out_f, 'wb+') File "C:\Users\ryanc\AppData\Local\Programs\Python\Python39\lib\site-packages\pydub\utils.py", line 60, in _fd_or_path_or_tempfile fd = open(fd, mode=mode) PermissionError: [Errno 13] Permission denied: 'C:\\Users\\ryanc\\AppData\\Local\\Temp\\tmpkdgigv5o.wav' My code is just from pydub import AudioSegment from pydub.playback import play song = AudioSegment.from_file("C:\\Users\\ryanc\\Music\\rr.mp3") play(song) I tried running vscode with admin but that didnt work either. A: So it seems that 'pydub' library by default is not able to play .mp3 songs. You will neeed to convert it into .wav format and then execute the command again. So here is your code with some minor modifications: from pydub import AudioSegment from pydub.playback import play song = AudioSegment.from_mp3("C:\\Users\\ryanc\\Music\\rr.mp3") play(song) Now in order to work for this you need to have the ffmpeg installed. If not it will gain throw an error. Download ffmpeg and paste the code to your script directory. Here is the link to make you better understand the process. A: If saving the mp3 to a file isn't a pain for you, then using the audio playback extension is the easiest option. https://marketplace.visualstudio.com/items?itemName=sukumo28.wav-preview A: Try this link.. https://github.com/jiaaro/pydub/issues/209 Adding f.close() line in playback.py to close the stream works magic. def _play_with_ffplay(seg): with NamedTemporaryFile("w+b", suffix=".wav") as f: f.close() # close the file stream seg.export(f.name, "wav") subprocess.call([PLAYER, "-nodisp", "-autoexit", "-hide_banner", f.name]) A: The solution is to install pyAudio from python package
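Building on the NamedTemporaryFile issue described above, a sketch that sidesteps the Windows file-locking problem by exporting to an explicitly closed temp file and calling ffplay directly (the ffplay flags are the same ones pydub's playback helper uses; ffmpeg/ffplay must be on PATH):
import os
import subprocess
import tempfile
from pydub import AudioSegment

song = AudioSegment.from_file(r"C:\Users\ryanc\Music\rr.mp3")
fd, tmp_path = tempfile.mkstemp(suffix=".wav")
os.close(fd)  # release the handle so another process can open the file on Windows
try:
    song.export(tmp_path, format="wav")
    subprocess.call(["ffplay", "-nodisp", "-autoexit", "-hide_banner", tmp_path])
finally:
    os.remove(tmp_path)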
PermissionError: [Errno 13] Permission denied when trying to play mp3 with python
I'm trying to play an mp3 with pydub, and I keep getting the error File "c:\Users\ryanc\Desktop\codefiles\python\audio player.py", line 5, in <module> play(song) File "C:\Users\ryanc\AppData\Local\Programs\Python\Python39\lib\site-packages\pydub\playback.py", line 71, in play _play_with_ffplay(audio_segment) File "C:\Users\ryanc\AppData\Local\Programs\Python\Python39\lib\site-packages\pydub\playback.py", line 15, in _play_with_ffplay seg.export(f.name, "wav") File "C:\Users\ryanc\AppData\Local\Programs\Python\Python39\lib\site-packages\pydub\audio_segment.py", line 867, in export out_f, _ = _fd_or_path_or_tempfile(out_f, 'wb+') File "C:\Users\ryanc\AppData\Local\Programs\Python\Python39\lib\site-packages\pydub\utils.py", line 60, in _fd_or_path_or_tempfile fd = open(fd, mode=mode) PermissionError: [Errno 13] Permission denied: 'C:\\Users\\ryanc\\AppData\\Local\\Temp\\tmpkdgigv5o.wav' My code is just from pydub import AudioSegment from pydub.playback import play song = AudioSegment.from_file("C:\\Users\\ryanc\\Music\\rr.mp3") play(song) I tried running vscode with admin but that didnt work either.
[ "So it seems that 'pydub' library by default is not able to play .mp3 songs. You will neeed to convert it into .wav format and then execute the command again.\nSo here is your code with some minor modifications:\nfrom pydub import AudioSegment\nfrom pydub.playback import play\n\nsong = AudioSegment.from_mp3(\"C:\\\\Users\\\\ryanc\\\\Music\\\\rr.mp3\")\nplay(song)\n\nNow in order to work for this you need to have the ffmpeg installed. If not it will gain throw an error. Download ffmpeg and paste the code to your script directory.\nHere is the link to make you better understand the process.\n", "If saving the mp3 to a file isn't a pain for you, then using the audio playback extension is the easiest option.\nhttps://marketplace.visualstudio.com/items?itemName=sukumo28.wav-preview\n", "Try this link..\nhttps://github.com/jiaaro/pydub/issues/209\nAdding f.close() line in playback.py to close the stream works magic.\ndef _play_with_ffplay(seg):\nwith NamedTemporaryFile(\"w+b\", suffix=\".wav\") as f:\n f.close() # close the file stream\n seg.export(f.name, \"wav\")\n subprocess.call([PLAYER, \"-nodisp\", \"-autoexit\", \"-hide_banner\", f.name])\n\n", "The solution is to install pyAudio from python package\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "pydub", "python" ]
stackoverflow_0069323707_pydub_python.txt
Q: Why doesn't the abort function in Flask take the handlers? I am developing a REST API with python and flask, I leave the project here Github project I added error handlers to the application but when I run an abort function, it gives me a default message from Flask, not the structure I am defining. I will leave the path to the handlers and where I run the abort from. Handlers abort(400) Flask message A: Ok, the solution was told to me that it could be in another question. What to do is to overwrite the handler function of the Flask Api object. With that, you can configure the format with which each query will be answered, even the ones that contain an error. def response_structure(code_status: int, response=None, message=None): if code_status == 200 or code_status == 201: status = 'Success' else: status = 'Error' args = dict() args['status'] = status if message is not None: args['message'] = message if response is not None: args['response'] = response return args, code_status class ExtendAPI(Api): def handle_error(self, e): return response_structure(e.code, str(e)) Once the function is overwritten, you must use this new one to create users_bp = Blueprint('users', __name__) api = ExtendAPI(users_bp) With this, we can then use the flask functions to respond with the structure that we define. if request.args.get('name') is None: abort(400) Response JSON { "response": "400 Bad Request: The browser (or proxy) sent a request that this server could not understand.", "status": "Error" }
Why doesn't the abort function in Flask take the handlers?
I am developing a REST API with Python and Flask; I leave the project here: Github project. I added error handlers to the application, but when I run an abort function, it gives me a default message from Flask, not the structure I am defining. I will leave the path to the handlers and where I run the abort from. (Screenshots: Handlers, abort(400), Flask message.)
[ "Ok, the solution was told to me that it could be in another question.\nWhat to do is to overwrite the handler function of the Flask Api object.\nWith that, you can configure the format with which each query will be answered, even the ones that contain an error.\ndef response_structure(code_status: int, response=None, message=None):\n if code_status == 200 or code_status == 201:\n status = 'Success'\n else:\n status = 'Error'\n\n args = dict()\n args['status'] = status\n if message is not None:\n args['message'] = message\n\n if response is not None:\n args['response'] = response\n\n return args, code_status\n\n\nclass ExtendAPI(Api):\n\n def handle_error(self, e):\n return response_structure(e.code, str(e))\n\nOnce the function is overwritten, you must use this new one to create\nusers_bp = Blueprint('users', __name__)\napi = ExtendAPI(users_bp)\n\nWith this, we can then use the flask functions to respond with the structure that we define.\nif request.args.get('name') is None:\n abort(400)\n\nResponse JSON\n{\n \"response\": \"400 Bad Request: The browser (or proxy) sent a request that this server could not understand.\",\n \"status\": \"Error\"\n}\n\n" ]
[ 0 ]
[]
[]
[ "flask", "python" ]
stackoverflow_0074650833_flask_python.txt
Q: C# and Python result difference - basic Math So I have tried the same math in c# and python but got 2 different answer. can someone please explain why is this happening. def test(): l = 50 r = 3 gg= l + (r - l) / 2 mid = l + (r - l) // 2 print(mid) print(gg) public void test() { var l = 50; var r = 3; var gg = l + (r - l) / 2; double x = l + (r - l) / 2; var mid = Math.Floor(x); Console.WriteLine(mid); Console.WriteLine(gg); } A: In C#, the / operator performs integer division (ignores the fractional part) when both values are type int. For example, 3 / 2 = 1, since the fractional part (0.5) is dropped. As a result, in your equation, the operation (r - l) / 2 is evaluating to -23, since (3 - 50) / 2 = -47 / 2 = -23 (again, the fractional part is dropped). Then, 50 + (-23) = 27. However, Python does not do this. By default, all division, whether between integers or doubles, is "normal" division - the fractional part is kept. Because of that, the result is the same as you'd get on a calculator: 50 + (3 - 50) / 2 = 26.5 If you want C# to calculate this the same way as Python, the easiest way is to make one of the numbers a double. Adding .0 to the end of the divisor should do the trick: // changed '2' to '2.0' var gg = l + (r - l) / 2.0; double x = l + (r - l) / 2.0; 26 26.5
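One subtlety worth noting: C#'s integer / truncates toward zero, while Python's // floors toward negative infinity, so the two can still differ by one when the intermediate value is negative:
print(-47 // 2)   # -24 in Python (floor division rounds down)
# In C#: -47 / 2 == -23 (integer division truncates toward zero)
# So the question's mid is 50 + (-24) == 26 in Python,
# while the all-integer C# expression gives 50 + (-23) == 27.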
C# and Python result difference - basic Math
So I have tried the same math in C# and Python but got 2 different answers. Can someone please explain why this is happening? def test(): l = 50 r = 3 gg = l + (r - l) / 2 mid = l + (r - l) // 2 print(mid) print(gg) public void test() { var l = 50; var r = 3; var gg = l + (r - l) / 2; double x = l + (r - l) / 2; var mid = Math.Floor(x); Console.WriteLine(mid); Console.WriteLine(gg); }
[ "In C#, the / operator performs integer division (ignores the fractional part) when both values are type int. For example, 3 / 2 = 1, since the fractional part (0.5) is dropped.\nAs a result, in your equation, the operation (r - l) / 2 is evaluating to -23, since (3 - 50) / 2 = -47 / 2 = -23 (again, the fractional part is dropped). Then, 50 + (-23) = 27.\nHowever, Python does not do this. By default, all division, whether between integers or doubles, is \"normal\" division - the fractional part is kept. Because of that, the result is the same as you'd get on a calculator: 50 + (3 - 50) / 2 = 26.5\nIf you want C# to calculate this the same way as Python, the easiest way is to make one of the numbers a double. Adding .0 to the end of the divisor should do the trick:\n// changed '2' to '2.0'\nvar gg = l + (r - l) / 2.0;\ndouble x = l + (r - l) / 2.0;\n\n\n26\n26.5\n\n" ]
[ 0 ]
[]
[]
[ "c#", "floor_division", "math", "python" ]
stackoverflow_0074501942_c#_floor_division_math_python.txt
Q: Insert rows in Python dataframe with conditions I have a large data file as shown below. I wanted to add two new columns (E and F) next to column D and move the suite # when applicable and City/State data in cell D3 and D4 to E2 and F2, respectively. The challenge is not every entry has the suite number. I would need to insert a row first for those entries that don't have the suite number, only for them, not for those that already have the suite information. I know how to do loops, but am having trouble to define the conditions. One way is to count the length of the string. How should I get started? Much appreciate your help! A: This is how I would do it. I don't recommend looping when using pandas. There are a lot of tools that it is often not needed. Some caution on this. Your spreadsheet has NaN and I think that is actually numpy np.nan equivalent. You also have blanks I am thinking that it is a "" equivalent. # dictionary of your data companies = { 'Comp ID': ['C1', '', np.nan, 'C2', '', np.nan, 'C3',np.nan], 'Address': ['10 foo', 'Suite A','foo city', '11 spam','STE 100','spam town', '12 ham', 'Myhammy'], 'phone': ['888-321-4567', '', np.nan, '888-321-4567', '', np.nan, '888-321-4567',np.nan], 'Type': ['W_sale', '', np.nan, 'W_sale', '', np.nan, 'W_sale',np.nan], } # make the frames needed. df = pd.DataFrame( companies) df1 = pd.DataFrame() # blank frame for suite and town columns # Need a where clause it is similar to a if() statement in excel df1['Suite'] = np.where( df['Comp ID']=='', df['Address'], np.nan) df1['City/State'] = np.where( df['Comp ID'].isna(), df['Address'], np.nan) # copy values to rows above df1 = df1[['Suite','City/State']].backfill() # joint the frames together on index df = df.join(df1) df.drop_duplicates(subset=['City/State'], keep='first', inplace=True) # set the column order to what you want df = df[['Comp ID', 'Type', 'Address', 'Suite', 'City/State', 'phone' ]] output Comp ID Type Address Suite City/State phone C1 W_sale 10 foo Suite A foo city 888-321-4567 C2 W_sale 11 spam STE 100 spam town 888-321-4567 C3 W_sale 12 ham Myhammy 888-321-4567
Insert rows in Python dataframe with conditions
I have a large data file as shown below. I want to add two new columns (E and F) next to column D and move the suite # (when applicable) and the City/State data in cells D3 and D4 to E2 and F2, respectively. The challenge is that not every entry has the suite number. I would need to insert a row first for those entries that don't have the suite number, only for them, not for those that already have the suite information. I know how to do loops, but am having trouble defining the conditions. One way is to count the length of the string. How should I get started? I'd much appreciate your help!
[ "This is how I would do it. I don't recommend looping when using pandas. There are a lot of tools that it is often not needed. Some caution on this. Your spreadsheet has NaN and I think that is actually numpy np.nan equivalent. You also have blanks I am thinking that it is a \"\" equivalent.\n# dictionary of your data\ncompanies = {\n 'Comp ID': ['C1', '', np.nan, 'C2', '', np.nan, 'C3',np.nan],\n 'Address': ['10 foo', 'Suite A','foo city', '11 spam','STE 100','spam town', '12 ham', 'Myhammy'],\n 'phone': ['888-321-4567', '', np.nan, '888-321-4567', '', np.nan, '888-321-4567',np.nan],\n 'Type': ['W_sale', '', np.nan, 'W_sale', '', np.nan, 'W_sale',np.nan],\n}\n# make the frames needed. \ndf = pd.DataFrame( companies)\ndf1 = pd.DataFrame() # blank frame for suite and town columns\n\n# Need a where clause it is similar to a if() statement in excel\ndf1['Suite'] = np.where( df['Comp ID']=='', df['Address'], np.nan)\ndf1['City/State'] = np.where( df['Comp ID'].isna(), df['Address'], np.nan)\n# copy values to rows above\ndf1 = df1[['Suite','City/State']].backfill()\n# joint the frames together on index\ndf = df.join(df1)\ndf.drop_duplicates(subset=['City/State'], keep='first', inplace=True)\n# set the column order to what you want\ndf = df[['Comp ID', 'Type', 'Address', 'Suite', 'City/State', 'phone' ]]\n\noutput\n\n\n\n\nComp ID\nType\nAddress\nSuite\nCity/State\nphone\n\n\n\n\nC1\nW_sale\n10 foo\nSuite A\nfoo city\n888-321-4567\n\n\nC2\nW_sale\n11 spam\nSTE 100\nspam town\n888-321-4567\n\n\nC3\nW_sale\n12 ham\n\nMyhammy\n888-321-4567\n\n\n\n" ]
[ 0 ]
[]
[]
[ "conditional_statements", "dataframe", "insert", "pandas", "python" ]
stackoverflow_0074661308_conditional_statements_dataframe_insert_pandas_python.txt
Q: Toggle Boolean value based on a triple state filter I'm having a brain melting time with this. For some reason I thought it would be easier, but I'm struggling with this. I have an application that a user can config before running based on desired parameters the user wants to test. There are 3 filters that the user can either turn on, turn off, or toggle. If the user wants a filter on, he will set the filter in the configuration file to True. If the user wants it off, he sets it to False. If however, the user wishes to run the test with the filter on and than again off, he can set the configuration file to toggle here are examples of filter1, filter2, and filter3 stored in a list. toggle_state = ["toggle", "toggle", False] toggle_state = ["toggle", True, "toggle"] toggle_state = [False, "toggle", True] toggle_state = [True, False, False] ... Any combination should be available for testing purposes. I have implemented nested while loops to accomplish what I'm attempting to do. However, I have had no real success. I have been able to make it work, with just toggle for all three filters. I stripped out the functions related to my in a simple MUC script below. #####CODE BLOCK 1###### import time def toggle_filters(): toggle_state = ["toggle", "toggle", "toggle"] # toggle_state = ["toggle", "toggle", False] # toggle_state = ["toggle", "toggle", True] # toggle_state = ["toggle", False, "toggle"] # toggle_state = ["toggle", True, "toggle"] # toggle_state = [False, "toggle", "toggle"] # toggle_state = [True, "toggle", "toggle"] filter_state = init_filters(toggle_state) idx = 2 complete = 2 terminate = False while True: print(f"\t{filter_state[0]:<5}{filter_state[1]:<5}{filter_state[2]:<5}") ### do something here with the filters ### while True: if toggle_state[idx] == "toggle" and not filter_state[idx]: filter_state[idx] = True break elif complete < -1: terminate = True break elif toggle_state[idx] == "toggle" and idx == len(toggle_state) - 1: filter_state[idx] = False if complete != 0: filter_state[complete] = False complete -= 1 if complete < 0: idx = 1 else: idx = complete continue elif toggle_state[idx] == "toggle" and idx != len(toggle_state) - 1: if complete == 0 and idx == 0: idx += 1 idx += 1 if terminate: break def init_filters(toggle_state): """...""" filters = [] for idx in toggle_state: if idx == "toggle": filters.append(False) else: filters.append(idx) return filters if __name__ == "__main__": toggle_filters() however, when I've attempted to add in static values for the filters, it all goes to hell. I updated the toggle_filter() function to start looking for filters that are not set to toggle. 
####CODE BLOCK 2#### import time def toggle_filters(): # toggle_state = ["toggle", "toggle", "toggle"] toggle_state = ["toggle", "toggle", False] # toggle_state = ["toggle", "toggle", True] # toggle_state = ["toggle", False, "toggle"] # toggle_state = ["toggle", True, "toggle"] # toggle_state = [False, "toggle", "toggle"] # toggle_state = [True, "toggle", "toggle"] filter_state = init_filters(toggle_state) idx = 2 complete = 2 terminate = False while True: print(f"\t{filter_state[0]:<5}{filter_state[1]:<5}{filter_state[2]:<5}") ### do something here with the filters ### while True: if toggle_state[idx] == "toggle" and not filter_state[idx]: filter_state[idx] = True break elif complete < -1: terminate = True break elif toggle_state[idx] == "toggle" and idx == len(toggle_state) - 1: filter_state[idx] = False if complete != 0: filter_state[complete] = False complete -= 1 if complete < 0: idx = 1 else: idx = complete continue elif toggle_state[idx] == "toggle" and idx != len(toggle_state) - 1: if complete == 0 and idx == 0: idx += 1 idx += 1 elif toggle_state[idx] != "toggle" and idx == len(toggle_state) - 1: if complete != 0: pass complete -= 1 if complete < 0: idx = 1 else: idx = complete continue elif toggle_state[idx] != "toggle" and idx != len(toggle_state) - 1: if complete == 2 and idx == 2: complete = 1 idx = complete if complete == 1 and idx == 1: complete = 0 idx = complete else: idx -= 1 if terminate: break def init_filters(toggle_state): """...""" filters = [] for idx in toggle_state: if idx == "toggle": filters.append(False) else: filters.append(idx) return filters if __name__ == "__main__": toggle_filters() Which fails each time, and honestly I imagine I'm approaching this from the wrong direction, just based on the shear number of conditions I have to set. Does anyone have any suggestions as to what I should be looking at? UPDATE: if you take the first block of code, it will run as is. The output will look like a truth table. 0 0 0 0 0 1 0 1 0 0 1 1 1 0 0 1 0 1 1 1 0 1 1 1 This is when you set the filters to all toggle. I've updated the second code block as a complete MUC. here the output looks like this 0 0 0 0 1 0 1 1 0 however it should look like this 0 0 0 0 1 0 1 0 0 1 1 0 depending on which filter you set static, the ouputs are not correct. A: This gives the same output with less complication. itertools.product is a function that gives you all the combinations of each state listed. A TOGGLE filter can be zero or one, while a FALSE or TRUE state only provides a zero or one state, respectively. Does this manage the states you want? import itertools TOGGLE = [0,1] FALSE = [0] TRUE = [1] def toggle_filters(toggle_state): for state in itertools.product(*toggle_state): print(*state) toggle_filters([TOGGLE, TOGGLE, TOGGLE]) print() toggle_filters([TOGGLE, TOGGLE, FALSE]) Output: 0 0 0 0 0 1 0 1 0 0 1 1 1 0 0 1 0 1 1 1 0 1 1 1 0 0 0 0 1 0 1 0 0 1 1 0
Toggle Boolean value based on a triple state filter
I'm having a brain melting time with this. For some reason I thought it would be easier, but I'm struggling with this. I have an application that a user can config before running based on desired parameters the user wants to test. There are 3 filters that the user can either turn on, turn off, or toggle. If the user wants a filter on, he will set the filter in the configuration file to True. If the user wants it off, he sets it to False. If however, the user wishes to run the test with the filter on and than again off, he can set the configuration file to toggle here are examples of filter1, filter2, and filter3 stored in a list. toggle_state = ["toggle", "toggle", False] toggle_state = ["toggle", True, "toggle"] toggle_state = [False, "toggle", True] toggle_state = [True, False, False] ... Any combination should be available for testing purposes. I have implemented nested while loops to accomplish what I'm attempting to do. However, I have had no real success. I have been able to make it work, with just toggle for all three filters. I stripped out the functions related to my in a simple MUC script below. #####CODE BLOCK 1###### import time def toggle_filters(): toggle_state = ["toggle", "toggle", "toggle"] # toggle_state = ["toggle", "toggle", False] # toggle_state = ["toggle", "toggle", True] # toggle_state = ["toggle", False, "toggle"] # toggle_state = ["toggle", True, "toggle"] # toggle_state = [False, "toggle", "toggle"] # toggle_state = [True, "toggle", "toggle"] filter_state = init_filters(toggle_state) idx = 2 complete = 2 terminate = False while True: print(f"\t{filter_state[0]:<5}{filter_state[1]:<5}{filter_state[2]:<5}") ### do something here with the filters ### while True: if toggle_state[idx] == "toggle" and not filter_state[idx]: filter_state[idx] = True break elif complete < -1: terminate = True break elif toggle_state[idx] == "toggle" and idx == len(toggle_state) - 1: filter_state[idx] = False if complete != 0: filter_state[complete] = False complete -= 1 if complete < 0: idx = 1 else: idx = complete continue elif toggle_state[idx] == "toggle" and idx != len(toggle_state) - 1: if complete == 0 and idx == 0: idx += 1 idx += 1 if terminate: break def init_filters(toggle_state): """...""" filters = [] for idx in toggle_state: if idx == "toggle": filters.append(False) else: filters.append(idx) return filters if __name__ == "__main__": toggle_filters() however, when I've attempted to add in static values for the filters, it all goes to hell. I updated the toggle_filter() function to start looking for filters that are not set to toggle. 
####CODE BLOCK 2#### import time def toggle_filters(): # toggle_state = ["toggle", "toggle", "toggle"] toggle_state = ["toggle", "toggle", False] # toggle_state = ["toggle", "toggle", True] # toggle_state = ["toggle", False, "toggle"] # toggle_state = ["toggle", True, "toggle"] # toggle_state = [False, "toggle", "toggle"] # toggle_state = [True, "toggle", "toggle"] filter_state = init_filters(toggle_state) idx = 2 complete = 2 terminate = False while True: print(f"\t{filter_state[0]:<5}{filter_state[1]:<5}{filter_state[2]:<5}") ### do something here with the filters ### while True: if toggle_state[idx] == "toggle" and not filter_state[idx]: filter_state[idx] = True break elif complete < -1: terminate = True break elif toggle_state[idx] == "toggle" and idx == len(toggle_state) - 1: filter_state[idx] = False if complete != 0: filter_state[complete] = False complete -= 1 if complete < 0: idx = 1 else: idx = complete continue elif toggle_state[idx] == "toggle" and idx != len(toggle_state) - 1: if complete == 0 and idx == 0: idx += 1 idx += 1 elif toggle_state[idx] != "toggle" and idx == len(toggle_state) - 1: if complete != 0: pass complete -= 1 if complete < 0: idx = 1 else: idx = complete continue elif toggle_state[idx] != "toggle" and idx != len(toggle_state) - 1: if complete == 2 and idx == 2: complete = 1 idx = complete if complete == 1 and idx == 1: complete = 0 idx = complete else: idx -= 1 if terminate: break def init_filters(toggle_state): """...""" filters = [] for idx in toggle_state: if idx == "toggle": filters.append(False) else: filters.append(idx) return filters if __name__ == "__main__": toggle_filters() Which fails each time, and honestly I imagine I'm approaching this from the wrong direction, just based on the shear number of conditions I have to set. Does anyone have any suggestions as to what I should be looking at? UPDATE: if you take the first block of code, it will run as is. The output will look like a truth table. 0 0 0 0 0 1 0 1 0 0 1 1 1 0 0 1 0 1 1 1 0 1 1 1 This is when you set the filters to all toggle. I've updated the second code block as a complete MUC. here the output looks like this 0 0 0 0 1 0 1 1 0 however it should look like this 0 0 0 0 1 0 1 0 0 1 1 0 depending on which filter you set static, the ouputs are not correct.
[ "This gives the same output with less complication. itertools.product is a function that gives you all the combinations of each state listed. A TOGGLE filter can be zero or one, while a FALSE or TRUE state only provides a zero or one state, respectively.\nDoes this manage the states you want?\nimport itertools\n\nTOGGLE = [0,1]\nFALSE = [0]\nTRUE = [1]\n\ndef toggle_filters(toggle_state):\n for state in itertools.product(*toggle_state):\n print(*state)\n\ntoggle_filters([TOGGLE, TOGGLE, TOGGLE])\nprint()\ntoggle_filters([TOGGLE, TOGGLE, FALSE])\n\nOutput:\n0 0 0\n0 0 1\n0 1 0\n0 1 1\n1 0 0\n1 0 1\n1 1 0\n1 1 1\n\n0 0 0\n0 1 0\n1 0 0\n1 1 0\n\n" ]
[ 2 ]
[]
[]
[ "python" ]
stackoverflow_0074661476_python.txt
Q: Remove Columns with missing values above a threshold pandas I am doing data preprocessing and want to remove features/columns which have more than say 10% missing values. I have made the below code: df_missing=df.isna() result=df_missing.sum()/len(df) result Default 0.010066 Income 0.142857 Age 0.109090 Name 0.047000 Gender 0.000000 Type of job 0.200000 Amt of credit 0.850090 Years employed 0.009003 dtype: float64 I want df to have columns only where there are no missing values above 10%. Expected output: df Default Name Gender Years employed (columns where there were missing values greater than 10% are removed.) I have tried result.iloc[:,0] IndexingError: Too many indexers Please help A: Because division of sum by length is mean, you can instead df_missing.sum()/len(df) use df_missing.mean(): result = df.isna().mean() Then filter by DataFrame.loc with : for all rows and columns by mask: df = df.loc[:,result > .1] A: it should be df = df.loc[:,result < .1] as the user only want to keep the columns that have less than 10% of the rows missing A: pandas has built in methods for such things: df_clean = df.dropna(axis=1, thresh=(len(df)*.1), inplace=False) Or if you don't want to create an extra dataframe object you can do it inplace: df.dropna(axis=1, thresh=(len(df)*.1), inplace=True)
Remove Columns with missing values above a threshold pandas
I am doing data preprocessing and want to remove features/columns which have more than say 10% missing values. I have made the below code: df_missing=df.isna() result=df_missing.sum()/len(df) result Default 0.010066 Income 0.142857 Age 0.109090 Name 0.047000 Gender 0.000000 Type of job 0.200000 Amt of credit 0.850090 Years employed 0.009003 dtype: float64 I want df to have columns only where there are no missing values above 10%. Expected output: df Default Name Gender Years employed (columns where there were missing values greater than 10% are removed.) I have tried result.iloc[:,0] IndexingError: Too many indexers Please help
[ "Because division of sum by length is mean, you can instead df_missing.sum()/len(df) use df_missing.mean():\nresult = df.isna().mean()\n\nThen filter by DataFrame.loc with : for all rows and columns by mask:\ndf = df.loc[:,result > .1]\n\n", "it should be df = df.loc[:,result < .1] as the user only want to keep the columns that have less than 10% of the rows missing\n", "pandas has built in methods for such things:\ndf_clean = df.dropna(axis=1, thresh=(len(df)*.1), inplace=False)\nOr if you don't want to create an extra dataframe object you can do it inplace:\ndf.dropna(axis=1, thresh=(len(df)*.1), inplace=True)\n" ]
[ 4, 1, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0060450808_pandas_python.txt
Q: How to generate a list with every monday, between two dates, and exclude that some a specific list using pandas I want to generate a dataframe with pandas where one of the columns is filled with all mondays between to dates. But I need to exclude some mondays that are in a specific list. I could generate the column with the mondays, but I could find how to remove that mondays in the given list. I generate the mondays using: import pandas as pd st=pd.to_datetime('8/22/2022') ed=pd.to_datetime('12/22/2022') a1=pd.date_range(start=st,end=ed, freq='W-MON') But I would like to exclude the mondays that are in this list fer=pd.to_datetime(['09/07/2022','10/12/2022','10/15/2022','10/28/2022','11/01/2022','11/14/2022','11/15/2022','11/20/2022']) I was not able to find the solution online. A: IIUC, you can use a negative pandas.Index.isin : a1= a1[~a1.isin(fer)] # Output : print(a1) ​ DatetimeIndex(['2022-08-22', '2022-08-29', '2022-09-05', '2022-09-12', '2022-09-19', '2022-09-26', '2022-10-03', '2022-10-10', '2022-10-17', '2022-10-24', '2022-10-31', '2022-11-07', '2022-11-21', '2022-11-28', '2022-12-05', '2022-12-12', '2022-12-19'], dtype='datetime64[ns]', freq=None) A: Here's a code snippet that hopefully answers your question. The code snippet uses the pandas package to remove a list of blacklisted dates from a list of dates. It does this by first generating a list of every Monday between a specified start and end date using the generate_mondays function. It then defines a list of blacklisted Mondays and converts both lists to pandas Series objects. Next, the code uses the Series.isin() method to create a Boolean mask indicating which dates in the mondays Series are not in the blacklisted_mondays Series. This mask is then used to filter the mondays Series, and the resulting Series is converted to a list using the tolist() method. The resulting list of non-blacklisted Mondays can be accessed by calling the non_blacklisted_mondays variable, which is the final line of the code snippet. This variable contains a list of all the Mondays between the start and end dates, with the blacklisted Mondays removed. 
# Import the date and timedelta classes from the datetime module from datetime import date, timedelta # Import the pandas package import pandas as pd # Function to generate a list of every Monday between two dates def generate_mondays(start_date, end_date): # Create a variable to hold the list of Mondays mondays = [] # Create a variable to hold the current date, starting with the start date current_date = start_date # Calculate the number of days between the start date and the first Monday # We use (7 - start_date.weekday()) % 7 to find the number of days to the # next Monday, and then subtract one to get the number of days to the first # Monday days_to_first_monday = (7 - start_date.weekday()) % 7 - 1 # Add the number of days to the first Monday to the current date to move to # the first Monday current_date += timedelta(days=days_to_first_monday) # Loop until we reach the end date while current_date <= end_date: # Append the current date to the list of Mondays mondays.append(current_date) # Move to the next Monday by adding 7 days current_date += timedelta(days=7) # Return the list of Mondays return mondays # Set the start and end dates start_date = date(2022, 1, 1) end_date = date(2022, 12, 31) # Generate a list of every Monday between the start and end dates mondays = generate_mondays(start_date, end_date) # Define a list of blacklisted Mondays blacklisted_mondays = [ date(2022, 1, 10), date(2022, 2, 14), date(2022, 3, 21), ] # Convert the list of mondays and the list of blacklisted mondays to pandas # Series objects mondays_series = pd.Series(mondays) blacklisted_mondays_series = pd.Series(blacklisted_mondays) # Use the pandas Series.isin() method to create a Boolean mask indicating # which dates in the mondays Series are not in the blacklisted_mondays Series mask = ~mondays_series.isin(blacklisted_mondays_series) # Use the mask to filter the mondays Series and convert the resulting Series # to a list non_blacklisted_mondays = mondays_series[mask].tolist() # Print the resulting list of non-blacklisted Mondays print(non_blacklisted_mondays)
How to generate a list with every monday between two dates, and exclude those in a specific list, using pandas
I want to generate a dataframe with pandas where one of the columns is filled with all Mondays between two dates. But I need to exclude some Mondays that are in a specific list. I could generate the column with the Mondays, but I couldn't find how to remove the Mondays that are in the given list.
I generate the Mondays using:
import pandas as pd

st=pd.to_datetime('8/22/2022')
ed=pd.to_datetime('12/22/2022')

a1=pd.date_range(start=st,end=ed, freq='W-MON')

But I would like to exclude the Mondays that are in this list
fer=pd.to_datetime(['09/07/2022','10/12/2022','10/15/2022','10/28/2022','11/01/2022','11/14/2022','11/15/2022','11/20/2022'])

I was not able to find the solution online.
[ "IIUC, you can use a negative pandas.Index.isin :\na1= a1[~a1.isin(fer)]\n\n# Output :\nprint(a1)\n​\nDatetimeIndex(['2022-08-22', '2022-08-29', '2022-09-05', '2022-09-12',\n '2022-09-19', '2022-09-26', '2022-10-03', '2022-10-10',\n '2022-10-17', '2022-10-24', '2022-10-31', '2022-11-07',\n '2022-11-21', '2022-11-28', '2022-12-05', '2022-12-12',\n '2022-12-19'],\n dtype='datetime64[ns]', freq=None)\n\n", "Here's a code snippet that hopefully answers your question. The code snippet uses the pandas package to remove a list of blacklisted dates from a list of dates. It does this by first generating a list of every Monday between a specified start and end date using the generate_mondays function. It then defines a list of blacklisted Mondays and converts both lists to pandas Series objects.\nNext, the code uses the Series.isin() method to create a Boolean mask indicating which dates in the mondays Series are not in the blacklisted_mondays Series. This mask is then used to filter the mondays Series, and the resulting Series is converted to a list using the tolist() method.\nThe resulting list of non-blacklisted Mondays can be accessed by calling the non_blacklisted_mondays variable, which is the final line of the code snippet. This variable contains a list of all the Mondays between the start and end dates, with the blacklisted Mondays removed.\n# Import the date and timedelta classes from the datetime module\nfrom datetime import date, timedelta\n\n# Import the pandas package\nimport pandas as pd\n\n# Function to generate a list of every Monday between two dates\ndef generate_mondays(start_date, end_date):\n # Create a variable to hold the list of Mondays\n mondays = []\n\n # Create a variable to hold the current date, starting with the start date\n current_date = start_date\n\n # Calculate the number of days between the start date and the first Monday\n # We use (7 - start_date.weekday()) % 7 to find the number of days to the\n # next Monday, and then subtract one to get the number of days to the first\n # Monday\n days_to_first_monday = (7 - start_date.weekday()) % 7 - 1\n\n # Add the number of days to the first Monday to the current date to move to\n # the first Monday\n current_date += timedelta(days=days_to_first_monday)\n\n # Loop until we reach the end date\n while current_date <= end_date:\n # Append the current date to the list of Mondays\n mondays.append(current_date)\n\n # Move to the next Monday by adding 7 days\n current_date += timedelta(days=7)\n\n # Return the list of Mondays\n return mondays\n\n# Set the start and end dates\nstart_date = date(2022, 1, 1)\nend_date = date(2022, 12, 31)\n\n# Generate a list of every Monday between the start and end dates\nmondays = generate_mondays(start_date, end_date)\n\n# Define a list of blacklisted Mondays\nblacklisted_mondays = [\n date(2022, 1, 10),\n date(2022, 2, 14),\n date(2022, 3, 21),\n]\n\n# Convert the list of mondays and the list of blacklisted mondays to pandas\n# Series objects\nmondays_series = pd.Series(mondays)\nblacklisted_mondays_series = pd.Series(blacklisted_mondays)\n\n# Use the pandas Series.isin() method to create a Boolean mask indicating\n# which dates in the mondays Series are not in the blacklisted_mondays Series\nmask = ~mondays_series.isin(blacklisted_mondays_series)\n\n# Use the mask to filter the mondays Series and convert the resulting Series\n# to a list\nnon_blacklisted_mondays = mondays_series[mask].tolist()\n\n# Print the resulting list of non-blacklisted Mondays\nprint(non_blacklisted_mondays)\n\n" ]
[ 1, 1 ]
[]
[]
[ "datetime", "pandas", "python" ]
stackoverflow_0074661985_datetime_pandas_python.txt
Q: flask template not rendering as expected Expected output is 'not detected' but I get 'no error' on get and post. Why? index.html {% if error %} <p>{{ error }}</p> {% else %} <p>no error</p> {% endif %} main.py @app.route('/', methods=['GET', 'POST']) def index(): if request.method == 'GET': print('get') return render_template('index.html') elif request.method == 'POST': print('post') post_data = request.get_json(force=True) if post_data['message'] == False: print('false') print('not detected') return render_template('index.html', error='not detected') edit: not sure if this is what's causing the errors. script.js window.onload = (event) => { if (!window.ethereum) { console.log('error - not detected'); fetch(`${window.origin}/`, { method: 'POST', headers: {'content-type': 'application/json'}, body: JSON.stringify({ 'message': false }) }); } else { console.log('detected'); } }; A: The problem is with your js code. As I can see, you make fetch call providing parameters and not providing callback for response processing. It should be: fetch(`${window.origin}/`, { method: 'POST', headers: {'content-type': 'application/json'}, body: JSON.stringify({ 'message': false }) }) .then((response) => response.json()) .then((data) => { console.log('Success:', data); }) .catch((error) => { console.error('Error:', error); }); And in backend it's better to return json response for api calls instead of html. To do it you can change your code: from flask import jsonify ... if post_data['message'] == False: print('false') print('not detected') return jsonify(error='not detected') Ref: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#uploading_json_data A: There's something else wrong. I copied your code exactly and ran curl.exe --header "Content-Type: application/json" -d '{\"message\":false}' http://localhost:5000 and got <p>not detected</p>
flask template not rendering as expected
Expected output is 'not detected' but I get 'no error' on get and post. Why? index.html {% if error %} <p>{{ error }}</p> {% else %} <p>no error</p> {% endif %} main.py @app.route('/', methods=['GET', 'POST']) def index(): if request.method == 'GET': print('get') return render_template('index.html') elif request.method == 'POST': print('post') post_data = request.get_json(force=True) if post_data['message'] == False: print('false') print('not detected') return render_template('index.html', error='not detected') edit: not sure if this is what's causing the errors. script.js window.onload = (event) => { if (!window.ethereum) { console.log('error - not detected'); fetch(`${window.origin}/`, { method: 'POST', headers: {'content-type': 'application/json'}, body: JSON.stringify({ 'message': false }) }); } else { console.log('detected'); } };
[ "The problem is with your js code. As I can see, you make fetch call providing parameters and not providing callback for response processing.\nIt should be:\nfetch(`${window.origin}/`, {\n method: 'POST',\n headers: {'content-type': 'application/json'},\n body: JSON.stringify({\n 'message': false\n })\n})\n.then((response) => response.json())\n.then((data) => {\n console.log('Success:', data);\n})\n.catch((error) => {\n console.error('Error:', error);\n});\n\nAnd in backend it's better to return json response for api calls instead of html. To do it you can change your code:\nfrom flask import jsonify\n\n...\n if post_data['message'] == False:\n print('false')\n print('not detected')\n return jsonify(error='not detected')\n\nRef: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#uploading_json_data\n", "There's something else wrong. I copied your code exactly and ran\ncurl.exe --header \"Content-Type: application/json\" -d '{\\\"message\\\":false}' http://localhost:5000\n\nand got\n <p>not detected</p>\n\n" ]
[ 1, 0 ]
[]
[]
[ "flask", "post", "python" ]
stackoverflow_0074651348_flask_post_python.txt
Q: Replace value at i in Dictionary I am trying to loop through a dictionary and if it meets a requirement, the requirements being distinction >=70, merit>=60, pass>=50, and fail less than 50 then the value that is currently being passed through the loop will be replaced by the correct classification. For example, the first value being passed through is mark_1 which is 20 so at 20 in the dictionary I am looking to replace the 20 with "fail" module_1="Maths" module_2="English" module_3="Science" module_4="Business" module_5="PE" mark_1 =20 mark_2=30 mark_3 =40 mark_4=50 mark_5=60 module_marks = {module_1:int(mark_1), module_2: int(mark_2), module_3: int(mark_3), module_4: int(mark_4), module_5:int(mark_5)} marks= classifygrade.classify_grade(module_marks) And in my other class it defines the method to try and accomplish this. def classify_grade(module_marks): for i in module_marks.values(): if i>=70: module_marks[i].update("distinction") elif i>=60: module_marks[i].update("merit") elif i>=50: module_marks[i].update("pass") else: module_marks[i].update("fail") A: The problem is that you're trying to access a field in a dictionary by its value. You are also trying to return a new dictionary, but you are changing the original dictionary. You should create a separate dictionary and use dict.items() instead. Like this: def classifyMarks(marks): result = {} for (subject, grade) in marks.items(): if grade >= 70: result[subject] = "Distinction" elif grade >= 60: result[subject] = "Merit" elif grade >= 50: result[subject] = "Pass" else: result[subject] = "Fail" return result marks = { "Maths": 20, "English": 30, "Science": 40, "Business": 50, "PE": 60 } marks = classifyMarks(marks) print(marks)
Replace value at i in Dictionary
I am trying to loop through a dictionary and, if a value meets a requirement (distinction >= 70, merit >= 60, pass >= 50, and fail below 50), replace the value currently being passed through the loop with the correct classification.
For example, the first value being passed through is mark_1, which is 20, so I am looking to replace the 20 in the dictionary with "fail".
module_1="Maths"
module_2="English"
module_3="Science"
module_4="Business"
module_5="PE"

mark_1 =20
mark_2=30
mark_3 =40
mark_4=50
mark_5=60

module_marks = {module_1:int(mark_1), module_2: int(mark_2), module_3: int(mark_3), module_4: int(mark_4), module_5:int(mark_5)}

marks= classifygrade.classify_grade(module_marks)

And in my other class it defines the method that tries to accomplish this.
def classify_grade(module_marks):
    for i in module_marks.values():
        if i>=70:
            module_marks[i].update("distinction")
        elif i>=60:
            module_marks[i].update("merit")
        elif i>=50:
            module_marks[i].update("pass")
        else:
            module_marks[i].update("fail")
[ "The problem is that you're trying to access a field in a dictionary by its value. You are also trying to return a new dictionary, but you are changing the original dictionary. You should create a separate dictionary and use dict.items() instead. Like this:\ndef classifyMarks(marks):\n result = {}\n for (subject, grade) in marks.items():\n if grade >= 70:\n result[subject] = \"Distinction\"\n elif grade >= 60:\n result[subject] = \"Merit\"\n elif grade >= 50:\n result[subject] = \"Pass\"\n else:\n result[subject] = \"Fail\"\n return result\n\n\nmarks = {\n \"Maths\": 20,\n \"English\": 30,\n \"Science\": 40,\n \"Business\": 50,\n \"PE\": 60\n}\n\nmarks = classifyMarks(marks)\nprint(marks)\n\n" ]
[ 1 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074662028_python_python_3.x.txt
Q: Selenium cannot scrape all elements with same xpath, don't know if the page is not fully loaded? I am trying to scrape this page the title and the price, https://magnumbikes.com/collections/e-bikes?sort_by=best-selling, but only half of the products can be collected (it stops at product Metro X), not sure if it is the page is not fully loaded, Please let me know or correct me thank you! Here is my code: URL='https://magnumbikes.com/collections/e-bikes?sort_by=best-selling' #driver.maxmize_window() driver.get(URL) #There is a subscription window popping up for first time, refresh the page again! driver.refresh() time.sleep(2) driver.execute_script("window.scrollTo(0,document.body.scrollHeight);") titless=WebDriverWait(driver, 20).until(EC.presence_of_all_elements_located((By.XPATH, '//div/div/h3[@class="collection-card__title h3"]'))) prices=WebDriverWait(driver,20).until(EC.presence_of_all_elements_located((By.XPATH,'//div[@class="collection-card__prices-inner"]/span'))) for i in range(0,len(titless)): print(titless[i].text) print(prices[i].text) A: Here is a way to get that information using Requests: import requests from bs4 import BeautifulSoup as bs import pandas as pd pd.set_option('display.max_columns', None) pd.set_option('display.max_colwidth', None) headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36' } s = requests.Session() s.headers.update(headers) big_list = [] for x in range(1, 3): r = s.get(f'https://magnumbikes.com/collections/e-bikes?page={x}&sort_by=best-selling') soup = bs(r.text, 'html.parser') items = soup.select('div[class="collection-filter__item"]') for i in items: title = i.select_one('h3 a').get_text() url = 'https://magnumbikes.com' + i.select_one('h3 a').get('href') big_list.append((title, url)) df = pd.DataFrame(big_list, columns=['Bike', 'Url']) print(df.to_markdown()) Result in terminal: Bike Url 0 Nomad https://magnumbikes.com//collections/e-bikes/products/magnum-nomad 1 Pathfinder 350 https://magnumbikes.com//collections/e-bikes/products/magnum-pathfinder-350 2 Metro S https://magnumbikes.com//collections/e-bikes/products/magnum-metro-s 3 Scout https://magnumbikes.com//collections/e-bikes/products/magnum-scout 4 Payload https://magnumbikes.com//collections/e-bikes/products/magnum-payload 5 Peak T7 https://magnumbikes.com//collections/e-bikes/products/magnum-peak-t7 6 Metro 750 https://magnumbikes.com//collections/e-bikes/products/magnum-metro-750 7 Pathfinder T https://magnumbikes.com//collections/e-bikes/products/magnum-pathfinder-t 8 Ranger 1.0 https://magnumbikes.com//collections/e-bikes/products/magnum-ranger 9 Low rider 1.0 https://magnumbikes.com//collections/e-bikes/products/magnum-low-rider-2019 10 Pathfinder 500 https://magnumbikes.com//collections/e-bikes/products/magnum-pathfinder-500 11 Cruiser 1.0 https://magnumbikes.com//collections/e-bikes/products/magnum-cruiser-1-0 12 Summit 27.5" https://magnumbikes.com//collections/e-bikes/products/magnum-summit-27-5 13 Metro X https://magnumbikes.com//collections/e-bikes/products/magnum-metro-x 14 Peak T5 https://magnumbikes.com//collections/e-bikes/products/magnum-peak-t5 15 Premium 3 Low Step https://magnumbikes.com//collections/e-bikes/products/magnum-premium-3-low-step 16 Navigator X https://magnumbikes.com//collections/e-bikes/products/magnum-navigator-x 17 Cosmo X https://magnumbikes.com//collections/e-bikes/products/magnum-cosmo-x 18 Cosmo+ https://magnumbikes.com//collections/e-bikes/products/magnum-cosmo-1 
19 Cosmo S https://magnumbikes.com//collections/e-bikes/products/magnum-cosmo-s 20 Low rider 2.0 https://magnumbikes.com//collections/e-bikes/products/magnum-low-rider 21 Navigator S https://magnumbikes.com//collections/e-bikes/products/magnum-navigator-s 22 Ranger 2.0 https://magnumbikes.com//collections/e-bikes/products/ranger-2-0 23 Premium 3 High Step https://magnumbikes.com//collections/e-bikes/products/magnum-premium-3-high-step 24 Voyager https://magnumbikes.com//collections/e-bikes/products/magnum-voyager 25 Cruiser 2.0 https://magnumbikes.com//collections/e-bikes/products/magnum-cruiser 26 Cosmo https://magnumbikes.com//collections/e-bikes/products/magnum-cosmo ​ You can use Selenium if you wish - just observe the urls accessed (taken from Dev tools - Network tab) and the locators.
Selenium cannot scrape all elements with same xpath, don't know if the page is not fully loaded?
I am trying to scrape the title and the price from this page: https://magnumbikes.com/collections/e-bikes?sort_by=best-selling, but only half of the products are collected (it stops at the product Metro X). I am not sure whether the page is not fully loaded. Please let me know or correct me, thank you!
Here is my code:
URL='https://magnumbikes.com/collections/e-bikes?sort_by=best-selling'

#driver.maximize_window()
driver.get(URL)

#There is a subscription window popping up the first time, refresh the page again!
driver.refresh()
time.sleep(2)

driver.execute_script("window.scrollTo(0,document.body.scrollHeight);")

titless=WebDriverWait(driver, 20).until(EC.presence_of_all_elements_located((By.XPATH, '//div/div/h3[@class="collection-card__title h3"]')))
prices=WebDriverWait(driver,20).until(EC.presence_of_all_elements_located((By.XPATH,'//div[@class="collection-card__prices-inner"]/span')))

for i in range(0,len(titless)):
    print(titless[i].text)
    print(prices[i].text)
[ "Here is a way to get that information using Requests:\nimport requests\nfrom bs4 import BeautifulSoup as bs\nimport pandas as pd\n\npd.set_option('display.max_columns', None)\npd.set_option('display.max_colwidth', None)\n\nheaders = {\n 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36'\n}\n\ns = requests.Session()\ns.headers.update(headers)\n\nbig_list = []\n\nfor x in range(1, 3):\n r = s.get(f'https://magnumbikes.com/collections/e-bikes?page={x}&sort_by=best-selling')\n soup = bs(r.text, 'html.parser')\n items = soup.select('div[class=\"collection-filter__item\"]')\n for i in items:\n title = i.select_one('h3 a').get_text() \n url = 'https://magnumbikes.com' + i.select_one('h3 a').get('href')\n big_list.append((title, url))\ndf = pd.DataFrame(big_list, columns=['Bike', 'Url'])\nprint(df.to_markdown())\n\nResult in terminal:\n\n\n\n\n\nBike\nUrl\n\n\n\n\n0\nNomad\nhttps://magnumbikes.com//collections/e-bikes/products/magnum-nomad\n\n\n1\nPathfinder 350\nhttps://magnumbikes.com//collections/e-bikes/products/magnum-pathfinder-350\n\n\n2\nMetro S\nhttps://magnumbikes.com//collections/e-bikes/products/magnum-metro-s\n\n\n3\nScout\nhttps://magnumbikes.com//collections/e-bikes/products/magnum-scout\n\n\n4\nPayload\nhttps://magnumbikes.com//collections/e-bikes/products/magnum-payload\n\n\n5\nPeak T7\nhttps://magnumbikes.com//collections/e-bikes/products/magnum-peak-t7\n\n\n6\nMetro 750\nhttps://magnumbikes.com//collections/e-bikes/products/magnum-metro-750\n\n\n7\nPathfinder T\nhttps://magnumbikes.com//collections/e-bikes/products/magnum-pathfinder-t\n\n\n8\nRanger 1.0\nhttps://magnumbikes.com//collections/e-bikes/products/magnum-ranger\n\n\n9\nLow rider 1.0\nhttps://magnumbikes.com//collections/e-bikes/products/magnum-low-rider-2019\n\n\n10\nPathfinder 500\nhttps://magnumbikes.com//collections/e-bikes/products/magnum-pathfinder-500\n\n\n11\nCruiser 1.0\nhttps://magnumbikes.com//collections/e-bikes/products/magnum-cruiser-1-0\n\n\n12\nSummit 27.5\"\nhttps://magnumbikes.com//collections/e-bikes/products/magnum-summit-27-5\n\n\n13\nMetro X\nhttps://magnumbikes.com//collections/e-bikes/products/magnum-metro-x\n\n\n14\nPeak T5\nhttps://magnumbikes.com//collections/e-bikes/products/magnum-peak-t5\n\n\n15\nPremium 3 Low Step\nhttps://magnumbikes.com//collections/e-bikes/products/magnum-premium-3-low-step\n\n\n16\nNavigator X\nhttps://magnumbikes.com//collections/e-bikes/products/magnum-navigator-x\n\n\n17\nCosmo X\nhttps://magnumbikes.com//collections/e-bikes/products/magnum-cosmo-x\n\n\n18\nCosmo+\nhttps://magnumbikes.com//collections/e-bikes/products/magnum-cosmo-1\n\n\n19\nCosmo S\nhttps://magnumbikes.com//collections/e-bikes/products/magnum-cosmo-s\n\n\n20\nLow rider 2.0\nhttps://magnumbikes.com//collections/e-bikes/products/magnum-low-rider\n\n\n21\nNavigator S\nhttps://magnumbikes.com//collections/e-bikes/products/magnum-navigator-s\n\n\n22\nRanger 2.0\nhttps://magnumbikes.com//collections/e-bikes/products/ranger-2-0\n\n\n23\nPremium 3 High Step\nhttps://magnumbikes.com//collections/e-bikes/products/magnum-premium-3-high-step\n\n\n24\nVoyager\nhttps://magnumbikes.com//collections/e-bikes/products/magnum-voyager\n\n\n25\nCruiser 2.0\nhttps://magnumbikes.com//collections/e-bikes/products/magnum-cruiser\n\n\n26\nCosmo\nhttps://magnumbikes.com//collections/e-bikes/products/magnum-cosmo\n\n\n\n\n​\nYou can use Selenium if you wish - just observe the urls accessed (taken from Dev tools - Network tab) and the locators.\n" ]
[ 0 ]
[]
[]
[ "python", "selenium", "selenium_webdriver", "web_scraping" ]
stackoverflow_0074660079_python_selenium_selenium_webdriver_web_scraping.txt
Q: Spotify API (Obtaining Authorization Code) using Python My goal is to connect to the Spotify API using pure Python and I have been able to figure out how to obtain the authorization token given the authorization code but I am unable to get the authorization code itself. Note: I have not provided the client_id and client_secret for obvious reasons and you can assume that all libraries have been imported. Once the web browser opens, the authorization code ("code") is displayed as a query parameter in the URL but I am unsure how to go about saving it to a variable. I can't just copy and paste it to a variable as the authorization code constantly changes. My question is how exactly I would go about retrieving the code query paramter and save it to a variable? Here is what I have tried so far: # credentials client_id = "xxx..." client_secret = "xxx..." # urls redirect_uri = "http://localhost:7777/callback" auth_url = "https://accounts.spotify.com/authorize?" token_url = "https://accounts.spotify.com/api/token" # data scopes scopes = "user-read-private user-read-email" # obtains authorization code payload = { "client_id": client_id, "response_type": "code", "redirect_uri": redirect_uri, "scope": scopes } webbrowser.open(auth_url + urlencode(payload)) code = # NOT SURE HOW TO RETRIEVE CODE # obtains authorization token encoded_creds = base64.b64encode(client_id.encode() + b":" + client_secret.encode()).decode('utf-8') token_headers = { "Authorization": "Basic " + encoded_creds, "Content-Type": "application/x-www-form-urlencoded" } token_data = { "grant_type": "authorization_code", "code": code, "redirect_uri": redirect_uri } r = req.post(token_url, data=token_data, headers=token_headers) A: You'll need to extract the code from your callback URL. If authentication is successful, Spotify will make a request to your redirect_uri with the code in the query (e.g http://localhost:7777/callback?code=...). The easiest way to do that is probably spin up a Flask server (or equivalent) with a GET callback endpoint and grab the code there. Then you can exchange it for the authorization token in the same endpoint if you'd like. This example may be helpful: https://github.com/spotify/web-api-auth-examples/blob/master/authorization_code/app.js#L60
Spotify API (Obtaining Authorization Code) using Python
My goal is to connect to the Spotify API using pure Python, and I have been able to figure out how to obtain the authorization token given the authorization code, but I am unable to get the authorization code itself.
Note: I have not provided the client_id and client_secret for obvious reasons, and you can assume that all libraries have been imported.
Once the web browser opens, the authorization code ("code") is displayed as a query parameter in the URL, but I am unsure how to go about saving it to a variable. I can't just copy and paste it to a variable as the authorization code constantly changes.
My question is how exactly I would go about retrieving the code query parameter and saving it to a variable?
Here is what I have tried so far:
# credentials
client_id = "xxx..."
client_secret = "xxx..."

# urls
redirect_uri = "http://localhost:7777/callback"
auth_url = "https://accounts.spotify.com/authorize?"
token_url = "https://accounts.spotify.com/api/token"

# data scopes
scopes = "user-read-private user-read-email"

# obtains authorization code
payload = {
    "client_id": client_id,
    "response_type": "code",
    "redirect_uri": redirect_uri,
    "scope": scopes
}

webbrowser.open(auth_url + urlencode(payload))

code = # NOT SURE HOW TO RETRIEVE CODE

# obtains authorization token
encoded_creds = base64.b64encode(client_id.encode() + b":" + client_secret.encode()).decode('utf-8')

token_headers = {
    "Authorization": "Basic " + encoded_creds,
    "Content-Type": "application/x-www-form-urlencoded"
}

token_data = {
    "grant_type": "authorization_code",
    "code": code,
    "redirect_uri": redirect_uri
}

r = req.post(token_url, data=token_data, headers=token_headers)
[ "You'll need to extract the code from your callback URL. If authentication is successful, Spotify will make a request to your redirect_uri with the code in the query (e.g http://localhost:7777/callback?code=...).\nThe easiest way to do that is probably spin up a Flask server (or equivalent) with a GET callback endpoint and grab the code there. Then you can exchange it for the authorization token in the same endpoint if you'd like. This example may be helpful: https://github.com/spotify/web-api-auth-examples/blob/master/authorization_code/app.js#L60\n" ]
[ 1 ]
[]
[]
[ "authorization", "python", "spotify" ]
stackoverflow_0074661575_authorization_python_spotify.txt
Q: Python: Sum up list of objects I am working with a number of custom classes X that have __add__(self), and when added together return another class, Y. I often have iterables [of various sizes] of X, ex = [X1, X2, X3] that I would love to add together to get Y. However, sum(ex) throws an int error, because sum starts at 0 which can't be added to my class X. Can someone please help me with an easy, pythonic way to do X1 + X2 + X3 ... of an interable, so I get Y... Thanks! Ps it’s a 3rd party class, X, so I can’t change it. It does have radd though. My gut was that there was some way to do list comprehension? Like += on themselves A: You can specify the starting point for a sum by passing it as a parameter. For example, sum([1,2,3], 10) produces 16 (10 + 1 + 2 + 3), and sum([[1], [2], [3]], []) produces [1,2,3]. So if you pass an appropriate ("zero-like") X object as the second parameter to your sum, ie sum([x1, x2, x3,...], x0) you should get the results you're looking for Some example code, per request. Given the following definitions: class X: def __init__(self, val): self.val = val def __add__(self, other): return X(self.val + other.val) def __repr__(self): return "X({})".format(self.val) class Y: def __init__(self, val): self.val = val def __add__(self, other): return X(self.val + other.val) def __repr__(self): return "Y({})".format(self.val) I get the following results: >>> sum([Y(1), Y(2), Y(3)], Y(0)) X(6) >>> sum([Y(1), Y(2), Y(3)], Y(0)) X(6) >>> (note that Y returns an X object - but that the two objects' add methods are compatible, which may not be the case in the OP's situation)
Python: Sum up list of objects
I am working with a number of custom classes X that have __add__(self), and when added together return another class, Y. I often have iterables [of various sizes] of X, ex = [X1, X2, X3], that I would love to add together to get Y.
However, sum(ex) throws an int error, because sum starts at 0, which can't be added to my class X.
Can someone please help me with an easy, pythonic way to do X1 + X2 + X3 ... over an iterable, so I get Y... Thanks!
PS: it's a 3rd-party class, X, so I can't change it. It does have __radd__ though. My gut was that there was some way to do a list comprehension? Like += on themselves
[ "You can specify the starting point for a sum by passing it as a parameter. For example, sum([1,2,3], 10) produces 16 (10 + 1 + 2 + 3), and sum([[1], [2], [3]], []) produces [1,2,3].\nSo if you pass an appropriate (\"zero-like\") X object as the second parameter to your sum, ie sum([x1, x2, x3,...], x0) you should get the results you're looking for\nSome example code, per request. Given the following definitions:\nclass X: \n def __init__(self, val): \n self.val = val \n def __add__(self, other):\n return X(self.val + other.val) \n \n def __repr__(self): \n return \"X({})\".format(self.val) \n \nclass Y: \n def __init__(self, val): \n self.val = val \n def __add__(self, other):\n return X(self.val + other.val) \n \n def __repr__(self): \n return \"Y({})\".format(self.val) \n\n\nI get the following results:\n>>> sum([Y(1), Y(2), Y(3)], Y(0))\nX(6)\n>>> sum([Y(1), Y(2), Y(3)], Y(0))\nX(6)\n>>>\n\n(note that Y returns an X object - but that the two objects' add methods are compatible, which may not be the case in the OP's situation)\n" ]
[ 4 ]
[ "Assuming that you can add an X to a Y (i.e. __add__ is defined for Y and accepts an object of class X), then you can use\nreduce from functools, a generic way to apply an operation to a number of objects, either with or without a start value.\nfrom functools import reduce\n\nxes = [x1, x2, x3]\ny = reduce(lambda a,b: a+b, xes)\n\n", "What if you did something like:\nY = [ex[0]=+X for x in ex[1:]][0]\n\nI haven’t tested it yet though, on mobile\n" ]
[ -1, -1 ]
[ "python" ]
stackoverflow_0074661819_python.txt
Q: why does numpy matrix multiply computation time increase by an order of magnitude at 100x100? When computing A @ a where A is a random N by N matrix and a is a vector with N random elements using numpy the computation time jumps by an order of magnitude at N=100. Is there any particular reason for this? As a comparison the same operation using torch on the cpu has a more gradual increase Tried it with python3.10 and 3.9 and 3.7 with the same behavior Code used for generating numpy part of the plot: import numpy as np from tqdm.notebook import tqdm import pandas as pd import time import sys def sym(A): return .5 * (A + A.T) results = [] for n in tqdm(range(2, 500)): for trial_idx in range(10): A = sym(np.random.randn(n, n)) a = np.random.randn(n) t = time.time() for i in range(1000): A @ a t = time.time() - t results.append({ 'n': n, 'time': t, 'method': 'numpy', }) results = pd.DataFrame(results) from matplotlib import pyplot as plt fig, ax = plt.subplots(1, 1) ax.semilogy(results.n.unique(), results.groupby('n').time.mean(), label="numpy") ax.set_title(f'A @ a timimgs (1000 times)\nPython {sys.version.split(" ")[0]}') ax.legend() ax.set_xlabel('n') ax.set_ylabel('avg. time') Update Adding import os os.environ["MKL_NUM_THREADS"] = "1" os.environ["NUMEXPR_NUM_THREADS"] = "1" os.environ["OMP_NUM_THREADS"] = "1" before ìmport numpy gives a more expected output, see this answer for details: https://stackoverflow.com/a/74662135/5043576 A: numpy tries to use threads when multiplying matricies of size 100 or larger, and the default CBLAS implementation of threaded multiplication is ... sub optimal, as opposed to other backends like intel-MKL or ATLAS. if you force it to use only 1 thread using the answers in this post you will get a continuous line for numpy performance.
why does numpy matrix multiply computation time increase by an order of magnitude at 100x100?
When computing A @ a where A is a random N by N matrix and a is a vector with N random elements using numpy the computation time jumps by an order of magnitude at N=100. Is there any particular reason for this? As a comparison the same operation using torch on the cpu has a more gradual increase Tried it with python3.10 and 3.9 and 3.7 with the same behavior Code used for generating numpy part of the plot: import numpy as np from tqdm.notebook import tqdm import pandas as pd import time import sys def sym(A): return .5 * (A + A.T) results = [] for n in tqdm(range(2, 500)): for trial_idx in range(10): A = sym(np.random.randn(n, n)) a = np.random.randn(n) t = time.time() for i in range(1000): A @ a t = time.time() - t results.append({ 'n': n, 'time': t, 'method': 'numpy', }) results = pd.DataFrame(results) from matplotlib import pyplot as plt fig, ax = plt.subplots(1, 1) ax.semilogy(results.n.unique(), results.groupby('n').time.mean(), label="numpy") ax.set_title(f'A @ a timimgs (1000 times)\nPython {sys.version.split(" ")[0]}') ax.legend() ax.set_xlabel('n') ax.set_ylabel('avg. time') Update Adding import os os.environ["MKL_NUM_THREADS"] = "1" os.environ["NUMEXPR_NUM_THREADS"] = "1" os.environ["OMP_NUM_THREADS"] = "1" before ìmport numpy gives a more expected output, see this answer for details: https://stackoverflow.com/a/74662135/5043576
[ "numpy tries to use threads when multiplying matricies of size 100 or larger, and the default CBLAS implementation of threaded multiplication is ... sub optimal, as opposed to other backends like intel-MKL or ATLAS.\nif you force it to use only 1 thread using the answers in this post you will get a continuous line for numpy performance.\n" ]
[ 7 ]
[]
[]
[ "linear_algebra", "numerical_computing", "numpy", "python" ]
stackoverflow_0074661959_linear_algebra_numerical_computing_numpy_python.txt
Q: 'jt' is not recognized as an internal or external command Trying to change theme of Jupyter notebook but running into difficulty after successful install. I run: jt-t chesterish 'jt' is not recognized as an internal or external command, operable program or batch file. I know its related to not setting the environmental path somehow. But I have tried using SETX PATH but still didn't work and found not other solution thus far. I have before set up python so I can directly type "python" to get it on the command line but doesn't work for anything else like "jupyter". A: Even if ı have installed the jupyterthemes and upgrade it, ı have the same issue when ı write down the command (!jt -t [themename]) into one of the jupyter notebook's cells. The solution that ı have found is open up the Anaconda prompt and after installing the jupyterthemes, write the command (jt -t exampletheme) and restart the jupyter notebook. A: I had the same problem. I was using a conda environment with Python 3.7 installed. After I switched the kernel to the original standard "Python 3" kernel with Python 3.9 install, then the jt commands worked for me. Not sure if it was an environment issue or a python version issue.
'jt' is not recognized as an internal or external command
Trying to change the theme of Jupyter notebook but running into difficulty after a successful install. I run:
jt -t chesterish

'jt' is not recognized as an internal or external command, operable program or batch file.

I know it's related to not setting the environment path somehow. But I have tried using SETX PATH and it still didn't work, and I have found no other solution thus far.
I have previously set up Python so I can directly type "python" to get it on the command line, but it doesn't work for anything else like "jupyter".
[ "Even if ı have installed the jupyterthemes and upgrade it, ı have the same issue when ı write down the command (!jt -t [themename]) into one of the jupyter notebook's cells. The solution that ı have found is open up the Anaconda prompt and after installing the jupyterthemes, write the command (jt -t exampletheme) and restart the jupyter notebook.\n", "I had the same problem. I was using a conda environment with Python 3.7 installed. After I switched the kernel to the original standard \"Python 3\" kernel with Python 3.9 install, then the jt commands worked for me. Not sure if it was an environment issue or a python version issue.\n" ]
[ 1, 0 ]
[]
[]
[ "jupyter", "path", "python" ]
stackoverflow_0054411892_jupyter_path_python.txt
Q: Place a Window behind desktop icons using PyQt on Ubuntu/GNOME I'm trying to develop a simple cross-platform Wallpaper manager, but I am not able to find any method to place my PyQt Window between the current wallpaper and the desktop icons using XLib (on windows and macOS it's way easier and works perfectly). This works right on Cinnamon (with a little workround just simulating a click), but not on GNOME. Can anyone help or give me any clue? (I'm providing all this code just to provide a minimum executable piece, but the key part, I guess, is right after 'if "GNOME"...' sentence) import os import time import Xlib import ewmh import pywinctl from pynput import mouse DISP = Xlib.display.Display() SCREEN = DISP.screen() ROOT = DISP.screen().root EWMH = ewmh.EWMH(_display=DISP, root=ROOT) def sendBehind(hWnd): w = DISP.create_resource_object('window', hWnd) w.change_property(DISP.intern_atom('_NET_WM_STATE', False), Xlib.Xatom.ATOM, 32, [DISP.intern_atom('_NET_WM_STATE_BELOW', False), ], Xlib.X.PropModeReplace) w.change_property(DISP.intern_atom('_NET_WM_STATE', False), Xlib.Xatom.ATOM, 32, [DISP.intern_atom('_NET_WM_STATE_SKIP_TASKBAR', False), ], Xlib.X.PropModeAppend) w.change_property(DISP.intern_atom('_NET_WM_STATE', False), Xlib.Xatom.ATOM, 32, [DISP.intern_atom('_NET_WM_STATE_SKIP_PAGER', False), ], Xlib.X.PropModeAppend) DISP.flush() # This sends window below all others, but not behind the desktop icons w.change_property(DISP.intern_atom('_NET_WM_WINDOW_TYPE', False), Xlib.Xatom.ATOM, 32, [DISP.intern_atom('_NET_WM_WINDOW_TYPE_DESKTOP', False), ],Xlib.X.PropModeReplace) DISP.flush() if "GNOME" in os.environ.get('XDG_CURRENT_DESKTOP', ""): # This sends the window "too far behind" (below all others, including Wallpaper, like unmapped) # Trying to figure out how to raise it on top of wallpaper but behind desktop icons desktop = _xlibGetAllWindows(title="gnome-shell") if desktop: w.reparent(desktop[-1], 0, 0) DISP.flush() else: # Mint/Cinnamon: just clicking on the desktop, it raises, sending the window/wallpaper to the bottom! m = mouse.Controller() m.move(SCREEN.width_in_pixels - 1, 100) m.click(mouse.Button.left, 1) return '_NET_WM_WINDOW_TYPE_DESKTOP' in EWMH.getWmWindowType(hWnd, str=True) def bringBack(hWnd, parent): w = DISP.create_resource_object('window', hWnd) if parent: w.reparent(parent, 0, 0) DISP.flush() w.unmap() w.change_property(DISP.intern_atom('_NET_WM_WINDOW_TYPE', False), Xlib.Xatom.ATOM, 32, [DISP.intern_atom('_NET_WM_WINDOW_TYPE_NORMAL', False), ], Xlib.X.PropModeReplace) DISP.flush() w.change_property(DISP.intern_atom('_NET_WM_STATE', False), Xlib.Xatom.ATOM, 32, [DISP.intern_atom('_NET_WM_STATE_FOCUSED', False), ], Xlib.X.PropModeReplace) DISP.flush() w.map() EWMH.setActiveWindow(hWnd) EWMH.display.flush() return '_NET_WM_WINDOW_TYPE_NORMAL' in EWMH.getWmWindowType(hWnd, str=True) def _xlibGetAllWindows(parent: int = None, title: str = ""): if not parent: parent = ROOT allWindows = [parent] def findit(hwnd): query = hwnd.query_tree() for child in query.children: allWindows.append(child) findit(child) findit(parent) if not title: matches = allWindows else: matches = [] for w in allWindows: if w.get_wm_name() == title: matches.append(w) return matches hWnd = pywinctl.getActiveWindow() parent = hWnd._hWnd.query_tree().parent sendBehind(hWnd._hWnd) time.sleep(3) bringBack(hWnd._hWnd, parent) A: Eureka!!! Last Ubuntu version (22.04) seems to have brought the solution by itself. It now has a "layer" for desktop icons you can interact with. 
This also gave me the clue to find a smarter solution on Mint/Cinnamon (testing in other OS is still pending). This is the code which seems to work OK, for those with the same issue: import time import Xlib.display import ewmh import pywinctl DISP = Xlib.display.Display() SCREEN = DISP.screen() ROOT = DISP.screen().root EWMH = ewmh.EWMH(_display=DISP, root=ROOT) def sendBehind(hWnd): w = DISP.create_resource_object('window', hWnd.id) w.unmap() w.change_property(DISP.intern_atom('_NET_WM_WINDOW_TYPE', False), Xlib.Xatom.ATOM, 32, [DISP.intern_atom('_NET_WM_WINDOW_TYPE_DESKTOP', False), ], Xlib.X.PropModeReplace) DISP.flush() w.map() # This will try to raise the desktop icons layer on top of the window # Ubuntu: "@!0,0;BDHF" is the new desktop icons NG extension # Mint: "Desktop" name is language-dependent. Using its class (nemo-desktop) desktop = _xlibGetAllWindows(title="@!0,0;BDHF", klass=('nemo-desktop', 'Nemo-desktop')) for d in desktop: w = DISP.create_resource_object('window', d) w.raise_window() return '_NET_WM_WINDOW_TYPE_DESKTOP' in EWMH.getWmWindowType(hWnd, str=True) def bringBack(hWnd): w = DISP.create_resource_object('window', hWnd.id) w.unmap() w.change_property(DISP.intern_atom('_NET_WM_WINDOW_TYPE', False), Xlib.Xatom.ATOM, 32, [DISP.intern_atom('_NET_WM_WINDOW_TYPE_NORMAL', False), ], Xlib.X.PropModeReplace) DISP.flush() w.change_property(DISP.intern_atom('_NET_WM_STATE', False), Xlib.Xatom.ATOM, 32, [DISP.intern_atom('_NET_WM_STATE_FOCUSED', False), ], Xlib.X.PropModeReplace) DISP.flush() w.map() EWMH.setActiveWindow(hWnd) EWMH.display.flush() return '_NET_WM_WINDOW_TYPE_NORMAL' in EWMH.getWmWindowType(hWnd, str=True) def _xlibGetAllWindows(parent=None, title: str = "", klass=None): parent = parent or ROOT allWindows = [parent] def findit(hwnd): query = hwnd.query_tree() for child in query.children: allWindows.append(child) findit(child) findit(parent) if not title and not klass: return allWindows else: return [window for window in allWindows if ((title and window.get_wm_name() == title) or (klass and window.get_wm_class() == klass))] hWnd = pywinctl.getActiveWindow() sendBehind(hWnd._hWnd) time.sleep(3) bringBack(hWnd._hWnd)
Place a Window behind desktop icons using PyQt on Ubuntu/GNOME
I'm trying to develop a simple cross-platform Wallpaper manager, but I am not able to find any method to place my PyQt Window between the current wallpaper and the desktop icons using XLib (on windows and macOS it's way easier and works perfectly). This works right on Cinnamon (with a little workround just simulating a click), but not on GNOME. Can anyone help or give me any clue? (I'm providing all this code just to provide a minimum executable piece, but the key part, I guess, is right after 'if "GNOME"...' sentence) import os import time import Xlib import ewmh import pywinctl from pynput import mouse DISP = Xlib.display.Display() SCREEN = DISP.screen() ROOT = DISP.screen().root EWMH = ewmh.EWMH(_display=DISP, root=ROOT) def sendBehind(hWnd): w = DISP.create_resource_object('window', hWnd) w.change_property(DISP.intern_atom('_NET_WM_STATE', False), Xlib.Xatom.ATOM, 32, [DISP.intern_atom('_NET_WM_STATE_BELOW', False), ], Xlib.X.PropModeReplace) w.change_property(DISP.intern_atom('_NET_WM_STATE', False), Xlib.Xatom.ATOM, 32, [DISP.intern_atom('_NET_WM_STATE_SKIP_TASKBAR', False), ], Xlib.X.PropModeAppend) w.change_property(DISP.intern_atom('_NET_WM_STATE', False), Xlib.Xatom.ATOM, 32, [DISP.intern_atom('_NET_WM_STATE_SKIP_PAGER', False), ], Xlib.X.PropModeAppend) DISP.flush() # This sends window below all others, but not behind the desktop icons w.change_property(DISP.intern_atom('_NET_WM_WINDOW_TYPE', False), Xlib.Xatom.ATOM, 32, [DISP.intern_atom('_NET_WM_WINDOW_TYPE_DESKTOP', False), ],Xlib.X.PropModeReplace) DISP.flush() if "GNOME" in os.environ.get('XDG_CURRENT_DESKTOP', ""): # This sends the window "too far behind" (below all others, including Wallpaper, like unmapped) # Trying to figure out how to raise it on top of wallpaper but behind desktop icons desktop = _xlibGetAllWindows(title="gnome-shell") if desktop: w.reparent(desktop[-1], 0, 0) DISP.flush() else: # Mint/Cinnamon: just clicking on the desktop, it raises, sending the window/wallpaper to the bottom! m = mouse.Controller() m.move(SCREEN.width_in_pixels - 1, 100) m.click(mouse.Button.left, 1) return '_NET_WM_WINDOW_TYPE_DESKTOP' in EWMH.getWmWindowType(hWnd, str=True) def bringBack(hWnd, parent): w = DISP.create_resource_object('window', hWnd) if parent: w.reparent(parent, 0, 0) DISP.flush() w.unmap() w.change_property(DISP.intern_atom('_NET_WM_WINDOW_TYPE', False), Xlib.Xatom.ATOM, 32, [DISP.intern_atom('_NET_WM_WINDOW_TYPE_NORMAL', False), ], Xlib.X.PropModeReplace) DISP.flush() w.change_property(DISP.intern_atom('_NET_WM_STATE', False), Xlib.Xatom.ATOM, 32, [DISP.intern_atom('_NET_WM_STATE_FOCUSED', False), ], Xlib.X.PropModeReplace) DISP.flush() w.map() EWMH.setActiveWindow(hWnd) EWMH.display.flush() return '_NET_WM_WINDOW_TYPE_NORMAL' in EWMH.getWmWindowType(hWnd, str=True) def _xlibGetAllWindows(parent: int = None, title: str = ""): if not parent: parent = ROOT allWindows = [parent] def findit(hwnd): query = hwnd.query_tree() for child in query.children: allWindows.append(child) findit(child) findit(parent) if not title: matches = allWindows else: matches = [] for w in allWindows: if w.get_wm_name() == title: matches.append(w) return matches hWnd = pywinctl.getActiveWindow() parent = hWnd._hWnd.query_tree().parent sendBehind(hWnd._hWnd) time.sleep(3) bringBack(hWnd._hWnd, parent)
[ "Eureka!!! Last Ubuntu version (22.04) seems to have brought the solution by itself. It now has a \"layer\" for desktop icons you can interact with. This also gave me the clue to find a smarter solution on Mint/Cinnamon (testing in other OS is still pending). This is the code which seems to work OK, for those with the same issue:\nimport time\n\nimport Xlib.display\nimport ewmh\nimport pywinctl\n\nDISP = Xlib.display.Display()\nSCREEN = DISP.screen()\nROOT = DISP.screen().root\nEWMH = ewmh.EWMH(_display=DISP, root=ROOT)\n\n\ndef sendBehind(hWnd):\n w = DISP.create_resource_object('window', hWnd.id)\n w.unmap()\n w.change_property(DISP.intern_atom('_NET_WM_WINDOW_TYPE', False), Xlib.Xatom.ATOM,\n 32, [DISP.intern_atom('_NET_WM_WINDOW_TYPE_DESKTOP', False), ],\n Xlib.X.PropModeReplace)\n DISP.flush()\n w.map()\n\n # This will try to raise the desktop icons layer on top of the window\n # Ubuntu: \"@!0,0;BDHF\" is the new desktop icons NG extension\n # Mint: \"Desktop\" name is language-dependent. Using its class (nemo-desktop)\n desktop = _xlibGetAllWindows(title=\"@!0,0;BDHF\", klass=('nemo-desktop', 'Nemo-desktop'))\n for d in desktop:\n w = DISP.create_resource_object('window', d)\n w.raise_window()\n\n return '_NET_WM_WINDOW_TYPE_DESKTOP' in EWMH.getWmWindowType(hWnd, str=True)\n\n\ndef bringBack(hWnd):\n\n w = DISP.create_resource_object('window', hWnd.id)\n\n w.unmap()\n w.change_property(DISP.intern_atom('_NET_WM_WINDOW_TYPE', False), Xlib.Xatom.ATOM,\n 32, [DISP.intern_atom('_NET_WM_WINDOW_TYPE_NORMAL', False), ],\n Xlib.X.PropModeReplace)\n DISP.flush()\n w.change_property(DISP.intern_atom('_NET_WM_STATE', False), Xlib.Xatom.ATOM,\n 32, [DISP.intern_atom('_NET_WM_STATE_FOCUSED', False), ],\n Xlib.X.PropModeReplace)\n DISP.flush()\n w.map()\n EWMH.setActiveWindow(hWnd)\n EWMH.display.flush()\n return '_NET_WM_WINDOW_TYPE_NORMAL' in EWMH.getWmWindowType(hWnd, str=True)\n\n\ndef _xlibGetAllWindows(parent=None, title: str = \"\", klass=None):\n\n parent = parent or ROOT\n allWindows = [parent]\n\n def findit(hwnd):\n query = hwnd.query_tree()\n for child in query.children:\n allWindows.append(child)\n findit(child)\n\n findit(parent)\n if not title and not klass:\n return allWindows\n else:\n return [window for window in allWindows if ((title and window.get_wm_name() == title) or\n (klass and window.get_wm_class() == klass))]\n\n\nhWnd = pywinctl.getActiveWindow()\nsendBehind(hWnd._hWnd)\ntime.sleep(3)\nbringBack(hWnd._hWnd)\n\n" ]
[ 1 ]
[]
[]
[ "gnome", "pyqt5", "python", "ubuntu", "xlib" ]
stackoverflow_0071241339_gnome_pyqt5_python_ubuntu_xlib.txt
Q: Plotting two variable in the same bar plot I have a dataset having gdp of countries and their biofuel production named "Merging2". I am trying to plot a bar chart of top 5 countries in gdp and in the same plot have the bar chart of their biofuel_production. I plotted the top gdp's using : yr=Merging2.groupby(by='Years') access1=yr.get_group(2019) sorted=access1.sort_values(["rgdpe"], ascending=[False]) #sorting it highest_gdp_2019=sorted.head(10) # taking the top 5 rgdpe fig, ax=plt.subplots() plt.bar(highest_gdp_2019.Countries,highest_gdp_2019.rgdpe, color ='black',width = 0.8,alpha=0.8) ax.set_xlabel("Countries") ax.set_ylabel("Real GDP") plt.xticks(rotation = 90) Is there a way to do that in Python ? A: Do you want two subplots, or do you want both bars next to each other? Either way, check out this other thread which should give you the answer to both. In the second case, you would want a secondary Y-axis (as illustrated in the post)
Plotting two variables in the same bar plot
I have a dataset named "Merging2" containing the GDP of countries and their biofuel production. I am trying to plot a bar chart of the top 5 countries by GDP and, in the same plot, have the bar chart of their biofuel production.
I plotted the top GDPs using:
yr=Merging2.groupby(by='Years')
access1=yr.get_group(2019)

sorted=access1.sort_values(["rgdpe"], ascending=[False]) #sorting it
highest_gdp_2019=sorted.head(10) # taking the top 5 rgdpe

fig, ax=plt.subplots()
plt.bar(highest_gdp_2019.Countries,highest_gdp_2019.rgdpe, color ='black',width = 0.8,alpha=0.8)
ax.set_xlabel("Countries")
ax.set_ylabel("Real GDP")
plt.xticks(rotation = 90)

Is there a way to do that in Python?
[ "Do you want two subplots, or do you want both bars next to each other? Either way, check out this other thread which should give you the answer to both. In the second case, you would want a secondary Y-axis (as illustrated in the post)\n" ]
[ 0 ]
[]
[]
[ "data_cleaning", "data_science", "pandas", "python" ]
stackoverflow_0074662109_data_cleaning_data_science_pandas_python.txt
Q: How to make input text bold in console? I'm looking for a way to make the text that the user types in the console bold input("Input your name: ") If I type "John", I want it to show up as bold as I'm typing it, something like this Input your name: John A: They are called ANSI escape sequence. Basically you output some special bytes to control how the terminal text looks. Try this: x = input('Name: \u001b[1m') # anything from here on will be BOLD print('\u001b[0m', end='') # anything from here on will be normal print('Your input is:', x) \u001b[1m tells the terminal to switch to bold text. \u001b[0m tells it to reset. This page gives a good introduction to ANSI escape sequence. A: You can do the following with colorama: from colorama import init,Style,Fore,Back import os os.system('cls') def inputer(prompt) : init() print(Style.NORMAL+prompt + Style.BRIGHT+'',end="") x = input() return x ## ------------------------- inputer("Input your name: ") and the output will be as follows: Input your name: John ref. : https://youtu.be/wYPh61tROiY
How to make input text bold in console?
I'm looking for a way to make the text that the user types in the console bold input("Input your name: ") If I type "John", I want it to show up as bold as I'm typing it, something like this Input your name: John
[ "They are called ANSI escape sequence. Basically you output some special bytes to control how the terminal text looks. Try this:\nx = input('Name: \\u001b[1m') # anything from here on will be BOLD\n\nprint('\\u001b[0m', end='') # anything from here on will be normal\nprint('Your input is:', x)\n\n\\u001b[1m tells the terminal to switch to bold text. \\u001b[0m tells it to reset.\nThis page gives a good introduction to ANSI escape sequence.\n", "You can do the following with colorama:\nfrom colorama import init,Style,Fore,Back\nimport os\n\nos.system('cls')\n\ndef inputer(prompt) :\n init()\n print(Style.NORMAL+prompt + Style.BRIGHT+'',end=\"\")\n x = input()\n return x\n\n## -------------------------\n\ninputer(\"Input your name: \")\n\nand the output will be as follows:\n\nInput your name: John\n\nref. : https://youtu.be/wYPh61tROiY\n" ]
[ 4, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0059122173_python_python_3.x.txt
Q: TS SS Vector similarity using matrices I have two sets of vectors A,B and I want to compute the TS-SS similarity for each vector in A compared to every vector in B. I have an implementation (from https://github.com/taki0112/Vector_Similarity) used for 2 vectors, however when I tried using it for matrices (function "compute_matrix_sim") - it is very inefficient and takes forever. Is there a more vectorized approach to use TS-SS for matrices? Thanks! import math import numpy as np class TS_SS: def Cosine(self, vec1: np.ndarray, vec2: np.ndarray): return np.dot(vec1, vec2.T) / (np.linalg.norm(vec1) * np.linalg.norm(vec2)) def VectorSize(self, vec: np.ndarray): return np.linalg.norm(vec) def Euclidean(self, vec1: np.ndarray, vec2: np.ndarray): return np.linalg.norm(vec1 - vec2) def Theta(self, vec1: np.ndarray, vec2: np.ndarray): return np.arccos(self.Cosine(vec1, vec2)) + np.radians(10) def Triangle(self, vec1: np.ndarray, vec2: np.ndarray): theta = np.radians(self.Theta(vec1, vec2)) return (self.VectorSize(vec1) * self.VectorSize(vec2) * np.sin(theta)) / 2 def Magnitude_Difference(self, vec1: np.ndarray, vec2: np.ndarray): return abs(self.VectorSize(vec1) - self.VectorSize(vec2)) def Sector(self, vec1: np.ndarray, vec2: np.ndarray): ED = self.Euclidean(vec1, vec2) MD = self.Magnitude_Difference(vec1, vec2) theta = self.Theta(vec1, vec2) return math.pi * (ED + MD) ** 2 * theta / 360 def __call__(self, vec1: np.ndarray, vec2: np.ndarray): return self.Triangle(vec1, vec2) * self.Sector(vec1, vec2) def compute_matrix_sim(m1, m2): similarity = TS_SS() ans = np.zeros((len(m1), len(m2))) for i, vec_m1 in enumerate(m1): for j, vec_m2 in enumerate(m2): ans[i, j] = similarity(vec_m1, vec_m2) return ans # Usage v1 = np.random.random_sample((40, 80)) v2 = np.random.random_sample((2000000, 80)) x = compute_matrix_sim(v1, v2) print(x) A: import numpy as np import torch class TS_SS: def __init__(self): self.thetaval = 0 self.vecnorm1 = 0 self.vecnorm2 = 0 def Theta(self, vec1, vec2): return (torch.arccos(torch.mm(vec1,vec2.T)/(self.vecnorm1*self.vecnorm2.T)) + np.radians(10)) def Triangle(self, vec1, vec2): self.thetaval = self.Theta(vec1, vec2) return ((self.vecnorm1 * self.vecnorm2.T * torch.sin(self.thetaval)) / 2) def Sector(self, vec1: np.ndarray, vec2: np.ndarray): ED = torch.cdist(vec1,vec2,p=2) MD = torch.abs(self.vecnorm1 - self.vecnorm2.T) # theta = self.Theta(vec1, vec2) return (1/2) * ((ED + MD) ** 2) * self.thetaval def __call__(self, vec1, vec2): self.thetaval = 0 self.vecnorm1 = vec1.norm(p=2, dim=1, keepdim=True) self.vecnorm2 = vec2.norm(p=2, dim=1, keepdim=True) return self.Triangle(vec1, vec2) * self.Sector(vec1, vec2) v1 = torch.tensor([[3.0,4.0]]) v2 = torch.tensor([[4.0,3.0]]) similarity = TS_SS() print(similarity(v1, v2))
TS SS Vector similarity using matrices
I have two sets of vectors A,B and I want to compute the TS-SS similarity for each vector in A compared to every vector in B. I have an implementation (from https://github.com/taki0112/Vector_Similarity) used for 2 vectors, however when I tried using it for matrices (function "compute_matrix_sim") - it is very inefficient and takes forever. Is there a more vectorized approach to use TS-SS for matrices? Thanks! import math import numpy as np class TS_SS: def Cosine(self, vec1: np.ndarray, vec2: np.ndarray): return np.dot(vec1, vec2.T) / (np.linalg.norm(vec1) * np.linalg.norm(vec2)) def VectorSize(self, vec: np.ndarray): return np.linalg.norm(vec) def Euclidean(self, vec1: np.ndarray, vec2: np.ndarray): return np.linalg.norm(vec1 - vec2) def Theta(self, vec1: np.ndarray, vec2: np.ndarray): return np.arccos(self.Cosine(vec1, vec2)) + np.radians(10) def Triangle(self, vec1: np.ndarray, vec2: np.ndarray): theta = np.radians(self.Theta(vec1, vec2)) return (self.VectorSize(vec1) * self.VectorSize(vec2) * np.sin(theta)) / 2 def Magnitude_Difference(self, vec1: np.ndarray, vec2: np.ndarray): return abs(self.VectorSize(vec1) - self.VectorSize(vec2)) def Sector(self, vec1: np.ndarray, vec2: np.ndarray): ED = self.Euclidean(vec1, vec2) MD = self.Magnitude_Difference(vec1, vec2) theta = self.Theta(vec1, vec2) return math.pi * (ED + MD) ** 2 * theta / 360 def __call__(self, vec1: np.ndarray, vec2: np.ndarray): return self.Triangle(vec1, vec2) * self.Sector(vec1, vec2) def compute_matrix_sim(m1, m2): similarity = TS_SS() ans = np.zeros((len(m1), len(m2))) for i, vec_m1 in enumerate(m1): for j, vec_m2 in enumerate(m2): ans[i, j] = similarity(vec_m1, vec_m2) return ans # Usage v1 = np.random.random_sample((40, 80)) v2 = np.random.random_sample((2000000, 80)) x = compute_matrix_sim(v1, v2) print(x)
[ "import numpy as np\nimport torch\n\n\nclass TS_SS:\n def __init__(self):\n self.thetaval = 0\n self.vecnorm1 = 0\n self.vecnorm2 = 0\n\n def Theta(self, vec1, vec2):\n return (torch.arccos(torch.mm(vec1,vec2.T)/(self.vecnorm1*self.vecnorm2.T))\n + np.radians(10))\n\n def Triangle(self, vec1, vec2):\n self.thetaval = self.Theta(vec1, vec2)\n return ((self.vecnorm1 * self.vecnorm2.T * torch.sin(self.thetaval))\n / 2)\n\n\n\n def Sector(self, vec1: np.ndarray, vec2: np.ndarray):\n ED = torch.cdist(vec1,vec2,p=2)\n MD = torch.abs(self.vecnorm1 - self.vecnorm2.T)\n # theta = self.Theta(vec1, vec2)\n return (1/2) * ((ED + MD) ** 2) * self.thetaval\n\n def __call__(self, vec1, vec2):\n self.thetaval = 0\n self.vecnorm1 = vec1.norm(p=2, dim=1, keepdim=True)\n self.vecnorm2 = vec2.norm(p=2, dim=1, keepdim=True)\n return self.Triangle(vec1, vec2) * self.Sector(vec1, vec2)\n\n\nv1 = torch.tensor([[3.0,4.0]])\nv2 = torch.tensor([[4.0,3.0]])\nsimilarity = TS_SS()\nprint(similarity(v1, v2))\n\n" ]
[ 0 ]
[]
[]
[ "cosine_similarity", "nlp", "python", "similarity", "vectorization" ]
stackoverflow_0068338024_cosine_similarity_nlp_python_similarity_vectorization.txt
Q: Why doesn't this conversion to utf8 work? I have a subprocess command that outputs some characters such as '\xf1'. I'm trying to decode it as utf8 but I get an error. s = '\xf1' s.decode('utf-8') The above throws: UnicodeDecodeError: 'utf8' codec can't decode byte 0xf1 in position 0: unexpected end of data It works when I use 'latin-1' but shouldn't utf8 work as well? My understanding is that latin1 is a subset of utf8. Am I missing something here? EDIT: print s # ñ repr(s) # returns "'\\xa9'" A: You have confused Unicode with UTF-8. Latin-1 is a subset of Unicode, but it is not a subset of UTF-8. Avoid like the plague ever thinking about individual code units. Just use code points. Do not think about UTF-8. Think about Unicode instead. This is where you are being confused. Source Code for Demo Program Using Unicode in Python is very easy. It’s especially with Python 3 and wide builds, the only way I use Python, but you can still use the legacy Python 2 under a narrow build if you are careful about sticking to UTF-8. To do this, always your source code encoding and your output encoding correctly to UTF-8. Now stop thinking of UTF-anything and use only UTF-8 literals, logical code point numbers, or symbolic character names throughout your Python program. Here’s the source code with line numbers: % cat -n /tmp/py 1 #!/usr/bin/env python3.2 2 # -*- coding: UTF-8 -*- 3 4 from __future__ import unicode_literals 5 from __future__ import print_function 6 7 import sys 8 import os 9 import re 10 11 if not (("PYTHONIOENCODING" in os.environ) 12 and 13 re.search("^utf-?8$", os.environ["PYTHONIOENCODING"], re.I)): 14 sys.stderr.write(sys.argv[0] + ": Please set your PYTHONIOENCODING envariable to utf8\n") 15 sys.exit(1) 16 17 print('1a: el ni\xF1o') 18 print('2a: el nin\u0303o') 19 20 print('1a: el niño') 21 print('2b: el niño') 22 23 print('1c: el ni\N{LATIN SMALL LETTER N WITH TILDE}o') 24 print('2c: el nin\N{COMBINING TILDE}o') And here are print functions with their non-ASCII characters uniquoted using the \x{⋯} notation: % grep -n ^print /tmp/py | uniquote -x 17:print('1a: el ni\xF1o') 18:print('2a: el nin\u0303o') 20:print('1b: el ni\x{F1}o') 21:print('2b: el nin\x{303}o') 23:print('1c: el ni\N{LATIN SMALL LETTER N WITH TILDE}o') 24:print('2c: el nin\N{COMBINING TILDE}o') Sample Runs of Demo Program Here’s a sample run of that program that shows the three different ways (a, b, and c) of doing it: the first set as literals in your source code (which will be subject to StackOverflow’s NFC conversions and so cannot be trusted!!!) 
and the second two sets with numeric Unicode code points and with symbolic Unicode character names respectively, again uniquoted so you can see what things really are: % python /tmp/py 1a: el niño 2a: el niño 1b: el niño 2b: el niño 1c: el niño 2c: el niño % python /tmp/py | uniquote -x 1a: el ni\x{F1}o 2a: el nin\x{303}o 1b: el ni\x{F1}o 2b: el nin\x{303}o 1c: el ni\x{F1}o 2c: el nin\x{303}o % python /tmp/py | uniquote -v 1a: el ni\N{LATIN SMALL LETTER N WITH TILDE}o 2a: el nin\N{COMBINING TILDE}o 1b: el ni\N{LATIN SMALL LETTER N WITH TILDE}o 2b: el nin\N{COMBINING TILDE}o 1c: el ni\N{LATIN SMALL LETTER N WITH TILDE}o 2c: el nin\N{COMBINING TILDE}o I really dislike looking at binary, but here is what that looks like as binary bytes: % python /tmp/py | uniquote -b 1a: el ni\xC3\xB1o 2a: el nin\xCC\x83o 1b: el ni\xC3\xB1o 2b: el nin\xCC\x83o 1c: el ni\xC3\xB1o 2c: el nin\xCC\x83o The Moral of the Story Even when you use UTF-8 source, you should think and use only logical Unicode code point numbers (or symbolic named characters), not the individual 8-bit code units that underlie the serial representation of UTF-8 (or for that matter of UTF-16). It is extremely rare to need code units instead of code points, and it just confuses you. You will also get more reliably behavior if you use a wide build of Python3 than you will get with alternatives to those choices, but that is a UTF-32 matter, not a UTF-8 one. Both UTF-32 and UTF-8 are easy to work with, if you just go with the flow. A: UTF-8 is not a subset of Latin-1. UTF-8 encodes ASCII with the same single bytes. For all other code points, it's all multiple bytes. Put simply, \xf1 is not valid UTF-8, as Python tells you. "Unexpected end of input" indicates that this byte marks the beginning of a multi-byte sequence which is not provided. I recommend you read up on UTF-8. A: My understanding is that latin1 is a subset of utf8. Wrong. Latin-1, aka ISO 8859-1 (and sometimes erroneously as Windows-1252), is not a subet of UTF-8. ASCII, on the other hand, is a subset of UTF-8. ASCII strings are valid UTF-8 strings, but generalized Windows-1252 or ISO 8859-1 strings are not valid UTF-8, which is why s.decode('UTF-8') is throwing a UnicodeDecodeError. A: It's the first byte of a multi-byte sequence in UTF-8, so it's not valid by itself. In fact, it's the first byte of a 4 byte sequence. Bits Last code point Byte 1 Byte 2 Byte 3 Byte 4 Byte 5 Byte 6 21 U+1FFFFF 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx See here for more info. A: the easy way (python 3) s='\xf1' bytes(s, 'utf-8').decode('utf-8') #'ñ' if you are trying decode escaped unicode you can use: s='Autom\\u00e1tico' bytes(s, "utf-8").decode('unicode-escape') #'Automático'
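As a concrete footnote to the answers above (a small Python 3 sketch of my own): the same character is one byte in Latin-1 but two bytes in UTF-8, which is exactly why raw Latin-1 bytes are not valid UTF-8.
s = "ñ"
print(s.encode("latin-1"))        # b'\xf1'     : one byte in Latin-1
print(s.encode("utf-8"))          # b'\xc3\xb1' : two bytes in UTF-8
print(b"\xf1".decode("latin-1"))  # 'ñ'
# b"\xf1".decode("utf-8") raises UnicodeDecodeError: 0xF1 announces a
# 4-byte UTF-8 sequence, and the remaining bytes never arrive.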
Why doesn't this conversion to utf8 work?
I have a subprocess command that outputs some characters such as '\xf1'. I'm trying to decode it as utf8 but I get an error. s = '\xf1' s.decode('utf-8') The above throws: UnicodeDecodeError: 'utf8' codec can't decode byte 0xf1 in position 0: unexpected end of data It works when I use 'latin-1' but shouldn't utf8 work as well? My understanding is that latin1 is a subset of utf8. Am I missing something here? EDIT: print s # ñ repr(s) # returns "'\\xa9'"
[ "You have confused Unicode with UTF-8. Latin-1 is a subset of Unicode, but it is not a subset of UTF-8. Avoid like the plague ever thinking about individual code units. Just use code points. Do not think about UTF-8. Think about Unicode instead. This is where you are being confused.\nSource Code for Demo Program\nUsing Unicode in Python is very easy. It’s especially with Python 3 and wide builds, the only way I use Python, but you can still use the legacy Python 2 under a narrow build if you are careful about sticking to UTF-8. \nTo do this, always your source code encoding and your output encoding correctly to UTF-8. Now stop thinking of UTF-anything and use only UTF-8 literals, logical code point numbers, or symbolic character names throughout your Python program. \nHere’s the source code with line numbers:\n% cat -n /tmp/py\n 1 #!/usr/bin/env python3.2\n 2 # -*- coding: UTF-8 -*-\n 3 \n 4 from __future__ import unicode_literals\n 5 from __future__ import print_function\n 6 \n 7 import sys\n 8 import os\n 9 import re\n 10 \n 11 if not ((\"PYTHONIOENCODING\" in os.environ)\n 12 and\n 13 re.search(\"^utf-?8$\", os.environ[\"PYTHONIOENCODING\"], re.I)):\n 14 sys.stderr.write(sys.argv[0] + \": Please set your PYTHONIOENCODING envariable to utf8\\n\")\n 15 sys.exit(1)\n 16 \n 17 print('1a: el ni\\xF1o')\n 18 print('2a: el nin\\u0303o')\n 19 \n 20 print('1a: el niño')\n 21 print('2b: el niño')\n 22 \n 23 print('1c: el ni\\N{LATIN SMALL LETTER N WITH TILDE}o')\n 24 print('2c: el nin\\N{COMBINING TILDE}o')\n\nAnd here are print functions with their non-ASCII characters uniquoted using the \\x{⋯} notation:\n% grep -n ^print /tmp/py | uniquote -x\n17:print('1a: el ni\\xF1o')\n18:print('2a: el nin\\u0303o')\n20:print('1b: el ni\\x{F1}o')\n21:print('2b: el nin\\x{303}o')\n23:print('1c: el ni\\N{LATIN SMALL LETTER N WITH TILDE}o')\n24:print('2c: el nin\\N{COMBINING TILDE}o')\n\nSample Runs of Demo Program\nHere’s a sample run of that program that shows the three different ways (a, b, and c) of doing it: the first set as literals in your source code (which will be subject to StackOverflow’s NFC conversions and so cannot be trusted!!!) and the second two sets with numeric Unicode code points and with symbolic Unicode character names respectively, again uniquoted so you can see what things really are:\n% python /tmp/py\n1a: el niño\n2a: el niño\n1b: el niño\n2b: el niño\n1c: el niño\n2c: el niño\n\n% python /tmp/py | uniquote -x\n1a: el ni\\x{F1}o\n2a: el nin\\x{303}o\n1b: el ni\\x{F1}o\n2b: el nin\\x{303}o\n1c: el ni\\x{F1}o\n2c: el nin\\x{303}o\n\n% python /tmp/py | uniquote -v\n1a: el ni\\N{LATIN SMALL LETTER N WITH TILDE}o\n2a: el nin\\N{COMBINING TILDE}o\n1b: el ni\\N{LATIN SMALL LETTER N WITH TILDE}o\n2b: el nin\\N{COMBINING TILDE}o\n1c: el ni\\N{LATIN SMALL LETTER N WITH TILDE}o\n2c: el nin\\N{COMBINING TILDE}o\n\nI really dislike looking at binary, but here is what that looks like as binary bytes:\n% python /tmp/py | uniquote -b\n1a: el ni\\xC3\\xB1o\n2a: el nin\\xCC\\x83o\n1b: el ni\\xC3\\xB1o\n2b: el nin\\xCC\\x83o\n1c: el ni\\xC3\\xB1o\n2c: el nin\\xCC\\x83o\n\nThe Moral of the Story\nEven when you use UTF-8 source, you should think and use only logical Unicode code point numbers (or symbolic named characters), not the individual 8-bit code units that underlie the serial representation of UTF-8 (or for that matter of UTF-16). 
It is extremely rare to need code units instead of code points, and it just confuses you.\nYou will also get more reliably behavior if you use a wide build of Python3 than you will get with alternatives to those choices, but that is a UTF-32 matter, not a UTF-8 one. Both UTF-32 and UTF-8 are easy to work with, if you just go with the flow.\n", "UTF-8 is not a subset of Latin-1. UTF-8 encodes ASCII with the same single bytes. For all other code points, it's all multiple bytes.\nPut simply, \\xf1 is not valid UTF-8, as Python tells you. \"Unexpected end of input\" indicates that this byte marks the beginning of a multi-byte sequence which is not provided.\nI recommend you read up on UTF-8.\n", "\nMy understanding is that latin1 is a subset of utf8.\n\nWrong. Latin-1, aka ISO 8859-1 (and sometimes erroneously as Windows-1252), is not a subet of UTF-8. ASCII, on the other hand, is a subset of UTF-8. ASCII strings are valid UTF-8 strings, but generalized Windows-1252 or ISO 8859-1 strings are not valid UTF-8, which is why s.decode('UTF-8') is throwing a UnicodeDecodeError.\n", "It's the first byte of a multi-byte sequence in UTF-8, so it's not valid by itself.\nIn fact, it's the first byte of a 4 byte sequence.\nBits Last code point Byte 1 Byte 2 Byte 3 Byte 4 Byte 5 Byte 6\n21 U+1FFFFF 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx\n\nSee here for more info.\n", "the easy way (python 3)\ns='\\xf1'\nbytes(s, 'utf-8').decode('utf-8')\n#'ñ'\n\nif you are trying decode escaped unicode you can use:\ns='Autom\\\\u00e1tico'\nbytes(s, \"utf-8\").decode('unicode-escape')\n#'Automático'\n\n" ]
[ 9, 4, 1, 1, 0 ]
[]
[]
[ "encoding", "python", "unicode", "utf_8" ]
stackoverflow_0007163485_encoding_python_unicode_utf_8.txt
Q: Implementing the Fibonacci sequence for the last n elements: The nBonacci sequence I was curious about how I can implement the Fibonacci sequence for summing the last n elements instead of just the last 2. So I was thinking about implementing a function nBonacci(n,m) where n is the number of last elements we need to sum, and m is the number of elements in this list. The Fibonacci sequence starts with 2 ones, and each following element is the sum of the 2 previous numbers. Let's make a generalization: The nBonacci sequence starts with n ones, and each following element is the sum of the previous n numbers. I want to define the nBonacci function that takes the positive integer n and a positive integer m, with m>n, and returns a list with the first m elements of the nBonacci sequence corresponding to the value n. For example, nBonacci(3,8) should return the list [1, 1, 1, 3, 5, 9, 17, 31]. def fib(num): a = 0 b = 1 while b <= num: prev_a = a a = b b = prev_a +b The problem is that I don't know the number of times I need to sum. Does anyone have an idea or a suggestion for a solution? A: The nBonacci sequence will always have to start with n ones, or the sequence could never start. Therefore, we can just take advantage of the range() function and slice the existing list: def nfib(n, m): lst = [1] * n for i in range(n, m): lst.append(sum(lst[i-n:i])) return lst print(nfib(3, 8)) # => [1, 1, 1, 3, 5, 9, 17, 31] A: If you wanted to generate the Fibonacci sequence recursively, you could do def fib(x, y, l): if len(l) == y: return l return fib(x, y, l + [sum(l[-x:])]) num = 3 print(fib(num, 8, [1 for _ in range(num)])) #[1, 1, 1, 3, 5, 9, 17, 31]
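A design note on the answers above: both recompute sum(...) over the last n items for every new element, which costs O(n*m) overall. A sliding-window variant (my own sketch, same output) keeps a running sum and does it in O(m):
def nbonacci(n, m):
    lst = [1] * n
    window = n                           # sum of the initial n ones
    for _ in range(n, m):
        lst.append(window)
        window += lst[-1] - lst[-1 - n]  # add the new element, drop the oldest
    return lst[:m]                       # also handles the degenerate case m < n

print(nbonacci(3, 8))  # [1, 1, 1, 3, 5, 9, 17, 31]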
Implementing the Fibonacci sequence for the last n elements: The nBonacci sequence
I was curious about how I can implement the Fibonacci sequence for summing the last n elements instead of just the last 2. So I was thinking about implementing a function nBonacci(n,m) where n is the number of last elements we need to sum, and m is the number of elements in this list. The Fibonacci sequence starts with 2 ones, and each following element is the sum of the 2 previous numbers. Let's make a generalization: The nBonacci sequence starts with n ones, and each following element is the sum of the previous n numbers. I want to define the nBonacci function that takes the positive integer n and a positive integer m, with m>n, and returns a list with the first m elements of the nBonacci sequence corresponding to the value n. For example, nBonacci(3,8) should return the list [1, 1, 1, 3, 5, 9, 17, 31]. def fib(num): a = 0 b = 1 while b <= num: prev_a = a a = b b = prev_a +b The problem is that I don't know the number of times I need to sum. Does anyone have an idea or a suggestion for a solution?
[ "The nBonacci sequence will always have to start with n ones, or the sequence could never start. Therefore, we can just take advantage of the range() function and slice the existing list:\ndef nfib(n, m):\n lst = [1] * n\n for i in range(n, m):\n lst.append(sum(lst[i-n:i]))\n return lst\n\n\nprint(nfib(3, 8)) # => [1, 1, 1, 3, 5, 9, 17, 31]\n\n", "If you wanted to generate the Fibonacci sequence recursively, you could do\ndef fib(x, y, l):\n if len(l) == y:\n return l\n return fib(x, y, l + [sum(l[-x:])])\n\n\nnum = 3\nprint(fib(num, 8, [1 for _ in range(num)])) #[1, 1, 1, 3, 5, 9, 17, 31]\n\n" ]
[ 1, 1 ]
[]
[]
[ "fibonacci", "iteration", "python" ]
stackoverflow_0074662150_fibonacci_iteration_python.txt
Q: "input expected at most 1 arguments, got 2" I'm trying to create a function that will prompt the user to give a radius for each circle that they have designated as having, however, I can't seem to figure out how to display it without running into the TypeError: input expected at most 1 arguments, got 2 def GetRadius(): NUM_CIRCLES = eval(input("Enter the number of circles: ")) for i in range(NUM_CIRCLES): Radius = eval(input("Enter the radius of circle #", i + 1)) GetRadius() A: That's because you gave it a second argument. You can only give it the string you want to see displayed. This isn't a free-form print statement. Try this: Radius = eval(input("Enter the radius of circle #" + str(i + 1))) This gives you a single string value to send to input. Also, be very careful with using eval. A: input only takes one argument, if you want to create a string with your i value you can use Radius = eval(input("Enter the radius of circle #{} ".format(i + 1))) Also it is very dangerous to use eval to blindly execute user input. A: TypeError: input expected at most 1 arguments, got 2 Thats because you provided two arguments for input function but it expects one argument (yeah I rephrased error message...). Anyway, use this: Radius = float(input("Enter the radius of circle #" + str(i + 1))) Don't use eval for this. (Other answer explains why) For future issues, it's worth using help function from within python interpreter. Try help(input) in python interpreter. A: In golang I got the exact same error using database/sql when I tried this code: err := db.QueryRow("SELECT ... WHERE field=$1 ", field, field2).Scan(&count) if err != nil { return false, err } it turns out the solution was to do add 1 parameter instead of 2 parameters to the varargs of the function. so the solution was this: err := db.QueryRow("SELECT ... WHERE field=$1 ", field).Scan(&count) // notice field2 is missing here if err != nil { return false, err } I hope this helps anyone, particularly since this is the first google result when googling this issue. also guys, always provide context to your errors. HTH
"input expected at most 1 arguments, got 2"
I'm trying to create a function that will prompt the user to give a radius for each circle that they have designated as having, however, I can't seem to figure out how to display it without running into the TypeError: input expected at most 1 arguments, got 2 def GetRadius(): NUM_CIRCLES = eval(input("Enter the number of circles: ")) for i in range(NUM_CIRCLES): Radius = eval(input("Enter the radius of circle #", i + 1)) GetRadius()
[ "That's because you gave it a second argument. You can only give it the string you want to see displayed. This isn't a free-form print statement. Try this:\nRadius = eval(input(\"Enter the radius of circle #\" + str(i + 1)))\n\nThis gives you a single string value to send to input.\nAlso, be very careful with using eval.\n", "input only takes one argument, if you want to create a string with your i value you can use\nRadius = eval(input(\"Enter the radius of circle #{} \".format(i + 1)))\n\nAlso it is very dangerous to use eval to blindly execute user input.\n", "\nTypeError: input expected at most 1 arguments, got 2\n\nThats because you provided two arguments for input function but it expects one argument (yeah I rephrased error message...).\nAnyway, use this:\nRadius = float(input(\"Enter the radius of circle #\" + str(i + 1)))\n\nDon't use eval for this. (Other answer explains why)\nFor future issues, it's worth using help function from within python interpreter. Try help(input) in python interpreter. \n", "In golang I got the exact same error using database/sql when I tried this code:\n err := db.QueryRow(\"SELECT ... WHERE field=$1 \", field, field2).Scan(&count)\n if err != nil {\n return false, err\n }\n\nit turns out the solution was to do add 1 parameter instead of 2 parameters to the varargs of the function.\nso the solution was this:\n err := db.QueryRow(\"SELECT ... WHERE field=$1 \", field).Scan(&count)\n // notice field2 is missing here\n if err != nil {\n return false, err\n }\n\nI hope this helps anyone, particularly since this is the first google result when googling this issue.\nalso guys, always provide context to your errors.\nHTH\n" ]
[ 1, 1, 0, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0043243627_python_python_3.x.txt
Q: How to classify unknown/unseen data as anomaly I trained a CNN model with 6 different classes (labels are 0-5) and I am getting more than 90% accuracy out of it. It can correctly classify the classes. I am actually trying to detect anomalies with it. So what I want is: if any data comes in which my model has never seen before, or it has never been trained on similar data, then it should be classified as an anomaly. I do not have any abnormal data to train my model, I just have the normal data. So the rule would be: if any incoming data point does not belong to any of the six classes, then it is an anomaly. How can I do it? I thought of a method which I am not sure works in this scenario. The method is: when I predict a single data point, it gives me the probability score for all 6 classes. So, I take the maximum of these 6 values, and if this max value is below a threshold level, for example 0.70, then this observation will be classified as an anomaly. That means, if any data point has less than 70% probability of being one of the six classes then it is an anomaly. The code looks like this y_pred = s_model.predict(X_test_scaled) normal = [] abnormal = [] max_value_list= [] for i in y_pred: max_value= np.max(i) max_value_list.append(max_value) if max_value <=0.70: abnormal.append(max_value) print('Anomaly detected') else: normal.append(max_value) print('The number of total abnormal observations are: ',len(abnormal)) Does this method work in my case? Or is there any better way to do it? Any kind of help is appreciated. A: Interesting problem, but I think your method does not work. When your model's entropy is high, i.e. it is unsure which class to choose for that particular sample input, it does not necessarily mean that the sample is an anomaly; it may just mean that the model is struggling to select the correct normal class. I suggest adding some abnormal samples (some random unrelated images, if your samples are images), between 1% and 10% of your data, and labelling them as a new seventh class (label 6, since your existing labels are 0-5). Then train your model with those (and perhaps give more penalty for misclassifying this anomaly class). When you have your unseen samples, you classify them using your trained model. If they are classified as the anomaly class, then you know they are anomalies. Hope this helps.
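A minimal Keras-style sketch of the suggestion above (my own; it assumes arrays X_train/y_train for the normal data, an X_outlier array of unrelated samples, and that the question's s_model has its output layer widened from 6 to 7 units and uses a sparse categorical loss):
import numpy as np

# Original 6-class data (labels 0-5) plus outliers labelled with the new class 6.
X_all = np.concatenate([X_train, X_outlier])
y_all = np.concatenate([y_train, np.full(len(X_outlier), 6)])

# Penalize missing the anomaly class more heavily than the normal classes.
class_weight = {i: 1.0 for i in range(6)}
class_weight[6] = 5.0

s_model.fit(X_all, y_all, epochs=10, class_weight=class_weight)

# Anything predicted as class 6 is flagged as an anomaly.
pred = s_model.predict(X_test_scaled).argmax(axis=1)
anomalies = X_test_scaled[pred == 6]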
How to classify unknown/unseen data as anomaly
I trained a CNN model with 6 different classes (labels are 0-5) and I am getting more than 90% accuracy out of it. It can correctly classify the classes. I am actually trying to detect anomalies with it. So what I want is: if any data comes in which my model has never seen before, or it has never been trained on similar data, then it should be classified as an anomaly. I do not have any abnormal data to train my model, I just have the normal data. So the rule would be: if any incoming data point does not belong to any of the six classes, then it is an anomaly. How can I do it? I thought of a method which I am not sure works in this scenario. The method is: when I predict a single data point, it gives me the probability score for all 6 classes. So, I take the maximum of these 6 values, and if this max value is below a threshold level, for example 0.70, then this observation will be classified as an anomaly. That means, if any data point has less than 70% probability of being one of the six classes then it is an anomaly. The code looks like this y_pred = s_model.predict(X_test_scaled) normal = [] abnormal = [] max_value_list= [] for i in y_pred: max_value= np.max(i) max_value_list.append(max_value) if max_value <=0.70: abnormal.append(max_value) print('Anomaly detected') else: normal.append(max_value) print('The number of total abnormal observations are: ',len(abnormal)) Does this method work in my case? Or is there any better way to do it? Any kind of help is appreciated.
[ "Interesting problem, but I think your method does not work.\nWhen your model's entropy is high, i.e. it is unsure which class to choose for that particular sample input, it does not necessarily mean that the sample is an anomaly; it may just mean that the model is struggling to select the correct normal class.\nI suggest adding some abnormal samples (some random unrelated images, if your samples are images), between 1% and 10% of your data, and labelling them as a new seventh class (label 6, since your existing labels are 0-5). Then train your model with those (and perhaps give more penalty for misclassifying this anomaly class).\nWhen you have your unseen samples, you classify them using your trained model. If they are classified as the anomaly class, then you know they are anomalies.\nHope this helps.\n" ]
[ 2 ]
[]
[]
[ "anomaly_detection", "conv_neural_network", "machine_learning", "python", "tensorflow" ]
stackoverflow_0074662022_anomaly_detection_conv_neural_network_machine_learning_python_tensorflow.txt
Q: Unable to import nonsense from Nostril I am trying to import nonsense from Nostril (from nostril import nonsense) but I get this error: ImportError Traceback (most recent call last) Cell In [12], line 1 ----> 1 from nostril import nonsense ImportError: cannot import name 'nonsense' from 'nostril' (c:\Users\GithuaG\AppData\Local\Programs\Python\Python310\lib\site-packages\nostril\__init__.py) Here is what the __init__.py file contains: from .__version__ import __version__, __title__, __url__, __description__ from .__version__ import __author__, __email__ from .__version__ import __license__, __copyright__ from .ng import NGramData from .nonsense_detector import ( nonsense, generate_nonsense_detector, test_unlabeled, test_labeled, ngrams, dataset_from_pickle, sanitize_string ) I tried researching on the web but with no success. Any help will be appreciated, Thanks A: You most probably did pip install nostril before you installed the actual nostril package that you needed (just like I did). This would have caused another package that is used for testing to be installed alongside. You can either uninstall both nostril packages and then re-install just the nostril package you need. Or you can directly import the nonsense object from nonsense_detector as below if you need both packages from nostril.nonsense_detector import nonsense
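As a concrete sketch of the uninstall/re-install route described in the answer above (the GitHub URL is an assumption about where the intended Nostril project lives; verify it against the project's own install instructions):
% pip uninstall nostril     # removes the unrelated PyPI package shadowing the one you want
% pip install git+https://github.com/casics/nostril.git
% python -c "from nostril import nonsense"   # quick sanity check: no ImportError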
Unable to import nonsense from Nostril
I am trying to import nonsense from Nostril (from nostril import nonsense) but I get this error: ImportError Traceback (most recent call last) Cell In [12], line 1 ----> 1 from nostril import nonsense ImportError: cannot import name 'nonsense' from 'nostril' (c:\Users\GithuaG\AppData\Local\Programs\Python\Python310\lib\site-packages\nostril\__init__.py) Here is what the __init__.py file contains: from .__version__ import __version__, __title__, __url__, __description__ from .__version__ import __author__, __email__ from .__version__ import __license__, __copyright__ from .ng import NGramData from .nonsense_detector import ( nonsense, generate_nonsense_detector, test_unlabeled, test_labeled, ngrams, dataset_from_pickle, sanitize_string ) I tried researching on the web but with no success. Any help will be appreciated, Thanks
[ "You most probably did pip install nostril before you installed the actual nostril package that you needed (just like I did). This would have caused another package that is used for testing to be installed alongside. You can either uninstall both nostril packages and then re-install just the nostril package you need.\nOr you can directly import the nonsense object from nonsense_detector as below if you need both the packages\nfrom nostril.nonsense_detector import nonsense\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074386232_python.txt
Q: Python TkInter: How can I get the canvas coordinates of the visible area of a scrollable canvas? I need to find out what the visible coordinates of a vertically scrollable canvas are using python and tkinter. Let's assume I have a canvas that is 800 x 5000 pixels, and the visible, vertically scrollable window is 800x800. If I am scrolled all the way to the top of the canvas, I would like to have a function that, when run, would return something like: x=0 y=0, w=800 h=800 But if I were to scroll down and then run the function, I would get something like this: x=0 y=350 w=800 h=800 And if I resized the window vertically to, say, 1000, I would get: x=0 y=350 w=800 h=1000 I tried this code: self.canvas.update() print(f"X={self.canvas.canvasx(0)}") print(f"Y={self.canvas.canvasy(0)}") print(f"W={self.canvas.canvasx(self.canvas.winfo_width())}") print(f"H={self.canvas.canvasy(self.canvas.winfo_height())}") But it gives me the size of the whole canvas, not the visible window inside the canvas. I have tried searching for the answer, but am surprised not to have found anyone else with the same question. Perhaps I just don't have the right search terms. Context for anyone who cares: I am writing a thumbnail browser that is a grid of thumbnails. Since there may be thousands of thumbnails, I want to update the ones that are visible first, and then use a thread to update the remaining (hidden) thumbnails as needed. A: The methods canvasx and canvasy of the Canvas widget will convert screen pixels (ie: what's visible on the screen) into canvas pixels (the location in the larger virtual canvas). You can feed it an x or y of zero to get the virtual pixel at the top-left of the visible window, and you can give it the width and height of the widget to get the pixel at the bottom-right of the visible window. x0 = canvas.canvasx(0) y0 = canvas.canvasy(0) x1 = canvas.canvasx(canvas.winfo_width()) y1 = canvas.canvasy(canvas.winfo_height()) The canonical tcl/tk documentation says this about the canvasx method: pathName canvasx screenx ?gridspacing?: Given a window x-coordinate in the canvas screenx, this command returns the canvas x-coordinate that is displayed at that location. If gridspacing is specified, then the canvas coordinate is rounded to the nearest multiple of gridspacing units. Here is a contrived program that will print the coordinates of the top-left and bottom right visible pixels every five seconds. 
import tkinter as tk root = tk.Tk() canvas_frame = tk.Frame(root, bd=1, relief="sunken") statusbar = tk.Label(root, bd=1, relief="sunken") statusbar.pack(side="bottom", fill="x") canvas_frame.pack(fill="both", expand=True) canvas = tk.Canvas(canvas_frame, width=800, height=800, bd=0, highlightthickness=0) ysb = tk.Scrollbar(canvas_frame, orient="vertical", command=canvas.yview) xsb = tk.Scrollbar(canvas_frame, orient="horizontal", command=canvas.xview) canvas.configure(yscrollcommand=ysb.set, xscrollcommand=xsb.set) canvas_frame.grid_rowconfigure(0, weight=1) canvas_frame.grid_columnconfigure(0, weight=1) canvas.grid(row=0, column=0, sticky="nsew") ysb.grid(row=0, column=1, sticky="ns") xsb.grid(row=1, column=0, sticky="ew") for row in range(70): for column in range(10): x = column * 80 y = row * 80 canvas.create_rectangle(x, y, x+64, y+64, fill="gray") canvas.create_text(x+32, y+32, anchor="c", text=f"{row},{column}") canvas.configure(scrollregion=canvas.bbox("all")) def show_coords(): x0 = int(canvas.canvasx(0)) y0 = int(canvas.canvasy(0)) x1 = int(canvas.canvasx(canvas.winfo_width())) y1 = int(canvas.canvasy(canvas.winfo_height())) statusbar.configure(text=f"{x0},{y0} / {x1},{y1}") root.after(5000, show_coords) show_coords() root.mainloop()
Python TkInter: How can I get the canvas coordinates of the visible area of a scrollable canvas?
I need to find out what the visible coordinates of a vertically scrollable canvas are using python and tkinter. Let's assume I have a canvas that is 800 x 5000 pixels, and the visible, vertically scrollable window is 800x800. If I am scrolled all the way to the top of the canvas, I would like to have a function that, when run, would return something like: x=0 y=0, w=800 h=800 But if I were to scroll down and then run the function, I would get something like this: x=0 y=350 w=800 h=800 And if I resized the window vertically to, say, 1000, I would get: x=0 y=350 w=800 h=1000 I tried this code: self.canvas.update() print(f"X={self.canvas.canvasx(0)}") print(f"Y={self.canvas.canvasy(0)}") print(f"W={self.canvas.canvasx(self.canvas.winfo_width())}") print(f"H={self.canvas.canvasy(self.canvas.winfo_height())}") But it gives me the size of the whole canvas, not the visible window inside the canvas. I have tried searching for the answer, but am surprised not to have found anyone else with the same question. Perhaps I just don't have the right search terms. Context for anyone who cares: I am writing a thumbnail browser that is a grid of thumbnails. Since there may be thousands of thumbnails, I want to update the ones that are visible first, and then use a thread to update the remaining (hidden) thumbnails as needed.
[ "The methods canvasx and canvasy of the Canvas widget will convert screen pixels (ie: what's visible on the screen) into canvas pixels (the location in the larger virtual canvas).\nYou can feed it an x or y of zero to get the virtual pixel at the top-left of the visible window, and you can give it the width and height of the widget to get the pixel at the bottom-right of the visible window.\nx0 = canvas.canvasx(0)\ny0 = canvas.canvasy(0)\nx1 = canvas.canvasx(canvas.winfo_width())\ny1 = canvas.canvasy(canvas.winfo_height())\n\nThe canonical tcl/tk documentation says this about the canvasx method:\n\npathName canvasx screenx ?gridspacing?: Given a window x-coordinate in the canvas screenx, this command returns the canvas x-coordinate that is displayed at that location. If gridspacing is specified, then the canvas coordinate is rounded to the nearest multiple of gridspacing units.\n\n\nHere is a contrived program that will print the coordinates of the top-left and bottom right visible pixels every five seconds.\nimport tkinter as tk\n\nroot = tk.Tk()\ncanvas_frame = tk.Frame(root, bd=1, relief=\"sunken\")\nstatusbar = tk.Label(root, bd=1, relief=\"sunken\")\n\nstatusbar.pack(side=\"bottom\", fill=\"x\")\ncanvas_frame.pack(fill=\"both\", expand=True)\n\ncanvas = tk.Canvas(canvas_frame, width=800, height=800, bd=0, highlightthickness=0)\nysb = tk.Scrollbar(canvas_frame, orient=\"vertical\", command=canvas.yview)\nxsb = tk.Scrollbar(canvas_frame, orient=\"horizontal\", command=canvas.xview)\ncanvas.configure(yscrollcommand=ysb.set, xscrollcommand=xsb.set)\n\ncanvas_frame.grid_rowconfigure(0, weight=1)\ncanvas_frame.grid_columnconfigure(0, weight=1)\ncanvas.grid(row=0, column=0, sticky=\"nsew\")\nysb.grid(row=0, column=1, sticky=\"ns\")\nxsb.grid(row=1, column=0, sticky=\"ew\")\n\nfor row in range(70):\n for column in range(10):\n x = column * 80\n y = row * 80\n canvas.create_rectangle(x, y, x+64, y+64, fill=\"gray\")\n canvas.create_text(x+32, y+32, anchor=\"c\", text=f\"{row},{column}\")\n\ncanvas.configure(scrollregion=canvas.bbox(\"all\"))\n\ndef show_coords():\n x0 = int(canvas.canvasx(0))\n y0 = int(canvas.canvasy(0))\n x1 = int(canvas.canvasx(canvas.winfo_width()))\n y1 = int(canvas.canvasy(canvas.winfo_height()))\n statusbar.configure(text=f\"{x0},{y0} / {x1},{y1}\")\n root.after(5000, show_coords)\n\nshow_coords()\n\nroot.mainloop()\n\n" ]
[ 1 ]
[]
[]
[ "canvas", "python", "scroll", "tkinter", "visible" ]
stackoverflow_0074661864_canvas_python_scroll_tkinter_visible.txt
Q: Recursively iterate through Pydantic Model Let's say I have a model and I want to do some preprocessing on it. (For this problem it does not matter if this is a pydantic model or some kind of nested iterable; it's a general question.) def preprocess(string): # Applies some preprocessing and returns that string class OtherModel(BaseModel): other_id:int some_name: str class DummyModel(BaseModel): location_id: int other_models: List[OtherModel] name:str surname:str one_other_model : OtherModel I want to make a recursive function that will iterate through every attribute of a Model and run some preprocessing function on it. For example, that function could remove some letter from a string. I came this far and I don't know how to move further: from collections.abc import Iterable def preprocess_item(request: BaseModel) -> BaseModel: for attribute_key, attribute_value in request: if isinstance(attribute_value, str): setattr( request, attribute_key, _remove_html_tag(getattr(request, attribute_key)), ) elif isinstance(attribute_value, BaseModel): preprocess_item(attribute_value) elif isinstance(attribute_value, Iterable): for item in getattr(request,attribute_key): preprocess_item(item) This gives me the wrong answer, it basically unpacks every value. I want the same request object returned but with string fields preprocessed. A: If you are actually dealing with Pydantic models, I would argue this is one of the use cases for validators. There is not really any need for recursion because you can just define the validator on your own base model, if you want it to apply to all models (that inherit from it): from pydantic import BaseModel as PydanticBaseModel from pydantic import validator def process_string(string: str) -> str: return string.replace("a", "") class BaseModel(PydanticBaseModel): @validator("*", pre=True, each_item=True) def preprocess(cls, v: object) -> object: if isinstance(v, str): return process_string(v) return v class OtherModel(BaseModel): other_id: int some_name: str class DummyModel(BaseModel): location_id: int other_models: list[OtherModel] name: str surname: str one_other_model: OtherModel If you want to be more selective and apply the same validator to specific models, they can be made reusable as well: from pydantic import BaseModel, validator def preprocess(v: object) -> object: if isinstance(v, str): return v.replace("a", "") return v class OtherModel(BaseModel): other_id: int some_name: str _preprocess = validator("*", pre=True, allow_reuse=True)(preprocess) class DummyModel(BaseModel): location_id: int other_models: list[OtherModel] name: str surname: str one_other_model: OtherModel _preprocess = validator( "*", pre=True, each_item=True, allow_reuse=True, )(preprocess) class NotProcessed(BaseModel): field: str We can test both versions like this: if __name__ == "__main__": dummy = DummyModel.parse_obj({ "location_id": 1, "other_models": [ {"other_id": 1, "some_name": "foo"}, {"other_id": 2, "some_name": "spam"}, ], "name": "bar", "surname": "baz", "one_other_model": {"other_id": 2, "some_name": "eggs"}, }) print(dummy.json(indent=4)) The output in both cases is the same: { "location_id": 1, "other_models": [ { "other_id": 1, "some_name": "foo" }, { "other_id": 2, "some_name": "spm" } ], "name": "br", "surname": "bz", "one_other_model": { "other_id": 2, "some_name": "eggs" } }
Recursively iterate through Pydantic Model
Let's say I have a model and I want to do some preprocessing on it. (For this problem it does not matter if this is a pydantic model or some kind of nested iterable; it's a general question.) def preprocess(string): # Applies some preprocessing and returns that string class OtherModel(BaseModel): other_id:int some_name: str class DummyModel(BaseModel): location_id: int other_models: List[OtherModel] name:str surname:str one_other_model : OtherModel I want to make a recursive function that will iterate through every attribute of a Model and run some preprocessing function on it. For example, that function could remove some letter from a string. I came this far and I don't know how to move further: from collections.abc import Iterable def preprocess_item(request: BaseModel) -> BaseModel: for attribute_key, attribute_value in request: if isinstance(attribute_value, str): setattr( request, attribute_key, _remove_html_tag(getattr(request, attribute_key)), ) elif isinstance(attribute_value, BaseModel): preprocess_item(attribute_value) elif isinstance(attribute_value, Iterable): for item in getattr(request,attribute_key): preprocess_item(item) This gives me the wrong answer, it basically unpacks every value. I want the same request object returned but with string fields preprocessed.
[ "If you are actually dealing with Pydantic models, I would argue this is one of the use cases for validators.\nThere is not really any need for recursion because you can just define the validator on your own base model, if you want it to apply to all models (that inherit from it):\nfrom pydantic import BaseModel as PydanticBaseModel\nfrom pydantic import validator\n\n\ndef process_string(string: str) -> str:\n return string.replace(\"a\", \"\")\n\n\nclass BaseModel(PydanticBaseModel):\n @validator(\"*\", pre=True, each_item=True)\n def preprocess(cls, v: object) -> object:\n if isinstance(v, str):\n return process_string(v)\n return v\n\n\nclass OtherModel(BaseModel):\n other_id: int\n some_name: str\n\n\nclass DummyModel(BaseModel):\n location_id: int\n other_models: list[OtherModel]\n name: str\n surname: str\n one_other_model: OtherModel\n\nIf you want to be more selective and apply the same validator to specific models, they can be made reusable as well:\nfrom pydantic import BaseModel, validator\n\n\ndef preprocess(v: object) -> object:\n if isinstance(v, str):\n return v.replace(\"a\", \"\")\n return v\n\n\nclass OtherModel(BaseModel):\n other_id: int\n some_name: str\n\n _preprocess = validator(\"*\", pre=True, allow_reuse=True)(preprocess)\n\n\nclass DummyModel(BaseModel):\n location_id: int\n other_models: list[OtherModel]\n name: str\n surname: str\n one_other_model: OtherModel\n\n _preprocess = validator(\n \"*\",\n pre=True,\n each_item=True,\n allow_reuse=True,\n )(preprocess)\n\n\nclass NotProcessed(BaseModel):\n field: str\n\nWe can test both versions like this:\nif __name__ == \"__main__\":\n dummy = DummyModel.parse_obj({\n \"location_id\": 1,\n \"other_models\": [\n {\"other_id\": 1, \"some_name\": \"foo\"},\n {\"other_id\": 2, \"some_name\": \"spam\"},\n ],\n \"name\": \"bar\",\n \"surname\": \"baz\",\n \"one_other_model\": {\"other_id\": 2, \"some_name\": \"eggs\"},\n })\n print(dummy.json(indent=4))\n\nThe output in both cases is the same:\n{\n \"location_id\": 1,\n \"other_models\": [\n {\n \"other_id\": 1,\n \"some_name\": \"foo\"\n },\n {\n \"other_id\": 2,\n \"some_name\": \"spm\"\n }\n ],\n \"name\": \"br\",\n \"surname\": \"bz\",\n \"one_other_model\": {\n \"other_id\": 2,\n \"some_name\": \"eggs\"\n }\n}\n\n" ]
[ 2 ]
[]
[]
[ "pydantic", "python", "python_3.x", "recursion" ]
stackoverflow_0074657576_pydantic_python_python_3.x_recursion.txt
Q: Twisted sending files. python I'm trying to transfer images and other files over the network using Twisted. I use for this the class "FileSender" and in particular the method "beginFileTransfer", which I use on the server. But the file is not fully received by the client and I can't open it. At the same time, if I send a small file it comes. So the problem is the size. Can you tell me how I can send big files? Below is the server and client code: from twisted.internet.protocol import Protocol, connectionDone from twisted.python import failure from twisted.protocols.basic import FileSender from twisted.internet.protocol import Factory from twisted.internet.endpoints import TCP4ServerEndpoint from twisted.internet import reactor class TestServer(Protocol): def connectionMade(self): filesender = FileSender() f = open('00000.jpg', 'rb') filesender.beginFileTransfer(f, self.transport) def dataReceived(self, data: bytes): data = data.decode('UTF-8') print(data) def connectionLost(self, reason: failure.Failure = connectionDone): print("Server lost Connection") class QOTDFactory(Factory): def buildProtocol(self, addr): return TestServer() # 8007 is the port you want to run under. Choose something >1024 endpoint = TCP4ServerEndpoint(reactor, 8007, interface="127.0.0.1") endpoint.listen(QOTDFactory()) reactor.run() from twisted.internet.protocol import Protocol, ClientFactory, connectionDone from sys import stdout from twisted.protocols.basic import FileSender from twisted.python import failure class TestClient(Protocol): def connectionMade(self): print("Client did connection") def dataReceived(self, data): f = open('13.jpg', 'wb') f.write(data) self.transport.write("Client take connection".encode('UTF-8')) def connectionLost(self, reason: failure.Failure = connectionDone): print("Client lost Connection Protocol") class EchoClientFactory(ClientFactory): def startedConnecting(self, connector): print('Started to connect.') def buildProtocol(self, addr): print('Connected.') return TestClient() def clientConnectionLost(self, connector, reason): print('Lost connection factory. Reason:', reason) def clientConnectionFailed(self, connector, reason): print('Connection failed factory. Reason:', reason) from twisted.internet import reactor reactor.connectTCP('127.0.0.1', 8007, EchoClientFactory()) reactor.run() A: Twisted uses non-blocking socket operations: data written to or read from sockets are just enough to not block. Filesender in effect sends chunks of data until all are sent and you need to buffer them until they are complete. I would write the server part as: class TestServer(Protocol): def connectionMade(self): filesender = FileSender() f = open('00000.jpg', 'rb') d = filesender.beginFileTransfer(f, self.transport) d.addCallback(lambda _: self.transport.loseConnection()) # signals end of transmission Then the client as: class TestClient(Protocol): def __init__(self): self.f = open('13.jpg', 'wb') def connectionMade(self): print("Client did connection") def dataReceived(self, data): self.f.write(data) # I would buffer all the received data and # write them all at once for efficiencey # but this one will do def connectionLost(self, reason: failure.Failure = connectionDone): self.f.close() # server is done sending all chunks, close the file print("Client lost Connection Protocol") HTH
Twisted sending files. python
I'm trying to transfer images and other files over the network using Twisted. I use for this the class "FileSender" and in particular the method "beginFileTransfer", which I use on the server. But the file is not fully received by the client and I can't open it. At the same time, if I send a small file it comes. So the problem is the size. Can you tell me how I can send big files? Below is the server and client code: from twisted.internet.protocol import Protocol, connectionDone from twisted.python import failure from twisted.protocols.basic import FileSender from twisted.internet.protocol import Factory from twisted.internet.endpoints import TCP4ServerEndpoint from twisted.internet import reactor class TestServer(Protocol): def connectionMade(self): filesender = FileSender() f = open('00000.jpg', 'rb') filesender.beginFileTransfer(f, self.transport) def dataReceived(self, data: bytes): data = data.decode('UTF-8') print(data) def connectionLost(self, reason: failure.Failure = connectionDone): print("Server lost Connection") class QOTDFactory(Factory): def buildProtocol(self, addr): return TestServer() # 8007 is the port you want to run under. Choose something >1024 endpoint = TCP4ServerEndpoint(reactor, 8007, interface="127.0.0.1") endpoint.listen(QOTDFactory()) reactor.run() from twisted.internet.protocol import Protocol, ClientFactory, connectionDone from sys import stdout from twisted.protocols.basic import FileSender from twisted.python import failure class TestClient(Protocol): def connectionMade(self): print("Client did connection") def dataReceived(self, data): f = open('13.jpg', 'wb') f.write(data) self.transport.write("Client take connection".encode('UTF-8')) def connectionLost(self, reason: failure.Failure = connectionDone): print("Client lost Connection Protocol") class EchoClientFactory(ClientFactory): def startedConnecting(self, connector): print('Started to connect.') def buildProtocol(self, addr): print('Connected.') return TestClient() def clientConnectionLost(self, connector, reason): print('Lost connection factory. Reason:', reason) def clientConnectionFailed(self, connector, reason): print('Connection failed factory. Reason:', reason) from twisted.internet import reactor reactor.connectTCP('127.0.0.1', 8007, EchoClientFactory()) reactor.run()
[ "Twisted uses non-blocking socket operations: data written to or read from sockets are just enough to not block. Filesender in effect sends chunks of data until all are sent and you need to buffer them until they are complete.\nI would write the server part as:\nclass TestServer(Protocol):\n \n def connectionMade(self):\n filesender = FileSender()\n f = open('00000.jpg', 'rb')\n d = filesender.beginFileTransfer(f, self.transport)\n d.addCallback(lambda _: self.transport.loseConnection()) # signals end of transmission\n\nThen the client as:\nclass TestClient(Protocol):\n\n def __init__(self):\n self.f = open('13.jpg', 'wb')\n\n def connectionMade(self):\n print(\"Client did connection\")\n\n def dataReceived(self, data):\n self.f.write(data) # I would buffer all the received data and\n # write them all at once for efficiencey\n # but this one will do\n\n def connectionLost(self, reason: failure.Failure = connectionDone):\n self.f.close() # server is done sending all chunks, close the file\n print(\"Client lost Connection Protocol\")\n\nHTH\n" ]
[ 0 ]
[]
[]
[ "networking", "python", "twisted" ]
stackoverflow_0073391864_networking_python_twisted.txt
Q: Python's range() analog in Common Lisp How to create a list of consecutive numbers in Common Lisp? In other words, what is the equivalent of Python's range function in Common Lisp? In Python range(2, 10, 2) returns [2, 4, 6, 8], with first and last arguments being optional. I couldn't find the idiomatic way to create a sequence of numbers, though Emacs Lisp has number-sequence. Range could be emulated using loop macro, but i want to know the accepted way to generate a sequence of numbers with start and end points and step. Related: Analog of Python's range in Scheme A: There is no built-in way of generating a sequence of numbers, the canonical way of doing so is to do one of: Use loop Write a utility function that uses loop An example implementation would be (this only accepts counting "from low" to "high"): (defun range (max &key (min 0) (step 1)) (loop for n from min below max by step collect n)) This allows you to specify an (optional) minimum value and an (optional) step value. To generate odd numbers: (range 10 :min 1 :step 2) A: alexandria implements scheme's iota: (ql:quickload :alexandria) (alexandria:iota 4 :start 2 :step 2) ;; (2 4 6 8) A: Here's how I'd approach the problem: (defun generate (from to &optional (by 1)) #'(lambda (f) (when (< from to) (prog1 (or (funcall f from) t) (incf from by))))) (defmacro with-generator ((var from to &optional (by 1)) &body body) (let ((generator (gensym))) `(loop with ,generator = (generate ,from ,to ,by) while (funcall ,generator #'(lambda (,var) ,@body))))) (with-generator (i 1 10) (format t "~&i = ~s" i)) But this is just the general idea, there's a lot of room for improvement. OK, since there seems to be a discussion here. I've assumed that what is really needed is the analogue to Python's range generator function. Which, in certain sense generates a list of numbers, but does it so by yielding a number each iteration (so that it doesn't create more then one item at a time). Generators are a somewhat rare concept (few languages implement it), so I assumed that the mention of Python suggested that this exact feature is desired. Following some criticism of my example above, here's a different example that illustrates the reason to why a generator might be used rather then a simple loop. (defun generate (from to &optional (by 1)) #'(lambda () (when (< from to) (prog1 from (incf from by))))) (defmacro with-generator ((var generator &optional (exit-condition t)) &body body) (let ((g (gensym))) `(do ((,g ,generator)) (nil) (let ((,var (funcall ,g))) (when (or (null ,var) ,exit-condition) (return ,g)) ,@body)))) (let ((gen (with-generator (i (generate 1 10) (> i 4)) (format t "~&i = ~s" i)))) (format t "~&in the middle") (with-generator (j gen (> j 7)) (format t "~&j = ~s" j))) ;; i = 1 ;; i = 2 ;; i = 3 ;; i = 4 ;; in the middle ;; j = 6 ;; j = 7 This is, again, only an illustration of the purpose of this function. It is probably wasteful to use it for generating integers, even if you need to do that in two steps, but generators are best with parsers, when you want to yield a more complex object which is built based upon the previous state of the parser, for example, and a bunch of other things. 
Well, you can read an argument about it here: http://en.wikipedia.org/wiki/Generator_%28computer_programming%29 A: Using recursion: (defun range (min max &optional (step 1)) (when (<= min max) (cons min (range (+ min step) max step)))) A: In simple form specifying start, stop, step: (defun range (start stop step) (do ( (i start (+ i step)) (acc '() (push i acc))) ((>= i stop) (nreverse acc)))) A: You may want to try snakes: "Python style generators for Common Lisp. Includes a port of itertools." It is available in Quicklisp. There may be other Common Lisp libraries that can help. A: Not finding what I wanted nor wanting to use an external package, I ended up writing my own version which differs from the Python version (hopefully improving on it) and avoids loop. If you think it is really inefficient and can improve on it, please do. ;; A version of range taking the form (range [[first] last [[step]]]). ;; It takes negative numbers and corrects STEP to the same direction ;; as FIRST to LAST then returns a list starting from FIRST and ;; ending before LAST (defun range (&rest args) (case (length args) ( (0) '()) ( (1) (range 0 (car args) (if (minusp (car args)) -1 1))) ( (2) (range (car args) (cadr args) (if (>= (car args) (cadr args)) -1 1))) ( (3) (let* ((start (car args)) (end (cadr args)) (step (abs (caddr args)))) (if (>= end start) (do ((i start (+ i step)) (acc '() (push i acc))) ((>= i end) (nreverse acc))) (do ((i start (- i step)) (acc '() (push i acc))) ((<= i end) (nreverse acc)))))) (t (error "ERROR, too many arguments for range")))) ;; (range-inc [[first] last [[step]]] ) includes LAST in the returned range (defun range-inc (&rest args) (case (length args) ( (0) '()) ( (1) (append (range (car args)) args)) ( (2) (append (range (car args) (cadr args)) (cdr args))) ( (3) (append (range (car args) (cadr args) (caddr args)) (list (cadr args)))) (t (error "ERROR, too many arguments for range-inc")))) Note: I wrote a Scheme version as well A: Here is a range function to generate a list of numbers. We use the do "loop". If there is such a thing as a functional loop, then the do macro is it. Although there is no recursion, when you construct a do, I find the thinking is very similar. You consider each variable in the do in the same way you consider each argument in a recursive call. I use list* instead of cons. list* is exactly the same as cons except you can have 1, 2, or more arguments: (list* 1 2 3 4 nil) is the same as (cons 1 (cons 2 (cons 3 (cons 4 nil)))). (defun range (from-n to-n &optional (step 1)) ; step defaults to 1 (do ((n from-n (+ n step)) ; n initializes to from-n, increments by step (lst nil (list* n lst))) ; n "pushed" or "prepended" to lst ((> n to-n) ; the "recursion" termination condition (reverse lst)))) ; prepending with list* more efficient than using append ; however, need extra step to reverse lst so that ; numbers are in order Here is a test session: CL-USER 23 > (range 0 10) (0 1 2 3 4 5 6 7 8 9 10) CL-USER 24 > (range 10 0 -1) NIL CL-USER 25 > (range 10 0 1) NIL CL-USER 26 > (range 1 21 2) (1 3 5 7 9 11 13 15 17 19 21) CL-USER 27 > (reverse (range 1 21 2)) (21 19 17 15 13 11 9 7 5 3 1) CL-USER 28 > This version does not work for decreasing sequences. However, you see that you can use reverse to get a decreasing sequence. 
A: Needed to implement (range n) in a tiny Lisp that just had dotimes and setq available: (defun range (&rest args) (let ( (to '()) ) (cond ((= (length args) 1) (dotimes (i (car args)) (push i to))) ((= (length args) 2) (dotimes (i (- (cadr args) (car args))) (push (+ i (car args)) to)))) (nreverse to))) Example: > (range 10) (0 1 2 3 4 5 6 7 8 9) > (range 10 15) (10 11 12 13 14) A: Just in case, here is an analogue to user1969453's answer that returns a vector instead of a list: (defun seq (from to &optional (step 1)) (do ((acc (make-array 1 :adjustable t :fill-pointer 0)) (i from (+ i step))) ((> i to) acc) (vector-push-extend i acc))) Or, if you want to pre-allocate the vector and skip the 'vector-push' idiom: (defun seq2 (from to &optional (step 1)) (let ((size (+ 1 (floor (/ (- to from) step))))) ; size is 1 + floor((to - from) / step)) (do ((acc (make-array size)) (i from (+ i step)) (count 0 (1+ count))) ((> i to) acc) (setf (aref acc count) i))))
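Since the question is framed against Python's range, here is a small Python cross-check of the loop-based answer's semantics, under the stated assumption that that version is half-open like Python's (counting from :min below max):

def lisp_style_range(max_, min_=0, step=1):
    # Mirrors (range max :min m :step s): collect n from min below max by step.
    return list(range(min_, max_, step))

assert lisp_style_range(10, 1, 2) == [1, 3, 5, 7, 9]   # (range 10 :min 1 :step 2)
assert lisp_style_range(10) == list(range(10))
print(lisp_style_range(10, 2, 2))  # [2, 4, 6, 8], matching Python's range(2, 10, 2)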
Python's range() analog in Common Lisp
How to create a list of consecutive numbers in Common Lisp? In other words, what is the equivalent of Python's range function in Common Lisp? In Python range(2, 10, 2) returns [2, 4, 6, 8], with first and last arguments being optional. I couldn't find the idiomatic way to create a sequence of numbers, though Emacs Lisp has number-sequence. Range could be emulated using loop macro, but I want to know the accepted way to generate a sequence of numbers with start and end points and step. Related: Analog of Python's range in Scheme
[ "There is no built-in way of generating a sequence of numbers, the canonical way of doing so is to do one of:\n\nUse loop\nWrite a utility function that uses loop\n\nAn example implementation would be (this only accepts counting \"from low\" to \"high\"):\n(defun range (max &key (min 0) (step 1))\n (loop for n from min below max by step\n collect n))\n\nThis allows you to specify an (optional) minimum value and an (optional) step value.\nTo generate odd numbers: (range 10 :min 1 :step 2)\n", "alexandria implements scheme's iota:\n(ql:quickload :alexandria)\n(alexandria:iota 4 :start 2 :step 2)\n;; (2 4 6 8)\n\n", "Here's how I'd approach the problem:\n(defun generate (from to &optional (by 1))\n #'(lambda (f)\n (when (< from to)\n (prog1 (or (funcall f from) t)\n (incf from by)))))\n\n(defmacro with-generator ((var from to &optional (by 1)) &body body)\n (let ((generator (gensym)))\n `(loop with ,generator = (generate ,from ,to ,by)\n while\n (funcall ,generator\n #'(lambda (,var) ,@body)))))\n\n(with-generator (i 1 10)\n (format t \"~&i = ~s\" i))\n\nBut this is just the general idea, there's a lot of room for improvement.\n\nOK, since there seems to be a discussion here. I've assumed that what is really needed is the analogue to Python's range generator function. Which, in certain sense generates a list of numbers, but does it so by yielding a number each iteration (so that it doesn't create more then one item at a time). Generators are a somewhat rare concept (few languages implement it), so I assumed that the mention of Python suggested that this exact feature is desired.\nFollowing some criticism of my example above, here's a different example that illustrates the reason to why a generator might be used rather then a simple loop.\n(defun generate (from to &optional (by 1))\n #'(lambda ()\n (when (< from to)\n (prog1 from\n (incf from by)))))\n\n(defmacro with-generator\n ((var generator &optional (exit-condition t)) &body body)\n (let ((g (gensym)))\n `(do ((,g ,generator))\n (nil)\n (let ((,var (funcall ,g)))\n (when (or (null ,var) ,exit-condition)\n (return ,g))\n ,@body))))\n\n(let ((gen\n (with-generator (i (generate 1 10) (> i 4))\n (format t \"~&i = ~s\" i))))\n (format t \"~&in the middle\")\n (with-generator (j gen (> j 7))\n (format t \"~&j = ~s\" j)))\n\n;; i = 1\n;; i = 2\n;; i = 3\n;; i = 4\n;; in the middle\n;; j = 6\n;; j = 7\n\nThis is, again, only an illustration of the purpose of this function. It is probably wasteful to use it for generating integers, even if you need to do that in two steps, but generators are best with parsers, when you want to yield a more complex object which is built based upon the previous state of the parser, for example, and a bunch of other things. Well, you can read an argument about it here: http://en.wikipedia.org/wiki/Generator_%28computer_programming%29\n", "Using recursion:\n(defun range (min max &optional (step 1))\n (when (<= min max)\n (cons min (range (+ min step) max step))))\n\n", "In simple form specifying start, stop, step:\n(defun range (start stop step) \n (do (\n (i start (+ i step)) \n (acc '() (push i acc))) \n ((>= i stop) (nreverse acc))))\n\n", "You may want to try snakes:\n\"Python style generators for Common Lisp. Includes a port of itertools.\"\nIt is available in Quicklisp. There may be other Common Lisp libraries that can help.\n", "Not finding what I wanted nor wanting to use an external package, I ended up writing my own version which differs from the python version (hopefully improving on it) and avoids loop. 
If you think it is really inefficient and can improve on it, please do. \n;; A version of range taking the form (range [[first] last [[step]]]).\n;; It takes negative numbers and corrects STEP to the same direction\n;; as FIRST to LAST then returns a list starting from FIRST and\n;; ending before LAST\n(defun range (&rest args)\n (case (length args) \n ( (0) '()) \n ( (1) (range 0 (car args) (if (minusp (car args)) -1 1))) \n ( (2) (range (car args) (cadr args) \n (if (>= (car args) (cadr args)) -1 1))) \n ( (3) (let* ((start (car args)) (end (cadr args)) \n (step (abs (caddr args))))\n (if (>= end start)\n (do ((i start (+ i step))\n (acc '() (push i acc)))\n ((>= i end) (nreverse acc)))\n (do ((i start (- i step))\n (acc '() (push i acc)))\n ((<= i end) (nreverse acc))))))\n (t (error \"ERROR, too many arguments for range\"))))\n\n\n;; (range-inc [[first] last [[step]]] ) includes LAST in the returned range\n(defun range-inc (&rest args)\n (case (length args)\n ( (0) '())\n ( (1) (append (range (car args)) args))\n ( (2) (append (range (car args) (cadr args)) (cdr args)))\n ( (3) (append (range (car args) (cadr args) (caddr args))\n (list (cadr args))))\n (t (error \"ERROR, too many arguments for range-inc\"))))\n\nNote: I wrote a scheme version as well\n", "Here is a range function to generate a list of numbers.\nWe use the do \"loop\". If there is such a thing as a functional loop, then do macro is it. Although there is no recursion, when you construct a do, I find the thinking is very similar. You consider each variable in the do in the same way you consider each argument in a recursive call.\nI use list* instead of cons. list* is exactly the same as cons except you can have 1, 2, or more arguments. (list 1 2 3 4 nil) and (cons 1 (cons 2 (cons 3 (cons 4 nil)))).\n(defun range (from-n to-n &optional (step 1)) ; step defaults to 1\n (do ((n from-n (+ n step)) ; n initializes to from-n, increments by step\n (lst nil (list* n lst))) ; n \"pushed\" or \"prepended\" to lst\n\n ((> n to-n) ; the \"recursion\" termination condition\n (reverse lst)))) ; prepending with list* more efficient than using append\n ; however, need extra step to reverse lst so that\n ; numbers are in order\n\nHere is a test session:\n\nCL-USER 23 > (range 0 10)\n(0 1 2 3 4 5 6 7 8 9 10)\nCL-USER 24 > (range 10 0 -1)\nNIL\nCL-USER 25 > (range 10 0 1)\nNIL\nCL-USER 26 > (range 1 21 2)\n(1 3 5 7 9 11 13 15 17 19 21)\nCL-USER 27 > (reverse (range 1 21 2))\n(21 19 17 15 13 11 9 7 5 3 1)\nCL-USER 28 > \n\nThis version does not work for decreasing sequences. 
However, you see that you can use reverse to get a decreasing sequence.\n", "Needed to implement (range n) in a tiny Lisp that just had dotimes and setq available:\n(defun range (&rest args)\n (let ( (to '()) )\n (cond \n ((= (length args) 1) (dotimes (i (car args))\n (push i to)))\n ((= (length args) 2) (dotimes (i (- (cadr args) (car args)))\n (push (+ i (car args)) to))))\n (nreverse to)))\n\nExample:\n> (range 10)\n(0 1 2 3 4 5 6 7 8 9)\n\n> (range 10 15)\n(10 11 12 13 14)\n\n", "Just in case, here is an analogue to user1969453's answer that returns a vector instead of a list:\n(defun seq (from to &optional (step 1))\n (do ((acc (make-array 1 :adjustable t :fill-pointer 0))\n (i from (+ i step)))\n ((> i to) acc) (vector-push-extend i acc)))\n\nOr, if you want to pre-allocate the vector and skip the 'vector-push' idiom:\n(defun seq2 (from to &optional (step 1))\n (let ((size (+ 1 (floor (/ (- to from) step))))) ; size is 1 + floor((to - from) / step))\n (do ((acc (make-array size))\n (i from (+ i step))\n (count 0 (1+ count)))\n ((> i to) acc) (setf (aref acc count) i))))\n\n" ]
[ 41, 19, 6, 4, 2, 1, 1, 0, 0, 0 ]
[ "Recursive solution:\n(defun range(min max &optional (step 1))\n (if (> min max)\n ()\n (cons min (range (+ min step) max step))))\n\nExample:\n(range 1 10 3)\n(1 4 7 10)\n\n" ]
[ -1 ]
[ "common_lisp", "number_sequence", "python" ]
stackoverflow_0013937520_common_lisp_number_sequence_python.txt
Q: Get maximum rows from all subgroups with groupby method (Python) I have this data frame, inside of it I have 3 columns 'Region', 'State or Province', 'Sales' I already grouped by Regions and State or Province and wanted to get values in sales. But I want to get the maximum State from every Region! How can I get that? sales_by_state = df_n.groupby(['Region', 'State or Province'])['Sales'].sum() sales_by_state = sales_by_state.to_frame() sales_by_state A: To get the maximum value of sales for each region, you can use the 'idxmax()' function on the groupby object. This will return the index of the maximum value for each group, which you can then use to index into the original data frame to get the corresponding rows. Here is an example: # Get the maximum sales for each region max_sales = sales_by_state.groupby(level=0)['Sales'].idxmax() # Use the index of the maximum sales to index into the original data frame max_sales_by_state = df_n.loc[max_sales] This will return a new data frame containing the rows from the original data frame that correspond to the maximum sales for each region. You can then access the values in the 'State or Province' column to get the maximum state for each region. Alternatively, you can use the 'apply()' method on the groupby object to apply a custom function to each group. This function can return the state with the maximum sales for the group, which you can then use to create a new column in the data frame containing the maximum state for each region. Here is an example: # Define a custom function that returns the state with the maximum sales for a group def get_max_state(group): # Index into the group to get the state with the maximum sales return group.loc[group['Sales'].idxmax()]['State or Province'] # Apply the custom function to each group and create a new column with the results sales_by_state['Max State'] = sales_by_state.groupby(level=0).apply(get_max_state) This will add a new column to the 'sales_by_state' data frame containing the maximum state for each region.
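Here is a self-contained sketch of the idxmax route on toy data so the shapes are visible; the region and state values are invented for the example.

import pandas as pd

df_n = pd.DataFrame({
    'Region': ['East', 'East', 'West', 'West'],
    'State or Province': ['NY', 'MA', 'CA', 'WA'],
    'Sales': [100, 250, 300, 50],
})

sales_by_state = df_n.groupby(['Region', 'State or Province'])['Sales'].sum()
top = sales_by_state.groupby(level=0).idxmax()   # (Region, State) label of each max
print(sales_by_state.loc[top.tolist()])          # East/MA 250, West/CA 300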
Get maximum rows from all subgroups with groupby method (Python)
I have this data frame, inside of it I have 3 columns 'Region', 'State or Province', 'Sales' I already grouped by Regions and State or Province and wanted to get values in sales. But I want to get the maximum State from every Region! How can I get that? sales_by_state = df_n.groupby(['Region', 'State or Province'])['Sales'].sum() sales_by_state = sales_by_state.to_frame() sales_by_state
[ "To get the maximum value of sales for each region, you can use the 'idxmax()' function on the groupby object. This will return the index of the maximum value for each group, which you can then use to index into the original data frame to get the corresponding rows.\nHere is an example:\n# Get the maximum sales for each region\nmax_sales = sales_by_state.groupby(level=0)['Sales'].idxmax()\n\n# Use the index of the maximum sales to index into the original data frame\nmax_sales_by_state = df_n.loc[max_sales]\n\nThis will return a new data frame containing the rows from the original data frame that correspond to the maximum sales for each region. You can then access the values in the 'State or Province' column to get the maximum state for each region.\nAlternatively, you can use the 'apply()' method on the groupby object to apply a custom function to each group. This function can return the state with the maximum sales for the group, which you can then use to create a new column in the data frame containing the maximum state for each region.\nHere is an example:\n# Define a custom function that returns the state with the maximum sales for a group\ndef get_max_state(group):\n # Index into the group to get the state with the maximum sales\n return group.loc[group['Sales'].idxmax()]['State or Province']\n\n# Apply the custom function to each group and create a new column with the results\nsales_by_state['Max State'] = sales_by_state.groupby(level=0).apply(get_max_state)\n\nThis will add a new column to the 'sales_by_state' data frame containing the maximum state for each region.\n" ]
[ 1 ]
[]
[]
[ "dataframe", "group_by", "max", "pandas", "python" ]
stackoverflow_0074662271_dataframe_group_by_max_pandas_python.txt
Q: Python can't locate .so shared library with ctypes.CDLL - Windows I am trying to run a C function in Python. I followed examples online, and compiled the C source file into a .so shared library, and tried to pass it into the ctypes CDLL() initializer function. import ctypes cFile = ctypes.CDLL("libchess.so") At this point Python crashes with the message: Could not find module 'C:\Users\user\PycharmProjects\project\libchess.so' (or one of its dependencies). Try using the full path with constructor syntax. libchess.so is in the same directory as this Python file, so I don't see why there would be an issue finding it. I read some stuff about how shared libraries might be hidden from later versions of Python, but the suggested solutions I tried did not work. Most solutions were also referring to fixes involving Linux system environment variables, but I'm on Windows. Things I've tried that have not worked: changing "libchess.so" to "./libchess.so" or the full path using cdll.LoadLibrary() instead of CDLL() (apparently both do the same thing) adding the parent directory to system PATH variable putting os.add_dll_directory(os.getcwd()) in the code before trying to load the file Any more suggestions are appreciated. A: Solved: Detailed explanation here: https://stackoverflow.com/a/64472088/16044321 The issue is specific to how Python performs a DLL/SO search on Windows. While the ctypes docs do not specify this, the CDLL() function requires the optional argument winmode=0 to work correctly on Windows when loading a .dll or .so. This issue is also specific to Python versions greater than 3.8. Thus, simply changing the 2nd line to cFile = ctypes.CDLL("libchess.so", winmode=0) works as expected.
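A minimal sketch of the accepted fix, guarded for interpreters older than 3.8 (where CDLL has no winmode parameter); the library name is the question's own and is assumed to sit next to the script.

import ctypes
import os
import sys

lib_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "libchess.so")
if sys.version_info >= (3, 8):
    cFile = ctypes.CDLL(lib_path, winmode=0)  # relax the stricter Windows DLL search
else:
    cFile = ctypes.CDLL(lib_path)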
Python can't locate .so shared library with ctypes.CDLL - Windows
I am trying to run a C function in Python. I followed examples online, and compiled the C source file into a .so shared library, and tried to pass it into the ctypes CDLL() initializer function. import ctypes cFile = ctypes.CDLL("libchess.so") At this point Python crashes with the message: Could not find module 'C:\Users\user\PycharmProjects\project\libchess.so' (or one of its dependencies). Try using the full path with constructor syntax. libchess.so is in the same directory as this Python file, so I don't see why there would be an issue finding it. I read some stuff about how shared libraries might be hidden from later versions of Python, but the suggested solutions I tried did not work. Most solutions were also referring to fixes involving Linux system environment variables, but I'm on Windows. Things I've tried that have not worked: changing "libchess.so" to "./libchess.so" or the full path using cdll.LoadLibrary() instead of CDLL() (apparently both do the same thing) adding the parent directory to system PATH variable putting os.add_dll_directory(os.getcwd()) in the code before trying to load the file Any more suggestions are appreciated.
[ "Solved:\nDetailed explanation here: https://stackoverflow.com/a/64472088/16044321\nThe issue is specific to how Python performs a DLL/SO search on Windows. While the ctypes docs do not specify this, the CDLL() function requires the optional argument winmode=0 to work correctly on Windows when loading a .dll or .so. This issue is also specific to Python versions greater than 3.8.\nThus, simply changing the 2nd line to cFile = ctypes.CDLL(\"libchess.so\", winmode=0) works as expected.\n" ]
[ 0 ]
[]
[]
[ "ctypes", "python", "shared_libraries" ]
stackoverflow_0074655061_ctypes_python_shared_libraries.txt
Q: Overlapping Text in Animation in Python I'm making a Terror Attacks analysis using Python. And I wanted to make an animation. I made it, but I have a problem: the text above the animation overlaps in every frame. How can I fix it? fig = plt.figure(figsize = (7,4)) def animate(Year): ax = plt.axes() ax.clear() ax.set_title('Terrorism In Turkey\n'+ str(Year)) m5 = Basemap(projection='lcc',resolution='l' ,width=1800000, height=900000 ,lat_0=38.9637, lon_0=35.2433) lat_gif=list(terror_turkey[terror_turkey['Year']==Year].Latitude) long_gif=list(terror_turkey[terror_turkey['Year']==Year].Longitude) x_gif,y_gif=m5(long_gif,lat_gif) m5.scatter(x_gif, y_gif,s=[Death+Injured for Death,Injured in zip(terror_turkey[terror_turkey['Year']==Year].Death,terror_turkey[terror_turkey['Year']==Year].Injured)],color = 'r') m5.drawcoastlines() m5.drawcountries() m5.fillcontinents(color='coral',lake_color='aqua', zorder = 1,alpha=0.4) m5.drawmapboundary(fill_color='aqua') ani = animation.FuncAnimation(fig,animate, list(terror_turkey.Year.unique()), interval = 1500) ani.save('animation_tr.gif', writer='imagemagick', fps=1) plt.close(1) filename = 'animation_tr.gif' video = io.open(filename, 'r+b').read() encoded = base64.b64encode(video) HTML(data='''<img src="data:image/gif;base64,{0}" type="gif" />'''.format(encoded.decode('ascii'))) Output: A: @JohanC's Answer: Did you consider creating the axes the usual way, as in fig, ax = plt.subplots(figsize = (7,4)) (in the main code, not inside the animate function)? And leaving out the call to plt.axes()?
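Here is a stripped-down sketch of that suggestion with dummy scatter data, showing why the title no longer stacks: the Axes is created once, and ax.clear() wipes the previous frame's title before the new one is set.

import matplotlib.pyplot as plt
import matplotlib.animation as animation
import numpy as np

fig, ax = plt.subplots(figsize=(7, 4))  # create the Axes once, outside animate
years = [2000, 2001, 2002]

def animate(year):
    ax.clear()  # removes the old title and points
    ax.set_title('Terrorism In Turkey\n' + str(year))
    ax.scatter(np.random.rand(10), np.random.rand(10), color='r')

ani = animation.FuncAnimation(fig, animate, years, interval=1500)
plt.show()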
Overlapping Text in Animation in Python
I'm making a Terror Attacks analysis using Python. And I wanted to make an animation. I made it, but I have a problem: the text above the animation overlaps in every frame. How can I fix it? fig = plt.figure(figsize = (7,4)) def animate(Year): ax = plt.axes() ax.clear() ax.set_title('Terrorism In Turkey\n'+ str(Year)) m5 = Basemap(projection='lcc',resolution='l' ,width=1800000, height=900000 ,lat_0=38.9637, lon_0=35.2433) lat_gif=list(terror_turkey[terror_turkey['Year']==Year].Latitude) long_gif=list(terror_turkey[terror_turkey['Year']==Year].Longitude) x_gif,y_gif=m5(long_gif,lat_gif) m5.scatter(x_gif, y_gif,s=[Death+Injured for Death,Injured in zip(terror_turkey[terror_turkey['Year']==Year].Death,terror_turkey[terror_turkey['Year']==Year].Injured)],color = 'r') m5.drawcoastlines() m5.drawcountries() m5.fillcontinents(color='coral',lake_color='aqua', zorder = 1,alpha=0.4) m5.drawmapboundary(fill_color='aqua') ani = animation.FuncAnimation(fig,animate, list(terror_turkey.Year.unique()), interval = 1500) ani.save('animation_tr.gif', writer='imagemagick', fps=1) plt.close(1) filename = 'animation_tr.gif' video = io.open(filename, 'r+b').read() encoded = base64.b64encode(video) HTML(data='''<img src="data:image/gif;base64,{0}" type="gif" />'''.format(encoded.decode('ascii'))) Output:
[ "@JohanC's Answer:\nDid you consider creating the axes the usual way, as in fig, ax = plt.subplots(figsize = (7,4)) (in the main code, not inside the animate function)? And leaving out the call to plt.axes()?\n" ]
[ 0 ]
[]
[]
[ "matplotlib", "matplotlib_basemap", "python" ]
stackoverflow_0074640564_matplotlib_matplotlib_basemap_python.txt
Q: Sample python code provided by GCP - service variable undefined The following sample code is provided by GCP to use the restAPI to list out group membership when you provide the group_id. Code sample can be found here. I can run the sample directly from the URI given, but when trying to run it from Python with the sample code provided, my IDE intellisense says that service in the very last line is an undefined variable. I can find nothing in GCP to indicate what library this might come from or what I should replace it with. def search_transitive_memberships(service, parent, page_size): try: memberships = [] next_page_token = '' while True: query_params = urlencode( { "page_size": page_size, "page_token": next_page_token } ) request = service.groups().memberships().searchTransitiveMemberships(parent=parent) request.uri += "&" + query_params response = request.execute() if 'memberships' in response: memberships += response['memberships'] if 'nextPageToken' in response: next_page_token = response['nextPageToken'] else: next_page_token = '' if len(next_page_token) == 0: break; print(memberships) except Exception as e: print(e) # Return results with a page size of 50 search_transitive_memberships(service, 'groups/01234567abcdefg', 50) ## <- service undefined Appreciate assistance in identifying what I need to add to have service recognized. A: OK, it appears what is missing from the code samples provided by GCP are the steps to build and use a service object. Documentation on that can be found here: https://github.com/googleapis/google-api-python-client/blob/main/docs/start.md#building-and-calling-a-service So for my sample above the last line would actually be: service = build('cloudidentity', 'v1') search_transitive_memberships(service, 'groups/01234567abcdefg', 50) service.close() After importing build from googleapiclient.discovery Github link above also details how to provide oauth creds for it to work.
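Putting the answer together, a hedged end-to-end sketch might look like the following; the key-file name and scope are placeholders you would replace with your own.

from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    'service-account.json',  # hypothetical key file
    scopes=['https://www.googleapis.com/auth/cloud-identity.groups.readonly'],
)
service = build('cloudidentity', 'v1', credentials=creds)
try:
    search_transitive_memberships(service, 'groups/01234567abcdefg', 50)
finally:
    service.close()  # release the underlying HTTP resources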
Sample python code provided by GCP - service variable undefined
The following sample code is provided by GCP to use the restAPI to list out group membership when you provide the group_id. Code sample can be found here. I can run the sample directly from the URI given, but when trying to run it from Python with the sample code provided, my IDE intellisense says that service in the very last line is an undefined variable. I can find nothing in GCP to indicate what library this might come from or what I should replace it with. def search_transitive_memberships(service, parent, page_size): try: memberships = [] next_page_token = '' while True: query_params = urlencode( { "page_size": page_size, "page_token": next_page_token } ) request = service.groups().memberships().searchTransitiveMemberships(parent=parent) request.uri += "&" + query_params response = request.execute() if 'memberships' in response: memberships += response['memberships'] if 'nextPageToken' in response: next_page_token = response['nextPageToken'] else: next_page_token = '' if len(next_page_token) == 0: break; print(memberships) except Exception as e: print(e) # Return results with a page size of 50 search_transitive_memberships(service, 'groups/01234567abcdefg', 50) ## <- service undefined Appreciate assistance in identifying what I need to add to have service recognized.
[ "Ok, it appears what is missing from the code samples provided by GCP are the steps to build and use a service object.\nDocumentation on that can be found here: https://github.com/googleapis/google-api-python-client/blob/main/docs/start.md#building-and-calling-a-service\nSo for my sample above the last line would actually be:\nservice = build('cloudidentity', 'v1')\nsearch_transitive_memberships(service, 'groups/01234567abcdefg', 50)\nservice.close\n\nAfter importing build from googleapiclient.discovery\nGithub link above also details how to provide oauth creds for it to work.\n" ]
[ 1 ]
[]
[]
[ "gcloud", "google_cloud_platform", "google_iam", "python" ]
stackoverflow_0074662133_gcloud_google_cloud_platform_google_iam_python.txt
Q: Opening and Reading files in Python For some reason I am unable to open my .txt file within Python. I have the .py and .txt file within a folder. Both files are stored in Workspace -> Folder(Crash Course) -> Folder(Lessons) -> Folder(Ch 10) -> both files within this Ch 10 Folder. I am getting FileNotFoundError: [Errno 2] No such file or directory: 'pi_digits.txt' With the code: with open('pi_digits.txt') as file_object: contents = file_object.read() print(contents) A: This is less for the person that asked the question but more for people like myself that come here from Python Crash Course with the same question and don't get the answer they were looking for: If, like me, you were running the code from your text editor (in my case VS Code), it's possible that the terminal window within the editor wasn't in the proper directory. I didn't realize it myself, because I was thinking that because I opened the .py file from the correct working directory in the terminal that everything should work as planned. It wasn't until I realized that the terminal in the editor is a separate instance (thus making the present working directory home instead of my folder for PCC work) that I was able to get the program to run as intended. In short, navigate to the proper directory in your editor's terminal instance and the program should run as intended. Hope this helps! image with terminal open on desktop and in text editor to show working directory difference A: I used the full path of the file along with r, which is for a raw string. Worked for me. example: filename = **r**'C:\Python\CrashCourse\pi_digits.txt' with open(filename) as file_object: content = file_object.read() print(content) A: The path to the file is relative to where you run the Python file from, not from where the Python file is located. Either run your code from the same directory as the files, or make the file path absolute, based on the Python file's location. import os with open(os.path.join(os.path.dirname(__file__), 'pi_digits.txt')) as file_object: contents = file_object.read() print(contents) Hope that helps A: You can try getting the full path to the file import os dir_path = os.path.dirname(os.path.realpath(__file__)) pi_digits = os.path.join(dir_path, 'pi_digits.txt') with open(pi_digits, 'r') as file_object: print(file_object.read()) A: Try this: with open('c:\\Workspace\\Crash Course\\Lessons\\Ch 10\\pi_digits.txt') as file_object: contents = file_object.read() print(contents)
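The same idea as the os.path answers, written with pathlib for anyone on Python 3.4+; this is a sketch assuming pi_digits.txt sits beside the script.

from pathlib import Path

pi_path = Path(__file__).resolve().parent / 'pi_digits.txt'
with open(pi_path) as file_object:
    contents = file_object.read()
    print(contents)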
Opening and Reading files in Python
For some reason I am unable to open my .txt file within Python. I have the .py and .txt file within a folder. Both files are stored in Workspace -> Folder(Crash Course) -> Folder(Lessons) -> Folder(Ch 10) -> both files within this Ch 10 Folder. I am getting FileNotFoundError: [Errno 2] No such file or directory: 'pi_digits.txt' With the code: with open('pi_digits.txt') as file_object: contents = file_object.read() print(contents)
[ "This is less for the person that asked the question but more for people like myself that come here from Python Crash Course with the same question and don't get the answer they were looking for:\nIf, like me, you were running the code from your text editor (in my case VS Code), it's possible that the terminal window within the editor wasn't in the proper directory. I didn't realize myself, because I was thinking that because I opened the .py file from the correct working directory in the terminal that everything should work as planned. It wasn't until I realized that the terminal in the editor is a separate instance (thus making the present working directory home instead of my folder for PCC work) that I was able to get the program to run as intended.\nIn short, navigate to the proper directory in your editor's terminal instance and the program should run as intended.\nHope this helps!\nimage with terminal open on desktop and in text editor to show working directory difference\n", "I used full path of the file along with r, which is for raw string. Worked for me.\nexample:\nfilename = **r**'C:\\Python\\CrashCourse\\pi_digits.txt'\n\nwith open(filename) as file_object:\n content = file_object.read()\n print(content)\n\n", "The path to the file is relative to where you run the python file from, not from where the python file is located. \nEither run your code from the same directory as the files, or make the file path absolute, based on the python file's location.\nimport os\n\nwith open(os.path.join(os.path.dirname(__file__), 'pi_digits.txt')) as file_object:\n contents = file_object.read()\n print(contents)\n\nHope that helps\n", "You can try getting the full path to the file\nimport os\n\ndir_path = os.path.dirname(os.path.realpath(__file__))\npi_digits = os.path.join(dir_path, 'pi_digits.txt')\n\nwith open(pi_digits, r) as file_object:\n print(file_object.read())\n\n", "Try this:\nwith open('c:\\\\Workspace\\\\Crash Course\\\\Lessons\\\\Ch 10\\\\pi_digits.txt') as file_object:\n contents = file_object.read()\n print(contents)\n\n" ]
[ 2, 2, 0, 0, 0 ]
[ "You might have to enable \"Execute in file dir\"\nvscode setting\n", "The comment that Travis1797 posted is much better for VS code users that are just starting out with learning python.\nClick on the cog icon in the bottom left hand of corner of vscode\nThen click settings.\nThen type: execute in file dir\n(WARNING: this is only useful when calling upon files saved in the same directory as your python file using the method explained in the python crash course)\nTHE HIGHER VOTED POSTS WORK WITHOUT THE NEED TO CHANGE SETTINGS WITHIN VSCODE.\nClick the checkbox next to the wording thats states \"When executing a file...\" and run your code again and it works.\nPs(I spent over an hour trying to figure out what I was doing wrong and also solving it with the comments above but this was the most python elegant solution )\n" ]
[ -1, -1 ]
[ "python" ]
stackoverflow_0055695410_python.txt
Q: Exception has occurred: NoSuchElementException Message: no such element: Unable to locate element: I'm trying to make a script in Python that fills out the form on this website: (https://freesim.vodafone.co.uk/check-out-payasyougo-campaign) multiple times. However, I get this error when running the program: Exception has occurred: NoSuchElementException Message: no such element: Unable to locate element: from selenium import webdriver from webdriver_manager.chrome import ChromeDriverManager import time import random web = webdriver.Chrome() web.get('https://freesim.vodafone.co.uk/check-out-payasyougo-campaign') time.sleep(10) random_words = ["Adriel", "Anabelle", "Abagail", "Milo", "Raven", "Halle", "Max", "Collin", "Dane", "Jaylynn", "Micah"] FirstName = "Max" first = web.find_element("xpath", '//*[@id="txtFirstName"]/div[5]/div[5]/div/div/div[1]/div[1]/div/input') first.send_keys(FirstName) LastName = "Lombardo" last = web.find_element("xpath", '//*[@id="txtLastName"]/div[5]/div[5]/div/div/div[1]/div[2]/div/input') last.send_keys(LastName) Email = random.choice(random_words) + "@westlondonmail.xyz" emailpath = web.find_element("xpath", '/html/body/form/div[5]/div[5]/div/div/div[1]/div[3]/div/input') emailpath.send_keys(Email) I tried putting in the XPATH, the full XPATH, and none work Any ideas? Thank you A: It looks like you're using the webdriver.Chrome() syntax to create a new instance of the Chrome web driver, but this is incorrect. Instead, you need to use the webdriver.Chrome(ChromeDriverManager().install()) syntax to create a new instance of the Chrome web driver and automatically download and install the appropriate version of the Chrome driver. Try this code: from selenium import webdriver from webdriver_manager.chrome import ChromeDriverManager import time import random web = webdriver.Chrome(ChromeDriverManager().install()) web.get('https://freesim.vodafone.co.uk/check-out-payasyougo-campaign') time.sleep(10) random_words = ["Adriel", "Anabelle", "Abagail", "Milo", "Raven", "Halle", "Max", "Collin", "Dane", "Jaylynn", "Micah"] FirstName = "Max" first = web.find_element("xpath", '//*[@id="txtFirstName"]/div[5]/div[5]/div/div/div[1]/div[1]/div/input') first.send_keys(FirstName) LastName = "Lombardo" last = web.find_element("xpath", '//*[@id="txtLastName"]/div[5]/div[5]/div/div/div[1]/div[2]/div/input') last.send_keys(LastName) Email = random.choice(random_words) + "@westlondonmail.xyz" emailpath = web.find_element("xpath", '/html/body/form/div[5]/div[5]/div/div/div[1]/div[3]/div/input') emailpath.send_keys(Email)
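An alternative worth noting: NoSuchElementException often means the form has not rendered yet, so an explicit wait can be more reliable than time.sleep. A hedged sketch, reusing the question's own XPath (which may itself still need adjusting):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

web = webdriver.Chrome()
web.get('https://freesim.vodafone.co.uk/check-out-payasyougo-campaign')

wait = WebDriverWait(web, 20)  # poll up to 20 seconds for the field to appear
first = wait.until(EC.presence_of_element_located(
    (By.XPATH, '//*[@id="txtFirstName"]/div[5]/div[5]/div/div/div[1]/div[1]/div/input')))
first.send_keys("Max")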
Exception has occurred: NoSuchElementException Message: no such element: Unable to locate element:
I'm trying to make a script in Python that fills out the form on this website: (https://freesim.vodafone.co.uk/check-out-payasyougo-campaign) multiple times. However, I get this error when running the program: Exception has occurred: NoSuchElementException Message: no such element: Unable to locate element: from selenium import webdriver from webdriver_manager.chrome import ChromeDriverManager import time import random web = webdriver.Chrome() web.get('https://freesim.vodafone.co.uk/check-out-payasyougo-campaign') time.sleep(10) random_words = ["Adriel", "Anabelle", "Abagail", "Milo", "Raven", "Halle", "Max", "Collin", "Dane", "Jaylynn", "Micah"] FirstName = "Max" first = web.find_element("xpath", '//*[@id="txtFirstName"]/div[5]/div[5]/div/div/div[1]/div[1]/div/input') first.send_keys(FirstName) LastName = "Lombardo" last = web.find_element("xpath", '//*[@id="txtLastName"]/div[5]/div[5]/div/div/div[1]/div[2]/div/input') last.send_keys(LastName) Email = random.choice(random_words) + "@westlondonmail.xyz" emailpath = web.find_element("xpath", '/html/body/form/div[5]/div[5]/div/div/div[1]/div[3]/div/input') emailpath.send_keys(Email) I tried putting in the XPATH, the full XPATH, and none work Any ideas? Thank you
[ "It looks like you're using the webdriver.Chrome() syntax to create a new instance of the Chrome web driver, but this is incorrect. Instead, you need to use the webdriver.Chrome(ChromeDriverManager().install()) syntax to create a new instance of the Chrome web driver and automatically download and install the appropriate version of the Chrome driver.\nTry this code:\nfrom selenium import webdriver\nfrom webdriver_manager.chrome import ChromeDriverManager\nimport time\nimport random \n\nweb = webdriver.Chrome(ChromeDriverManager().install())\nweb.get('https://freesim.vodafone.co.uk/check-out-payasyougo-campaign')\n\ntime.sleep(10)\n\nrandom_words = [\"Adriel\", \"Anabelle\", \"Abagail\", \"Milo\", \"Raven\", \"Halle\", \"Max\", \"Collin\", \"Dane\", \"Jaylynn\", \"Micah\"]\n\nFirstName = \"Max\"\nfirst = web.find_element(\"xpath\", '//*[@id=\"txtFirstName\"]/div[5]/div[5]/div/div/div[1]/div[1]/div/input')\nfirst.send_keys(FirstName)\n\nLastName = \"Lombardo\"\nlast = web.find_element(\"xpath\", '//*[@id=\"txtLastName\"]/div[5]/div[5]/div/div/div[1]/div[2]/div/input')\nlast.send_keys(LastName)\n\nEmail = random.choice(random_words) + \"@westlondonmail.xyz\"\nemailpath = web.find_element(\"xpath\", '/html/body/form/div[5]/div[5]/div/div/div[1]/div[3]/div/input')\nemailpath.send_keys(Email)\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074662400_python.txt
Q: Using user input function in Classes I am making a simple game with multiple players, in which each player can insert their first name, last name and each player is assigned 100 points at the beginning. In my code I am done with coding the "essential" information, but when it comes to user input it does not work. The "base" for the player class: (this part works) class Players(): def __init__ (self, firstname, lastname, coins): #initialising attributes self.firstname = firstname self.lastname = lastname self.coins= coins def full_info(self): return self.firstname + self.lastname + self.coins This is the second part where the problem is, the input is not stored in the attributes def get_user_input(self): firstname= input("Please enter your first name:") lastname= input ("Please enter your second name: ") coins= 100 #they are assigned automatically return self(firstname, lastname, coins) I would appreciate any suggestions regarding the user input. A: Your function to build a new instance from user input should be a classmethod or a staticmethod, since you want to call it to create a new instance. I'd also suggest using @dataclass so you don't need to copy and paste all the variable names in __init__, and using an f-string in your full_info function so you don't hit an error when you try to add coins to the name strings. All together it might look like: from dataclasses import dataclass @dataclass class Player: firstname: str lastname: str coins: int def full_info(self) -> str: return f"{self.firstname} {self.lastname} {self.coins}" @classmethod def from_user_input(cls) -> 'Player': return cls( firstname=input("Please enter your first name:"), lastname=input("Please enter your second name: "), coins=100, ) Then you can call Player.from_user_input() to prompt the user for a name and return a new Player object: >>> player = Player.from_user_input() Please enter your first name:Bob Please enter your second name: Small >>> player Player(firstname='Bob', lastname='Small', coins=100) >>> player.full_info() 'Bob Small 100' A: If you want to stay close to your original code, a few changes will work: class Players(): def __init__ (self, firstname, lastname, coins): #initialising attributes self.firstname = firstname self.lastname = lastname self.coins= coins def full_info(self): return self.firstname + ' ' + self.lastname + ' ' + str(self.coins) def get_user_input(): firstname= input("Please enter your first name:") lastname= input ("Please enter your second name: ") coins= 100 #they are assigned automatically return Players(firstname, lastname, coins) An example: a=get_user_input() Please enter your first name:UN Please enter your second name: Owen a.full_info() Out[21]: 'UN Owen 100'
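For the "multiple players" part of the question, a short usage sketch on top of the standalone get_user_input() from the second answer; the player count here is arbitrary.

players = [get_user_input() for _ in range(2)]  # prompts twice, one player each
for p in players:
    print(p.full_info())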
Using user input function in Classes
I am making a simple game with multiple players, in which each player can insert their first name, last name and each player is assigned 100 points at the beginning. In my code I am done with coding the "essential" information, but when it comes to user input it does not work. The "base" for the player class: (this part works) class Players(): def __init__ (self, firstname, lastname, coins): #initialising attributes self.firstname = firstname self.lastname = lastname self.coins= coins def full_info(self): return self.firstname + self.lastname + self.coins This is the second part where the problem is, the input is not stored in the attributes def get_user_input(self): firstname= input("Please enter your first name:") lastname= input ("Please enter your second name: ") coins= 100 #they are assigned automatically return self(firstname, lastname, coins) I would appreciate any suggestions regarding the user input.
[ "Your function to build a new instance from user input should be a classmethod or a staticmethod, since you want to call it to create a new instance.\nI'd also suggest using @dataclass so you don't need to copy and paste all the variable names in __init__, and using an f-string in your full_info function so you don't hit an error when you try to add coins to the name strings.\nAll together it might look like:\nfrom dataclasses import dataclass\n\n@dataclass\nclass Player: \n firstname: str\n lastname: str\n coins: int\n\n def full_info(self) -> str:\n return f\"{self.firstname} {self.lastname} {self.coins}\"\n\n @classmethod\n def from_user_input(cls) -> 'Player':\n return cls(\n firstname=input(\"Please enter your first name:\"),\n lastname=input(\"Please enter your second name: \"),\n coins=100,\n )\n\nThen you can call Player.from_user_input() to prompt the user for a name and return a new Player object:\n>>> player = Player.from_user_input()\nPlease enter your first name:Bob\nPlease enter your second name: Small\n>>> player\nPlayer(firstname='Bob', lastname='Small', coins=100)\n>>> player.full_info()\n'Bob Small 100'\n\n", "If you want to stay close to your original code, a few changes will work:\nclass Players(): \n def __init__ (self, firstname, lastname, coins): #initialising attributes\n self.firstname = firstname \n self.lastname = lastname\n self.coins= coins\n \n def full_info(self):\n return self.firstname + ' ' + self.lastname + ' ' + str(self.coins)\n \ndef get_user_input():\n firstname= input(\"Please enter your first name:\")\n lastname= input (\"Please enter your second name: \")\n coins= 100 #they are assigned automatically \n return Players(firstname, lastname, coins)\n\nAn example:\na=get_user_input()\n\nPlease enter your first name:UN\n\nPlease enter your second name: Owen\n\na.full_info()\nOut[21]: 'UN Owen 100'\n\n" ]
[ 2, 0 ]
[]
[]
[ "input", "object", "oop", "python" ]
stackoverflow_0074662334_input_object_oop_python.txt
Q: Find new blobs comparing two different binary images I have two images taken of the same sample at t=0 and t=t. There are a few new blobs present in the image taken at t. I need to find these new blobs (new blobs are the blobs which are present in a new XY location at t=t). I am wondering if someone can help? I tried OR, AND, XOR, and reconstructions, but the issue is that the blobs which are the same between the two images are not exactly the same. Sometimes they might have a size difference, which makes the problem complicated. Image at t=0 Image at t=t A: Instead of using OR, AND, XOR, we may sum the two images. Before summing the images, replace the 255 values with 100 (keeping the range of uint8 [0, 255]). In the summed image, there are going to be three values: 0 - Background 100 - Non-overlapping area 200 - Overlapping area We may assume that pixels with value 100 that touch value 200 belong to the same original blob. For clearing the overlapping pixels (200) with the touching pixels (100 around them), we may use cv2.floodFill. After clearing the overlapping pixels and the pixels around them, the pixels that are left (with value 100) are the new blobs. Example for clearing the pixels using cv2.floodFill: if sum_img[y, x] == 200: cv2.floodFill(sum_img, None, (x, y), 0, loDiff=100, upDiff=0) Setting loDiff=100 is used for filling pixels=100 (and pixels=200) with 0 value (200-loDiff=100, so the 100 is filled with zero). For making the solution better, we may find contours (of pixels=200), and ignore the tiny contours. Code sample: import cv2 import numpy as np # Read input images as Grayscale. img1 = cv2.imread('image1.png', cv2.IMREAD_GRAYSCALE) img2 = cv2.imread('image2.png', cv2.IMREAD_GRAYSCALE) # Replace 255 with 100 (we want the sum img1+img2 not to overflow) img1[img1 >= 128] = 100 img2[img2 >= 128] = 100 # Sum two images - in the sum, the value of overlapping parts of blobs is going to be 200 sum_img = img1 + img2 cv2.floodFill(sum_img, None, (0, 0), 0, loDiff=0, upDiff=0) # Remove the white frame. cv2.imshow('sum_img before floodFill', sum_img) # Show image for testing. # Find pixels with value 200 (the overlapping blobs). thesh = cv2.threshold(sum_img, 199, 255, cv2.THRESH_BINARY)[1] # Find contours (of overlapping blobs parts) cnts = cv2.findContours(thesh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0] # Iterate contours and fill the overlapping part, and the non-zero pixels around it (i.e. with value 100) with zero. for c in cnts: area_tresh = 50 area = cv2.contourArea(c) if area > area_tresh: # Ignore very tiny contours x, y = tuple(c[0, 0]) # Get coordinates of first pixel in the contour if sum_img[y, x] == 200: cv2.floodFill(sum_img, None, (x, y), 0, loDiff=100, upDiff=0) # Setting loDiff=100 is set for filling pixels=100 (and pixels=200) sum_img[sum_img == 200] = 0 # Remove the small remainders #thesh = cv2.cvtColor(thesh, cv2.COLOR_GRAY2BGR) # Convert to BGR for testing (for drawing the contours) #cv2.drawContours(thesh, cnts, -1, (0, 255, 0), 2) # Draw contours for testing # Show images for testing. cv2.imshow('thesh', thesh) cv2.imshow('sum_img after floodFill', sum_img) cv2.waitKey() cv2.destroyAllWindows() Note: We may dilate the images first, if the two blobs in proximity are considered to be the same blob (I don't know if a blob can "swim") Output sum_img (after floodFill): Update: The above solution finds the blobs that exist in image1 and not in image2 and blobs that exist in image2 and not in image1. 
In case we want to find only the blobs that are new in image2, and we also assume that blobs that are close in both images are the same one, we may add the following stages: Dilate img1 and img2 before summing (two close blobs are going to be overlapping). Remove the dilated pixels from sum_img at the end. Remove from sum_img all the blobs that exist only in img1 (and not in img2). Code sample: import cv2 import numpy as np # Read input images as Grayscale. img1 = cv2.imread('image1.png', cv2.IMREAD_GRAYSCALE) img2 = cv2.imread('image2.png', cv2.IMREAD_GRAYSCALE) # Replace 255 with 100 (we want the sum img1+img2 not to overflow) img1[img1 >= 128] = 100 img2[img2 >= 128] = 100 # Dilate both images - assume close blobs are the same blob (two blobs are considered overlapped even if they are close but not touching). dilated_img1 = cv2.dilate(img1, np.ones((11, 11), np.uint8)) dilated_img2 = cv2.dilate(img2, np.ones((11, 11), np.uint8)) # Sum two images - in the sum, the value of overlapping parts of blobs is going to be 200 sum_img = dilated_img1 + dilated_img2 cv2.floodFill(sum_img, None, (0, 0), 0, loDiff=0, upDiff=0) # Remove the white frame. #cv2.imshow('sum_img before floodFill', sum_img) # Show image for testing. # Find pixels with value 200 (the overlapping blobs). thesh = cv2.threshold(sum_img, 199, 255, cv2.THRESH_BINARY)[1] # Find contours (of overlapping blobs parts) cnts = cv2.findContours(thesh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0] # Iterate contours and fill the overlapping part, and the non-zero pixels around it (i.e. with value 100) with zero. for c in cnts: area_tresh = 0 # Optional area = cv2.contourArea(c) if area > area_tresh: # Ignore very tiny contours x, y = tuple(c[0, 0]) # Get coordinates of first pixel in the contour if sum_img[y, x] == 200: cv2.floodFill(sum_img, None, (x, y), 0, loDiff=100, upDiff=0) # Setting loDiff=100 is set for filling pixels=100 (and pixels=200) sum_img[sum_img == 200] = 0 # Remove the small remainders sum_img[(img1 == 0) & (dilated_img1 == 100)] = 0 # Remove dilated pixels from dilated_img1 sum_img[(img2 == 0) & (dilated_img2 == 100)] = 0 # Remove dilated pixels from dilated_img2 sum_img[(img1 == 100) & (img2 == 0)] = 0 # Remove all the blobs that are only in the first image (assume new blobs are "born" only in image2) # Visualization: merged_img = cv2.merge((sum_img*2, img1*2, img2*2)) # The output image is img1, without the output_image = img1.copy() output_image[sum_img == 100] = 0 # Show images for testing. cv2.imshow('sum_img', sum_img) cv2.imshow('merged_img', merged_img) cv2.waitKey() cv2.destroyAllWindows() Output (sum_img*2): Visualization for testing (merged_img): Green - exist only in img1 Yellow - exist both in img1 and img2 Magenta - exist only in img2, and not too close to blob in img1 (we are looking for the magenta blobs).
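To sanity-check the 100/200 summing idea without the input images, here is a toy sketch on synthetic masks; as a simpler stand-in for the floodFill step, it drops 100-pixels that touch an overlap by dilating the overlap mask.

import cv2
import numpy as np

img1 = np.zeros((10, 10), np.uint8)
img1[2:4, 2:4] = 100                 # blob present at t=0
img2 = img1.copy()
img2[6:8, 6:8] = 100                 # same blob plus a new one at t=t

sum_img = img1 + img2                # 200 = overlap, 100 = one image only
overlap = (sum_img == 200).astype(np.uint8)
near_overlap = cv2.dilate(overlap, np.ones((3, 3), np.uint8)) > 0
new_blobs = (sum_img == 100) & ~near_overlap
print(np.argwhere(new_blobs))        # coordinates of the new blob only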
Find new blobs comparing two different binary images
I have two images taken of the same sample at t=0 and t=t. There are a few new blobs present in the image taken at t. I need to find these new blobs (new blobs are the blobs which are present in a new XY location at t=t). I am wondering if someone can help? I tried OR, AND, XOR, and reconstructions, but the issue is that the blobs which are the same between the two images are not exactly the same. Sometimes they might have a size difference, which makes the problem complicated. Image at t=0 Image at t=t
[ "Instead of using OR,AND,XOR, we may sum the two images.\nBefore summing the images, replace the 255 values with 100 (keeping the range of uint8 [0, 255]).\nIn the summed image, there are going to be three values:\n\n0 - Background\n100 - Non-overlapping area\n200 - Overlapping area\n\nWe may assume that pixels with value 100 that touches value 200 belongs to the same original blob.\nFor clearing the overlapping pixels (200) with the touching pixels (100 around them), we may use cv2.floodFill.\nAfter clearing the overlapping pixels and the pixels around them, the pixels that are left (with value 100) are the new blobs.\nExample for clearing the pixels using cv2.floodFill:\nif sum_img[y, x] == 200:\n cv2.floodFill(sum_img, None, (x, y), 0, loDiff=100, upDiff=0)\n\nSetting loDiff=100 is used for filling pixels=100 (and pixels=200) with 0 value (200-loDiff=100, so the 100 is filled with zero).\nFor making the solution better, we may find contours (of pixels=200), and ignore the tiny contours.\n\nCode sample:\nimport cv2\nimport numpy as np\n\n# Read input images as Grayscale.\nimg1 = cv2.imread('image1.png', cv2.IMREAD_GRAYSCALE)\nimg2 = cv2.imread('image2.png', cv2.IMREAD_GRAYSCALE)\n\n# Replace 255 with 100 (we want the sum img1+img2 not to overflow)\nimg1[img1 >= 128] = 100\nimg2[img2 >= 128] = 100\n\n# Sum two images - in the sum, the value of overlapping parts of blobs is going to be 200\nsum_img = img1 + img2\n\ncv2.floodFill(sum_img, None, (0, 0), 0, loDiff=0, upDiff=0) # Remove the white frame.\n\ncv2.imshow('sum_img before floodFill', sum_img) # Show image for testing.\n\n# Find pixels with value 200 (the overlapping blobs).\nthesh = cv2.threshold(sum_img, 199, 255, cv2.THRESH_BINARY)[1]\n\n# Find contours (of overlapping blobs parts)\ncnts = cv2.findContours(thesh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0]\n\n\n# Iterate contours and fill the overlapping part, and the non-zero pixels around it (i.e with value 100) with zero.\nfor c in cnts:\n area_tresh = 50\n area = cv2.contourArea(c)\n if area > area_tresh: # Ignore very tiny contours\n x, y = tuple(c[0, 0]) # Get coordinates of first pixel in the contour\n if sum_img[y, x] == 200:\n cv2.floodFill(sum_img, None, (x, y), 0, loDiff=100, upDiff=0) # Setting loDiff=100 is set for filling pixels=100 (and pixels=200)\n\nsum_img[sum_img == 200] = 0 # Remove the small remainders\n\n#thesh = cv2.cvtColor(thesh, cv2.COLOR_GRAY2BGR) # Convert to BGR for testing (for drawing the contours)\n#cv2.drawContours(thesh, cnts, -1, (0, 255, 0), 2) # Draw contours for testing\n\n# Show images for testing.\ncv2.imshow('thesh', thesh)\ncv2.imshow('sum_img after floodFill', sum_img)\ncv2.waitKey()\ncv2.destroyAllWindows()\n\n\nNote:\nWe may dilate the images first, if the two blobs in proximity are considered to be the same blob (I don't no if a blob can \"swim\")\nOutput sum_img (after floodFill):\n\n\nUpdate:\nThe above solution finds the blobs that exist in image1 and not in image2 and blobs exist in image2 and not in image1.\nIn case we want to find only the blobs that are new in image2, and we also assume that blobs that are close in both images are the same one, we may add the following stages:\n\nDilate img1 and img2 before summing (two close blobs are going to be overlapping).\nRemove the dilated pixels from sum_img at the end.\nRemove from sum_img all the blobs that exist only in img1 (and not in img2).\n\n\nCode sample:\nimport cv2\nimport numpy as np\n\n# Read input images as Grayscale.\nimg1 = cv2.imread('image1.png', 
cv2.IMREAD_GRAYSCALE)\nimg2 = cv2.imread('image2.png', cv2.IMREAD_GRAYSCALE)\n\n# Replace 255 with 100 (we want the sum img1+img2 not to overflow)\nimg1[img1 >= 128] = 100\nimg2[img2 >= 128] = 100\n\n# Dilate both images - assume close blobs are the same blob (two blobs are considered overlapped even if they are close but not tuching).\ndilated_img1 = cv2.dilate(img1, np.ones((11, 11), np.uint8))\ndilated_img2 = cv2.dilate(img2, np.ones((11, 11), np.uint8))\n\n# Sum two images - in the sum, the value of overlapping parts of blobs is going to be 200\nsum_img = dilated_img1 + dilated_img2\n\ncv2.floodFill(sum_img, None, (0, 0), 0, loDiff=0, upDiff=0) # Remove the white frame.\n\n#cv2.imshow('sum_img before floodFill', sum_img) # Show image for testing.\n\n# Find pixels with value 200 (the overlapping blobs).\nthesh = cv2.threshold(sum_img, 199, 255, cv2.THRESH_BINARY)[1]\n\n# Find contours (of overlapping blobs parts)\ncnts = cv2.findContours(thesh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0]\n\n\n# Iterate contours and fill the overlapping part, and the non-zero pixels around it (i.e with value 100) with zero.\nfor c in cnts:\n area_tresh = 0 # Optional\n area = cv2.contourArea(c)\n if area > area_tresh: # Ignore very tiny contours\n x, y = tuple(c[0, 0]) # Get coordinates of first pixel in the contour\n if sum_img[y, x] == 200:\n cv2.floodFill(sum_img, None, (x, y), 0, loDiff=100, upDiff=0) # Setting loDiff=100 is set for filling pixels=100 (and pixels=200)\n\nsum_img[sum_img == 200] = 0 # Remove the small remainders\n\nsum_img[(img1 == 0) & (dilated_img1 == 100)] = 0 # Remove dilated pixels from dilated_img1\nsum_img[(img2 == 0) & (dilated_img2 == 100)] = 0 # Remove dilated pixels from dilated_img2\nsum_img[(img1 == 100) & (img2 == 0)] = 0 # Remove all the blobs that are only in first image (assume new blobs are \"bored\" only in image2)\n\n# Visualization:\nmerged_img = cv2.merge((sum_img*2, img1*2, img2*2))\n\n# The output image is img1, without the \noutput_image = img1.copy()\noutput_image[sum_img == 100] = 0\n\n# Show images for testing.\ncv2.imshow('sum_img', sum_img)\ncv2.imshow('merged_img', merged_img)\ncv2.waitKey()\ncv2.destroyAllWindows()\n\n\nOutput (sum_img*2):\n\nVisualization for testing (merged_img):\n\nGreen - exist only in img1\nYellow - exist both in img1 and img2\nMagenta - exist only in img2, and not too close to blob in img1 (we are looking for the magenta blobs).\n\n\n" ]
[ 3 ]
[]
[]
[ "computer_vision", "matlab", "object_tracking", "opencv", "python" ]
stackoverflow_0074657074_computer_vision_matlab_object_tracking_opencv_python.txt
Q: How to count number of occurrences per day over a large data set? I have a dataset that looks something like this but much larger, over 1000 unique products: | Hour | Date || Pallet ID| PRODUCT || Move Type| | -------- | -------- || -------- | -------- || -------- | | 1 PM | 10/01 || 101 | Shoes || Storage | | 1 PM | 10/01 || 202 | Pants || Load | | 1 PM | 10/01 || 101 | Shoes || Storage | | 1 PM | 10/01 || 101 | Shoes || Load | | 1 PM | 10/01 || 202 | Pants || Storage | | 3 PM | 10/01 || 202 | Pants || Storage | | 3 PM | 10/01 || 101 | Shoes || Load | | 3 PM | 10/01 || 202 | Pants || Storage | What I want to do is create a new table that looks like this: | Hour | Date || Pallet ID| PRODUCT || Move Type| Total Moves | | -------- | -------- || -------- | -------- || -------- | -------- | | 1 PM | 10/01 || 101 | Shoes || Storage | 2 | | 1 PM | 10/01 || 101 | Shoes || Load | 1 | | 1 PM | 10/01 || 202 | Pants || Load | 1 | | 1 PM | 10/01 || 202 | Pants || Storage | 1 | | 3 PM | 10/01 || 101 | Shoes || Load | 1 | | 3 PM | 10/01 || 202 | Pants || Storage | 2 | Here is my attempt at doing this. This cannot be the correct way, as it takes hours to run to completion. Is there any way of doing this better than I am currently? listy = df['PROD_CODE'].unique().tolist() calc_df = pd.DataFrame() count = 0 for x in listy: new_df = df.loc[df['PROD_CODE'] == x] dates = new_df['Date'].unique().tolist() count = count + 1 print(f'{count} / {len(listy)} loops have been completed') for z in dates: dates_df = new_df[new_df['Date'] == z] hours = new_df['Hour'].unique().tolist() for h in hours: hours_df = dates_df.loc[new_df['Hour'] == h] hours_df[['Hour','Date','PALLET_ID','PROD_CODE','CASE_QTY','Move Type']] hours_df['Total Moves'] = hours_df.groupby('Move Type')['Move Type'].transform('count') calc_df = calc_df.append(hours_df,ignore_index=False) A: You should be able to use df.groupby() with .size() to get the counts for moves of the same date/time/pallet id/product/move type. df.groupby(['Hour','Date','PALLET_ID','PROD_CODE','CASE_QTY','Move Type']).size().reset_index(name='Total Moves') Source: Get statistics for each group (such as count, mean, etc) using pandas GroupBy?
How to count number of occurrences per day over a large data set?
I have a dataset that looks something like this but much larger, over 1000 unique products: | Hour | Date || Pallet ID| PRODUCT || Move Type| | -------- | -------- || -------- | -------- || -------- | | 1 PM | 10/01 || 101 | Shoes || Storage | | 1 PM | 10/01 || 202 | Pants || Load | | 1 PM | 10/01 || 101 | Shoes || Storage | | 1 PM | 10/01 || 101 | Shoes || Load | | 1 PM | 10/01 || 202 | Pants || Storage | | 3 PM | 10/01 || 202 | Pants || Storage | | 3 PM | 10/01 || 101 | Shoes || Load | | 3 PM | 10/01 || 202 | Pants || Storage | What I want to do is create a new table that looks like this: | Hour | Date || Pallet ID| PRODUCT || Move Type| Total Moves | | -------- | -------- || -------- | -------- || -------- | -------- | | 1 PM | 10/01 || 101 | Shoes || Storage | 2 | | 1 PM | 10/01 || 101 | Shoes || Load | 1 | | 1 PM | 10/01 || 202 | Pants || Load | 1 | | 1 PM | 10/01 || 202 | Pants || Storage | 1 | | 3 PM | 10/01 || 101 | Shoes || Load | 1 | | 3 PM | 10/01 || 202 | Pants || Storage | 2 | Here is my attempt at doing this. This cannot be the correct way, as it takes hours to run to completion. Is there any way of doing this better than I am currently? listy = df['PROD_CODE'].unique().tolist() calc_df = pd.DataFrame() count = 0 for x in listy: new_df = df.loc[df['PROD_CODE'] == x] dates = new_df['Date'].unique().tolist() count = count + 1 print(f'{count} / {len(listy)} loops have been completed') for z in dates: dates_df = new_df[new_df['Date'] == z] hours = new_df['Hour'].unique().tolist() for h in hours: hours_df = dates_df.loc[new_df['Hour'] == h] hours_df[['Hour','Date','PALLET_ID','PROD_CODE','CASE_QTY','Move Type']] hours_df['Total Moves'] = hours_df.groupby('Move Type')['Move Type'].transform('count') calc_df = calc_df.append(hours_df,ignore_index=False)
[ "You should be able to use df.groupby() with .size() to get the counts for moves of the same date/time/pallet id/product/move type.\ndf.groupby(['Hour','Date','PALLET_ID','PROD_CODE','CASE_QTY','Move Type']).size().reset_index(name='Total Moves')\n\nSource: Get statistics for each group (such as count, mean, etc) using pandas GroupBy?\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074662368_python.txt
Q: python2 can't find a file in the current folder Given the following directory contents: [jzun@hscd8a25e93f9vm dates]$ pwd /home/jzun/vivo_mod_samples/dates [jzun@hscd8a25e93f9vm dates]$ ls date_def.json dates_add.rdf dates.bak dates.rdf dates_sub.rdf dates.txt datetime_precision_enum.txt gen_date_rdf.py gen_date_rdf.py.bak gen_dates.py gen_dates.py.bak get.txt README.md run_pump_2_to_create_date_defs.sh sv.cfg I have the following python2 function: def read_csv(filename, skip=True, delimiter='|'): """ Read a CSV file, return dictionary object :param filename: name of file to read :param skip: should lines with invalid number of columns be skipped? False=Throw Exception :param delimiter: The delimiter for CSV files :return: Dictionary object """ cwd = os.getcwd() print("read_csv>current dir = " + cwd) # fp = open(filename, 'rU') # print(fp) # data = read_csv_fp(fp, skip, delimiter) # fp.close() with open(filename, 'rU') as fp: data = read_csv_fp(fp, skip, delimiter) fp.close() return data After running it with filename = dates.txt I get the following result: read_csv>current dir = /home/jzun/vivo_mod_samples/dates dates.txt file not found I know similar questions have been posted but interestingly I cannot find anything that could help me to solve this problem. Any ideas? A: Try passing the full path of the file dates.txt: cwd = os.getcwd() file_name = "dates.txt" file_path = os.path.join(cwd, file_name) # Double check that the file exists assert os.path.isfile(file_path) is True with open(file_path, 'rU') as fp: data = read_csv_fp(fp, skip, delimiter) fp.close()
python2 can't find a file in the current folder
Given the following directory contents: [jzun@hscd8a25e93f9vm dates]$ pwd /home/jzun/vivo_mod_samples/dates [jzun@hscd8a25e93f9vm dates]$ ls date_def.json dates_add.rdf dates.bak dates.rdf dates_sub.rdf dates.txt datetime_precision_enum.txt gen_date_rdf.py gen_date_rdf.py.bak gen_dates.py gen_dates.py.bak get.txt README.md run_pump_2_to_create_date_defs.sh sv.cfg I have the following python2 function: def read_csv(filename, skip=True, delimiter='|'): """ Read a CSV file, return dictionary object :param filename: name of file to read :param skip: should lines with invalid number of columns be skipped? False=Throw Exception :param delimiter: The delimiter for CSV files :return: Dictionary object """ cwd = os.getcwd() print("read_csv>current dir = " + cwd) # fp = open(filename, 'rU') # print(fp) # data = read_csv_fp(fp, skip, delimiter) # fp.close() with open(filename, 'rU') as fp: data = read_csv_fp(fp, skip, delimiter) fp.close() return data After running it with filename = dates.txt I get the following result: read_csv>current dir = /home/jzun/vivo_mod_samples/dates dates.txt file not found I know similar questions have been posted but interestingly I cannot find anything that could help me to solve this problem. Any ideas?
[ "Try passing the full path of the file dates.txt:\ncwd = os.getcwd()\nfile_name = \"dates.txt\"\nfile_path = os.path.join(cwd, file_name)\n\n# Double check that file exist\nassert os.path.isfile(file_path) is True\n\nwith open(file_path, 'rU') as fp:\n data = read_csv_fp(fp, skip, delimiter)\n fp.close()\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_2.7" ]
stackoverflow_0074660016_python_python_2.7.txt
Q: Matrix multiplication of a 2d numpy array to cpp using ctypes What is a correct way to do matrix multiplication using ctypes? In my current implementation, the data going back and forth consumes a lot of time. Is there any way to do it optimally, by passing the array address and getting a pointer in return instead of regenerating the entire array via the .contents method? cpp_function.cpp (compile using g++ -shared -fPIC cpp_function.cpp -o cpp_function.so) #include <iostream> extern "C" { double* mult_matrix(double *a1, double *a2, size_t a1_h, size_t a1_w, size_t a2_h, size_t a2_w, int size) { double* ret_arr = new double[size]; for(size_t i = 0; i < a1_h; i++){ for (size_t j = 0; j < a2_w; j++) { double val = 0; for (size_t k = 0; k < a2_h; k++){ val += a1[i * a1_h + k] * a2[k * a2_h +j] ; } ret_arr[i * a1_h +j ] = val; // printf("%f ", ret_arr[i * a1_h +j ]); } // printf("\n"); } return ret_arr; } } Python file to call the so file main.py import ctypes import numpy from time import time libmatmult = ctypes.CDLL("./cpp_function.so") ND_POINTER_1 = numpy.ctypeslib.ndpointer(dtype=numpy.float64, ndim=2, flags="C") ND_POINTER_2 = numpy.ctypeslib.ndpointer(dtype=numpy.float64, ndim=2, flags="C") libmatmult.mult_matrix.argtypes = [ND_POINTER_1, ND_POINTER_2, ctypes.c_size_t, ctypes.c_size_t] def mult_matrix_cpp(a,b): shape = a.shape[0] * a.shape[1] libmatmult.mult_matrix.restype = ctypes.POINTER(ctypes.c_double * shape ) ret_cpp = libmatmult.mult_matrix(a, b, *a.shape, *b.shape , a.shape[0] * a.shape[1]) out_list_c = [i for i in ret_cpp.contents] # <---- regenerating the list, which is time consuming return out_list_c size_a = (300,300) size_b = size_a a = numpy.random.uniform(low=1, high=255, size=size_a) b = numpy.random.uniform(low=1, high=255, size=size_b) t2 = time() out_cpp = mult_matrix_cpp(a,b) print("cpp time taken:{:.2f} ms".format((time() - t2) * 1000)) out_cpp = numpy.array(out_cpp).reshape(size_a[0], size_a[1]) t3 = time() out_np = numpy.dot(a,b) # print(out_np) print("Numpy dot() time taken:{:.2f} ms".format((time() - t3) * 1000)) This solution works but is time consuming. Is there any way to make it faster? A: One reason for the time consumption is not using an ndpointer for the return value and copying it into a Python list. Instead use the following restype. You won't need the later reshape as well. But take the commenters' advice and don't reinvent the wheel. def mult_matrix_cpp(a, b): libmatmult.mult_matrix.restype = np.ctypeslib.ndpointer(dtype=np.float64, ndim=2, shape=a.shape, flags="C") return libmatmult.mult_matrix(a, b, *a.shape, *b.shape, a.shape[0] * a.shape[1]) A: One way to do this is to create a function that takes two two-dimensional arrays (representing matrices) as arguments, and returns a two-dimensional array containing the result of the matrix multiplication. 
The code for this function could look like this: #include <stdio.h> #include <stdlib.h> // This function multiplies two matrices, and returns the result // in a two-dimensional array int** matrix_multiply(int** matrix1, int** matrix2, int rows1, int cols1, int rows2, int cols2) { int** matrix_result = (int**)malloc(rows1 * sizeof(int*)); int i, j, k; // Check if the matrices can be multiplied if (cols1 != rows2) { printf("Matrices cannot be multiplied!\n"); return NULL; } // Perform matrix multiplication for (i = 0; i < rows1; i++) { matrix_result[i] = (int*)malloc(cols2 * sizeof(int)); for (j = 0; j < cols2; j++) { matrix_result[i][j] = 0; for (k = 0; k < cols1; k++) { matrix_result[i][j] += matrix1[i][k] * matrix2[k][j]; } } } return matrix_result; } int main(void) { // Example matrices int** matrix1 = (int**)malloc(2 * sizeof(int*)); matrix1[0] = (int*)malloc(2 * sizeof(int)); matrix1[1] = (int*)malloc(2 * sizeof(int)); matrix1[0][0] = 1; matrix1[0][1] = 2; matrix1[1][0] = 3; matrix1[1][1] = 4; int** matrix2 = (int**)malloc(2 * sizeof(int*)); matrix2[0] = (int*)malloc(2 * sizeof(int)); matrix2[1] = (int*)malloc(2 * sizeof(int)); matrix2[0][0] = 5; matrix2[0][1] = 6; matrix2[1][0] = 7; matrix2[1][1] = 8; // Do matrix multiplication int** matrix_result = matrix_multiply(matrix1, matrix2, 2, 2, 2, 2); // Print the result int i, j; for (i = 0; i < 2; i++) { for (j = 0; j < 2; j++) { printf("%d ", matrix_result[i][j]); } printf("\n"); } return 0; }
Matrix multiplication of a 2d numpy array to cpp using ctypes
What is a correct way to do matrix multiplication using ctypes? In my current implementation, the data going back and forth consumes a lot of time. Is there any way to do it optimally, by passing the array address and getting a pointer in return instead of regenerating the entire array via the .contents method? cpp_function.cpp (compile using g++ -shared -fPIC cpp_function.cpp -o cpp_function.so) #include <iostream> extern "C" { double* mult_matrix(double *a1, double *a2, size_t a1_h, size_t a1_w, size_t a2_h, size_t a2_w, int size) { double* ret_arr = new double[size]; for(size_t i = 0; i < a1_h; i++){ for (size_t j = 0; j < a2_w; j++) { double val = 0; for (size_t k = 0; k < a2_h; k++){ val += a1[i * a1_h + k] * a2[k * a2_h +j] ; } ret_arr[i * a1_h +j ] = val; // printf("%f ", ret_arr[i * a1_h +j ]); } // printf("\n"); } return ret_arr; } } Python file to call the so file main.py import ctypes import numpy from time import time libmatmult = ctypes.CDLL("./cpp_function.so") ND_POINTER_1 = numpy.ctypeslib.ndpointer(dtype=numpy.float64, ndim=2, flags="C") ND_POINTER_2 = numpy.ctypeslib.ndpointer(dtype=numpy.float64, ndim=2, flags="C") libmatmult.mult_matrix.argtypes = [ND_POINTER_1, ND_POINTER_2, ctypes.c_size_t, ctypes.c_size_t] def mult_matrix_cpp(a,b): shape = a.shape[0] * a.shape[1] libmatmult.mult_matrix.restype = ctypes.POINTER(ctypes.c_double * shape ) ret_cpp = libmatmult.mult_matrix(a, b, *a.shape, *b.shape , a.shape[0] * a.shape[1]) out_list_c = [i for i in ret_cpp.contents] # <---- regenerating the list, which is time consuming return out_list_c size_a = (300,300) size_b = size_a a = numpy.random.uniform(low=1, high=255, size=size_a) b = numpy.random.uniform(low=1, high=255, size=size_b) t2 = time() out_cpp = mult_matrix_cpp(a,b) print("cpp time taken:{:.2f} ms".format((time() - t2) * 1000)) out_cpp = numpy.array(out_cpp).reshape(size_a[0], size_a[1]) t3 = time() out_np = numpy.dot(a,b) # print(out_np) print("Numpy dot() time taken:{:.2f} ms".format((time() - t3) * 1000)) This solution works but is time consuming. Is there any way to make it faster?
[ "One reason for the time consumption is not using an ndpointer for the return value and copying it into a Python list. Instead use the following restype. You won't need the later reshape as well. But take the commenters' advice and don't reinvent the wheel.\ndef mult_matrix_cpp(a, b):\n shape = a.shape[0] * a.shape[1]\n libmatmult.mult_matrix.restype = np.ctypeslib.ndpointer(dtype=np.float64, ndim=2, shape=a.shape, flags=\"C\")\n return libmatmult.mult_matrix(a, b, *a.shape, *b.shape , a.shape[0] * a.shape[1])\n\n", "One way to do this is to create a function that takes two two-dimensional arrays (representing matrices) as arguments, and returns a two-dimensional array containing the result of the matrix multiplication.\nThe code for this function could look like this:\n#include <stdio.h>\n\n// This function multiplies two matrices, and returns the result\n// in a two-dimensional array\nint** matrix_multiply(int** matrix1, int** matrix2, int rows1, int cols1, int rows2, int cols2) {\n int** matrix_result = (int**)malloc(rows1 * sizeof(int*));\n int i, j, k;\n\n // Check if the matrices can be multiplied\n if (cols1 != rows2) {\n printf(\"Matrices cannot be multiplied!\\n\");\n return NULL;\n }\n\n // Perform matrix multiplication\n for (i = 0; i < rows1; i++) {\n matrix_result[i] = (int*)malloc(cols2 * sizeof(int));\n for (j = 0; j < cols2; j++) {\n matrix_result[i][j] = 0;\n for (k = 0; k < cols1; k++) {\n matrix_result[i][j] += matrix1[i][k] * matrix2[k][j];\n }\n }\n }\n\n return matrix_result;\n}\n\nint main(void) {\n // Example matrices\n int** matrix1 = (int**)malloc(2 * sizeof(int*));\n matrix1[0] = (int*)malloc(2 * sizeof(int));\n matrix1[1] = (int*)malloc(2 * sizeof(int));\n matrix1[0][0] = 1; matrix1[0][1] = 2;\n matrix1[1][0] = 3; matrix1[1][1] = 4;\n\n int** matrix2 = (int**)malloc(2 * sizeof(int*));\n matrix2[0] = (int*)malloc(2 * sizeof(int));\n matrix2[1] = (int*)malloc(2 * sizeof(int));\n matrix2[0][0] = 5; matrix2[0][1] = 6;\n matrix2[1][0] = 7; matrix2[1][1] = 8;\n\n // Do matrix multiplication\n int** matrix_result = matrix_multiply(matrix1, matrix2, 2, 2, 2, 2);\n\n // Print the result\n int i, j;\n for (i = 0; i < 2; i++) {\n for (j = 0; j < 2; j++) {\n printf(\"%d \", matrix_result[i][j]);\n }\n printf(\"\\n\");\n }\n\n return 0;\n}\n\n" ]
[ 1, 0 ]
[]
[]
[ "c++", "ctypes", "numpy", "python" ]
stackoverflow_0074612029_c++_ctypes_numpy_python.txt
Q: how to update a variable in a text file I have a program that opens an account and there are several lines, but I want it to update this one line credits = 0 Whenever a purchase is made I want it to add one more to the amount this is what the file looks like ['namef', 'namel', 'email', 'adress', 'city', 'state', 'zip', 'phone', 'phone 2'] credits = 0 This bit of info is kept inside a text file. I don't care if you replace the line (as long as it has 1 more) or whether you just update it. Please help me out :) sorry if this question is trivial A: The below code snippet should give you an idea of how to go about it. This code updates the value of the counter variable present within a file counter_file.txt import os counter_file = open(r'./counter_file.txt', 'r+') content_lines = [] for line in counter_file: if 'counter=' in line: line_components = line.split('=') int_value = int(line_components[1]) + 1 line_components[1] = str(int_value) updated_line = "=".join(line_components) content_lines.append(updated_line) else: content_lines.append(line) counter_file.seek(0) counter_file.truncate() counter_file.writelines(content_lines) counter_file.close() Hopefully, this sheds some light on how to go about solving your problem A: You can create a general text file replacer based on a dictionary containing what to look for as keys and as corresponding values what to replace: In the template text file put some flags where you want variables: ['<namef>', 'namel', 'email', 'adress', 'city', 'state', 'zip', 'phone', 'phone 2'] credits = <credit_var> Then create a mapping dictionary: map_dict = {'<namef>':'New name', '<credit_var>':1} Then rewrite the text file doing the replacements: newfile = open('new_file.txt', 'w') for l in open('template.txt'): for k,v in map_dict.items(): l = l.replace(k,str(v)) newfile.write(l) newfile.close() A: Rather than creating new files like new_file.txt or template.txt, you can use your existing file. Here I am using client_name and client_credit_score as placeholders reflecting user data: client_name="Joseph" client_credit_score=1 map_dict = {'<namef>':f'{client_name}', '<credit_var>':f'{client_credit_score}'} template = """['<namef>', 'namel', 'email', 'adress', 'city', 'state', 'zip', 'phone', 'phone 2'] credits = <credit_var>""" with open('existing_file.txt', 'w') as f: for k,v in map_dict.items(): template = template.replace(k,str(v)) f.write(template) Here the existing_file.txt will have this code initially (like your question): ['namef', 'namel', 'email', 'adress', 'city', 'state', 'zip', 'phone', 'phone 2'] credits = 0 And thanks to the map_dict, all the keys will be updated with new values in your file. Here's what the output will look like: ['Joseph', 'namel', 'email', 'adress', 'city', 'state', 'zip', 'phone', 'phone 2'] credits = 1 Note: Special thanks to @Saullo and @Tarun for the above answer.
how to update a variable in a text file
I have a program that opens an account and there are several lines, but I want it to update this one line credits = 0 Whenever a purchase is made I want it to add one more to the amount this is what the file looks like ['namef', 'namel', 'email', 'adress', 'city', 'state', 'zip', 'phone', 'phone 2'] credits = 0 This bit of info is kept inside a text file. I don't care if you replace the line (as long as it has 1 more) or whether you just update it. Please help me out :) sorry if this question is trivial
[ "The below code snippet should give you an idea on how to go about. This code updates, the value of the counter variable present within a file counter_file.txt\nimport os\n\ncounter_file = open(r'./counter_file.txt', 'r+')\ncontent_lines = []\n\nfor line in counter_file:\n if 'counter=' in line:\n line_components = line.split('=')\n int_value = int(line_components[1]) + 1\n line_components[1] = str(int_value)\n updated_line= \"=\".join(line_components)\n content_lines.append(updated_line)\n else:\n content_lines.append(line)\n\ncounter_file.seek(0)\ncounter_file.truncate()\ncounter_file.writelines(content_lines)\ncounter_file.close()\n\nHopefully, this sheds some light on how to go about with solving your problem\n", "You can create a general text file replacer based on a dictionary containing what to look for as keys and as corresponding values what to replace:\nIn the template text file put some flags where you want variables:\n['<namef>', 'namel', 'email', 'adress', 'city', 'state', 'zip', 'phone', 'phone 2']\n\ncredits = <credit_var>\n\nThen create a mapping dictionary:\nmap_dict = {'<namef>':'New name', '<credit_var>':1}\n\nThen rewrite the text file doing the replacements:\nnewfile = open('new_file.txt', 'w')\nfor l in open('template.txt'):\n for k,v in map_dict.iteritems():\n l = l.replace(k,str(v))\n newfile.write(l)\nnewfile.close()\n\n", "Rather than creating new files like new_file.txt or template.txt, you can use your existing file. Here I am using client_name and client_credit_score as place holders reflecting user data:\nclient_name=\"Joseph\"\nclient_credit_score=1\n\nmap_dict = {'<namef>':f'{client_name}', '<credit_var>':f'{client_credit_score}'}\ntemplate = \"\"\"['<namef>', 'namel', 'email', 'adress', 'city', 'state', 'zip', 'phone', 'phone 2'] \ncredits = <credit_var>\"\"\"\n\nwith open('existing_file.txt', 'w') as f:\n for k,v in map_dict.items():\n template = template.replace(k,str(v))\n f.write(template)\n\nHere the existing_file.txt will have this code initially (like you question):\n['namef', 'namel', 'email', 'adress', 'city', 'state', 'zip', 'phone', 'phone 2']\ncredits = 0\n\nAnd thanks to the map_dict, all the keys will be updated with new values in your file. Here's what output will look like:\n['Joseph', 'namel', 'email', 'adress', 'city', 'state', 'zip', 'phone', 'phone 2']\ncredits = 1\n\nNote: Special thanks to @Saullo and @Tarun for the above answer.\n" ]
[ 4, 1, 0 ]
[]
[]
[ "file", "python", "python_3.x", "variables" ]
stackoverflow_0018040596_file_python_python_3.x_variables.txt
Q: Reversing a txt File in python Assume you are given a file called newText.txt which contains the lines: line 1 line 2 line 3 Write a python program that reads the data from newText.txt and writes a new file called newerText.txt in the following format: line 3 Python Inserted a new line line 2 line 1 I can get it reversed, but line 3 and line 2 end up on the same line. I also need help inserting the new line between line 3 and line 2. Input lines = [] with open('text.txt') as f: lines = f.readlines() with open('newtext.txt', 'w') as f: for line in reversed(lines): f.write(line) Output line3line2 line1 A: If you look into what text.txt actually is, it's probably something like this: line1\nline2\nline3 Notice how each line is divided by a new line character (\n). This means the last line doesn't have a new line at the end of it, so when you write it to newText.txt, it won't have a newline. What you can do is strip away all possible new lines, then add one yourself: with open('newtext.txt', 'w') as f: for line in reversed(lines): f.write(line.strip() + "\n")
Reversing a txt File in python
Assume you are given a file called newText.txt which contains the lines: line 1 line 2 line 3 Write a python program that reads the data from newText.txt and writes a new file called newerText.txt in the following format: line 3 Python Inserted a new line line 2 line 1 I can get it reversed, but line 3 and line 2 end up on the same line. I also need help inserting the new line between line 3 and line 2. Input lines = [] with open('text.txt') as f: lines = f.readlines() with open('newtext.txt', 'w') as f: for line in reversed(lines): f.write(line) Output line3line2 line1
[ "If you look into what text.txt actually is, it's probably something like this:\nline1\\nline2\\nline3\n\nNotice how each line is divided by a new line character (\\n). This means the last line doesn't have a new line at the end of it, so when you write it to newText.txt, it won't have a newline.\nWhat you can do is strip away all possible new lines, then add one yourself:\nwith open('newtext.txt', 'w') as f:\n for line in reversed(lines):\n f.write(line.strip() + \"\\n\")\n\n" ]
[ 1 ]
[]
[]
[ "append", "python", "reverse" ]
stackoverflow_0074662524_append_python_reverse.txt
Q: Appending a list with a dictionary key that contains multiple values from a json response I'm traversing a json response in which stats are grouped by games played. I want to gather all of the values from the set and assign a single dictionary key to them. Currently every value has its own key. Here is a sample of the json response: {'data': [{'ast': 9, 'blk': 0, 'dreb': 4, 'fg3_pct': 0.0, 'fg3a': 2, 'fg3m': 0, 'fg_pct': 0.6, 'fga': 20, 'fgm': 12, 'ft_pct': 0.333, 'fta': 3, 'ftm': 1, 'game': {'date': '2003-10-29T00:00:00.000Z', 'home_team_id': 26, 'home_team_score': 106, 'id': 15946, 'period': 4, 'postseason': False, 'season': 2003, 'status': 'Final', 'time': ' ', 'visitor_team_id': 6, 'visitor_team_score': 92}, 'id': 361330, 'min': '42:50', 'oreb': 2, 'pf': 3, 'player': {'first_name': 'LeBron', 'height_feet': 6, 'height_inches': 8, 'id': 237, 'last_name': 'James', 'position': 'F', 'team_id': 14, 'weight_pounds': 250}, 'pts': 25, 'reb': 6, 'stl': 4, 'team': {'abbreviation': 'CLE', 'city': 'Cleveland', 'conference': 'East', 'division': 'Central', 'full_name': 'Cleveland Cavaliers', 'id': 6, 'name': 'Cavaliers'}, 'turnover': 2}, {'ast': 8, 'blk': 0, 'dreb': 10, 'fg3_pct': 0.2, 'fg3a': 5, 'fg3m': 1, 'fg_pct': 0.471, 'fga': 17, 'fgm': 8, 'ft_pct': 0.571, 'fta': 7, 'ftm': 4, 'game': {'date': '2003-10-30T00:00:00.000Z', 'home_team_id': 24, 'home_team_score': 95, 'id': 16277, 'period': 4, 'postseason': False, 'season': 2003, 'status': 'Final', 'time': ' ', 'visitor_team_id': 6, 'visitor_team_score': 86}, 'id': 361523, 'min': '40:21', 'oreb': 2, 'pf': 1, 'player': {'first_name': 'LeBron', 'height_feet': 6, 'height_inches': 8, 'id': 237, 'last_name': 'James', 'position': 'F', 'team_id': 14, 'weight_pounds': 250}, 'pts': 21, 'reb': 12, 'stl': 1, 'team': {'abbreviation': 'CLE', 'city': 'Cleveland', 'conference': 'East', 'division': 'Central', 'full_name': 'Cleveland Cavaliers', 'id': 6, 'name': 'Cavaliers'}, 'turnover': 7}, This is my code: players_stats=[] players_game_stats_url="https://balldontlie.io/api/v1/stats?season[]=2018&player_ids[]=237" result=requests.get(players_game_stats_url).json() for assists in result['data']: players_stats.append({"Assists per game":assists['ast']}) What I would like to produce is something like this: {Assists per game:9,8,6,7,3} But what I'm getting is this: [{'Assists per game': 9}, {'Assists per game': 8}, {'Assists per game': 6}, {'Assists per game': 7}, {'Assists per game': 3}, {'Assists per game': 9}, {'Assists per game': 4}, {'Assists per game': 7}, {'Assists per game': 3}, {'Assists per game': 8}, {'Assists per game': 8}, {'Assists per game': 8}, {'Assists per game': 2}, {'Assists per game': 4}, {'Assists per game': 9}, {'Assists per game': 7}, {'Assists per game': 7}, {'Assists per game': 5}, {'Assists per game': 8}, {'Assists per game': 5}, {'Assists per game': 3}, {'Assists per game': 9}, {'Assists per game': 4}, {'Assists per game': 6}, {'Assists per game': 3}] A: Based on the comment from Kenny Ostrom: output = {"Assists per game": [game_info['ast'] for game_info in result['data']]} which is going to produce a result like you asked for: {'Assists per game': [9, 8]}
Appending a list with a dictionary key that contains multiple values from a json response
I'm traversing a json response in which stats are grouped by games played. I want to gather all of the values from the set and assign a single dictionary key to them. Currently every value has its own key. Here is a sample of the json response: {'data': [{'ast': 9, 'blk': 0, 'dreb': 4, 'fg3_pct': 0.0, 'fg3a': 2, 'fg3m': 0, 'fg_pct': 0.6, 'fga': 20, 'fgm': 12, 'ft_pct': 0.333, 'fta': 3, 'ftm': 1, 'game': {'date': '2003-10-29T00:00:00.000Z', 'home_team_id': 26, 'home_team_score': 106, 'id': 15946, 'period': 4, 'postseason': False, 'season': 2003, 'status': 'Final', 'time': ' ', 'visitor_team_id': 6, 'visitor_team_score': 92}, 'id': 361330, 'min': '42:50', 'oreb': 2, 'pf': 3, 'player': {'first_name': 'LeBron', 'height_feet': 6, 'height_inches': 8, 'id': 237, 'last_name': 'James', 'position': 'F', 'team_id': 14, 'weight_pounds': 250}, 'pts': 25, 'reb': 6, 'stl': 4, 'team': {'abbreviation': 'CLE', 'city': 'Cleveland', 'conference': 'East', 'division': 'Central', 'full_name': 'Cleveland Cavaliers', 'id': 6, 'name': 'Cavaliers'}, 'turnover': 2}, {'ast': 8, 'blk': 0, 'dreb': 10, 'fg3_pct': 0.2, 'fg3a': 5, 'fg3m': 1, 'fg_pct': 0.471, 'fga': 17, 'fgm': 8, 'ft_pct': 0.571, 'fta': 7, 'ftm': 4, 'game': {'date': '2003-10-30T00:00:00.000Z', 'home_team_id': 24, 'home_team_score': 95, 'id': 16277, 'period': 4, 'postseason': False, 'season': 2003, 'status': 'Final', 'time': ' ', 'visitor_team_id': 6, 'visitor_team_score': 86}, 'id': 361523, 'min': '40:21', 'oreb': 2, 'pf': 1, 'player': {'first_name': 'LeBron', 'height_feet': 6, 'height_inches': 8, 'id': 237, 'last_name': 'James', 'position': 'F', 'team_id': 14, 'weight_pounds': 250}, 'pts': 21, 'reb': 12, 'stl': 1, 'team': {'abbreviation': 'CLE', 'city': 'Cleveland', 'conference': 'East', 'division': 'Central', 'full_name': 'Cleveland Cavaliers', 'id': 6, 'name': 'Cavaliers'}, 'turnover': 7}, This is my code: players_stats=[] players_game_stats_url="https://balldontlie.io/api/v1/stats?season[]=2018&player_ids[]=237" result=requests.get(players_game_stats_url).json() for assists in result['data']: players_stats.append({"Assists per game":assists['ast']}) What I would like to produce is something like this: {Assists per game:9,8,6,7,3} But what I'm getting is this: [{'Assists per game': 9}, {'Assists per game': 8}, {'Assists per game': 6}, {'Assists per game': 7}, {'Assists per game': 3}, {'Assists per game': 9}, {'Assists per game': 4}, {'Assists per game': 7}, {'Assists per game': 3}, {'Assists per game': 8}, {'Assists per game': 8}, {'Assists per game': 8}, {'Assists per game': 2}, {'Assists per game': 4}, {'Assists per game': 9}, {'Assists per game': 7}, {'Assists per game': 7}, {'Assists per game': 5}, {'Assists per game': 8}, {'Assists per game': 5}, {'Assists per game': 3}, {'Assists per game': 9}, {'Assists per game': 4}, {'Assists per game': 6}, {'Assists per game': 3}]
[ "Based the comment from Kenny Ostrom\n output = {\"Assists per game\": [game_info['ast'] for game_info in result['data']]}\n\nwhich is going to result like you asked\n {'Assists per game': [9, 8]}\n\n" ]
[ 0 ]
[]
[]
[ "dictionary", "json", "list", "python", "python_3.x" ]
stackoverflow_0074661957_dictionary_json_list_python_python_3.x.txt
Q: Inverse of a complicated matrix not working in Sympy/Python So I was trying to formulate some matrix out of another matrix's elements using sympy, but taking the inverse didn't work, I believe because of the complexity of the matrix I am taking the inverse of. x0, x1, x2, x3 = smp.symbols('x^0 x^1 x^2 x^3') COORDS = [x0, x1, x2, x3] N = len(COORDS) g00 = smp.Function('g00')(x0, x1, x2, x3) g01 = smp.Function('g01')(x0, x1, x2, x3) g02 = smp.Function('g02')(x0, x1, x2, x3) g03 = smp.Function('g03')(x0, x1, x2, x3) g10 = smp.Function('g10')(x0, x1, x2, x3) g11 = smp.Function('g11')(x0, x1, x2, x3) g12 = smp.Function('g12')(x0, x1, x2, x3) g13 = smp.Function('g13')(x0, x1, x2, x3) g20 = smp.Function('g20')(x0, x1, x2, x3) g21 = smp.Function('g21')(x0, x1, x2, x3) g22 = smp.Function('g22')(x0, x1, x2, x3) g23 = smp.Function('g23')(x0, x1, x2, x3) g30 = smp.Function('g30')(x0, x1, x2, x3) g31 = smp.Function('g31')(x0, x1, x2, x3) g32 = smp.Function('g32')(x0, x1, x2, x3) g33 = smp.Function('g33')(x0, x1, x2, x3) g = smp.Matrix([[g00,g01,g02,g03],[g10,g11,g12,g13],[g20,g21,g22,g23],[g30,g31,g32,g33]]) and when I do g.inv() the kernel just doesn't finish. What should I do in order to take this matrix's inverse? Thank you so much in advance :) A: A fully symbolic matrix has a complicated expression for its inverse (shown below). Feel free to substitute whatever you like for the entries of the matrix but it's unlikely that you can do anything useful with such an expression. Here is the inverse of a fully symbolic 4x4 matrix (computed in less than a second): In [8]: from sympy import * In [9]: from sympy.polys.matrices import DomainMatrix In [10]: dm = DomainMatrix.from_Matrix(Matrix(symbols('x:16')).reshape(4, 4)) In [11]: dm.to_field().inv().to_Matrix() Out[11]: [very large pretty-printed 4x4 matrix elided: each entry is a signed 3x3 cofactor of the x0..x15 entries divided by the full 4x4 determinant]
Inverse of a complicated matrix not working in Sympy/Python
So I was trying to formulate some matrix out of another matrix's elements using sympy, but taking the inverse didn't work, I believe because of the complexity of the matrix I am taking the inverse of. x0, x1, x2, x3 = smp.symbols('x^0 x^1 x^2 x^3') COORDS = [x0, x1, x2, x3] N = len(COORDS) g00 = smp.Function('g00')(x0, x1, x2, x3) g01 = smp.Function('g01')(x0, x1, x2, x3) g02 = smp.Function('g02')(x0, x1, x2, x3) g03 = smp.Function('g03')(x0, x1, x2, x3) g10 = smp.Function('g10')(x0, x1, x2, x3) g11 = smp.Function('g11')(x0, x1, x2, x3) g12 = smp.Function('g12')(x0, x1, x2, x3) g13 = smp.Function('g13')(x0, x1, x2, x3) g20 = smp.Function('g20')(x0, x1, x2, x3) g21 = smp.Function('g21')(x0, x1, x2, x3) g22 = smp.Function('g22')(x0, x1, x2, x3) g23 = smp.Function('g23')(x0, x1, x2, x3) g30 = smp.Function('g30')(x0, x1, x2, x3) g31 = smp.Function('g31')(x0, x1, x2, x3) g32 = smp.Function('g32')(x0, x1, x2, x3) g33 = smp.Function('g33')(x0, x1, x2, x3) g = smp.Matrix([[g00,g01,g02,g03],[g10,g11,g12,g13],[g20,g21,g22,g23],[g30,g31,g32,g33]]) and when I do g.inv() the kernel just doesn't finish. What should I do in order to take this matrix's inverse? Thank you so much in advance :)
[ "A fully symbolic matrix has a complicated expression for its inverse (shown below). Feel free to substitute whatever you like for the entries of the matrix but it's unlikely that you can do anything useful with such an expression.\nHere is the inverse of a fully symbolic 4x4 matrix (computed in less than a second):\nIn [8]: from sympy import *\n\nIn [9]: from sympy.polys.matrices import DomainMatrix\n\nIn [10]: dm = DomainMatrix.from_Matrix(Matrix(symbols('x:16')).reshape(4, 4))\n\nIn [11]: dm.to_field().inv().to_Matrix()\nOut[11]: \n⎡ \n⎢─────────────────────────────────────────────────────────────────────────────\n⎢-x₀⋅x₁₀⋅x₁₃⋅x₇ + x₀⋅x₁₀⋅x₁₅⋅x₅ + x₀⋅x₁₁⋅x₁₃⋅x₆ - x₀⋅x₁₁⋅x₁₄⋅x₅ + x₀⋅x₁₄⋅x₇⋅x₉\n⎢ \n⎢ \n⎢─────────────────────────────────────────────────────────────────────────────\n⎢-x₀⋅x₁₀⋅x₁₃⋅x₇ + x₀⋅x₁₀⋅x₁₅⋅x₅ + x₀⋅x₁₁⋅x₁₃⋅x₆ - x₀⋅x₁₁⋅x₁₄⋅x₅ + x₀⋅x₁₄⋅x₇⋅x₉\n⎢ \n⎢ \n⎢─────────────────────────────────────────────────────────────────────────────\n⎢-x₀⋅x₁₀⋅x₁₃⋅x₇ + x₀⋅x₁₀⋅x₁₅⋅x₅ + x₀⋅x₁₁⋅x₁₃⋅x₆ - x₀⋅x₁₁⋅x₁₄⋅x₅ + x₀⋅x₁₄⋅x₇⋅x₉\n⎢ \n⎢ \n⎢─────────────────────────────────────────────────────────────────────────────\n⎣-x₀⋅x₁₀⋅x₁₃⋅x₇ + x₀⋅x₁₀⋅x₁₅⋅x₅ + x₀⋅x₁₁⋅x₁₃⋅x₆ - x₀⋅x₁₁⋅x₁₄⋅x₅ + x₀⋅x₁₄⋅x₇⋅x₉\n\n -x₁₀⋅x₁\n──────────────────────────────────────────────────────────────────────────────\n - x₀⋅x₁₅⋅x₆⋅x₉ + x₁⋅x₁₀⋅x₁₂⋅x₇ - x₁⋅x₁₀⋅x₁₅⋅x₄ - x₁⋅x₁₁⋅x₁₂⋅x₆ + x₁⋅x₁₁⋅x₁₄⋅x\n \n x₁₀⋅x₁₂\n──────────────────────────────────────────────────────────────────────────────\n - x₀⋅x₁₅⋅x₆⋅x₉ + x₁⋅x₁₀⋅x₁₂⋅x₇ - x₁⋅x₁₀⋅x₁₅⋅x₄ - x₁⋅x₁₁⋅x₁₂⋅x₆ + x₁⋅x₁₁⋅x₁₄⋅x\n \n x₁₁⋅x₁\n──────────────────────────────────────────────────────────────────────────────\n - x₀⋅x₁₅⋅x₆⋅x₉ + x₁⋅x₁₀⋅x₁₂⋅x₇ - x₁⋅x₁₀⋅x₁₅⋅x₄ - x₁⋅x₁₁⋅x₁₂⋅x₆ + x₁⋅x₁₁⋅x₁₄⋅x\n \n -x₁₀⋅x\n──────────────────────────────────────────────────────────────────────────────\n - x₀⋅x₁₅⋅x₆⋅x₉ + x₁⋅x₁₀⋅x₁₂⋅x₇ - x₁⋅x₁₀⋅x₁₅⋅x₄ - x₁⋅x₁₁⋅x₁₂⋅x₆ + x₁⋅x₁₁⋅x₁₄⋅x\n\n₃⋅x₇ + x₁₀⋅x₁₅⋅x₅ + x₁₁⋅x₁₃⋅x₆ - x₁₁⋅x₁₄⋅x₅ + x₁₄⋅x₇⋅x₉ - x₁₅⋅x₆⋅x₉ \n──────────────────────────────────────────────────────────────────────────────\n₄ - x₁⋅x₁₄⋅x₇⋅x₈ + x₁⋅x₁₅⋅x₆⋅x₈ - x₁₀⋅x₁₂⋅x₃⋅x₅ + x₁₀⋅x₁₃⋅x₃⋅x₄ + x₁₁⋅x₁₂⋅x₂⋅x\n \n⋅x₇ - x₁₀⋅x₁₅⋅x₄ - x₁₁⋅x₁₂⋅x₆ + x₁₁⋅x₁₄⋅x₄ - x₁₄⋅x₇⋅x₈ + x₁₅⋅x₆⋅x₈ \n──────────────────────────────────────────────────────────────────────────────\n₄ - x₁⋅x₁₄⋅x₇⋅x₈ + x₁⋅x₁₅⋅x₆⋅x₈ - x₁₀⋅x₁₂⋅x₃⋅x₅ + x₁₀⋅x₁₃⋅x₃⋅x₄ + x₁₁⋅x₁₂⋅x₂⋅x\n \n₂⋅x₅ - x₁₁⋅x₁₃⋅x₄ - x₁₂⋅x₇⋅x₉ + x₁₃⋅x₇⋅x₈ + x₁₅⋅x₄⋅x₉ - x₁₅⋅x₅⋅x₈ \n──────────────────────────────────────────────────────────────────────────────\n₄ - x₁⋅x₁₄⋅x₇⋅x₈ + x₁⋅x₁₅⋅x₆⋅x₈ - x₁₀⋅x₁₂⋅x₃⋅x₅ + x₁₀⋅x₁₃⋅x₃⋅x₄ + x₁₁⋅x₁₂⋅x₂⋅x\n \n₁₂⋅x₅ + x₁₀⋅x₁₃⋅x₄ + x₁₂⋅x₆⋅x₉ - x₁₃⋅x₆⋅x₈ - x₁₄⋅x₄⋅x₉ + x₁₄⋅x₅⋅x₈ \n──────────────────────────────────────────────────────────────────────────────\n₄ - x₁⋅x₁₄⋅x₇⋅x₈ + x₁⋅x₁₅⋅x₆⋅x₈ - x₁₀⋅x₁₂⋅x₃⋅x₅ + x₁₀⋅x₁₃⋅x₃⋅x₄ + x₁₁⋅x₁₂⋅x₂⋅x\n\n \n──────────────────────────────────────────────────────────────────────────────\n₅ - x₁₁⋅x₁₃⋅x₂⋅x₄ - x₁₂⋅x₂⋅x₇⋅x₉ + x₁₂⋅x₃⋅x₆⋅x₉ + x₁₃⋅x₂⋅x₇⋅x₈ - x₁₃⋅x₃⋅x₆⋅x₈ \n \n \n──────────────────────────────────────────────────────────────────────────────\n₅ - x₁₁⋅x₁₃⋅x₂⋅x₄ - x₁₂⋅x₂⋅x₇⋅x₉ + x₁₂⋅x₃⋅x₆⋅x₉ + x₁₃⋅x₂⋅x₇⋅x₈ - x₁₃⋅x₃⋅x₆⋅x₈ \n \n \n──────────────────────────────────────────────────────────────────────────────\n₅ - x₁₁⋅x₁₃⋅x₂⋅x₄ - x₁₂⋅x₂⋅x₇⋅x₉ + x₁₂⋅x₃⋅x₆⋅x₉ + x₁₃⋅x₂⋅x₇⋅x₈ - x₁₃⋅x₃⋅x₆⋅x₈ \n \n \n──────────────────────────────────────────────────────────────────────────────\n₅ - x₁₁⋅x₁₃⋅x₂⋅x₄ - x₁₂⋅x₂⋅x₇⋅x₉ + x₁₂⋅x₃⋅x₆⋅x₉ + x₁₃⋅x₂⋅x₇⋅x₈ - x₁₃⋅x₃⋅x₆⋅x₈ \n\n \n─────────────────────────────────────────────────────────── 
─────────────────\n- x₁₄⋅x₃⋅x₄⋅x₉ + x₁₄⋅x₃⋅x₅⋅x₈ + x₁₅⋅x₂⋅x₄⋅x₉ - x₁₅⋅x₂⋅x₅⋅x₈ -x₀⋅x₁₀⋅x₁₃⋅x₇ + \n \n \n─────────────────────────────────────────────────────────── ─────────────────\n- x₁₄⋅x₃⋅x₄⋅x₉ + x₁₄⋅x₃⋅x₅⋅x₈ + x₁₅⋅x₂⋅x₄⋅x₉ - x₁₅⋅x₂⋅x₅⋅x₈ -x₀⋅x₁₀⋅x₁₃⋅x₇ + \n \n \n─────────────────────────────────────────────────────────── ─────────────────\n- x₁₄⋅x₃⋅x₄⋅x₉ + x₁₄⋅x₃⋅x₅⋅x₈ + x₁₅⋅x₂⋅x₄⋅x₉ - x₁₅⋅x₂⋅x₅⋅x₈ -x₀⋅x₁₀⋅x₁₃⋅x₇ + \n \n \n─────────────────────────────────────────────────────────── ─────────────────\n- x₁₄⋅x₃⋅x₄⋅x₉ + x₁₄⋅x₃⋅x₅⋅x₈ + x₁₅⋅x₂⋅x₄⋅x₉ - x₁₅⋅x₂⋅x₅⋅x₈ -x₀⋅x₁₀⋅x₁₃⋅x₇ + \n\n \n──────────────────────────────────────────────────────────────────────────────\nx₀⋅x₁₀⋅x₁₅⋅x₅ + x₀⋅x₁₁⋅x₁₃⋅x₆ - x₀⋅x₁₁⋅x₁₄⋅x₅ + x₀⋅x₁₄⋅x₇⋅x₉ - x₀⋅x₁₅⋅x₆⋅x₉ + \n \n \n──────────────────────────────────────────────────────────────────────────────\nx₀⋅x₁₀⋅x₁₅⋅x₅ + x₀⋅x₁₁⋅x₁₃⋅x₆ - x₀⋅x₁₁⋅x₁₄⋅x₅ + x₀⋅x₁₄⋅x₇⋅x₉ - x₀⋅x₁₅⋅x₆⋅x₉ + \n \n \n──────────────────────────────────────────────────────────────────────────────\nx₀⋅x₁₀⋅x₁₅⋅x₅ + x₀⋅x₁₁⋅x₁₃⋅x₆ - x₀⋅x₁₁⋅x₁₄⋅x₅ + x₀⋅x₁₄⋅x₇⋅x₉ - x₀⋅x₁₅⋅x₆⋅x₉ + \n \n \n──────────────────────────────────────────────────────────────────────────────\nx₀⋅x₁₀⋅x₁₅⋅x₅ + x₀⋅x₁₁⋅x₁₃⋅x₆ - x₀⋅x₁₁⋅x₁₄⋅x₅ + x₀⋅x₁₄⋅x₇⋅x₉ - x₀⋅x₁₅⋅x₆⋅x₉ + \n\n -x₁⋅x₁₀⋅x₁₅ + x₁⋅x₁₁⋅x₁₄ \n──────────────────────────────────────────────────────────────────────────────\nx₁⋅x₁₀⋅x₁₂⋅x₇ - x₁⋅x₁₀⋅x₁₅⋅x₄ - x₁⋅x₁₁⋅x₁₂⋅x₆ + x₁⋅x₁₁⋅x₁₄⋅x₄ - x₁⋅x₁₄⋅x₇⋅x₈ +\n \n x₀⋅x₁₀⋅x₁₅ - x₀⋅x₁₁⋅x₁₄ -\n──────────────────────────────────────────────────────────────────────────────\nx₁⋅x₁₀⋅x₁₂⋅x₇ - x₁⋅x₁₀⋅x₁₅⋅x₄ - x₁⋅x₁₁⋅x₁₂⋅x₆ + x₁⋅x₁₁⋅x₁₄⋅x₄ - x₁⋅x₁₄⋅x₇⋅x₈ +\n \n x₀⋅x₁₁⋅x₁₃ - x₀⋅x₁₅⋅x₉ -\n──────────────────────────────────────────────────────────────────────────────\nx₁⋅x₁₀⋅x₁₂⋅x₇ - x₁⋅x₁₀⋅x₁₅⋅x₄ - x₁⋅x₁₁⋅x₁₂⋅x₆ + x₁⋅x₁₁⋅x₁₄⋅x₄ - x₁⋅x₁₄⋅x₇⋅x₈ +\n \n -x₀⋅x₁₀⋅x₁₃ + x₀⋅x₁₄⋅x₉ \n──────────────────────────────────────────────────────────────────────────────\nx₁⋅x₁₀⋅x₁₂⋅x₇ - x₁⋅x₁₀⋅x₁₅⋅x₄ - x₁⋅x₁₁⋅x₁₂⋅x₆ + x₁⋅x₁₁⋅x₁₄⋅x₄ - x₁⋅x₁₄⋅x₇⋅x₈ +\n\n+ x₁₀⋅x₁₃⋅x₃ - x₁₁⋅x₁₃⋅x₂ - x₁₄⋅x₃⋅x₉ + x₁₅⋅x₂⋅x₉ \n──────────────────────────────────────────────────────────────────────────────\n x₁⋅x₁₅⋅x₆⋅x₈ - x₁₀⋅x₁₂⋅x₃⋅x₅ + x₁₀⋅x₁₃⋅x₃⋅x₄ + x₁₁⋅x₁₂⋅x₂⋅x₅ - x₁₁⋅x₁₃⋅x₂⋅x₄ \n \n x₁₀⋅x₁₂⋅x₃ + x₁₁⋅x₁₂⋅x₂ + x₁₄⋅x₃⋅x₈ - x₁₅⋅x₂⋅x₈ \n──────────────────────────────────────────────────────────────────────────────\n x₁⋅x₁₅⋅x₆⋅x₈ - x₁₀⋅x₁₂⋅x₃⋅x₅ + x₁₀⋅x₁₃⋅x₃⋅x₄ + x₁₁⋅x₁₂⋅x₂⋅x₅ - x₁₁⋅x₁₃⋅x₂⋅x₄ \n \n x₁⋅x₁₁⋅x₁₂ + x₁⋅x₁₅⋅x₈ + x₁₂⋅x₃⋅x₉ - x₁₃⋅x₃⋅x₈ \n──────────────────────────────────────────────────────────────────────────────\n x₁⋅x₁₅⋅x₆⋅x₈ - x₁₀⋅x₁₂⋅x₃⋅x₅ + x₁₀⋅x₁₃⋅x₃⋅x₄ + x₁₁⋅x₁₂⋅x₂⋅x₅ - x₁₁⋅x₁₃⋅x₂⋅x₄ \n \n+ x₁⋅x₁₀⋅x₁₂ - x₁⋅x₁₄⋅x₈ - x₁₂⋅x₂⋅x₉ + x₁₃⋅x₂⋅x₈ \n──────────────────────────────────────────────────────────────────────────────\n x₁⋅x₁₅⋅x₆⋅x₈ - x₁₀⋅x₁₂⋅x₃⋅x₅ + x₁₀⋅x₁₃⋅x₃⋅x₄ + x₁₁⋅x₁₂⋅x₂⋅x₅ - x₁₁⋅x₁₃⋅x₂⋅x₄ \n\n \n──────────────────────────────────────────────────────────────────────────────\n- x₁₂⋅x₂⋅x₇⋅x₉ + x₁₂⋅x₃⋅x₆⋅x₉ + x₁₃⋅x₂⋅x₇⋅x₈ - x₁₃⋅x₃⋅x₆⋅x₈ - x₁₄⋅x₃⋅x₄⋅x₉ + x\n \n \n──────────────────────────────────────────────────────────────────────────────\n- x₁₂⋅x₂⋅x₇⋅x₉ + x₁₂⋅x₃⋅x₆⋅x₉ + x₁₃⋅x₂⋅x₇⋅x₈ - x₁₃⋅x₃⋅x₆⋅x₈ - x₁₄⋅x₃⋅x₄⋅x₉ + x\n \n \n──────────────────────────────────────────────────────────────────────────────\n- x₁₂⋅x₂⋅x₇⋅x₉ + x₁₂⋅x₃⋅x₆⋅x₉ + x₁₃⋅x₂⋅x₇⋅x₈ - x₁₃⋅x₃⋅x₆⋅x₈ - x₁₄⋅x₃⋅x₄⋅x₉ + x\n \n \n──────────────────────────────────────────────────────────────────────────────\n- x₁₂⋅x₂⋅x₇⋅x₉ + x₁₂⋅x₃⋅x₆⋅x₉ + x₁₃⋅x₂⋅x₇⋅x₈ - x₁₃⋅x₃⋅x₆⋅x₈ - x₁₄⋅x₃⋅x₄⋅x₉ + x\n\n 
\n───────────────────────────────────────── ───────────────────────────────────\n₁₄⋅x₃⋅x₅⋅x₈ + x₁₅⋅x₂⋅x₄⋅x₉ - x₁₅⋅x₂⋅x₅⋅x₈ -x₀⋅x₁₀⋅x₁₃⋅x₇ + x₀⋅x₁₀⋅x₁₅⋅x₅ + x₀\n \n \n───────────────────────────────────────── ───────────────────────────────────\n₁₄⋅x₃⋅x₅⋅x₈ + x₁₅⋅x₂⋅x₄⋅x₉ - x₁₅⋅x₂⋅x₅⋅x₈ -x₀⋅x₁₀⋅x₁₃⋅x₇ + x₀⋅x₁₀⋅x₁₅⋅x₅ + x₀\n \n \n───────────────────────────────────────── ───────────────────────────────────\n₁₄⋅x₃⋅x₅⋅x₈ + x₁₅⋅x₂⋅x₄⋅x₉ - x₁₅⋅x₂⋅x₅⋅x₈ -x₀⋅x₁₀⋅x₁₃⋅x₇ + x₀⋅x₁₀⋅x₁₅⋅x₅ + x₀\n \n \n───────────────────────────────────────── ───────────────────────────────────\n₁₄⋅x₃⋅x₅⋅x₈ + x₁₅⋅x₂⋅x₄⋅x₉ - x₁₅⋅x₂⋅x₅⋅x₈ -x₀⋅x₁₀⋅x₁₃⋅x₇ + x₀⋅x₁₀⋅x₁₅⋅x₅ + x₀\n\n \n──────────────────────────────────────────────────────────────────────────────\n⋅x₁₁⋅x₁₃⋅x₆ - x₀⋅x₁₁⋅x₁₄⋅x₅ + x₀⋅x₁₄⋅x₇⋅x₉ - x₀⋅x₁₅⋅x₆⋅x₉ + x₁⋅x₁₀⋅x₁₂⋅x₇ - x₁\n \n \n──────────────────────────────────────────────────────────────────────────────\n⋅x₁₁⋅x₁₃⋅x₆ - x₀⋅x₁₁⋅x₁₄⋅x₅ + x₀⋅x₁₄⋅x₇⋅x₉ - x₀⋅x₁₅⋅x₆⋅x₉ + x₁⋅x₁₀⋅x₁₂⋅x₇ - x₁\n \n \n──────────────────────────────────────────────────────────────────────────────\n⋅x₁₁⋅x₁₃⋅x₆ - x₀⋅x₁₁⋅x₁₄⋅x₅ + x₀⋅x₁₄⋅x₇⋅x₉ - x₀⋅x₁₅⋅x₆⋅x₉ + x₁⋅x₁₀⋅x₁₂⋅x₇ - x₁\n \n \n──────────────────────────────────────────────────────────────────────────────\n⋅x₁₁⋅x₁₃⋅x₆ - x₀⋅x₁₁⋅x₁₄⋅x₅ + x₀⋅x₁₄⋅x₇⋅x₉ - x₀⋅x₁₅⋅x₆⋅x₉ + x₁⋅x₁₀⋅x₁₂⋅x₇ - x₁\n\n -x₁⋅x₁₄⋅x₇ + x₁⋅x₁₅⋅x₆ + x₁₃⋅x₂⋅x₇ - x₁₃⋅\n──────────────────────────────────────────────────────────────────────────────\n⋅x₁₀⋅x₁₅⋅x₄ - x₁⋅x₁₁⋅x₁₂⋅x₆ + x₁⋅x₁₁⋅x₁₄⋅x₄ - x₁⋅x₁₄⋅x₇⋅x₈ + x₁⋅x₁₅⋅x₆⋅x₈ - x₁\n \n x₀⋅x₁₄⋅x₇ - x₀⋅x₁₅⋅x₆ - x₁₂⋅x₂⋅x₇ + x₁₂⋅x\n──────────────────────────────────────────────────────────────────────────────\n⋅x₁₀⋅x₁₅⋅x₄ - x₁⋅x₁₁⋅x₁₂⋅x₆ + x₁⋅x₁₁⋅x₁₄⋅x₄ - x₁⋅x₁₄⋅x₇⋅x₈ + x₁⋅x₁₅⋅x₆⋅x₈ - x₁\n \n -x₀⋅x₁₃⋅x₇ + x₀⋅x₁₅⋅x₅ + x₁⋅x₁₂⋅x₇ - x₁⋅x\n──────────────────────────────────────────────────────────────────────────────\n⋅x₁₀⋅x₁₅⋅x₄ - x₁⋅x₁₁⋅x₁₂⋅x₆ + x₁⋅x₁₁⋅x₁₄⋅x₄ - x₁⋅x₁₄⋅x₇⋅x₈ + x₁⋅x₁₅⋅x₆⋅x₈ - x₁\n \n x₀⋅x₁₃⋅x₆ - x₀⋅x₁₄⋅x₅ - x₁⋅x₁₂⋅x₆ + x₁⋅x₁\n──────────────────────────────────────────────────────────────────────────────\n⋅x₁₀⋅x₁₅⋅x₄ - x₁⋅x₁₁⋅x₁₂⋅x₆ + x₁⋅x₁₁⋅x₁₄⋅x₄ - x₁⋅x₁₄⋅x₇⋅x₈ + x₁⋅x₁₅⋅x₆⋅x₈ - x₁\n\nx₃⋅x₆ + x₁₄⋅x₃⋅x₅ - x₁₅⋅x₂⋅x₅ \n──────────────────────────────────────────────────────────────────────────────\n₀⋅x₁₂⋅x₃⋅x₅ + x₁₀⋅x₁₃⋅x₃⋅x₄ + x₁₁⋅x₁₂⋅x₂⋅x₅ - x₁₁⋅x₁₃⋅x₂⋅x₄ - x₁₂⋅x₂⋅x₇⋅x₉ + x\n \n₃⋅x₆ - x₁₄⋅x₃⋅x₄ + x₁₅⋅x₂⋅x₄ \n──────────────────────────────────────────────────────────────────────────────\n₀⋅x₁₂⋅x₃⋅x₅ + x₁₀⋅x₁₃⋅x₃⋅x₄ + x₁₁⋅x₁₂⋅x₂⋅x₅ - x₁₁⋅x₁₃⋅x₂⋅x₄ - x₁₂⋅x₂⋅x₇⋅x₉ + x\n \n₁₅⋅x₄ - x₁₂⋅x₃⋅x₅ + x₁₃⋅x₃⋅x₄ \n──────────────────────────────────────────────────────────────────────────────\n₀⋅x₁₂⋅x₃⋅x₅ + x₁₀⋅x₁₃⋅x₃⋅x₄ + x₁₁⋅x₁₂⋅x₂⋅x₅ - x₁₁⋅x₁₃⋅x₂⋅x₄ - x₁₂⋅x₂⋅x₇⋅x₉ + x\n \n₄⋅x₄ + x₁₂⋅x₂⋅x₅ - x₁₃⋅x₂⋅x₄ \n──────────────────────────────────────────────────────────────────────────────\n₀⋅x₁₂⋅x₃⋅x₅ + x₁₀⋅x₁₃⋅x₃⋅x₄ + x₁₁⋅x₁₂⋅x₂⋅x₅ - x₁₁⋅x₁₃⋅x₂⋅x₄ - x₁₂⋅x₂⋅x₇⋅x₉ + x\n\n \n──────────────────────────────────────────────────────────────────────────────\n₁₂⋅x₃⋅x₆⋅x₉ + x₁₃⋅x₂⋅x₇⋅x₈ - x₁₃⋅x₃⋅x₆⋅x₈ - x₁₄⋅x₃⋅x₄⋅x₉ + x₁₄⋅x₃⋅x₅⋅x₈ + x₁₅⋅\n \n \n──────────────────────────────────────────────────────────────────────────────\n₁₂⋅x₃⋅x₆⋅x₉ + x₁₃⋅x₂⋅x₇⋅x₈ - x₁₃⋅x₃⋅x₆⋅x₈ - x₁₄⋅x₃⋅x₄⋅x₉ + x₁₄⋅x₃⋅x₅⋅x₈ + x₁₅⋅\n \n \n──────────────────────────────────────────────────────────────────────────────\n₁₂⋅x₃⋅x₆⋅x₉ + x₁₃⋅x₂⋅x₇⋅x₈ - x₁₃⋅x₃⋅x₆⋅x₈ - x₁₄⋅x₃⋅x₄⋅x₉ + x₁₄⋅x₃⋅x₅⋅x₈ + x₁₅⋅\n \n \n──────────────────────────────────────────────────────────────────────────────\n₁₂⋅x₃⋅x₆⋅x₉ + x₁₃⋅x₂⋅x₇⋅x₈ - x₁₃⋅x₃⋅x₆⋅x₈ - 
x₁₄⋅x₃⋅x₄⋅x₉ + x₁₄⋅x₃⋅x₅⋅x₈ + x₁₅⋅\n\n \n─────────────────────── ─────────────────────────────────────────────────────\nx₂⋅x₄⋅x₉ - x₁₅⋅x₂⋅x₅⋅x₈ -x₀⋅x₁₀⋅x₁₃⋅x₇ + x₀⋅x₁₀⋅x₁₅⋅x₅ + x₀⋅x₁₁⋅x₁₃⋅x₆ - x₀⋅x\n \n \n─────────────────────── ─────────────────────────────────────────────────────\nx₂⋅x₄⋅x₉ - x₁₅⋅x₂⋅x₅⋅x₈ -x₀⋅x₁₀⋅x₁₃⋅x₇ + x₀⋅x₁₀⋅x₁₅⋅x₅ + x₀⋅x₁₁⋅x₁₃⋅x₆ - x₀⋅x\n \n \n─────────────────────── ─────────────────────────────────────────────────────\nx₂⋅x₄⋅x₉ - x₁₅⋅x₂⋅x₅⋅x₈ -x₀⋅x₁₀⋅x₁₃⋅x₇ + x₀⋅x₁₀⋅x₁₅⋅x₅ + x₀⋅x₁₁⋅x₁₃⋅x₆ - x₀⋅x\n \n \n─────────────────────── ─────────────────────────────────────────────────────\nx₂⋅x₄⋅x₉ - x₁₅⋅x₂⋅x₅⋅x₈ -x₀⋅x₁₀⋅x₁₃⋅x₇ + x₀⋅x₁₀⋅x₁₅⋅x₅ + x₀⋅x₁₁⋅x₁₃⋅x₆ - x₀⋅x\n\n \n──────────────────────────────────────────────────────────────────────────────\n₁₁⋅x₁₄⋅x₅ + x₀⋅x₁₄⋅x₇⋅x₉ - x₀⋅x₁₅⋅x₆⋅x₉ + x₁⋅x₁₀⋅x₁₂⋅x₇ - x₁⋅x₁₀⋅x₁₅⋅x₄ - x₁⋅x\n \n \n──────────────────────────────────────────────────────────────────────────────\n₁₁⋅x₁₄⋅x₅ + x₀⋅x₁₄⋅x₇⋅x₉ - x₀⋅x₁₅⋅x₆⋅x₉ + x₁⋅x₁₀⋅x₁₂⋅x₇ - x₁⋅x₁₀⋅x₁₅⋅x₄ - x₁⋅x\n \n \n──────────────────────────────────────────────────────────────────────────────\n₁₁⋅x₁₄⋅x₅ + x₀⋅x₁₄⋅x₇⋅x₉ - x₀⋅x₁₅⋅x₆⋅x₉ + x₁⋅x₁₀⋅x₁₂⋅x₇ - x₁⋅x₁₀⋅x₁₅⋅x₄ - x₁⋅x\n \n \n──────────────────────────────────────────────────────────────────────────────\n₁₁⋅x₁₄⋅x₅ + x₀⋅x₁₄⋅x₇⋅x₉ - x₀⋅x₁₅⋅x₆⋅x₉ + x₁⋅x₁₀⋅x₁₂⋅x₇ - x₁⋅x₁₀⋅x₁₅⋅x₄ - x₁⋅x\n\n x₁⋅x₁₀⋅x₇ - x₁⋅x₁₁⋅x₆ - x₁₀⋅x₃⋅x₅ + x₁₁⋅x₂⋅x₅ - x₂⋅x₇⋅x₉ +\n──────────────────────────────────────────────────────────────────────────────\n₁₁⋅x₁₂⋅x₆ + x₁⋅x₁₁⋅x₁₄⋅x₄ - x₁⋅x₁₄⋅x₇⋅x₈ + x₁⋅x₁₅⋅x₆⋅x₈ - x₁₀⋅x₁₂⋅x₃⋅x₅ + x₁₀⋅\n \n -x₀⋅x₁₀⋅x₇ + x₀⋅x₁₁⋅x₆ + x₁₀⋅x₃⋅x₄ - x₁₁⋅x₂⋅x₄ + x₂⋅x₇⋅x₈ \n──────────────────────────────────────────────────────────────────────────────\n₁₁⋅x₁₂⋅x₆ + x₁⋅x₁₁⋅x₁₄⋅x₄ - x₁⋅x₁₄⋅x₇⋅x₈ + x₁⋅x₁₅⋅x₆⋅x₈ - x₁₀⋅x₁₂⋅x₃⋅x₅ + x₁₀⋅\n \n -x₀⋅x₁₁⋅x₅ + x₀⋅x₇⋅x₉ + x₁⋅x₁₁⋅x₄ - x₁⋅x₇⋅x₈ - x₃⋅x₄⋅x₉ +\n──────────────────────────────────────────────────────────────────────────────\n₁₁⋅x₁₂⋅x₆ + x₁⋅x₁₁⋅x₁₄⋅x₄ - x₁⋅x₁₄⋅x₇⋅x₈ + x₁⋅x₁₅⋅x₆⋅x₈ - x₁₀⋅x₁₂⋅x₃⋅x₅ + x₁₀⋅\n \n x₀⋅x₁₀⋅x₅ - x₀⋅x₆⋅x₉ - x₁⋅x₁₀⋅x₄ + x₁⋅x₆⋅x₈ + x₂⋅x₄⋅x₉ - \n──────────────────────────────────────────────────────────────────────────────\n₁₁⋅x₁₂⋅x₆ + x₁⋅x₁₁⋅x₁₄⋅x₄ - x₁⋅x₁₄⋅x₇⋅x₈ + x₁⋅x₁₅⋅x₆⋅x₈ - x₁₀⋅x₁₂⋅x₃⋅x₅ + x₁₀⋅\n\n x₃⋅x₆⋅x₉ \n──────────────────────────────────────────────────────────────────────────────\nx₁₃⋅x₃⋅x₄ + x₁₁⋅x₁₂⋅x₂⋅x₅ - x₁₁⋅x₁₃⋅x₂⋅x₄ - x₁₂⋅x₂⋅x₇⋅x₉ + x₁₂⋅x₃⋅x₆⋅x₉ + x₁₃⋅\n \n- x₃⋅x₆⋅x₈ \n──────────────────────────────────────────────────────────────────────────────\nx₁₃⋅x₃⋅x₄ + x₁₁⋅x₁₂⋅x₂⋅x₅ - x₁₁⋅x₁₃⋅x₂⋅x₄ - x₁₂⋅x₂⋅x₇⋅x₉ + x₁₂⋅x₃⋅x₆⋅x₉ + x₁₃⋅\n \n x₃⋅x₅⋅x₈ \n──────────────────────────────────────────────────────────────────────────────\nx₁₃⋅x₃⋅x₄ + x₁₁⋅x₁₂⋅x₂⋅x₅ - x₁₁⋅x₁₃⋅x₂⋅x₄ - x₁₂⋅x₂⋅x₇⋅x₉ + x₁₂⋅x₃⋅x₆⋅x₉ + x₁₃⋅\n \nx₂⋅x₅⋅x₈ \n──────────────────────────────────────────────────────────────────────────────\nx₁₃⋅x₃⋅x₄ + x₁₁⋅x₁₂⋅x₂⋅x₅ - x₁₁⋅x₁₃⋅x₂⋅x₄ - x₁₂⋅x₂⋅x₇⋅x₉ + x₁₂⋅x₃⋅x₆⋅x₉ + x₁₃⋅\n\n \n──────────────────────────────────────────────────────────────────────────────\nx₂⋅x₇⋅x₈ - x₁₃⋅x₃⋅x₆⋅x₈ - x₁₄⋅x₃⋅x₄⋅x₉ + x₁₄⋅x₃⋅x₅⋅x₈ + x₁₅⋅x₂⋅x₄⋅x₉ - x₁₅⋅x₂⋅\n \n \n──────────────────────────────────────────────────────────────────────────────\nx₂⋅x₇⋅x₈ - x₁₃⋅x₃⋅x₆⋅x₈ - x₁₄⋅x₃⋅x₄⋅x₉ + x₁₄⋅x₃⋅x₅⋅x₈ + x₁₅⋅x₂⋅x₄⋅x₉ - x₁₅⋅x₂⋅\n \n \n──────────────────────────────────────────────────────────────────────────────\nx₂⋅x₇⋅x₈ - x₁₃⋅x₃⋅x₆⋅x₈ - x₁₄⋅x₃⋅x₄⋅x₉ + x₁₄⋅x₃⋅x₅⋅x₈ + x₁₅⋅x₂⋅x₄⋅x₉ - x₁₅⋅x₂⋅\n \n \n──────────────────────────────────────────────────────────────────────────────\nx₂⋅x₇⋅x₈ - x₁₃⋅x₃⋅x₆⋅x₈ - 
x₁₄⋅x₃⋅x₄⋅x₉ + x₁₄⋅x₃⋅x₅⋅x₈ + x₁₅⋅x₂⋅x₄⋅x₉ - x₁₅⋅x₂⋅\n\n ⎤\n─────⎥\nx₅⋅x₈⎥\n ⎥\n ⎥\n─────⎥\nx₅⋅x₈⎥\n ⎥\n ⎥\n─────⎥\nx₅⋅x₈⎥\n ⎥\n ⎥\n─────⎥\nx₅⋅x₈⎦\n\n" ]
[ 2 ]
[]
[]
[ "matrix", "python", "sympy", "tensor" ]
stackoverflow_0074658912_matrix_python_sympy_tensor.txt
Q: python + psycopg2.errors.SyntaxError: syntax error at end of input I'm getting this error message
cursor.execute(query, variables)
psycopg2.errors.SyntaxError: syntax error at end of input

My code
data = {
    'country': data['country'][x],
    'year': data['year'][x].astype(float),
    'month': data['month'][x].astype(float)
}

db_connection.execute(
    f"""
    INSERT INTO my_table (country, year, month)
    VALUES (%(country)s, %(year)s, %(month)s)
    ON CONFLICT (country, year, month)
    """,
    data,
)

A: You're missing a conflict_action in your SQL statement. See https://www.postgresql.org/docs/current/sql-insert.html#:~:text=ON%20CONFLICT%20DO%20NOTHING%20simply,can%20perform%20unique%20index%20inference. for details.
E.g. you might want ON CONFLICT (country, year, month) DO NOTHING
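For completeness, here is a minimal sketch of the corrected call, assuming the table's unique constraint really does cover (country, year, month); the connection DSN and the example row are hypothetical stand-ins, not taken from the question. (The f prefix on the query string in the question is also unnecessary, since nothing is interpolated into it.)

import psycopg2

# Hypothetical connection details -- substitute your own DSN.
conn = psycopg2.connect("dbname=mydb user=me")
cur = conn.cursor()

# Example row; in the question these values come from a dataframe.
row = {'country': 'FR', 'year': 2022.0, 'month': 11.0}

# ON CONFLICT requires a conflict_action such as DO NOTHING or DO UPDATE;
# naming only the conflict target, as in the question, is the syntax error.
cur.execute(
    """
    INSERT INTO my_table (country, year, month)
    VALUES (%(country)s, %(year)s, %(month)s)
    ON CONFLICT (country, year, month) DO NOTHING
    """,
    row,
)
conn.commit()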
python + psycopg2.errors.SyntaxError: syntax error at end of input
I'm getting this error message cursor.execute(query, variables) psycopg2.errors.SyntaxError: syntax error at end of input My code data = { 'country': data['country'][x], 'year': data['year'][x].astype(float), 'month': data['month'][x].astype(float) } db_connection.execute( f""" INSERT INTO my_table (country, year, month) VALUES (%(country)s, %(year)s, %(month)s) ON CONFLICT (country, year, month) """, data, )
[ "You're missing a conflict_action in your sql statement. See https://www.postgresql.org/docs/current/sql-insert.html#:~:text=ON%20CONFLICT%20DO%20NOTHING%20simply,can%20perform%20unique%20index%20inference. for details.\nEG You might want ON CONFLICT (country, year, month) DO NOTHING\n" ]
[ 2 ]
[]
[]
[ "postgresql", "python" ]
stackoverflow_0074662600_postgresql_python.txt
Q: How do I add images to a PyPI readme (that works on GitHub)? In my readme on GitHub I have several images from my project's source tree, which I reference successfully with directives like
.. image:: ./doc/source/_static/figs/moon_probe.png

I would also like to have these images appear when this same readme is generated in PyPI. How do I (a) ensure that images are present on PyPI for the readme to access and (b) formulate the .. image:: directive to access them?

A: PyPI will not read your package distributions for the image. You have to use the image's external link, for example:
.. image:: https://raw.githubusercontent.com/greyli/flask-share/master/images/demo.png

If you are using a Markdown description, use this:
![](https://raw.githubusercontent.com/greyli/flask-share/master/images/demo.png)

Be sure to replace the URL in the above examples with your image URL; here I use an image hosted by GitHub, and the real demo is on PyPI.
P.S. To get the image's raw link on GitHub, right-click the image and choose Copy image address.

A: Go to the image address in the GitHub repository. The path shown will be like this:
https://github.com/tensorbored/kds/blob/master/docs/_static/readme_lift.png

Change the blob term in the image address to raw:
https://github.com/tensorbored/kds/raw/master/docs/_static/readme_lift.png

A: If you have your images on GitHub, navigate to the image, then right-click the download button and copy the link address:

Then you can add it in your README.md file:
![](https://github.com/your_username/your_repository/raw/master/images/img2.png)

It should be rendered properly both on GitHub and PyPI.

A: Setting ?raw=True at the end of a GitHub image link seems to work.
Example:
![Sample image](https://github.com/usename/reponame/blob/master/sample.png?raw=True)

I had found this somewhere on the internet previously, but couldn't find it now. I'll credit the original author when I find it again.

A: The long_description_content_type should be text/x-rst if you use reStructuredText syntax (README.rst), which I assume you are using based on ".. image::". If you save the image on GitHub and use that external link for the image, then it works (typically something like raw.githubusercontent.com.... as described in one of the above answers). It does work, as shown in this PyPI package:
https://pypi.org/project/gammath-spot/
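Tying the last answer together with the external-link advice, a minimal setup.py sketch might look like the following; the package name, version, and README filename are illustrative only, not taken from the question.

from setuptools import setup

# PyPI renders the long description itself, so any images inside the
# README must use absolute URLs (e.g. raw.githubusercontent.com links);
# repository-relative paths like ./doc/... cannot be resolved there.
with open("README.rst", encoding="utf-8") as f:
    long_description = f.read()

setup(
    name="my-package",          # hypothetical project name
    version="0.1.0",
    long_description=long_description,
    long_description_content_type="text/x-rst",  # use "text/markdown" for a README.md
)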
How do I add images to a PyPI readme (that works on GitHub)?
In my readme on GitHub I have several images from my project's source tree, which I reference successfully with directives like
.. image:: ./doc/source/_static/figs/moon_probe.png

I would also like to have these images appear when this same readme is generated in PyPI. How do I (a) ensure that images are present on PyPI for the readme to access and (b) formulate the .. image:: directive to access them?
[ "PyPI will not read your package distributions for the image. You have to use the image's external link, for example:\n.. image:: https://raw.githubusercontent.com/greyli/flask-share/master/images/demo.png\n\nIf you are using Markdown description, use this:\n![](https://raw.githubusercontent.com/greyli/flask-share/master/images/demo.png)\n\nBe sure to replace the URL in the above examples with your image URL, here I use the image hosted by GitHub, the real demo is on PyPI.\nP.S. To get the image's raw link on GitHub, right-click the image and choose Copy image address.\n", "Go to the image address in the Github repository. The path shown will be like this:\nhttps://github.com/tensorbored/kds/blob/master/docs/_static/readme_lift.png\n\nChange the blob term in the image address to raw\nhttps://github.com/tensorbored/kds/raw/master/docs/_static/readme_lift.png\n\n", "If you have your images on Github, navigate to the image then right click on download button and copy link address:\n\nThen you can add it in your README.md file:\n![](https://github.com/your_username/your_repository/raw/master/images/img2.png)\n\nIt should be rendered properly both on Github and PyPi.\n", "Setting ?raw=True at the end of GitHub image link seems to work.\nexample:\n![Sample image](https://github.com/usename/reponame/blob/master/sample.png?raw=True)\n\nI had found this somewhere on the internet previously, but couldn't find it now. I'll credit the original author when I find it again.\n", "The long_description_content_type should be text/x-rst if you use restructured text syntax (README.rst). Assuming that is what you are using based on \".. image::\". If you save the image on GitHub and use that external link for the image then it works (typically something like raw.githubusercontent.com.... as described in one of the above answers).\nIt does work as shown in this pypi package:\nhttps://pypi.org/project/gammath-spot/\n" ]
[ 34, 8, 3, 3, 0 ]
[]
[]
[ "github", "pypi", "python", "readme", "restructuredtext" ]
stackoverflow_0041983209_github_pypi_python_readme_restructuredtext.txt
Q: unusual result from joining two Data Frames I have two tables: the first, named A:
id x
1 123
2 456
3 789

the second, named B:
id y
1 4
3 5
3 6

I need to join tables A and B with this result:
id x y
1 123 4
2 456
3 789 5
3 6

Of course, instead of the x and y columns I have a lot of columns, and the tables have a lot of rows, so the solution cannot be to use a "left join" and remove one value. I've no idea how to get that result in Python. Could you help me?

A: Use Pandas merge
import pandas as pd

# load the data from the two tables into pandas dataframes
df1 = pd.read_csv('A.csv')
df2 = pd.read_csv('B.csv')

# merge the dataframes on the 'id' column
merged_df = pd.merge(df1, df2, on='id')

print(merged_df)
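One caveat worth checking before adopting the answer above: pd.merge defaults to an inner join, which would drop id 2 (it has no match in B), while the output shown in the question looks like an outer join. A sketch closer to the asked-for result, using small inline frames in place of the CSV files for illustration:

import pandas as pd

a = pd.DataFrame({'id': [1, 2, 3], 'x': [123, 456, 789]})
b = pd.DataFrame({'id': [1, 3, 3], 'y': [4, 5, 6]})

# how='outer' keeps rows with no partner in the other table (id 2 here);
# unmatched cells come back as NaN, and id 3 is repeated once per match,
# with x filled in on both rows rather than left blank as in the sketch.
merged = pd.merge(a, b, on='id', how='outer')
print(merged)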
unusual result from joining two Data Frames
I have two tables: the first, named A:
id x
1 123
2 456
3 789

the second, named B:
id y
1 4
3 5
3 6

I need to join tables A and B with this result:
id x y
1 123 4
2 456
3 789 5
3 6

Of course, instead of the x and y columns I have a lot of columns, and the tables have a lot of rows, so the solution cannot be to use a "left join" and remove one value. I've no idea how to get that result in Python. Could you help me?
[ "Use Pandas merge\nimport pandas as pd\n\n# load the data from the two tables into pandas dataframes\ndf1 = pd.read_csv('A.csv')\ndf2 = pd.read_csv('B.csv')\n\n# merge the df using 'id' column\nmerged_df = pd.merge(df1, df2, on='id')\n\nprint(merged_df)\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074662569_pandas_python.txt