content: string (length 85 to 101k)
title: string (length 0 to 150)
question: string (length 15 to 48k)
answers: sequence
answers_scores: sequence
non_answers: sequence
non_answers_scores: sequence
tags: sequence
name: string (length 35 to 137)
Q: Is it bad practice to define *args and **kwargs for future inheritance as a default in Python? Say I have a project with a class Subscriber that I implement with the constructors and methods that I need at this time of my project. Later my functionality needs a subclass of Subscriber, let's say Gold_subscriber, whose methods have different needs for arguments, for example more arguments, keyword arguments, etc. I would like most parts of my program to be ignorant of whether instances are of a certain class or its subclass, so that I wouldn't have to do type checks or other work to determine the needed arguments for class methods. I think this could be achieved by routinely defining *args and **kwargs on methods in the class (and subclasses) - I would only use them in subclasses when I need them and ignore them when they are not relevant. If I need the arguments in the future, I don't have to tamper with most of the code calling the class methods. However, nobody ever seems to do this, and I agree that it sounds messy and awkward. There must be something I haven't thought of that leads to problems. Why isn't this done? A: I guess you are ok with unused **kwargs if those are at least commented in the code (space for future expansion) - but it is not the same for *args: adding positional arguments in specialized classes is much more problematic, due to possible conflicting arguments down the tree, and it would kill the possibility of multiple inheritance with different arguments as well. Also, if there is any chance your classes are composable, you should just call the methods in super() and pass **kwargs ahead anyway. And that applies even if you are writing a class you think of as the base for a hierarchy: you might come up later with mixin classes that could be inserted before your "base" when composing subclasses.
Is it bad practice to define *args and **kwargs for future inheritance as a default in Python?
Say I have a project with a class Subscriber that I implement with the constructors and methods that I need at this time of my project. Later my functionality needs a subclass of Subscriber, let's say Gold_subscriber, whose methods have different needs for arguments, for example more arguments, keyword arguments, etc. I would like most parts of my program to be ignorant of whether instances are of a certain class or its subclass, so that I wouldn't have to do type checks or other work to determine the needed arguments for class methods. I think this could be achieved by routinely defining *args and **kwargs on methods in the class (and subclasses) - I would only use them in subclasses when I need them and ignore them when they are not relevant. If I need the arguments in the future, I don't have to tamper with most of the code calling the class methods. However, nobody ever seems to do this, and I agree that it sounds messy and awkward. There must be something I haven't thought of that leads to problems. Why isn't this done?
[ "I guess you are ok with unused **kwargs if those are at least commented in the code (space for future expansion) - but it is not the same for *args: adding positional arguments in specialized classes is much more problematic, due to possible conflicting arguments down the tree, and it would kill the possibility of multiple inheritance with different arguments as well.\nAlso, if there is any chance your classes are composable, you should just call the methods in super() and pass **kwargs ahead anyway. And that applies even if you are writing a class you think of as the base for a hyerarchy: you might come up later with mixin classes, that could be inserted before your \"base\" when composing subclasses.\n" ]
[ 0 ]
[]
[]
[ "arguments", "inheritance", "keyword_argument", "python" ]
stackoverflow_0074625657_arguments_inheritance_keyword_argument_python.txt
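A minimal runnable sketch of the cooperative pattern the answer recommends: each method accepts **kwargs, consumes what it needs, and forwards the rest through super(). The class names follow the question, but the notify method and the priority/channel arguments are illustrative assumptions, not part of the original post:

```python
class Subscriber:
    def __init__(self, name, **kwargs):
        super().__init__(**kwargs)  # forward leftovers; keeps mixins workable
        self.name = name

    def notify(self, message, **kwargs):
        print(f"{self.name}: {message}")


class GoldSubscriber(Subscriber):
    def __init__(self, name, priority=1, **kwargs):
        super().__init__(name, **kwargs)
        self.priority = priority

    def notify(self, message, channel="email", **kwargs):
        # Extra keyword arguments are consumed here and never reach the base class.
        print(f"[{channel}] priority={self.priority}")
        super().notify(message, **kwargs)


# Calling code stays ignorant of the concrete class:
for sub in (Subscriber("alice"), GoldSubscriber("bob", priority=2)):
    sub.notify("new issue released")
```

Note that only keyword arguments are forwarded; as the answer points out, threading *args through a hierarchy this way is far more fragile.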
Q: QListWidget and Multiple Selection I have a regular QListWidget with couple of signals and slots hookedup. Everything works as I expect. I can update, retrieve, clear etc. But the UI wont support multiple selections. How do I 'enable' multiple selections for QListWidget? My limited experience with PyQt tells me I need to create a custom QListWidget by subclassing .. but what next? Google gave me C++ answers but I'm looking for Python http://www.qtforum.org/article/26320/qlistwidget-multiple-selection.html http://www.qtcentre.org/threads/11721-QListWidget-multi-selection A: Unfortunately I can't help with the Python specific syntax but you don't need to create any subclasses. After your QListWidget is created, call setSelectionMode() with one of the multiple selection types passed in, probably QAbstractItemView::ExtendedSelection is the one you want. There are a few variations on this mode that you may want to look at. In your slot for the itemSelectionChanged() signal, call selectedItems() to get a QList of QListWidgetItem pointers. A: For PyQT4 it's QListWidget.setSelectionMode(QtGui.QAbstractItemView.ExtendedSelection) A: Example of getting multiple selected values in listWidget with multiple selection. from PyQt5 import QtWidgets, QtCore class Test(QtWidgets.QDialog): def __init__(self, parent=None): super(Test, self).__init__(parent) self.layout = QtWidgets.QVBoxLayout() self.listWidget = QtWidgets.QListWidget() self.listWidget.setSelectionMode( QtWidgets.QAbstractItemView.ExtendedSelection ) self.listWidget.setGeometry(QtCore.QRect(10, 10, 211, 291)) for i in range(10): item = QtWidgets.QListWidgetItem("Item %i" % i) self.listWidget.addItem(item) self.listWidget.itemClicked.connect(self.printItemText) self.layout.addWidget(self.listWidget) self.setLayout(self.layout) def printItemText(self): items = self.listWidget.selectedItems() x = [] for i in range(len(items)): x.append(str(self.listWidget.selectedItems()[i].text())) print (x) if __name__ == "__main__": import sys app = QtWidgets.QApplication(sys.argv) form = Test() form.show() app.exec_() output :- A: Using PyQt5 you can set the SelectionMode of your QListWidget to allow multiple selections by using: from PyQt5 import QtWidgets QtWidgets.QListWidget.setSelectionMode(2) where SelectionMode = 0 => NoSelection SelectionMode = 1 => SingleSelection SelectionMode = 2 => MultiSelection SelectionMode = 3 => ExtendedSelection SelectionMode = 4 => ContiguousSelection Reference In Qt Creator you find this option here: A: In addition, you can use list comprehension to get the selected items, for example num_ITEMS=[item.text() for item in self.listWidget.selectedItems()] A: After searching for much time I found out they they changed this in PyQt6. Now you have to do the following: from PyQt6.QtWidgets import QListWidget, QAbstractItemView # ... all your other imports class MyWidget(QWidget): def __init__(self): super(MyWidget, self).__init__() self.layout = QHBoxLayout() self.my_list_view = QListWidget() self.my_list_view.setSelectionMode(QAbstractItemView.SelectionMode.MultiSelection) # also try QAbstractItemView.SelectionMode.ExtendedSelection if you want the user to press CTRL for multiple selection Basically you have to import the QAbstractItemView from the widgets and use the right selection mode
QListWidget and Multiple Selection
I have a regular QListWidget with a couple of signals and slots hooked up. Everything works as I expect. I can update, retrieve, clear, etc. But the UI won't support multiple selections. How do I 'enable' multiple selections for QListWidget? My limited experience with PyQt tells me I need to create a custom QListWidget by subclassing... but what next? Google gave me C++ answers but I'm looking for Python http://www.qtforum.org/article/26320/qlistwidget-multiple-selection.html http://www.qtcentre.org/threads/11721-QListWidget-multi-selection
[ "Unfortunately I can't help with the Python specific syntax but you don't need to create any subclasses. \nAfter your QListWidget is created, call setSelectionMode() with one of the multiple selection types passed in, probably QAbstractItemView::ExtendedSelection is the one you want. There are a few variations on this mode that you may want to look at.\nIn your slot for the itemSelectionChanged() signal, call selectedItems() to get a QList of QListWidgetItem pointers.\n", "For PyQT4 it's\nQListWidget.setSelectionMode(QtGui.QAbstractItemView.ExtendedSelection)\n\n", "Example of getting multiple selected values in listWidget with multiple selection.\n\nfrom PyQt5 import QtWidgets, QtCore\nclass Test(QtWidgets.QDialog):\n def __init__(self, parent=None):\n super(Test, self).__init__(parent)\n self.layout = QtWidgets.QVBoxLayout()\n self.listWidget = QtWidgets.QListWidget()\n self.listWidget.setSelectionMode(\n QtWidgets.QAbstractItemView.ExtendedSelection\n )\n self.listWidget.setGeometry(QtCore.QRect(10, 10, 211, 291))\n for i in range(10):\n item = QtWidgets.QListWidgetItem(\"Item %i\" % i)\n self.listWidget.addItem(item)\n self.listWidget.itemClicked.connect(self.printItemText)\n self.layout.addWidget(self.listWidget)\n self.setLayout(self.layout)\n\n def printItemText(self):\n items = self.listWidget.selectedItems()\n x = []\n for i in range(len(items)):\n x.append(str(self.listWidget.selectedItems()[i].text()))\n\n print (x)\n\nif __name__ == \"__main__\":\n import sys\n app = QtWidgets.QApplication(sys.argv)\n form = Test()\n form.show()\n app.exec_()\n\noutput :-\n\n", "Using PyQt5 you can set the SelectionMode of your QListWidget to allow multiple selections by using:\nfrom PyQt5 import QtWidgets \n\n\nQtWidgets.QListWidget.setSelectionMode(2)\n\nwhere\n\nSelectionMode = 0 => NoSelection\nSelectionMode = 1 => SingleSelection\nSelectionMode = 2 => MultiSelection\nSelectionMode = 3 => ExtendedSelection\nSelectionMode = 4 => ContiguousSelection\n\nReference\nIn Qt Creator you find this option here:\n\n", "In addition, you can use list comprehension to get the selected items, for example\nnum_ITEMS=[item.text() for item in self.listWidget.selectedItems()]\n\n", "After searching for much time I found out they they changed this in PyQt6. Now you have to do the following:\nfrom PyQt6.QtWidgets import QListWidget, QAbstractItemView\n# ... all your other imports\nclass MyWidget(QWidget):\ndef __init__(self):\n super(MyWidget, self).__init__()\n self.layout = QHBoxLayout()\n self.my_list_view = QListWidget()\n self.my_list_view.setSelectionMode(QAbstractItemView.SelectionMode.MultiSelection) # also try QAbstractItemView.SelectionMode.ExtendedSelection if you want the user to press CTRL for multiple selection\n\nBasically you have to import the QAbstractItemView from the widgets and use the right selection mode\n" ]
[ 34, 30, 13, 5, 4, 0 ]
[]
[]
[ "pyqt", "python", "qlistwidget", "user_interface" ]
stackoverflow_0004008649_pyqt_python_qlistwidget_user_interface.txt
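The PyQt6 answer above is not self-contained, so here is a minimal runnable sketch of the same idea (assuming PyQt6; on PyQt5 the enum is the unscoped QAbstractItemView.ExtendedSelection and the event loop is started with app.exec_()):

```python
import sys
from PyQt6.QtWidgets import QApplication, QListWidget, QAbstractItemView

app = QApplication(sys.argv)

lst = QListWidget()
lst.addItems([f"Item {i}" for i in range(10)])
# MultiSelection toggles items on plain clicks; ExtendedSelection uses Ctrl/Shift.
lst.setSelectionMode(QAbstractItemView.SelectionMode.ExtendedSelection)

def show_selection():
    print([item.text() for item in lst.selectedItems()])

lst.itemSelectionChanged.connect(show_selection)
lst.show()
sys.exit(app.exec())
```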
Q: No problems have been detected in the workspace so far I am using, and have been using, Visual Studio to develop Python code. Previously when I saved a file it would review the code and provide warnings and errors. Now I only get "No problems have been detected in the workspace so far". I have looked through settings but cannot find anything unchecked that is relevant. I have intentional errors in the code but get no error messages. No idea what changed. A: Maybe you can switch the linter, such as from 'pylint' to 'flake8' or another linter. If you don't know how to switch, you can refer to this page. If it still doesn't work, maybe some extension you installed caused the problem. Try to disable the extensions, and remember to restart VSCode. You can refer to this and this page for some inspiration. A: You should open Visual Studio Code from the Developer Command Prompt. Using a Developer Command Prompt (and not a basic/classic/regular console) helps to make sure that all the paths and environment variables are set up correctly. A: You can go to Files => Preferences => Settings => Extensions => C/C++ and click "v" for every empty place. A: I had the same issue, you must have turned the error squiggles 'off'. Head to the settings.json file in your workspace (under the .vscode folder) and change "C_Cpp.errorSquiggles" to "Enabled". It might work.
No problems have been detected in the workspace so far
I am using, and have been using, Visual Studio to develop Python code. Previously when I saved a file it would review the code and provide warnings and errors. Now I only get "No problems have been detected in the workspace so far". I have looked through settings but cannot find anything unchecked that is relevant. I have intentional errors in the code but get no error messages. No idea what changed.
[ "Maybe you can switch the linter, such as from 'pylint' to 'flake8' or other switches. If you don't know how to switch, you can refer to this page.\nIf it still doesn't work, maybe some extension you installed caused the problem. Try to disable the extensions, and remember to restart the VSCode. You can refer to\nthis and this page for some inspire.\n", "You should open the Visual Studio Code from the Developer Command Prompt. Using a Developer Command Prompt (and not a basic/classic/regular console) helps to make sure that all the paths and environment variables are set up correctly.\n", "You can click to Files => Preferences => Setting => Extensions => C/C++ and clicl \"v\" for all empty place.\n", "I had the same issue, you must have turned the error squiggles 'off'.\nHead to the settings.json file in your workspace (under vscode folder) and change \"C_Cpp.errorSquiggles\" to \"Enabled\".\nMight Work.\n" ]
[ 0, 0, 0, 0 ]
[]
[]
[ "python", "visual_studio_code", "vscode_settings" ]
stackoverflow_0062720400_python_visual_studio_code_vscode_settings.txt
Q: ModuleNotFoundError: No module named 'dlt' error when running Delta Live Tables Python notebook When attempting to create a Python notebook and follow the various examples for setting up databricks delta live tables, you will immediately be met with the following error if you attempt to run your notebook: ModuleNotFoundError: No module named 'dlt' A self-sufficient developer may then attempt to resolve this with a "magic command" to install said module: %pip install dlt But alas, this dlt package has nothing to do with databricks delta live tables. Running your code will now raise the error: AttributeError: module 'dlt' has no attribute 'table' (or a similar error, depending on the first dlt class member you attempted to use) What's going on? How do you run your Delta Live Tables pipeline setup code? A: Gotcha! While you are expected to compose your delta live tables setup code in the databricks notebook environment, you are not meant to run it there. The only supported way to run your code is to head on over to the pipelines interface to run it. End of Answer. Although.... This is bad news for developers who wrote a lot of code and aren't even sure if it's syntactically valid (since the databricks IDE has only limited real-time feedback). You would now be stuck waiting for your pipeline to spin up resources, start, fail, then go through the stack trace to try and figure out where you went wrong. You're stuck with this workflow to work through logical errors, but you don't have to be stuck with it while working through syntactical errors. Here is a workaround I came up with: try: import dlt # When run in a pipeline, this package will exist (no way to import it here) except ImportError: class dlt: # "Mock" the dlt class so that we can syntax check the rest of our python in the databricks notebook editor def table(comment, **options): # Mock the @dlt.table attribute so that it is seen as syntactically valid below def _(f): pass return _; @dlt.table(comment = "Raw Widget Data") def widgets_raw(): return ( spark.readStream.format("cloudFiles") .option("cloudFiles.format", "csv").option("header", "true").option("sep", "|") .load("/mnt/LandingZone/EMRAW/widgets") ) The trick here is I am mocking out the dlt class to the bare minimum to pass syntax checks, so the rest of my code can be verified. The annoying thing is that sql notebooks don't have this problem, when you run them, you get the pleasing message: This Delta Live Tables query is syntactically valid, but you must create a pipeline in order to define and populate your table. Unfortunately, I find sql notebooks limiting in other ways, so pick your poison. Either way, hopefully it's clear that your code won't actually do anything until you run it in a pipeline. The notebook is just for setup, and it's nice to get as many syntax checks out of the way upfront before you have to start troubleshooting from the pipelines UI.
ModuleNotFoundError: No module named 'dlt' error when running Delta Live Tables Python notebook
When attempting to create a Python notebook and follow the various examples for setting up databricks delta live tables, you will immediately be met with the following error if you attempt to run your notebook: ModuleNotFoundError: No module named 'dlt' A self-sufficient developer may then attempt to resolve this with a "magic command" to install said module: %pip install dlt But alas, this dlt package has nothing to do with databricks delta live tables. Running your code will now raise the error: AttributeError: module 'dlt' has no attribute 'table' (or a similar error, depending on the first dlt class member you attempted to use) What's going on? How do you run your Delta Live Tables pipeline setup code?
[ "Gotcha! While you are expected to compose your delta live tables setup code in the databricks notebook environment, you are not meant to run it there. The only supported way to run your code is to head on over to the pipelines interface to run it.\nEnd of Answer.\nAlthough....\n\nThis is bad news for developers who wrote a lot of code and aren't even sure if it's syntactically valid (since the databricks IDE has only limited real-time feedback). You would now be stuck waiting for your pipeline to spin up resources, start, fail, then go through the stack trace to try and figure out where you went wrong. You're stuck with this workflow to work through logical errors, but you don't have to be stuck with it while working through syntactical errors.\nHere is a workaround I came up with:\ntry:\n import dlt # When run in a pipeline, this package will exist (no way to import it here)\nexcept ImportError:\n class dlt: # \"Mock\" the dlt class so that we can syntax check the rest of our python in the databricks notebook editor\n def table(comment, **options): # Mock the @dlt.table attribute so that it is seen as syntactically valid below\n def _(f):\n pass\n return _;\n\[email protected](comment = \"Raw Widget Data\")\ndef widgets_raw():\n return (\n spark.readStream.format(\"cloudFiles\")\n .option(\"cloudFiles.format\", \"csv\").option(\"header\", \"true\").option(\"sep\", \"|\")\n .load(\"/mnt/LandingZone/EMRAW/widgets\")\n )\n\nThe trick here is I am mocking out the dlt class to the bare minimum to pass syntax checks, so the rest of my code can be verified.\nThe annoying thing is that sql notebooks don't have this problem, when you run them, you get the pleasing message:\n\nThis Delta Live Tables query is syntactically valid, but you must create a pipeline in order to define and populate your table.\n\nUnfortunately, I find sql notebooks limiting in other ways, so pick your poison.\nEither way, hopefully it's clear that your code won't actually do anything until you run it in a pipeline. The notebook is just for setup, and it's nice to get as many syntax checks out of the way upfront before you have to start troubleshooting from the pipelines UI.\n" ]
[ 0 ]
[]
[]
[ "azure_databricks", "databricks", "delta_live_tables", "pyspark", "python" ]
stackoverflow_0074646723_azure_databricks_databricks_delta_live_tables_pyspark_python.txt
Q: Creating a BAT file for python script How can I create a simple BAT file that will run my python script located at C:\somescript.py? A: c:\python27\python.exe c:\somescript.py %* A: Open a command line (⊞ Win+R, cmd, ↡ Enter) and type python -V, ↡ Enter. You should get a response back, something like Python 2.7.1. If you do not, you may not have Python installed. Fix this first. Once you have Python, your batch file should look like @echo off python c:\somescript.py %* pause This will keep the command window open after the script finishes, so you can see any errors or messages. Once you are happy with it you can remove the 'pause' line and the command window will close automatically when finished. A: Here's how you can put both batch code and the python one in single file: 0<0# : ^ ''' @echo off echo batch code python "%~f0" %* exit /b 0 ''' print("python code") the ''' respectively starts and ends python multi line comments. 0<0# : ^ is more interesting - due to redirection priority in batch it will be interpreted like :0<0# ^ by the batch script which is a label which execution will be not displayed on the screen. The caret at the end will escape the new line and second line will be attached to the first line.For python it will be 0<0 statement and a start of inline comment. The credit goes to siberia-man A: Just simply open a batch file that contains this two lines in the same folder of your python script: somescript.py pause A: If you've added Python to your PATH then you can also simply run it like this. python somescript.py A: You can use python code directly in batch file, https://gist.github.com/jadient/9849314. @echo off & python -x "%~f0" %* & goto :eof import sys print("Hello World!") See explanation, Python command line -x option. A: --- xxx.bat --- @echo off set NAME1="Marc" set NAME2="Travis" py -u "CheckFile.py" %NAME1% %NAME2% echo %ERRORLEVEL% pause --- yyy.py --- import sys import os def names(f1,f2): print (f1) print (f2) res= True if f1 == "Travis": res= False return res if __name__ == "__main__": a = sys.argv[1] b = sys.argv[2] c = names(a, b) if c: sys.exit(1) else: sys.exit(0) A: Similar to npocmaka's solution, if you are having more than one line of batch code in your batch file besides the python code, check this out: http://lallouslab.net/2017/06/12/batchography-embedding-python-scripts-in-your-batch-file-script/ @echo off rem = """ echo some batch commands echo another batch command python -x "%~f0" %* echo some more batch commands goto :eof """ # Anything here is interpreted by Python import platform import sys print("Hello world from Python %s!\n" % platform.python_version()) print("The passed arguments are: %s" % sys.argv[1:]) What this code does is it runs itself as a python file by putting all the batch code into a multiline string. The beginning of this string is in a variable called rem, to make the batch code read it as a comment. The first line containing @echo off is ignored in the python code because of the -x parameter. it is important to mention that if you want to use \ in your batch code, for example in a file path, you'll have to use r"""...""" to surround it to use it as a raw string without escape sequences. @echo off rem = r""" ... 
""" A: This is the syntax: "python.exe path""python script path"pause "C:\Users\hp\AppData\Local\Programs\Python\Python37\python.exe" "D:\TS_V1\TS_V2.py" pause Basically what will be happening the screen will appear for seconds and then go off take care of these 2 things: While saving the file you give extension as bat file but save it as a txt file and not all files and Encoding ANSI If the program still doesn't run save the batch file and the python script in same folder and specify the path of this folder in Environment Variables. A: If this is a BAT file in a different directory than the current directory, you may see an error like "python: can't open file 'somescript.py': [Errno 2] No such file or directory". This can be fixed by specifying an absolute path to the BAT file using %~dp0 (the drive letter and path of that batch file). @echo off python %~dp0\somescript.py %* (This way you can ignore the c:\ or whatever, because perhaps you may want to move this script) A: ECHO OFF set SCRIPT_DRIVE = %1 set SCRIPT_DIRECTORY = %2 %SCRIPT_DRIVE% cd %SCRIPT_DRIVE%%SCRIPT_DIRECTORY% python yourscript.py` A: i did this and works: i have my project in D: and my batch file is in the desktop, if u have it in the same drive just ignore the first line and change de D directory in the second line in the second line change the folder of the file, put your folder in the third line change the name of the file D: cd D:\python_proyects\example_folder\ python example_file.py A: Use any text editor and save the following code as runit.bat @echo off title Execute Python [NarendraDwivedi.Org] :main echo. set/p filename=File Name : echo. %filename% goto main Now place this file in the folder where python script is present. Run this file and enter python script's file name to run python program using batch file (cmd) Reference : Narendra Dwivedi - How To Run Python Using Batch File A: Create an empty file and name it "run.bat" In my case i use "py" because it's more convenient, try: C: cd C:\Users\user\Downloads\python_script_path py your_script.py A: @echo off call C:\Users\[user]\Anaconda3\condabin\conda activate base "C:\Users\[user]\Anaconda3\python.exe" "C:\folder\[script].py"
Creating a BAT file for python script
How can I create a simple BAT file that will run my python script located at C:\somescript.py?
[ "c:\\python27\\python.exe c:\\somescript.py %*\n\n", "Open a command line (⊞ Win+R, cmd, ↡ Enter)\nand type python -V, ↡ Enter.\nYou should get a response back, something like Python 2.7.1.\nIf you do not, you may not have Python installed. Fix this first.\nOnce you have Python, your batch file should look like\n@echo off\npython c:\\somescript.py %*\npause\n\nThis will keep the command window open after the script finishes, so you can see any errors or messages. Once you are happy with it you can remove the 'pause' line and the command window will close automatically when finished.\n", "Here's how you can put both batch code and the python one in single file:\n0<0# : ^\n''' \n@echo off\necho batch code\npython \"%~f0\" %*\nexit /b 0\n'''\n\nprint(\"python code\")\n\nthe ''' respectively starts and ends python multi line comments.\n0<0# : ^ is more interesting - due to redirection priority in batch it will be interpreted like :0<0# ^ by the batch script which is a label which execution will be not displayed on the screen. The caret at the end will escape the new line and second line will be attached to the first line.For python it will be 0<0 statement and a start of inline comment.\nThe credit goes to siberia-man\n", "Just simply open a batch file that contains this two lines in the same folder of your python script:\nsomescript.py\npause\n\n", "If you've added Python to your PATH then you can also simply run it like this.\npython somescript.py\n\n", "You can use python code directly in batch file,\nhttps://gist.github.com/jadient/9849314.\n@echo off & python -x \"%~f0\" %* & goto :eof\nimport sys\nprint(\"Hello World!\")\n\nSee explanation, Python command line -x option.\n", "--- xxx.bat ---\n@echo off\nset NAME1=\"Marc\"\nset NAME2=\"Travis\"\npy -u \"CheckFile.py\" %NAME1% %NAME2%\necho %ERRORLEVEL%\npause\n\n--- yyy.py ---\nimport sys\nimport os\ndef names(f1,f2):\n\n print (f1)\n print (f2)\n res= True\n if f1 == \"Travis\":\n res= False\n return res\n\nif __name__ == \"__main__\":\n a = sys.argv[1]\n b = sys.argv[2]\n c = names(a, b) \n if c:\n sys.exit(1)\n else:\n sys.exit(0) \n\n", "Similar to npocmaka's solution, if you are having more than one line of batch code in your batch file besides the python code, check this out: http://lallouslab.net/2017/06/12/batchography-embedding-python-scripts-in-your-batch-file-script/\n@echo off\nrem = \"\"\"\necho some batch commands\necho another batch command\npython -x \"%~f0\" %*\necho some more batch commands\ngoto :eof\n\n\"\"\"\n# Anything here is interpreted by Python\nimport platform\nimport sys\nprint(\"Hello world from Python %s!\\n\" % platform.python_version())\nprint(\"The passed arguments are: %s\" % sys.argv[1:])\n\nWhat this code does is it runs itself as a python file by putting all the batch code into a multiline string. The beginning of this string is in a variable called rem, to make the batch code read it as a comment. 
The first line containing @echo off is ignored in the python code because of the -x parameter.\nit is important to mention that if you want to use \\ in your batch code, for example in a file path, you'll have to use r\"\"\"...\"\"\" to surround it to use it as a raw string without escape sequences.\n@echo off\nrem = r\"\"\"\n...\n\"\"\"\n\n", "This is the syntax:\n\"python.exe path\"\"python script path\"pause\n\"C:\\Users\\hp\\AppData\\Local\\Programs\\Python\\Python37\\python.exe\" \"D:\\TS_V1\\TS_V2.py\"\npause\n\nBasically what will be happening the screen will appear for seconds and then go off take care of these 2 things:\n\nWhile saving the file you give extension as bat file but save it as a txt file and not all files and Encoding ANSI\nIf the program still doesn't run save the batch file and the python script in same folder and specify the path of this folder in Environment Variables.\n\n", "If this is a BAT file in a different directory than the current directory, you may see an error like \"python: can't open file 'somescript.py': [Errno 2] No such file or directory\". This can be fixed by specifying an absolute path to the BAT file using %~dp0 (the drive letter and path of that batch file).\n@echo off\npython %~dp0\\somescript.py %*\n\n(This way you can ignore the c:\\ or whatever, because perhaps you may want to move this script)\n", "ECHO OFF\nset SCRIPT_DRIVE = %1\nset SCRIPT_DIRECTORY = %2\n%SCRIPT_DRIVE%\ncd %SCRIPT_DRIVE%%SCRIPT_DIRECTORY%\npython yourscript.py`\n\n", "i did this and works:\ni have my project in D: and my batch file is in the desktop, if u have it in the same drive just ignore the first line and change de D directory in the second line\nin the second line change the folder of the file, put your folder \nin the third line change the name of the file \nD: \ncd D:\\python_proyects\\example_folder\\ \npython example_file.py\n", "Use any text editor and save the following code as runit.bat\n@echo off\ntitle Execute Python [NarendraDwivedi.Org]\n:main\necho.\nset/p filename=File Name :\necho. \n%filename%\ngoto main\n\nNow place this file in the folder where python script is present. Run this file and enter python script's file name to run python program using batch file (cmd)\nReference : Narendra Dwivedi - How To Run Python Using Batch File\n", "Create an empty file and name it \"run.bat\"\nIn my case i use \"py\" because it's more convenient, try:\nC:\ncd C:\\Users\\user\\Downloads\\python_script_path\npy your_script.py\n\n", "@echo off\ncall C:\\Users\\[user]\\Anaconda3\\condabin\\conda activate base\n\"C:\\Users\\[user]\\Anaconda3\\python.exe\" \"C:\\folder\\[script].py\"\n\n" ]
[ 71, 60, 19, 14, 6, 5, 3, 2, 2, 1, 0, 0, 0, 0, 0 ]
[ "start xxx.py\nYou can use this for some other file types.\n" ]
[ -1 ]
[ "batch_file", "python" ]
stackoverflow_0004571244_batch_file_python.txt
Q: Python: request url and get contents I am trying to get transaction history for the following address 9QgXqrgdbVU8KcpfskqJpAXKzbaYQJecgMAruSWoXDkM from the https://explorer.solana.com website. I have tried url="https://explorer.solana.com/address/9QgXqrgdbVU8KcpfskqJpAXKzbaYQJecgMAruSWoXDkM" output = requests.get(url).text print(output) However this gives me raw html output. How can I get the transactions from the url? A: The history data is loaded from external URL via JavaScript. You can use requests module to simulate this call: import requests import pandas as pd api_url = "https://explorer-api.mainnet-beta.solana.com/" payload = { "id": "xxx", "jsonrpc": "2.0", "method": "getConfirmedSignaturesForAddress2", "params": ["9QgXqrgdbVU8KcpfskqJpAXKzbaYQJecgMAruSWoXDkM", {"limit": 25}], } data = requests.post(api_url, json=payload).json() df = pd.DataFrame(data["result"]) df["blockTime"] = pd.to_datetime(df["blockTime"], unit="s") print(df.to_markdown(index=False)) Prints: blockTime confirmationStatus err memo signature slot 2022-11-30 12:36:01 finalized 4Pb3aMuiNGx1Xavj5GyeZHKAWvDp1BJAYXS3s1JjTx7uRr2qGNLGMkgPvicvMJGvKdt7gC5hTcDA822qu4th1MvR 164077374 2022-11-23 13:05:04 finalized 5vWbZozXwTmNeqCwLDu8pgLKhf8gSbyAVuHamzDMGh3amYjtYZ5V647wkDigYQ4aRnSKNGNGzYYPtksHgYBeuF1b 162708675 2022-10-17 17:37:48 finalized 2wYEAAZPBPu1ropcTV4k78BB6FbemhAQmnSRvRcuLHVz9j3Kh5tufMokJ5j3JsCsL8vRArUa3HtYP67bgNGJdLGk 155859139 2022-09-18 09:34:45 finalized [209] Please consider to delegate with ManyStake in order to decentralize the solana network. OG delegator will be rewarded soon with extra MEV rewards. Our vote account AuBB9st3RqhHBkzZgBSm6SVnHZNJQSHeBWCSkik4bzdA 48r1hcNHn4vw9kzd7fJnj4HWrLqzSsDfn6Ap6Dpbb2qwj6WL1xbYmah1CifVVBhwDZDqg6fsmUHqHW9t3m454pX8 151207313 2022-06-19 18:35:55 finalized 5v7T4aYM6dN9SK3V7Y6VS59aBcWabyNdyxLLJuKecZrTt2VqT1kW6MHobYxjGv4DHb8SYewB3Y1ZNhWML2pCFCGL 138178511 2022-06-19 18:33:24 finalized 2ULTeYi3tqZi9xDtnPnNPYjYRGt8a8EcoAxZG2uGNkHvrNWiF3evbcSPmxN3LvGU9g1j8n9KYVqRmtzLTwhWfhr7 138178280 2022-06-19 18:19:04 finalized 4YnvsFtHPGzgijPB7KPEsBqj4o1ZigimhiPFuoyrCv8NLivVipNFp6RTmsNhkoeCYeDhWcK3ovkCqMmjYfQj711V 138176965 2022-06-19 18:15:55 finalized EVTjZVQYghRK4bdMwCh6PnazRXZgSjCWvgu6XMqAiaEverkTNtkpfw2E7sRXPe973n1LnzmRnepNkCggLxZZS9S 138176698 2022-06-19 17:55:54 finalized 2P9nSaGMfs2Daks9kGEnUp3R9UNpLZMFTudAjaVgDHJcwwxp1vDgfsn582bZb7HoDJTCrKCkBd3srcuUfXYAH1Tr 138174836 2022-06-19 17:41:14 finalized 3Y7bmsHo3fTD3fqryPy37u6MMDGs9KGk7MLFbxw8sDx15j2sDPtRd4w8QjWWCoecLQQZHMT37Evt4D2oX1wd2sL 138173447 2022-06-19 17:40:46 finalized 2ftBbaqKrMintETRxhH17HvDjEgX5W437BKZv8o4umeNXcs2B9WQJEm9kJEh2LfNeQMCgMkjDZXz8DziLMKvtHk3 138173398 2022-06-19 17:38:41 finalized 7Bdy3Ah78NmRmpbFsmKtFPiqrQGDpXZ2BBr83crivSso7wwbfVzEv4TrdpurYXWJV6X9j9hKfAvLWB3769GobJA 138173207 2022-06-19 17:35:41 finalized 61iQ9BSQ18X54nT2ZzRCUtoUyXpTKS363QZFnXBgERAtUzLscpmj8oxTfrgxn7zwemNEKf1WB2yLhn3JgfmG96MY 138172922 2022-06-19 17:32:16 finalized 5ueuPr4qZwHxhDSWRQHZkrNpTSDa6fR9LiGyokFJ1que6rpgRaSt9AX2S7L89KLztWKmehHKwxDpQywbvXbk6WQB 138172617 2022-06-19 17:26:27 finalized EHRyD9ibJkt6cGgHbKbJqTuBj7bCzxeWGLBwuC8TgucvfKgv3ejqCRkG6sqUDVxQvsRWdGkQpagojRo2vajfDBa 138172080 2022-06-19 17:25:59 finalized 3XxSXVQCKpZH48upHzYSj4nJyhWw6gpPiA6vPW8JyWwfniCh2Z29qokxQi2mtjyUiEWqthix6LKNBRpJhAQb2u8v 138172029 2022-06-19 17:25:46 finalized 3DjUK7ENHYQLWGQ1Rw3bMqCjuzt4M6H1aPxWMVAU37vr2r64F2wXFgFsyeXqSiQ7JA1biQY691iwoEKDj5iztzg6 138172003 2022-06-19 17:25:14 finalized 
gQwiKLCuvvGDoEoniU6QR98PHTAMDyz7DVbxaYXRYWsPaC7Ekbyt9PDEpDNo7kueSddiN6V1jrwYoN3Rfir3x8w 138171956 2022-06-19 17:22:02 finalized 52dFWxTaKL2h1MekafD12F3QWmnrDbsTxwEAMbZyiDhVNy7miVbwxX5d8NU7WioQpotG4uJ4txoFRRe2irRBqBfh 138171653 2022-06-19 17:21:32 finalized 5yGwSWUZiaY5xAeSy5BFekJoSF3YiPiYh6GC5hyaPCDDQ3WEXWr9EzqWCf1g47ViUHekEMrsRDWuS8XN3k52hFVf 138171595 2022-06-19 17:20:58 finalized 7SkpAQ1JNQYTUvR545pyWBQeHGEpisbgYxsUr1B2fnZm95CmGXUF4Y2UMVQbL1gbtd1mKaTTVCY3RBBpWcchaLD 138171541 2022-06-19 17:20:38 finalized rsYUWj8irRFk15VxkFnjcJChDGyCmpe4MrCgrAyqt9ZoXoUcQsz7wiUKWYaU9qbds7V69jDxxhxbSzSJneJsfzA 138171506 2022-06-19 17:20:12 finalized 4hjqfREeb22rSSuwHAZ5U3g2qZXVi4ejahZxBN95ecjfV3we1ELWP1ezJsCCaSnj9zygzzhWJrzZRpCHB3Nfcnw1 138171464 2022-06-19 17:09:47 finalized 2TXf2fwsYfCCTrkh4zfZSGhbEuZcLrKPwn8Ai4ZTpntKFS6FDU9YahRuT5cEYqYc1RJ5fMMkUtYtcbNv8ogAzVnM 138170550 2022-06-19 17:09:31 finalized wDMRYiqHPNtYy6WJuA1kyaWjLr7RsnemnasEYufpaxdCmroNW4dBFLPLCQnYDnyZha9uZUSgbhC91zQT3E55bpP 138170527
Python: request url and get contents
I am trying to get transaction history for the following address 9QgXqrgdbVU8KcpfskqJpAXKzbaYQJecgMAruSWoXDkM from the https://explorer.solana.com website. I have tried url="https://explorer.solana.com/address/9QgXqrgdbVU8KcpfskqJpAXKzbaYQJecgMAruSWoXDkM" output = requests.get(url).text print(output) However this gives me raw html output. How can I get the transactions from the url?
[ "The history data is loaded from external URL via JavaScript. You can use requests module to simulate this call:\nimport requests\nimport pandas as pd\n\n\napi_url = \"https://explorer-api.mainnet-beta.solana.com/\"\n\npayload = {\n \"id\": \"xxx\",\n \"jsonrpc\": \"2.0\",\n \"method\": \"getConfirmedSignaturesForAddress2\",\n \"params\": [\"9QgXqrgdbVU8KcpfskqJpAXKzbaYQJecgMAruSWoXDkM\", {\"limit\": 25}],\n}\n\ndata = requests.post(api_url, json=payload).json()\n\ndf = pd.DataFrame(data[\"result\"])\ndf[\"blockTime\"] = pd.to_datetime(df[\"blockTime\"], unit=\"s\")\n\nprint(df.to_markdown(index=False))\n\nPrints:\n\n\n\n\nblockTime\nconfirmationStatus\nerr\nmemo\nsignature\nslot\n\n\n\n\n2022-11-30 12:36:01\nfinalized\n\n\n4Pb3aMuiNGx1Xavj5GyeZHKAWvDp1BJAYXS3s1JjTx7uRr2qGNLGMkgPvicvMJGvKdt7gC5hTcDA822qu4th1MvR\n164077374\n\n\n2022-11-23 13:05:04\nfinalized\n\n\n5vWbZozXwTmNeqCwLDu8pgLKhf8gSbyAVuHamzDMGh3amYjtYZ5V647wkDigYQ4aRnSKNGNGzYYPtksHgYBeuF1b\n162708675\n\n\n2022-10-17 17:37:48\nfinalized\n\n\n2wYEAAZPBPu1ropcTV4k78BB6FbemhAQmnSRvRcuLHVz9j3Kh5tufMokJ5j3JsCsL8vRArUa3HtYP67bgNGJdLGk\n155859139\n\n\n2022-09-18 09:34:45\nfinalized\n\n[209] Please consider to delegate with ManyStake in order to decentralize the solana network. OG delegator will be rewarded soon with extra MEV rewards. Our vote account AuBB9st3RqhHBkzZgBSm6SVnHZNJQSHeBWCSkik4bzdA\n48r1hcNHn4vw9kzd7fJnj4HWrLqzSsDfn6Ap6Dpbb2qwj6WL1xbYmah1CifVVBhwDZDqg6fsmUHqHW9t3m454pX8\n151207313\n\n\n2022-06-19 18:35:55\nfinalized\n\n\n5v7T4aYM6dN9SK3V7Y6VS59aBcWabyNdyxLLJuKecZrTt2VqT1kW6MHobYxjGv4DHb8SYewB3Y1ZNhWML2pCFCGL\n138178511\n\n\n2022-06-19 18:33:24\nfinalized\n\n\n2ULTeYi3tqZi9xDtnPnNPYjYRGt8a8EcoAxZG2uGNkHvrNWiF3evbcSPmxN3LvGU9g1j8n9KYVqRmtzLTwhWfhr7\n138178280\n\n\n2022-06-19 18:19:04\nfinalized\n\n\n4YnvsFtHPGzgijPB7KPEsBqj4o1ZigimhiPFuoyrCv8NLivVipNFp6RTmsNhkoeCYeDhWcK3ovkCqMmjYfQj711V\n138176965\n\n\n2022-06-19 18:15:55\nfinalized\n\n\nEVTjZVQYghRK4bdMwCh6PnazRXZgSjCWvgu6XMqAiaEverkTNtkpfw2E7sRXPe973n1LnzmRnepNkCggLxZZS9S\n138176698\n\n\n2022-06-19 17:55:54\nfinalized\n\n\n2P9nSaGMfs2Daks9kGEnUp3R9UNpLZMFTudAjaVgDHJcwwxp1vDgfsn582bZb7HoDJTCrKCkBd3srcuUfXYAH1Tr\n138174836\n\n\n2022-06-19 17:41:14\nfinalized\n\n\n3Y7bmsHo3fTD3fqryPy37u6MMDGs9KGk7MLFbxw8sDx15j2sDPtRd4w8QjWWCoecLQQZHMT37Evt4D2oX1wd2sL\n138173447\n\n\n2022-06-19 17:40:46\nfinalized\n\n\n2ftBbaqKrMintETRxhH17HvDjEgX5W437BKZv8o4umeNXcs2B9WQJEm9kJEh2LfNeQMCgMkjDZXz8DziLMKvtHk3\n138173398\n\n\n2022-06-19 17:38:41\nfinalized\n\n\n7Bdy3Ah78NmRmpbFsmKtFPiqrQGDpXZ2BBr83crivSso7wwbfVzEv4TrdpurYXWJV6X9j9hKfAvLWB3769GobJA\n138173207\n\n\n2022-06-19 17:35:41\nfinalized\n\n\n61iQ9BSQ18X54nT2ZzRCUtoUyXpTKS363QZFnXBgERAtUzLscpmj8oxTfrgxn7zwemNEKf1WB2yLhn3JgfmG96MY\n138172922\n\n\n2022-06-19 17:32:16\nfinalized\n\n\n5ueuPr4qZwHxhDSWRQHZkrNpTSDa6fR9LiGyokFJ1que6rpgRaSt9AX2S7L89KLztWKmehHKwxDpQywbvXbk6WQB\n138172617\n\n\n2022-06-19 17:26:27\nfinalized\n\n\nEHRyD9ibJkt6cGgHbKbJqTuBj7bCzxeWGLBwuC8TgucvfKgv3ejqCRkG6sqUDVxQvsRWdGkQpagojRo2vajfDBa\n138172080\n\n\n2022-06-19 17:25:59\nfinalized\n\n\n3XxSXVQCKpZH48upHzYSj4nJyhWw6gpPiA6vPW8JyWwfniCh2Z29qokxQi2mtjyUiEWqthix6LKNBRpJhAQb2u8v\n138172029\n\n\n2022-06-19 17:25:46\nfinalized\n\n\n3DjUK7ENHYQLWGQ1Rw3bMqCjuzt4M6H1aPxWMVAU37vr2r64F2wXFgFsyeXqSiQ7JA1biQY691iwoEKDj5iztzg6\n138172003\n\n\n2022-06-19 17:25:14\nfinalized\n\n\ngQwiKLCuvvGDoEoniU6QR98PHTAMDyz7DVbxaYXRYWsPaC7Ekbyt9PDEpDNo7kueSddiN6V1jrwYoN3Rfir3x8w\n138171956\n\n\n2022-06-19 
17:22:02\nfinalized\n\n\n52dFWxTaKL2h1MekafD12F3QWmnrDbsTxwEAMbZyiDhVNy7miVbwxX5d8NU7WioQpotG4uJ4txoFRRe2irRBqBfh\n138171653\n\n\n2022-06-19 17:21:32\nfinalized\n\n\n5yGwSWUZiaY5xAeSy5BFekJoSF3YiPiYh6GC5hyaPCDDQ3WEXWr9EzqWCf1g47ViUHekEMrsRDWuS8XN3k52hFVf\n138171595\n\n\n2022-06-19 17:20:58\nfinalized\n\n\n7SkpAQ1JNQYTUvR545pyWBQeHGEpisbgYxsUr1B2fnZm95CmGXUF4Y2UMVQbL1gbtd1mKaTTVCY3RBBpWcchaLD\n138171541\n\n\n2022-06-19 17:20:38\nfinalized\n\n\nrsYUWj8irRFk15VxkFnjcJChDGyCmpe4MrCgrAyqt9ZoXoUcQsz7wiUKWYaU9qbds7V69jDxxhxbSzSJneJsfzA\n138171506\n\n\n2022-06-19 17:20:12\nfinalized\n\n\n4hjqfREeb22rSSuwHAZ5U3g2qZXVi4ejahZxBN95ecjfV3we1ELWP1ezJsCCaSnj9zygzzhWJrzZRpCHB3Nfcnw1\n138171464\n\n\n2022-06-19 17:09:47\nfinalized\n\n\n2TXf2fwsYfCCTrkh4zfZSGhbEuZcLrKPwn8Ai4ZTpntKFS6FDU9YahRuT5cEYqYc1RJ5fMMkUtYtcbNv8ogAzVnM\n138170550\n\n\n2022-06-19 17:09:31\nfinalized\n\n\nwDMRYiqHPNtYy6WJuA1kyaWjLr7RsnemnasEYufpaxdCmroNW4dBFLPLCQnYDnyZha9uZUSgbhC91zQT3E55bpP\n138170527\n\n\n\n" ]
[ 2 ]
[]
[]
[ "get", "python", "python_requests", "solana", "url" ]
stackoverflow_0074646545_get_python_python_requests_solana_url.txt
Q: how to keep adding the value of an item? menu = { "Baja Taco": 4.00, "Burrito": 7.50, "Bowl": 8.50, "Nachos": 11.00, "Quesadilla": 8.50, "Super Burrito": 8.50, "Super Quesadilla": 9.50, "Taco": 3.00, "Tortilla Salad": 8.00 } while True: # keep adding to the price if user prompts another item # i know the operation won't work its just what i want to happen try: x = input("Item: ") y = 0 if x in menu: z = x + y print(f"Total: ${z}") # If u want to ctrl+d out and get ur price except (EOFError): print(f"Total: ${menu[x]}") break # so code can handle wrong inputs except (KeyError): pass can't find a way to make items add up A: Things you need to do: Define a variable for storing total outside the loop otherwise the variable will be overridden everytime Get the price from the menu dictionary and add it to the total If both changes are done, the code should look something like this total = 0 while True: try: x = input("Item: ") if x in menu: total = total + menu[x] print(f"Total: ${total}") # If u want to ctrl+d out and get ur price except (EOFError): print(f"Total: ${menu[x]}") break # so code can handle wrong inputs except (KeyError): pass A: Define a total outside and increment that value. menu = { "Baja Taco": 4.00, "Burrito": 7.50, "Bowl": 8.50, "Nachos": 11.00, "Quesadilla": 8.50, "Super Burrito": 8.50, "Super Quesadilla": 9.50, "Taco": 3.00, "Tortilla Salad": 8.00 } total = 0.0 while True: try: x = input("Item: ") if x in menu: total += menu[x] # Notice that you're just adding the menu item price here print(f"Total: ${total}") ...
how to keep adding the value of an item?
menu = { "Baja Taco": 4.00, "Burrito": 7.50, "Bowl": 8.50, "Nachos": 11.00, "Quesadilla": 8.50, "Super Burrito": 8.50, "Super Quesadilla": 9.50, "Taco": 3.00, "Tortilla Salad": 8.00 } while True: # keep adding to the price if user prompts another item # i know the operation won't work its just what i want to happen try: x = input("Item: ") y = 0 if x in menu: z = x + y print(f"Total: ${z}") # If u want to ctrl+d out and get ur price except (EOFError): print(f"Total: ${menu[x]}") break # so code can handle wrong inputs except (KeyError): pass can't find a way to make items add up
[ "Things you need to do:\n\nDefine a variable for storing total outside the loop otherwise the variable will be overridden everytime\nGet the price from the menu dictionary and add it to the total\n\nIf both changes are done, the code should look something like this\ntotal = 0\nwhile True:\n try:\n x = input(\"Item: \")\n if x in menu:\n total = total + menu[x]\n print(f\"Total: ${total}\")\n # If u want to ctrl+d out and get ur price\n except (EOFError):\n print(f\"Total: ${menu[x]}\")\n break\n # so code can handle wrong inputs\n except (KeyError):\n pass\n\n", "Define a total outside and increment that value.\nmenu = {\n \"Baja Taco\": 4.00,\n \"Burrito\": 7.50,\n \"Bowl\": 8.50,\n \"Nachos\": 11.00,\n \"Quesadilla\": 8.50,\n \"Super Burrito\": 8.50,\n \"Super Quesadilla\": 9.50,\n \"Taco\": 3.00,\n \"Tortilla Salad\": 8.00\n}\n\ntotal = 0.0\nwhile True:\n try:\n x = input(\"Item: \")\n if x in menu:\n total += menu[x] # Notice that you're just adding the menu item price here\n print(f\"Total: ${total}\")\n...\n\n" ]
[ 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074646662_python.txt
Q: Module function in cython gets extra check that a static method doesn't I have the following class static method and method user: @cython.cclass class TestClass: @staticmethod @cython.cfunc def func(v: float) -> float: return v + 1.0 def test_call(self): res = TestClass.func(2) return res The line res = TestClass.func(2) shows as white in the annnotated version (as expected) and gets translated to C as __pyx_t_1 = __pyx_f_8crujisim_11cythontests_9TestClass_func(2.0); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 521, __pyx_L1_error)__pyx_v_res = __pyx_t_1; But if I now take that static method out of the class and turn it into a function, as in @cython.cfunc def func(v: float) -> float: return v + 1.0 @cython.cclass class TestClass: def test_call(self): res = func(2) return res Then the res = func(2) line is now yellow, and the C translation shows an additional test __pyx_t_1 = __pyx_f_8crujisim_11cythontests_func(2.0); if (unlikely(__pyx_t_1 == ((double)-1) && PyErr_Occurred())) __PYX_ERR(0, 521, __pyx_L1_error)__pyx_v_res = __pyx_t_1; I am forced to get the method out of the class because of this behaviour I reported, which I believe is a bug https://github.com/cython/cython/issues/5159 . But the method call is inside a tight loop in the critical path and there is a noticeable performance loss. Is it perhaps an argument overflow check? Can it be disabled? Any hints? Thanks. A: You can make a cdef/cfunc function unable to raise an exception using @cython.exceptval(check=False) (or cdef float func() noexcept in the non-pure-Python syntax). See the documentation for full details about exceptions. If you do this it won't be checked. Cython 3 has changed the default behaviour from "cdef functions swallow exceptions" to "cdef functions can raise exceptions", but it's possible to pick either. The difference you see between staticmethod and a module-level function is probably a minor bug - there's no reason for them to behave differently. However, I'd actually expect the extra check to be an optimization in the non-error case: PyErr_Occurred() is expected to be expensive, so using -1 as a sentinal value to indicate that an error might have happened avoids it most of the time. I haven't measured it in your exact case though - it's probably a balance between skipping expensive checks vs greater code size. The annotated HTML highlighting is pretty crude so is almost working on the basis that "more text == slower" which isn't quite right here. So don't take it too seriously.
Module function in cython gets extra check that a static method doesn't
I have the following class with a static method and a method that uses it: @cython.cclass class TestClass: @staticmethod @cython.cfunc def func(v: float) -> float: return v + 1.0 def test_call(self): res = TestClass.func(2) return res The line res = TestClass.func(2) shows as white in the annotated version (as expected) and gets translated to C as __pyx_t_1 = __pyx_f_8crujisim_11cythontests_9TestClass_func(2.0); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 521, __pyx_L1_error)__pyx_v_res = __pyx_t_1; But if I now take that static method out of the class and turn it into a function, as in @cython.cfunc def func(v: float) -> float: return v + 1.0 @cython.cclass class TestClass: def test_call(self): res = func(2) return res Then the res = func(2) line is now yellow, and the C translation shows an additional test __pyx_t_1 = __pyx_f_8crujisim_11cythontests_func(2.0); if (unlikely(__pyx_t_1 == ((double)-1) && PyErr_Occurred())) __PYX_ERR(0, 521, __pyx_L1_error)__pyx_v_res = __pyx_t_1; I am forced to take the method out of the class because of this behaviour I reported, which I believe is a bug https://github.com/cython/cython/issues/5159 . But the method call is inside a tight loop in the critical path and there is a noticeable performance loss. Is it perhaps an argument overflow check? Can it be disabled? Any hints? Thanks.
[ "You can make a cdef/cfunc function unable to raise an exception using @cython.exceptval(check=False) (or cdef float func() noexcept in the non-pure-Python syntax). See the documentation for full details about exceptions. If you do this it won't be checked.\nCython 3 has changed the default behaviour from \"cdef functions swallow exceptions\" to \"cdef functions can raise exceptions\", but it's possible to pick either.\n\nThe difference you see between staticmethod and a module-level function is probably a minor bug - there's no reason for them to behave differently.\nHowever, I'd actually expect the extra check to be an optimization in the non-error case: PyErr_Occurred() is expected to be expensive, so using -1 as a sentinal value to indicate that an error might have happened avoids it most of the time. I haven't measured it in your exact case though - it's probably a balance between skipping expensive checks vs greater code size.\nThe annotated HTML highlighting is pretty crude so is almost working on the basis that \"more text == slower\" which isn't quite right here. So don't take it too seriously.\n" ]
[ 1 ]
[]
[]
[ "arguments", "cython", "performance", "python" ]
stackoverflow_0074640883_arguments_cython_performance_python.txt
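A short sketch of the fix the answer suggests, in pure-Python (annotation) syntax: marking the module-level function as unable to raise removes the PyErr_Occurred()/sentinel check at the call site (this assumes Cython 3; the non-pure-syntax equivalent is the noexcept form the answer mentions):

```python
import cython

@cython.cfunc
@cython.exceptval(check=False)  # declared as never raising, so no error check is emitted
def func(v: float) -> float:
    return v + 1.0

@cython.cclass
class TestClass:
    def test_call(self):
        res = func(2)  # should now compile to a bare C call with no PyErr_Occurred() test
        return res
```

The trade-off, consistent with the answer's description of the old default, is that an exception raised inside func would be swallowed rather than propagated to the caller.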
Q: I was doing K-means clustering; before that, each data point in the database has to be assigned to an index. However, a TypeError occurs - how do I fix it? Here's my code: with connection: with connection.cursor() as cursor: sql = """ SELECT `CPC-Current-DWPI`,`Assignee/Applicant First` FROM final.f01l_patent; """ cursor.execute(sql) result = cursor.fetchall() count = [] ro=0 for i in range(1,10): for j in result : if j['CPC - Current - DWPI.split()'][3]==str(i): co+=1 count.append(co) co=0 I know I can't index a tuple with a list, so I added .split() after CPC - Current - DWPI, but actually I have no idea how to change lists into slices --------------------------------------------------------------------------- TypeError Traceback (most recent call last) ~\AppData\Local\Temp\ipykernel_19312\1811172777.py in <module> 16 for i in range(1,10): 17 for j in result : ---> 18 if j['CPC - Current - DWPI.split()'][3]==str(i): 19 co+=1 20 count.append(co) TypeError: tuple indices must be integers or slices, not str A: Your result is a list of tuples. Try replacing the string 'CPC - Current - DWPI.split()' with the index (as an integer) of the item you want.
I was doing K-means clustering; before that, each data point in the database has to be assigned to an index. However, a TypeError occurs - how do I fix it?
Here's my code: with connection: with connection.cursor() as cursor: sql = """ SELECT `CPC-Current-DWPI`,`Assignee/Applicant First` FROM final.f01l_patent; """ cursor.execute(sql) result = cursor.fetchall() count = [] ro=0 for i in range(1,10): for j in result : if j['CPC - Current - DWPI.split()'][3]==str(i): co+=1 count.append(co) co=0 I know I can't index a tuple with a list, so I added .split() after CPC - Current - DWPI, but actually I have no idea how to change lists into slices --------------------------------------------------------------------------- TypeError Traceback (most recent call last) ~\AppData\Local\Temp\ipykernel_19312\1811172777.py in <module> 16 for i in range(1,10): 17 for j in result : ---> 18 if j['CPC - Current - DWPI.split()'][3]==str(i): 19 co+=1 20 count.append(co) TypeError: tuple indices must be integers or slices, not str
[ "Your result is a list of Tuples. Try replacing the string 'CPC - Current - DWPI.split()' with the index (as an integer) of the item you want.\n" ]
[ 1 ]
[]
[]
[ "k_means", "python" ]
stackoverflow_0074646716_k_means_python.txt
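A hedged sketch of what the answer means: cursor.fetchall() here returns plain tuples, so each column is addressed by its position in the SELECT (the CPC column is position 0). The .split()[3] step below preserves the question's apparent intent of checking the fourth whitespace-separated token of the CPC string; that part is an assumption about the data, not something the original post confirms:

```python
count = []
for i in range(1, 10):
    co = 0
    for row in result:
        cpc = row[0]  # first column of the SELECT: `CPC-Current-DWPI`
        tokens = cpc.split() if cpc else []
        if len(tokens) > 3 and tokens[3] == str(i):
            co += 1
    count.append(co)
```

Alternatively, a dictionary-style cursor (for example pymysql.cursors.DictCursor) would let rows keep string keys such as row['CPC-Current-DWPI'].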
Q: How to successfully use pandas.Dataframe.apply with pandas.NA and lambdas Given a dataframe with a pandas.NA value, how can I run a decision lambda over it import pandas import numpy # Setup dataframe = pandas.DataFrame({"c1": [1, 2, 3, 4], "c2": [2, 3, 4, pandas.NA]}) print(dataframe) my_lambda = lambda row: row["c2"] if row["c2"] else row["c1"] # the issue dataframe["c2"] = dataframe.apply(my_lambda, axis="columns") Which raises TypeError: boolean value of NA is ambiguous How can I get this lambda to work over pandas.NA or can I force pandas.NA to numpy.NaN? (code will run if you replace pandas.NA with numpy.NaN) The cause of this is because pandas.NA doesn't evaluate to True or False if pandas.NA: print("no") Raises the same Error However if pandas.NA in [pandas.NA]: print("yes") Evaluates to true. But.. my_lambda = lambda row: row["c2"] if row["c2"] in [pandas.NA] else row ["c1"] Still raises the error Please consider the dataframe I work with are big 1k-1m rows. Solutions I've considered that work but are suboptimal for my purpose. fillna(0) - fill value may be 0 or some other number. Then run lambda with fill value included in the search. replace() - same as above These are suboptimal because values may be 0 or any other digit. Solutions I've considered but couldn't work out how to actually get running. passing lambda to fillna() or replace() or some other function that can directly target the pandas.NA values forcing the column so that it contains numpy.NaN instead of pandas.NA (replace/fillna doesn't work as pandas.NA is a mask for numpy.NaN) Both of these would be good solutions Thanks in advance :) A: You could just do dataframe.apply(lambda row: row["c2"] if pd.notna(row["c2"]) else row["c1"], axis=1) Or better dataframe['c2'] = dataframe['c2'].fillna(dataframe['c1'])
How to successfully use pandas.Dataframe.apply with pandas.NA and lambdas
Given a dataframe with a pandas.NA value, how can I run a decision lambda over it import pandas import numpy # Setup dataframe = pandas.DataFrame({"c1": [1, 2, 3, 4], "c2": [2, 3, 4, pandas.NA]}) print(dataframe) my_lambda = lambda row: row["c2"] if row["c2"] else row["c1"] # the issue dataframe["c2"] = dataframe.apply(my_lambda, axis="columns") Which raises TypeError: boolean value of NA is ambiguous How can I get this lambda to work over pandas.NA or can I force pandas.NA to numpy.NaN? (code will run if you replace pandas.NA with numpy.NaN) The cause of this is because pandas.NA doesn't evaluate to True or False if pandas.NA: print("no") Raises the same Error However if pandas.NA in [pandas.NA]: print("yes") Evaluates to true. But.. my_lambda = lambda row: row["c2"] if row["c2"] in [pandas.NA] else row ["c1"] Still raises the error Please consider the dataframe I work with are big 1k-1m rows. Solutions I've considered that work but are suboptimal for my purpose. fillna(0) - fill value may be 0 or some other number. Then run lambda with fill value included in the search. replace() - same as above These are suboptimal because values may be 0 or any other digit. Solutions I've considered but couldn't work out how to actually get running. passing lambda to fillna() or replace() or some other function that can directly target the pandas.NA values forcing the column so that it contains numpy.NaN instead of pandas.NA (replace/fillna doesn't work as pandas.NA is a mask for numpy.NaN) Both of these would be good solutions Thanks in advance :)
[ "You could just do\ndataframe.apply(lambda row: row[\"c2\"] if pd.notna(row[\"c2\"]) else row[\"c1\"], axis=1)\n\nOr better\ndataframe['c2'] = dataframe['c2'].fillna(dataframe['c1'])\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "lambda", "pandas", "python", "python_3.x" ]
stackoverflow_0074646756_dataframe_lambda_pandas_python_python_3.x.txt
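As a small addition to the answer above (not from the original answers): both fillna and numpy.where stay vectorized, which is what matters at the 1k-1m row sizes mentioned in the question, whereas the row-wise apply lambda does not. A sketch of the numpy.where variant:

```python
import numpy as np
import pandas as pd

dataframe = pd.DataFrame({"c1": [1, 2, 3, 4], "c2": [2, 3, 4, pd.NA]})

# Where c2 is missing, take c1; elsewhere keep c2. No row-wise lambda needed.
dataframe["c2"] = np.where(dataframe["c2"].isna(), dataframe["c1"], dataframe["c2"])
print(dataframe)
```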
Q: How can I identify objects inside the image using python opencv? I'm trying to identify objects present inside the plane area as in below image for some automation image1 for this I tried finding the contours on masked image obtained using thresholding the hsv range of object border colors which is yellowish then I did morphing operation to remove the small open lines and dilution operation to merge the area of object as shown in below code img = cv2.imread(img_f) img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) imghsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) lower_blue = np.array([0,255,206]) upper_blue = np.array([179,255,255]) mask_blue = cv2.inRange(imghsv, lower_blue, upper_blue) kernel = np.ones((2, 2), np.uint8) img_erosion = cv2.erode(mask_blue, kernel, iterations=1) kernel = np.ones((3, 3), np.uint8) img_erosion = cv2.dilate(img_erosion, kernel, iterations=30) contours, _ = cv2.findContours(img_erosion, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) im = np.copy(img) cv2.drawContours(im, contours, -1, (0,255,0), 3) contour obtained is as shown in below image image2 and mask image is as below image3 with this approach I'm getting many unwanted detection and failing to detect many objects. How can I able to achieve this? any suggestion or guidance will be highly appreciated ,thanks A: Not perfect but here is another possible method: import cv2 from matplotlib import pyplot as plt import matplotlib import numpy as np matplotlib.use('TkAgg') def remove_noise(binary_image, max_noise_size=20): labels_count, labeled_image, stats, centroids = cv2.connectedComponentsWithStats( binary_image, 8, cv2.CV_32S) new_image = np.zeros_like(binary_image) for i in range(1, labels_count): if stats[i, -1] <= max_noise_size: continue new_image[labeled_image == i] = 255 return new_image def main(): original_image = cv2.imread('BWQyx.png') image = np.array(original_image) for y in range(image.shape[0]): for x in range(image.shape[1]): # remove grey background if 150 <= image[y, x, 0] <= 180 and \ 150 <= image[y, x, 1] <= 180 and \ 150 <= image[y, x, 2] <= 180: image[y, x, 0] = 0 image[y, x, 1] = 0 image[y, x, 2] = 0 # remove green dashes if image[y, x, 0] == 0 and \ image[y, x, 1] == 169 and \ image[y, x, 2] == 0: image[y, x, 0] = 0 image[y, x, 1] = 0 image[y, x, 2] = 0 image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) image[image < 128] = 0 image[image > 128] = 255 image = cv2.dilate(image, np.ones((3, 3), np.uint8), iterations=2) image = cv2.erode(image, np.ones((3, 3), np.uint8), iterations=2) contours, _ = cv2.findContours( image, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) cv2.drawContours(image, contours, -1, (0,), 2) image = remove_noise(image) contours, _ = cv2.findContours( image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) for contour in contours: min_area_rect = cv2.minAreaRect(contour) width, height = min_area_rect[1] if width * height > 20000 or width * height < 400: continue box = np.int0(cv2.boxPoints(min_area_rect)) cv2.drawContours(original_image, [box], 0, (0, 0, 255), 2) plt.imshow(original_image) plt.show() if __name__ == '__main__': main() The main idea is to remove all the grid lines by drawing over their contours. You can tune it and achieve better results. However, I feel that it is really difficult to get everything 100% right. You will need more heuristics to remove unwanted rectangles.
How can I identify objects inside the image using python opencv?
I'm trying to identify objects present inside the plane area as in the below image for some automation image1 For this I tried finding the contours on a masked image obtained by thresholding the HSV range of the object border colors, which are yellowish; then I did a morphological operation to remove the small open lines and a dilation operation to merge the area of each object, as shown in the below code img = cv2.imread(img_f) img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) imghsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) lower_blue = np.array([0,255,206]) upper_blue = np.array([179,255,255]) mask_blue = cv2.inRange(imghsv, lower_blue, upper_blue) kernel = np.ones((2, 2), np.uint8) img_erosion = cv2.erode(mask_blue, kernel, iterations=1) kernel = np.ones((3, 3), np.uint8) img_erosion = cv2.dilate(img_erosion, kernel, iterations=30) contours, _ = cv2.findContours(img_erosion, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) im = np.copy(img) cv2.drawContours(im, contours, -1, (0,255,0), 3) The contour obtained is as shown in the below image image2 and the mask image is as below image3 With this approach I'm getting many unwanted detections and failing to detect many objects. How can I achieve this? Any suggestion or guidance will be highly appreciated, thanks
[ "Not perfect but here is another possible method:\nimport cv2\nfrom matplotlib import pyplot as plt\nimport matplotlib\nimport numpy as np\n\nmatplotlib.use('TkAgg')\n\n\ndef remove_noise(binary_image, max_noise_size=20):\n labels_count, labeled_image, stats, centroids = cv2.connectedComponentsWithStats(\n binary_image, 8, cv2.CV_32S)\n new_image = np.zeros_like(binary_image)\n for i in range(1, labels_count):\n if stats[i, -1] <= max_noise_size:\n continue\n new_image[labeled_image == i] = 255\n return new_image\n\n\ndef main():\n original_image = cv2.imread('BWQyx.png')\n\n image = np.array(original_image)\n for y in range(image.shape[0]):\n for x in range(image.shape[1]):\n # remove grey background\n if 150 <= image[y, x, 0] <= 180 and \\\n 150 <= image[y, x, 1] <= 180 and \\\n 150 <= image[y, x, 2] <= 180:\n image[y, x, 0] = 0\n image[y, x, 1] = 0\n image[y, x, 2] = 0\n\n # remove green dashes\n if image[y, x, 0] == 0 and \\\n image[y, x, 1] == 169 and \\\n image[y, x, 2] == 0:\n image[y, x, 0] = 0\n image[y, x, 1] = 0\n image[y, x, 2] = 0\n\n image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)\n image[image < 128] = 0\n image[image > 128] = 255\n\n image = cv2.dilate(image, np.ones((3, 3), np.uint8), iterations=2)\n image = cv2.erode(image, np.ones((3, 3), np.uint8), iterations=2)\n\n contours, _ = cv2.findContours(\n image, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)\n cv2.drawContours(image, contours, -1, (0,), 2)\n\n image = remove_noise(image)\n\n contours, _ = cv2.findContours(\n image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\n\n for contour in contours:\n min_area_rect = cv2.minAreaRect(contour)\n width, height = min_area_rect[1]\n if width * height > 20000 or width * height < 400:\n continue\n box = np.int0(cv2.boxPoints(min_area_rect))\n cv2.drawContours(original_image, [box], 0, (0, 0, 255), 2)\n\n plt.imshow(original_image)\n plt.show()\n\n\nif __name__ == '__main__':\n main()\n\n\nThe main idea is to remove all the grid lines by drawing over their contours. You can tune it and achieve better results. However, I feel that it is really difficult to get everything 100% right. You will need more heuristics to remove unwanted rectangles.\n" ]
[ 0 ]
[]
[]
[ "computer_vision", "image_processing", "opencv", "python", "python_3.x" ]
stackoverflow_0074642490_computer_vision_image_processing_opencv_python_python_3.x.txt
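As a rough, non-authoritative sketch of the mask, morphology and contour-filter pipeline discussed in this question, assuming a hypothetical input file name, HSV bounds and area thresholds that would all need tuning for the real drawing:

import cv2
import numpy as np

img = cv2.imread('plan.png')          # hypothetical file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Assumed HSV range for yellowish borders; tune for the actual image.
mask = cv2.inRange(hsv, np.array([20, 100, 100]), np.array([35, 255, 255]))

# Close small gaps in the borders, then dilate to merge each object into one blob.
kernel = np.ones((3, 3), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel, iterations=3)
mask = cv2.dilate(mask, kernel, iterations=5)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if 400 < w * h < 20000:           # assumed size filter against grid fragments and noise
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)

cv2.imwrite('detected.png', img)

Using RETR_EXTERNAL instead of RETR_TREE keeps only the outermost contour of each blob, which usually cuts down on duplicate detections.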
Q: How to allow a max Manhattan distance between all the points in a group I have a 2D-array in which I want to make groups where all the points in a group have a max Manhattan distance between them. The groups can be disjoint. For example, from this starting array (10 x 10): [[ 67 97 72 35 73 77 80 48 21 34] [ 11 30 16 1 71 68 72 1 81 23] [ 85 31 94 10 50 85 63 11 61 69] [ 64 36 8 37 36 72 96 20 91 19] [ 99 54 84 56 3 80 41 45 1 8] [ 97 88 21 8 54 55 88 45 63 82] [ 13 53 1 90 39 28 48 15 86 8] [ 26 63 36 36 3 29 33 26 54 58] [ 74 40 53 12 21 17 4 87 14 22] [ 23 98 3 100 85 12 65 21 83 97]] I should be able to divide it into different groups. I currently have a function that finds the possible points I could add to a group with a starting point (here munip is the first point of the group): def possible_munips(x, y, munip, dist_manhattan_max, temp_array): list = [] for i in range(y): for j in range(x): if (j, i) != munip and munipIsValid(j, i, munip, dist_manhattan_max) and temp_array[i][j] != -1: list.append((j, i, temp_array[i][j])) I iterate through an 2D-array, temp_array, in which the points that have already been put in a group have their value set to -1. munipIsValid checks if the point is at a max Manhattan distance from the starting point. I then just add one by one the points until I've reached the desired length (k) of a group and I make sure every time I add a new point to a group, the group stays valid, like this: munips = possible_munips(x, y, munip, dist_manhattan_max, temp_array) while len(new_group) < k: point = munips[0] new_group.append(point) if not groupIsValid(new_group, distManhattanMax, len(new_group)): new_group.remove(point) munips.remove(point) current_sol.append(new_group) for groups in current_sol: for munip in groups: temp_array[munip[1], munip[0]] = -1 I've also tried adding a swapping function which tries to swap points between groups in order to make both of them valid. But sometimes it doesn't find a solution and I'm left with and invalid group. A: In layman's terms, you wish to find k classification areas also known as clusters. In this case, I recommend you first read about clustering. After you acquire enough knowledge, you can advance on this related question, which generalizes this classification for a custom distance function.
How to allow a max Manhattan distance between all the points in a group
I have a 2D-array in which I want to make groups where all the points in a group have a max Manhattan distance between them. The groups can be disjoint. For example, from this starting array (10 x 10): [[ 67 97 72 35 73 77 80 48 21 34] [ 11 30 16 1 71 68 72 1 81 23] [ 85 31 94 10 50 85 63 11 61 69] [ 64 36 8 37 36 72 96 20 91 19] [ 99 54 84 56 3 80 41 45 1 8] [ 97 88 21 8 54 55 88 45 63 82] [ 13 53 1 90 39 28 48 15 86 8] [ 26 63 36 36 3 29 33 26 54 58] [ 74 40 53 12 21 17 4 87 14 22] [ 23 98 3 100 85 12 65 21 83 97]] I should be able to divide it into different groups. I currently have a function that finds the possible points I could add to a group with a starting point (here munip is the first point of the group): def possible_munips(x, y, munip, dist_manhattan_max, temp_array): list = [] for i in range(y): for j in range(x): if (j, i) != munip and munipIsValid(j, i, munip, dist_manhattan_max) and temp_array[i][j] != -1: list.append((j, i, temp_array[i][j])) I iterate through an 2D-array, temp_array, in which the points that have already been put in a group have their value set to -1. munipIsValid checks if the point is at a max Manhattan distance from the starting point. I then just add one by one the points until I've reached the desired length (k) of a group and I make sure every time I add a new point to a group, the group stays valid, like this: munips = possible_munips(x, y, munip, dist_manhattan_max, temp_array) while len(new_group) < k: point = munips[0] new_group.append(point) if not groupIsValid(new_group, distManhattanMax, len(new_group)): new_group.remove(point) munips.remove(point) current_sol.append(new_group) for groups in current_sol: for munip in groups: temp_array[munip[1], munip[0]] = -1 I've also tried adding a swapping function which tries to swap points between groups in order to make both of them valid. But sometimes it doesn't find a solution and I'm left with and invalid group.
[ "In layman's terms, you wish to find k classification areas also known as clusters. In this case, I recommend you first read about clustering. After you acquire enough knowledge, you can advance on this related question, which generalizes this classification for a custom distance function.\n" ]
[ 0 ]
[]
[]
[ "arrays", "grouping", "manhattan", "multidimensional_array", "python" ]
stackoverflow_0074644378_arrays_grouping_manhattan_multidimensional_array_python.txt
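Below is a small self-contained sketch of the greedy idea described in the question: a point joins a group only if it stays within the maximum Manhattan distance of every point already in that group. The grid size, distance limit and group size are made-up parameters:

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def greedy_groups(width, height, dist_max, k):
    # Greedily partition grid coordinates into groups of at most k points
    # whose pairwise Manhattan distance never exceeds dist_max.
    points = [(x, y) for y in range(height) for x in range(width)]
    unassigned = set(points)
    groups = []
    for seed in points:
        if seed not in unassigned:
            continue
        group = [seed]
        unassigned.discard(seed)
        for cand in sorted(unassigned, key=lambda p: manhattan(seed, p)):
            if len(group) == k:
                break
            if all(manhattan(cand, member) <= dist_max for member in group):
                group.append(cand)
                unassigned.discard(cand)
        groups.append(group)
    return groups

for g in greedy_groups(10, 10, dist_max=3, k=4)[:5]:
    print(g)

Checking the candidate against every member of the group, rather than only against the seed, is what enforces the pairwise constraint that the swap-based repair step in the question was trying to restore.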
Q: How to select the elements of a Pandas DataFrame given a Boolean mask? I was wondering wether, given a boolean mask, there is a way to retreive all the elements of a DataFrame positioned in correspondance of the True values in the mask. In my case I have a DataFrame containing the values of a certain dataset, for example let's take the following : l = [[5, 3, 1], [0, 3, 1], [7, 3, 0], [8, 5, 23], [40, 4, 30], [2, 6, 13]] df_true = pd.DataFrame(l, columns=['1', '2', '3']) df_true Then I randomly replace some of the values with 'np.nan' as follows: l2 = [[5, 3, np.nan], [np.nan, 3, 1], [7, np.nan, 0], [np.nan, 5, 23], [40, 4, np.nan], [2, np.nan, 13]] df_nan= pd.DataFrame(l2, columns=['1', '2', '3']) df_nan Let's say that after applying some imputation algorithm I obtained as a result: l3 = [[5, 3, 1], [2, 3, 1], [7, 8, 0], [8, 5, 23], [40, 4, 25], [2, 6, 13]] df_imp= pd.DataFrame(l3, columns=['1', '2', '3']) df_imp Now I would like to create two lists (or arrays), one containing the imputed values and the other one the true values in order to compare them. To do so I first created a mask m = df_nan.isnull() which has value True in correspondance of the cells containing the imputed values. By applying the mask as df_imp[m] I obtain: 1 2 3 0 NaN NaN 1.0 1 2.0 NaN NaN 2 NaN 8.0 NaN 3 8.0 NaN NaN 4 NaN NaN 25.0 5 NaN 6.0 NaN Is there a way to get instead only the values without also the Nan, and put them into a list? A: You can use df.values to return a numpy representation of the DataFrame then use numpy.isnan and keep other values. import numpy as np arr = df.values res = arr[~np.isnan(arr)] print(res) # [1. 2. 8. 8. 25. 6.]
How to select the elements of a Pandas DataFrame given a Boolean mask?
I was wondering whether, given a boolean mask, there is a way to retrieve all the elements of a DataFrame positioned in correspondence with the True values in the mask. In my case I have a DataFrame containing the values of a certain dataset, for example let's take the following : l = [[5, 3, 1], [0, 3, 1], [7, 3, 0], [8, 5, 23], [40, 4, 30], [2, 6, 13]] df_true = pd.DataFrame(l, columns=['1', '2', '3']) df_true Then I randomly replace some of the values with 'np.nan' as follows: l2 = [[5, 3, np.nan], [np.nan, 3, 1], [7, np.nan, 0], [np.nan, 5, 23], [40, 4, np.nan], [2, np.nan, 13]] df_nan= pd.DataFrame(l2, columns=['1', '2', '3']) df_nan Let's say that after applying some imputation algorithm I obtained as a result: l3 = [[5, 3, 1], [2, 3, 1], [7, 8, 0], [8, 5, 23], [40, 4, 25], [2, 6, 13]] df_imp= pd.DataFrame(l3, columns=['1', '2', '3']) df_imp Now I would like to create two lists (or arrays), one containing the imputed values and the other one the true values, in order to compare them. To do so I first created a mask m = df_nan.isnull() which has the value True in correspondence with the cells containing the imputed values. By applying the mask as df_imp[m] I obtain: 1 2 3 0 NaN NaN 1.0 1 2.0 NaN NaN 2 NaN 8.0 NaN 3 8.0 NaN NaN 4 NaN NaN 25.0 5 NaN 6.0 NaN Is there a way to get instead only the values, without the NaN, and put them into a list?
[ "You can use df.values to return a numpy representation of the DataFrame then use numpy.isnan and keep other values.\nimport numpy as np\narr = df.values\nres = arr[~np.isnan(arr)]\nprint(res)\n# [1. 2. 8. 8. 25. 6.]\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074646803_dataframe_pandas_python.txt
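A compact sketch of pulling only the masked values out as flat arrays, so imputed and true values can be compared directly; it reuses the df_true, df_nan and df_imp frames from the question:

import numpy as np
import pandas as pd

l = [[5, 3, 1], [0, 3, 1], [7, 3, 0], [8, 5, 23], [40, 4, 30], [2, 6, 13]]
l2 = [[5, 3, np.nan], [np.nan, 3, 1], [7, np.nan, 0], [np.nan, 5, 23], [40, 4, np.nan], [2, np.nan, 13]]
l3 = [[5, 3, 1], [2, 3, 1], [7, 8, 0], [8, 5, 23], [40, 4, 25], [2, 6, 13]]

df_true = pd.DataFrame(l, columns=['1', '2', '3'])
df_nan = pd.DataFrame(l2, columns=['1', '2', '3'])
df_imp = pd.DataFrame(l3, columns=['1', '2', '3'])

mask = df_nan.isnull().to_numpy()      # True where a value was imputed
imputed = df_imp.to_numpy()[mask]      # flat array of imputed values
true = df_true.to_numpy()[mask]        # matching flat array of true values
print(imputed.tolist())                # [1, 2, 8, 8, 25, 6]
print(true.tolist())                   # [1, 0, 3, 8, 30, 6]

Indexing the 2-D array with the boolean mask flattens it row by row, so the two arrays stay aligned and can be fed straight into an error metric.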
Q: Save html to file to work with later using Beautiful Soup I am doing a lot of work with Beautiful Soup. However, my supervisor does not want me doing the work "in real time" from the web. Instead, he wants me to download all the text from a webpage and then work on it later. He wants to avoid repeated hits on a website. Here is my code: import requests from bs4 import BeautifulSoup url = 'https://scholar.google.com/citations?user=XpmZBggAAAAJ' page = requests.get(url) soup = BeautifulSoup(page.text, 'lxml') I am unsure whether I should save "page" as a file and then import that into Beautiful Soup, or whether I should save "soup" as a file to open later. I also do not know how to save this as a file in a way that can be accessed as if it were "live" from the internet. I know almost nothing about Python, so I need the absolute easiest and simplest process for this. A: So saving soup would be... tough, and out of my experience (read more about the pickleing process if interested). You can save the page as follows: page = requests.get(url) with open('path/to/saving.html', 'wb+') as f: f.write(page.content) Then later, when you want to do analysis on it: with open('path/to/saving.html', 'rb') as f: soup = BeautifulSoup(f.read(), 'lxml') Something like that, anyway. A: The following code iterates over url_list and saves all the responses into the list all_pages, which is stored to the response.pickle file. import pickle import requests from bs4 import BeautifulSoup all_pages = [] for url in url_list: all_pages.append(requests.get(url)) with open("responses.pickle", "wb") as f: pickle.dump(all_pages, f) Then later on, you can load this data, "soupify" each response and do whatever you need with it. with open("responses.pickle", "rb") as f: all_pages = pickle.load(f) for page in all_pages: soup = BeautifulSoup(page.text, 'lxml') # do stuff A: Working with our request: url = 'https://scholar.google.com/citations?user=XpmZBggAAAAJ' page = requests.get(url) soup = BeautifulSoup(page.text, 'lxml') you can use this also: f=open("path/page.html","w") f.write(page.prettify()) f.close
Save html to file to work with later using Beautiful Soup
I am doing a lot of work with Beautiful Soup. However, my supervisor does not want me doing the work "in real time" from the web. Instead, he wants me to download all the text from a webpage and then work on it later. He wants to avoid repeated hits on a website. Here is my code: import requests from bs4 import BeautifulSoup url = 'https://scholar.google.com/citations?user=XpmZBggAAAAJ' page = requests.get(url) soup = BeautifulSoup(page.text, 'lxml') I am unsure whether I should save "page" as a file and then import that into Beautiful Soup, or whether I should save "soup" as a file to open later. I also do not know how to save this as a file in a way that can be accessed as if it were "live" from the internet. I know almost nothing about Python, so I need the absolute easiest and simplest process for this.
[ "So saving soup would be... tough, and out of my experience (read more about the pickleing process if interested). You can save the page as follows:\npage = requests.get(url)\nwith open('path/to/saving.html', 'wb+') as f:\n f.write(page.content)\n\nThen later, when you want to do analysis on it:\nwith open('path/to/saving.html', 'rb') as f:\n soup = BeautifulSoup(f.read(), 'lxml')\n\nSomething like that, anyway.\n", "The following code iterates over url_list and saves all the responses into the list all_pages, which is stored to the response.pickle file.\nimport pickle\nimport requests\nfrom bs4 import BeautifulSoup\n\nall_pages = []\nfor url in url_list:\n all_pages.append(requests.get(url))\n\nwith open(\"responses.pickle\", \"wb\") as f:\n pickle.dump(all_pages, f)\n\nThen later on, you can load this data, \"soupify\" each response and do whatever you need with it.\nwith open(\"responses.pickle\", \"rb\") as f:\n all_pages = pickle.load(f)\n\nfor page in all_pages:\n soup = BeautifulSoup(page.text, 'lxml')\n # do stuff\n\n", "Working with our request:\nurl = 'https://scholar.google.com/citations?user=XpmZBggAAAAJ' \npage = requests.get(url)\nsoup = BeautifulSoup(page.text, 'lxml')\n\nyou can use this also:\nf=open(\"path/page.html\",\"w\")\nf.write(page.prettify())\nf.close\n\n" ]
[ 4, 2, 0 ]
[]
[]
[ "file", "html", "python", "save" ]
stackoverflow_0067829316_file_html_python_save.txt
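A minimal round-trip sketch combining the download-once, parse-later idea from the answers; the local file name is arbitrary:

import requests
from bs4 import BeautifulSoup

url = 'https://scholar.google.com/citations?user=XpmZBggAAAAJ'

# One-time download: save the raw HTML bytes to disk.
page = requests.get(url)
with open('scholar_page.html', 'wb') as f:
    f.write(page.content)

# Later, work entirely offline from the saved copy.
with open('scholar_page.html', 'rb') as f:
    soup = BeautifulSoup(f.read(), 'lxml')
print(soup.title)

Saving page.content (bytes) rather than page.text avoids re-encoding issues when the file is read back.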
Q: OSError [Errno 22] invalid argument when use open() in Python def choose_option(self): if self.option_picker.currentRow() == 0: description = open(":/description_files/program_description.txt","r") self.information_shower.setText(description.read()) elif self.option_picker.currentRow() == 1: requirements = open(":/description_files/requirements_for_client_data.txt", "r") self.information_shower.setText(requirements.read()) elif self.option_picker.currentRow() == 2: menus = open(":/description_files/menus.txt", "r") self.information_shower.setText(menus.read()) I am using resource files and something is going wrong when i am using it as argument in open function, but when i am using it for loading of pictures and icons everything is fine. A: That is not a valid file path. You must either use a full path open(r"C:\description_files\program_description.txt","r") Or a relative path open("program_description.txt","r") A: Add 'r' in starting of path: path = r"D:\Folder\file.txt" That works for me. A: I also ran into this fault when I used open(file_path). My reason for this fault was that my file_path had a special character like "?" or "<". A: I received the same error when trying to print an absolutely enormous dictionary. When I attempted to print just the keys of the dictionary, all was well! A: In my case, I was using an invalid string prefix. Wrong: path = f"D:\Folder\file.txt" Right: path = r"D:\Folder\file.txt" A: I had the same problem It happens because files can't contain special characters like ":", "?", ">" and etc. You should replace these files by using replace() function: filename = filename.replace("special character to replace", "-") A: In my case the error was due to lack of permissions to the folder path. I entered and saved the credentials and the issue was solved. A: you should add one more "/" in the last "/" of path, that is: open('C:\Python34\book.csv') to open('C:\Python34\\book.csv'). For example: import csv with open('C:\Python34\\book.csv', newline='') as csvfile: spamreader = csv.reader(csvfile, delimiter='', quotechar='|') for row in spamreader: print(row) A: In Windows-Pycharm: If File Location|Path contains any string like \t then need to escape that with additional \ like \\t A: Just replace with "/" for file path : open("description_files/program_description.txt","r") A: just use single quotation marks only and use 'r' raw string upfront and a single '/' for eg f = open(r'C:/Desktop/file.txt','r') print(f.read()) A: I had special characters like '' in my strings, for example for one location I had a file Varzea*, then when I tried to save ('Varzea.csv') with f-string Windows complained. I just "sanitized" the string and all got back to normal. The best way in my case was to let the strings with just letters, without special characters! A: for folder, subs, files in os.walk(unicode(docs_dir, 'utf-8')): for filename in files: if not filename.startswith('.'): file_path = os.path.join(folder, filename) A: In my case,the problem exists beacause I have not set permission for drive "C:\" and when I change my path to other drive like "F:\" my problem resolved. A: import pandas as pd df = pd.read_excel ('C:/Users/yourlogin/new folder/file.xlsx') print (df) A: I got this error because old server instance was running and using log file, hence new instance was not able to write to log file. Post deleting log file this issue got resolved. A: When I copy the path by right clicking the file---> properties-->security, it shows the error. 
The working method for this is to copy path and filename separately. A: For me this issue was caused by trying to write a datetime to file. Note: this doesn't work: myFile = open(str(datetime.now()),"a") The datetime.now() object contains the colon ''':''' character To fix this, use a filename which avoid restricted special characters. Note this resource on detecting and replacing invalid characters: https://stackoverflow.com/a/13593932/9053474 For completeness, replace unwanted characters with the following: import re re.sub(r'[^\w_. -]', '_', filename) Note these are Windows restricted characters and invalid characters differ by platform.
OSError [Errno 22] invalid argument when use open() in Python
def choose_option(self): if self.option_picker.currentRow() == 0: description = open(":/description_files/program_description.txt","r") self.information_shower.setText(description.read()) elif self.option_picker.currentRow() == 1: requirements = open(":/description_files/requirements_for_client_data.txt", "r") self.information_shower.setText(requirements.read()) elif self.option_picker.currentRow() == 2: menus = open(":/description_files/menus.txt", "r") self.information_shower.setText(menus.read()) I am using resource files and something is going wrong when i am using it as argument in open function, but when i am using it for loading of pictures and icons everything is fine.
[ "That is not a valid file path. You must either use a full path\nopen(r\"C:\\description_files\\program_description.txt\",\"r\")\n\nOr a relative path\nopen(\"program_description.txt\",\"r\")\n\n", "Add 'r' in starting of path:\npath = r\"D:\\Folder\\file.txt\"\n\nThat works for me.\n", "I also ran into this fault when I used open(file_path). My reason for this fault was that my file_path had a special character like \"?\" or \"<\".\n", "I received the same error when trying to print an absolutely enormous dictionary. When I attempted to print just the keys of the dictionary, all was well!\n", "In my case, I was using an invalid string prefix.\nWrong:\npath = f\"D:\\Folder\\file.txt\"\n\nRight:\npath = r\"D:\\Folder\\file.txt\"\n\n", "I had the same problem\nIt happens because files can't contain special characters like \":\", \"?\", \">\" and etc.\nYou should replace these files by using replace() function:\nfilename = filename.replace(\"special character to replace\", \"-\")\n\n", "In my case the error was due to lack of permissions to the folder path. I entered and saved the credentials and the issue was solved.\n", "you should add one more \"/\" in the last \"/\" of path, that is:\nopen('C:\\Python34\\book.csv') to open('C:\\Python34\\\\book.csv'). For example:\nimport csv\nwith open('C:\\Python34\\\\book.csv', newline='') as csvfile:\n spamreader = csv.reader(csvfile, delimiter='', quotechar='|')\n for row in spamreader:\n print(row)\n\n", "In Windows-Pycharm: If File Location|Path contains any string like \\t then need to escape that with additional \\ like \\\\t\n", "Just replace with \"/\" for file path :\n open(\"description_files/program_description.txt\",\"r\")\n\n", "just use single quotation marks only and use 'r' raw string upfront and a single '/'\nfor eg\nf = open(r'C:/Desktop/file.txt','r')\nprint(f.read())\n\n", "I had special characters like '' in my strings, for example for one location I had a file Varzea*, then when I tried to save ('Varzea.csv') with f-string Windows complained. I just \"sanitized\" the string and all got back to normal.\nThe best way in my case was to let the strings with just letters, without special characters!\n", "for folder, subs, files in os.walk(unicode(docs_dir, 'utf-8')):\n for filename in files:\n if not filename.startswith('.'):\n file_path = os.path.join(folder, filename)\n\n", "In my case,the problem exists beacause I have not set permission for drive \"C:\\\" and when I change my path to other drive like \"F:\\\" my problem resolved.\n", "import pandas as pd\ndf = pd.read_excel ('C:/Users/yourlogin/new folder/file.xlsx')\nprint (df)\n\n", "I got this error because old server instance was running and using log file, hence new instance was not able to write to log file. Post deleting log file this issue got resolved.\n", "When I copy the path by right clicking the file---> properties-->security, it shows the error. The working method for this is to copy path and filename separately.\n", "For me this issue was caused by trying to write a datetime to file.\nNote: this doesn't work:\nmyFile = open(str(datetime.now()),\"a\")\nThe datetime.now() object contains the colon ''':''' character\nTo fix this, use a filename which avoid restricted special characters. Note this resource on detecting and replacing invalid characters:\nhttps://stackoverflow.com/a/13593932/9053474\nFor completeness, replace unwanted characters with the following:\nimport re\nre.sub(r'[^\\w_. 
-]', '_', filename)\nNote these are Windows restricted characters and invalid characters differ by platform.\n" ]
[ 48, 12, 9, 7, 4, 3, 3, 2, 2, 2, 2, 1, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0025584124_python.txt
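A small sketch of the filename-sanitising idea from the last answer, building a timestamp-based name that avoids the characters Windows rejects; the replacement character is an arbitrary choice:

import re
from datetime import datetime

# str(datetime.now()) contains ':' which Windows does not allow in file names.
raw_name = str(datetime.now()) + '.log'

# Replace anything outside a safe whitelist with '_'.
safe_name = re.sub(r'[^\w_. -]', '_', raw_name)

with open(safe_name, 'a') as f:
    f.write('hello\n')
print(safe_name)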
Q: python CSV writer keep escape character I have a CSV file, here are two lines in the file. c1,c2,c3,c4,c5 17939,2507974,11,DVD version has 1 hour of extras of 5 bonus matches including: - Stacy Keibler vs Torrie Wilson in a bikini contest. - A tour of Trish Stratus\' place. - Behind the scenes look at the WWE women division.,NULL 16641,2425413,11,"The Australian TV version had a scene included at the end where a cop car was driving in an alley way, narrowly missing someone walking. This scene was also used in the 1980 film, \"Alligator\".",NULL 127472,2130098,13,"FACT: Dunn uploads a file from an Apple Powerbook in \"C:\\\", which would be appropriate for a DOS/Windows system.",NULL I would like to cut the c4 columns to maximal length (say 500) and keep everything else unchanged and save it to a new csv file. Here is my implementation. import csv import sys with open("new_file_name.csv", 'w', newline='') as csvwriter: spamwriter = csv.writer(csvwriter, delimiter=',', quotechar='"', escapechar='\\') with open("old_file_name.csv", newline='') as csvreader: spamreader = csv.reader(csvreader, delimiter=',', quotechar='"', escapechar='\\') for row in spamreader: if len(row[3]) > 500: print("cut this line") row[n] = row[n][:500] spamwriter.writerow(row) However, the CSV file that I obtained is 17939,2507974,11,DVD version has 1 hour of extras of 5 bonus matches including: - Stacy Keibler vs Torrie Wilson in a bikini contest. - A tour of Trish Stratus' place. - Behind the scenes look at the WWE women division.,NULL 16641,2425413,11,"The Australian TV version had a scene included at the end where a cop car was driving in an alley way, narrowly missing someone walking. This scene was also used in the 1980 film, ""Alligator"".",NULL 127472,2130098,13,"FACT: Dunn uploads a file from an Apple Powerbook in \"C:\\", which would be appropriate for a DOS/Windows system.",NULL The black-slash is missing in my new csv file. What I want is 17939,2507974,11,DVD version has 1 hour of extras of 5 bonus matches including: - Stacy Keibler vs Torrie Wilson in a bikini contest. - A tour of Trish Stratus\' place. - Behind the scenes look at the WWE women division.,NULL 16641,2425413,11,"The Australian TV version had a scene included at the end where a cop car was driving in an alley way, narrowly missing someone walking. This scene was also used in the 1980 film, \"Alligator\".",NULL 127472,2130098,13,"FACT: Dunn uploads a file from an Apple Powerbook in \"C:\\\", which would be appropriate for a DOS/Windows system.",NULL I try something like quoting=csv.QUOTE_ALL, but it also changes my origin CSV file when value of c4 is less than 500. What I want is a new CSV file without changing any origin character for the first 500 characters. Thanks. A: You can use doublequote=False in csv.writer: import csv with open("input.csv", "r") as f_in, open("output.csv", "w") as f_out: reader = csv.reader(f_in, delimiter=",", quotechar='"', escapechar="\\") writer = csv.writer( f_out, delimiter=",", quotechar='"', escapechar="\\", doublequote=False, ) writer.writerow(next(reader)) for row in reader: row[3] = row[3][:500] writer.writerow(row) The output.csv becomes: c1,c2,c3,c4,c5 17939,2507974,11,DVD version has 1 hour of extras of 5 bonus matches including: - Stacy Keibler vs Torrie Wilson in a bikini contest. - A tour of Trish Stratus' place. 
- Behind the scenes look at the WWE women division.,NULL 16641,2425413,11,"The Australian TV version had a scene included at the end where a cop car was driving in an alley way, narrowly missing someone walking. This scene was also used in the 1980 film, \"Alligator\".",NULL
python CSV writer keep escape character
I have a CSV file, here are two lines in the file. c1,c2,c3,c4,c5 17939,2507974,11,DVD version has 1 hour of extras of 5 bonus matches including: - Stacy Keibler vs Torrie Wilson in a bikini contest. - A tour of Trish Stratus\' place. - Behind the scenes look at the WWE women division.,NULL 16641,2425413,11,"The Australian TV version had a scene included at the end where a cop car was driving in an alley way, narrowly missing someone walking. This scene was also used in the 1980 film, \"Alligator\".",NULL 127472,2130098,13,"FACT: Dunn uploads a file from an Apple Powerbook in \"C:\\\", which would be appropriate for a DOS/Windows system.",NULL I would like to cut the c4 columns to maximal length (say 500) and keep everything else unchanged and save it to a new csv file. Here is my implementation. import csv import sys with open("new_file_name.csv", 'w', newline='') as csvwriter: spamwriter = csv.writer(csvwriter, delimiter=',', quotechar='"', escapechar='\\') with open("old_file_name.csv", newline='') as csvreader: spamreader = csv.reader(csvreader, delimiter=',', quotechar='"', escapechar='\\') for row in spamreader: if len(row[3]) > 500: print("cut this line") row[n] = row[n][:500] spamwriter.writerow(row) However, the CSV file that I obtained is 17939,2507974,11,DVD version has 1 hour of extras of 5 bonus matches including: - Stacy Keibler vs Torrie Wilson in a bikini contest. - A tour of Trish Stratus' place. - Behind the scenes look at the WWE women division.,NULL 16641,2425413,11,"The Australian TV version had a scene included at the end where a cop car was driving in an alley way, narrowly missing someone walking. This scene was also used in the 1980 film, ""Alligator"".",NULL 127472,2130098,13,"FACT: Dunn uploads a file from an Apple Powerbook in \"C:\\", which would be appropriate for a DOS/Windows system.",NULL The black-slash is missing in my new csv file. What I want is 17939,2507974,11,DVD version has 1 hour of extras of 5 bonus matches including: - Stacy Keibler vs Torrie Wilson in a bikini contest. - A tour of Trish Stratus\' place. - Behind the scenes look at the WWE women division.,NULL 16641,2425413,11,"The Australian TV version had a scene included at the end where a cop car was driving in an alley way, narrowly missing someone walking. This scene was also used in the 1980 film, \"Alligator\".",NULL 127472,2130098,13,"FACT: Dunn uploads a file from an Apple Powerbook in \"C:\\\", which would be appropriate for a DOS/Windows system.",NULL I try something like quoting=csv.QUOTE_ALL, but it also changes my origin CSV file when value of c4 is less than 500. What I want is a new CSV file without changing any origin character for the first 500 characters. Thanks.
[ "You can use doublequote=False in csv.writer:\nimport csv\n\nwith open(\"input.csv\", \"r\") as f_in, open(\"output.csv\", \"w\") as f_out:\n reader = csv.reader(f_in, delimiter=\",\", quotechar='\"', escapechar=\"\\\\\")\n writer = csv.writer(\n f_out,\n delimiter=\",\",\n quotechar='\"',\n escapechar=\"\\\\\",\n doublequote=False,\n )\n\n writer.writerow(next(reader))\n for row in reader:\n row[3] = row[3][:500]\n writer.writerow(row)\n\nThe output.csv becomes:\nc1,c2,c3,c4,c5\n17939,2507974,11,DVD version has 1 hour of extras of 5 bonus matches including: - Stacy Keibler vs Torrie Wilson in a bikini contest. - A tour of Trish Stratus' place. - Behind the scenes look at the WWE women division.,NULL\n16641,2425413,11,\"The Australian TV version had a scene included at the end where a cop car was driving in an alley way, narrowly missing someone walking. This scene was also used in the 1980 film, \\\"Alligator\\\".\",NULL\n\n" ]
[ 0 ]
[]
[]
[ "backslash", "csv", "csvreader", "python" ]
stackoverflow_0074646678_backslash_csv_csvreader_python.txt
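A small round-trip check for the doublequote=False approach above: read the newly written file back with the same dialect settings and confirm that the fourth column really was capped; the file name is a placeholder:

import csv

with open('output.csv', newline='') as f:
    reader = csv.reader(f, delimiter=',', quotechar='"', escapechar='\\')
    header = next(reader)
    for row in reader:
        # c4 is the fourth field; after truncation it should never exceed 500 characters.
        assert len(row[3]) <= 500, row
print('all rows within the 500-character limit')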
Q: Error: "metadata generation failed", can't install Artic Module I am trying to get started with downloading this project: https://github.com/sadighian/crypto-rl And I've downloaded the packages in the requirements file but I can't figure out why the artic package won't download. I am getting this error: Γ— python setup.py egg_info did not run successfully. β”‚ exit code: 1 ╰─> [312 lines of output] /Users/aishahalane/venv/lib/python3.9/site-packages/setuptools/installer.py:27: SetuptoolsDeprecationWarning: setuptools.installer is deprecated. Requirements should be satisfied by a PEP 517 installer. warnings.warn( error: subprocess-exited-with-error Γ— Building wheel for numpy (pyproject.toml) did not run successfully. β”‚ exit code: 1 ╰─> [269 lines of output] … error: Command "/opt/concourse/worker/volumes/live/c1a1a6ef-e724-4ad9-52a7-d6d68451dacb/volume/python-split_1631807121927/_build_env/bin/llvm-ar rcs build/temp.macosx-10.9-x86_64-3.9/libnpymath.a build/temp.macosx-10.9-x86_64-3.9/numpy/core/src/npymath/npy_math.o build/temp.macosx-10.9-x86_64-3.9/build/src.macosx-10.9-x86_64-3.9/numpy/core/src/npymath/ieee754.o build/temp.macosx-10.9-x86_64-3.9/build/src.macosx-10.9-x86_64-3.9/numpy/core/src/npymath/npy_math_complex.o build/temp.macosx-10.9-x86_64-3.9/numpy/core/src/npymath/halffloat.o" failed with exit status 127 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for numpy ERROR: Failed to build one or more wheels Traceback (most recent call last): File "/Users/aishahalane/venv/lib/python3.9/site-packages/setuptools/installer.py", line 82, in fetch_build_egg subprocess.check_call(cmd) File "/Users/aishahalane/opt/anaconda3/lib/python3.9/subprocess.py", line 373, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['/Users/aishahalane/venv/bin/python3', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', '/var/folders/_p/xqkc7m_n2_ngn8wdd3pgytp80000gn/T/tmph3_m5ewf', '--quiet', 'numpy<=1.18.4']' returned non-zero exit status 1. 
The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 34, in <module> File "/private/var/folders/_p/xqkc7m_n2_ngn8wdd3pgytp80000gn/T/pip-req-build-s7mgwt47/setup.py", line 59, in <module> setup( File "/Users/aishahalane/venv/lib/python3.9/site-packages/setuptools/__init__.py", line 154, in setup _install_setup_requires(attrs) File "/Users/aishahalane/venv/lib/python3.9/site-packages/setuptools/__init__.py", line 148, in _install_setup_requires dist.fetch_build_eggs(dist.setup_requires) File "/Users/aishahalane/venv/lib/python3.9/site-packages/setuptools/dist.py", line 812, in fetch_build_eggs resolved_dists = pkg_resources.working_set.resolve( File "/Users/aishahalane/venv/lib/python3.9/site-packages/pkg_resources/__init__.py", line 771, in resolve dist = best[req.key] = env.best_match( File "/Users/aishahalane/venv/lib/python3.9/site-packages/pkg_resources/__init__.py", line 1056, in best_match return self.obtain(req, installer) File "/Users/aishahalane/venv/lib/python3.9/site-packages/pkg_resources/__init__.py", line 1068, in obtain return installer(requirement) File "/Users/aishahalane/venv/lib/python3.9/site-packages/setuptools/dist.py", line 883, in fetch_build_egg return fetch_build_egg(self, req) File "/Users/aishahalane/venv/lib/python3.9/site-packages/setuptools/installer.py", line 84, in fetch_build_egg raise DistutilsError(str(e)) from e distutils.errors.DistutilsError: Command '['/Users/aishahalane/venv/bin/python3', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', '/var/folders/_p/xqkc7m_n2_ngn8wdd3pgytp80000gn/T/tmph3_m5ewf', '--quiet', 'numpy<=1.18.4']' returned non-zero exit status 1. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed Γ— Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip. How do I solve this problem? A: I faced a similar issue, due a recent change in pip. I solved it by adding the following to the installation command: --use-deprecated=backtrack-on-build-failures E.g. instead of pip install numpy I now ran: pip install numpy --use-deprecated=backtrack-on-build-failures A: I had the same problem with metadata-generation-failed. This GitHub issue comment helped me (Ubuntu 18.04): sudo apt-get install python-dev sudo apt-get install build-essential python -m pip install -U pip or python3 -m pip install -U pip pip3 install --upgrade setuptools A: I had this error while trying to install dotenv. With this command: pip install dotenv For anyone facing this error with dotenv. The right command actually is: pip install python-dotenv A: pip install numpy --use-deprecated=legacy-resolver can actually help, but there will be a red warning: pip's legacy dependency resolver does not consider dependency conflicts when selecting packages. anyway it solve my problem!
Error: "metadata generation failed", can't install Artic Module
I am trying to get started with downloading this project: https://github.com/sadighian/crypto-rl And I've downloaded the packages in the requirements file but I can't figure out why the artic package won't download. I am getting this error: Γ— python setup.py egg_info did not run successfully. β”‚ exit code: 1 ╰─> [312 lines of output] /Users/aishahalane/venv/lib/python3.9/site-packages/setuptools/installer.py:27: SetuptoolsDeprecationWarning: setuptools.installer is deprecated. Requirements should be satisfied by a PEP 517 installer. warnings.warn( error: subprocess-exited-with-error Γ— Building wheel for numpy (pyproject.toml) did not run successfully. β”‚ exit code: 1 ╰─> [269 lines of output] … error: Command "/opt/concourse/worker/volumes/live/c1a1a6ef-e724-4ad9-52a7-d6d68451dacb/volume/python-split_1631807121927/_build_env/bin/llvm-ar rcs build/temp.macosx-10.9-x86_64-3.9/libnpymath.a build/temp.macosx-10.9-x86_64-3.9/numpy/core/src/npymath/npy_math.o build/temp.macosx-10.9-x86_64-3.9/build/src.macosx-10.9-x86_64-3.9/numpy/core/src/npymath/ieee754.o build/temp.macosx-10.9-x86_64-3.9/build/src.macosx-10.9-x86_64-3.9/numpy/core/src/npymath/npy_math_complex.o build/temp.macosx-10.9-x86_64-3.9/numpy/core/src/npymath/halffloat.o" failed with exit status 127 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for numpy ERROR: Failed to build one or more wheels Traceback (most recent call last): File "/Users/aishahalane/venv/lib/python3.9/site-packages/setuptools/installer.py", line 82, in fetch_build_egg subprocess.check_call(cmd) File "/Users/aishahalane/opt/anaconda3/lib/python3.9/subprocess.py", line 373, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['/Users/aishahalane/venv/bin/python3', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', '/var/folders/_p/xqkc7m_n2_ngn8wdd3pgytp80000gn/T/tmph3_m5ewf', '--quiet', 'numpy<=1.18.4']' returned non-zero exit status 1. 
The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 34, in <module> File "/private/var/folders/_p/xqkc7m_n2_ngn8wdd3pgytp80000gn/T/pip-req-build-s7mgwt47/setup.py", line 59, in <module> setup( File "/Users/aishahalane/venv/lib/python3.9/site-packages/setuptools/__init__.py", line 154, in setup _install_setup_requires(attrs) File "/Users/aishahalane/venv/lib/python3.9/site-packages/setuptools/__init__.py", line 148, in _install_setup_requires dist.fetch_build_eggs(dist.setup_requires) File "/Users/aishahalane/venv/lib/python3.9/site-packages/setuptools/dist.py", line 812, in fetch_build_eggs resolved_dists = pkg_resources.working_set.resolve( File "/Users/aishahalane/venv/lib/python3.9/site-packages/pkg_resources/__init__.py", line 771, in resolve dist = best[req.key] = env.best_match( File "/Users/aishahalane/venv/lib/python3.9/site-packages/pkg_resources/__init__.py", line 1056, in best_match return self.obtain(req, installer) File "/Users/aishahalane/venv/lib/python3.9/site-packages/pkg_resources/__init__.py", line 1068, in obtain return installer(requirement) File "/Users/aishahalane/venv/lib/python3.9/site-packages/setuptools/dist.py", line 883, in fetch_build_egg return fetch_build_egg(self, req) File "/Users/aishahalane/venv/lib/python3.9/site-packages/setuptools/installer.py", line 84, in fetch_build_egg raise DistutilsError(str(e)) from e distutils.errors.DistutilsError: Command '['/Users/aishahalane/venv/bin/python3', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', '/var/folders/_p/xqkc7m_n2_ngn8wdd3pgytp80000gn/T/tmph3_m5ewf', '--quiet', 'numpy<=1.18.4']' returned non-zero exit status 1. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed Γ— Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip. How do I solve this problem?
[ "I faced a similar issue, due a recent change in pip. I solved it by adding the following to the installation command:\n--use-deprecated=backtrack-on-build-failures\n\nE.g. instead of pip install numpy I now ran:\npip install numpy --use-deprecated=backtrack-on-build-failures\n\n", "I had the same problem with metadata-generation-failed. This GitHub issue comment helped me (Ubuntu 18.04):\nsudo apt-get install python-dev \nsudo apt-get install build-essential\npython -m pip install -U pip or python3 -m pip install -U pip \npip3 install --upgrade setuptools\n\n", "I had this error while trying to install dotenv. With this command:\npip install dotenv\n\nFor anyone facing this error with dotenv. The right command actually is:\npip install python-dotenv\n\n", "pip install numpy --use-deprecated=legacy-resolver can actually help, but there will be a red warning: pip's legacy dependency resolver does not consider dependency conflicts when selecting packages. anyway it solve my problem!\n" ]
[ 21, 3, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0070916814_python.txt
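When a pin such as numpy<=1.18.4 is forced by a package's setup_requires, pip often has no prebuilt wheel for the running interpreter (1.18.4 predates Python 3.9) and falls back to a source build, which is what fails here. A quick, non-authoritative sanity check of which interpreter and architecture pip is actually building for:

import platform
import sys

print(sys.version)            # Python version inside the venv
print(platform.machine())     # CPU architecture, e.g. x86_64 vs arm64 on macOS
print(platform.platform())    # platform string pip considers when selecting wheels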
Q: How to sort this list to make the exponent value go first? Given list = [10, '3 ^ 2', '2 ^ 3'], how do I sort the list so that the exponent values ( '3 ^ 2' and '2 ^ 3' ) come before any integer / float value, with the exponent values sorted by their base? Desired Output: ['2 ^ 3', '3 ^ 2', 10] I have tried to remove() and insert() the value but I can't figure out how to find the index to insert the value at. A: You shouldn't use list as a variable name, it shadows the built-in list type. Not pretty, but you can use this sorted(mylist, key=lambda x: (-len(str(x).split('^')), str(x).split('^')[0])) This sorts according to two criteria: first whether there is an exponent, and then by the value of the base.
How to sort this list to make the exponent value go first?
Given list = [10, '3 ^ 2', '2 ^ 3'], how do I sort the list so that the exponent values ( '3 ^ 2' and '2 ^ 3' ) come before any integer / float value, with the exponent values sorted by their base? Desired Output: ['2 ^ 3', '3 ^ 2', 10] I have tried to remove() and insert() the value but I can't figure out how to find the index to insert the value at.
[ "You shouldn't use list as a variable name, it shadows the built-in list type.\nNot pretty, but you can use this\nsorted(mylist, key=lambda x: (-len(str(x).split('^')), str(x).split('^')[0]))\n\nThis sorts according to two criteria: first whether there is an exponent, and then by the value of the base.\n" ]
[ 1 ]
[]
[]
[ "exponent", "list", "python" ]
stackoverflow_0074646874_exponent_list_python.txt
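An alternative sketch of the same two-criteria sort that parses the base as an integer, so multi-digit bases also order numerically; the variable name values is arbitrary:

values = [10, '3 ^ 2', '2 ^ 3']

def sort_key(item):
    s = str(item)
    if '^' in s:
        base = int(s.split('^')[0])
        return (0, base)        # exponent entries first, ordered by base
    return (1, float(item))     # plain numbers afterwards, ordered by value

print(sorted(values, key=sort_key))
# ['2 ^ 3', '3 ^ 2', 10]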
Q: Pandas: resample hourly values to monthly values with offset I want to aggregate a pandas.Series with an hourly DatetimeIndex to monthly values - while considering the offset to midnight. Example Consider the following (uniform) timeseries that spans about 1.5 months. import pandas as pd hours = pd.Series(1, pd.date_range('2020-02-23 06:00', freq = 'H', periods=1008)) hours # 2020-02-23 06:00:00 1 # 2020-02-23 07:00:00 1 # .. # 2020-04-05 04:00:00 1 # 2020-04-05 05:00:00 1 # Freq: H, Length: 1000, dtype: int64 I would like to sum these to months while considering, that days start at 06:00 in this use-case. The result should be: 2020-02-01 06:00:00 168 2020-03-01 06:00:00 744 2020-04-01 06:00:00 96 freq: MS, dtype: int64 How do I do that?? What I've tried and what works I can aggregate to days while considering the offset, using the offset parameter: days = hours.resample('D', offset=pd.Timedelta('06:00:00')).sum() days # 2020-02-23 06:00:00 24 # 2020-02-24 06:00:00 24 # .. # 2020-04-03 06:00:00 24 # 2020-04-04 06:00:00 24 # Freq: D, dtype: int64 Using the same method to aggregate to months does not work. The timestamps do not have a time component, and the values are incorrect: months = hours.resample('MS', offset=pd.Timedelta('06:00:00')).sum() months # 2020-02-01 162 # wrong # 2020-03-01 744 # 2020-04-01 102 # wrong # Freq: MS, dtype: int64 I could do the aggregation to months as a second step after aggregating to days. In that case, the values are correct, but the time component is still missing from the timestamps: days = hours.resample('D', offset=pd.Timedelta('06:00:00')).sum() months = days.resample('MS', offset=pd.Timedelta('06:00:00')).sum() months # 2020-02-01 168 # 2020-03-01 744 # 2020-04-01 96 # Freq: MS, dtype: int64 My current workaround is adding the timedelta and resetting the frequency manually. months.index += pd.Timedelta('06:00:00') months.index.freq = 'MS' months # 2020-02-01 06:00:00 168 # 2020-03-01 06:00:00 744 # 2020-04-01 06:00:00 96 # freq: MS, dtype: int64 A: Not too much of an improvement on your attempt, but you could write the resampling as months = hours.resample('D', offset='06:00:00').sum().resample('MS').sum() changing the index labels still requires the hack you've been doing, as in adding the time delta manually and setting freq to MS note that you can pass a string representation of the time delta to offset. The reason two resampling operations are needed is because when the resampling frequency is greater than 'D', the offset is ignored. Once your resample at the daily level is performed with the offset, the result can be further resampled without specifying the offset. I believe this is buggy behaviour, and I agree with you that hours.resample('MS', offset='06:00:00').sum() should produce the expected result. Essentially, there are two issues: the binning is incorrect when there is an offset applied & the frequency is greater than 'D'. The offset is ignored. the offset is not reflected in the final output, the output truncates to the start or end of the period. I'm not sure if the behaviour you're expecting can be generalized for all users. That there is a related bug issue impacting resampling with offsets. I have not determined yet whether that and the issue you face have the same root cause. Its the same root cause.
Pandas: resample hourly values to monthly values with offset
I want to aggregate a pandas.Series with an hourly DatetimeIndex to monthly values - while considering the offset to midnight. Example Consider the following (uniform) timeseries that spans about 1.5 months. import pandas as pd hours = pd.Series(1, pd.date_range('2020-02-23 06:00', freq = 'H', periods=1008)) hours # 2020-02-23 06:00:00 1 # 2020-02-23 07:00:00 1 # .. # 2020-04-05 04:00:00 1 # 2020-04-05 05:00:00 1 # Freq: H, Length: 1000, dtype: int64 I would like to sum these to months while considering, that days start at 06:00 in this use-case. The result should be: 2020-02-01 06:00:00 168 2020-03-01 06:00:00 744 2020-04-01 06:00:00 96 freq: MS, dtype: int64 How do I do that?? What I've tried and what works I can aggregate to days while considering the offset, using the offset parameter: days = hours.resample('D', offset=pd.Timedelta('06:00:00')).sum() days # 2020-02-23 06:00:00 24 # 2020-02-24 06:00:00 24 # .. # 2020-04-03 06:00:00 24 # 2020-04-04 06:00:00 24 # Freq: D, dtype: int64 Using the same method to aggregate to months does not work. The timestamps do not have a time component, and the values are incorrect: months = hours.resample('MS', offset=pd.Timedelta('06:00:00')).sum() months # 2020-02-01 162 # wrong # 2020-03-01 744 # 2020-04-01 102 # wrong # Freq: MS, dtype: int64 I could do the aggregation to months as a second step after aggregating to days. In that case, the values are correct, but the time component is still missing from the timestamps: days = hours.resample('D', offset=pd.Timedelta('06:00:00')).sum() months = days.resample('MS', offset=pd.Timedelta('06:00:00')).sum() months # 2020-02-01 168 # 2020-03-01 744 # 2020-04-01 96 # Freq: MS, dtype: int64 My current workaround is adding the timedelta and resetting the frequency manually. months.index += pd.Timedelta('06:00:00') months.index.freq = 'MS' months # 2020-02-01 06:00:00 168 # 2020-03-01 06:00:00 744 # 2020-04-01 06:00:00 96 # freq: MS, dtype: int64
[ "Not too much of an improvement on your attempt, but you could write the resampling as\nmonths = hours.resample('D', offset='06:00:00').sum().resample('MS').sum()\n\nChanging the index labels still requires the hack you've been doing, i.e. adding the time delta manually and setting freq to MS.\nNote that you can pass a string representation of the time delta to offset.\nThe reason two resampling operations are needed is that when the resampling frequency is greater than 'D', the offset is ignored. Once your resample at the daily level is performed with the offset, the result can be further resampled without specifying the offset.\nI believe this is buggy behaviour, and I agree with you that hours.resample('MS', offset='06:00:00').sum() should produce the expected result.\nEssentially, there are two issues:\n\nthe binning is incorrect when there is an offset applied & the frequency is greater than 'D'. The offset is ignored.\nthe offset is not reflected in the final output; the output truncates to the start or end of the period. I'm not sure if the behaviour you're expecting can be generalized for all users.\n\nThere is a related bug issue impacting resampling with offsets. I initially had not determined whether that and the issue you face have the same root cause; it is the same root cause.\n" ]
[ 1 ]
[ "To aggregate a pandas.Series with an hourly DatetimeIndex to monthly values while considering the offset to midnight, you can use the offset parameter in the resample method to specify the offset from midnight to start the aggregation from. For example, if you want to start the aggregation from 6:00 AM, you can use an offset of pd.Timedelta('06:00:00').\nHere is an example of how you could use the offset parameter to aggregate the timeseries to monthly values while considering the offset to midnight:\nimport pandas as pd\n\nhours = pd.Series(1, pd.date_range('2020-02-23 06:00', freq = 'H', periods=1008))\n\n# Aggregate the timeseries to monthly values using the offset parameter\nmonths = hours.resample('MS', offset=pd.Timedelta('06:00:00')).sum()\n\n# Print the result\nprint(months)\n\n\nThis will print the following:\n2020-02-01 06:00:00 168\n2020-03-01 06:00:00 744\n2020-04-01 06:00:00 96\nFreq: MS, dtype: int64\n\n\n" ]
[ -1 ]
[ "datetime", "pandas", "pandas_resample", "python" ]
stackoverflow_0074401212_datetime_pandas_pandas_resample_python.txt
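The two-step workaround from the answer, wrapped into a small helper; the 06:00 offset is the use case from the question and the expected totals match the desired output above:

import pandas as pd

def monthly_sum_with_offset(series, offset='06:00:00'):
    # Sum an hourly series to months whose 'day' starts at the given offset.
    daily = series.resample('D', offset=offset).sum()       # offset is honoured at the daily level
    monthly = daily.resample('MS').sum()                     # then roll whole days into months
    monthly.index = monthly.index + pd.Timedelta(offset)     # put the offset back on the labels
    return monthly

hours = pd.Series(1, pd.date_range('2020-02-23 06:00', freq='H', periods=1008))
print(monthly_sum_with_offset(hours))
# 2020-02-01 06:00:00    168
# 2020-03-01 06:00:00    744
# 2020-04-01 06:00:00     96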
Q: How can I find out the amount of susceptible, infected and recovered individuals in time = 50, where S(50), I(50), R(50)? (SIR MODEL) How can I find out the amount of susceptible, infected and recovered individuals in time = 50, where S(50), I(50), R(50)? (SIR MODEL) # EquaΓ§Γ΅es diferenciais e suas condiΓ§Γ΅es iniciais h = 0.05 beta = 0.8 nu = 0.3125 def derivada_S(time,I,S): return -beta*I*S def derivada_I(time,I,S): return beta*I*S - nu*I def derivada_R(time,I): return nu*I S0 = 0.99 I0 = 0.01 R0 = 0.0 time_0 = 0.0 time_k = 100 data = 1000 # vetor representativo do tempo time = np.linspace(time_0,time_k,data) S = np.zeros(data) I = np.zeros(data) R = np.zeros(data) S[0] = S0 I[0] = I0 R[0] = R0 for i in range(data-1): S_k1 = derivada_S(time[i], I[i], S[i]) S_k2 = derivada_S(time[i] + (1/2)*h, I[i], S[i] + h + (1/2)*S_k1) S_k3 = derivada_S(time[i] + (1/2)*h, I[i], S[i] + h + (1/2)*S_k2) S_k4 = derivada_S(time[i] + h, I[i], S[i] + h + S_k3) S[i+1] = S[i] + (h/6)*(S_k1 + 2*S_k2 + 2*S_k3 + S_k4) I_k1 = derivada_I(time[i], I[i], S[i]) I_k2 = derivada_I(time[i] + (1/2)*h, I[i], S[i] + h + (1/2)*I_k1) I_k3 = derivada_I(time[i] + (1/2)*h, I[i], S[i] + h + (1/2)*I_k2) I_k4 = derivada_I(time[i] + h, I[i], S[i] + h + I_k3) I[i+1] = I[i] + (h/6)*(I_k1 + 2*I_k2 + 2*I_k3 + I_k4) R_k1 = derivada_R(time[i], I[i]) R_k2 = derivada_R(time[i] + (1/2)*h, I[i]) R_k3 = derivada_R(time[i] + (1/2)*h, I[i]) R_k4 = derivada_R(time[i] + h, I[i]) R[i+1] = R[i] + (h/6)*(R_k1 + 2*R_k2 + 2*R_k3 + R_k4) plt.figure(figsize=(8,6)) plt.plot(time,S, label = 'S') plt.plot(time,I, label = 'I') plt.plot(time,R, label = 'R') plt.xlabel('tempo (t)') plt.ylabel('SusceptΓ­vel, Infectado e Recuperado') plt.grid() plt.legend() plt.show() I'm solving an university problem with python applying Runge-Kutta's fourth order, but a I don't know how to collect the data for time = 50. A: This link maybe help you to build the model SIR-derived ODE models also here by I have code for you: import numpy as np import matplotlib.pyplot as plt Beta = 1.00205 Gamma = 0.23000 N = 1000 def func_S(t,I,S): return - Beta*I*S/N def func_I(t,I,S): return Beta*I*S/N - Gamma*I def func_R(t,I): return Gamma*I # physical parameters I0 = 1 R0 = 0 S0 = N - I0 - R0 t0 = 0 tn = 50 # Numerical Parameters ndata = 1000 t = np.linspace(t0,tn,ndata) h = t[2] - t[1] S = np.zeros(ndata) I = np.zeros(ndata) R = np.zeros(ndata) S[0] = S0 I[0] = I0 R[0] = R0 for i in range(ndata-1): k1 = func_S(t[i], I[i], S[i]) k2 = func_S(t[i]+0.5*h, I[i], S[i]+h+0.5*k1) k3 = func_S(t[i]+0.5*h, I[i], S[i]+h+0.5*k2) k4 = func_S(t[i]+h, I[i], S[i]+h+k3) S[i+1] = S[i] + (h/6)*(k1 + 2*k2 + 2*k3 + k4) kk1 = func_I(t[i], I[i], S[i]) kk2 = func_I(t[i]+0.5*h, I[i], S[i]+h+0.5*kk1) kk3 = func_I(t[i]+0.5*h, I[i], S[i]+h+0.5*kk2) kk4 = func_I(t[i]+h, I[i], S[i]+h+kk3) I[i+1] = I[i] + (h/6)*(kk1 + 2*kk2 + 2*kk3 + kk4) l1 = func_R(t[i], I[i]) l2 = func_R(t[i]+0.5*h, I[i]) l3 = func_R(t[i]+0.5*h, I[i]) l4 = func_R(t[i]+h, I[i]) R[i+1] = R[i] + (h/6)*(l1 + 2*l2 + 2*l3 + l4) plt.figure(1) plt.plot(t,S) plt.plot(t,I) plt.plot(t,R) plt.show() the output will be like this: A: The easiest way to get a value at time 50 is to compute a value at time 50. As you compute data over 100 days, with about 10 data points per day, reflect this in the time array construction time = np.linspace(0,days,10*days+1) Note that linspace(a,b,N) produces N nodes that have between them a step of size (b-a)/(N-1). Then you get the data for day 50 at time index 500 (and the 9 following). 
For this slow-moving system and this relatively small time step, you will get reasonable accuracy with the implemented order-1 method, but will get better accuracy with a higher-order method like RK4. You need to apply associated updates to all components everywhere. This requires to interleave the RK4 steps that you have, as for instance the corrected step S_k2 = derivada_S(time[i] + (h/2), I[i] + (h/2)*I_k1, S[i] + (h/2)*S_k1) requires that the value I_k1 is previously computed. Note also that h should be a factor to S_k1, it should not be added. In total you should get for i in range(data-1): S_k1 = derivada_S(time[i], I[i], S[i]) I_k1 = derivada_I(time[i], I[i], S[i]) R_k1 = derivada_R(time[i], I[i]) S_k2 = derivada_S(time[i] + (1/2)*h, I[i] + (h/2)*I_k1, S[i] + (h/2)*S_k1) I_k2 = derivada_I(time[i] + (1/2)*h, I[i] + (h/2)*I_k1, S[i] + (h/2)*S_k1) R_k2 = derivada_R(time[i] + (1/2)*h, I[i] + (h/2)*I_k1) S_k3 = derivada_S(time[i] + (h/2), I[i] + (h/2)*I_k2, S[i] + (h/2)*S_k2) I_k3 = derivada_I(time[i] + (h/2), I[i] + (h/2)*I_k2, S[i] + (h/2)*S_k2) R_k3 = derivada_R(time[i] + (h/2), I[i] + (h/2)*I_k2) S_k4 = derivada_S(time[i] + h, I[i] + I_k3, S[i] + S_k3) I_k4 = derivada_I(time[i] + h, I[i] + I_k3, S[i] + S_k3) R_k4 = derivada_R(time[i] + h, I[i] + I_k3) S[i+1] = S[i] + (h/6)*(S_k1 + 2*S_k2 + 2*S_k3 + S_k4) I[i+1] = I[i] + (h/6)*(I_k1 + 2*I_k2 + 2*I_k3 + I_k4) R[i+1] = R[i] + (h/6)*(R_k1 + 2*R_k2 + 2*R_k3 + R_k4) Note that h is a factor to I_k1, S_k1 etc. You have a sum there. Replacing just this piece of code gives the plot But there is another problem before that. You defined the time step as 0.05 so that t=50 is reached at the last place. As the system is autonomous, the contents of the time array makes no difference, but the labeling of the x axis has to be divided by 2. The values that you want are in fact the last values computed with data = 10*time_k+1. S[-1]=0.10483, I[-1]=8.11098e-05, R[-1]=0.89509 For the previous discussion to remain valid, you could also set h=t[1]-t[0], so that t=50 is reached in the middle at i=500. A: You can use the integrator available at scipy.integrate.solve_ivp, and with it use the fourth-order Runge-Kutta method (DOP853, RK23, RK45 and Radau). 
########################################## # AUTHOR : CARLOS DUARDO DA SILVA LIMA # # DATE : 12/01/2022 # # LANGUAGE: python # # IDE : GOOGLE COLAB # # PROBLEM : MODEL SIR # ########################################## import numpy as np from scipy.integrate import odeint, solve_ivp, RK45 import matplotlib.pyplot as plt t_i = 0.0 # START TIME t_f = 50.0 # FINAL TIME N = 1000 #t = np.linspace(t_i,t_f,N) t_span = np.array([t_i,t_f]) # INITIAL CONDITIONS OF THE SOR MODEL S0 = 0.99 I0 = 0.01 R0 = 0.0 r0 = np.array([S0,I0,R0]) # ORDINARY DIFFERENTIAL EQUATIONS OF THE SIR MODEL def SIR(t,y,b,k): s,i,r = y ode1 = -b*s*i ode2 = b*s*i-k*i ode3 = k*i return np.array([ode1,ode2,ode3]) # INTEGRATION OF ORDINARY DIFFERENTIAL EQUATIONS (FOURTH ORDER RUNGE-KUTTA, RADAU) #sol_solve_ivp = solve_ivp(SIR,t_span,y0 = r0,method='Radau', rtol=1E-09, atol=1e-09, args = (0.8,0.3125)) sol_solve_ivp = solve_ivp(SIR,t_span,y0 = r0,method='RK45', rtol=1E-09, atol=1e-09, args = (0.8,0.3125)) # T, S, I, R FUNCTIONS t_= sol_solve_ivp.t s = sol_solve_ivp.y[0, :] i = sol_solve_ivp.y[1, :] r = sol_solve_ivp.y[2, :] # GRAPHIC plt.figure(1) plt.style.use('dark_background') plt.figure(figsize = (8,8)) plt.plot(t_,s,'c-',t_,i,'g-',t_,r,'y-',lw=1.5) #plt.title(r'$\frac{dS(t)}{dt} = -bs(t)i(t)$, $\frac{dI(t)}{dt} = bs(t)i(t)-ki(t)$ and $\frac{dR(t)}{dt} = ki(t)$') plt.title(r'SIR Model', color = 'm') plt.xlabel(r'$t(t)$', color = 'm') plt.ylabel(r'$S(t)$, $I(t)$ and $R(t)$', color = 'm') plt.legend(['S', 'I', 'R'], shadow=True) plt.grid(lw = 0.95,color = 'white',linestyle = '--') plt.show() ''' SEARCH WEBSITES https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology https://www.maa.org/press/periodicals/loci/joma/the-sir-model-for-spread-of-disease-the-differential-equation-model ''' Output Chart
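Editor's addendum to the answers above: if the goal is specifically S(50), I(50) and R(50), scipy's solve_ivp can be asked to report the solution exactly at t = 50 via its t_eval argument, instead of reading values off an array index. This is a minimal hedged sketch reusing the question's parameters (beta = 0.8, nu = 0.3125) and initial conditions; solve_ivp and t_eval are standard scipy.integrate API.

import numpy as np
from scipy.integrate import solve_ivp

beta, nu = 0.8, 0.3125

def sir(t, y):
    s, i, r = y
    return [-beta * s * i, beta * s * i - nu * i, nu * i]

# Integrate from t = 0 to t = 50 and evaluate the solution only at t = 50
sol = solve_ivp(sir, (0.0, 50.0), [0.99, 0.01, 0.0], t_eval=[50.0], rtol=1e-9, atol=1e-9)
S50, I50, R50 = sol.y[:, -1]
print(f"S(50) = {S50:.5f}, I(50) = {I50:.3e}, R(50) = {R50:.5f}")

The numbers should agree with the corrected RK4 loop in the second answer (roughly S ≈ 0.105, I ≈ 8e-05, R ≈ 0.895).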
How can I find out the amount of susceptible, infected and recovered individuals in time = 50, where S(50), I(50), R(50)? (SIR MODEL)
How can I find out the amount of susceptible, infected and recovered individuals in time = 50, where S(50), I(50), R(50)? (SIR MODEL) # EquaΓ§Γ΅es diferenciais e suas condiΓ§Γ΅es iniciais h = 0.05 beta = 0.8 nu = 0.3125 def derivada_S(time,I,S): return -beta*I*S def derivada_I(time,I,S): return beta*I*S - nu*I def derivada_R(time,I): return nu*I S0 = 0.99 I0 = 0.01 R0 = 0.0 time_0 = 0.0 time_k = 100 data = 1000 # vetor representativo do tempo time = np.linspace(time_0,time_k,data) S = np.zeros(data) I = np.zeros(data) R = np.zeros(data) S[0] = S0 I[0] = I0 R[0] = R0 for i in range(data-1): S_k1 = derivada_S(time[i], I[i], S[i]) S_k2 = derivada_S(time[i] + (1/2)*h, I[i], S[i] + h + (1/2)*S_k1) S_k3 = derivada_S(time[i] + (1/2)*h, I[i], S[i] + h + (1/2)*S_k2) S_k4 = derivada_S(time[i] + h, I[i], S[i] + h + S_k3) S[i+1] = S[i] + (h/6)*(S_k1 + 2*S_k2 + 2*S_k3 + S_k4) I_k1 = derivada_I(time[i], I[i], S[i]) I_k2 = derivada_I(time[i] + (1/2)*h, I[i], S[i] + h + (1/2)*I_k1) I_k3 = derivada_I(time[i] + (1/2)*h, I[i], S[i] + h + (1/2)*I_k2) I_k4 = derivada_I(time[i] + h, I[i], S[i] + h + I_k3) I[i+1] = I[i] + (h/6)*(I_k1 + 2*I_k2 + 2*I_k3 + I_k4) R_k1 = derivada_R(time[i], I[i]) R_k2 = derivada_R(time[i] + (1/2)*h, I[i]) R_k3 = derivada_R(time[i] + (1/2)*h, I[i]) R_k4 = derivada_R(time[i] + h, I[i]) R[i+1] = R[i] + (h/6)*(R_k1 + 2*R_k2 + 2*R_k3 + R_k4) plt.figure(figsize=(8,6)) plt.plot(time,S, label = 'S') plt.plot(time,I, label = 'I') plt.plot(time,R, label = 'R') plt.xlabel('tempo (t)') plt.ylabel('SusceptΓ­vel, Infectado e Recuperado') plt.grid() plt.legend() plt.show() I'm solving an university problem with python applying Runge-Kutta's fourth order, but a I don't know how to collect the data for time = 50.
[ "This link maybe help you to build the model SIR-derived ODE models\nalso here by I have code for you:\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nBeta = 1.00205\nGamma = 0.23000\nN = 1000\n\ndef func_S(t,I,S):\n return - Beta*I*S/N\n\ndef func_I(t,I,S):\n return Beta*I*S/N - Gamma*I\n\ndef func_R(t,I):\n return Gamma*I\n\n\n# physical parameters\nI0 = 1\nR0 = 0\nS0 = N - I0 - R0\nt0 = 0\ntn = 50\n\n\n\n# Numerical Parameters\nndata = 1000\n\n\n\nt = np.linspace(t0,tn,ndata)\nh = t[2] - t[1]\n\nS = np.zeros(ndata)\nI = np.zeros(ndata)\nR = np.zeros(ndata)\n\nS[0] = S0\nI[0] = I0\nR[0] = R0\n\n\nfor i in range(ndata-1):\n k1 = func_S(t[i], I[i], S[i])\n k2 = func_S(t[i]+0.5*h, I[i], S[i]+h+0.5*k1)\n k3 = func_S(t[i]+0.5*h, I[i], S[i]+h+0.5*k2)\n k4 = func_S(t[i]+h, I[i], S[i]+h+k3)\n \n S[i+1] = S[i] + (h/6)*(k1 + 2*k2 + 2*k3 + k4)\n \n kk1 = func_I(t[i], I[i], S[i])\n kk2 = func_I(t[i]+0.5*h, I[i], S[i]+h+0.5*kk1)\n kk3 = func_I(t[i]+0.5*h, I[i], S[i]+h+0.5*kk2)\n kk4 = func_I(t[i]+h, I[i], S[i]+h+kk3)\n \n I[i+1] = I[i] + (h/6)*(kk1 + 2*kk2 + 2*kk3 + kk4)\n \n l1 = func_R(t[i], I[i])\n l2 = func_R(t[i]+0.5*h, I[i])\n l3 = func_R(t[i]+0.5*h, I[i])\n l4 = func_R(t[i]+h, I[i])\n \n R[i+1] = R[i] + (h/6)*(l1 + 2*l2 + 2*l3 + l4)\n \n \nplt.figure(1)\nplt.plot(t,S)\nplt.plot(t,I)\nplt.plot(t,R)\nplt.show()\n\nthe output will be like this:\n\n", "The easiest way to get a value at time 50 is to compute a value at time 50. As you compute data over 100 days, with about 10 data points per day, reflect this in the time array construction\ntime = np.linspace(0,days,10*days+1)\n\nNote that linspace(a,b,N) produces N nodes that have between them a step of size (b-a)/(N-1).\nThen you get the data for day 50 at time index 500 (and the 9 following).\nFor this slow-moving system and this relatively small time step, you will get reasonable accuracy with the implemented order-1 method, but will get better accuracy with a higher-order method like RK4.\nYou need to apply associated updates to all components everywhere. This requires to interleave the RK4 steps that you have, as for instance the corrected step\n S_k2 = derivada_S(time[i] + (h/2), I[i] + (h/2)*I_k1, S[i] + (h/2)*S_k1)\n\nrequires that the value I_k1 is previously computed. Note also that h should be a factor to S_k1, it should not be added.\nIn total you should get\nfor i in range(data-1):\n S_k1 = derivada_S(time[i], I[i], S[i])\n I_k1 = derivada_I(time[i], I[i], S[i])\n R_k1 = derivada_R(time[i], I[i])\n\n S_k2 = derivada_S(time[i] + (1/2)*h, I[i] + (h/2)*I_k1, S[i] + (h/2)*S_k1)\n I_k2 = derivada_I(time[i] + (1/2)*h, I[i] + (h/2)*I_k1, S[i] + (h/2)*S_k1)\n R_k2 = derivada_R(time[i] + (1/2)*h, I[i] + (h/2)*I_k1)\n\n S_k3 = derivada_S(time[i] + (h/2), I[i] + (h/2)*I_k2, S[i] + (h/2)*S_k2)\n I_k3 = derivada_I(time[i] + (h/2), I[i] + (h/2)*I_k2, S[i] + (h/2)*S_k2)\n R_k3 = derivada_R(time[i] + (h/2), I[i] + (h/2)*I_k2)\n\n S_k4 = derivada_S(time[i] + h, I[i] + I_k3, S[i] + S_k3)\n I_k4 = derivada_I(time[i] + h, I[i] + I_k3, S[i] + S_k3)\n R_k4 = derivada_R(time[i] + h, I[i] + I_k3)\n \n S[i+1] = S[i] + (h/6)*(S_k1 + 2*S_k2 + 2*S_k3 + S_k4)\n I[i+1] = I[i] + (h/6)*(I_k1 + 2*I_k2 + 2*I_k3 + I_k4)\n R[i+1] = R[i] + (h/6)*(R_k1 + 2*R_k2 + 2*R_k3 + R_k4)\n\nNote that h is a factor to I_k1, S_k1 etc. You have a sum there.\nReplacing just this piece of code gives the plot\n\nBut there is another problem before that. You defined the time step as 0.05 so that t=50 is reached at the last place. 
As the system is autonomous, the contents of the time array makes no difference, but the labeling of the x axis has to be divided by 2. The values that you want are in fact the last values computed with data = 10*time_k+1.\nS[-1]=0.10483, I[-1]=8.11098e-05, R[-1]=0.89509\n\nFor the previous discussion to remain valid, you could also set h=t[1]-t[0], so that t=50 is reached in the middle at i=500.\n", "You can use the integrator available at scipy.integrate.solve_ivp, and with it use the fourth-order Runge-Kutta method (DOP853, RK23, RK45 and Radau).\n##########################################\n# AUTHOR : CARLOS DUARDO DA SILVA LIMA #\n# DATE : 12/01/2022 #\n# LANGUAGE: python #\n# IDE : GOOGLE COLAB #\n# PROBLEM : MODEL SIR #\n##########################################\n\nimport numpy as np\nfrom scipy.integrate import odeint, solve_ivp, RK45\nimport matplotlib.pyplot as plt\n\nt_i = 0.0 # START TIME\nt_f = 50.0 # FINAL TIME\nN = 1000\n\n#t = np.linspace(t_i,t_f,N)\nt_span = np.array([t_i,t_f])\n\n# INITIAL CONDITIONS OF THE SOR MODEL\nS0 = 0.99\nI0 = 0.01\nR0 = 0.0\nr0 = np.array([S0,I0,R0])\n\n# ORDINARY DIFFERENTIAL EQUATIONS OF THE SIR MODEL\ndef SIR(t,y,b,k):\n s,i,r = y\n ode1 = -b*s*i\n ode2 = b*s*i-k*i\n ode3 = k*i\n return np.array([ode1,ode2,ode3])\n\n# INTEGRATION OF ORDINARY DIFFERENTIAL EQUATIONS (FOURTH ORDER RUNGE-KUTTA, RADAU)\n#sol_solve_ivp = solve_ivp(SIR,t_span,y0 = r0,method='Radau', rtol=1E-09, atol=1e-09, args = (0.8,0.3125))\nsol_solve_ivp = solve_ivp(SIR,t_span,y0 = r0,method='RK45', rtol=1E-09, atol=1e-09, args = (0.8,0.3125))\n\n# T, S, I, R FUNCTIONS\nt_= sol_solve_ivp.t\ns = sol_solve_ivp.y[0, :]\ni = sol_solve_ivp.y[1, :]\nr = sol_solve_ivp.y[2, :]\n\n# GRAPHIC\nplt.figure(1)\nplt.style.use('dark_background')\nplt.figure(figsize = (8,8))\nplt.plot(t_,s,'c-',t_,i,'g-',t_,r,'y-',lw=1.5)\n#plt.title(r'$\\frac{dS(t)}{dt} = -bs(t)i(t)$, $\\frac{dI(t)}{dt} = bs(t)i(t)-ki(t)$ and $\\frac{dR(t)}{dt} = ki(t)$')\nplt.title(r'SIR Model', color = 'm')\nplt.xlabel(r'$t(t)$', color = 'm')\nplt.ylabel(r'$S(t)$, $I(t)$ and $R(t)$', color = 'm')\nplt.legend(['S', 'I', 'R'], shadow=True)\nplt.grid(lw = 0.95,color = 'white',linestyle = '--')\nplt.show()\n\n''' SEARCH WEBSITES\nhttps://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology\nhttps://www.maa.org/press/periodicals/loci/joma/the-sir-model-for-spread-of-disease-the-differential-equation-model\n'''\n\nOutput Chart\n" ]
[ 2, 0, 0 ]
[]
[]
[ "model", "python", "runge_kutta" ]
stackoverflow_0074513361_model_python_runge_kutta.txt
Q: pandas groupby and join lists I have a dataframe df, with two columns, I want to groupby one column and join the lists belongs to same group, example: column_a, column_b 1, [1,2,3] 1, [2,5] 2, [5,6] after the process: column_a, column_b 1, [1,2,3,2,5] 2, [5,6] I want to keep all the duplicates. I have the following questions: The dtypes of the dataframe are object(s). convert_objects() doesn't convert column_b to list automatically. How can I do this? what does the function in df.groupby(...).apply(lambda x: ...) apply to ? what is the form of x ? list? the solution to my main problem? Thanks in advance. A: object dtype is a catch-all dtype that basically means not int, float, bool, datetime, or timedelta. So it is storing them as a list. convert_objects tries to convert a column to one of those dtypes. You want In [63]: df Out[63]: a b c 0 1 [1, 2, 3] foo 1 1 [2, 5] bar 2 2 [5, 6] baz In [64]: df.groupby('a').agg({'b': 'sum', 'c': lambda x: ' '.join(x)}) Out[64]: c b a 1 foo bar [1, 2, 3, 2, 5] 2 baz [5, 6] This groups the data frame by the values in column a. Read more about groupby. This is doing a regular list sum (concatenation) just like [1, 2, 3] + [2, 5] with the result [1, 2, 3, 2, 5] A: df.groupby('column_a').agg(sum) This works because of operator overloading sum concatenates the lists together. The index of the resulting df will be the values from column_a: A: The approach proposed above using df.groupby('column_a').agg(sum) definetly works. However, you have to make sure that your list only contains integers, otherwise the output will not be the same. If you want to convert all of the lists items into integers, you can use: df['column_a'] = df['column_a'].apply(lambda x: list(map(int, x))) A: The accepted answer suggests to use groupby.sum, which is working fine with small number of lists, however using sum to concatenate lists is quadratic. For a larger number of lists, a much faster option would be to use itertools.chain or a list comprehension: df = pd.DataFrame({'column_a': ['1', '1', '2'], 'column_b': [['1', '2', '3'], ['2', '5'], ['5', '6']]}) itertools.chain: from itertools import chain out = (df.groupby('column_a', as_index=False)['column_b'] .agg(lambda x: list(chain.from_iterable(x))) ) list comprehension: out = (df.groupby('column_a', as_index=False, sort=False)['column_b'] .agg(lambda x: [e for l in x for e in l]) ) output: column_a column_b 0 1 [1, 2, 3, 2, 5] 1 2 [5, 6] Comparison of speed Using n repeats of the example to show the impact of the number of lists to merge: test_df = pd.concat([df]*n, ignore_index=True) NB. also comparing the numpy approach (agg(lambda x: np.concatenate(x.to_numpy()).tolist())). A: Use numpy and simple "for" or "map": import numpy as np u_clm = np.unique(df.column_a.values) all_lists = [] for clm in u_clm: df_process = df.query('column_a == @clm') list_ = np.concatenate(df.column_b.values) all_lists.append((clm, list_.tolist())) df_sum_lists = pd.DataFrame(all_lists) It's faster in 350 times than a simple "groupby-agg-sum" approach for huge datasets.
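Editor's addendum: one more idiomatic option, not shown in the answers above, is to explode the list column and re-aggregate with list. A hedged sketch (DataFrame.explode requires pandas 0.25+); it keeps duplicates and preserves order within each group:

import pandas as pd

df = pd.DataFrame({'column_a': [1, 1, 2],
                   'column_b': [[1, 2, 3], [2, 5], [5, 6]]})

# One row per list element, then collect the elements back into a list per group
out = (df.explode('column_b')
         .groupby('column_a', sort=False)['column_b']
         .agg(list)
         .reset_index())
print(out)
#    column_a         column_b
# 0         1  [1, 2, 3, 2, 5]
# 1         2           [5, 6]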
pandas groupby and join lists
I have a dataframe df, with two columns, I want to groupby one column and join the lists belongs to same group, example: column_a, column_b 1, [1,2,3] 1, [2,5] 2, [5,6] after the process: column_a, column_b 1, [1,2,3,2,5] 2, [5,6] I want to keep all the duplicates. I have the following questions: The dtypes of the dataframe are object(s). convert_objects() doesn't convert column_b to list automatically. How can I do this? what does the function in df.groupby(...).apply(lambda x: ...) apply to ? what is the form of x ? list? the solution to my main problem? Thanks in advance.
[ "object dtype is a catch-all dtype that basically means not int, float, bool, datetime, or timedelta. So it is storing them as a list. convert_objects tries to convert a column to one of those dtypes.\nYou want\nIn [63]: df\nOut[63]: \n a b c\n0 1 [1, 2, 3] foo\n1 1 [2, 5] bar\n2 2 [5, 6] baz\n\n\nIn [64]: df.groupby('a').agg({'b': 'sum', 'c': lambda x: ' '.join(x)})\nOut[64]: \n c b\na \n1 foo bar [1, 2, 3, 2, 5]\n2 baz [5, 6]\n\nThis groups the data frame by the values in column a. Read more about groupby.\nThis is doing a regular list sum (concatenation) just like [1, 2, 3] + [2, 5] with the result [1, 2, 3, 2, 5]\n", "df.groupby('column_a').agg(sum)\n\nThis works because of operator overloading sum concatenates the lists together. The index of the resulting df will be the values from column_a:\n", "The approach proposed above using df.groupby('column_a').agg(sum) definetly works. However, you have to make sure that your list only contains integers, otherwise the output will not be the same.\nIf you want to convert all of the lists items into integers, you can use:\ndf['column_a'] = df['column_a'].apply(lambda x: list(map(int, x)))\n\n", "The accepted answer suggests to use groupby.sum, which is working fine with small number of lists, however using sum to concatenate lists is quadratic.\nFor a larger number of lists, a much faster option would be to use itertools.chain or a list comprehension:\ndf = pd.DataFrame({'column_a': ['1', '1', '2'],\n 'column_b': [['1', '2', '3'], ['2', '5'], ['5', '6']]})\n\nitertools.chain:\nfrom itertools import chain\nout = (df.groupby('column_a', as_index=False)['column_b']\n .agg(lambda x: list(chain.from_iterable(x)))\n )\n\nlist comprehension:\nout = (df.groupby('column_a', as_index=False, sort=False)['column_b']\n .agg(lambda x: [e for l in x for e in l])\n )\n\noutput:\n column_a column_b\n0 1 [1, 2, 3, 2, 5]\n1 2 [5, 6]\n\nComparison of speed\nUsing n repeats of the example to show the impact of the number of lists to merge:\ntest_df = pd.concat([df]*n, ignore_index=True)\n\n\nNB. also comparing the numpy approach (agg(lambda x: np.concatenate(x.to_numpy()).tolist())).\n", "Use numpy and simple \"for\" or \"map\":\nimport numpy as np\nu_clm = np.unique(df.column_a.values)\nall_lists = []\n\nfor clm in u_clm:\n df_process = df.query('column_a == @clm')\n list_ = np.concatenate(df.column_b.values)\n all_lists.append((clm, list_.tolist()))\n\ndf_sum_lists = pd.DataFrame(all_lists)\nIt's faster in 350 times than a simple \"groupby-agg-sum\" approach for huge datasets.\n" ]
[ 83, 23, 3, 1, 0 ]
[ "Thanks, helped me\nmerge.fillna(\"\", inplace = True) new_merge = merge.groupby(['id']).agg({ 'q1':lambda x: ','.join(x), 'q2':lambda x: ','.join(x),'q2_bookcode':lambda x: ','.join(x), 'q1_bookcode':lambda x: ','.join(x)}) \n" ]
[ -1 ]
[ "pandas", "python" ]
stackoverflow_0023794082_pandas_python.txt
Q: python multiprocessing pool does nothing while executing I am currently trying to parallize a rather large task of computing a complex system of differential equations. I want to parallize the computation, so each computation has its own process. I need the results to be ordered, therefore I am using a dictionary to order it after the process. I am also on Windows 10. For now I am only running the identity function to check the code, but even then it simply runs all logical cores at 100% but does not compute (I waited 5 minutes). Later on I will need to initalize each process with a bunch of variables to compute the actual system defined in a solver() function further up the code. What is going wrong? import multiprocessing as mp import numpy as np Nmin = 0 Nmax = 20 periods = np.linspace(Nmin, Nmax, 2*Nmax +1) # 0.5 steps results = dict() def identity(a): return a with mp.Manager() as manager: sharedresults = manager.dict() with mp.Pool() as pool: print("pools are active") for result in pool.map(identity, periods): #sharedresults[per] = res print(result) orderedResult = [] for k,v in sorted(results.items()): oderedResult.append(v) The program gets to the "pools are active" message and after printing it, it just does nothing I guess? I am also using Jupyterlab, not sure wether that is an issue. A: there's a problem with multiprocessing and jupyterlab, so you should use pathos instead. import multiprocessing as mp import numpy as np import scipy.constants as constants from concurrent.futures import ProcessPoolExecutor import pathos.multiprocessing as mpathos Nmin = 0 Nmax = 20 periods = np.linspace(Nmin, Nmax, 2*Nmax +1) # 0.5 steps results = dict() def identity(a): return a with mp.Manager() as manager: sharedresults = manager.dict() with mpathos.Pool() as pool: print("pools are active") for result in pool.imap(identity, periods): #sharedresults[per] = res print(result) orderedResult = [] for k,v in sorted(results.items()): oderedResult.append(v)
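Editor's addendum: the pathos answer above works because it pickles functions differently; the underlying problem is that on Windows (spawn start method) the child processes must be able to import the worker function, which they cannot do when it is defined only in a Jupyter cell. A hedged alternative sketch: put the worker in a small importable module (the file name worker.py is illustrative) and use concurrent.futures, whose map() already returns results in input order, so no Manager dict is needed.

# worker.py  -- a separate importable module (the name is illustrative)
def identity(a):
    return a

# main script / notebook cell
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from worker import identity

periods = np.linspace(0, 20, 2 * 20 + 1)  # 0.5 steps

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        # map() yields results in the same order as `periods`
        ordered_results = list(pool.map(identity, periods))
    print(ordered_results)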
python multiprocessing pool does nothing while executing
I am currently trying to parallelize a rather large task of computing a complex system of differential equations. I want to parallelize the computation, so each computation has its own process. I need the results to be ordered, therefore I am using a dictionary to order it after the process. I am also on Windows 10. For now I am only running the identity function to check the code, but even then it simply runs all logical cores at 100% but does not compute (I waited 5 minutes). Later on I will need to initialize each process with a bunch of variables to compute the actual system defined in a solver() function further up the code. What is going wrong? import multiprocessing as mp import numpy as np Nmin = 0 Nmax = 20 periods = np.linspace(Nmin, Nmax, 2*Nmax +1) # 0.5 steps results = dict() def identity(a): return a with mp.Manager() as manager: sharedresults = manager.dict() with mp.Pool() as pool: print("pools are active") for result in pool.map(identity, periods): #sharedresults[per] = res print(result) orderedResult = [] for k,v in sorted(results.items()): oderedResult.append(v) The program gets to the "pools are active" message and after printing it, it just does nothing I guess? I am also using Jupyterlab, not sure whether that is an issue.
[ "there's a problem with multiprocessing and jupyterlab, so you should use pathos instead.\nimport multiprocessing as mp\nimport numpy as np\nimport scipy.constants as constants\nfrom concurrent.futures import ProcessPoolExecutor\nimport pathos.multiprocessing as mpathos\n\nNmin = 0\nNmax = 20\nperiods = np.linspace(Nmin, Nmax, 2*Nmax +1) # 0.5 steps\n\nresults = dict()\n\ndef identity(a):\n return a\n\nwith mp.Manager() as manager:\n sharedresults = manager.dict()\n\n with mpathos.Pool() as pool:\n print(\"pools are active\")\n for result in pool.imap(identity, periods): \n #sharedresults[per] = res\n print(result)\n\norderedResult = []\nfor k,v in sorted(results.items()):\n oderedResult.append(v)\n\n" ]
[ 1 ]
[]
[]
[ "jupyter_lab", "multiprocessing", "python" ]
stackoverflow_0074646673_jupyter_lab_multiprocessing_python.txt
Q: Pass dynamically created data-tables to another callback function as input in Dash The data tables have been created using the following snippet @app.callback( Output(component_id="my-tables-out", component_property="children")) def update_output_div(): params = ["A", "B"] num_tables = 5 for i in range(5): table = dash_table.DataTable( id=f"table-{i}", columns=([{"id": p, "name": p} for p in params]), # type: ignore data=[ dict(**{param: None for param in params}) # type: ignore for i in range(num_species) ], editable=True, ) result.append(table) return result The callback function that I would like to pass it to is: @app.callback( Output("data", "children"), Input(component_id="table_<i>", component_property="data") ) def display_output(rows, columns): pass The question is how to implement the latter callback function in order to access all data-tables created in the first function. What component_id should I provide to the input (Not sure about it since I am returning a list in the first callback function). Accessing single table works, The problem is how to access dynamically created tables. A: You can implement my-tables-out component id as the input to the next callback function and 'loop' for each table (since it is in a result list). In your current implementation, it does not make sense to have table-1, table-2, table-3, etc. changing the data component because it is not possible in Dash to have multiple inputs changing the same output component.
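Editor's addendum: besides looping over the children of my-tables-out as the answer suggests, Dash also offers pattern-matching callbacks (ALL / MATCH, available since Dash 1.11) that are designed for dynamically created components. A hedged sketch: it assumes the tables are given dictionary ids when created (the "dyn-table" type string is an arbitrary label), reuses app and params from the question, and import paths differ slightly between Dash 1.x and 2.x.

from dash import dash_table
from dash.dependencies import Input, Output, ALL

# When creating each table, use a dictionary id instead of f"table-{i}"
table = dash_table.DataTable(
    id={"type": "dyn-table", "index": i},   # "dyn-table" is just a label
    columns=[{"id": p, "name": p} for p in params],
    data=[{p: None for p in params} for _ in range(4)],  # 4 placeholder rows
    editable=True,
)

# A single callback then receives the data of every matching table as a list
@app.callback(
    Output("data", "children"),
    Input({"type": "dyn-table", "index": ALL}, "data"),
)
def display_output(all_tables_data):
    # all_tables_data[k] is the list of row dicts of the k-th table
    return str(all_tables_data)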
Pass dynamically created data-tables to another callback function as input in Dash
The data tables have been created using the following snippet @app.callback( Output(component_id="my-tables-out", component_property="children")) def update_output_div(): params = ["A", "B"] num_tables = 5 for i in range(5): table = dash_table.DataTable( id=f"table-{i}", columns=([{"id": p, "name": p} for p in params]), # type: ignore data=[ dict(**{param: None for param in params}) # type: ignore for i in range(num_species) ], editable=True, ) result.append(table) return result The callback function that I would like to pass it to is: @app.callback( Output("data", "children"), Input(component_id="table_<i>", component_property="data") ) def display_output(rows, columns): pass The question is how to implement the latter callback function in order to access all data-tables created in the first function. What component_id should I provide to the input (Not sure about it since I am returning a list in the first callback function). Accessing single table works, The problem is how to access dynamically created tables.
[ "You can implement my-tables-out component id as the input to the next callback function and 'loop' for each table (since it is in a result list).\nIn your current implementation, it does not make sense to have table-1, table-2, table-3, etc. changing the data component because it is not possible in Dash to have multiple inputs changing the same output component.\n" ]
[ 0 ]
[]
[]
[ "plotly_dash", "python" ]
stackoverflow_0074608153_plotly_dash_python.txt
Q: How to add items to a dictionary between two methods? I'm writing a method that takes in a list and returns a dictionary. This method is to be saved in a separate Python file and imported into Main.py The method that takes in a list calls another method that's meant to update the global dictionary. global myDict def addKeyValuePair(listItem): try: key = listItem.split(': ')[0].replace('\\','') value = listItem.split(': ')[1].replace('\\r\\n','').replace('\\','') myDict.update({key:value}) except: pass def makeDict(dataList): myDict = {} for listItem in dataList: addKeyValuePair(listItem) return(myDict) From the main method I'm importing the makeDict module and passing it the dataList, but it returns an empty dictionary. from Toolkit import makeDict finalDict = makeDict(dataList) Any idea how this can be done? A: There are a few issues with the code you posted. First, you are trying to access a global variable called myDict from inside the makeDict function. However, you also define a local variable with the same name inside the function, which shadows the global variable. As a result, any modifications made to the local myDict inside the function do not affect the global myDict. One way to fix this would be to remove the local myDict variable and just use the global myDict variable inside the makeDict function. However, using global variables in this way can make your code difficult to understand and maintain. A better approach would be to pass the dictionary as an argument to the makeDict function, and return the updated dictionary from the function. Here is how you could modify the code to fix these issues: myDict = {} def addKeyValuePair(myDict, listItem): try: key = listItem.split(': ')[0].replace('\\','') value = listItem.split(': ')[1].replace('\\r\\n','').replace('\\','') myDict.update({key:value}) except: pass def makeDict(dataList): myDict = {} for listItem in dataList: addKeyValuePair(myDict, listItem) return myDict To use this updated makeDict function, you would call it like this: from Toolkit import makeDict finalDict = makeDict(dataList) I hope this helps!
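Editor's addendum: if the addKeyValuePair helper is not needed anywhere else, the same parsing can live directly inside makeDict, avoiding both the global and the extra function. A minimal hedged sketch assuming each list item looks like 'key: value':

def makeDict(dataList):
    myDict = {}
    for listItem in dataList:
        try:
            key, value = listItem.split(': ', 1)  # maxsplit=1 tolerates ': ' in the value
            myDict[key.replace('\\', '')] = value.replace('\\r\\n', '').replace('\\', '')
        except ValueError:
            # item did not contain ': ' -- skip it, like the original bare except
            pass
    return myDict

# Usage
print(makeDict(['name: Alice', 'id: 42', 'malformed line']))
# {'name': 'Alice', 'id': '42'}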
How to add items to a dictionary between two methods?
I'm writing a method that takes in a list and returns a dictionary. This method is to be saved in a separate Python file and imported into Main.py The method that takes in a list calls another method that's meant to update the global dictionary. global myDict def addKeyValuePair(listItem): try: key = listItem.split(': ')[0].replace('\\','') value = listItem.split(': ')[1].replace('\\r\\n','').replace('\\','') myDict.update({key:value}) except: pass def makeDict(dataList): myDict = {} for listItem in dataList: addKeyValuePair(listItem) return(myDict) From the main method I'm importing the makeDict module and passing it the dataList, but it returns an empty dictionary. from Toolkit import makeDict finalDict = makeDict(dataList) Any idea how this can be done?
[ "There are a few issues with the code you posted.\nFirst, you are trying to access a global variable called myDict from inside the makeDict function. However, you also define a local variable with the same name inside the function, which shadows the global variable. As a result, any modifications made to the local myDict inside the function do not affect the global myDict.\nOne way to fix this would be to remove the local myDict variable and just use the global myDict variable inside the makeDict function. However, using global variables in this way can make your code difficult to understand and maintain. A better approach would be to pass the dictionary as an argument to the makeDict function, and return the updated dictionary from the function.\nHere is how you could modify the code to fix these issues:\nmyDict = {}\n\ndef addKeyValuePair(myDict, listItem):\n try: \n key = listItem.split(': ')[0].replace('\\\\','')\n value = listItem.split(': ')[1].replace('\\\\r\\\\n','').replace('\\\\','')\n myDict.update({key:value})\n except:\n pass \n\ndef makeDict(dataList):\n myDict = {}\n for listItem in dataList:\n addKeyValuePair(myDict, listItem)\n\n return myDict\n\nTo use this updated makeDict function, you would call it like this:\nfrom Toolkit import makeDict\n\nfinalDict = makeDict(dataList)\n\nI hope this helps!\n" ]
[ 1 ]
[]
[]
[ "dictionary", "global_variables", "python" ]
stackoverflow_0074646916_dictionary_global_variables_python.txt
Q: How can i optimize my code so that it can run much more effciently? i am sorry if this this is the wrong type of question to ask here because its mostly like "pls help me fix bug" but if someone is willing to help that would be nice! so basiclly i am making a small game where at the current stage i click somewhere and color will spread out like a wave. currently it does that, but i am bad at life, so my system is that it checks each and every squares up down left and right square to see if its red. if it is, make that square red. then it saves that x,y and does it a bunch more, then it prints all x,y values to the screen all at once. the problem is, every time it does this it keeps all the values in it and has to do more if checks and slows down a lot. is there a better system i can implement that would be better? thank you import pygame import sys width=300 height=300 pygame.init() surface = pygame.display.set_mode( (500, 500) ) size=1 tfx=1 tfy=1 increment=1 red=[] surface.fill( (255,255,255) ) while True: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() sys.exit() if event.type == pygame.MOUSEBUTTONDOWN: pos = pygame.mouse.get_pos() pygame.draw.rect( surface, (255,0,0), (pos[0],pos[1], size, size) ) #pygame.draw.rect( surface, (255,0,0), (width/2-tfx/2, height/2-tfy/2, size, size) ) #!!!!!! this block of code checks every single pixel on the screen to see if the pixel #to the right is red and so on. Alternative method in progress that checks in a 1x1 3x3 5x5 square for x in range(width): for y in range(height): if x+size <=width-1 : color = surface.get_at((x+size,y)) if color[1]==0: if surface.get_at((x,y))[1] !=0: red.append((x,y)) #pygame.display.update() if x-1 >=0: color = surface.get_at((x-1,y)) if color[1]==0: if surface.get_at((x,y))[1] !=0: red.append((x,y)) if y-1 >=0: color = surface.get_at((x,y-1)) if color[1]==0: if surface.get_at((x,y))[1] !=0: red.append((x,y)) if y+1 <=height: color = surface.get_at((x,y+1)) if color[1]==0: if surface.get_at((x,y))[1] !=0: red.append((x,y)) for i in range(len(red)): pygame.draw.rect( surface, (255,0,0), (red[i][0], red[i][1], size, size) ) pygame.display.update() #print( surface.get_at((97,99))) pygame.quit() A: Probably the only way to get acceptable performance (with pygame) is to use pygame.mask.Mask and convolve(). 
Create a mask size of the screen: mask = pygame.mask.Mask(screen.get_size()) Create a convolution mask with the following pattern: [False, True, False] [True, True, True ] [False, True, False] convolution_mask = pygame.mask.Mask((3, 3)) convolution_mask.set_at(p) for p in [(0, 1), (1, 1), (2, 1), (1, 0), (1, 2)]] Set a bit in the mask on mouse click: if event.type == pygame.MOUSEBUTTONDOWN: mask.set_at(event.pos) Create a convolution with the convolution mask in each frame: mask = mask.convolve(convolution_mask , offset=(-1, -1)) Convert the mask into a surface and blit it on the screen: surface = mask.to_surface(setcolor = (255, 0, 0), unsetcolor = (255, 255, 255)) screen.blit(surface, (0, 0)) import pygame import sys pygame.init() screen = pygame.display.set_mode((500, 500)) clock = pygame.time.Clock() mask = pygame.mask.Mask(screen.get_size()) convolution_mask = pygame.mask.Mask((3, 3)) [convolution_mask.set_at(p) for p in [(0, 1), (1, 1), (2, 1), (1, 0), (1, 2)]] run = True while run: clock.tick(60) for event in pygame.event.get(): if event.type == pygame.QUIT: run = False if event.type == pygame.MOUSEBUTTONDOWN: mask.set_at(event.pos) mask = mask.convolve(convolution_mask, offset=(-1, -1)) surface = mask.to_surface(setcolor = (255, 0, 0), unsetcolor = (255, 255, 255)) screen.blit(surface, (0, 0)) pygame.display.update() pygame.quit() sys.exit()
How can I optimize my code so that it can run much more efficiently?
i am sorry if this this is the wrong type of question to ask here because its mostly like "pls help me fix bug" but if someone is willing to help that would be nice! so basiclly i am making a small game where at the current stage i click somewhere and color will spread out like a wave. currently it does that, but i am bad at life, so my system is that it checks each and every squares up down left and right square to see if its red. if it is, make that square red. then it saves that x,y and does it a bunch more, then it prints all x,y values to the screen all at once. the problem is, every time it does this it keeps all the values in it and has to do more if checks and slows down a lot. is there a better system i can implement that would be better? thank you import pygame import sys width=300 height=300 pygame.init() surface = pygame.display.set_mode( (500, 500) ) size=1 tfx=1 tfy=1 increment=1 red=[] surface.fill( (255,255,255) ) while True: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() sys.exit() if event.type == pygame.MOUSEBUTTONDOWN: pos = pygame.mouse.get_pos() pygame.draw.rect( surface, (255,0,0), (pos[0],pos[1], size, size) ) #pygame.draw.rect( surface, (255,0,0), (width/2-tfx/2, height/2-tfy/2, size, size) ) #!!!!!! this block of code checks every single pixel on the screen to see if the pixel #to the right is red and so on. Alternative method in progress that checks in a 1x1 3x3 5x5 square for x in range(width): for y in range(height): if x+size <=width-1 : color = surface.get_at((x+size,y)) if color[1]==0: if surface.get_at((x,y))[1] !=0: red.append((x,y)) #pygame.display.update() if x-1 >=0: color = surface.get_at((x-1,y)) if color[1]==0: if surface.get_at((x,y))[1] !=0: red.append((x,y)) if y-1 >=0: color = surface.get_at((x,y-1)) if color[1]==0: if surface.get_at((x,y))[1] !=0: red.append((x,y)) if y+1 <=height: color = surface.get_at((x,y+1)) if color[1]==0: if surface.get_at((x,y))[1] !=0: red.append((x,y)) for i in range(len(red)): pygame.draw.rect( surface, (255,0,0), (red[i][0], red[i][1], size, size) ) pygame.display.update() #print( surface.get_at((97,99))) pygame.quit()
[ "Probably the only way to get acceptable performance (with pygame) is to use pygame.mask.Mask and convolve(). Create a mask size of the screen:\nmask = pygame.mask.Mask(screen.get_size())\n\nCreate a convolution mask with the following pattern:\n[False, True, False]\n[True, True, True ]\n[False, True, False]\n\nconvolution_mask = pygame.mask.Mask((3, 3))\nconvolution_mask.set_at(p) for p in [(0, 1), (1, 1), (2, 1), (1, 0), (1, 2)]]\n\nSet a bit in the mask on mouse click:\nif event.type == pygame.MOUSEBUTTONDOWN:\n mask.set_at(event.pos)\n\nCreate a convolution with the convolution mask in each frame:\nmask = mask.convolve(convolution_mask , offset=(-1, -1))\n\nConvert the mask into a surface and blit it on the screen:\nsurface = mask.to_surface(setcolor = (255, 0, 0), unsetcolor = (255, 255, 255))\nscreen.blit(surface, (0, 0))\n\n\nimport pygame\nimport sys\n\npygame.init()\nscreen = pygame.display.set_mode((500, 500))\nclock = pygame.time.Clock()\n\nmask = pygame.mask.Mask(screen.get_size())\nconvolution_mask = pygame.mask.Mask((3, 3))\n[convolution_mask.set_at(p) for p in [(0, 1), (1, 1), (2, 1), (1, 0), (1, 2)]]\n\nrun = True\nwhile run:\n clock.tick(60)\n for event in pygame.event.get():\n if event.type == pygame.QUIT:\n run = False\n if event.type == pygame.MOUSEBUTTONDOWN:\n mask.set_at(event.pos)\n\n mask = mask.convolve(convolution_mask, offset=(-1, -1))\n\n surface = mask.to_surface(setcolor = (255, 0, 0), unsetcolor = (255, 255, 255))\n screen.blit(surface, (0, 0))\n pygame.display.update()\n\npygame.quit()\nsys.exit()\n\n" ]
[ 0 ]
[]
[]
[ "pygame", "python" ]
stackoverflow_0074616707_pygame_python.txt
Q: how to converting a for loop with await into asyncio.gather() how do I write the following piece of code using asyncio.gather and map? for i in range(len(data)): candlestick = data[i] candlesticks = data[0: i + 1] await strategy.execute(candlesticks, candlestick.startTime) A: You could do it like this: from asyncio import gather, create_task tasks = [] for i in range(len(data)): candlestick = data[i] candlesticks = data[0: i + 1] tasks.append(create_task(strategy.execute(candlesticks, candlestick.startTime))) results = await gather(*tasks, return_exceptions=False) A: If you want to use map() specifically, you could do this: from asyncio import gather, create_task await gather( *map( lambda i: create_task( strategy.execute(data[0: i + 1], data[i].startTime) ), range(len(data)) )
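Editor's addendum: asyncio.gather returns its results in the same order as the awaitables passed in, so if ordered results are the goal, neither a dict nor manual sorting is needed; results[i] matches iteration i of the original loop. A hedged sketch, assuming it runs inside an async function as in the answers above; note that gather runs the calls concurrently rather than one after another, which is only equivalent if the strategy.execute calls are independent of each other.

from asyncio import gather

results = await gather(
    *(strategy.execute(data[0:i + 1], data[i].startTime) for i in range(len(data)))
)
# results[i] corresponds to iteration i of the original for loop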
how to convert a for loop with await into asyncio.gather()
how do I write the following piece of code using asyncio.gather and map? for i in range(len(data)): candlestick = data[i] candlesticks = data[0: i + 1] await strategy.execute(candlesticks, candlestick.startTime)
[ "You could do it like this:\nfrom asyncio import gather, create_task\ntasks = []\nfor i in range(len(data)):\n candlestick = data[i]\n candlesticks = data[0: i + 1]\n tasks.append(create_task(strategy.execute(candlesticks, candlestick.startTime)))\nresults = await gather(*tasks, return_exceptions=False)\n\n", "If you want to use map() specifically, you could do this:\nfrom asyncio import gather, create_task\n\nawait gather(\n *map(\n lambda i: create_task(\n strategy.execute(data[0: i + 1], data[i].startTime)\n ),\n range(len(data))\n)\n\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0071203112_python.txt
Q: Add a column (in Pandas) that is calculated based on another column I have a simple database that has every month's earnings, with Year (values 1991-2020), Month (Jan-Dec) and Earnings. I want to make a new column, where for years 1991-2005 I divide the Earnings column by 10000 but for 2006-2020 I want it to be the same as in the earnings column. I am a beginner, but what I was thinking is that I want the new column (TrueEarn) to be Earnings/10000 but only for columns 1991-2005. df['TrueEarn'] = df['Earnings']/10000 for (['Year']=('1991':"2005")) Since I am a newb with Python, this may not make sense for you, but that is how I logically wanted to write it Can you help me, please? A: Yoy should provide a minimum reproducible example. But assuming that you have the year in another column, the way to go could be df['TrueEarn'] = np.where((df['YEAR'] >= 1991) & (df['YEAR'] <= 2005), df['Earnings'] / 10000, df['Earnings']) As @wjandrea says, this can be done directly with pandas, but numpy is faster. Benchmark with a toy dataframe: df = pd.DataFrame( {"YEAR": np.random.randint(1991, 2020, size=50000), "Earnings": np.random.uniform(0, 2e10, size=50000)} ) %timeit df["TrueEarn"] = np.where((df["YEAR"] >= 1991) & (df["YEAR"] <= 2005), df["Earnings"] / 10000, df["Earnings"]) 695 Β΅s Β± 3.17 Β΅s per loop (mean Β± std. dev. of 7 runs, 1,000 loops each) VS with pandas mask %timeit df["TrueEarn"] = df["Earnings"].mask(df["YEAR"].between(1991, 2005), df["Earnings"] / 10000) 959 Β΅s Β± 4.45 Β΅s per loop (mean Β± std. dev. of 7 runs, 1,000 loops each)
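Editor's addendum: another common spelling of the same conditional column uses .loc with a boolean mask built by Series.between (inclusive on both ends). A hedged sketch assuming the year column is literally named 'Year' as in the question:

df['TrueEarn'] = df['Earnings']                       # default: copy Earnings unchanged
mask = df['Year'].between(1991, 2005)                 # True for 1991..2005 inclusive
df.loc[mask, 'TrueEarn'] = df.loc[mask, 'Earnings'] / 10000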
Add a column (in Pandas) that is calculated based on another column
I have a simple database that has every month's earnings, with Year (values 1991-2020), Month (Jan-Dec) and Earnings. I want to make a new column, where for years 1991-2005 I divide the Earnings column by 10000 but for 2006-2020 I want it to be the same as in the earnings column. I am a beginner, but what I was thinking is that I want the new column (TrueEarn) to be Earnings/10000 but only for columns 1991-2005. df['TrueEarn'] = df['Earnings']/10000 for (['Year']=('1991':"2005")) Since I am a newb with Python, this may not make sense for you, but that is how I logically wanted to write it Can you help me, please?
[ "Yoy should provide a minimum reproducible example. But assuming that you have the year in another column, the way to go could be\ndf['TrueEarn'] = np.where((df['YEAR'] >= 1991) & (df['YEAR'] <= 2005),\n df['Earnings'] / 10000, df['Earnings'])\n\nAs @wjandrea says, this can be done directly with pandas, but numpy is faster. Benchmark with a toy dataframe:\ndf = pd.DataFrame(\n {\"YEAR\": np.random.randint(1991, 2020, size=50000), \"Earnings\": np.random.uniform(0, 2e10, size=50000)}\n)\n\n \n%timeit df[\"TrueEarn\"] = np.where((df[\"YEAR\"] >= 1991) & (df[\"YEAR\"] <= 2005), df[\"Earnings\"] / 10000, df[\"Earnings\"])\n\n695 Β΅s Β± 3.17 Β΅s per loop (mean Β± std. dev. of 7 runs, 1,000 loops each)\nVS with pandas mask\n%timeit df[\"TrueEarn\"] = df[\"Earnings\"].mask(df[\"YEAR\"].between(1991, 2005), df[\"Earnings\"] / 10000)\n\n959 Β΅s Β± 4.45 Β΅s per loop (mean Β± std. dev. of 7 runs, 1,000 loops each)\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074646960_pandas_python.txt
Q: How to count frequency of a value in a column of a data frame based on another column? I have a dataframe with different traffic signs in different neighborhoods (both are columns). I want to count the quantity of each sign type in each neighborhood. I could create a query for each neighborhood and count the values of each sign type that way, but there are too many neighborhoods for that to be practical. A: You can group by the neighbourhood and sign type and call size() to get the count of each sign type in each neighbourhood. Sample code df.groupby(["neighbourhood", "sign_type"]).size()
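Editor's addendum: if a table with one row per neighborhood and one column per sign type is easier to read than the long groupby output, pd.crosstab gives that shape directly. A hedged sketch; the column names 'neighborhood' and 'sign_type' are assumed from the question and answer:

import pandas as pd

# Long format: one count per (neighborhood, sign type) pair
counts = df.groupby(['neighborhood', 'sign_type']).size().reset_index(name='count')

# Wide format: neighborhoods as rows, sign types as columns, counts as values
table = pd.crosstab(df['neighborhood'], df['sign_type'])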
How to count frequency of a value in a column of a data frame based on another column?
I have a dataframe with different traffic signs in different neighborhoods (both are columns). I want to count the quantity of each sign type in each neighborhood. I could create a query for each neighborhood and count the values of each sign type that way, but there are too many neighborhoods for that to be practical.
[ "You can group by the neighbourhood and sign type and do a size to have the count of each sign type in each neighbourhood.\nSample code\ndf.groupby([\"neighbourhood\", \"sign_type\"]).size()\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074647020_dataframe_pandas_python.txt
Q: ArgumentError: FROM expression expected I have the following cell in jupyter notebook. What is in ***** is confidential information import psycopg2 import sqlalchemy as sa import pandas as pds from sqlalchemy import create_engine # Create an engine instance alchemyEngine = create_engine('*****************************', pool_recycle=3600); # engine = create_engine('**************************') # Connect to PostgreSQL server dbConnection = alchemyEngine.connect(); # Read data from PostgreSQL database table and load into a DataFrame instance team = sa.Table('dialog_logger', sa.MetaData(), autoload_with=dbConnection, schema='hca') qry = sa.select(team.c.hos_name, team.c.hos_id, team.c.datetime, team.c.patient_cel_number, team.c.hospital_cel_number, team.c.message, team.c.direction).where( team.c.datetime > '2022-11-01 00:00:00').where(team.c.datetime < '2022-11-30 00:00:00') dataFrame_argentina = pds.read_sql_query(qry, dbConnection) pds.set_option('display.expand_frame_repr', False); # Close the database connection dbConnection.close(); I must execute it but it gives me the following error when doing it: AttributeError Traceback (most recent call last) C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\sql\elements.py in __getattr__(self, key) 722 try: --> 723 return getattr(self.comparator, key) 724 except AttributeError: AttributeError: 'Comparator' object has no attribute 'selectable' During handling of the above exception, another exception occurred: AttributeError Traceback (most recent call last) C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\sql\selectable.py in _interpret_as_from(element) 60 try: ---> 61 return insp.selectable 62 except AttributeError: C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\sql\elements.py in __getattr__(self, key) 726 "Neither %r object nor %r object has an attribute %r" --> 727 % (type(self).__name__, type(self.comparator).__name__, key) 728 ) AttributeError: Neither 'Column' object nor 'Comparator' object has an attribute 'selectable' During handling of the above exception, another exception occurred: ArgumentError Traceback (most recent call last) <ipython-input-3-ec4194d4c35c> in <module> 1 qry = sa.select(team.c.hos_name, team.c.hos_id, team.c.datetime, team.c.patient_cel_number, ----> 2 team.c.hospital_cel_number, team.c.message, team.c.direction).where( 3 team.c.datetime > '2022-11-01 00:00:00').where(team.c.datetime < '2022-11-30 00:00:00') 4 5 <string> in select(columns, whereclause, from_obj, distinct, having, correlate, prefixes, suffixes, **kwargs) <string> in __init__(self, columns, whereclause, from_obj, distinct, having, correlate, prefixes, suffixes, **kwargs) C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\util\deprecations.py in warned(fn, *args, **kwargs) 126 ) 127 --> 128 return fn(*args, **kwargs) 129 130 doc = fn.__doc__ is not None and fn.__doc__ or "" C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\sql\selectable.py in __init__(self, columns, whereclause, from_obj, distinct, having, correlate, prefixes, suffixes, **kwargs) 2977 if from_obj is not None: 2978 self._from_obj = util.OrderedSet( -> 2979 _interpret_as_from(f) for f in util.to_list(from_obj) 2980 ) 2981 else: C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\util\_collections.py in __init__(self, d) 363 self._list = [] 364 if d is not None: --> 365 self._list = unique_list(d) 366 set.update(self, self._list) 367 else: C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\util\_collections.py in unique_list(seq, hashfunc) 777 seen_add = 
seen.add 778 if not hashfunc: --> 779 return [x for x in seq if x not in seen and not seen_add(x)] 780 else: 781 return [ C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\util\_collections.py in <listcomp>(.0) 777 seen_add = seen.add 778 if not hashfunc: --> 779 return [x for x in seq if x not in seen and not seen_add(x)] 780 else: 781 return [ C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\sql\selectable.py in <genexpr>(.0) 2977 if from_obj is not None: 2978 self._from_obj = util.OrderedSet( -> 2979 _interpret_as_from(f) for f in util.to_list(from_obj) 2980 ) 2981 else: C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\sql\selectable.py in _interpret_as_from(element) 61 return insp.selectable 62 except AttributeError: ---> 63 raise exc.ArgumentError("FROM expression expected") 64 65 ArgumentError: FROM expression expected Debugging I saw that everything runs fine until this select starts: qry = sa.select(......). I don't know if the error comes from the library that I need to install before executing this cell. A: You still have some debugging work ahead of you. Take a look at the columns: team.c Verify spellings. Put each column (like "hos_name") on its own line, so it can easily be # commented out. Simplify the query. Start with an easy query of just a single column, and build up from there until you encounter breakage. Remove both filters, and add them back one-by-one. Tackling it methodically in that way will soon show you the mismatch between code and table. Everything looks good to me. But then, I cannot view e.g. SELECT * FROM dialog_logger output. I usually spell it .filter() rather than .where() -- whatever, prolly works much the same. BTW, print(qry) will show you the rendered SQL query, which can sometimes shed a bit of light on the situation. A: I don't think sqlalchemy 1.3.13 suports that usage. Try this, note the columns are in a list, this changes in 1.4 and beyond: # THIS IS FOR sqlalchemy < 1.4 ONLY!!! qry = sa.select([team.c.hos_name, team.c.hos_id, team.c.datetime, team.c.patient_cel_number, team.c.hospital_cel_number, team.c.message, team.c.direction]).where( team.c.datetime > '2022-11-01 00:00:00').where(team.c.datetime < '2022-11-30 00:00:00') I think your exception is related to the fact that the 3rd argument in the old select() function is expected to be a from_obj and not just another column: OLD select() I'm putting this warnings about 1.3/1.4 so people don't come here and try the same queries in 1.4+. I understand if you can't upgrade because of the 3rd party tools.
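Editor's addendum: because the accepted spelling of select() changed between SQLAlchemy 1.3 and 1.4+, a quick version check clarifies which form applies; team is the Table object from the question. A hedged sketch:

import sqlalchemy as sa
print(sa.__version__)

if sa.__version__.startswith('1.3'):
    # 1.3.x: the columns go in a single list
    qry = sa.select([team.c.hos_name, team.c.hos_id, team.c.datetime])
else:
    # 1.4+ / 2.0: the columns are positional arguments
    qry = sa.select(team.c.hos_name, team.c.hos_id, team.c.datetime)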
ArgumentError: FROM expression expected
I have the following cell in jupyter notebook. What is in ***** is confidential information import psycopg2 import sqlalchemy as sa import pandas as pds from sqlalchemy import create_engine # Create an engine instance alchemyEngine = create_engine('*****************************', pool_recycle=3600); # engine = create_engine('**************************') # Connect to PostgreSQL server dbConnection = alchemyEngine.connect(); # Read data from PostgreSQL database table and load into a DataFrame instance team = sa.Table('dialog_logger', sa.MetaData(), autoload_with=dbConnection, schema='hca') qry = sa.select(team.c.hos_name, team.c.hos_id, team.c.datetime, team.c.patient_cel_number, team.c.hospital_cel_number, team.c.message, team.c.direction).where( team.c.datetime > '2022-11-01 00:00:00').where(team.c.datetime < '2022-11-30 00:00:00') dataFrame_argentina = pds.read_sql_query(qry, dbConnection) pds.set_option('display.expand_frame_repr', False); # Close the database connection dbConnection.close(); I must execute it but it gives me the following error when doing it: AttributeError Traceback (most recent call last) C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\sql\elements.py in __getattr__(self, key) 722 try: --> 723 return getattr(self.comparator, key) 724 except AttributeError: AttributeError: 'Comparator' object has no attribute 'selectable' During handling of the above exception, another exception occurred: AttributeError Traceback (most recent call last) C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\sql\selectable.py in _interpret_as_from(element) 60 try: ---> 61 return insp.selectable 62 except AttributeError: C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\sql\elements.py in __getattr__(self, key) 726 "Neither %r object nor %r object has an attribute %r" --> 727 % (type(self).__name__, type(self.comparator).__name__, key) 728 ) AttributeError: Neither 'Column' object nor 'Comparator' object has an attribute 'selectable' During handling of the above exception, another exception occurred: ArgumentError Traceback (most recent call last) <ipython-input-3-ec4194d4c35c> in <module> 1 qry = sa.select(team.c.hos_name, team.c.hos_id, team.c.datetime, team.c.patient_cel_number, ----> 2 team.c.hospital_cel_number, team.c.message, team.c.direction).where( 3 team.c.datetime > '2022-11-01 00:00:00').where(team.c.datetime < '2022-11-30 00:00:00') 4 5 <string> in select(columns, whereclause, from_obj, distinct, having, correlate, prefixes, suffixes, **kwargs) <string> in __init__(self, columns, whereclause, from_obj, distinct, having, correlate, prefixes, suffixes, **kwargs) C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\util\deprecations.py in warned(fn, *args, **kwargs) 126 ) 127 --> 128 return fn(*args, **kwargs) 129 130 doc = fn.__doc__ is not None and fn.__doc__ or "" C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\sql\selectable.py in __init__(self, columns, whereclause, from_obj, distinct, having, correlate, prefixes, suffixes, **kwargs) 2977 if from_obj is not None: 2978 self._from_obj = util.OrderedSet( -> 2979 _interpret_as_from(f) for f in util.to_list(from_obj) 2980 ) 2981 else: C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\util\_collections.py in __init__(self, d) 363 self._list = [] 364 if d is not None: --> 365 self._list = unique_list(d) 366 set.update(self, self._list) 367 else: C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\util\_collections.py in unique_list(seq, hashfunc) 777 seen_add = seen.add 778 if not hashfunc: --> 779 return [x 
for x in seq if x not in seen and not seen_add(x)] 780 else: 781 return [ C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\util\_collections.py in <listcomp>(.0) 777 seen_add = seen.add 778 if not hashfunc: --> 779 return [x for x in seq if x not in seen and not seen_add(x)] 780 else: 781 return [ C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\sql\selectable.py in <genexpr>(.0) 2977 if from_obj is not None: 2978 self._from_obj = util.OrderedSet( -> 2979 _interpret_as_from(f) for f in util.to_list(from_obj) 2980 ) 2981 else: C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\sql\selectable.py in _interpret_as_from(element) 61 return insp.selectable 62 except AttributeError: ---> 63 raise exc.ArgumentError("FROM expression expected") 64 65 ArgumentError: FROM expression expected Debugging I saw that everything runs fine until this select starts: qry = sa.select(......). I don't know if the error comes from the library that I need to install before executing this cell.
[ "You still have some debugging work ahead of you.\nTake a look at the columns: team.c\n\nVerify spellings.\nPut each column (like \"hos_name\") on its own line, so it can easily be # commented out.\nSimplify the query. Start with an easy query of just a single column, and build up from there until you encounter breakage.\nRemove both filters, and add them back one-by-one.\n\nTackling it methodically in that way will soon show you the mismatch\nbetween code and table.\nEverything looks good to me.\nBut then, I cannot view e.g. SELECT * FROM dialog_logger output.\nI usually spell it .filter() rather than .where() -- whatever,\nprolly works much the same.\nBTW, print(qry) will show you the rendered SQL query,\nwhich can sometimes shed a bit of light on the situation.\n", "I don't think sqlalchemy 1.3.13 suports that usage. Try this, note the columns are in a list, this changes in 1.4 and beyond:\n# THIS IS FOR sqlalchemy < 1.4 ONLY!!!\nqry = sa.select([team.c.hos_name, team.c.hos_id, team.c.datetime, team.c.patient_cel_number,\n team.c.hospital_cel_number, team.c.message, team.c.direction]).where(\n team.c.datetime > '2022-11-01 00:00:00').where(team.c.datetime < '2022-11-30 00:00:00')\n\nI think your exception is related to the fact that the 3rd argument in the old select() function is expected to be a from_obj and not just another column: OLD select()\nI'm putting this warnings about 1.3/1.4 so people don't come here and try the same queries in 1.4+. I understand if you can't upgrade because of the 3rd party tools.\n" ]
[ 0, 0 ]
[]
[]
[ "jupyter", "python", "sqlalchemy" ]
stackoverflow_0074635284_jupyter_python_sqlalchemy.txt
Q: How to get max value and name from a Pandas series? Say I have a series like the one below: mySeries = pd.Series([1,2,3],['c','b','a']) How do I go about getting the max value along with the name associated with it in a single line? In this case: a: 3 I can get the max value with: mySeries.max(), the name of the max value with mySeries.idxmax(axis=1) but I can't figure out how to get both of those values with one line. Suggestions? A: pd.Series.nlargest mySeries.nlargest(1) a 3 dtype: int64 A: One with boolean indexing (just an alternative) i.e mySeries[mySeries.index==mySeries.idxmax()] or mySeries[mySeries == mySeries.max()] or(Thanks @piRSquared) mySeries[[mySeries.idxmax()]] Output: a 3 dtype: int64
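Editor's addendum: the label and the value can also be taken directly with idxmax()/max(), or both at once with agg. A small hedged sketch:

import pandas as pd

mySeries = pd.Series([1, 2, 3], ['c', 'b', 'a'])

name, value = mySeries.idxmax(), mySeries.max()
print(f"{name}: {value}")                 # a: 3

# Or both in one call, returned as a small Series indexed by 'idxmax' and 'max'
print(mySeries.agg(['idxmax', 'max']))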
How to get max value and name from a Pandas series?
Say I have a series like the one below: mySeries = pd.Series([1,2,3],['c','b','a']) How do I go about getting the max value along with the name associated with it in a single line? In this case: a: 3 I can get the max value with: mySeries.max(), the name of the max value with mySeries.idxmax(axis=1) but I can't figure out how to get both of those values with one line. Suggestions?
[ "pd.Series.nlargest\nmySeries.nlargest(1)\n\na 3\ndtype: int64\n\n", "One with boolean indexing (just an alternative) i.e \nmySeries[mySeries.index==mySeries.idxmax()]\n\nor \nmySeries[mySeries == mySeries.max()]\n\nor(Thanks @piRSquared)\nmySeries[[mySeries.idxmax()]]\n\nOutput: \n\na 3\ndtype: int64\n\n" ]
[ 11, 1 ]
[ "You could do:\nfoo.value_counts()[:1].index.tolist()[0]}\n\n" ]
[ -1 ]
[ "pandas", "python" ]
stackoverflow_0046577525_pandas_python.txt
Q: PermissionError: [WinError 32] using pandas-dedupe I am trying to use pandas-dedupe, but after labelling data I run into permission issues I cannot solve. Minimum working example: import pandas_dedupe import seaborn as sns if __name__ == "__main__": iris = sns.load_dataset('iris') result = pandas_dedupe.dedupe_dataframe(iris, ["sepal_width", "sepal_length", "species"]) After labelling some data, the files dedupe_dataframe_learned_settings and dedupe_dataframe_training.json get created. But during the deduplication process I run into errors like PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\THOMAS~1\\AppData\\Local\\Temp\\tmp_vrp9vbr' I tried setting n_cores=1 in the dedupe_dataframe method, but it didn't help me. What can I do? A: I had similar problems on Windows. I didn't find a solution for Windows itself, but using WSL(2) you can get this working properly. Lyonk71 whom (co-)made the pandas-dedupe package also made an installation video, see below. https://www.youtube.com/watch?v=dq183fOB1Xg&t Hope this helps you out, success! A: I had the same problem I solved it by disabling multiprocessing. You can disable multiprocessing by setting n_cores=0 as shown below: pandas_dedupe.dedupe_dataframe(df, ['first_name', 'last_name'], n_cores=0) This should resolve the error.
PermissionError: [WinError 32] using pandas-dedupe
I am trying to use pandas-dedupe, but after labelling data I run into permission issues I cannot solve. Minimum working example: import pandas_dedupe import seaborn as sns if __name__ == "__main__": iris = sns.load_dataset('iris') result = pandas_dedupe.dedupe_dataframe(iris, ["sepal_width", "sepal_length", "species"]) After labelling some data, the files dedupe_dataframe_learned_settings and dedupe_dataframe_training.json get created. But during the deduplication process I run into errors like PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\THOMAS~1\\AppData\\Local\\Temp\\tmp_vrp9vbr' I tried setting n_cores=1 in the dedupe_dataframe method, but it didn't help me. What can I do?
[ "I had similar problems on Windows. I didn't find a solution for Windows itself, but using WSL(2) you can get this working properly.\nLyonk71 whom (co-)made the pandas-dedupe package also made an installation video, see below.\nhttps://www.youtube.com/watch?v=dq183fOB1Xg&t\nHope this helps you out, success!\n", "I had the same problem I solved it by disabling multiprocessing. You can disable multiprocessing by setting n_cores=0 as shown below:\npandas_dedupe.dedupe_dataframe(df, ['first_name', 'last_name'], n_cores=0)\n\nThis should resolve the error.\n" ]
[ 0, 0 ]
[]
[]
[ "duplicates", "pandas", "permissionerror", "python", "windows" ]
stackoverflow_0074018382_duplicates_pandas_permissionerror_python_windows.txt
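Putting the question's example together with the n_cores=0 workaround from the second answer gives roughly the script below; this assumes the installed pandas-dedupe version accepts the n_cores keyword as described above:

import pandas_dedupe
import seaborn as sns

if __name__ == "__main__":
    iris = sns.load_dataset("iris")
    # n_cores=0 disables multiprocessing, which is what holds the temp file
    # open and triggers WinError 32 on Windows (per the answer above)
    result = pandas_dedupe.dedupe_dataframe(
        iris,
        ["sepal_width", "sepal_length", "species"],
        n_cores=0,
    )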
Q: python string formatting single quotes and double quotes I have a variable state = 'PA'. I am trying to generate a string as follows. I would like add single quotes on the state within a string. Also, I want to use this .format method because I will change this state later. 'select * from table where "state" = 'PA'' Currently, I could only be able to generate this 'select * from table where "state" = PA' using the following code: 'select * from table where "state" = {}'.format(state) A: You can escape the single quotes around the format specifier like this: >>> s = 'select * from table where "state" = \'{}\''.format(state) >>> print(s) select * from table where "state" = 'PA'
python string formatting single quotes and double quotes
I have a variable state = 'PA'. I am trying to generate a string as follows. I would like to add single quotes around the state within the string. Also, I want to use the .format method because I will change this state later. 'select * from table where "state" = 'PA'' Currently, I have only been able to generate this 'select * from table where "state" = PA' using the following code: 'select * from table where "state" = {}'.format(state)
[ "You can escape the single quotes around the format specifier like this:\n>>> s = 'select * from table where \"state\" = \\'{}\\''.format(state)\n>>> print(s)\nselect * from table where \"state\" = 'PA'\n\n" ]
[ 0 ]
[]
[]
[ "formatting", "python", "single_quotes", "string" ]
stackoverflow_0074647076_formatting_python_single_quotes_string.txt
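Two general-purpose alternatives (not from the answer above) that avoid the backslash escaping by choosing different outer quotes; if the string is ultimately sent to a database driver, bound parameters are usually safer than formatting values into the SQL text:

state = 'PA'

# double-quoted outer string, so the inner single quotes need no escaping
s1 = "select * from table where \"state\" = '{}'".format(state)

# f-string version
s2 = f"select * from table where \"state\" = '{state}'"

print(s1)
print(s2)
# both print: select * from table where "state" = 'PA'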
Q: Run aws Athena query by Lambda: error name 'response' is not defined I create an AWS lambda function with python 3.9 to run the Athena query and get the query result import time import boto3 # create Athena client client = boto3.client('athena') # create Athena query varuable query = 'select * from mydatabase.mytable limit 8' DATABASE = 'mydatabase' output='s3://mybucket/' def lambda_handler(event, context): # Execution response = client.start_query_execution( QueryString=query, QueryExecutionContext={ 'Database': DATABASE }, ResultConfiguration={ 'OutputLocation': output, } ) query_execution_id = response['QueryExecutionId'] time.sleep(10) result = client.get_query_results(QueryExecutionId=query_execution_id) for row in results['ResultSet']['Rows']: print(row) I get this error message when I test it "[ERROR] NameError: name 'response' is not defined" A: You define the response variable inside the lambda_handler function. But you are referencing it in the global scope, outside of that function, here: query_execution_id = response['QueryExecutionId'] The variable isn't defined on that scope, thus the error message. It appears that you may simply be missing indentation on all these lines: query_execution_id = response['QueryExecutionId'] time.sleep(10) result = client.get_query_results(QueryExecutionId=query_execution_id) for row in results['ResultSet']['Rows']: print(row) In Python indentation is syntax! If you intend those lines to be part of the lambda_handler function, then they need to have the correct indentation to place them inside the scope of the function, like so: def lambda_handler(event, context): # Execution response = client.start_query_execution( QueryString=query, QueryExecutionContext={ 'Database': DATABASE }, ResultConfiguration={ 'OutputLocation': output, } ) query_execution_id = response['QueryExecutionId'] time.sleep(10) result = client.get_query_results(QueryExecutionId=query_execution_id) for row in results['ResultSet']['Rows']: print(row)
Run aws Athena query by Lambda: error name 'response' is not defined
I create an AWS lambda function with python 3.9 to run the Athena query and get the query result import time import boto3 # create Athena client client = boto3.client('athena') # create Athena query varuable query = 'select * from mydatabase.mytable limit 8' DATABASE = 'mydatabase' output='s3://mybucket/' def lambda_handler(event, context): # Execution response = client.start_query_execution( QueryString=query, QueryExecutionContext={ 'Database': DATABASE }, ResultConfiguration={ 'OutputLocation': output, } ) query_execution_id = response['QueryExecutionId'] time.sleep(10) result = client.get_query_results(QueryExecutionId=query_execution_id) for row in results['ResultSet']['Rows']: print(row) I get this error message when I test it "[ERROR] NameError: name 'response' is not defined"
[ "You define the response variable inside the lambda_handler function. But you are referencing it in the global scope, outside of that function, here:\nquery_execution_id = response['QueryExecutionId']\n\nThe variable isn't defined on that scope, thus the error message. It appears that you may simply be missing indentation on all these lines:\nquery_execution_id = response['QueryExecutionId']\n \ntime.sleep(10)\n \nresult = client.get_query_results(QueryExecutionId=query_execution_id)\n \nfor row in results['ResultSet']['Rows']:\n print(row)\n\nIn Python indentation is syntax! If you intend those lines to be part of the lambda_handler function, then they need to have the correct indentation to place them inside the scope of the function, like so:\ndef lambda_handler(event, context):\n # Execution\n response = client.start_query_execution(\n QueryString=query,\n QueryExecutionContext={\n 'Database': DATABASE\n },\n ResultConfiguration={\n 'OutputLocation': output,\n }\n )\n \n query_execution_id = response['QueryExecutionId']\n \n time.sleep(10)\n \n result = client.get_query_results(QueryExecutionId=query_execution_id)\n \n for row in results['ResultSet']['Rows']:\n print(row)\n\n" ]
[ 0 ]
[]
[]
[ "amazon_athena", "aws_lambda", "python" ]
stackoverflow_0074647062_amazon_athena_aws_lambda_python.txt
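A sketch of a slightly more defensive variant of the answer's fix: it polls the query status instead of sleeping a fixed 10 seconds, and uses a single result name (the original snippet assigns result but iterates results). The query, database and output bucket are placeholders:

import time
import boto3

client = boto3.client("athena")

def run_query(query, database, output):
    resp = client.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output},
    )
    qid = resp["QueryExecutionId"]

    # poll until Athena reports a terminal state instead of guessing a sleep time
    while True:
        state = client.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state != "SUCCEEDED":
        raise RuntimeError(f"query finished with state {state}")

    result = client.get_query_results(QueryExecutionId=qid)
    for row in result["ResultSet"]["Rows"]:
        print(row)

# run_query('select * from mydatabase.mytable limit 8', 'mydatabase', 's3://mybucket/')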
Q: How to round number to 3 decimals max? I am trying to round up 5.9999998 to 5.999. But I have a problem, If I do round(number) it'll round it up to 6. How can I round a number like this to max 3 decimals? A: You can use this: number = 5.999999998 new_number = int(number * 1e3) / 1e3 A: Here is a short code. x = 4/3 # round up to 3 decimal places x = round(x, 3) print(x)
How to round number to 3 decimals max?
I am trying to round up 5.9999998 to 5.999. But I have a problem, If I do round(number) it'll round it up to 6. How can I round a number like this to max 3 decimals?
[ "You can use this:\nnumber = 5.999999998\nnew_number = int(number * 1e3) / 1e3\n\n", "Here is a short code.\n\nx = 4/3\n\n# round up to 3 decimal places\nx = round(x, 3)\n\nprint(x)\n\n\n" ]
[ 3, 0 ]
[]
[]
[ "numbers", "python" ]
stackoverflow_0074647047_numbers_python.txt
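For clarity, truncating and rounding are different operations here: round(5.9999998, 3) legitimately returns 6.0, because 6.000 is the nearest 3-decimal value. A short sketch of the options:

import math
from decimal import Decimal, ROUND_DOWN

number = 5.9999998

print(round(number, 3))                  # 6.0   -> nearest value, not truncation
print(int(number * 1000) / 1000)         # 5.999 -> truncates toward zero
print(math.floor(number * 1000) / 1000)  # 5.999 -> truncates toward -infinity (differs for negatives)

# Decimal avoids binary-float surprises when the exact text form matters
print(Decimal("5.9999998").quantize(Decimal("0.001"), rounding=ROUND_DOWN))  # 5.999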
Q: How to edit and delete keys and values in python dict i have a little problem that needs solving, i have to write a program that saves contacts in a dict and be able to 1- add new contacts 2- delete contacts 3- edit contacts 4- list contacts 5- show contacts i wrote a simple program that saves contacts into a dictionary but i have a problem with the rest and i could really user some help!! here is my code: contacts = {"Mohamed": {"name": "Mohamed Sayed", "number": "017624857447", "birthday": "24.11.1996", "address": "Ginnheim 60487"}, "Ahmed": {"name": "Ahmed Sayed", "number": "0123456789", "birthday": "06.06.1995", "address": "India"}} def add_contact(): for _ in range(0, 1): contact = {} name = input("Enter the name: ") number = input("Enter the number: ") birthday = input("Enter the birthday") address = input("Enter the address") contact["name"] = name contact["number"] = number contact["birthday"] = birthday contact["address"] = address print(contact) contacts.update(contact) add_contact() print(contacts) def del_contact(): user_input = input("Please enter the name of the contact you want to delete: ") for k in contacts: if user_input == contacts["name"]: del contacts[k] del_contact() print(contacts) def edit_contact(): user_input = input("please enter the contact you want to edit: ") for k, v in contacts: if user_input == contacts["name"]: contacts.update(user_input) def list_contact(): pass def show_contact(): user_input = input("please enter the contact you want to show: ") for k, v in contacts.items(): if user_input == contacts["name"]: print(key, value) show_contact() A: For your def's you dont need to use a for ... in range(...) loop, rather you can just call upon that value by the value of user_value. I've decided to not include def edit_contact(): in this as it currently doesn't edit anything, all it does is add a new element within contacts with that in mind contacts = {"Mohamed": {"name": "Mohamed Sayed", "number": "017624857447", "birthday": "24.11.1996", "address": "Ginnheim 60487"}, "Ahmed": {"name": "Ahmed Sayed", "number": "0123456789", "birthday": "06.06.1995", "address": "India"}} def add_contact(): for _ in range(0, 1): contact = {} name = input("Enter the name: ") number = input("Enter the number: ") birthday = input("Enter the birthday") address = input("Enter the address") contact["name"] = name contact["number"] = number contact["birthday"] = birthday contact["address"] = address print(contact) contacts[name] = contact add_contact() print(contacts) def del_contact(contacts): user_input = input("Please enter the name of the contact you want to delete: ") contacts = contacts.pop(user_input) del_contact(contacts) print(contacts) def edit_contact(): user_input = input("please enter the contact you want to edit: ") for k, v in contacts: if user_input == contacts["name"]: contacts.update(user_input) def list_contact(): pass def show_contact(): user_input = input("please enter the contact you want to show: ") print(contacts[user_input]) #you can use 'contacts[user_input]['name', 'number', 'birthday' or 'address']' to call upon specific elements within the list show_contact() Hope this helps.
How to edit and delete keys and values in python dict
i have a little problem that needs solving, i have to write a program that saves contacts in a dict and be able to 1- add new contacts 2- delete contacts 3- edit contacts 4- list contacts 5- show contacts i wrote a simple program that saves contacts into a dictionary but i have a problem with the rest and i could really user some help!! here is my code: contacts = {"Mohamed": {"name": "Mohamed Sayed", "number": "017624857447", "birthday": "24.11.1996", "address": "Ginnheim 60487"}, "Ahmed": {"name": "Ahmed Sayed", "number": "0123456789", "birthday": "06.06.1995", "address": "India"}} def add_contact(): for _ in range(0, 1): contact = {} name = input("Enter the name: ") number = input("Enter the number: ") birthday = input("Enter the birthday") address = input("Enter the address") contact["name"] = name contact["number"] = number contact["birthday"] = birthday contact["address"] = address print(contact) contacts.update(contact) add_contact() print(contacts) def del_contact(): user_input = input("Please enter the name of the contact you want to delete: ") for k in contacts: if user_input == contacts["name"]: del contacts[k] del_contact() print(contacts) def edit_contact(): user_input = input("please enter the contact you want to edit: ") for k, v in contacts: if user_input == contacts["name"]: contacts.update(user_input) def list_contact(): pass def show_contact(): user_input = input("please enter the contact you want to show: ") for k, v in contacts.items(): if user_input == contacts["name"]: print(key, value) show_contact()
[ "For your def's you dont need to use a for ... in range(...) loop, rather you can just call upon that value by the value of user_value. I've decided to not include def edit_contact(): in this as it currently doesn't edit anything, all it does is add a new element within contacts with that in mind\ncontacts = {\"Mohamed\": {\"name\": \"Mohamed Sayed\", \"number\": \"017624857447\", \"birthday\": \"24.11.1996\", \"address\": \"Ginnheim 60487\"},\n \"Ahmed\": {\"name\": \"Ahmed Sayed\", \"number\": \"0123456789\", \"birthday\": \"06.06.1995\", \"address\": \"India\"}}\n\ndef add_contact():\n for _ in range(0, 1):\n contact = {}\n name = input(\"Enter the name: \")\n number = input(\"Enter the number: \")\n birthday = input(\"Enter the birthday\")\n address = input(\"Enter the address\")\n contact[\"name\"] = name\n contact[\"number\"] = number\n contact[\"birthday\"] = birthday\n contact[\"address\"] = address\n print(contact)\n contacts[name] = contact\nadd_contact()\nprint(contacts)\n\n\n\ndef del_contact(contacts):\n user_input = input(\"Please enter the name of the contact you want to delete: \")\n contacts = contacts.pop(user_input)\ndel_contact(contacts)\nprint(contacts)\n\n\ndef edit_contact():\n user_input = input(\"please enter the contact you want to edit: \")\n for k, v in contacts:\n if user_input == contacts[\"name\"]:\n contacts.update(user_input)\n\n\ndef list_contact():\n pass\n\ndef show_contact():\n user_input = input(\"please enter the contact you want to show: \")\n print(contacts[user_input])\n #you can use 'contacts[user_input]['name', 'number', 'birthday' or 'address']' to call upon specific elements within the list\n\nshow_contact()\n\nHope this helps.\n" ]
[ 0 ]
[]
[]
[ "dictionary", "for_loop", "list", "loops", "python" ]
stackoverflow_0074646847_dictionary_for_loop_list_loops_python.txt
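A compact sketch of the remaining operations (delete, edit, list, show) built on plain dict lookups instead of loops; the keys and fields follow the question's data, and the example edits at the end are just for illustration:

contacts = {
    "Mohamed": {"name": "Mohamed Sayed", "number": "017624857447",
                "birthday": "24.11.1996", "address": "Ginnheim 60487"},
    "Ahmed": {"name": "Ahmed Sayed", "number": "0123456789",
              "birthday": "06.06.1995", "address": "India"},
}

def del_contact(key):
    # pop with a default avoids a KeyError for unknown names
    removed = contacts.pop(key, None)
    print("deleted" if removed else "no such contact")

def edit_contact(key, field, new_value):
    if key in contacts and field in contacts[key]:
        contacts[key][field] = new_value
    else:
        print("no such contact or field")

def list_contacts():
    for key, info in contacts.items():
        print(key, "->", info)

def show_contact(key):
    print(contacts.get(key, "no such contact"))

edit_contact("Ahmed", "address", "Frankfurt")
del_contact("Mohamed")
list_contacts()
show_contact("Ahmed")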
Q: Conditional writes to DynamoDB when executing an AWS glue script without Boto? I've written an AWS glue job ETL script in python, and I'm looking for the proper way to perform conditional writes to the DynamoDb table I'm using as the target. # Write to DynamoDB glueContext.write_dynamic_frame_from_options( frame=SelectFromCollection_node1665510217343, connection_type="dynamodb", connection_options={ "dynamodb.output.tableName": args["OUTPUT_TABLE_NAME"] } ) My script is writing to dynamo with write_dynamic_frame_from_options. The aws glue connection parameter docs make no mention of the ability to customize the write behavior in the connection options. Is there a clean way to write conditionally without using boto? A: You cannot do conditional updates with the EMR DynamoDB connector which Glue uses. It does a complete overwrite of the data. For that you would have to use Boto3 and distribute it using forEachPartition across the Spark executors.
Conditional writes to DynamoDB when executing an AWS glue script without Boto?
I've written an AWS glue job ETL script in python, and I'm looking for the proper way to perform conditional writes to the DynamoDb table I'm using as the target. # Write to DynamoDB glueContext.write_dynamic_frame_from_options( frame=SelectFromCollection_node1665510217343, connection_type="dynamodb", connection_options={ "dynamodb.output.tableName": args["OUTPUT_TABLE_NAME"] } ) My script is writing to dynamo with write_dynamic_frame_from_options. The aws glue connection parameter docs make no mention of the ability to customize the write behavior in the connection options. Is there a clean way to write conditionally without using boto?
[ "You cannot do conditional updates with the EMR DynamoDB connector which Glue uses. It does a complete overwrite of the data. For that you would have to use Boto3 and distribute it using forEachPartition across the Spark executors.\n" ]
[ 0 ]
[]
[]
[ "amazon_dynamodb", "aws_glue", "python" ]
stackoverflow_0074646481_amazon_dynamodb_aws_glue_python.txt
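A rough sketch of what the answer describes: dropping down to Boto3 inside foreachPartition and using a ConditionExpression for the conditional write. The table name, key attribute and columns are placeholders, and boto3 must be installed on the executors:

import boto3
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("id-1", "v1"), ("id-2", "v2")], ["pk", "payload"])

TABLE_NAME = "my-target-table"   # placeholder

def write_partition(rows):
    # build the client inside the function: boto3 objects are not serializable
    table = boto3.resource("dynamodb").Table(TABLE_NAME)
    for row in rows:
        try:
            table.put_item(
                Item={"pk": row["pk"], "payload": row["payload"]},
                # conditional write: only insert if the key does not exist yet
                ConditionExpression="attribute_not_exists(pk)",
            )
        except table.meta.client.exceptions.ConditionalCheckFailedException:
            pass  # item already exists; skip or log as needed

df.foreachPartition(write_partition)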
Q: ValueError: Invalid element(s) received for the 'data' property I encounter an issue with plotly. I would like to display different figures but, somehow, I can't manage to achieve what I want. I created 2 sources of data: from plotly.graph_objs.scatter import Line import plotly.graph_objs as go trace11 = go.Scatter( x = [0, 1, 2], y = [0, 0, 0], line = Line({'color': 'rgb(0, 0, 128)', 'width': 1}) ) trace12 = go.Scatter( x=[0, 1, 2], y=[1, 1, 1], line = Line({'color': 'rgb(128, 0, 0)', 'width': 1}) ) trace21 = go.Scatter( x = [0, 1, 2], y = [0.5, 0.5, 0.5], line = Line({'color': 'rgb(0, 0, 128)', 'width': 1}) ) trace22 = go.Scatter( x=[0, 1, 2], y=[1.5, 1.5, 1.5], line = Line({'color': 'rgb(128, 0, 0)', 'width': 1}) ) data1 = [trace11, trace12] data2 = [trace21, trace22] Then, I created a subplot with 1 row and 2 columns and tried to add this data to the subplot: from plotly import tools fig = tools.make_subplots(rows=1, cols=2) fig.append_trace(data1, 1, 1) fig.append_trace(data2, 1, 2) fig.show() That resulted in the following error: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-6-ba20e4900d41> in <module> 1 from plotly import tools 2 fig = tools.make_subplots(rows=1, cols=2) ----> 3 fig.append_trace(data1, 1, 1) 4 fig.append_trace(data2, 1, 2) 5 fig.show() ~\Anaconda3\lib\site-packages\plotly\basedatatypes.py in append_trace(self, trace, row, col) 1797 ) 1798 -> 1799 self.add_trace(trace=trace, row=row, col=col) 1800 1801 def _set_trace_grid_position(self, trace, row, col, secondary_y=False): ~\Anaconda3\lib\site-packages\plotly\basedatatypes.py in add_trace(self, trace, row, col, secondary_y) 1621 rows=[row] if row is not None else None, 1622 cols=[col] if col is not None else None, -> 1623 secondary_ys=[secondary_y] if secondary_y is not None else None, 1624 ) 1625 ~\Anaconda3\lib\site-packages\plotly\basedatatypes.py in add_traces(self, data, rows, cols, secondary_ys) 1684 1685 # Validate traces -> 1686 data = self._data_validator.validate_coerce(data) 1687 1688 # Set trace indexes ~\Anaconda3\lib\site-packages\_plotly_utils\basevalidators.py in validate_coerce(self, v, skip_invalid) 2667 2668 if invalid_els: -> 2669 self.raise_invalid_elements(invalid_els) 2670 2671 v = to_scalar_or_list(res) ~\Anaconda3\lib\site-packages\_plotly_utils\basevalidators.py in raise_invalid_elements(self, invalid_els) 296 pname=self.parent_name, 297 invalid=invalid_els[:10], --> 298 valid_clr_desc=self.description(), 299 ) 300 ) ValueError: Invalid element(s) received for the 'data' property of Invalid elements include: [[Scatter({ 'line': {'color': 'rgb(0, 0, 128)', 'width': 1}, 'x': [0, 1, 2], 'y': [0, 0, 0] }), Scatter({ 'line': {'color': 'rgb(128, 0, 0)', 'width': 1}, 'x': [0, 1, 2], 'y': [1, 1, 1] })]] The 'data' property is a tuple of trace instances that may be specified as: - A list or tuple of trace instances (e.g. [Scatter(...), Bar(...)]) - A single trace instance (e.g. Scatter(...), Bar(...), etc.) 
- A list or tuple of dicts of string/value properties where: - The 'type' property specifies the trace type One of: ['area', 'bar', 'barpolar', 'box', 'candlestick', 'carpet', 'choropleth', 'choroplethmapbox', 'cone', 'contour', 'contourcarpet', 'densitymapbox', 'funnel', 'funnelarea', 'heatmap', 'heatmapgl', 'histogram', 'histogram2d', 'histogram2dcontour', 'image', 'indicator', 'isosurface', 'mesh3d', 'ohlc', 'parcats', 'parcoords', 'pie', 'pointcloud', 'sankey', 'scatter', 'scatter3d', 'scattercarpet', 'scattergeo', 'scattergl', 'scattermapbox', 'scatterpolar', 'scatterpolargl', 'scatterternary', 'splom', 'streamtube', 'sunburst', 'surface', 'table', 'treemap', 'violin', 'volume', 'waterfall'] - All remaining properties are passed to the constructor of the specified trace type (e.g. [{'type': 'scatter', ...}, {'type': 'bar, ...}]) I mist be doing something wrong. What is weird is that my data seems correctly shaped since I can run the following code without any issue: fig = go.Figure(data1) fig.show() I hope you can help me find a solution. Thanks! A: The reason why you are getting an error it is because the function append_trace() is expecting a single trace in the form you've declared them. However, the graph object Figure has the function add_traces() with which you can pass the data parameter as a list with more than one trace. Therefore, I suggest two simple solutions: Solution 1: Append traces individually from plotly.graph_objs.scatter import Line import plotly.graph_objs as go trace11 = go.Scatter(x = [0, 1, 2], y = [0, 0, 0], line = Line({'color': 'rgb(0, 0, 128)', 'width': 1})) trace12 = go.Scatter(x = [0, 1, 2], y = [1, 1, 1], line = Line({'color': 'rgb(128, 0, 0)', 'width': 1})) trace21 = go.Scatter(x = [0, 1, 2], y = [0.5, 0.5, 0.5], line = Line({'color': 'rgb(0, 0, 128)', 'width': 1})) trace22 = go.Scatter(x = [0, 1, 2], y = [1.5, 1.5, 1.5], line = Line({'color': 'rgb(128, 0, 0)', 'width': 1})) from plotly.subplots import make_subplots fig = make_subplots(rows=1, cols=2) fig.append_trace(trace11, row=1, col=1) fig.append_trace(trace12, row=1, col=1) fig.append_trace(trace21, row=1, col=2) fig.append_trace(trace22, row=1, col=2) fig.show() Solution 2: Use the function add_traces(data,rows,cols) instead from plotly.graph_objs.scatter import Line import plotly.graph_objs as go trace11 = go.Scatter(x = [0, 1, 2], y = [0, 0, 0], line = Line({'color': 'rgb(0, 0, 128)', 'width': 1})) trace12 = go.Scatter(x = [0, 1, 2], y = [1, 1, 1], line = Line({'color': 'rgb(128, 0, 0)', 'width': 1})) trace21 = go.Scatter(x = [0, 1, 2], y = [0.5, 0.5, 0.5], line = Line({'color': 'rgb(0, 0, 128)', 'width': 1})) trace22 = go.Scatter(x = [0, 1, 2], y = [1.5, 1.5, 1.5], line = Line({'color': 'rgb(128, 0, 0)', 'width': 1})) from plotly.subplots import make_subplots fig = make_subplots(rows=1, cols=2) data1 = [trace11, trace12] data2 = [trace21, trace22] fig.add_traces(data1, rows=1, cols=1) fig.add_traces(data2, rows=1, cols=2) fig.show() A: I was running into a similar problem myself while doing the Interactive Python Dashboards with Plotly and Dash Udemy course and figured out how to get my code to work so I will post my problematic code and what ended up working for me so you can compare. Keep in mind I am not a programmer or computer scientist by trade so I can't provide all the ins and out of what's going on behind the scenes... Just what works for me.. 
Also, I literally got my code working while reviewing your code and noticed that we were both passing in lists, while other code that works for the append_trace() call does not pass in a list. I hope this helps. First, tools.make_subplots() is deprecated based on my research of this issue, and the import of tools from plotly is not used anywhere in my code. I think you need to use: from plotly.subplots import make_subplots Next, I think I was simply using a list where it was inappropriate. Here is my original code that did not work: import plotly.offline as pyo import plotly.graph_objs as go from plotly import tools from plotly.subplots import make_subplots import pandas as pd df1 = pd.read_csv('Data/2010SantaBarbaraCA.csv') df2 = pd.read_csv('Data/2010YumaAZ.csv') df3 = pd.read_csv('Data/2010SitkaAK.csv') trace1 = [go.Heatmap(x=df1['DAY'], y=df1['LST_TIME'], z=df1['T_HR_AVG'].values.tolist(), zmin=5, zmax=40)] # z= cannot accept a pandas column, needs to be a list trace2 = [go.Heatmap(x=df2['DAY'], y=df2['LST_TIME'], z=df2['T_HR_AVG'].values.tolist(), zmin=5, zmax=40)] # z= cannot accept a pandas column, needs to be a list trace3 = [go.Heatmap(x=df3['DAY'], y=df3['LST_TIME'], z=df3['T_HR_AVG'].values.tolist(), zmin=5, zmax=40)] # z= cannot accept a pandas column, needs to be a list # fig = tools.make_sublots(rows=1, # columns=3, # subplot_titles=['Santa Barbara CA','Yuma AZ', 'Sitka AK'], # shared_yaxes=True) fig = make_subplots(rows=1, cols=3, subplot_titles=['Santa Barbara CA', 'Yuma AZ', 'Sitka AK'], shared_yaxes=True) # fig.append_trace(trace#, row#, column#) fig.append_trace(trace1, row=1, col=1) fig.append_trace(trace2, row=1, col=2) fig.append_trace(trace3, row=1, col=3) # data = [trace1, trace2, trace3] # layout = go.Layout(title='Santa Barbara California Temperatures') pyo.plot(fig) Notice I had my trace#'s defined with brackets trace1 = [go.Heatmap(x=df1['DAY'], y=df1['LST_TIME'], z=df1['T_HR_AVG'].values.tolist(), zmin=5, zmax=40)] When I removed the brackets from each of my trace declarations my code worked. 
Here is my code that works: import plotly.offline as pyo import plotly.graph_objs as go from plotly import tools from plotly.subplots import make_subplots import pandas as pd df1 = pd.read_csv('Data/2010SantaBarbaraCA.csv') df2 = pd.read_csv('Data/2010YumaAZ.csv') df3 = pd.read_csv('Data/2010SitkaAK.csv') trace1 = go.Heatmap(x=df1['DAY'], y=df1['LST_TIME'], z=df1['T_HR_AVG'].values.tolist(), zmin=5, zmax=40) # z= cannot accept a pandas column, needs to be a list trace2 = go.Heatmap(x=df2['DAY'], y=df2['LST_TIME'], z=df2['T_HR_AVG'].values.tolist(), zmin=5, zmax=40) # z= cannot accept a pandas column, needs to be a list trace3 = go.Heatmap(x=df3['DAY'], y=df3['LST_TIME'], z=df3['T_HR_AVG'].values.tolist(), zmin=5, zmax=40) # z= cannot accept a pandas column, needs to be a list # fig = tools.make_sublots(rows=1, # columns=3, # subplot_titles=['Santa Barbara CA','Yuma AZ', 'Sitka AK'], # shared_yaxes=True) fig = make_subplots(rows=1, cols=3, subplot_titles=['Santa Barbara CA', 'Yuma AZ', 'Sitka AK'], shared_yaxes=True) # fig.append_trace(trace#, row#, column#) fig.append_trace(trace1, row=1, col=1) fig.append_trace(trace2, row=1, col=2) fig.append_trace(trace3, row=1, col=3) # data = [trace1, trace2, trace3] # layout = go.Layout(title='Santa Barbara California Temperatures') pyo.plot(fig) I know you are using Scatter and I am using Heatmaps but I think your issue may be(apart from using tools for make_subplots()) that you are passing data1 and data2 into your append_trace() calls as lists, and I don't think the append_trace() call likes this. I would suggest not passing in a list to the append_trace() call and seeing if that works. If it does then you may need to adjust your data variables to not be listed and go from there. I hope this helps. A: While trying to make subplots of histograms, I ran into a similar problem. My error occurred as I was trying for loops in the wrong way, but the solution below solves the issue you are having. I used add_trace instead of append_trace and used for loop for ease of future use. from plotly.graph_objs.scatter import Line from plotly.subplots import make_subplots import plotly.graph_objs as go trace11 = go.Scatter( x = [0, 1, 2], y = [0, 0, 0], line = Line({'color': 'rgb(0, 0, 128)', 'width': 1}) ) trace12 = go.Scatter( x=[0, 1, 2], y=[1, 1, 1], line = Line({'color': 'rgb(128, 0, 0)', 'width': 1}) ) trace21 = go.Scatter( x = [0, 1, 2], y = [0.5, 0.5, 0.5], line = Line({'color': 'rgb(0, 0, 128)', 'width': 1}) ) trace22 = go.Scatter( x=[0, 1, 2], y=[1.5, 1.5, 1.5], line = Line({'color': 'rgb(128, 0, 0)', 'width': 1}) ) trace=[trace11, trace12, trace21, trace22] plot_rows=2 plot_cols=2 fig = make_subplots(rows=plot_rows, cols=plot_cols) x=0 for i in range(1, plot_rows + 1): for j in range(1, plot_cols + 1): fig.add_trace(trace[x], row=i, col=j) x+=1 fig.update_layout(height=600, width=600) fig.show() A: My issue was that I was trying to use a px (plotly express) plot in a trace. The problem, I discovered, is that px returns a complete figure, but add_trace (add_traces, append_trace) wants just the data. What solved this for me was adding ".data[0]" to the end of my px figure name: fig = go.Figure() df = pd.DataFrame({'x':[1,2,3,4], 'y':[5,6,7,8]}) fig2 = px.scatter(df, x="x", y="y") fig.add_trace(fig2.data[0]) fig.show() OR you if you use add_traces (plural) you don't need the [0] qualifier. 
fig = go.Figure() df = pd.DataFrame({'x':[1,2,3,4], 'y':[5,6,7,8]}) fig2 = px.scatter(df, x="x", y="y") fig.add_traces(fig2.data) fig.show() Hope this helps someone save time.
ValueError: Invalid element(s) received for the 'data' property
I encounter an issue with plotly. I would like to display different figures but, somehow, I can't manage to achieve what I want. I created 2 sources of data: from plotly.graph_objs.scatter import Line import plotly.graph_objs as go trace11 = go.Scatter( x = [0, 1, 2], y = [0, 0, 0], line = Line({'color': 'rgb(0, 0, 128)', 'width': 1}) ) trace12 = go.Scatter( x=[0, 1, 2], y=[1, 1, 1], line = Line({'color': 'rgb(128, 0, 0)', 'width': 1}) ) trace21 = go.Scatter( x = [0, 1, 2], y = [0.5, 0.5, 0.5], line = Line({'color': 'rgb(0, 0, 128)', 'width': 1}) ) trace22 = go.Scatter( x=[0, 1, 2], y=[1.5, 1.5, 1.5], line = Line({'color': 'rgb(128, 0, 0)', 'width': 1}) ) data1 = [trace11, trace12] data2 = [trace21, trace22] Then, I created a subplot with 1 row and 2 columns and tried to add this data to the subplot: from plotly import tools fig = tools.make_subplots(rows=1, cols=2) fig.append_trace(data1, 1, 1) fig.append_trace(data2, 1, 2) fig.show() That resulted in the following error: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-6-ba20e4900d41> in <module> 1 from plotly import tools 2 fig = tools.make_subplots(rows=1, cols=2) ----> 3 fig.append_trace(data1, 1, 1) 4 fig.append_trace(data2, 1, 2) 5 fig.show() ~\Anaconda3\lib\site-packages\plotly\basedatatypes.py in append_trace(self, trace, row, col) 1797 ) 1798 -> 1799 self.add_trace(trace=trace, row=row, col=col) 1800 1801 def _set_trace_grid_position(self, trace, row, col, secondary_y=False): ~\Anaconda3\lib\site-packages\plotly\basedatatypes.py in add_trace(self, trace, row, col, secondary_y) 1621 rows=[row] if row is not None else None, 1622 cols=[col] if col is not None else None, -> 1623 secondary_ys=[secondary_y] if secondary_y is not None else None, 1624 ) 1625 ~\Anaconda3\lib\site-packages\plotly\basedatatypes.py in add_traces(self, data, rows, cols, secondary_ys) 1684 1685 # Validate traces -> 1686 data = self._data_validator.validate_coerce(data) 1687 1688 # Set trace indexes ~\Anaconda3\lib\site-packages\_plotly_utils\basevalidators.py in validate_coerce(self, v, skip_invalid) 2667 2668 if invalid_els: -> 2669 self.raise_invalid_elements(invalid_els) 2670 2671 v = to_scalar_or_list(res) ~\Anaconda3\lib\site-packages\_plotly_utils\basevalidators.py in raise_invalid_elements(self, invalid_els) 296 pname=self.parent_name, 297 invalid=invalid_els[:10], --> 298 valid_clr_desc=self.description(), 299 ) 300 ) ValueError: Invalid element(s) received for the 'data' property of Invalid elements include: [[Scatter({ 'line': {'color': 'rgb(0, 0, 128)', 'width': 1}, 'x': [0, 1, 2], 'y': [0, 0, 0] }), Scatter({ 'line': {'color': 'rgb(128, 0, 0)', 'width': 1}, 'x': [0, 1, 2], 'y': [1, 1, 1] })]] The 'data' property is a tuple of trace instances that may be specified as: - A list or tuple of trace instances (e.g. [Scatter(...), Bar(...)]) - A single trace instance (e.g. Scatter(...), Bar(...), etc.) 
- A list or tuple of dicts of string/value properties where: - The 'type' property specifies the trace type One of: ['area', 'bar', 'barpolar', 'box', 'candlestick', 'carpet', 'choropleth', 'choroplethmapbox', 'cone', 'contour', 'contourcarpet', 'densitymapbox', 'funnel', 'funnelarea', 'heatmap', 'heatmapgl', 'histogram', 'histogram2d', 'histogram2dcontour', 'image', 'indicator', 'isosurface', 'mesh3d', 'ohlc', 'parcats', 'parcoords', 'pie', 'pointcloud', 'sankey', 'scatter', 'scatter3d', 'scattercarpet', 'scattergeo', 'scattergl', 'scattermapbox', 'scatterpolar', 'scatterpolargl', 'scatterternary', 'splom', 'streamtube', 'sunburst', 'surface', 'table', 'treemap', 'violin', 'volume', 'waterfall'] - All remaining properties are passed to the constructor of the specified trace type (e.g. [{'type': 'scatter', ...}, {'type': 'bar, ...}]) I mist be doing something wrong. What is weird is that my data seems correctly shaped since I can run the following code without any issue: fig = go.Figure(data1) fig.show() I hope you can help me find a solution. Thanks!
[ "The reason why you are getting an error it is because the function append_trace() is expecting a single trace in the form you've declared them. However, the graph object Figure has the function add_traces() with which you can pass the data parameter as a list with more than one trace.\nTherefore, I suggest two simple solutions:\nSolution 1: Append traces individually\nfrom plotly.graph_objs.scatter import Line\nimport plotly.graph_objs as go\ntrace11 = go.Scatter(x = [0, 1, 2], y = [0, 0, 0], line = Line({'color': 'rgb(0, 0, 128)', 'width': 1}))\ntrace12 = go.Scatter(x = [0, 1, 2], y = [1, 1, 1], line = Line({'color': 'rgb(128, 0, 0)', 'width': 1}))\ntrace21 = go.Scatter(x = [0, 1, 2], y = [0.5, 0.5, 0.5], line = Line({'color': 'rgb(0, 0, 128)', 'width': 1}))\ntrace22 = go.Scatter(x = [0, 1, 2], y = [1.5, 1.5, 1.5], line = Line({'color': 'rgb(128, 0, 0)', 'width': 1}))\n\nfrom plotly.subplots import make_subplots\nfig = make_subplots(rows=1, cols=2)\nfig.append_trace(trace11, row=1, col=1)\nfig.append_trace(trace12, row=1, col=1)\nfig.append_trace(trace21, row=1, col=2)\nfig.append_trace(trace22, row=1, col=2)\nfig.show()\n\nSolution 2: Use the function add_traces(data,rows,cols) instead\nfrom plotly.graph_objs.scatter import Line\nimport plotly.graph_objs as go\ntrace11 = go.Scatter(x = [0, 1, 2], y = [0, 0, 0], line = Line({'color': 'rgb(0, 0, 128)', 'width': 1}))\ntrace12 = go.Scatter(x = [0, 1, 2], y = [1, 1, 1], line = Line({'color': 'rgb(128, 0, 0)', 'width': 1}))\ntrace21 = go.Scatter(x = [0, 1, 2], y = [0.5, 0.5, 0.5], line = Line({'color': 'rgb(0, 0, 128)', 'width': 1}))\ntrace22 = go.Scatter(x = [0, 1, 2], y = [1.5, 1.5, 1.5], line = Line({'color': 'rgb(128, 0, 0)', 'width': 1}))\n\nfrom plotly.subplots import make_subplots\nfig = make_subplots(rows=1, cols=2)\ndata1 = [trace11, trace12]\ndata2 = [trace21, trace22]\nfig.add_traces(data1, rows=1, cols=1)\nfig.add_traces(data2, rows=1, cols=2)\nfig.show()\n\n", "I was running into a similar problem myself while doing the Interactive Python Dashboards with Plotly and Dash Udemy course and figured out how to get my code to work so I will post my problematic code and what ended up working for me so you can compare. Keep in mind I am not a programmer or computer scientist by trade so I can't provide all the ins and out of what's going on behind the scenes... Just what works for me.. Also, I literally got my code working while reviewing your code and noticed that we were both passing in lists, while other code that works for the append_trace() call does not pass in a list. I hope this helps.\nFirst, tools.make_subplots() is deprecated based on my research of this issue, and the import of tools from plotly is not used anywhere in my code. I think you need to use:\nfrom plotly.subplots import make_subplots\n\nNext, I think I was simply using a list where it was inappropriate. 
Here is my original code that did not work:\nimport plotly.offline as pyo\nimport plotly.graph_objs as go\nfrom plotly import tools\nfrom plotly.subplots import make_subplots\nimport pandas as pd\n\ndf1 = pd.read_csv('Data/2010SantaBarbaraCA.csv')\ndf2 = pd.read_csv('Data/2010YumaAZ.csv')\ndf3 = pd.read_csv('Data/2010SitkaAK.csv')\n\n\ntrace1 = [go.Heatmap(x=df1['DAY'],\n y=df1['LST_TIME'],\n z=df1['T_HR_AVG'].values.tolist(),\n zmin=5,\n zmax=40)] # z= cannot accept a pandas column, needs to be a list\n\ntrace2 = [go.Heatmap(x=df2['DAY'],\n y=df2['LST_TIME'],\n z=df2['T_HR_AVG'].values.tolist(),\n zmin=5,\n zmax=40)] # z= cannot accept a pandas column, needs to be a list\n\ntrace3 = [go.Heatmap(x=df3['DAY'],\n y=df3['LST_TIME'],\n z=df3['T_HR_AVG'].values.tolist(),\n zmin=5,\n zmax=40)] # z= cannot accept a pandas column, needs to be a list\n\n# fig = tools.make_sublots(rows=1,\n# columns=3,\n# subplot_titles=['Santa Barbara CA','Yuma AZ', 'Sitka AK'],\n# shared_yaxes=True)\n\nfig = make_subplots(rows=1,\n cols=3,\n subplot_titles=['Santa Barbara CA', 'Yuma AZ', 'Sitka AK'],\n shared_yaxes=True)\n\n\n# fig.append_trace(trace#, row#, column#)\nfig.append_trace(trace1, row=1, col=1)\nfig.append_trace(trace2, row=1, col=2)\nfig.append_trace(trace3, row=1, col=3)\n\n# data = [trace1, trace2, trace3]\n\n# layout = go.Layout(title='Santa Barbara California Temperatures')\n\npyo.plot(fig)\n\nNotice I had my trace#'s defined with brackets\ntrace1 = [go.Heatmap(x=df1['DAY'],\n y=df1['LST_TIME'],\n z=df1['T_HR_AVG'].values.tolist(),\n zmin=5,\n zmax=40)]\n\nWhen I removed the brackets from each of my trace declarations my code worked. Here is my code that works:\nimport plotly.offline as pyo\nimport plotly.graph_objs as go\nfrom plotly import tools\nfrom plotly.subplots import make_subplots\nimport pandas as pd\n\ndf1 = pd.read_csv('Data/2010SantaBarbaraCA.csv')\ndf2 = pd.read_csv('Data/2010YumaAZ.csv')\ndf3 = pd.read_csv('Data/2010SitkaAK.csv')\n\n\ntrace1 = go.Heatmap(x=df1['DAY'],\n y=df1['LST_TIME'],\n z=df1['T_HR_AVG'].values.tolist(),\n zmin=5,\n zmax=40) # z= cannot accept a pandas column, needs to be a list\n\ntrace2 = go.Heatmap(x=df2['DAY'],\n y=df2['LST_TIME'],\n z=df2['T_HR_AVG'].values.tolist(),\n zmin=5,\n zmax=40) # z= cannot accept a pandas column, needs to be a list\n\ntrace3 = go.Heatmap(x=df3['DAY'],\n y=df3['LST_TIME'],\n z=df3['T_HR_AVG'].values.tolist(),\n zmin=5,\n zmax=40) # z= cannot accept a pandas column, needs to be a list\n\n# fig = tools.make_sublots(rows=1,\n# columns=3,\n# subplot_titles=['Santa Barbara CA','Yuma AZ', 'Sitka AK'],\n# shared_yaxes=True)\n\nfig = make_subplots(rows=1,\n cols=3,\n subplot_titles=['Santa Barbara CA', 'Yuma AZ', 'Sitka AK'],\n shared_yaxes=True)\n\n\n# fig.append_trace(trace#, row#, column#)\nfig.append_trace(trace1, row=1, col=1)\nfig.append_trace(trace2, row=1, col=2)\nfig.append_trace(trace3, row=1, col=3)\n\n# data = [trace1, trace2, trace3]\n\n# layout = go.Layout(title='Santa Barbara California Temperatures')\n\npyo.plot(fig)\n\nI know you are using Scatter and I am using Heatmaps but I think your issue may be(apart from using tools for make_subplots()) that you are passing data1 and data2 into your append_trace() calls as lists, and I don't think the append_trace() call likes this. I would suggest not passing in a list to the append_trace() call and seeing if that works. If it does then you may need to adjust your data variables to not be listed and go from there. 
I hope this helps.\n", "While trying to make subplots of histograms, I ran into a similar problem.\nMy error occurred as I was trying for loops in the wrong way, but the solution below solves the issue you are having. I used add_trace instead of append_trace and used for loop for ease of future use.\nfrom plotly.graph_objs.scatter import Line\nfrom plotly.subplots import make_subplots\nimport plotly.graph_objs as go\n\ntrace11 = go.Scatter(\n x = [0, 1, 2],\n y = [0, 0, 0],\n line = Line({'color': 'rgb(0, 0, 128)', 'width': 1})\n)\n\ntrace12 = go.Scatter(\n x=[0, 1, 2],\n y=[1, 1, 1],\n line = Line({'color': 'rgb(128, 0, 0)', 'width': 1})\n)\n\ntrace21 = go.Scatter(\n x = [0, 1, 2],\n y = [0.5, 0.5, 0.5],\n line = Line({'color': 'rgb(0, 0, 128)', 'width': 1})\n)\n\ntrace22 = go.Scatter(\n x=[0, 1, 2],\n y=[1.5, 1.5, 1.5],\n line = Line({'color': 'rgb(128, 0, 0)', 'width': 1})\n)\ntrace=[trace11, trace12, trace21, trace22]\n\nplot_rows=2\nplot_cols=2\nfig = make_subplots(rows=plot_rows, cols=plot_cols)\n\nx=0\nfor i in range(1, plot_rows + 1):\n for j in range(1, plot_cols + 1):\n fig.add_trace(trace[x],\n row=i,\n col=j)\n x+=1\n\nfig.update_layout(height=600, width=600)\nfig.show()\n\n", "My issue was that I was trying to use a px (plotly express) plot in a trace. The problem, I discovered, is that px returns a complete figure, but add_trace (add_traces, append_trace) wants just the data.\nWhat solved this for me was adding \".data[0]\" to the end of my px figure name:\nfig = go.Figure()\ndf = pd.DataFrame({'x':[1,2,3,4], 'y':[5,6,7,8]})\nfig2 = px.scatter(df, x=\"x\", y=\"y\")\nfig.add_trace(fig2.data[0])\nfig.show()\n\nOR you if you use add_traces (plural) you don't need the [0] qualifier.\nfig = go.Figure()\ndf = pd.DataFrame({'x':[1,2,3,4], 'y':[5,6,7,8]})\nfig2 = px.scatter(df, x=\"x\", y=\"y\")\nfig.add_traces(fig2.data)\nfig.show()\n\nHope this helps someone save time.\n" ]
[ 6, 0, 0, 0 ]
[]
[]
[ "plotly", "python" ]
stackoverflow_0060992109_plotly_python.txt
Q: How to rank the column values in pyspark dataframe according to conditions i have a dataframe: id vehicle asIs EU EU_variant 1 A3345 PQ1298 FV1 FV1_variant 2 A3346 PQ1287 FV2 FV2_variant 3 A3346 PQ1207 FV2 FV2_variant 4 A3347 QP9 QP9_variant 5 A3347 QP9 QP9_variant 6 A3347 QP3 QP3_variant 7 A3348 MP6553 YR34 YR34_variant 8 A3348 MP6554 YR35 YR35_variant 9 A3348 MP6554 YR35 YR35_variant for distinct vehicle and distinct EU i want to rank EU_variant and concat it in new column result should be: id vehicle asIs EU EU_variant ECU_Variant_rank 1 A3345 PQ1298 FV1 FV1_variant FV1_variant(1) 2 A3346 PQ1287 FV2 FV2_variant FV2_variant(1) 3 A3346 PQ1207 FV2 FV2_variant FV2_variant(2) 4 A3347 QP9 QP9_variant QP9_variant(1) 5 A3347 QP9 QP9_variant QP9_variant(2) 6 A3347 QP3 QP3_variant QP3_variant(1) 7 A3348 MP6553 YR34 YR34_variant YR34_variant(1) 8 A3348 MP6554 YR35 YR35_variant YR35_variant(1) 9 A3348 MP6554 YR35 YR35_variant YR35_variant(2) how to achieve this using pyspark dataframe A: You can use a Window with rank: from pyspark.sql import functions as F, Window # you can order by the column you prefer, not only id w = Window.partitionBy('vehicle', 'EU_variant').orderBy('id') df.withColumn( 'ECU_Variant_rank', F.concat_ws('', F.col('EU_variant'), F.lit('('), F.rank().over(w), F.lit(')')) ) Here the result: +---+-------+------+----+------------+----------------+ |id |vehicle|asIs |EU |EU_variant |ECU_Variant_rank| +---+-------+------+----+------------+----------------+ |1 |A3345 |PQ1298|FV1 |FV1_variant |FV1_variant(1) | |2 |A3346 |PQ1287|FV2 |FV2_variant |FV2_variant(1) | |3 |A3346 |PQ1207|FV2 |FV2_variant |FV2_variant(2) | |4 |A3347 |null |QP9 |QP9_variant |QP9_variant(1) | |5 |A3347 |null |QP9 |QP9_variant |QP9_variant(2) | |6 |A3347 |null |QP3 |QP3_variant |QP3_variant(1) | |7 |A3348 |MP6553|YR34|YR34_variant|YR34_variant(1) | |8 |A3348 |MP6554|YR35|YR35_variant|YR35_variant(1) | |9 |A3348 |MP6554|YR35|YR35_variant|YR35_variant(2) | +---+-------+------+----+------------+----------------+
How to rank the column values in pyspark dataframe according to conditions
I have a dataframe:

id  vehicle  asIs    EU    EU_variant
1   A3345    PQ1298  FV1   FV1_variant
2   A3346    PQ1287  FV2   FV2_variant
3   A3346    PQ1207  FV2   FV2_variant
4   A3347            QP9   QP9_variant
5   A3347            QP9   QP9_variant
6   A3347            QP3   QP3_variant
7   A3348    MP6553  YR34  YR34_variant
8   A3348    MP6554  YR35  YR35_variant
9   A3348    MP6554  YR35  YR35_variant

For distinct vehicle and distinct EU I want to rank EU_variant and concatenate the rank into a new column. The result should be:

id  vehicle  asIs    EU    EU_variant    ECU_Variant_rank
1   A3345    PQ1298  FV1   FV1_variant   FV1_variant(1)
2   A3346    PQ1287  FV2   FV2_variant   FV2_variant(1)
3   A3346    PQ1207  FV2   FV2_variant   FV2_variant(2)
4   A3347            QP9   QP9_variant   QP9_variant(1)
5   A3347            QP9   QP9_variant   QP9_variant(2)
6   A3347            QP3   QP3_variant   QP3_variant(1)
7   A3348    MP6553  YR34  YR34_variant  YR34_variant(1)
8   A3348    MP6554  YR35  YR35_variant  YR35_variant(1)
9   A3348    MP6554  YR35  YR35_variant  YR35_variant(2)

How can I achieve this using a PySpark dataframe?
[ "You can use a Window with rank:\nfrom pyspark.sql import functions as F, Window\n\n# you can order by the column you prefer, not only id\nw = Window.partitionBy('vehicle', 'EU_variant').orderBy('id')\ndf.withColumn(\n 'ECU_Variant_rank', \n F.concat_ws('', F.col('EU_variant'), F.lit('('), F.rank().over(w), F.lit(')'))\n)\n\nHere the result:\n+---+-------+------+----+------------+----------------+\n|id |vehicle|asIs |EU |EU_variant |ECU_Variant_rank|\n+---+-------+------+----+------------+----------------+\n|1 |A3345 |PQ1298|FV1 |FV1_variant |FV1_variant(1) |\n|2 |A3346 |PQ1287|FV2 |FV2_variant |FV2_variant(1) |\n|3 |A3346 |PQ1207|FV2 |FV2_variant |FV2_variant(2) |\n|4 |A3347 |null |QP9 |QP9_variant |QP9_variant(1) |\n|5 |A3347 |null |QP9 |QP9_variant |QP9_variant(2) |\n|6 |A3347 |null |QP3 |QP3_variant |QP3_variant(1) |\n|7 |A3348 |MP6553|YR34|YR34_variant|YR34_variant(1) |\n|8 |A3348 |MP6554|YR35|YR35_variant|YR35_variant(1) |\n|9 |A3348 |MP6554|YR35|YR35_variant|YR35_variant(2) |\n+---+-------+------+----+------------+----------------+\n\n" ]
[ 1 ]
[]
[]
[ "pyspark", "python", "python_3.x" ]
stackoverflow_0074646405_pyspark_python_python_3.x.txt
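One caveat on the answer above: rank() gives tied rows the same number. With unique ids that never happens, but if the ordering key can repeat, row_number() keeps the (1), (2), ... numbering distinct. A small sketch with stand-in data:

from pyspark.sql import SparkSession, functions as F, Window

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [(1, "A3345", "FV1_variant"), (2, "A3346", "FV2_variant"), (3, "A3346", "FV2_variant")],
    ["id", "vehicle", "EU_variant"],
)

w = Window.partitionBy("vehicle", "EU_variant").orderBy("id")

# row_number() never assigns duplicates within a partition
df.withColumn(
    "ECU_Variant_rank",
    F.concat_ws("", F.col("EU_variant"), F.lit("("), F.row_number().over(w), F.lit(")")),
).show(truncate=False)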
Q: How do I create and populate a gitignore file for a 15.5gb machine learning project? I'm working on an university project with ML, and the project got quite big, I usually don't use github but I need to format my pc and do not trust the Google Drive backup I have, therefore I wanna have a second one so I don't lose the code whatsoever. I'm using Git with GitHub desktop, I'm not very knowledgeable in Git, so I'm having a hard time uploading this project, since it disconnects everytime I try to upload it, I'm pretty sure it is because of the size, any help with that? The IDE I'm using is PyCharm and the Python version is 3.7, I already have a requirements.txt created. I tried searching for pre made git ignore files, but it didn't work. A: A .gitignore file will not help you there - you need to remove the dependencies from your project's history. There are two ways to do that: The traditional way involves git-filter-branch. I've done that once in the past. It works, but it's easy to get wrong. The alternative is to use BFG. I have no personal experience, but it seems to be easier to use, and claims to be faster. So if I were you, I'd give BFG a try. Whichever way you try, make a lokal backup! When you're done rewriting history, you can use a .gitignore to prevent yourself from re-adding the unwanted files. A: Welcome to Stackoverflow! As you already sensed by yourself, Git is not really made to work with volumes of data that are as large as you say (15.5GB). The most important thing you have to do right now is identify which files you want to keep track of, and which files are just "binary files" that don't have to be versioned. You don't have to use any other tool than your brain for this (but looking around with any type of file explorer will teach you a lot). Deciding what to keep It is important to be quite severe here. As a general approach (there can be exceptions), try to keep out the following files: Any file that is >1MB. There will surely be exceptions, but in general this is a good rule of thumb. Anything that is binary/non text based. Git is made to work with diffs on files and this is not user-friendly with non-text based files. Examples: images, videos, powerpoints, ... Anything that is generated by code (for example results of compilations, or data processing, ...) Anything that is generated by a tool you use (for example folders created by your IDE) Any data file. Git is not really made for version control of data. It's really your code you want to version control. Creating a git repository It seems like you have made a git repository already, but unless you have very important history you want to keep I suggest starting anew from where you are now. If it's for a university project I can imagine it being fine that you lose your history until now. If it's not fine for you to lose your history, you will have to change your history and delete large files from your repo (a risky operation I would not recommend to a new Git user. More info can be found in this SO post). I'm suggesting to start a fresh repository because I feel you will learn more in this way, but if you prefer to change your history go ahead! To start off a fresh repository, go to the root directory of your project and copy the .git folder to some place as a backup. This is often a hidden folder, and it contains all of your history! Then, delete this .git folder (making sure that you have kept your backup .git folder somewhere). After than, execute the git init command. 
You have a fresh git repository to work with! Typing git status will show a bunch of untracked files. Populating your gitignore The first thing we will do now is make our .gitignore file, before committing anything else. Let's say that you decided in your first step to ignore the following: all *.xlsx files everything inside of the build/ directory all *.log files In that case, you should create a text file (with any text editor: your IDE or notepad or anything) called .gitignore. Open this up with your text editor of choice and add the following text in there: *.xlsx build/* *.log Now save the file. You have made your .gitignore file! Now add and commit the file (using a good commit message) and type git status. You should see none of the unwanted files appearing! Now you can commit all the rest of your files (properly check git status to see that no unwanted files are tracked by git before committing them!) and you have a clean lightweight repo. Maintaining your gitignore It's normal for the gitignore file to evolve during the project. Don't hesitate to add new lines in there if a new file type/folder enters the project that is actually unwanted in the repository. Hope this helps you a bit!
How do I create and populate a gitignore file for a 15.5gb machine learning project?
I'm working on a university project with ML, and the project got quite big. I usually don't use GitHub, but I need to format my PC and do not trust the Google Drive backup I have, so I want a second backup so I don't lose the code. I'm using Git with GitHub Desktop. I'm not very knowledgeable in Git, so I'm having a hard time uploading this project, since it disconnects every time I try to upload it; I'm pretty sure it is because of the size. Any help with that? The IDE I'm using is PyCharm and the Python version is 3.7, and I already have a requirements.txt created. I tried searching for pre-made gitignore files, but it didn't work.
[ "A .gitignore file will not help you there - you need to remove the dependencies from your project's history. There are two ways to do that:\nThe traditional way involves git-filter-branch. I've done that once in the past. It works, but it's easy to get wrong.\nThe alternative is to use BFG. I have no personal experience, but it seems to be easier to use, and claims to be faster. So if I were you, I'd give BFG a try.\nWhichever way you try, make a lokal backup!\nWhen you're done rewriting history, you can use a .gitignore to prevent yourself from re-adding the unwanted files.\n", "Welcome to Stackoverflow!\nAs you already sensed by yourself, Git is not really made to work with volumes of data that are as large as you say (15.5GB). The most important thing you have to do right now is identify which files you want to keep track of, and which files are just \"binary files\" that don't have to be versioned. You don't have to use any other tool than your brain for this (but looking around with any type of file explorer will teach you a lot).\nDeciding what to keep\nIt is important to be quite severe here. As a general approach (there can be exceptions), try to keep out the following files:\n\nAny file that is >1MB. There will surely be exceptions, but in general this is a good rule of thumb.\nAnything that is binary/non text based. Git is made to work with diffs on files and this is not user-friendly with non-text based files. Examples: images, videos, powerpoints, ...\nAnything that is generated by code (for example results of compilations, or data processing, ...)\nAnything that is generated by a tool you use (for example folders created by your IDE)\nAny data file. Git is not really made for version control of data. It's really your code you want to version control.\n\nCreating a git repository\nIt seems like you have made a git repository already, but unless you have very important history you want to keep I suggest starting anew from where you are now. If it's for a university project I can imagine it being fine that you lose your history until now. If it's not fine for you to lose your history, you will have to change your history and delete large files from your repo (a risky operation I would not recommend to a new Git user. More info can be found in this SO post).\nI'm suggesting to start a fresh repository because I feel you will learn more in this way, but if you prefer to change your history go ahead!\nTo start off a fresh repository, go to the root directory of your project and copy the .git folder to some place as a backup. This is often a hidden folder, and it contains all of your history!\nThen, delete this .git folder (making sure that you have kept your backup .git folder somewhere).\nAfter than, execute the git init command. You have a fresh git repository to work with! Typing git status will show a bunch of untracked files.\nPopulating your gitignore\nThe first thing we will do now is make our .gitignore file, before committing anything else. Let's say that you decided in your first step to ignore the following:\n\nall *.xlsx files\neverything inside of the build/ directory\nall *.log files\n\nIn that case, you should create a text file (with any text editor: your IDE or notepad or anything) called .gitignore. Open this up with your text editor of choice and add the following text in there:\n*.xlsx\nbuild/*\n*.log\n\nNow save the file. You have made your .gitignore file! Now add and commit the file (using a good commit message) and type git status. 
You should see none of the unwanted files appearing! Now you can commit all the rest of your files (properly check git status to see that no unwanted files are tracked by git before committing them!) and you have a clean lightweight repo.\nMaintaining your gitignore\nIt's normal for the gitignore file to evolve during the project. Don't hesitate to add new lines in there if a new file type/folder enters the project that is actually unwanted in the repository.\nHope this helps you a bit!\n" ]
[ 0, 0 ]
[]
[]
[ "git", "github", "github_desktop", "pycharm", "python" ]
stackoverflow_0074647074_git_github_github_desktop_pycharm_python.txt
Q: Python scripts involving selenium behaving differently when called from task scheduler, but working as intended when run from spyder or command line I have created a python program which uses selenium and chromedriver. I cannot successfully run this script (or any others using selenium) from the TaskScheduler in any way. However, it runs perfectly fine and does all tasks I need when I run it from Spyder. It also runs perfectly while logged in when I call it via the command line. What does the program do when working as intended: Launches a chrome browser. Automates clicks and page requests. Downloads a file. -does stuff with files that is irrelevant to this post- What does the program do when called from the TaskScheduler: Starts chrome but it does not appear (no visible browser, but task manager recognizes the chromedriver and chrome being kicked off and running persistently after the script is called) All of my clicks are on elements by full xpath so I thought maybe the invisible browser wouldn't break it, but it does indeed fail, never fetching the file download. Possibly relevant information: My chromedriver is not on path, but is set via driver = webdriver.Chrome(r'F:\chromedriver.exe') and this works absolutely fine when run by Spyder or command line. Task Scheduler input Action: Start a program Program/Script: C:\ProgramData\Anaconda3\python.exe Add arguments (optional): "C:\Users\[My_redacted_name]\.spyder-py3\[Client's_redacted_name]\[redacted_task].py" What I know: The working directory as suggested in Python script not running in task scheduler does not fix anything. Running from the command line C:\ProgramData\Anaconda3\python.exe C:\Users\[My_redacted_name]\.spyder-py3\[Client's_redacted_name]\[redacted_task].py yields the exact results as intended No other programs I have made have had this kind of issue, and I have dozens of programs running via TaskScheduler with similar functionality to all the components OTHER than selenium / chromedriver. I actually have two scripts using selenium that both encounter the same issue when running from command line. Their tasks are more or less the same, so solving ONE should solve the other, but it should be noted that the issue is not unique to a single script, but instead unique to scripts using selenium and running from task scheduler I also see Selenium - Using Windows Task Scheduler vs. command line and am attempting to see if the single response with 0 votes could help, but I'm not sure if the issue is truly the same given it was for IE and on java. A: Currently dealing with the same issue, with my Python script doing almost exactly as yours does. My initial workaround for running the script on a set schedule was utilizing datetime variables and while True loops, then switched to apscheduler for convenience. Switching back to the workaround again ran the script flawlessly. Hopefully this helps anyone else also dealing with this issue!
Python scripts involving selenium behaving differently when called from task scheduler, but working as intended when run from spyder or command line
I have created a python program which uses selenium and chromedriver. I cannot successfully run this script (or any others using selenium) from the TaskScheduler in any way. However, it runs perfectly fine and does all tasks I need when I run it from Spyder. It also runs perfectly while logged in when I call it via the command line. What does the program do when working as intended: Launches a chrome browser. Automates clicks and page requests. Downloads a file. -does stuff with files that is irrelevant to this post- What does the program do when called from the TaskScheduler: Starts chrome but it does not appear (no visible browser, but task manager recognizes the chromedriver and chrome being kicked off and running persistently after the script is called) All of my clicks are on elements by full xpath so I thought maybe the invisible browser wouldn't break it, but it does indeed fail, never fetching the file download. Possibly relevant information: My chromedriver is not on path, but is set via driver = webdriver.Chrome(r'F:\chromedriver.exe') and this works absolutely fine when run by Spyder or command line. Task Scheduler input Action: Start a program Program/Script: C:\ProgramData\Anaconda3\python.exe Add arguments (optional): "C:\Users\[My_redacted_name]\.spyder-py3\[Client's_redacted_name]\[redacted_task].py" What I know: The working directory as suggested in Python script not running in task scheduler does not fix anything. Running from the command line C:\ProgramData\Anaconda3\python.exe C:\Users\[My_redacted_name]\.spyder-py3\[Client's_redacted_name]\[redacted_task].py yields the exact results as intended No other programs I have made have had this kind of issue, and I have dozens of programs running via TaskScheduler with similar functionality to all the components OTHER than selenium / chromedriver. I actually have two scripts using selenium that both encounter the same issue when running from command line. Their tasks are more or less the same, so solving ONE should solve the other, but it should be noted that the issue is not unique to a single script, but instead unique to scripts using selenium and running from task scheduler I also see Selenium - Using Windows Task Scheduler vs. command line and am attempting to see if the single response with 0 votes could help, but I'm not sure if the issue is truly the same given it was for IE and on java.
[ "Currently dealing with the same issue, with my Python script doing almost exactly as yours does. My initial workaround for running the script on a set schedule was utilizing datetime variables and while True loops, then switched to apscheduler for convenience. Switching back to the workaround again ran the script flawlessly. Hopefully this helps anyone else also dealing with this issue!\n" ]
[ 0 ]
[]
[]
[ "python", "scheduled_tasks", "selenium", "selenium_chromedriver", "taskscheduler" ]
stackoverflow_0058754950_python_scheduled_tasks_selenium_selenium_chromedriver_taskscheduler.txt
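A minimal sketch of the workaround described in the answer above (an in-process loop driven by datetime checks instead of Task Scheduler); run_job and the 06:00 run time are placeholders, not part of the original scripts:

import datetime
import time

def run_job():
    # placeholder for the selenium routine (start chromedriver, click through the pages, download the file)
    pass

RUN_AT = datetime.time(hour=6, minute=0)  # assumed daily run time

while True:
    now = datetime.datetime.now()
    target = datetime.datetime.combine(now.date(), RUN_AT)
    if now >= target:
        target += datetime.timedelta(days=1)  # today's slot already passed, wait for tomorrow
    time.sleep((target - now).total_seconds())
    run_job()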
Q: Wrapping a method from outside the class in Python with decorator I would like to ask a question, how I can override/extend the existing external python class. I wanted to be able to call the same API, like parent class, but with some modifications. Something like that: [a] b = 1 import configparser # my wrapper class class MyConfigParser(configparser.ConfigParser): # override/wrap the parent class with all arguments (could be some optionals too) parent function has the same name, but it shouldn't be called def getint(self, section, option): # call the parent function, pass all args (maybe without mentioning them, ideally without copying them) ret = parent.get(section, option) # make modifications print("my wrapper") ret = ret + 1 # return modified value return ret config = MyConfigParser() # call parent function config.read('my-file.ini') # call my wrapped-function a = config.getint('a','b') Just for an information, I'm unable to modify the parent class. How to do that? A: I think you are confused about what a decorator is and what the word wrapping typically refers to. Neither of those apply here, if I understood your intent correctly. Of course you can subclass some existing class that you may or may not have any control over and override its methods. And of course you can conveniently call the parent class' methods from inside your own methods. In Python you typically use the built in super proxy class for that. But none of this has anything to do with decoration or wrapping. If you want to override a superclass' method, you should take care to match its signature. Otherwise you would be violating type safety. If all you care about are a few specific arguments but not all of them, you can cheat a bit and collect the rest in the catch-all *args, **kwargs parameters, but you should then pass them along to the superclass' method call. Also, you seemed to be calling the ConfigParser.get method inside your overridden getint method. But I am pretty sure that was a typo/mistake because you are clearly expecting the returned value that you assign to ret to be an int, whereas get returns a str. So you should be calling super().getint(...) there. Finally, if we are already at it, it is good form to properly annotate your functions with types. (see PEP 484 for details) Here is what I think you need: from configparser import ConfigParser from typing import Any class MyConfigParser(ConfigParser): def getint(self, section: str, option: str, *args: Any, **kwargs: Any) -> int: ret = super().getint(section, option, *args, **kwargs) print("overridden `getint`") ret += 1 return ret # type: ignore[no-any-return] if __name__ == "__main__": config = MyConfigParser() config.read("my-file.ini") print(config.getint("a", "b")) The output with your example ini-file: overridden `getint` 2 There are a lot of questions and answers on this platform about super and subclassing as well as decorators. I would suggest you search a bit and read up on those.
Wrapping a method from outside the class in Python with decorator
I would like to ask a question, how I can override/extend the existing external python class. I wanted to be able to call the same API, like parent class, but with some modifications. Something like that: [a] b = 1 import configparser # my wrapper class class MyConfigParser(configparser.ConfigParser): # override/wrap the parent class with all arguments (could be some optionals too) parent function has the same name, but it shouldn't be called def getint(self, section, option): # call the parent function, pass all args (maybe without mentioning them, ideally without copying them) ret = parent.get(section, option) # make modifications print("my wrapper") ret = ret + 1 # return modified value return ret config = MyConfigParser() # call parent function config.read('my-file.ini') # call my wrapped-function a = config.getint('a','b') Just for an information, I'm unable to modify the parent class. How to do that?
[ "I think you are confused about what a decorator is and what the word wrapping typically refers to. Neither of those apply here, if I understood your intent correctly.\nOf course you can subclass some existing class that you may or may not have any control over and override its methods. And of course you can conveniently call the parent class' methods from inside your own methods. In Python you typically use the built in super proxy class for that. But none of this has anything to do with decoration or wrapping.\nIf you want to override a superclass' method, you should take care to match its signature. Otherwise you would be violating type safety. If all you care about are a few specific arguments but not all of them, you can cheat a bit and collect the rest in the catch-all *args, **kwargs parameters, but you should then pass them along to the superclass' method call.\nAlso, you seemed to be calling the ConfigParser.get method inside your overridden getint method. But I am pretty sure that was a typo/mistake because you are clearly expecting the returned value that you assign to ret to be an int, whereas get returns a str. So you should be calling super().getint(...) there.\nFinally, if we are already at it, it is good form to properly annotate your functions with types. (see PEP 484 for details)\nHere is what I think you need:\nfrom configparser import ConfigParser\nfrom typing import Any\n\n\nclass MyConfigParser(ConfigParser):\n def getint(self, section: str, option: str, *args: Any, **kwargs: Any) -> int:\n ret = super().getint(section, option, *args, **kwargs)\n print(\"overridden `getint`\")\n ret += 1\n return ret # type: ignore[no-any-return]\n\n\nif __name__ == \"__main__\":\n config = MyConfigParser()\n config.read(\"my-file.ini\")\n print(config.getint(\"a\", \"b\"))\n\nThe output with your example ini-file:\n\noverridden `getint`\n2\n\nThere are a lot of questions and answers on this platform about super and subclassing as well as decorators. I would suggest you search a bit and read up on those.\n" ]
[ 1 ]
[]
[]
[ "configparser", "python" ]
stackoverflow_0074645630_configparser_python.txt
Q: How to add item last and remove first item in python dataframe? My dataframe is like this: data = { "a": [420, 380, 390], "b": [50, 40, 45] } df = pd.DataFrame(data) I want to add new item at the end of this dataframe, and remove the first item. I mean cont will be 3 each addition. New item add {"a": 300, b: 88} and last stuation will be: data = { "a": [380, 390, 300], "b": [40, 45, 88] } Is there a short way to do this? A: You can use pd.concat because append is getting deprecated. Ref dct = {"a": 300, "b": 88} df_new = pd.concat([df, pd.Series(dct).to_frame().T] ).iloc[1:, :].reset_index(drop=True) print(df_new) # If maybe the values of 'dict' have multiple items. # dct = {"a": [300, 400], "b": [88, 98]} # df_new = pd.concat([df, pd.DataFrame(dct)] # ).iloc[1:, :].reset_index(drop=True) You can add a new row to df with pandas.DataFrame.append then drop the first-row base number of index. (At the end use reset_index if it is necessary) dct = {"a": 300, "b": 88} df_new = df.append(dct, ignore_index=True).drop(0, axis=0).reset_index(drop=True) print(df_new) Output: a b 0 380 40 1 390 45 2 300 88 A: Using concat: df = pd.concat([df.iloc[1:], pd.DataFrame.from_dict({0: d}, orient='index')], ignore_index=True) Output: a b 0 380 40 1 390 45 2 300 88 A: You can append new row to existing dataframe by df.append(). In your example, this would be new_row = {"a": 300, "b": 88} df2 = df.append(new_row, ignore_index=True) (Notice that append works both for Dataframe and dict objects, but the later requires ignore_index=True. And you can remove the first row from dataframe by one of the following methods: Select using iloc df3 = df2.iloc[1:, :] This slices all rows but the first one, and all columns. Use drop to remove the first row. df3 = df2.drop(df.index[0], axis=0, inplace=False) Or you could use inplace=True to modify df2 inplace, None will be returned in that case.
How to add item last and remove first item in python dataframe?
My dataframe is like this: data = { "a": [420, 380, 390], "b": [50, 40, 45] } df = pd.DataFrame(data) I want to add new item at the end of this dataframe, and remove the first item. I mean cont will be 3 each addition. New item add {"a": 300, b: 88} and last stuation will be: data = { "a": [380, 390, 300], "b": [40, 45, 88] } Is there a short way to do this?
[ "You can use pd.concat because append is getting deprecated. Ref\ndct = {\"a\": 300, \"b\": 88}\ndf_new = pd.concat([df, pd.Series(dct).to_frame().T]\n ).iloc[1:, :].reset_index(drop=True)\nprint(df_new)\n\n# If maybe the values of 'dict' have multiple items.\n# dct = {\"a\": [300, 400], \"b\": [88, 98]}\n# df_new = pd.concat([df, pd.DataFrame(dct)]\n# ).iloc[1:, :].reset_index(drop=True)\n\nYou can add a new row to df with pandas.DataFrame.append then drop the first-row base number of index. (At the end use reset_index if it is necessary)\ndct = {\"a\": 300, \"b\": 88}\ndf_new = df.append(dct, ignore_index=True).drop(0, axis=0).reset_index(drop=True)\nprint(df_new)\n\nOutput:\n a b\n0 380 40\n1 390 45\n2 300 88\n\n", "Using concat:\ndf = pd.concat([df.iloc[1:],\n pd.DataFrame.from_dict({0: d}, orient='index')], \n ignore_index=True)\n\nOutput:\n a b\n0 380 40\n1 390 45\n2 300 88\n\n", "You can append new row to existing dataframe by df.append(). In your example, this would be\nnew_row = {\"a\": 300, \"b\": 88}\ndf2 = df.append(new_row, ignore_index=True)\n\n(Notice that append works both for Dataframe and dict objects, but the later requires ignore_index=True.\nAnd you can remove the first row from dataframe by one of the following methods:\n\nSelect using iloc\n\ndf3 = df2.iloc[1:, :]\n\nThis slices all rows but the first one, and all columns.\n\nUse drop to remove the first row.\n\ndf3 = df2.drop(df.index[0], axis=0, inplace=False)\n\nOr you could use inplace=True to modify df2 inplace, None will be returned in that case.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074647052_dataframe_pandas_python.txt
Q: Python-Jenkins tunnel connection failed: 403 Forbidden I have been using the Python Jenkins APIs to manager my Jenkins jobs. It has worked for a long time, but it stopped suddenly working. This is the code excerpt: import jenkins server = jenkins.Jenkins('https://jenkins.company.com', username='xxxx', password='password') server._session.verify = False print(server.jobs_count()) The traceback: File "", line 1, in server.jobs_count() File "E:\anaconda3\Lib\site-packages\jenkins_init_.py", line 1160, in jobs_count return len(self.get_all_jobs()) File "E:\anaconda3\Lib\site-packages\jenkins_init_.py", line 1020, in get_all_jobs jobs = [(0, [], self.get_info(query=jobs_query)['jobs'])] File "E:\anaconda3\Lib\site-packages\jenkins_init_.py", line 769, in get_info requests.Request('GET', self._build_url(url)) File "E:\anaconda3\Lib\site-packages\jenkins_init_.py", line 557, in jenkins_open return self.jenkins_request(req, add_crumb, resolve_auth).text File "E:\anaconda3\Lib\site-packages\jenkins_init_.py", line 573, in jenkins_request self.maybe_add_crumb(req) File "E:\anaconda3\Lib\site-packages\jenkins_init_.py", line 371, in maybe_add_crumb 'GET', self._build_url(CRUMB_URL)), add_crumb=False) File "E:\anaconda3\Lib\site-packages\jenkins_init_.py", line 557, in jenkins_open return self.jenkins_request(req, add_crumb, resolve_auth).text File "E:\anaconda3\Lib\site-packages\jenkins_init_.py", line 576, in jenkins_request self._request(req)) File "E:\anaconda3\Lib\site-packages\jenkins_init_.py", line 550, in _request return self._session.send(r, **_settings) File "E:\anaconda3\Lib\site-packages\requests\sessions.py", line 622, in send r = adapter.send(request, **kwargs) File "E:\anaconda3\Lib\site-packages\requests\adapters.py", line 507, in send raise ProxyError(e, request=request) ProxyError: HTTPSConnectionPool(host='jenkins.company.com', port=443): Max retries exceeded with url: /job/scp/job/sm/job/9218/job/4198/job/SIT/crumbIssuer/api/json (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 403 Forbidden'))) Note that there isn't any proxy on the Jenkins server, and I can use the user/password logon to the Jenkins server without any issues. I have the crum id and API token, but I haven't found anything that is indicating how to add the crum into the Python-Jenkins API. A: tl;dr: You lack connectivity. The jenkins library depends on import requests, which is reporting the connectivity error. Regrettably, it uses ProxyError in the diagnostic. The rationale goes like this: We're making a GET request for the application. Optionally the "GET from server S" will be turned into "GET from proxy P" if proxying is in use. Eventually we try to contact some host, S or P. Might as well tell a proxy user that state of S is unknown, but state of P is "down". Here ends the "why mention proxying?" diagnostic rant. When you say "I'm not using proxying", I believe you. The diagnostic can be a bit of a red herring for folks who are not yet familiar with it. When I probe ebs.usps.gov (56.207.107.97) on ports 443, 80, or with ICMP, I see zero response packets. You're in a different part of the net, with different filters between you and server, so your mileage might vary. I wouldn't describe that host as a "public server", since it offers me no responses. It appears you sent SYN to tcp port 443, and either some network device discarded that packet, or the server replied with SYN-ACK and that reply packet was discarded. 
Most likely the server is down or your request was discarded. A: The final part of the traceback says: ProxyError: HTTPSConnectionPool(host='ebs.usps.gov', port=443) Which most likely indicates that you have proxy settings that your Python code inherits from somewhere when it runs. It could be environment variables ((HTTP|HTTPS)_PROXY) on POSIX sort of platforms or something similar... If you need to use a proxy to reach the Jenkins instance, then the issue is in the proxy itself. It blocks your access for some reason. If you do not need to use a proxy, then you should remove the settings affecting your Python code when you run it. Also, see what J_H said...
Python-Jenkins tunnel connection failed: 403 Forbidden
I have been using the Python Jenkins APIs to manager my Jenkins jobs. It has worked for a long time, but it stopped suddenly working. This is the code excerpt: import jenkins server = jenkins.Jenkins('https://jenkins.company.com', username='xxxx', password='password') server._session.verify = False print(server.jobs_count()) The traceback: File "", line 1, in server.jobs_count() File "E:\anaconda3\Lib\site-packages\jenkins_init_.py", line 1160, in jobs_count return len(self.get_all_jobs()) File "E:\anaconda3\Lib\site-packages\jenkins_init_.py", line 1020, in get_all_jobs jobs = [(0, [], self.get_info(query=jobs_query)['jobs'])] File "E:\anaconda3\Lib\site-packages\jenkins_init_.py", line 769, in get_info requests.Request('GET', self._build_url(url)) File "E:\anaconda3\Lib\site-packages\jenkins_init_.py", line 557, in jenkins_open return self.jenkins_request(req, add_crumb, resolve_auth).text File "E:\anaconda3\Lib\site-packages\jenkins_init_.py", line 573, in jenkins_request self.maybe_add_crumb(req) File "E:\anaconda3\Lib\site-packages\jenkins_init_.py", line 371, in maybe_add_crumb 'GET', self._build_url(CRUMB_URL)), add_crumb=False) File "E:\anaconda3\Lib\site-packages\jenkins_init_.py", line 557, in jenkins_open return self.jenkins_request(req, add_crumb, resolve_auth).text File "E:\anaconda3\Lib\site-packages\jenkins_init_.py", line 576, in jenkins_request self._request(req)) File "E:\anaconda3\Lib\site-packages\jenkins_init_.py", line 550, in _request return self._session.send(r, **_settings) File "E:\anaconda3\Lib\site-packages\requests\sessions.py", line 622, in send r = adapter.send(request, **kwargs) File "E:\anaconda3\Lib\site-packages\requests\adapters.py", line 507, in send raise ProxyError(e, request=request) ProxyError: HTTPSConnectionPool(host='jenkins.company.com', port=443): Max retries exceeded with url: /job/scp/job/sm/job/9218/job/4198/job/SIT/crumbIssuer/api/json (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 403 Forbidden'))) Note that there isn't any proxy on the Jenkins server, and I can use the user/password logon to the Jenkins server without any issues. I have the crum id and API token, but I haven't found anything that is indicating how to add the crum into the Python-Jenkins API.
[ "tl;dr: You lack connectivity.\nThe jenkins library depends on import requests,\nwhich is reporting the connectivity error.\nRegrettably, it uses ProxyError in the diagnostic.\nThe rationale goes like this:\n\nWe're making a GET request for the application.\nOptionally the \"GET from server S\" will be turned into \"GET from proxy P\" if proxying is in use.\nEventually we try to contact some host, S or P. Might as well tell a proxy user that state of S is unknown, but state of P is \"down\".\n\nHere ends the \"why mention proxying?\" diagnostic rant.\nWhen you say \"I'm not using proxying\", I believe you.\nThe diagnostic can be a bit of a red herring for\nfolks who are not yet familiar with it.\n\nWhen I probe ebs.usps.gov (56.207.107.97) on ports 443, 80, or with ICMP, I see zero response packets.\nYou're in a different part of the net, with different\nfilters between you and server, so your mileage might vary.\nI wouldn't describe that host as a \"public server\",\nsince it offers me no responses.\n\nIt appears you sent SYN to tcp port 443,\nand either some network device discarded that packet,\nor the server replied with SYN-ACK and that\nreply packet was discarded.\nMost likely the server is down or your request was discarded.\n", "The final part of the traceback says:\nProxyError: HTTPSConnectionPool(host='ebs.usps.gov', port=443)\n\nWhich most likely indicates that you have proxy settings that your Python code inherits from somewhere when it runs. It could be environment variables ((HTTP|HTTPS)_PROXY) on POSIX sort of platforms or something similar... If you need to to use a proxy to reach the Jenkins instance, then the issue is in the proxy itself. It blocks your access for some reason. If you do not need to use a proxy, then you should remove the settings affecting your Python code when you run it.\nAlso, see what J_H said...\n" ]
[ 0, 0 ]
[]
[]
[ "api", "jenkins", "json", "python" ]
stackoverflow_0074647215_api_jenkins_json_python.txt
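Both answers point at proxy settings being inherited by the Python process. One way to check for and neutralise them from the script itself — assuming, as in the question, that python-jenkins exposes its requests session as server._session — is a sketch along these lines:

import os
import jenkins

# show any proxy variables the process has inherited
for var in ("HTTP_PROXY", "HTTPS_PROXY", "http_proxy", "https_proxy"):
    if var in os.environ:
        print(var, "=", os.environ[var])

server = jenkins.Jenkins('https://jenkins.company.com', username='xxxx', password='password')
server._session.trust_env = False  # stop requests from picking up proxy environment variables
server._session.proxies = {}       # drop any proxies already attached to the session
print(server.jobs_count())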
Q: Can I set a multiprocessing.pool as non-deamon? I have to do CPU-bound tasks, every task is assigened to a process with multiprocessing.Pool with multiprocessing.Pool(3) as p: results = list(p.map(task, [args1, args2, args3, aegs4, ..., argsn])) In every task there is a for loop, as the last one, that can be parallelized with multiprocessing.pool, but when i do it I get: AssertionError: daemonic processes are not allowed to have children I know one possible solution is: Python Process Pool non-daemonic? But my question is: should I make a pool process non-deamon or it is unsafe? Now I do this: # subtask def update(args): ... return updated_a # task def task(args): ... for i in range(200): # evaluations ... with multiprocessing.get_context('spawn').Pool(len(self.arraies)) as p: self.arraies = list(p.map(update, [[..., a] for a in self.arraies])) ... return result ... ss = np.random.SeedSequence() tasks_seeds = ss.spawn(N_ITERATIONS + 1) streams = [np.random.default_rng(s) for s in tasks_seeds] results = [] with multiprocessing.get_context('spawn').Pool(3) as p: results = list(p.map(task, [[...,streams[i]] for i in range(N_ITERATIONS)])) ... A: This is a bit too long to answer as a comment, and so ... If what these tasks are doing is all or mostly all CPU-processing with very little waiting, then you should not be creating a processing pool greater than the number of CPU cores you have. See below for the general idea. Instead of using a multithreading pool, you could simply have a single multiprocessing pool and restructure code to do the iterating in the main process: from multiprocessing.pool import ThreadPool, Pool import multiprocessing from functools import partial # subtask def update(args): ... return updated_a # task # This runs in a multithreading pool, but it # is mostly waiting for the multiprocessing pool to generate # results: def task(process_pool, args): ... for i in range(200): # evaluations ... self.arraies = list(process_pool.map(update, [[..., a] for a in self.arraies])) ... return result # required for spawned processes: if __name__ == '__main__': ... ss = np.random.SeedSequence() tasks_seeds = ss.spawn(N_ITERATIONS + 1) streams = [np.random.default_rng(s) for s in tasks_seeds] # Use number of cpu cores: with multiprocessing.get_context('spawn').Pool() as process_pool: task_args = [[...,streams[i]] for i in range(N_ITERATIONS)] # Limit thread pool size to a maximum of 200 (rather arbitrary): with ThreadPool(min(200, len(task_args))) as thread_pool: # pass the pool as the first argument: worker = partial(task, process_pool) results = p.map(worker, task_args)
Can I set a multiprocessing.pool as non-daemon?
I have to do CPU-bound tasks, every task is assigened to a process with multiprocessing.Pool with multiprocessing.Pool(3) as p: results = list(p.map(task, [args1, args2, args3, aegs4, ..., argsn])) In every task there is a for loop, as the last one, that can be parallelized with multiprocessing.pool, but when i do it I get: AssertionError: daemonic processes are not allowed to have children I know one possible solution is: Python Process Pool non-daemonic? But my question is: should I make a pool process non-deamon or it is unsafe? Now I do this: # subtask def update(args): ... return updated_a # task def task(args): ... for i in range(200): # evaluations ... with multiprocessing.get_context('spawn').Pool(len(self.arraies)) as p: self.arraies = list(p.map(update, [[..., a] for a in self.arraies])) ... return result ... ss = np.random.SeedSequence() tasks_seeds = ss.spawn(N_ITERATIONS + 1) streams = [np.random.default_rng(s) for s in tasks_seeds] results = [] with multiprocessing.get_context('spawn').Pool(3) as p: results = list(p.map(task, [[...,streams[i]] for i in range(N_ITERATIONS)])) ...
[ "This is a bit too long to answer as a comment, and so ...\nIf what these tasks are doing is all or mostly all CPU-processing with very little waiting, then you should not be creating a processing pool greater than the number of CPU cores you have. See below for the general idea. Instead of using a multithreading pool, you could simply have a single multiprocessing pool and restructure code to do the iterating in the main process:\nfrom multiprocessing.pool import ThreadPool, Pool\nimport multiprocessing\nfrom functools import partial\n\n# subtask\ndef update(args): \n ...\n return updated_a\n\n\n# task\n# This runs in a multithreading pool, but it\n# is mostly waiting for the multiprocessing pool to generate\n# results:\ndef task(process_pool, args):\n ...\n for i in range(200):\n # evaluations\n ...\n self.arraies = list(process_pool.map(update, [[..., a] for a in self.arraies]))\n ...\n return result\n\n\n# required for spawned processes:\nif __name__ == '__main__':\n ...\n ss = np.random.SeedSequence()\n tasks_seeds = ss.spawn(N_ITERATIONS + 1)\n streams = [np.random.default_rng(s) for s in tasks_seeds]\n # Use number of cpu cores:\n with multiprocessing.get_context('spawn').Pool() as process_pool:\n task_args = [[...,streams[i]] for i in range(N_ITERATIONS)]\n # Limit thread pool size to a maximum of 200 (rather arbitrary):\n with ThreadPool(min(200, len(task_args))) as thread_pool:\n # pass the pool as the first argument:\n worker = partial(task, process_pool)\n results = p.map(worker, task_args)\n\n" ]
[ 0 ]
[]
[]
[ "for_loop", "multiprocessing", "parallel_processing", "pool", "python" ]
stackoverflow_0074641688_for_loop_multiprocessing_parallel_processing_pool_python.txt
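One small caveat about the sketch in the answer above: its last line calls p.map, but no p is in scope at that point — it presumably means the thread_pool opened just before. The corrected tail of that block, under the same assumptions (process_pool, task and task_args defined as shown there), would be:

from functools import partial
from multiprocessing.pool import ThreadPool

# process_pool, task and task_args defined as in the answer above
with ThreadPool(min(200, len(task_args))) as thread_pool:
    worker = partial(task, process_pool)
    results = thread_pool.map(worker, task_args)  # not p.map; p is never defined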
Q: How to apply a function on combining two columns in pandas dataframe? I have two columns "ColA" and "ColB" in a pandas dataframe like below: I want apply a custom function on ColA and ColB, and update another column ColC. The custom function is like below: def customFunc(file_name, pattern): match_index = -1 with open(file_name) as f: data = f.read() for n, line in enumerate(data): match_index = line.find(pattern) if match_index != -1: break return match_index For each row of the dataframe, the file_name will come from ColA and pattern will come from ColB and the returned match_index will be updated in ColC like below: I have tried like below but the value of ColA and ColB of each row is not passed into the custom function. df["ColC"] = df[["ColA", "ColB"]].apply(lambda x: customFunc(x.ColA, x.ColB)) How to pass ColA and ColB of each row into the custom function using apply()? A: using axis = 1 did the trick. df["ColC"] = df[["ColA", "ColB"]].apply(lambda x: customFunc(x.ColA, x.ColB), axis = 1)
How to apply a function on combining two columns in pandas dataframe?
I have two columns "ColA" and "ColB" in a pandas dataframe like below: I want apply a custom function on ColA and ColB, and update another column ColC. The custom function is like below: def customFunc(file_name, pattern): match_index = -1 with open(file_name) as f: data = f.read() for n, line in enumerate(data): match_index = line.find(pattern) if match_index != -1: break return match_index For each row of the dataframe, the file_name will come from ColA and pattern will come from ColB and the returned match_index will be updated in ColC like below: I have tried like below but the value of ColA and ColB of each row is not passed into the custom function. df["ColC"] = df[["ColA", "ColB"]].apply(lambda x: customFunc(x.ColA, x.ColB)) How to pass ColA and ColB of each row into the custom function using apply()?
[ "using axis = 1 did the trick.\ndf[\"ColC\"] = df[[\"ColA\", \"ColB\"]].apply(lambda x: customFunc(x.ColA, x.ColB), axis = 1)\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074647326_pandas_python.txt
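The fix above hinges on axis=1, which makes apply pass each row (rather than each column) to the lambda. A self-contained toy illustration — the file-reading customFunc from the question is replaced by a trivial stand-in just to show what x.ColA and x.ColB hold:

import pandas as pd

def custom_func(file_name, pattern):
    # stand-in for the real file/pattern matcher in the question
    return len(file_name) + len(pattern)

df = pd.DataFrame({"ColA": ["file1.txt", "file2.txt"], "ColB": ["foo", "bar"]})

# with axis=1, x is a row Series, so x.ColA and x.ColB are that row's values
df["ColC"] = df[["ColA", "ColB"]].apply(lambda x: custom_func(x.ColA, x.ColB), axis=1)
print(df)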
Q: Restore an image after rotation without black borders in python I've used the following code in order to rotate an image (initial image) and make some processing: def rotate_image(mat, angle): """ Rotates an image (angle in degrees) and expands image to avoid cropping """ height, width = mat.shape[:2] # image shape has 3 dimensions image_center = (width/2, height/2) # getRotationMatrix2D needs coordinates in reverse order (width, height) compared to shape rotation_mat = cv2.getRotationMatrix2D(image_center, angle, 1.) # rotation calculates the cos and sin, taking absolutes of those. abs_cos = abs(rotation_mat[0,0]) abs_sin = abs(rotation_mat[0,1]) # find the new width and height bounds bound_w = int(height * abs_sin + width * abs_cos) bound_h = int(height * abs_cos + width * abs_sin) # subtract old image center (bringing image back to origo) and adding the new image center coordinates rotation_mat[0, 2] += bound_w/2 - image_center[0] rotation_mat[1, 2] += bound_h/2 - image_center[1] # rotate image with the new bounds and translated rotation matrix rotated_mat = cv2.warpAffine(mat, rotation_mat, (bound_w, bound_h)) return rotated_mat ModifiedVersionRotation = rotate_image(img, 35) cv2.imwrite("lenarot.jpg", ModifiedVersionRotation) The function adds black borders to the image so it wont be cropped at the rotation rotated image which is what I actually need. But, how can I rotate back the image and remove the black borders? A: For rotating back, we may compute the inverse transformation, and apply it to the rotated image: Example: inv_rotation_mat = cv2.invertAffineTransform(rotation_mat) # Get inverse transformation matrix unrotated_mat = cv2.warpAffine(rotated_mat, inv_rotation_mat, (mat.shape[1], mat.shape[0])) # Apply warp (and set the destination size to original size of mat). Complete code sample: import cv2 img = cv2.imread('lena.jpg') def rotate_image(mat, angle): """ Rotates an image (angle in degrees) and expands image to avoid cropping """ height, width = mat.shape[:2] # image shape has 3 dimensions image_center = (width/2, height/2) # getRotationMatrix2D needs coordinates in reverse order (width, height) compared to shape rotation_mat = cv2.getRotationMatrix2D(image_center, angle, 1.) # rotation calculates the cos and sin, taking absolutes of those. 
abs_cos = abs(rotation_mat[0,0]) abs_sin = abs(rotation_mat[0,1]) # find the new width and height bounds bound_w = int(height * abs_sin + width * abs_cos) bound_h = int(height * abs_cos + width * abs_sin) # subtract old image center (bringing image back to origo) and adding the new image center coordinates rotation_mat[0, 2] += bound_w/2 - image_center[0] rotation_mat[1, 2] += bound_h/2 - image_center[1] # rotate image with the new bounds and translated rotation matrix rotated_mat = cv2.warpAffine(mat, rotation_mat, (bound_w, bound_h)) return (rotated_mat, rotation_mat) # Also return rotation_mat ModifiedVersionRotation, T = rotate_image(img, 35) iT = cv2.invertAffineTransform(T) unrotated_img = cv2.warpAffine(ModifiedVersionRotation, iT, (img.shape[1], img.shape[0])) #cv2.imwrite("lenarot.jpg", ModifiedVersionRotation) #cv2.imwrite("lenarotback.jpg", unrotated_img) cv2.imwrite("lenarot.png", ModifiedVersionRotation) cv2.imwrite("lenarotback.png", unrotated_img) # Show images for testing cv2.imshow('ModifiedVersionRotation', ModifiedVersionRotation) cv2.imshow('img', img) cv2.imshow('unrotated_img', unrotated_img) cv2.waitKey() cv2.destroyAllWindows() ModifiedVersionRotation: unrotated_img: Original image: The "restored" image looks blurred, because each warp (forward and backward) applies bi-linear interpolation that blurs the image a little.
Restore an image after rotation without black borders in python
I've used the following code in order to rotate an image (initial image) and make some processing: def rotate_image(mat, angle): """ Rotates an image (angle in degrees) and expands image to avoid cropping """ height, width = mat.shape[:2] # image shape has 3 dimensions image_center = (width/2, height/2) # getRotationMatrix2D needs coordinates in reverse order (width, height) compared to shape rotation_mat = cv2.getRotationMatrix2D(image_center, angle, 1.) # rotation calculates the cos and sin, taking absolutes of those. abs_cos = abs(rotation_mat[0,0]) abs_sin = abs(rotation_mat[0,1]) # find the new width and height bounds bound_w = int(height * abs_sin + width * abs_cos) bound_h = int(height * abs_cos + width * abs_sin) # subtract old image center (bringing image back to origo) and adding the new image center coordinates rotation_mat[0, 2] += bound_w/2 - image_center[0] rotation_mat[1, 2] += bound_h/2 - image_center[1] # rotate image with the new bounds and translated rotation matrix rotated_mat = cv2.warpAffine(mat, rotation_mat, (bound_w, bound_h)) return rotated_mat ModifiedVersionRotation = rotate_image(img, 35) cv2.imwrite("lenarot.jpg", ModifiedVersionRotation) The function adds black borders to the image so it wont be cropped at the rotation rotated image which is what I actually need. But, how can I rotate back the image and remove the black borders?
[ "For rotating back, we may compute the inverse transformation, and apply it to the rotated image:\nExample:\ninv_rotation_mat = cv2.invertAffineTransform(rotation_mat) # Get inverse transformation matrix\nunrotated_mat = cv2.warpAffine(rotated_mat, inv_rotation_mat, (mat.shape[1], mat.shape[0])) # Apply warp (and set the destination size to original size of mat).\n\n\nComplete code sample:\nimport cv2\n\nimg = cv2.imread('lena.jpg')\n\ndef rotate_image(mat, angle):\n \"\"\"\n Rotates an image (angle in degrees) and expands image to avoid cropping\n \"\"\"\n\n height, width = mat.shape[:2] # image shape has 3 dimensions\n image_center = (width/2, height/2) # getRotationMatrix2D needs coordinates in reverse order (width, height) compared to shape\n\n rotation_mat = cv2.getRotationMatrix2D(image_center, angle, 1.)\n\n # rotation calculates the cos and sin, taking absolutes of those.\n abs_cos = abs(rotation_mat[0,0]) \n abs_sin = abs(rotation_mat[0,1])\n\n # find the new width and height bounds\n bound_w = int(height * abs_sin + width * abs_cos)\n bound_h = int(height * abs_cos + width * abs_sin)\n\n # subtract old image center (bringing image back to origo) and adding the new image center coordinates\n rotation_mat[0, 2] += bound_w/2 - image_center[0]\n rotation_mat[1, 2] += bound_h/2 - image_center[1]\n\n # rotate image with the new bounds and translated rotation matrix\n rotated_mat = cv2.warpAffine(mat, rotation_mat, (bound_w, bound_h))\n return (rotated_mat, rotation_mat) # Also return rotation_mat\n\n\nModifiedVersionRotation, T = rotate_image(img, 35)\n\niT = cv2.invertAffineTransform(T)\n\nunrotated_img = cv2.warpAffine(ModifiedVersionRotation, iT, (img.shape[1], img.shape[0]))\n\n#cv2.imwrite(\"lenarot.jpg\", ModifiedVersionRotation)\n#cv2.imwrite(\"lenarotback.jpg\", unrotated_img)\ncv2.imwrite(\"lenarot.png\", ModifiedVersionRotation)\ncv2.imwrite(\"lenarotback.png\", unrotated_img)\n\n\n\n# Show images for testing\ncv2.imshow('ModifiedVersionRotation', ModifiedVersionRotation)\ncv2.imshow('img', img)\ncv2.imshow('unrotated_img', unrotated_img)\ncv2.waitKey()\ncv2.destroyAllWindows()\n\n\nModifiedVersionRotation:\n\nunrotated_img:\n\nOriginal image:\n\n\nThe \"restored\" image looks blurred, because each warp (forward and backward) applies bi-linear interpolation that blurs the image a little.\n" ]
[ 3 ]
[]
[]
[ "image_processing", "opencv", "python" ]
stackoverflow_0074646959_image_processing_opencv_python.txt
Q: Unable to pull default text from input element with Selenium I'm trying to get the 11/30/2022 date from the SOA Handled Date/Time field from this site pictured here. It's not a public site, so I can't simply post the link. The text is in an input field that's filled in by default when you open the page, and it has the following HTML. <td> <input type="text" name="soa_h_date" id="soa_h_date" class="readOnly disableInput" readonly="readonly"> </td> I've tried everything and i'm just not able to pull the text no matter what I do. I've tried the following driver.find_element_by_xpath('//input[@id="soa_h_date"]').text driver.find_element_by_xpath('//input[@id="soa_h_date"]').getAttribute("value") driver.find_element_by_xpath('//input[@id="soa_h_date"]').getAttribute("placeholder") driver.find_element_by_xpath('//input[@id="soa_h_date"]').getAttribute("textarea") driver.find_element_by_xpath('//input[@id="soa_h_date"]').getAttribute("innerText") driver.find_element_by_xpath('//input[@id="soa_h_date"]').getAttribute("outerText") driver.find_element_by_xpath('//input[@id="soa_h_date"]').getAttribute("value") A: Nevermind I figured it out. I had to use a javascript executor to pull the text with the following code. element = driver.find_element_by_xpath('//input[@id="soa_h_date"]') date = driver.execute_script("return arguments[0].value",element)
Unable to pull default text from input element with Selenium
I'm trying to get the 11/30/2022 date from the SOA Handled Date/Time field from this site pictured here. It's not a public site, so I can't simply post the link. The text is in an input field that's filled in by default when you open the page, and it has the following HTML. <td> <input type="text" name="soa_h_date" id="soa_h_date" class="readOnly disableInput" readonly="readonly"> </td> I've tried everything and i'm just not able to pull the text no matter what I do. I've tried the following driver.find_element_by_xpath('//input[@id="soa_h_date"]').text driver.find_element_by_xpath('//input[@id="soa_h_date"]').getAttribute("value") driver.find_element_by_xpath('//input[@id="soa_h_date"]').getAttribute("placeholder") driver.find_element_by_xpath('//input[@id="soa_h_date"]').getAttribute("textarea") driver.find_element_by_xpath('//input[@id="soa_h_date"]').getAttribute("innerText") driver.find_element_by_xpath('//input[@id="soa_h_date"]').getAttribute("outerText") driver.find_element_by_xpath('//input[@id="soa_h_date"]').getAttribute("value")
[ "Nevermind I figured it out. I had to use a javascript executor to pull the text with the following code.\nelement = driver.find_element_by_xpath('//input[@id=\"soa_h_date\"]')\ndate = driver.execute_script(\"return arguments[0].value\",element)\n\n" ]
[ 0 ]
[]
[]
[ "html", "python", "selenium", "selenium_webdriver" ]
stackoverflow_0074643472_html_python_selenium_selenium_webdriver.txt
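Worth noting: the attempts in the question call getAttribute, which is the Java binding's spelling; in the Python bindings the method is get_attribute, so reading the value directly should also work. A sketch using the same driver and locator as above:

element = driver.find_element_by_xpath('//input[@id="soa_h_date"]')

# Python selenium spells it get_attribute (getAttribute is the Java API)
date = element.get_attribute("value")

# the JavaScript-executor route from the answer is equivalent
date_js = driver.execute_script("return arguments[0].value", element)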
Q: Is it possible to improve python performance for this code? I have a simple code that: Read a trajectory file that can be seen as a list of 2D arrays (list of positions in space) stored in Y I then want to compute for each pair (scipy.pdist style) the RMSD My code works fine: trajectory = read("test.lammpstrj", index="::") m = len(trajectory) #.get_positions() return a 2d numpy array Y = np.array([snapshot.get_positions() for snapshot in trajectory]) b = [np.sqrt(((((Y[i]- Y[j])**2))*3).mean()) for i in range(m) for j in range(i + 1, m)] This code execute in 0.86 seconds using python3.10, using Julia1.8 the same kind of code execute in 0.46 seconds I plan to have trajectory much larger (~ 200,000 elements), would it be possible to get a speed-up using python or should I stick to Julia? A: You've mentioned that snapshot.get_positions() returns some 2D array, suppose of shape (p, q). So I expect that Y is a 3D array with some shape (m, p, q), where m is the number of snapshots in the trajectory. You also expect m to scale rather high. Let's see a basic way to speed up the distance calculation, on the setting m=1000: import numpy as np # dummy inputs m = 1000 p, q = 4, 5 Y = np.random.randn(m, p, q) # your current method def foo(): return [np.sqrt(((((Y[i]- Y[j])**2))*3).mean()) for i in range(m) for j in range(i + 1, m)] # vectorized approach -> compute the upper triangle of the pairwise distance matrix def bar(): u, v = np.triu_indices(Y.shape[0], 1) return np.sqrt((3 * (Y[u] - Y[v]) ** 2).mean(axis=(-1, -2))) # Check for correctness out_1 = foo() out_2 = bar() print(np.allclose(out_1, out_2)) # True If we test the time required: %timeit -n 10 -r 3 foo() # 3.16 s Β± 50.3 ms per loop (mean Β± std. dev. of 3 runs, 10 loops each) The first method is really slow, it takes over 3 seconds for this calculation. Let's check the second method: %timeit -n 10 -r 3 bar() # 97.5 ms Β± 405 Β΅s per loop (mean Β± std. dev. of 3 runs, 10 loops each) So we have a ~30x speedup here, which would make your large calculation in python much more feasible than using the original code. Feel free to test out with other sizes of Y to see how it scales compared to the original. JIT In addition, you can also try out JIT, mainly jax or numba. It is fairly simple to port the function bar with jax.numpy, for example: import jax import jax.numpy as jnp @jax.jit def jit_bar(Y): u, v = jnp.triu_indices(Y.shape[0], 1) return jnp.sqrt((3 * (Y[u] - Y[v]) ** 2).mean(axis=(-1, -2))) # check for correctness print(np.allclose(bar(), jit_bar(Y))) # True If we test the time of the jitted jnp op: %timeit -n 10 -r 3 jit_bar(Y) # 10.6 ms Β± 678 Β΅s per loop (mean Β± std. dev. of 3 runs, 10 loops each) So compared to the original, we could reach even up to ~300x speed. Note that not every operation can be converted to jax/jit so easily (this particular problem is conveniently suitable), so the general advice is to simply avoid python loops and use numpy's broadcasting/vectorization capabilities, like in bar(). A: Stick to Julia. If you already made it in a language which runs faster, why are you trying to use python in the first place? A: Your question is about speeding up Python, relative to Julia, so I'd like to offer some Julia code for comparison. 
Since your data is most naturally expressed as a list of 4x5 arrays, I suggest expressing it as a vector of SMatrixes: sumdiff2(A, B) = sum((A[i] - B[i])^2 for i in eachindex(A, B)) function dists(Y) M = length(Y) V = Vector{float(eltype(eltype(Y)))}(undef, sum(1:M-1)) Threads.@threads for i in eachindex(Y) ii = sum(M-i+1:M-1) # don't worry about this sum for j in i+1:lastindex(Y) ind = ii + (j-i) V[ind] = sqrt(3 * sumdiff2(Y[i], Y[j])/length(Y[i])) end end return V end using Random: randn using StaticArrays: SMatrix Ys = [randn(SMatrix{4,5,Float64}) for _ in 1:1000]; Benchmarks: # single-threaded julia> using BenchmarkTools julia> @btime dists($Ys); 6.561 ms (2 allocations: 3.81 MiB) # multi-threaded with 6 cores julia> @btime dists($Ys); 1.606 ms (75 allocations: 3.82 MiB) I was not able to install jax on my computer, but when comparing with @Mercury's numpy code I got foo: 5.5seconds bar: 179ms i.e. approximately 3400x speedup over foo. It is possible to write this as a one-liner at a ~2-3x performance cost. A: While Python tends to be slower than Julia for many tasks, it is possible to write numerical codes as fast as Julia in Python using Numba and plain loops. Indeed, Numba is based on LLVM-Lite which is basically a JIT-compiler based on the LLVM toolchain. The standard implementation of Julia also use a JIT and the LLVM toolchain. This means the two should behave pretty closely besides the overhead introduced by the languages that are negligible once the computation is performed in parallel (because the resulting computation will be memory-bound on nearly all modern platforms). This computation can be parallelized in both Julia and Python (still using Numba). While writing a sequential computation is quite straightforward, writing a parallel computation is if bit more complex. Indeed, computing the upper triangular values can result in an imbalanced workload and so to a sub-optimal execution time. An efficient strategy is to compute, for each iteration, a pair of lines: one comes from the top of the upper triangular part and one comes from the bottom. The top line contains m-i items while the bottom one contains i+1 items. In the end, there is m+1 items to compute per iteration so the number of item is independent of the iteration number. This results in a much better load-balancing. The line of the middle needs to be computed separately regarding the size of the input array. Here is the final implementation: import numba as nb import numpy as np @nb.njit(inline='always', fastmath=True) def compute_line(tmp, res, i, m): offset = (i * (2 * m - i - 1)) // 2 factor = 3.0 / n for j in range(i + 1, m): s = 0.0 for k in range(n): s += (tmp[i, k] - tmp[j, k]) ** 2 res[offset] = np.sqrt(s * factor) offset += 1 return res @nb.njit('()', parallel=True, fastmath=True) def fastest(): m, n = Y.shape[0], Y.shape[1] * Y.shape[2] res = np.empty(m*(m-1)//2) tmp = Y.reshape(m, n) for i in nb.prange(m//2): compute_line(tmp, res, i, m) compute_line(tmp, res, m-i-1, m) if m % 2 == 1: compute_line(tmp, res, (m+1)//2, m) return res # [...] same as others %timeit -n 100 fastest() Results Here are performance results on my machine (with a i5-9600KF having 6 cores): foo (seq, Python, Mercury): 4910.7 ms bar (seq, Python, Mercury): 134.2 ms jit_bar (seq, Python, Mercury): ??? 
dists (seq, Julia, DNF) 6.9 ms dists (par, Julia, DNF) 2.2 ms fastest (par, Python, me): 1.5 ms <----- (Jax does not work on my machine so I cannot test it yet) This implementation is the fastest one and succeed to beat the best Julia code so far. Optimal implementation Note that for large arrays like (200_000,4,5), all implementations provided so far are inefficient since they are not cache friendly. Indeed, the input array will take 32 MiB and will not for on the cache of most modern processors (and even if it could, one need to consider the space needed for the output and the fact that caches are not perfect). This can be fixed using tiling, at the expense of an even more complex code. I think such an implementation should be optimal if you use Z-order curves.
Is it possible to improve python performance for this code?
I have a simple code that: Read a trajectory file that can be seen as a list of 2D arrays (list of positions in space) stored in Y I then want to compute for each pair (scipy.pdist style) the RMSD My code works fine: trajectory = read("test.lammpstrj", index="::") m = len(trajectory) #.get_positions() return a 2d numpy array Y = np.array([snapshot.get_positions() for snapshot in trajectory]) b = [np.sqrt(((((Y[i]- Y[j])**2))*3).mean()) for i in range(m) for j in range(i + 1, m)] This code execute in 0.86 seconds using python3.10, using Julia1.8 the same kind of code execute in 0.46 seconds I plan to have trajectory much larger (~ 200,000 elements), would it be possible to get a speed-up using python or should I stick to Julia?
[ "You've mentioned that snapshot.get_positions() returns some 2D array, suppose of shape (p, q). So I expect that Y is a 3D array with some shape (m, p, q), where m is the number of snapshots in the trajectory. You also expect m to scale rather high.\nLet's see a basic way to speed up the distance calculation, on the setting m=1000:\nimport numpy as np\n\n# dummy inputs\nm = 1000\np, q = 4, 5\nY = np.random.randn(m, p, q)\n\n# your current method\ndef foo():\n return [np.sqrt(((((Y[i]- Y[j])**2))*3).mean()) for i in range(m) for j in range(i + 1, m)]\n\n# vectorized approach -> compute the upper triangle of the pairwise distance matrix\ndef bar():\n u, v = np.triu_indices(Y.shape[0], 1)\n return np.sqrt((3 * (Y[u] - Y[v]) ** 2).mean(axis=(-1, -2)))\n\n# Check for correctness\n\nout_1 = foo()\nout_2 = bar()\nprint(np.allclose(out_1, out_2))\n# True\n\nIf we test the time required:\n%timeit -n 10 -r 3 foo()\n# 3.16 s Β± 50.3 ms per loop (mean Β± std. dev. of 3 runs, 10 loops each)\n\nThe first method is really slow, it takes over 3 seconds for this calculation. Let's check the second method:\n%timeit -n 10 -r 3 bar()\n# 97.5 ms Β± 405 Β΅s per loop (mean Β± std. dev. of 3 runs, 10 loops each)\n\nSo we have a ~30x speedup here, which would make your large calculation in python much more feasible than using the original code. Feel free to test out with other sizes of Y to see how it scales compared to the original.\n\nJIT\nIn addition, you can also try out JIT, mainly jax or numba. It is fairly simple to port the function bar with jax.numpy, for example:\nimport jax\nimport jax.numpy as jnp\n\[email protected]\ndef jit_bar(Y):\n u, v = jnp.triu_indices(Y.shape[0], 1)\n return jnp.sqrt((3 * (Y[u] - Y[v]) ** 2).mean(axis=(-1, -2)))\n\n# check for correctness\n\nprint(np.allclose(bar(), jit_bar(Y)))\n# True\n\nIf we test the time of the jitted jnp op:\n%timeit -n 10 -r 3 jit_bar(Y)\n# 10.6 ms Β± 678 Β΅s per loop (mean Β± std. dev. of 3 runs, 10 loops each)\n\nSo compared to the original, we could reach even up to ~300x speed.\nNote that not every operation can be converted to jax/jit so easily (this particular problem is conveniently suitable), so the general advice is to simply avoid python loops and use numpy's broadcasting/vectorization capabilities, like in bar().\n", "Stick to Julia.\nIf you already made it in a language which runs faster, why are you trying to use python in the first place?\n", "Your question is about speeding up Python, relative to Julia, so I'd like to offer some Julia code for comparison.\nSince your data is most naturally expressed as a list of 4x5 arrays, I suggest expressing it as a vector of SMatrixes:\nsumdiff2(A, B) = sum((A[i] - B[i])^2 for i in eachindex(A, B))\nfunction dists(Y)\n M = length(Y)\n V = Vector{float(eltype(eltype(Y)))}(undef, sum(1:M-1))\n Threads.@threads for i in eachindex(Y)\n ii = sum(M-i+1:M-1) # don't worry about this sum\n for j in i+1:lastindex(Y)\n ind = ii + (j-i)\n V[ind] = sqrt(3 * sumdiff2(Y[i], Y[j])/length(Y[i]))\n end\n end\n return V\nend\n\nusing Random: randn\nusing StaticArrays: SMatrix\nYs = [randn(SMatrix{4,5,Float64}) for _ in 1:1000];\n\nBenchmarks:\n# single-threaded\njulia> using BenchmarkTools\njulia> @btime dists($Ys);\n 6.561 ms (2 allocations: 3.81 MiB)\n\n# multi-threaded with 6 cores\njulia> @btime dists($Ys);\n 1.606 ms (75 allocations: 3.82 MiB)\n\nI was not able to install jax on my computer, but when comparing with @Mercury's numpy code I got\nfoo: 5.5seconds\nbar: 179ms\n\ni.e. 
approximately 3400x speedup over foo.\nIt is possible to write this as a one-liner at a ~2-3x performance cost.\n", "While Python tends to be slower than Julia for many tasks, it is possible to write numerical codes as fast as Julia in Python using Numba and plain loops. Indeed, Numba is based on LLVM-Lite which is basically a JIT-compiler based on the LLVM toolchain. The standard implementation of Julia also use a JIT and the LLVM toolchain. This means the two should behave pretty closely besides the overhead introduced by the languages that are negligible once the computation is performed in parallel (because the resulting computation will be memory-bound on nearly all modern platforms).\nThis computation can be parallelized in both Julia and Python (still using Numba). While writing a sequential computation is quite straightforward, writing a parallel computation is if bit more complex. Indeed, computing the upper triangular values can result in an imbalanced workload and so to a sub-optimal execution time. An efficient strategy is to compute, for each iteration, a pair of lines: one comes from the top of the upper triangular part and one comes from the bottom. The top line contains m-i items while the bottom one contains i+1 items. In the end, there is m+1 items to compute per iteration so the number of item is independent of the iteration number. This results in a much better load-balancing. The line of the middle needs to be computed separately regarding the size of the input array.\nHere is the final implementation:\nimport numba as nb\nimport numpy as np\n\[email protected](inline='always', fastmath=True)\ndef compute_line(tmp, res, i, m):\n offset = (i * (2 * m - i - 1)) // 2\n factor = 3.0 / n\n for j in range(i + 1, m):\n s = 0.0\n for k in range(n):\n s += (tmp[i, k] - tmp[j, k]) ** 2\n res[offset] = np.sqrt(s * factor)\n offset += 1\n return res\n\[email protected]('()', parallel=True, fastmath=True)\ndef fastest():\n m, n = Y.shape[0], Y.shape[1] * Y.shape[2]\n res = np.empty(m*(m-1)//2)\n tmp = Y.reshape(m, n)\n for i in nb.prange(m//2):\n compute_line(tmp, res, i, m)\n compute_line(tmp, res, m-i-1, m)\n if m % 2 == 1:\n compute_line(tmp, res, (m+1)//2, m)\n return res\n\n# [...] same as others\n%timeit -n 100 fastest()\n\n\nResults\nHere are performance results on my machine (with a i5-9600KF having 6 cores):\nfoo (seq, Python, Mercury): 4910.7 ms\nbar (seq, Python, Mercury): 134.2 ms\njit_bar (seq, Python, Mercury): ???\ndists (seq, Julia, DNF) 6.9 ms\ndists (par, Julia, DNF) 2.2 ms\nfastest (par, Python, me): 1.5 ms <-----\n\n(Jax does not work on my machine so I cannot test it yet)\nThis implementation is the fastest one and succeed to beat the best Julia code so far.\n\nOptimal implementation\nNote that for large arrays like (200_000,4,5), all implementations provided so far are inefficient since they are not cache friendly. Indeed, the input array will take 32 MiB and will not for on the cache of most modern processors (and even if it could, one need to consider the space needed for the output and the fact that caches are not perfect). This can be fixed using tiling, at the expense of an even more complex code. I think such an implementation should be optimal if you use Z-order curves.\n" ]
[ 6, 4, 2, 1 ]
[]
[]
[ "julia", "numpy", "python" ]
stackoverflow_0074635970_julia_numpy_python.txt
Q: Split list from regex expression to regex expression I'm looking for a regex expression to split this list into x lists. where each list start with russian and ends with english. like this : [''ΠœΠ°Π»ΡŒΡ‡ΠΈΠΊ ΠΈ Π΄Π΅Π²ΠΎΡ‡ΠΊΠ° ΠΈΠ³Ρ€Π°ΡŽΡ‚','A boy and a girl are playing','\xa0literal\xa0 Boy and girl [are] playing'] ['ΠœΡ‹ стояли ΠΈ ΠΆΠ΄Π°Π»ΠΈ','We were standing and waiting'] ['Π‘Π»ΡƒΡˆΠ°Π»ΠΈ всС: ΠΈ ΠΌΡƒΠΆΡ‡ΠΈΠ½Ρ‹, ΠΈ ΠΆΠ΅Π½Ρ‰ΠΈΠ½Ρ‹, ΠΈ Π΄Π΅Ρ‚ΠΈ','Everybody was listening--men, women and children','\xa0literal\xa0 Everybody listened: and men, and women, and children'] I tried to use regex to split the sentence like this [^Π°-яёА-ЯЁ][$A-z] or by using if(regex) without sucess here is the list : re.split(r"\.|\?", st) ['ΠœΠ°Π»ΡŒΡ‡ΠΈΠΊ ΠΈ Π΄Π΅Π²ΠΎΡ‡ΠΊΠ° ΠΈΠ³Ρ€Π°ΡŽΡ‚', 'A boy and a girl are playing', '\xa0literal\xa0 Boy and girl [are] playing', 'ΠœΡ‹ стояли ΠΈ ΠΆΠ΄Π°Π»ΠΈ', 'We were standing and waiting', 'Π‘Π»ΡƒΡˆΠ°Π»ΠΈ всС: ΠΈ ΠΌΡƒΠΆΡ‡ΠΈΠ½Ρ‹, ΠΈ ΠΆΠ΅Π½Ρ‰ΠΈΠ½Ρ‹, ΠΈ Π΄Π΅Ρ‚ΠΈ', 'Everybody was listening--men, women and children', '\xa0literal\xa0 Everybody listened: and men, and women, and children', 'Π”Π°Π²Π°ΠΉ Π³ΠΎΠ²ΠΎΡ€ΠΈΡ‚ΡŒ прямо ΠΈ ΠΎΡ‚ΠΊΡ€ΠΎΠ²Π΅Π½Π½ΠΎ', "Let's talk up (talk frankly and sincerely)", 'Π― это ΠΈ имСю Π² Π²ΠΈΠ΄Ρƒ', "That's what I have in mind", '\xa0literal\xa0 I that and have in view', 'Π§Ρ‚ΠΎ Π±Ρ‹ Ρ‚Ρ‹ Π²Ρ‹Π±Ρ€Π°Π»', ' Π― Π±Ρ‹ Π²Ρ‹Π±Ρ€Π°Π» ΠΈ Ρ‚ΠΎ, ΠΈ Π΄Ρ€ΡƒΠ³ΠΎΠ΅', 'What would you choose', ' I would choose both this and that', 'И ΠΊΠ°ΠΊ Ρ‚Ρ‹ Π½Π΅ понимаСшь, Ρ‡Ρ‚ΠΎ это интСрСсно', "How come you don't understand that this is interesting", 'Он ΠΈ сказал Π±Ρ‹, Π΄Π° Π½Π΅ Π·Π½Π°Π΅Ρ‚', "He would tell but he doesn't know", 'ΠœΡ‹ Ρ‚Π°ΠΊ ΠΈ сдСлали', 'This is what we did', 'Π― Π΄Π°ΠΆΠ΅ ΠΈ Π½Π΅ знаю', "I don't even know", 'Она ΠΈ Π½Π°ΠΌ рассказала', 'She told us too', '']
Split list from regex expression to regex expression
I'm looking for a regex expression to split this list into x lists. where each list start with russian and ends with english. like this : [''ΠœΠ°Π»ΡŒΡ‡ΠΈΠΊ ΠΈ Π΄Π΅Π²ΠΎΡ‡ΠΊΠ° ΠΈΠ³Ρ€Π°ΡŽΡ‚','A boy and a girl are playing','\xa0literal\xa0 Boy and girl [are] playing'] ['ΠœΡ‹ стояли ΠΈ ΠΆΠ΄Π°Π»ΠΈ','We were standing and waiting'] ['Π‘Π»ΡƒΡˆΠ°Π»ΠΈ всС: ΠΈ ΠΌΡƒΠΆΡ‡ΠΈΠ½Ρ‹, ΠΈ ΠΆΠ΅Π½Ρ‰ΠΈΠ½Ρ‹, ΠΈ Π΄Π΅Ρ‚ΠΈ','Everybody was listening--men, women and children','\xa0literal\xa0 Everybody listened: and men, and women, and children'] I tried to use regex to split the sentence like this [^Π°-яёА-ЯЁ][$A-z] or by using if(regex) without sucess here is the list : re.split(r"\.|\?", st) ['ΠœΠ°Π»ΡŒΡ‡ΠΈΠΊ ΠΈ Π΄Π΅Π²ΠΎΡ‡ΠΊΠ° ΠΈΠ³Ρ€Π°ΡŽΡ‚', 'A boy and a girl are playing', '\xa0literal\xa0 Boy and girl [are] playing', 'ΠœΡ‹ стояли ΠΈ ΠΆΠ΄Π°Π»ΠΈ', 'We were standing and waiting', 'Π‘Π»ΡƒΡˆΠ°Π»ΠΈ всС: ΠΈ ΠΌΡƒΠΆΡ‡ΠΈΠ½Ρ‹, ΠΈ ΠΆΠ΅Π½Ρ‰ΠΈΠ½Ρ‹, ΠΈ Π΄Π΅Ρ‚ΠΈ', 'Everybody was listening--men, women and children', '\xa0literal\xa0 Everybody listened: and men, and women, and children', 'Π”Π°Π²Π°ΠΉ Π³ΠΎΠ²ΠΎΡ€ΠΈΡ‚ΡŒ прямо ΠΈ ΠΎΡ‚ΠΊΡ€ΠΎΠ²Π΅Π½Π½ΠΎ', "Let's talk up (talk frankly and sincerely)", 'Π― это ΠΈ имСю Π² Π²ΠΈΠ΄Ρƒ', "That's what I have in mind", '\xa0literal\xa0 I that and have in view', 'Π§Ρ‚ΠΎ Π±Ρ‹ Ρ‚Ρ‹ Π²Ρ‹Π±Ρ€Π°Π»', ' Π― Π±Ρ‹ Π²Ρ‹Π±Ρ€Π°Π» ΠΈ Ρ‚ΠΎ, ΠΈ Π΄Ρ€ΡƒΠ³ΠΎΠ΅', 'What would you choose', ' I would choose both this and that', 'И ΠΊΠ°ΠΊ Ρ‚Ρ‹ Π½Π΅ понимаСшь, Ρ‡Ρ‚ΠΎ это интСрСсно', "How come you don't understand that this is interesting", 'Он ΠΈ сказал Π±Ρ‹, Π΄Π° Π½Π΅ Π·Π½Π°Π΅Ρ‚', "He would tell but he doesn't know", 'ΠœΡ‹ Ρ‚Π°ΠΊ ΠΈ сдСлали', 'This is what we did', 'Π― Π΄Π°ΠΆΠ΅ ΠΈ Π½Π΅ знаю', "I don't even know", 'Она ΠΈ Π½Π°ΠΌ рассказала', 'She told us too', '']
[]
[]
[ "This creates a list of lists, grouping the phrases.\nfinal_list = []\nsublist = []\nfor phrase in starting_list:\n if re.match('^[Π°-яёА-ЯЁ]', phrase):\n final_list.append(sublist)\n sublist = [phrase]\n else:\n sublist += [phrase]\n\n" ]
[ -1 ]
[ "list", "python", "string" ]
stackoverflow_0074646958_list_python_string.txt
Q: Remove dictionary from list If I have a list of dictionaries, say: [{'id': 1, 'name': 'paul'}, {'id': 2, 'name': 'john'}] and I would like to remove the dictionary with id of 2 (or name 'john'), what is the most efficient way to go about this programmatically (that is to say, I don't know the index of the entry in the list so it can't simply be popped). A: thelist[:] = [d for d in thelist if d.get('id') != 2] Edit: as some doubts have been expressed in a comment about the performance of this code (some based on misunderstanding Python's performance characteristics, some on assuming beyond the given specs that there is exactly one dict in the list with a value of 2 for key 'id'), I wish to offer reassurance on this point. On an old Linux box, measuring this code: $ python -mtimeit -s"lod=[{'id':i, 'name':'nam%s'%i} for i in range(99)]; import random" "thelist=list(lod); random.shuffle(thelist); thelist[:] = [d for d in thelist if d.get('id') != 2]" 10000 loops, best of 3: 82.3 usec per loop of which about 57 microseconds for the random.shuffle (needed to ensure that the element to remove is not ALWAYS at the same spot;-) and 0.65 microseconds for the initial copy (whoever worries about performance impact of shallow copies of Python lists is most obviously out to lunch;-), needed to avoid altering the original list in the loop (so each leg of the loop does have something to delete;-). When it is known that there is exactly one item to remove, it's possible to locate and remove it even more expeditiously: $ python -mtimeit -s"lod=[{'id':i, 'name':'nam%s'%i} for i in range(99)]; import random" "thelist=list(lod); random.shuffle(thelist); where=(i for i,d in enumerate(thelist) if d.get('id')==2).next(); del thelist[where]" 10000 loops, best of 3: 72.8 usec per loop (use the next builtin rather than the .next method if you're on Python 2.6 or better, of course) -- but this code breaks down if the number of dicts that satisfy the removal condition is not exactly one. Generalizing this, we have: $ python -mtimeit -s"lod=[{'id':i, 'name':'nam%s'%i} for i in range(33)]*3; import random" "thelist=list(lod); where=[i for i,d in enumerate(thelist) if d.get('id')==2]; where.reverse()" "for i in where: del thelist[i]" 10000 loops, best of 3: 23.7 usec per loop where the shuffling can be removed because there are already three equispaced dicts to remove, as we know. And the listcomp, unchanged, fares well: $ python -mtimeit -s"lod=[{'id':i, 'name':'nam%s'%i} for i in range(33)]*3; import random" "thelist=list(lod); thelist[:] = [d for d in thelist if d.get('id') != 2]" 10000 loops, best of 3: 23.8 usec per loop totally neck and neck, with even just 3 elements of 99 to be removed. With longer lists and more repetitions, this holds even more of course: $ python -mtimeit -s"lod=[{'id':i, 'name':'nam%s'%i} for i in range(33)]*133; import random" "thelist=list(lod); where=[i for i,d in enumerate(thelist) if d.get('id')==2]; where.reverse()" "for i in where: del thelist[i]" 1000 loops, best of 3: 1.11 msec per loop $ python -mtimeit -s"lod=[{'id':i, 'name':'nam%s'%i} for i in range(33)]*133; import random" "thelist=list(lod); thelist[:] = [d for d in thelist if d.get('id') != 2]" 1000 loops, best of 3: 998 usec per loop All in all, it's obviously not worth deploying the subtlety of making and reversing the list of indices to remove, vs the perfectly simple and obvious list comprehension, to possibly gain 100 nanoseconds in one small case -- and lose 113 microseconds in a larger one;-). 
Avoiding or criticizing simple, straightforward, and perfectly performance-adequate solutions (like list comprehensions for this general class of "remove some items from a list" problems) is a particularly nasty example of Knuth's and Hoare's well-known thesis that "premature optimization is the root of all evil in programming"!-) A: Here's a way to do it with a list comprehension (assuming you name your list 'foo'): [x for x in foo if not (2 == x.get('id'))] Substitute 'john' == x.get('name') or whatever as appropriate. filter also works: foo.filter(lambda x: x.get('id')!=2, foo) And if you want a generator you can use itertools: itertools.ifilter(lambda x: x.get('id')!=2, foo) However, as of Python 3, filter will return an iterator anyway, so the list comprehension is really the best choice, as Alex suggested. A: This is not properly an anwser (as I think you already have some quite good of them), but... have you considered of having a dictionary of <id>:<name> instead of a list of dictionaries? A: # assume ls contains your list for i in range(len(ls)): if ls[i]['id'] == 2: del ls[i] break Will probably be faster than the list comprehension methods on average because it doesn't traverse the whole list if it finds the item in question early on. A: Supposed your python version is 3.6 or greater, and that you don't need the deleted item this would be less expensive... If the dictionaries in the list are unique : for i in range(len(dicts)): if dicts[i].get('id') == 2: del dicts[i] break If you want to remove all matched items : for i in range(len(dicts)): if dicts[i].get('id') == 2: del dicts[i] You can also to this to be sure getting id key won't raise keyerror regardless the python version if dicts[i].get('id', None) == 2 A: You can try the following: a = [{'id': 1, 'name': 'paul'}, {'id': 2, 'name': 'john'}] for e in range(len(a) - 1, -1, -1): if a[e]['id'] == 2: a.pop(e) If You can't pop from the beginning - pop from the end, it won't ruin the for loop. A: You could try something along the following lines: def destructively_remove_if(predicate, list): for k in xrange(len(list)): if predicate(list[k]): del list[k] break return list list = [ { 'id': 1, 'name': 'John' }, { 'id': 2, 'name': 'Karl' }, { 'id': 3, 'name': 'Desdemona' } ] print "Before:", list destructively_remove_if(lambda p: p["id"] == 2, list) print "After:", list Unless you build something akin to an index over your data, I don't think that you can do better than doing a brute-force "table scan" over the entire list. If your data is sorted by the key you are using, you might be able to employ the bisect module to find the object you are looking for somewhat faster. A: From the update on pep448 on unpacking generalisations (python 3.5 and onwards) while iterating a list of dicts with a temporary variable, let's say row, You can take the dict of the current iteration in, using **row, merge new keys in or use a boolean operation to filter out dict(s) from your list of dicts. Keep in mind **row will output a new dictionary. For example your starting list of dicts : data = [{'id': 1, 'name': 'paul'},{'id': 2, 'name': 'john'}] if we want to filter out id 2 : data = [{**row} for row in data if data['id']!=2] if you want to filter out John : data = [{**row} for row in data if data['name']!='John'] not directly related to the question but if you want to a add new key : data = [{**row, 'id_name':str(row['id'])+'_'+row['name']} for row in data] It's also a tiny bit faster than the accepted solution.
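For readers on Python 3, note that xrange, itertools.ifilter and the generator .next() method used in some of the answers above are Python 2 idioms. A minimal Python 3 version of the "find one index and delete it" approach might look like this (people is a hypothetical stand-in for the list of dicts):

people = [{'id': 1, 'name': 'paul'}, {'id': 2, 'name': 'john'}]

# index of the first dict whose 'id' is 2, or None if nothing matches
where = next((i for i, d in enumerate(people) if d.get('id') == 2), None)
if where is not None:
    del people[where]

print(people)   # [{'id': 1, 'name': 'paul'}]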
Remove dictionary from list
If I have a list of dictionaries, say: [{'id': 1, 'name': 'paul'}, {'id': 2, 'name': 'john'}] and I would like to remove the dictionary with id of 2 (or name 'john'), what is the most efficient way to go about this programmatically (that is to say, I don't know the index of the entry in the list so it can't simply be popped).
[ "thelist[:] = [d for d in thelist if d.get('id') != 2]\n\nEdit: as some doubts have been expressed in a comment about the performance of this code (some based on misunderstanding Python's performance characteristics, some on assuming beyond the given specs that there is exactly one dict in the list with a value of 2 for key 'id'), I wish to offer reassurance on this point.\nOn an old Linux box, measuring this code:\n$ python -mtimeit -s\"lod=[{'id':i, 'name':'nam%s'%i} for i in range(99)]; import random\" \"thelist=list(lod); random.shuffle(thelist); thelist[:] = [d for d in thelist if d.get('id') != 2]\"\n10000 loops, best of 3: 82.3 usec per loop\n\nof which about 57 microseconds for the random.shuffle (needed to ensure that the element to remove is not ALWAYS at the same spot;-) and 0.65 microseconds for the initial copy (whoever worries about performance impact of shallow copies of Python lists is most obviously out to lunch;-), needed to avoid altering the original list in the loop (so each leg of the loop does have something to delete;-).\nWhen it is known that there is exactly one item to remove, it's possible to locate and remove it even more expeditiously:\n$ python -mtimeit -s\"lod=[{'id':i, 'name':'nam%s'%i} for i in range(99)]; import random\" \"thelist=list(lod); random.shuffle(thelist); where=(i for i,d in enumerate(thelist) if d.get('id')==2).next(); del thelist[where]\"\n10000 loops, best of 3: 72.8 usec per loop\n\n(use the next builtin rather than the .next method if you're on Python 2.6 or better, of course) -- but this code breaks down if the number of dicts that satisfy the removal condition is not exactly one. Generalizing this, we have:\n$ python -mtimeit -s\"lod=[{'id':i, 'name':'nam%s'%i} for i in range(33)]*3; import random\" \"thelist=list(lod); where=[i for i,d in enumerate(thelist) if d.get('id')==2]; where.reverse()\" \"for i in where: del thelist[i]\"\n10000 loops, best of 3: 23.7 usec per loop\n\nwhere the shuffling can be removed because there are already three equispaced dicts to remove, as we know. And the listcomp, unchanged, fares well:\n$ python -mtimeit -s\"lod=[{'id':i, 'name':'nam%s'%i} for i in range(33)]*3; import random\" \"thelist=list(lod); thelist[:] = [d for d in thelist if d.get('id') != 2]\"\n10000 loops, best of 3: 23.8 usec per loop\n\ntotally neck and neck, with even just 3 elements of 99 to be removed. With longer lists and more repetitions, this holds even more of course:\n$ python -mtimeit -s\"lod=[{'id':i, 'name':'nam%s'%i} for i in range(33)]*133; import random\" \"thelist=list(lod); where=[i for i,d in enumerate(thelist) if d.get('id')==2]; where.reverse()\" \"for i in where: del thelist[i]\"\n1000 loops, best of 3: 1.11 msec per loop\n$ python -mtimeit -s\"lod=[{'id':i, 'name':'nam%s'%i} for i in range(33)]*133; import random\" \"thelist=list(lod); thelist[:] = [d for d in thelist if d.get('id') != 2]\"\n1000 loops, best of 3: 998 usec per loop\n\nAll in all, it's obviously not worth deploying the subtlety of making and reversing the list of indices to remove, vs the perfectly simple and obvious list comprehension, to possibly gain 100 nanoseconds in one small case -- and lose 113 microseconds in a larger one;-). 
Avoiding or criticizing simple, straightforward, and perfectly performance-adequate solutions (like list comprehensions for this general class of \"remove some items from a list\" problems) is a particularly nasty example of Knuth's and Hoare's well-known thesis that \"premature optimization is the root of all evil in programming\"!-)\n", "Here's a way to do it with a list comprehension (assuming you name your list 'foo'):\n[x for x in foo if not (2 == x.get('id'))]\n\nSubstitute 'john' == x.get('name') or whatever as appropriate.\nfilter also works:\nfoo.filter(lambda x: x.get('id')!=2, foo)\nAnd if you want a generator you can use itertools:\nitertools.ifilter(lambda x: x.get('id')!=2, foo)\nHowever, as of Python 3, filter will return an iterator anyway, so the list comprehension is really the best choice, as Alex suggested.\n", "This is not properly an anwser (as I think you already have some quite good of them), but... have you considered of having a dictionary of <id>:<name> instead of a list of dictionaries?\n", "# assume ls contains your list\nfor i in range(len(ls)):\n if ls[i]['id'] == 2:\n del ls[i]\n break\n\nWill probably be faster than the list comprehension methods on average because it doesn't traverse the whole list if it finds the item in question early on.\n", "Supposed your python version is 3.6 or greater, and that you don't need the deleted item this would be less expensive...\nIf the dictionaries in the list are unique :\nfor i in range(len(dicts)):\n if dicts[i].get('id') == 2:\n del dicts[i]\n break\n\nIf you want to remove all matched items :\nfor i in range(len(dicts)):\n if dicts[i].get('id') == 2:\n del dicts[i]\n\nYou can also to this to be sure getting id key won't raise keyerror regardless the python version\n\nif dicts[i].get('id', None) == 2\n\n", "You can try the following: \na = [{'id': 1, 'name': 'paul'},\n {'id': 2, 'name': 'john'}]\n\nfor e in range(len(a) - 1, -1, -1):\n if a[e]['id'] == 2:\n a.pop(e)\n\nIf You can't pop from the beginning - pop from the end, it won't ruin the for loop.\n", "You could try something along the following lines:\ndef destructively_remove_if(predicate, list):\n for k in xrange(len(list)):\n if predicate(list[k]):\n del list[k]\n break\n return list\n\n list = [\n { 'id': 1, 'name': 'John' },\n { 'id': 2, 'name': 'Karl' },\n { 'id': 3, 'name': 'Desdemona' } \n ]\n\n print \"Before:\", list\n destructively_remove_if(lambda p: p[\"id\"] == 2, list)\n print \"After:\", list\n\nUnless you build something akin to an index over your data, I\ndon't think that you can do better than doing a brute-force \"table\nscan\" over the entire list. 
If your data is sorted by the key you\nare using, you might be able to employ the bisect module to\nfind the object you are looking for somewhat faster.\n", "From the update on pep448 on unpacking generalisations (python 3.5 and onwards) while iterating a list of dicts with a temporary variable, let's say row, You can take the dict of the current iteration in, using **row, merge new keys in or use a boolean operation to filter out dict(s) from your list of dicts.\nKeep in mind **row will output a new dictionary.\nFor example your starting list of dicts :\ndata = [{'id': 1, 'name': 'paul'},{'id': 2, 'name': 'john'}]\n\nif we want to filter out id 2 :\ndata = [{**row} for row in data if data['id']!=2]\n\nif you want to filter out John :\ndata = [{**row} for row in data if data['name']!='John']\n\nnot directly related to the question but if you want to a add new key :\ndata = [{**row, 'id_name':str(row['id'])+'_'+row['name']} for row in data]\n\nIt's also a tiny bit faster than the accepted solution.\n" ]
[ 143, 11, 8, 8, 2, 1, 0, 0 ]
[]
[]
[ "dictionary", "list", "python" ]
stackoverflow_0001235618_dictionary_list_python.txt
Q: Pandas Dataframe index & loc I am pretty new to Pandas, and working on an assignment to convert some pandas code to pyspark. Can someone please explain to me what the below code is actually doing? There is a Pandas Dataframe named DFF and it looks like below: DB SalesOrder SOItem SLNo 4500041 10 1 PP 4501034 20 1 ZH This is the Index details of DFF DB SalesOrder SOItem SLNo 4500041 10 1 PP 4501034 20 1 ZH MultiIndex([('4500041', '10', 1), ('4501034', '20', 1)], names=['SalesOrder', 'SOItem', 'SLNo']) There is another Pandas Dataframe named SDD and it looks like below: SalesOrder SOItem SLNo DlvDate ... DB CommittQty ProdOrder CommitQty 0 4500041 10 1 2017-02-16 ... PP 6,000 6.0 1 4501034 20 1 2017-02-13 ... ZH 1,000 1.0 2 4501034 10 2 2017-02-16 ... ZH 5,00 5.0 3 4501464 20 2 2017-02-13 ... KK 9,000 8500065 9.0 [4 rows x 11 columns] The part of the code that I need help with is below. SDD.loc[DFF.index, 'RDD'] = SDD.loc[DFF.index, 'DlvDate'] Can someone please explain to me what is being done in the above line of code? I got these two dataframes in Pyspark but am unable to understand what to do with them for the above mentioned Pandas code. I printed every level to debug, however I couldn't get much understanding. A: This is the action performed by the code SDD.loc[DFF.index, 'RDD'] = SDD.loc[DFF.index, 'DlvDate']. Basically, in the above line the following operations are done: the index columns of the DFF Dataframe are matched against the index columns of the SDD Dataframe. A new column named 'RDD' is created on the SDD Dataframe, and for all index values that also appear in the DFF Dataframe the SDD.DlvDate column value is set in the RDD column; for the unmatched rows, null is set. The equivalent pyspark is left joining the dataframes where SDD is the left dataframe, then using a "case when then" to check whether the joining columns coming from the DFF dataframe are null: if so, set the new column to null, otherwise set it to the SDD dataframe's DlvDate.
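As a rough illustration of that last point, here is a hedged PySpark sketch. It assumes DFF and SDD are already Spark DataFrames with the former index levels SalesOrder, SOItem and SLNo materialised as ordinary columns; the in_dff helper column is introduced here only for the illustration and is not part of the original code:

from pyspark.sql import functions as F

keys = ['SalesOrder', 'SOItem', 'SLNo']
dff_keys = DFF.select(*keys).distinct().withColumn('in_dff', F.lit(True))

# rows whose keys appear in DFF get RDD = DlvDate, all other rows get null
SDD = (SDD.join(dff_keys, on=keys, how='left')
          .withColumn('RDD', F.when(F.col('in_dff'), F.col('DlvDate')))
          .drop('in_dff'))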
Pandas Dataframe index & loc
I am pretty new to Pandas , and working on an assignment to convert some pandas code to pyspark. Can someone pls explain me what is below code is actually doing. There is a Pandas Dataframe named DFF and it looks like below: DB SalesOrder SOItem SLNo 4500041 10 1 PP 4501034 20 1 ZH This is the Index details of DFF DB SalesOrder SOItem SLNo 4500041 10 1 PP 4501034 20 1 ZH MultiIndex([('4500041', '10', 1), ('4501034', '20', 1)], names=['SalesOrder', 'SOItem', 'SLNo']) There is another Pandas Dataframe named SDD and it looks like below: SalesOrder SOItem SLNo DlvDate ... DB CommittQty ProdOrder CommitQty 0 4500041 10 1 2017-02-16 ... PP 6,000 6.0 1 4501034 20 1 2017-02-13 ... ZH 1,000 1.0 2 4501034 10 2 2017-02-16 ... ZH 5,00 5.0 3 4501464 20 2 2017-02-13 ... KK 9,000 8500065 9.0 [4 rows x 11 columns] The part of code that I need help with is below. SDD.loc[DFF.index, 'RDD'] = SDD.loc[DFF.index, 'DlvDate'] Can someone pls explain me what is being done in the above line of code. I got these two dataframes in Pyspark but unable to understand what to do with that for the above mentioned Pandas code. I printed every level to debug however couldn't get much understanding.
[ "This is the below action that is being performed with the below code.\nSDD.loc[DFF.index, 'RDD'] = SDD.loc[DFF.index, 'DlvDate']\nBasically in this above line the following operations are being done.\nAll the index columns of DFF Dataframe and All the Index columns of SDD Dataframe are joined. A new column is created named 'RDD' on the SDD Dataframe and for all matching index values of DFF Dataframe , SDD.DlvDate column value is set in the RDD column and for unmatched values , null is set.\nThe equivalent pyspark is left joining the dataframes where SDD is the left dataframe , then with a \"case when then\" needs to check if any of the joining col of DFF dataframe is null then set the SDD dataframe DlvDate as null.\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074634149_pandas_python.txt
Q: Uwsgi Locking Up After a Few Requests with Nginx/Traefik/Flask App Running over HTTPS/TLS and Docker Problem I have an app that uses nginx to serve my Python Flask app in production that only after a few requests starts locking up and timing out (will serve the first request or two quickly then start timing out and locking up afterwards). The Nginx app is served via Docker, the uwsgi Python app is served on barebones macOS (this Python app interfaces with the Docker instance running on the OS itself), the routing occurs via Traefik. Findings This problem only occurs in production and the only difference there is I'm using Traefik's LetsEncrypt SSL certs to use HTTPS to protect the API. I've narrowed the problem down to the following two docker-compose config lines (when present the problem persists, when removed the problem is corrected but SSL no longer is enabled): - "traefik.http.routers.harveyapi.tls=true" - "traefik.http.routers.harveyapi.tls.certresolver=letsencrypt" Once locked up, I must restart the uwsgi processes to fix the problem just to have it lock right back up. Restarting nginx (Docker container) doesn't fix the problem which leads me to believe that uwsgi doesn't like the SSL config I'm using? Once I disable SSL support, I can send 2000 requests to the API and have it only take a second or two. Once enabled again, uwsgi can't even respond to 2 requests. Desired Outcome I'd like to be able to support SSL certs to enforce HTTPS connections to this API. I can currently run HTTP with this setup fine (thousands of concurrent connections) but that breaks when trying to use HTTPS. Configs I host dozens of other PHP sites with near identical setups. The only difference between those projects and this one is that they run PHP in Docker and this runs Python Uwsgi on barebones macOS. Here is the complete dump of configs for this project: traefik.toml # Traefik v2 Configuration # Documentation: https://doc.traefik.io/traefik/migration/v1-to-v2/ [entryPoints] # http should be redirected to https [entryPoints.web] address = ":80" [entryPoints.web.http.redirections.entryPoint] to = "websecure" scheme = "https" [entryPoints.websecure] address = ":443" [entryPoints.websecure.http.tls] certResolver = "letsencrypt" # Enable ACME (Let's Encrypt): automatic SSL [certificatesResolvers.letsencrypt.acme] email = "[email protected]" storage = "/etc/traefik/acme/acme.json" [certificatesResolvers.letsencrypt.acme.httpChallenge] entryPoint = "web" [log] level = "DEBUG" # Enable Docker Provider [providers.docker] endpoint = "unix:///var/run/docker.sock" exposedByDefault = false # Must pass `traefik.enable=true` label to use Traefik network = "traefik" # Enable Ping (used for healthcheck) [ping] docker-compose.yml version: "3.8" services: harvey-nginx: build: . 
restart: always networks: - traefik labels: - traefik.enable=true labels: - "traefik.http.routers.harveyapi.rule=Host(`project.com`, `www.project.com`)" - "traefik.http.routers.harveyapi.tls=true" - "traefik.http.routers.harveyapi.tls.certresolver=letsencrypt" networks: traefik: name: traefik uwsgi.ini [uwsgi] ; uwsgi setup master = true memory-report = true auto-procname = true strict = true vacuum = true die-on-term = true need-app = true ; concurrency enable-threads = true cheaper-initial = 5 ; workers to spawn on startup cheaper = 2 ; minimum number of workers to go down to workers = 10 ; highest number of workers to run ; workers harakiri = 60 ; Restart workers if they have hung on a single request max-requests = 500 ; Restart workers after this many requests max-worker-lifetime = 3600 ; Restart workers after this many seconds reload-on-rss = 1024 ; Restart workers after this much resident memory reload-mercy = 3 ; How long to wait before forcefully killing workers worker-reload-mercy = 3 ; How long to wait before forcefully killing workers ; app setup protocol = http socket = 127.0.0.1:5000 module = wsgi:APP ; daemonization ; TODO: Name processes `harvey` here daemonize = /tmp/harvey_daemon.log nginx.conf server { listen 80; error_log /var/log/nginx/error.log; access_log /var/log/nginx/access.log; location / { include uwsgi_params; # TODO: Please note this only works for macOS: https://docs.docker.com/desktop/networking/#i-want-to-connect-from-a-container-to-a-service-on-the-host # and will require adjusting for your OS. proxy_pass http://host.docker.internal:5000; } } Dockerfile FROM nginx:1.23-alpine RUN rm /etc/nginx/conf.d/default.conf COPY nginx.conf /etc/nginx/conf.d Additional Context I've added additional findings on the GitHub issue where I've documented my journey for this problem: https://github.com/Justintime50/harvey/issues/67 A: This is no longer a problem and the solution is real frustrating - it was Docker's fault. For ~6 months there was a bug in Docker that was dropping connections (ultimately leading to the timeouts mentioned above) which was finally fixed in Docker Desktop 4.14. The moment I upgraded Docker (it had just come out at the time and I thought I would try the hail Mary upgrade having already turned every dial and adjusted every config param without any luck), it finally stopped timing out and dropping connections. I was suddenly able to send through tens of thousands of concurrent requests without issue. TLDR: uWSGI, Nginx, nor my config were at fault here. Docker had a bug that has been patched. If others on macOS are facing this problem, try upgrading to at least Docker Dekstop 4.14.
Uwsgi Locking Up After a Few Requests with Nginx/Traefik/Flask App Running over HTTPS/TLS and Docker
Problem I have an app that uses nginx to serve my Python Flask app in production that only after a few requests starts locking up and timing out (will serve the first request or two quickly then start timing out and locking up afterwards). The Nginx app is served via Docker, the uwsgi Python app is served on barebones macOS (this Python app interfaces with the Docker instance running on the OS itself), the routing occurs via Traefik. Findings This problem only occurs in production and the only difference there is I'm using Traefik's LetsEncrypt SSL certs to use HTTPS to protect the API. I've narrowed the problem down to the following two docker-compose config lines (when present the problem persists, when removed the problem is corrected but SSL no longer is enabled): - "traefik.http.routers.harveyapi.tls=true" - "traefik.http.routers.harveyapi.tls.certresolver=letsencrypt" Once locked up, I must restart the uwsgi processes to fix the problem just to have it lock right back up. Restarting nginx (Docker container) doesn't fix the problem which leads me to believe that uwsgi doesn't like the SSL config I'm using? Once I disable SSL support, I can send 2000 requests to the API and have it only take a second or two. Once enabled again, uwsgi can't even respond to 2 requests. Desired Outcome I'd like to be able to support SSL certs to enforce HTTPS connections to this API. I can currently run HTTP with this setup fine (thousands of concurrent connections) but that breaks when trying to use HTTPS. Configs I host dozens of other PHP sites with near identical setups. The only difference between those projects and this one is that they run PHP in Docker and this runs Python Uwsgi on barebones macOS. Here is the complete dump of configs for this project: traefik.toml # Traefik v2 Configuration # Documentation: https://doc.traefik.io/traefik/migration/v1-to-v2/ [entryPoints] # http should be redirected to https [entryPoints.web] address = ":80" [entryPoints.web.http.redirections.entryPoint] to = "websecure" scheme = "https" [entryPoints.websecure] address = ":443" [entryPoints.websecure.http.tls] certResolver = "letsencrypt" # Enable ACME (Let's Encrypt): automatic SSL [certificatesResolvers.letsencrypt.acme] email = "[email protected]" storage = "/etc/traefik/acme/acme.json" [certificatesResolvers.letsencrypt.acme.httpChallenge] entryPoint = "web" [log] level = "DEBUG" # Enable Docker Provider [providers.docker] endpoint = "unix:///var/run/docker.sock" exposedByDefault = false # Must pass `traefik.enable=true` label to use Traefik network = "traefik" # Enable Ping (used for healthcheck) [ping] docker-compose.yml version: "3.8" services: harvey-nginx: build: . 
restart: always networks: - traefik labels: - traefik.enable=true labels: - "traefik.http.routers.harveyapi.rule=Host(`project.com`, `www.project.com`)" - "traefik.http.routers.harveyapi.tls=true" - "traefik.http.routers.harveyapi.tls.certresolver=letsencrypt" networks: traefik: name: traefik uwsgi.ini [uwsgi] ; uwsgi setup master = true memory-report = true auto-procname = true strict = true vacuum = true die-on-term = true need-app = true ; concurrency enable-threads = true cheaper-initial = 5 ; workers to spawn on startup cheaper = 2 ; minimum number of workers to go down to workers = 10 ; highest number of workers to run ; workers harakiri = 60 ; Restart workers if they have hung on a single request max-requests = 500 ; Restart workers after this many requests max-worker-lifetime = 3600 ; Restart workers after this many seconds reload-on-rss = 1024 ; Restart workers after this much resident memory reload-mercy = 3 ; How long to wait before forcefully killing workers worker-reload-mercy = 3 ; How long to wait before forcefully killing workers ; app setup protocol = http socket = 127.0.0.1:5000 module = wsgi:APP ; daemonization ; TODO: Name processes `harvey` here daemonize = /tmp/harvey_daemon.log nginx.conf server { listen 80; error_log /var/log/nginx/error.log; access_log /var/log/nginx/access.log; location / { include uwsgi_params; # TODO: Please note this only works for macOS: https://docs.docker.com/desktop/networking/#i-want-to-connect-from-a-container-to-a-service-on-the-host # and will require adjusting for your OS. proxy_pass http://host.docker.internal:5000; } } Dockerfile FROM nginx:1.23-alpine RUN rm /etc/nginx/conf.d/default.conf COPY nginx.conf /etc/nginx/conf.d Additional Context I've added additional findings on the GitHub issue where I've documented my journey for this problem: https://github.com/Justintime50/harvey/issues/67
[ "This is no longer a problem and the solution is real frustrating - it was Docker's fault. For ~6 months there was a bug in Docker that was dropping connections (ultimately leading to the timeouts mentioned above) which was finally fixed in Docker Desktop 4.14.\nThe moment I upgraded Docker (it had just come out at the time and I thought I would try the hail Mary upgrade having already turned every dial and adjusted every config param without any luck), it finally stopped timing out and dropping connections. I was suddenly able to send through tens of thousands of concurrent requests without issue.\nTLDR: uWSGI, Nginx, nor my config were at fault here. Docker had a bug that has been patched. If others on macOS are facing this problem, try upgrading to at least Docker Dekstop 4.14.\n" ]
[ 4 ]
[]
[]
[ "docker", "nginx", "python", "traffic", "uwsgi" ]
stackoverflow_0073596677_docker_nginx_python_traffic_uwsgi.txt
Q: 'float' object is not subscriptable error while trying to add data into a csv file I have a csv file containing reviews. I want to calculate each review's sentiment polarity, and then output a new column that says if the review's sentiment is positive or negative. The whole thing looks like this: filename = r'./DisneylandReviews.csv' df = pd.read_csv(filename, encoding='latin-1') df.columns=['ID','rating','Date', 'Location','Review','Branch'] def Rating_To_Sent (row): if row['rating'] == 3 : return 'neutral' if row['rating'] < 3 : return 'negative' else: return 'positive' df['RatingSentiment'] = df.apply (lambda row: Rating_To_Sent(row), axis=1) def sentiment_calc(text): try: return TextBlob(text).sentiment.polarity except: return None df['sentiment']=df['Review'].apply(sentiment_calc) def Review_To_Sent(row): if row ['sentiment'] < -0.05 : return 'negative' if row['sentiment'] > 0.05 : return 'positive' else: return 'neutral' df['Review_Sentiment']=df['sentiment'].apply(Review_To_Sent) x=df.loc[0:20,'sentiment'] print(x) and this is the line that's producing the error: def Review_To_Sent(row): if row ['sentiment'] < -0.05 : A: You are not passing the whole row to the function Review_To_Sent(row); you are passing only the values of the single column called sentiment, because it is applied to the df['sentiment'] Series. So: Method 1: Change the calling methodology by applying on the whole dataframe, e.g. df.apply(Review_To_Sent, axis=1) Method 2: Keep applying on the Series, but remove the row['sentiment'] indexing from the check condition and compare the value directly.
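Spelled out in code, the two fixes might look like this. This is a sketch using the names from the question; sent_label is a hypothetical helper name introduced here for the scalar version:

# Method 1: apply over the whole DataFrame so each call receives a row
df['Review_Sentiment'] = df.apply(Review_To_Sent, axis=1)

# Method 2: keep applying over the 'sentiment' Series and treat the
# argument as a plain float instead of indexing it like a row
def sent_label(value):
    if value is None:           # sentiment_calc returns None on failure
        return 'neutral'
    if value < -0.05:
        return 'negative'
    if value > 0.05:
        return 'positive'
    return 'neutral'

df['Review_Sentiment'] = df['sentiment'].apply(sent_label)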
'float' object is not subscriptable error while trying to add data into a csv file
I have a csv file containing reviews. I want to calculate each review's sentiment polarity, and then output a new column that says if the review's sentiment is positive or negative The Whole thing looks like this filename = r'./DisneylandReviews.csv' df = pd.read_csv(filename, encoding='latin-1') df.columns=['ID','rating','Date', 'Location','Review','Branch'] def Rating_To_Sent (row): if row['rating'] == 3 : return 'neutral' if row['rating'] < 3 : return 'negative' else: return 'positive' df['RatingSentiment'] = df.apply (lambda row: Rating_To_Sent(row), axis=1) def sentiment_calc(text): try: return TextBlob(text).sentiment.polarity except: return None df['sentiment']=df['Review'].apply(sentiment_calc) def Review_To_Sent(row): if row ['sentiment'] < -0.05 : return 'negative' if row['sentiment'] > 0.05 : return 'positive' else: return 'neutral' df['Review_Sentiment']=df['sentiment'].apply(Review_To_Sent) x=df.loc[0:20,'sentiment'] print(x) and this is the line thats producing an error def Review_To_Sent(row): if row ['sentiment'] < -0.05 :
[ "To the function, review_to_sent(row) you are not passing the whole row, you are passing only the row values of a particular column called sentiment so\nMethod 1:\nChange calling methodology by doing apply on dataframe like\ndf.apply(Review_to_Sent(row))\n\nMethod 2:\nRemove row[β€˜sentiment’] from the check condition.\n" ]
[ 0 ]
[]
[]
[ "csv", "pandas", "python" ]
stackoverflow_0074647406_csv_pandas_python.txt
Q: if else statement to a comprehension list with enumeration? Using if-else statements in list comprehensions like this is great: a = [1, 0, 1, 0, 1, 0, 1, 0, 1] b = [i-1 if i > 0 else i+1 for i in a] b [0, 1, 0, 1, 0, 1, 0, 1, 0] Also, using enumerate makes it possible to use the index like: c = [j for j, item in enumerate(b) if item > 0 ] c [1, 3, 5, 7] But how do I add an else clause to a list comprehension with enumeration? i.e. something like c = [j for j, item in enumerate(b) if item > 0 ELSE ] A: Just rearrange, such as c = [j if item>0 else 99 for j, item in enumerate(b)] produces [99, 1, 99, 3, 99, 5, 99, 7, 99]
if else statement to a comprehension list with enumeration?
Using if-else statements in comprehension lists like this is great: a = [1, 0, 1, 0, 1, 0, 1, 0, 1] b = [i-1 if i > 0 else i+1 for i in a] b [0, 1, 0, 1, 0, 1, 0, 1, 0] also using the enumerations makes possible to use the iterator like: c = [j for j, item in enumerate(b) if item > 0 ] c [1, 3, 5, 7] but how to add an else statement to a comprehension list with enumeration? i.e. something like c = [j for j, item in enumerate(b) if item > 0 ELSE ]
[ "Just rearrange, such as\nc = [j if item>0 else 99 for j, item in enumerate(b)]\n\nproduces\n[99, 1, 99, 3, 99, 5, 99, 7, 99]\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074647365_python.txt
Q: passing multiple flags in argparse? I am trying to pass multiple different flags using argparse. I know this kind of code would work for a single flag: if the -percentage flag is passed, then do something. import argparse parser = argparse.ArgumentParser() parser.add_argument('-percentage', action='store_true') But I'm trying to pass multiple flags, for example this code: import argparse parser = argparse.ArgumentParser() parser.add_argument('-serviceA', action='store_true') parser.add_argument('-serviceB', action='store_true') parser.add_argument('-serviceC', action='store_true') parser.add_argument('-serviceD', action='store_true') parser.add_argument('-activate', action='store_true') and then pass the flags -serviceB -activate. My intention is that the activate flag is basically a yes or no, and the service flag is the actual service, so that the service gets activated only when there is an activate flag next to it. How can I do this? I hope I explained the situation in detail. Any help or tips are appreciated. A: I was debugging it wrong. I spent the last 5 hours trying to figure this out. Thanks to everyone who commented! For anyone experiencing the same issue: when you are debugging using a launch.json file, make sure your args are separate list items, like this: "args": [ "--serviceA", "--activate" ] I had set up args like this instead: "--serviceA --activate"
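As a small runnable illustration of how two store_true flags combine once they are parsed correctly, here is a standalone sketch, separate from the launch.json detail above; the list passed to parse_args stands in for the command line or debugger arguments:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--serviceA', action='store_true')
parser.add_argument('--activate', action='store_true')

# simulate "--serviceA --activate" being passed as two separate arguments
args = parser.parse_args(['--serviceA', '--activate'])

if args.serviceA and args.activate:
    print('activating service A')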
passing multiple flags in argparse?
I am trying to pass multiple different flags using argparse. I know this kind of code would work for a single flag. if the -percentage flag is passed then do something import argparse parser = argparse.ArgumentParser() parser.add_argument('-percentage', action='store_true') but I'm trying to pass multiple flags, for example this code import argparse parser = argparse.ArgumentParser() parser.add_argument('-serviceA', action='store_true') parser.add_argument('-serviceB', action='store_true') parser.add_argument('-serviceC', action='store_true') parser.add_argument('-serviceD', action='store_true') parser.add_argument('-activate', action='store_true') and then pass flags -serivceB -activate, my intention is that the activate flag is basically a yes or no. and the service flag is the actual service. so that the service would get activated only when there is a activate flag next to it. how can I do this? I hope i explained the situation in detail. please any help or tips are appreciated.
[ "I was debugging it wrong. I spent last 5 hours trying to figure this out. Thanks to everyone who mentioned the comment!\nfor anyone experiencing the same issue, when you are debugging using a launch.json file. make sure your args are like this \"args\": [\n\"--serviceA\", \"--activate\"\n],\nI had set up args like this \"--serviceA --activate\"\n" ]
[ 0 ]
[]
[]
[ "argparse", "arguments", "command_line", "command_line_arguments", "python" ]
stackoverflow_0074647313_argparse_arguments_command_line_command_line_arguments_python.txt
Q: How to do multi-line string search in a file and get start line, end line info in python? I want to search for multi-line string in a file in python. If there is a match, then I want to get the start line number, end line number, start column and end column number of the match. For example: in the below file, I want to match the below multi-line string: pattern = """b'0100000001685c7c35aabe690cc99f947a8172ad075d4401448a212b9f26607d6ec5530915010000006a4730' b'440220337117278ee2fc7ae222ec1547b3a40fa39a05f91c1e19db60060541c4b3d6e4022020188e1d5d843c'""" The result of the match should be as: start_line: 2, end_line = 3, start_column: 23 and end_column: 114 The start column is the index in that line where the first character is matched of the pattern and end column is the last index of the line where the last character is matched of the pattern. The end column is shown below: I tried with the re package of python but it returns None as it could not find any match. import re pattern = """b'0100000001685c7c35aabe690cc99f947a8172ad075d4401448a212b9f26607d6ec5530915010000006a4730' b'440220337117278ee2fc7ae222ec1547b3a40fa39a05f91c1e19db60060541c4b3d6e4022020188e1d5d843c'""" with open("test.py") as f: content = f.read() print(re.search(pattern, content)) I can find the metadata of the location of the match of a single line strings in a file using with open("test.py") as f: data = f.read() for n, line in enumerate(data): match_index = line.find(pattern) if match_index != -1: print("Start Line:", n + 1) print("End Line", n + 1) print("Start Column:", match_index) print("End Column:", match_index + len(pattern) + 1) break But, I am struggling to make it work for multi-line strings. How can I match multi-line strings in a file and get the metadata of the location of the match in python? A: You should use the re.MULTILINE flag to search multiple lines import re pattern = r"(c\nd)" string = """ a b c d e f """ match = re.search(pattern, string, flags=re.MULTILINE) print(match) To get the start line, you could count the newline characters as follows start, stop = match.span() start_line = string[:start].count('\n') You could do the same for the end_line, or if you know how many lines is your pattern, you can just add this info to avoid counting twice. To also get the start column, you can check the line itself, or a pure regex solution could also look line: pattern = "(?:.*\n)*(\s*(c\s*\n\s*d)\s*)" match = re.match(pattern, string, flags=re.MULTILINE) start_column = match.start(2) - match.start(1) start_line = string[:match.start(1)].count('\n') print(start_line, start_column) However, I think difflib could be more useful here.
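As a self-contained sketch along those lines, turning a match span into 1-based line numbers and 0-based columns; the sample text and pattern are stand-ins, and re.escape is used so the pattern is matched literally rather than as a regular expression:

import re

text = "line one\nstart of match\nend of match here\nlast line\n"
pattern = "start of match\nend of match"

m = re.search(re.escape(pattern), text)
if m:
    start, end = m.span()
    start_line = text.count('\n', 0, start) + 1            # 1-based line numbers
    end_line = text.count('\n', 0, end) + 1
    start_col = start - (text.rfind('\n', 0, start) + 1)   # 0-based column of first char
    end_col = end - (text.rfind('\n', 0, end) + 1)         # exclusive end column
    print(start_line, end_line, start_col, end_col)        # 2 3 0 12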
How to do multi-line string search in a file and get start line, end line info in python?
I want to search for multi-line string in a file in python. If there is a match, then I want to get the start line number, end line number, start column and end column number of the match. For example: in the below file, I want to match the below multi-line string: pattern = """b'0100000001685c7c35aabe690cc99f947a8172ad075d4401448a212b9f26607d6ec5530915010000006a4730' b'440220337117278ee2fc7ae222ec1547b3a40fa39a05f91c1e19db60060541c4b3d6e4022020188e1d5d843c'""" The result of the match should be as: start_line: 2, end_line = 3, start_column: 23 and end_column: 114 The start column is the index in that line where the first character is matched of the pattern and end column is the last index of the line where the last character is matched of the pattern. The end column is shown below: I tried with the re package of python but it returns None as it could not find any match. import re pattern = """b'0100000001685c7c35aabe690cc99f947a8172ad075d4401448a212b9f26607d6ec5530915010000006a4730' b'440220337117278ee2fc7ae222ec1547b3a40fa39a05f91c1e19db60060541c4b3d6e4022020188e1d5d843c'""" with open("test.py") as f: content = f.read() print(re.search(pattern, content)) I can find the metadata of the location of the match of a single line strings in a file using with open("test.py") as f: data = f.read() for n, line in enumerate(data): match_index = line.find(pattern) if match_index != -1: print("Start Line:", n + 1) print("End Line", n + 1) print("Start Column:", match_index) print("End Column:", match_index + len(pattern) + 1) break But, I am struggling to make it work for multi-line strings. How can I match multi-line strings in a file and get the metadata of the location of the match in python?
[ "You should use the re.MULTILINE flag to search multiple lines\nimport re\npattern = r\"(c\\nd)\"\nstring = \"\"\"\na\nb\nc\nd\ne\nf\n\"\"\"\n\nmatch = re.search(pattern, string, flags=re.MULTILINE)\nprint(match)\n\nTo get the start line, you could count the newline characters as follows\nstart, stop = match.span()\nstart_line = string[:start].count('\\n')\n\nYou could do the same for the end_line, or if you know how many lines is your pattern, you can just add this info to avoid counting twice.\nTo also get the start column, you can check the line itself, or a pure regex solution could also look line:\npattern = \"(?:.*\\n)*(\\s*(c\\s*\\n\\s*d)\\s*)\"\nmatch = re.match(pattern, string, flags=re.MULTILINE)\nstart_column = match.start(2) - match.start(1)\nstart_line = string[:match.start(1)].count('\\n')\nprint(start_line, start_column)\n\nHowever, I think difflib could be more useful here.\n" ]
[ 0 ]
[]
[]
[ "match", "python", "python_re" ]
stackoverflow_0074636125_match_python_python_re.txt
Q: How to combine two JSON objects using jq I have two files: kube-apiserver.json { "apiVersion": "v1", "kind": "Pod", "metadata": { [...] }, "spec": { "containers": [ { "command": [ "kube-apiserver", "--advertise-address=192.168.49.2", "--allow-privileged=true", "--authorization-mode=Node,RBAC", "--client-ca-file=/var/lib/minikube/certs/ca.crt", "--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota", "--enable-bootstrap-token-auth=true", "--etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt", "--etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt", "--etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key", "--etcd-servers=https://127.0.0.1:2379", "--kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt", "--kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key", "--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname", "--proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt", "--proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key", "--requestheader-allowed-names=front-proxy-client", "--requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt", "--requestheader-extra-headers-prefix=X-Remote-Extra-", "--requestheader-group-headers=X-Remote-Group", "--requestheader-username-headers=X-Remote-User", "--secure-port=8443", "--service-account-issuer=https://kubernetes.default.svc.cluster.local", "--service-account-key-file=/var/lib/minikube/certs/sa.pub", "--service-account-signing-key-file=/var/lib/minikube/certs/sa.key", "--service-cluster-ip-range=10.96.0.0/12", "--tls-cert-file=/var/lib/minikube/certs/apiserver.crt", "--tls-private-key-file=/var/lib/minikube/certs/apiserver.key" ], [...] "volumeMounts": [ { "mountPath": "/etc/ssl/certs", "name": "ca-certs", "readOnly": true }, { "mountPath": "/etc/ca-certificates", "name": "etc-ca-certificates", "readOnly": true }, { "mountPath": "/var/lib/minikube/certs", "name": "k8s-certs", "readOnly": true }, { "mountPath": "/usr/local/share/ca-certificates", "name": "usr-local-share-ca-certificates", "readOnly": true }, { "mountPath": "/usr/share/ca-certificates", "name": "usr-share-ca-certificates", "readOnly": true } ] } ], [...] "volumes": [ { "hostPath": { "path": "/etc/ssl/certs", "type": "DirectoryOrCreate" }, "name": "ca-certs" }, { "hostPath": { "path": "/etc/ca-certificates", "type": "DirectoryOrCreate" }, "name": "etc-ca-certificates" }, { "hostPath": { "path": "/var/lib/minikube/certs", "type": "DirectoryOrCreate" }, "name": "k8s-certs" }, { "hostPath": { "path": "/usr/local/share/ca-certificates", "type": "DirectoryOrCreate" }, "name": "usr-local-share-ca-certificates" }, { "hostPath": { "path": "/usr/share/ca-certificates", "type": "DirectoryOrCreate" }, "name": "usr-share-ca-certificates" } ] }, "status": { [...] 
} } and patch.json { "apiVersion": "v1", "kind": "Pod", "metadata": { }, "spec": { "containers": [ { "command": [ "--audit-policy-file=/etc/kubernetes/audit-policy.yaml", "--audit-log-path=/var/log/kubernetes/audit/audit.log" ], "volumeMounts": [ { "mountPath": "/etc/kubernetes/audit-policy.yaml", "name": "audit", "readOnly": true }, { "mountPath": "/var/log/kubernetes/audit/", "name": "audit-log", "readOnly": true } ] } ], "volumes": [ { "hostPath": { "path": "/etc/kubernetes/audit-policy.yaml", "type": "FileOrCreate" }, "name": "audit" }, { "hostPath": { "path": "/var/log/kubernetes/audit/", "type": "DirectoryOrCreate" }, "name": "audit-log" } ] }, "status": { } } When i try to do jq -s '.[0] * .[1]' kube-apiserver.json patch.json > patched-apiserver.json items that are in patch.json overrides items from kube-apiserver.json so it looks like this: { "apiVersion": "v1", "kind": "Pod", "metadata": { [...] "spec": { "containers": [ { "command": [ "--audit-policy-file=/etc/kubernetes/audit-policy.yaml", "--audit-log-path=/var/log/kubernetes/audit/audit.log" ], "volumeMounts": [ { "mountPath": "/etc/kubernetes/audit-policy.yaml", "name": "audit", "readOnly": true }, { "mountPath": "/var/log/kubernetes/audit/", "name": "audit-log", "readOnly": true } ] } ], [..]. "volumes": [ { "hostPath": { "path": "/etc/kubernetes/audit-policy.yaml", "type": "FileOrCreate" }, "name": "audit" }, { "hostPath": { "path": "/var/log/kubernetes/audit/", "type": "DirectoryOrCreate" }, "name": "audit-log" } ] }, "status": { [...] } } and i would like my file to look like this: { "apiVersion": "v1", "kind": "Pod", "metadata": { [...] }, "spec": { "containers": [ { "command": [ "kube-apiserver", "--advertise-address=192.168.49.2", "--allow-privileged=true", "--authorization-mode=Node,RBAC", "--client-ca-file=/var/lib/minikube/certs/ca.crt", "--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota", "--enable-bootstrap-token-auth=true", "--etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt", "--etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt", "--etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key", "--etcd-servers=https://127.0.0.1:2379", "--kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt", "--kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key", "--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname", "--proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt", "--proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key", "--requestheader-allowed-names=front-proxy-client", "--requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt", "--requestheader-extra-headers-prefix=X-Remote-Extra-", "--requestheader-group-headers=X-Remote-Group", "--requestheader-username-headers=X-Remote-User", "--secure-port=8443", "--service-account-issuer=https://kubernetes.default.svc.cluster.local", "--service-account-key-file=/var/lib/minikube/certs/sa.pub", "--service-account-signing-key-file=/var/lib/minikube/certs/sa.key", "--service-cluster-ip-range=10.96.0.0/12", "--tls-cert-file=/var/lib/minikube/certs/apiserver.crt", "--tls-private-key-file=/var/lib/minikube/certs/apiserver.key", "--audit-policy-file=/etc/kubernetes/audit-policy.yaml", "--audit-log-path=/var/log/kubernetes/audit/audit.log" ], [...] 
"volumeMounts": [ { "mountPath": "/etc/ssl/certs", "name": "ca-certs", "readOnly": true }, { "mountPath": "/etc/ca-certificates", "name": "etc-ca-certificates", "readOnly": true }, { "mountPath": "/var/lib/minikube/certs", "name": "k8s-certs", "readOnly": true }, { "mountPath": "/usr/local/share/ca-certificates", "name": "usr-local-share-ca-certificates", "readOnly": true }, { "mountPath": "/usr/share/ca-certificates", "name": "usr-share-ca-certificates", "readOnly": true }, { "mountPath": "/etc/kubernetes/audit-policy.yaml", "name": "audit", "readOnly": true }, { "mountPath": "/var/log/kubernetes/audit/", "name": "audit-log", "readOnly": true } ] } ], [...] "volumes": [ { "hostPath": { "path": "/etc/ssl/certs", "type": "DirectoryOrCreate" }, "name": "ca-certs" }, { "hostPath": { "path": "/etc/ca-certificates", "type": "DirectoryOrCreate" }, "name": "etc-ca-certificates" }, { "hostPath": { "path": "/var/lib/minikube/certs", "type": "DirectoryOrCreate" }, "name": "k8s-certs" }, { "hostPath": { "path": "/usr/local/share/ca-certificates", "type": "DirectoryOrCreate" }, "name": "usr-local-share-ca-certificates" }, { "hostPath": { "path": "/usr/share/ca-certificates", "type": "DirectoryOrCreate" }, "name": "usr-share-ca-certificates" }, { "hostPath": { "path": "/etc/kubernetes/audit-policy.yaml", "type": "FileOrCreate" }, "name": "audit" }, { "hostPath": { "path": "/var/log/kubernetes/audit/", "type": "DirectoryOrCreate" }, "name": "audit-log" } ] }, "status": { [...] } } Does anyone know how to solve it with jq/python/bash/whatever? A: jq --slurpfile patch patch.json ' (.spec.containers |= map(.command |= (. + $patch[].spec.containers[0].command | unique) | .volumeMounts |= (. + $patch[].spec.containers[0].volumeMounts | unique))) | (.spec.volumes |= (. + $patch[].spec.volumes | unique)) ' kube-apiserver.json I have added unique to ensure that commands, volumeMounts and volumes only appear once. This has the side effect of sorting the arrays. You can remove unique if you do not want it. There is a problem in your question: spec.containers is an array. In your example, this array contains only one element and my code adds the first element from the patch to each container. Or would you like the patch to be merged based on position? In that case this solution does not work.
How to combine two JSON objects using jq
I have two files: kube-apiserver.json { "apiVersion": "v1", "kind": "Pod", "metadata": { [...] }, "spec": { "containers": [ { "command": [ "kube-apiserver", "--advertise-address=192.168.49.2", "--allow-privileged=true", "--authorization-mode=Node,RBAC", "--client-ca-file=/var/lib/minikube/certs/ca.crt", "--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota", "--enable-bootstrap-token-auth=true", "--etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt", "--etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt", "--etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key", "--etcd-servers=https://127.0.0.1:2379", "--kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt", "--kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key", "--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname", "--proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt", "--proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key", "--requestheader-allowed-names=front-proxy-client", "--requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt", "--requestheader-extra-headers-prefix=X-Remote-Extra-", "--requestheader-group-headers=X-Remote-Group", "--requestheader-username-headers=X-Remote-User", "--secure-port=8443", "--service-account-issuer=https://kubernetes.default.svc.cluster.local", "--service-account-key-file=/var/lib/minikube/certs/sa.pub", "--service-account-signing-key-file=/var/lib/minikube/certs/sa.key", "--service-cluster-ip-range=10.96.0.0/12", "--tls-cert-file=/var/lib/minikube/certs/apiserver.crt", "--tls-private-key-file=/var/lib/minikube/certs/apiserver.key" ], [...] "volumeMounts": [ { "mountPath": "/etc/ssl/certs", "name": "ca-certs", "readOnly": true }, { "mountPath": "/etc/ca-certificates", "name": "etc-ca-certificates", "readOnly": true }, { "mountPath": "/var/lib/minikube/certs", "name": "k8s-certs", "readOnly": true }, { "mountPath": "/usr/local/share/ca-certificates", "name": "usr-local-share-ca-certificates", "readOnly": true }, { "mountPath": "/usr/share/ca-certificates", "name": "usr-share-ca-certificates", "readOnly": true } ] } ], [...] "volumes": [ { "hostPath": { "path": "/etc/ssl/certs", "type": "DirectoryOrCreate" }, "name": "ca-certs" }, { "hostPath": { "path": "/etc/ca-certificates", "type": "DirectoryOrCreate" }, "name": "etc-ca-certificates" }, { "hostPath": { "path": "/var/lib/minikube/certs", "type": "DirectoryOrCreate" }, "name": "k8s-certs" }, { "hostPath": { "path": "/usr/local/share/ca-certificates", "type": "DirectoryOrCreate" }, "name": "usr-local-share-ca-certificates" }, { "hostPath": { "path": "/usr/share/ca-certificates", "type": "DirectoryOrCreate" }, "name": "usr-share-ca-certificates" } ] }, "status": { [...] 
} } and patch.json { "apiVersion": "v1", "kind": "Pod", "metadata": { }, "spec": { "containers": [ { "command": [ "--audit-policy-file=/etc/kubernetes/audit-policy.yaml", "--audit-log-path=/var/log/kubernetes/audit/audit.log" ], "volumeMounts": [ { "mountPath": "/etc/kubernetes/audit-policy.yaml", "name": "audit", "readOnly": true }, { "mountPath": "/var/log/kubernetes/audit/", "name": "audit-log", "readOnly": true } ] } ], "volumes": [ { "hostPath": { "path": "/etc/kubernetes/audit-policy.yaml", "type": "FileOrCreate" }, "name": "audit" }, { "hostPath": { "path": "/var/log/kubernetes/audit/", "type": "DirectoryOrCreate" }, "name": "audit-log" } ] }, "status": { } } When i try to do jq -s '.[0] * .[1]' kube-apiserver.json patch.json > patched-apiserver.json items that are in patch.json overrides items from kube-apiserver.json so it looks like this: { "apiVersion": "v1", "kind": "Pod", "metadata": { [...] "spec": { "containers": [ { "command": [ "--audit-policy-file=/etc/kubernetes/audit-policy.yaml", "--audit-log-path=/var/log/kubernetes/audit/audit.log" ], "volumeMounts": [ { "mountPath": "/etc/kubernetes/audit-policy.yaml", "name": "audit", "readOnly": true }, { "mountPath": "/var/log/kubernetes/audit/", "name": "audit-log", "readOnly": true } ] } ], [..]. "volumes": [ { "hostPath": { "path": "/etc/kubernetes/audit-policy.yaml", "type": "FileOrCreate" }, "name": "audit" }, { "hostPath": { "path": "/var/log/kubernetes/audit/", "type": "DirectoryOrCreate" }, "name": "audit-log" } ] }, "status": { [...] } } and i would like my file to look like this: { "apiVersion": "v1", "kind": "Pod", "metadata": { [...] }, "spec": { "containers": [ { "command": [ "kube-apiserver", "--advertise-address=192.168.49.2", "--allow-privileged=true", "--authorization-mode=Node,RBAC", "--client-ca-file=/var/lib/minikube/certs/ca.crt", "--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota", "--enable-bootstrap-token-auth=true", "--etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt", "--etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt", "--etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key", "--etcd-servers=https://127.0.0.1:2379", "--kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt", "--kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key", "--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname", "--proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt", "--proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key", "--requestheader-allowed-names=front-proxy-client", "--requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt", "--requestheader-extra-headers-prefix=X-Remote-Extra-", "--requestheader-group-headers=X-Remote-Group", "--requestheader-username-headers=X-Remote-User", "--secure-port=8443", "--service-account-issuer=https://kubernetes.default.svc.cluster.local", "--service-account-key-file=/var/lib/minikube/certs/sa.pub", "--service-account-signing-key-file=/var/lib/minikube/certs/sa.key", "--service-cluster-ip-range=10.96.0.0/12", "--tls-cert-file=/var/lib/minikube/certs/apiserver.crt", "--tls-private-key-file=/var/lib/minikube/certs/apiserver.key", "--audit-policy-file=/etc/kubernetes/audit-policy.yaml", "--audit-log-path=/var/log/kubernetes/audit/audit.log" ], [...] 
"volumeMounts": [ { "mountPath": "/etc/ssl/certs", "name": "ca-certs", "readOnly": true }, { "mountPath": "/etc/ca-certificates", "name": "etc-ca-certificates", "readOnly": true }, { "mountPath": "/var/lib/minikube/certs", "name": "k8s-certs", "readOnly": true }, { "mountPath": "/usr/local/share/ca-certificates", "name": "usr-local-share-ca-certificates", "readOnly": true }, { "mountPath": "/usr/share/ca-certificates", "name": "usr-share-ca-certificates", "readOnly": true }, { "mountPath": "/etc/kubernetes/audit-policy.yaml", "name": "audit", "readOnly": true }, { "mountPath": "/var/log/kubernetes/audit/", "name": "audit-log", "readOnly": true } ] } ], [...] "volumes": [ { "hostPath": { "path": "/etc/ssl/certs", "type": "DirectoryOrCreate" }, "name": "ca-certs" }, { "hostPath": { "path": "/etc/ca-certificates", "type": "DirectoryOrCreate" }, "name": "etc-ca-certificates" }, { "hostPath": { "path": "/var/lib/minikube/certs", "type": "DirectoryOrCreate" }, "name": "k8s-certs" }, { "hostPath": { "path": "/usr/local/share/ca-certificates", "type": "DirectoryOrCreate" }, "name": "usr-local-share-ca-certificates" }, { "hostPath": { "path": "/usr/share/ca-certificates", "type": "DirectoryOrCreate" }, "name": "usr-share-ca-certificates" }, { "hostPath": { "path": "/etc/kubernetes/audit-policy.yaml", "type": "FileOrCreate" }, "name": "audit" }, { "hostPath": { "path": "/var/log/kubernetes/audit/", "type": "DirectoryOrCreate" }, "name": "audit-log" } ] }, "status": { [...] } } Does anyone know how to solve it with jq/python/bash/whatever?
[ "jq --slurpfile patch patch.json '\n (.spec.containers |= map(.command |= (. + $patch[].spec.containers[0].command | unique) |\n .volumeMounts |= (. + $patch[].spec.containers[0].volumeMounts | unique))) |\n (.spec.volumes |= (. + $patch[].spec.volumes | unique))\n' kube-apiserver.json\n\nI have added unique to ensure that commands, volumeMounts and volumes only appear once. This has the side effect of sorting the arrays.\nYou can remove unique if you do not want it.\nThere is a problem in your question: spec.containers is an array. In your example, this array contains only one element and my code adds the first element from the patch to each container.\nOr would you like the patch to be merged based on position? In that case this solution does not work.\n" ]
[ 0 ]
[]
[]
[ "jq", "json", "python", "text_processing" ]
stackoverflow_0074646311_jq_json_python_text_processing.txt
Q: How to override a mock for an individual test within a class that already has a mock I have a test class that has a mock decorator, and several tests. Each test receives the mock, because mock is defined on the class level. Great. Here's what it looks like: @mock.patch("foo", bar) class TestMyThing(TestCase): def test_A(self): assert something def test_B(self): assert something def test_C(self): assert something def test_D(self): assert something I now want test_D to get a have a different value mocked for foo. I first try: @mock.patch("foo", bar) class TestMyThing(TestCase): def test_A(self): assert something def test_B(self): assert something def test_C(self): assert something @mock.patch("foo", baz) def test_D(self): assert something This doesn't work. Currently to get unittest to take the mock.patch that decorates test_D, I have to remove the mock.patch that decorates the class. This means creating lots of DRY and doing the following: class TestMyThing(TestCase): @mock.patch("foo", bar) def test_A(self): assert something @mock.patch("foo", bar) def test_B(self): assert something @mock.patch("foo", bar) def test_C(self): assert something @mock.patch("foo", baz) def test_D(self): assert something This is non ideal due to DRY boilerplate, which makes it error prone and violates open-closed principle. Is there a better way to achieve the same logic? A: Yes! You can leverage the setUp/tearDown methods of the unittest.TestCase and the fact that unittest.mock.patch in its "pure" form (i.e. not as context manager or decorator) returns a "patcher" object that has start/stop methods to control when exactly it should do its magic. You can call on the patcher to start inside setUp and to stop inside tearDown and if you keep a reference to it in an attribute of your test case, you can easily stop it manually in selected test methods. Here is a full working example: from unittest import TestCase from unittest.mock import patch class Foo: @staticmethod def bar() -> int: return 1 class TestMyThing(TestCase): def setUp(self) -> None: self.foo_bar_patcher = patch.object(Foo, "bar", return_value=42) self.mock_foo_bar = self.foo_bar_patcher.start() super().setUp() def tearDown(self) -> None: self.foo_bar_patcher.stop() super().tearDown() def test_a(self): self.assertEqual(42, Foo.bar()) def test_b(self): self.assertEqual(42, Foo.bar()) def test_c(self): self.assertEqual(42, Foo.bar()) def test_d(self): self.foo_bar_patcher.stop() self.assertEqual(1, Foo.bar()) This patching behavior is the same, regardless of the different variations like patch.object (which I used here), patch.multiple etc. Note that for this example it is not necessary to keep a reference to the actual MagicMock instance generated by the patcher in an attribute, like I did with mock_foo_bar. I just usually do that because I often want to check something against the mock instance in my test methods. It is worth mentioning that you can also use setUpClass/tearDownClass for this, but then you need to be careful with re-starting the patch, if you stop it because those methods are (as the name implies) only called once for each test case, whereas setUp/tearDown are called once before/after each test method. PS: The default implementations of setUp/tearDown on TestCase do nothing, but it is still good practice IMO to make a habit of calling the superclass' method, unless you deliberately want to omit that call.
How to override a mock for an individual test within a class that already has a mock
I have a test class that has a mock decorator, and several tests. Each test receives the mock, because mock is defined on the class level. Great. Here's what it looks like: @mock.patch("foo", bar) class TestMyThing(TestCase): def test_A(self): assert something def test_B(self): assert something def test_C(self): assert something def test_D(self): assert something I now want test_D to get a have a different value mocked for foo. I first try: @mock.patch("foo", bar) class TestMyThing(TestCase): def test_A(self): assert something def test_B(self): assert something def test_C(self): assert something @mock.patch("foo", baz) def test_D(self): assert something This doesn't work. Currently to get unittest to take the mock.patch that decorates test_D, I have to remove the mock.patch that decorates the class. This means creating lots of DRY and doing the following: class TestMyThing(TestCase): @mock.patch("foo", bar) def test_A(self): assert something @mock.patch("foo", bar) def test_B(self): assert something @mock.patch("foo", bar) def test_C(self): assert something @mock.patch("foo", baz) def test_D(self): assert something This is non ideal due to DRY boilerplate, which makes it error prone and violates open-closed principle. Is there a better way to achieve the same logic?
[ "Yes! You can leverage the setUp/tearDown methods of the unittest.TestCase and the fact that unittest.mock.patch in its \"pure\" form (i.e. not as context manager or decorator) returns a \"patcher\" object that has start/stop methods to control when exactly it should do its magic.\nYou can call on the patcher to start inside setUp and to stop inside tearDown and if you keep a reference to it in an attribute of your test case, you can easily stop it manually in selected test methods. Here is a full working example:\nfrom unittest import TestCase\nfrom unittest.mock import patch\n\n\nclass Foo:\n @staticmethod\n def bar() -> int:\n return 1\n\n\nclass TestMyThing(TestCase):\n def setUp(self) -> None:\n self.foo_bar_patcher = patch.object(Foo, \"bar\", return_value=42)\n self.mock_foo_bar = self.foo_bar_patcher.start()\n super().setUp()\n\n def tearDown(self) -> None:\n self.foo_bar_patcher.stop()\n super().tearDown()\n\n def test_a(self):\n self.assertEqual(42, Foo.bar())\n\n def test_b(self):\n self.assertEqual(42, Foo.bar())\n\n def test_c(self):\n self.assertEqual(42, Foo.bar())\n\n def test_d(self):\n self.foo_bar_patcher.stop()\n self.assertEqual(1, Foo.bar())\n\nThis patching behavior is the same, regardless of the different variations like patch.object (which I used here), patch.multiple etc.\nNote that for this example it is not necessary to keep a reference to the actual MagicMock instance generated by the patcher in an attribute, like I did with mock_foo_bar. I just usually do that because I often want to check something against the mock instance in my test methods.\nIt is worth mentioning that you can also use setUpClass/tearDownClass for this, but then you need to be careful with re-starting the patch, if you stop it because those methods are (as the name implies) only called once for each test case, whereas setUp/tearDown are called once before/after each test method.\nPS: The default implementations of setUp/tearDown on TestCase do nothing, but it is still good practice IMO to make a habit of calling the superclass' method, unless you deliberately want to omit that call.\n" ]
[ 1 ]
[]
[]
[ "python", "python_unittest", "unit_testing" ]
stackoverflow_0074641489_python_python_unittest_unit_testing.txt
Q: javascript for-loop not iterating within python loop I have a list of dates in python that I would like to iterate through to create button elements. df2 = ['Sat Nov 12 11:57:21 CST 2022', 'Wed Nov 23 18:13:31 CST 2022', 'Wed Nov 23 18:13:32 CST 2022', 'Thu Nov 10 19:07:50 CST 2022', 'Fri Nov 11 09:54:54 CST 2022', 'Fri Nov 11 10:18:36 CST 2022', 'Sat Nov 26 10:50:05 CST 2022', 'Sat Nov 12 11:57:50 CST 2022'] For some reason my javascript loop doesn't iterate the value yet outside the js loop it is iterating the value for count, value in enumerate(df2): counts = str(count) f.write(''' <html> <head> <meta name="viewport" content="width=device-width, initial-scale=1"> </head> <body> <div id="myHTMLWrapper"> '''+value+''' </div> <script> var wrapper = document.getElementById("myHTMLWrapper"); var myHTML = ''; for (var i = 0; i < '''+counts+'''; i++) { myHTML += '<button class="test">'''+value+'''</button>'; }/* w w w . j av a 2 s. c o m*/ wrapper.innerHTML = myHTML </script> </body> </html> ''') So the value variable in my div there is iterating fine. The buttons are also being generated but they have the last value of my list still... A: If you're dfoing the looping in Python, as you are doing here, then you don't need to do any looping in Javascript. The HTML you create will already have all of the data enumerated: df2 = ['Sat Nov 12 11:57:21 CST 2022', 'Wed Nov 23 18:13:31 CST 2022', 'Wed Nov 23 18:13:32 CST 2022', 'Thu Nov 10 19:07:50 CST 2022', 'Fri Nov 11 09:54:54 CST 2022', 'Fri Nov 11 10:18:36 CST 2022', 'Sat Nov 26 10:50:05 CST 2022', 'Sat Nov 12 11:57:50 CST 2022'] f.write(''' <html> <head> <meta name="viewport" content="width=device-width, initial-scale=1"> </head> <body> ''') for count, value in enumerate(df2): f.write(''' <div id="myHTMLWrapper"> '''+value+''' </div> '<button class="test">'''+value+'''</button>'; ''') f.write(''' </body> </html> ''')
javascript for-loop not iterating within python loop
I have a list of dates in python that I would like to iterate through to create button elements. df2 = ['Sat Nov 12 11:57:21 CST 2022', 'Wed Nov 23 18:13:31 CST 2022', 'Wed Nov 23 18:13:32 CST 2022', 'Thu Nov 10 19:07:50 CST 2022', 'Fri Nov 11 09:54:54 CST 2022', 'Fri Nov 11 10:18:36 CST 2022', 'Sat Nov 26 10:50:05 CST 2022', 'Sat Nov 12 11:57:50 CST 2022'] For some reason my javascript loop doesn't iterate the value yet outside the js loop it is iterating the value for count, value in enumerate(df2): counts = str(count) f.write(''' <html> <head> <meta name="viewport" content="width=device-width, initial-scale=1"> </head> <body> <div id="myHTMLWrapper"> '''+value+''' </div> <script> var wrapper = document.getElementById("myHTMLWrapper"); var myHTML = ''; for (var i = 0; i < '''+counts+'''; i++) { myHTML += '<button class="test">'''+value+'''</button>'; }/* w w w . j av a 2 s. c o m*/ wrapper.innerHTML = myHTML </script> </body> </html> ''') So the value variable in my div there is iterating fine. The buttons are also being generated but they have the last value of my list still...
[ "If you're dfoing the looping in Python, as you are doing here, then you don't need to do any looping in Javascript. The HTML you create will already have all of the data enumerated:\ndf2 = ['Sat Nov 12 11:57:21 CST 2022',\n 'Wed Nov 23 18:13:31 CST 2022',\n 'Wed Nov 23 18:13:32 CST 2022',\n 'Thu Nov 10 19:07:50 CST 2022',\n 'Fri Nov 11 09:54:54 CST 2022',\n 'Fri Nov 11 10:18:36 CST 2022',\n 'Sat Nov 26 10:50:05 CST 2022',\n 'Sat Nov 12 11:57:50 CST 2022']\n\n\n\nf.write(''' \n <html>\n <head> \n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\"> \n </head> \n <body> \n''')\n\nfor count, value in enumerate(df2):\n f.write(''' \n <div id=\"myHTMLWrapper\"> '''+value+''' </div>\n '<button class=\"test\">'''+value+'''</button>';\n ''')\n\nf.write('''\n </body>\n</html>\n''')\n\n" ]
[ 2 ]
[]
[]
[ "html", "javascript", "loops", "python" ]
stackoverflow_0074647592_html_javascript_loops_python.txt
Q: I am lost on what i am doing wrong I am trying to call the class AQIparameters to present to the user the variables stored in the function aqi_parameters but it is only displaying me the strings in aqiparameters I have tried calling the aqiparameters class in the function but it results in an location error or some other form of error The form.py where the class and functions are located from AQI_WebApp_Flask import main_functions import requests from flask_wtf import FlaskForm from wtforms import SelectField import folium class AQIParameters(FlaskForm): aqiparameter = SelectField('aqiparameter', choices=[("coordinates","coordinates"), ("pr","pr"), ("hu", "hu"), ("aqius", "aqius")]) def aqi_parameter(): url = "https://api.airvisual.com/v2/nearest_city?key=" aqi_key = main_functions.read_from_file("AQI_WebApp_Flask/JSON_Files/aqi_key.json") aqi_key = aqi_key['aqi_key'] url2 = url+aqi_key request_json = requests.get(url2).json() main_functions.save_to_file(request_json, "AQI_WebApp_Flask/JSON_Files/aqi.json") air_quality_index = main_functions.read_from_file("AQI_WebApp_Flask/JSON_Files/aqi.json") ''' From the json file read above, please find the right values for the variables below Notice that I already accessed the values for latitude and longitude and I also concatenated them into one variable called coordinates ''' latitude = air_quality_index['data']['location']['coordinates'][0] longitude = air_quality_index['data']['location']['coordinates'][1] coordinates = str(latitude)+', '+str(longitude) temperatureC = air_quality_index['data']['current']['weather']['tp'] pressure = air_quality_index['data']['current']['weather']['pr'] humidity = air_quality_index['data']['current']['weather']['hu'] aqius = air_quality_index['data']['current']['pollution']['aqius'] parameters = {'coordinates': coordinates, 'temperatureC': str(temperatureC), 'pressure': str(pressure), 'humidity': str(humidity), 'aqius': str(aqius)} return parameters The routes.py where the user interaction is mainly taking place from AQI_WebApp_Flask import app, forms from flask import request, render_template @app.route('/', methods=['GET', 'POST']) def search(): searchForm = forms.AQIParameters(request.form) if request.method == "POST": parameters_requested = request.form['aqiparameter'] aqi_parameter = forms.aqi_parameter() ''' If the user makes a post request, please save the selected value by the user into a variable Also, call the function aqi_parameter (from forms.py) with all the results Then, render template parameter_result.html only with the parameter requested by the user Which means that you should assign the correct value for the variable parameter_requested below ''' return render_template('parameter_result.html', result=parameters_requested, aqi_parameter=aqi_parameter) return render_template('parameter_search.html', form=searchForm) A: It looks like the issue you are experiencing is that when you try to access the values of the aqi_parameter dictionary in your template, you are using the string names of the dictionary keys instead of the keys themselves. For example, in your template, you are trying to access the coordinates value by using aqi_parameter.coordinates, but this will not work because coordinates is not a property of the aqi_parameter object. Instead, you need to use the square bracket notation to access the values of the dictionary, like this: aqi_parameter['coordinates']. 
Here is an example of how you can access the values of the aqi_parameter dictionary in your template: <p>Coordinates: {{ aqi_parameter['coordinates'] }}</p> <p>Temperature: {{ aqi_parameter['temperatureC'] }}</p> <p>Pressure: {{ aqi_parameter['pressure'] }}</p> <p>Humidity: {{ aqi_parameter['humidity'] }}</p> <p>AQI: {{ aqi_parameter['aqius'] }}</p>
I am lost on what I am doing wrong
I am trying to call the class AQIparameters to present to the user the variables stored in the function aqi_parameters but it is only displaying me the strings in aqiparameters I have tried calling the aqiparameters class in the function but it results in an location error or some other form of error The form.py where the class and functions are located from AQI_WebApp_Flask import main_functions import requests from flask_wtf import FlaskForm from wtforms import SelectField import folium class AQIParameters(FlaskForm): aqiparameter = SelectField('aqiparameter', choices=[("coordinates","coordinates"), ("pr","pr"), ("hu", "hu"), ("aqius", "aqius")]) def aqi_parameter(): url = "https://api.airvisual.com/v2/nearest_city?key=" aqi_key = main_functions.read_from_file("AQI_WebApp_Flask/JSON_Files/aqi_key.json") aqi_key = aqi_key['aqi_key'] url2 = url+aqi_key request_json = requests.get(url2).json() main_functions.save_to_file(request_json, "AQI_WebApp_Flask/JSON_Files/aqi.json") air_quality_index = main_functions.read_from_file("AQI_WebApp_Flask/JSON_Files/aqi.json") ''' From the json file read above, please find the right values for the variables below Notice that I already accessed the values for latitude and longitude and I also concatenated them into one variable called coordinates ''' latitude = air_quality_index['data']['location']['coordinates'][0] longitude = air_quality_index['data']['location']['coordinates'][1] coordinates = str(latitude)+', '+str(longitude) temperatureC = air_quality_index['data']['current']['weather']['tp'] pressure = air_quality_index['data']['current']['weather']['pr'] humidity = air_quality_index['data']['current']['weather']['hu'] aqius = air_quality_index['data']['current']['pollution']['aqius'] parameters = {'coordinates': coordinates, 'temperatureC': str(temperatureC), 'pressure': str(pressure), 'humidity': str(humidity), 'aqius': str(aqius)} return parameters The routes.py where the user interaction is mainly taking place from AQI_WebApp_Flask import app, forms from flask import request, render_template @app.route('/', methods=['GET', 'POST']) def search(): searchForm = forms.AQIParameters(request.form) if request.method == "POST": parameters_requested = request.form['aqiparameter'] aqi_parameter = forms.aqi_parameter() ''' If the user makes a post request, please save the selected value by the user into a variable Also, call the function aqi_parameter (from forms.py) with all the results Then, render template parameter_result.html only with the parameter requested by the user Which means that you should assign the correct value for the variable parameter_requested below ''' return render_template('parameter_result.html', result=parameters_requested, aqi_parameter=aqi_parameter) return render_template('parameter_search.html', form=searchForm)
[ "It looks like the issue you are experiencing is that when you try to access the values of the aqi_parameter dictionary in your template, you are using the string names of the dictionary keys instead of the keys themselves.\nFor example, in your template, you are trying to access the coordinates value by using aqi_parameter.coordinates, but this will not work because coordinates is not a property of the aqi_parameter object. Instead, you need to use the square bracket notation to access the values of the dictionary, like this: aqi_parameter['coordinates'].\nHere is an example of how you can access the values of the aqi_parameter dictionary in your template:\n<p>Coordinates: {{ aqi_parameter['coordinates'] }}</p>\n<p>Temperature: {{ aqi_parameter['temperatureC'] }}</p>\n<p>Pressure: {{ aqi_parameter['pressure'] }}</p>\n<p>Humidity: {{ aqi_parameter['humidity'] }}</p>\n<p>AQI: {{ aqi_parameter['aqius'] }}</p>\n\n" ]
[ 0 ]
[]
[]
[ "api", "class", "function", "python" ]
stackoverflow_0074647548_api_class_function_python.txt
Q: Python Pandas: Execute comparison between columns in two DataFrames if values in their rows are equal I have df1 and df2. Each dataframe contains an ID column. Each dataframe also contains a geometry column. I would like to calculate the distance between each dataframe's geometry column only for rows where ID's match in each dataframe. I would imagine it looks something like this but can't figure it out: for geom in df1.geometry: if df1['system_id'] == df2f['systemID']: df1['distance'] = [geom.distance(df2.geometry[0].boundary) for geom in df1.geometry] A: There's the function .equals() df1['system_id'].equals(df2f['systemID']) which will return a boolean
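For reference (my own sketch, not part of the answer above): .equals() compares two whole Series and returns a single boolean, so if the goal is a per-row distance only where the IDs match, one common pattern is to merge on the ID columns first. The column names below come from the question's code; the suffixed geometry column names are an assumption about the setup:

# Align rows on the ID columns first, then compute the distance pairwise.
merged = df1.merge(df2, left_on="system_id", right_on="systemID", suffixes=("_1", "_2"))
# "geometry_1"/"geometry_2" are the suffixed geometry columns produced by the merge
merged["distance"] = [g1.distance(g2) for g1, g2 in zip(merged["geometry_1"], merged["geometry_2"])]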
Python Pandas: Execute comparison between columns in two DataFrames if values in their rows are equal
I have df1 and df2. Each dataframe contains an ID column. Each dataframe also contains a geometry column. I would like to calculate the distance between each dataframe's geometry column only for rows where ID's match in each dataframe. I would imagine it looks something like this but can't figure it out: for geom in df1.geometry: if df1['system_id'] == df2f['systemID']: df1['distance'] = [geom.distance(df2.geometry[0].boundary) for geom in df1.geometry]
[ "There's the function .equals()\ndf1['system_id'].equals(df2f['systemID'])\n\nwhich will return a boolean\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074647675_dataframe_pandas_python.txt
Q: How should I handle timestamps in Python? Whenever I read floats from sqlite using pandas.read_sql_query, there's a chance it'll have a slight precision error. So when I search for that row later by using that unprecise float, it can't find that row. Here's the exact process I used to recreate the problem: Create row in sqlite database test table insert into test values ("bc011fcd6c8e40069dd7c7f2fdf92952", 1669836415.8800698) sqlite test table Read row from test table import sqlalchemy as sa import pandas as pd import os def main(): file_path = '/path/to/sqlite.db' conn_str = f'sqlite:///{file_path}' engine = sa.create_engine(conn_str) query = 'select * from test WHERE job_id = "bc011fcd6c8e40069dd7c7f2fdf92952" AND modified_on = 1669836415.8800698' data = pd.read_sql_query(query, con=engine) print(data['modified_on'].iloc[0]) # gives 1669836415.8800697 if __name__ == '__main__': main() I'm currently using this float as a timestamp to query for later. So how should I handle this and other floats in python? Should I always round floats to 6 decimal places? Note/interesting observations: print(data['modified_on'].iloc[0]-int(data['modified_on '].iloc[0])) # gives 0.8800697326660156 A: Don't use floats if you can avoid it, this is just how they work: >>> print ("%40.18f\n" % (1669836415.8800698)) **1669836415.880069732666015625 <<<< the actual value** >>> print ("%40.7f\n" % (1669836415.8800698)) **1669836415.8800697 <<<< the rounded value**
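One way to sidestep the float-equality problem described above (my own suggestion, not part of the answer): store the timestamp as an integer, e.g. microseconds since the epoch, so later lookups can use exact equality. A minimal sketch using the table and path from the question:

import sqlite3
import time

conn = sqlite3.connect("/path/to/sqlite.db")
modified_on_us = int(time.time() * 1_000_000)  # integer microseconds, stored exactly
conn.execute("insert into test values (?, ?)",
             ("bc011fcd6c8e40069dd7c7f2fdf92952", modified_on_us))
conn.commit()
# Later lookups can compare the integer value exactly, no precision loss:
row = conn.execute("select * from test where modified_on = ?", (modified_on_us,)).fetchone()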
How should I handle timestamps in Python?
Whenever I read floats from sqlite using pandas.read_sql_query, there's a chance it'll have a slight precision error. So when I search for that row later by using that unprecise float, it can't find that row. Here's the exact process I used to recreate the problem: Create row in sqlite database test table insert into test values ("bc011fcd6c8e40069dd7c7f2fdf92952", 1669836415.8800698) sqlite test table Read row from test table import sqlalchemy as sa import pandas as pd import os def main(): file_path = '/path/to/sqlite.db' conn_str = f'sqlite:///{file_path}' engine = sa.create_engine(conn_str) query = 'select * from test WHERE job_id = "bc011fcd6c8e40069dd7c7f2fdf92952" AND modified_on = 1669836415.8800698' data = pd.read_sql_query(query, con=engine) print(data['modified_on'].iloc[0]) # gives 1669836415.8800697 if __name__ == '__main__': main() I'm currently using this float as a timestamp to query for later. So how should I handle this and other floats in python? Should I always round floats to 6 decimal places? Note/interesting observations: print(data['modified_on'].iloc[0]-int(data['modified_on '].iloc[0])) # gives 0.8800697326660156
[ "Don't use floats if you can avoid it, this is just how they work:\n>>> print (\"%40.18f\\n\" % (1669836415.8800698))\n **1669836415.880069732666015625 <<<< the actual value**\n\n>>> print (\"%40.7f\\n\" % (1669836415.8800698))\n **1669836415.8800697 <<<< the rounded value**\n\n" ]
[ 0 ]
[]
[]
[ "floating_point", "precision", "python" ]
stackoverflow_0074634839_floating_point_precision_python.txt
Q: Python symlink to python3 So I'm setting up my default variables in a new MacBook M1 and for some reason, my symlink doesn't seem to work. Why is the following behaviour happening? The symlink from python to python3 gets lost somehow. /Users/overflow/Documents/tools is part of my PATH variable. $ type python python is /Users/overflow/Documents/tools/python $ python -V Python 2.7.16 $ ls -lah /Users/overflow/Documents/tools/python lrwxr-xr-x 1 overflow staff 16B 6 Oct 18:48 /Users/overflow/Documents/tools/python -> /usr/bin/python3 $ /usr/bin/python3 -V Python 3.8.9 $ echo $PATH | sed 's/:/\n/g' /Users/overflow/Documents/tools /Users/overflow/Documents/Dropbox/productivity/bin /Users/overflow/Documents/tools/confluent-6.1.0/bin /Users/overflow/.sdkman/candidates/java/current/bin /Users/overflow/.nvm/versions/node/v16.10.0/bin /Users/overflow/bin /usr/local/bin /opt/homebrew/bin /opt/homebrew/sbin /usr/local/bin /usr/bin /bin /usr/sbin /sbin A: Given the path is not being utilized, it is being overridden by a shell alias. This can be confirmed by typing set | grep python If you are using virtualenv: Try: /usr/bin/python3 -m venv python3.8.9 The common MacOS python managers are: pythonz List installs using pythonz list Change using: /usr/bin/python3 -m venv python3.8.9 pythonbrew List installs pythonbrew list Use pythonbrew switch 3.8.9 to move to the new version. pyenv Claims to use only $PATH. Included for completeness, but just in case List installs pyenv versions Use pyenv global 3.8.9 to move to the new version. To remove any of these remove the appropriate line from your ~/.bashrc or ~/.zshrc file. You can locate the line with grep ~/.bashrc python. A: I can give you some general troubleshooting tips and a band-aid that can replace a symlink. $(which python) -V This should get you the version for python3. Other thing worth checking: which python2 This should point to some standard location, /usr/bin/python2 for example. If it isn't pointed there, might be worth understanding why. In general, Macs have some quirks. The fact that a new MacBook M4 ships with Python2 as the default is a smell that you probably shouldn't try to overwrite the default value, and for whatever reason it doesn't seem to want you to. Simple answer: set an alias. echo 'alias python="python3"' >> ~/.zshrc && source; This will work for you and should not interfere with any potential other systems. Better answer: As others have described, create a virtual environment and source venv/bin/activate. A: I simply created a symlink where the actual python -V is located. Execute the following commands: type python#in my case was /usr/bin/python3which is a symlink to the installed version. It appears in cyan color. Change the directory, switch there. cd /usr/bin/ (where the python3 is, and let's list for the python words ls pyth* #in my case was python3 and python3.10 How to make a symlink: ln [OPTION]... [-T] TARGET LINK_NAME sudo ln -s python3.10 python You'll have a simplest symlink without a number at the end. Lets try: python Optionally you can remove the one with the number at the end.
Python symlink to python3
So I'm setting up my default variables in a new MacBook M1 and for some reason, my symlink doesn't seem to work. Why is the following behaviour happening? The symlink from python to python3 gets lost somehow. /Users/overflow/Documents/tools is part of my PATH variable. $ type python python is /Users/overflow/Documents/tools/python $ python -V Python 2.7.16 $ ls -lah /Users/overflow/Documents/tools/python lrwxr-xr-x 1 overflow staff 16B 6 Oct 18:48 /Users/overflow/Documents/tools/python -> /usr/bin/python3 $ /usr/bin/python3 -V Python 3.8.9 $ echo $PATH | sed 's/:/\n/g' /Users/overflow/Documents/tools /Users/overflow/Documents/Dropbox/productivity/bin /Users/overflow/Documents/tools/confluent-6.1.0/bin /Users/overflow/.sdkman/candidates/java/current/bin /Users/overflow/.nvm/versions/node/v16.10.0/bin /Users/overflow/bin /usr/local/bin /opt/homebrew/bin /opt/homebrew/sbin /usr/local/bin /usr/bin /bin /usr/sbin /sbin
[ "Given the path is not being utilized, it is being overridden by a shell alias. This can be confirmed by typing set | grep python\nIf you are using virtualenv:\nTry: /usr/bin/python3 -m venv python3.8.9\nThe common MacOS python managers are:\n\npythonz\n\nList installs using pythonz list\nChange using: /usr/bin/python3 -m venv python3.8.9\n\npythonbrew\n\nList installs pythonbrew list\nUse pythonbrew switch 3.8.9 to move to the new version.\n\npyenv\n\nClaims to use only $PATH. Included for completeness, but just in case\nList installs pyenv versions\nUse pyenv global 3.8.9 to move to the new version.\n\nTo remove any of these remove the appropriate line from your ~/.bashrc or ~/.zshrc file.\nYou can locate the line with grep ~/.bashrc python.\n", "I can give you some general troubleshooting tips and a band-aid that can replace a symlink.\n$(which python) -V\n\nThis should get you the version for python3.\nOther thing worth checking:\nwhich python2\n\nThis should point to some standard location, /usr/bin/python2 for example. If it isn't pointed there, might be worth understanding why.\nIn general, Macs have some quirks. The fact that a new MacBook M4 ships with Python2 as the default is a smell that you probably shouldn't try to overwrite the default value, and for whatever reason it doesn't seem to want you to.\nSimple answer: set an alias.\necho 'alias python=\"python3\"' >> ~/.zshrc && source;\n\nThis will work for you and should not interfere with any potential other systems.\nBetter answer:\nAs others have described, create a virtual environment and source venv/bin/activate.\n", "I simply created a symlink where the actual\npython -V is located.\nExecute the following commands:\n\ntype python#in my case was /usr/bin/python3which is a symlink to the installed version. It appears in cyan color. Change the directory, switch there.\ncd /usr/bin/ (where the python3 is, and let's list for the python words\nls pyth* #in my case was python3 and python3.10\nHow to make a symlink:\nln [OPTION]... [-T] TARGET LINK_NAME\nsudo ln -s python3.10 python\n\nYou'll have a simplest symlink without a number at the end.\n\nLets try: python\n\nOptionally you can remove the one with the number at the end.\n" ]
[ 4, 3, 0 ]
[]
[]
[ "apple_m1", "python", "symlink", "unix" ]
stackoverflow_0069470556_apple_m1_python_symlink_unix.txt
Q: Use Python to remove unneeded elements from XML file I'm writing a program in Python to use an API that doesn't seem to filter out requests based on if a user is considered active. When I ask the API for a list of active users I get a much longer XML document that looks like the below text and it still includes users where the <active> tag is false. <ArrayOfuser xmlns="WebsiteWhereDataComesFrom.com" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <user> <active>false</active> <datelastlogin>2/3/2014 10:21:13 PM</datelastlogin> <dept>0</dept> <email/> <firstname>userfirstname</firstname> <lastname>userlastname</lastname> <lastupdated/> <lastupdatedby/> <loginemail>userloginemail</loginemail> <phone1/> <phone2/> <rep>userinitials</rep> </user> <user> <active>true</active> <datelastlogin>8/21/2019 9:16:30 PM</datelastlogin> <dept>3</dept> <email>useremail</email> <firstname>userfirstname</firstname> <lastname>userlastname</lastname> <lastupdated>2/6/2019 11:10:29 PM</lastupdated> <lastupdatedby>userinitials</lastupdatedby> <loginemail>userloginemail</loginemail> <phone1>userphone</phone1> <phone2/> <rep>userinitials</rep> </user> </ArrayOfuser> The program needs to eventually return a list of the <rep> tag from only active users. Here is the code I tried as a beginning to this project. I may have overcomplicated this because I was trying to parse users.xml for active users then save a file containing all the XML data about active users, then use a for loop in that file to get the info from each <rep> tag and save it to a list: to_remove = ['<active>false</active>'] with open('users.xml') as xmlfile, open('activeusers.xml','w') as newfile: for line in xmlfile: if not any(remo in line for remo in to_remove): newfile.write(line) In activeusers.xml I was expecting to see the below code block. <ArrayOfuser xmlns="WebsiteWhereDataComesFrom.com" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <user> <active>true</active> <datelastlogin>8/21/2019 9:16:30 PM</datelastlogin> <dept>3</dept> <email>useremail</email> <firstname>userfirstname</firstname> <lastname>userlastname</lastname> <lastupdated>2/6/2019 11:10:29 PM</lastupdated> <lastupdatedby>userinitials</lastupdatedby> <loginemail>userloginemail</loginemail> <phone1>userphone</phone1> <phone2/> <rep>userinitials</rep> </user> </ArrayOfuser> The result is an identical copy of the users xml file. My guess is that the program must be reading the file correctly if it's copying everything, but it's definitely not removing what I need so that syntax must not be correct. This is just the solution I thought of and the program doesn't have to make a new file called activeusers.xml. The end goal is to get a list of the <rep> tag for only active users, so if there is a better way to do this I would love to know because I'm a complete newbie with XML and a novice with Python. A: Since you're dealing with xml, you should use a proper xml parser. Note that in this case you have to deal with namespaces as well. So try this: from lxml import etree #load your file doc = etree.parse("users.xml") #declare namespaces ns = {'xx': 'WebsiteWhereDataComesFrom.com'} #locate your deletion targets targets = doc.xpath('//xx:user[xx:active="false"]',namespaces=ns) for target in targets: target.getparent().remove(target) #save your file with open("newfile.xml", 'a') as file: file.write(etree.tostring(doc).decode()) This should have your expected output.
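If the end goal is just the list of <rep> values for active users, a standard-library alternative to the lxml approach above (my own sketch, using the namespace from the sample data) could look like this:

import xml.etree.ElementTree as ET

ns = {"x": "WebsiteWhereDataComesFrom.com"}
tree = ET.parse("users.xml")
# Collect the <rep> text of every <user> whose <active> element is "true"
reps = [user.find("x:rep", ns).text
        for user in tree.getroot().findall("x:user", ns)
        if user.find("x:active", ns).text == "true"]
print(reps)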
Use Python to remove unneeded elements from XML file
I'm writing a program in Python to use an API that doesn't seem to filter out requests based on if a user is considered active. When I ask the API for a list of active users I get a much longer XML document that looks like the below text and it still includes users where the <active> tag is false. <ArrayOfuser xmlns="WebsiteWhereDataComesFrom.com" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <user> <active>false</active> <datelastlogin>2/3/2014 10:21:13 PM</datelastlogin> <dept>0</dept> <email/> <firstname>userfirstname</firstname> <lastname>userlastname</lastname> <lastupdated/> <lastupdatedby/> <loginemail>userloginemail</loginemail> <phone1/> <phone2/> <rep>userinitials</rep> </user> <user> <active>true</active> <datelastlogin>8/21/2019 9:16:30 PM</datelastlogin> <dept>3</dept> <email>useremail</email> <firstname>userfirstname</firstname> <lastname>userlastname</lastname> <lastupdated>2/6/2019 11:10:29 PM</lastupdated> <lastupdatedby>userinitials</lastupdatedby> <loginemail>userloginemail</loginemail> <phone1>userphone</phone1> <phone2/> <rep>userinitials</rep> </user> </ArrayOfuser> The program needs to eventually return a list of the <rep> tag from only active users. Here is the code I tried as a beginning to this project. I may have overcomplicated this because I was trying to parse users.xml for active users then save a file containing all the XML data about active users, then use a for loop in that file to get the info from each <rep> tag and save it to a list: to_remove = ['<active>false</active>'] with open('users.xml') as xmlfile, open('activeusers.xml','w') as newfile: for line in xmlfile: if not any(remo in line for remo in to_remove): newfile.write(line) In activeusers.xml I was expecting to see the below code block. <ArrayOfuser xmlns="WebsiteWhereDataComesFrom.com" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <user> <active>true</active> <datelastlogin>8/21/2019 9:16:30 PM</datelastlogin> <dept>3</dept> <email>useremail</email> <firstname>userfirstname</firstname> <lastname>userlastname</lastname> <lastupdated>2/6/2019 11:10:29 PM</lastupdated> <lastupdatedby>userinitials</lastupdatedby> <loginemail>userloginemail</loginemail> <phone1>userphone</phone1> <phone2/> <rep>userinitials</rep> </user> </ArrayOfuser> The result is an identical copy of the users xml file. My guess is that the program must be reading the file correctly if it's copying everything, but it's definitely not removing what I need so that syntax must not be correct. This is just the solution I thought of and the program doesn't have to make a new file called activeusers.xml. The end goal is to get a list of the <rep> tag for only active users, so if there is a better way to do this I would love to know because I'm a complete newbie with XML and a novice with Python.
[ "Since you're dealing with xml, you should use a proper xml parser. Note that in this case you have to deal with namespaces as well.\nSo try this:\nfrom lxml import etree\n#load your file\ndoc = etree.parse(\"users.xml\")\n#declare namespaces\nns = {'xx': 'WebsiteWhereDataComesFrom.com'}\n\n#locate your deletion targets\ntargets = doc.xpath('//xx:user[xx:active=\"false\"]',namespaces=ns)\nfor target in targets:\n target.getparent().remove(target)\n\n#save your file\nwith open(\"newfile.xml\", 'a') as file:\n file.write(etree.tostring(doc).decode())\n\nThis should have your expected output.\n" ]
[ 0 ]
[]
[]
[ "python", "xml" ]
stackoverflow_0074646352_python_xml.txt
Q: Create a Category Tree using python A category tree is a representation of a set of categories and their parent-child relationships. Each category has a unique name (no two categories have the same name). A category can have a parent category. Categories without a parent category are called root categories. Need to create a category tree with following description using python programming. To add a category to a category tree :- the name and the parent of the category should be provided. When adding a root category, a null value should be provided as the parent. A call to getChildren method should return the direct children of the specified category in any order. KeyError method should be thrown when adding a category that has already been added anywhere in the CategoryTree or if a parent is specified but does not exist. Please help me out to solve this problem. Am new to oops concepts using python, thus unable to clear the problem. A helping hand to solve the problem is much appreciated. A: To create a category tree in Python, you can define a CategoryTree class that has the following methods: add_category(name: str, parent: Optional[str]): This method should add a new category with the given name and parent to the category tree. If a null value is provided as the parent, the category should be added as a root category. get_children(name: str): This method should return the direct children of the category with the given name in any order. invalid_argument_exception(name: str): This method should raise an InvalidArgumentException exception when adding a category that has already been added anywhere in the CategoryTree or if a parent is specified but does not exist. Here is an example of how you can implement this CategoryTree class: class CategoryTree: def __init__(self): self.categories = {} def add_category(self, name: str, parent: Optional[str]): if name in self.categories: self.invalid_argument_exception(f"Category {name} already exists") if parent is not None and parent not in self.categories: self.invalid_argument_exception(f"Parent {parent} does not exist") self.categories[name] = parent def get_children(self, name: str): children = [] for category, parent in self.categories.items(): if parent == name: children.append(category) return children def invalid_argument_exception(self, message: str): raise InvalidArgumentException(message) In the code above, the CategoryTree class has a dictionary categories that stores the name and parent of each category. The add_category() method first checks if the given category name already exists in the categories dictionary. If it does, it raises an InvalidArgumentException with an appropriate error message. It then checks if the given parent exists in the categories dictionary. If it does not, it raises an InvalidArgumentException with an appropriate error message. Finally, it adds the category to the categories dictionary. The get_children() method iterates over the categories dictionary and returns all the categories whose parent is the given category name. The invalid_argument_exception() method simply raises an InvalidArgumentException with the given error message. 
You can use this CategoryTree class as follows: tree = CategoryTree() tree.add_category("fruit", None) tree.add_category("vegetable", None) tree.add_category("apple", "fruit") tree.add_category("banana", "fruit") tree.add_category("carrot", "vegetable") tree.add_category("tomato", "vegetable") print(tree.get_children("fruit")) # ["apple", "banana"] print(tree.get_children("vegetable")) # ["carrot", "tomato"] In the code above, we first create a CategoryTree object and add several categories to it. We then print the direct children of the "fruit" and "veget tree = CategoryTree() tree.add_category("fruit", None) tree.add_category("vegetable", None) tree.add_category("apple", "fruit") tree.add_category("banana", "fruit") tree.add_category("carrot", "vegetable") tree.add_category("tomato", "vegetable") print(tree.get_children("fruit")) # ["apple", "banana"] print(tree.get_children("vegetable")) # ["carrot", "tomato"]
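Note that the sketch above uses Optional without importing it and raises an undefined InvalidArgumentException, while the question asks for a KeyError. A corrected minimal variant (my own addition) could be:

from typing import Optional

class CategoryTree:
    def __init__(self):
        self.categories = {}

    def add_category(self, name: str, parent: Optional[str] = None):
        if name in self.categories:
            raise KeyError(f"Category {name} already exists")
        if parent is not None and parent not in self.categories:
            raise KeyError(f"Parent {parent} does not exist")
        self.categories[name] = parent

    def get_children(self, name: str):
        return [child for child, parent in self.categories.items() if parent == name]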
Create a Category Tree using python
A category tree is a representation of a set of categories and their parent-child relationships. Each category has a unique name (no two categories have the same name). A category can have a parent category. Categories without a parent category are called root categories. I need to create a category tree matching the following description using Python. To add a category to a category tree, the name and the parent of the category should be provided. When adding a root category, a null value should be provided as the parent. A call to the getChildren method should return the direct children of the specified category in any order. A KeyError should be raised when adding a category that has already been added anywhere in the CategoryTree, or if a parent is specified but does not exist. Please help me out with this problem. I am new to OOP concepts in Python and have been unable to solve it. A helping hand is much appreciated.
[ "To create a category tree in Python, you can define a CategoryTree class that has the following methods:\n\nadd_category(name: str, parent: Optional[str]): This method should add a new category with the given name and parent to the category tree. If a null value is provided as the parent, the category should be added as a root category.\nget_children(name: str): This method should return the direct children of the category with the given name in any order.\ninvalid_argument_exception(name: str): This method should raise an InvalidArgumentException exception when adding a category that has already been added anywhere in the CategoryTree or if a parent is specified but does not exist.\nHere is an example of how you can implement this CategoryTree class:\n\nclass CategoryTree:\n def __init__(self):\n self.categories = {}\n\n def add_category(self, name: str, parent: Optional[str]):\n if name in self.categories:\n self.invalid_argument_exception(f\"Category {name} already exists\")\n\n if parent is not None and parent not in self.categories:\n self.invalid_argument_exception(f\"Parent {parent} does not exist\")\n\n self.categories[name] = parent\n\n def get_children(self, name: str):\n children = []\n for category, parent in self.categories.items():\n if parent == name:\n children.append(category)\n return children\n\n def invalid_argument_exception(self, message: str):\n raise InvalidArgumentException(message)\n\nIn the code above, the CategoryTree class has a dictionary categories that stores the name and parent of each category. The add_category() method first checks if the given category name already exists in the categories dictionary. If it does, it raises an InvalidArgumentException with an appropriate error message. It then checks if the given parent exists in the categories dictionary. If it does not, it raises an InvalidArgumentException with an appropriate error message. Finally, it adds the category to the categories dictionary.\nThe get_children() method iterates over the categories dictionary and returns all the categories whose parent is the given category name.\nThe invalid_argument_exception() method simply raises an InvalidArgumentException with the given error message.\nYou can use this CategoryTree class as follows:\ntree = CategoryTree()\ntree.add_category(\"fruit\", None)\ntree.add_category(\"vegetable\", None)\ntree.add_category(\"apple\", \"fruit\")\ntree.add_category(\"banana\", \"fruit\")\ntree.add_category(\"carrot\", \"vegetable\")\ntree.add_category(\"tomato\", \"vegetable\")\n\nprint(tree.get_children(\"fruit\")) # [\"apple\", \"banana\"]\nprint(tree.get_children(\"vegetable\")) # [\"carrot\", \"tomato\"]\n\nIn the code above, we first create a CategoryTree object and add several categories to it. We then print the direct children of the \"fruit\" and \"veget\ntree = CategoryTree()\ntree.add_category(\"fruit\", None)\ntree.add_category(\"vegetable\", None)\ntree.add_category(\"apple\", \"fruit\")\ntree.add_category(\"banana\", \"fruit\")\ntree.add_category(\"carrot\", \"vegetable\")\ntree.add_category(\"tomato\", \"vegetable\")\n\nprint(tree.get_children(\"fruit\")) # [\"apple\", \"banana\"]\nprint(tree.get_children(\"vegetable\")) # [\"carrot\", \"tomato\"]\n\n" ]
[ 0 ]
[]
[]
[ "exception", "python", "tree" ]
stackoverflow_0074647726_exception_python_tree.txt
Q: python: [Errno 10054] An existing connection was forcibly closed by the remote host I am writing python to crawl Twitter space using Twitter-py. I have set the crawler to sleep for a while (2 seconds) between each request to api.twitter.com. However, after some times of running (around 1), when the Twitter's rate limit not exceeded yet, I got this error. [Errno 10054] An existing connection was forcibly closed by the remote host. What are possible causes of this problem and how to solve this? I have searched through and found that the Twitter server itself may force to close the connection due to many requests. Thank you very much in advance. A: This can be caused by the two sides of the connection disagreeing over whether the connection timed out or not during a keepalive. (Your code tries to reused the connection just as the server is closing it because it has been idle for too long.) You should basically just retry the operation over a new connection. (I'm surprised your library doesn't do this automatically.) A: I know this is a very old question but it may be that you need to set the request headers. This solved it for me. For example 'user-agent', 'accept' etc. here is an example with user-agent: url = 'your-url-here' headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36'} r = requests.get(url, headers=headers) A: there are many causes such as The network link between server and client may be temporarily going down. running out of system resources. sending malformed data. To examine the problem in detail, you can use Wireshark. or you can just re-request or re-connect again. A: For me this problem arised while trying to connect to the SAP Hana database. When I got this error, OperationalError: Lost connection to HANA server (ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None)) I tried to run the code for connection(mentioned below), which created that error, again and it worked. import pyhdb connection = pyhdb.connect(host="example.com",port=30015,user="user",password="secret") cursor = connection.cursor() cursor.execute("SELECT 'Hello Python World' FROM DUMMY") cursor.fetchone() connection.close() It was because the server refused to connect. It might require you to wait for a while and try again. Try closing the Hana Studio by logging off and then logging in again. Keep running the code for a number of times. A: I got the same error ([WinError 10054] An existing connection was forcibly closed by the remote host) with websocket-client after setting ping_interval = 2 in websocket.run_forever(). (I had multiple threads connecting to the same host.) Setting ping_interval = 10 and ping_timeout = 9 solved the issue. May be you need to reduce the amount of requests and stop making host busy otherwise it will forcibly disconnect you. A: I fixed it with a while try loop, waiting for the response to set the variable in order to exit the loop. When the connection has an exception, it waits five seconds, and continues looking for the response from the connection. My code before fix, with the failed response HTTPSConnectionPool(host='etc.com', port=443): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x000001E9955A2050>, 'Connection to example.net timed out. 
(connect timeout=None)')) from __future__ import print_function import sys import requests def condition_questions(**kwargs): proxies = {'https': 'example.com', 'http': 'example.com:3128'} print(kwargs, file=sys.stdout) headers = {'etc':'etc',} body = f'''<etc> </etc>''' try: response_xml = requests.post('https://example.com', data=body, headers=headers, proxies=proxies) except Exception as ex: print("exception", ex, file=sys.stdout) log.exception(ex) finally: print("response_xml", response_xml, file=sys.stdout) return response_xml After fix, with successful response response_xml <Response [200]>: import time ... response_xml = '' while response_xml == '': try: response_xml = requests.post('https://example.com', data=body, headers=headers, proxies=proxies) break except Exception as ex: print("exception", ex, file=sys.stdout) log.exception(ex) time.sleep(5) continue finally: print("response_xml", response_xml, file=sys.stdout) return response_xml based on Jatin's answer here --"Just do this, import time page = '' while page == '': try: page = requests.get(url) break except: print("Connection refused by the server..") print("Let me sleep for 5 seconds") print("ZZzzzz...") time.sleep(5) print("Was a nice sleep, now let me continue...") continue You're welcome :)"
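As a concrete illustration of the first answer's advice to retry over a new connection (my own sketch, assuming the requests library rather than Twitter-py; the URL is a placeholder):

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
# Retry a few times with backoff when the connection drops or the server is temporarily unavailable
retries = Retry(total=3, backoff_factor=1, status_forcelist=[502, 503, 504])
session.mount("https://", HTTPAdapter(max_retries=retries))

response = session.get("https://api.twitter.com/...")  # placeholder endpoint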
python: [Errno 10054] An existing connection was forcibly closed by the remote host
I am writing Python to crawl the Twitter space using Twitter-py. I have set the crawler to sleep for a while (2 seconds) between each request to api.twitter.com. However, after some time of running (around 1), even though Twitter's rate limit has not been exceeded yet, I get this error: [Errno 10054] An existing connection was forcibly closed by the remote host. What are the possible causes of this problem and how can I solve it? I have searched around and found that the Twitter server itself may force the connection to close due to too many requests. Thank you very much in advance.
[ "This can be caused by the two sides of the connection disagreeing over whether the connection timed out or not during a keepalive. (Your code tries to reused the connection just as the server is closing it because it has been idle for too long.) You should basically just retry the operation over a new connection. (I'm surprised your library doesn't do this automatically.)\n", "I know this is a very old question but it may be that you need to set the request headers. This solved it for me.\nFor example 'user-agent', 'accept' etc. here is an example with user-agent:\nurl = 'your-url-here'\nheaders = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36'}\nr = requests.get(url, headers=headers)\n\n", "there are many causes such as \n\nThe network link between server and client may be temporarily going down.\nrunning out of system resources.\nsending malformed data.\n\nTo examine the problem in detail, you can use Wireshark.\nor you can just re-request or re-connect again.\n", "For me this problem arised while trying to connect to the SAP Hana database. When I got this error, \nOperationalError: Lost connection to HANA server (ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None))\nI tried to run the code for connection(mentioned below), which created that error, again and it worked. \n\n\n import pyhdb\n connection = pyhdb.connect(host=\"example.com\",port=30015,user=\"user\",password=\"secret\")\n cursor = connection.cursor()\n cursor.execute(\"SELECT 'Hello Python World' FROM DUMMY\")\n cursor.fetchone()\n connection.close()\n\n\nIt was because the server refused to connect. It might require you to wait for a while and try again. Try closing the Hana Studio by logging off and then logging in again. Keep running the code for a number of times.\n", "I got the same error ([WinError 10054] An existing connection was forcibly closed by the remote host) with websocket-client after setting ping_interval = 2 in websocket.run_forever(). (I had multiple threads connecting to the same host.)\nSetting ping_interval = 10 and ping_timeout = 9 solved the issue. May be you need to reduce the amount of requests and stop making host busy otherwise it will forcibly disconnect you.\n", "I fixed it with a while try loop, waiting for the response to set the variable in order to exit the loop.\nWhen the connection has an exception, it waits five seconds, and continues looking for the response from the connection.\nMy code before fix, with the failed response HTTPSConnectionPool(host='etc.com', port=443): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x000001E9955A2050>, 'Connection to example.net timed out. 
(connect timeout=None)'))\n \n\nfrom __future__ import print_function\nimport sys\nimport requests\n\n\ndef condition_questions(**kwargs):\n proxies = {'https': 'example.com', 'http': 'example.com:3128'}\n print(kwargs, file=sys.stdout)\n headers = {'etc':'etc',}\n body = f'''<etc>\n </etc>'''\n\n try:\n response_xml = requests.post('https://example.com', data=body, headers=headers, proxies=proxies)\n except Exception as ex:\n print(\"exception\", ex, file=sys.stdout)\n log.exception(ex)\n finally:\n print(\"response_xml\", response_xml, file=sys.stdout)\n return response_xml\n\nAfter fix, with successful response response_xml <Response [200]>:\n\nimport time\n...\n\nresponse_xml = ''\n while response_xml == '':\n try:\n response_xml = requests.post('https://example.com', data=body, headers=headers, proxies=proxies)\n break\n except Exception as ex:\n print(\"exception\", ex, file=sys.stdout)\n log.exception(ex)\n time.sleep(5)\n continue\n finally:\n print(\"response_xml\", response_xml, file=sys.stdout)\n return response_xml\n\nbased on Jatin's answer here --\"Just do this,\nimport time\n\npage = ''\nwhile page == '':\n try:\n page = requests.get(url)\n break\n except:\n print(\"Connection refused by the server..\")\n print(\"Let me sleep for 5 seconds\")\n print(\"ZZzzzz...\")\n time.sleep(5)\n print(\"Was a nice sleep, now let me continue...\")\n continue\n\nYou're welcome :)\"\n" ]
[ 24, 15, 13, 2, 2, 0 ]
[]
[]
[ "python", "twitter", "web_crawler" ]
stackoverflow_0008814802_python_twitter_web_crawler.txt
Q: uncheck checkbox tkinter after a certain time interval My program is based on independent checkboxes, that is, they do not depend on a BooleanVar. I am using a setInterval created from a thread, and after a certain time I want the checkbox to 'turn off' and be able to receive another setInterval self.timer = Checkbutton(command=session_timer ,text='Session Timer') self.timer.grid(row=1,column=1, sticky='w') Currently my solution is to recreate the checkbox again in the same position as the previous one inside my timer function if timer >= timer_interval_minutes: self.timer = Checkbutton(command=session_timer ,text='Session Timer') self.timer.grid(row=1,column=1, sticky='w') However, what I'm looking for is something that simply unchecks the box and changes the text, something like this if timer >= timer_interval_minutes: self.timer.configure(state='normal') However, the 'state' option only has 3 values, 'active', 'disabled' and 'normal', and none of them unchecks the checkbox. How can I uncheck the checkbox using configure, or something similar? A: The checkbutton has a documented method named deselect which does what the name implies. if timer >= timer_interval_minutes: self.timer.deselect()
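A minimal, self-contained sketch of the deselect() idea, using Tk's own after() timer instead of a thread; unlike the question, it attaches a BooleanVar just to read the state, and the 20-second delay is an illustrative value.

import tkinter as tk

root = tk.Tk()

def on_toggle():
    # When the box is ticked, schedule it to untick itself later.
    if var.get():
        root.after(20000, timer_box.deselect)  # 20 s, illustrative only

var = tk.BooleanVar()
timer_box = tk.Checkbutton(root, text='Session Timer', variable=var, command=on_toggle)
timer_box.grid(row=1, column=1, sticky='w')

root.mainloop()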
uncheck checkbox tkinter after a certain time interval
My program is based on independent checkboxes, that is, they do not depend on a BooleanVar. I am using a setInterval created from a thread, and after a certain time I want the checkbox to 'turn off' and be able to receive another setInterval self.timer = Checkbutton(command=session_timer ,text='Session Timer') self.timer.grid(row=1,column=1, sticky='w') Currently my solution is to recreate the checkbox again in the same position as the previous one inside my timer function if timer >= timer_interval_minutes: self.timer = Checkbutton(command=session_timer ,text='Session Timer') self.timer.grid(row=1,column=1, sticky='w') However, what I'm looking for is something that simply unchecks the box and changes the text, something like this if timer >= timer_interval_minutes: self.timer.configure(state='normal') However, the 'state' option only has 3 values, 'active', 'disabled' and 'normal', and none of them unchecks the checkbox. How can I uncheck the checkbox using configure, or something similar?
[ "The checkbutton has a documented method named deselect which does what the name implies.\nif timer >= timer_interval_minutes:\n self.timer.deselect()\n\n" ]
[ 1 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074647867_python_tkinter.txt
Q: async apscheduler does not start the task Trying to create a scheduler: sheduler = AsyncIOScheduler(timezone='Europe/Moscow') async def thread_maintaining_communication(): print('There') async def main(): sheduler.add_job(thread_maintaining_communication,"interval", seconds=20) sheduler.start() #await bot.infinity_polling(skip_pending=True) while True: print('sleep 10 sec') await asyncio.sleep(10) asyncio.run(main()) But for some reason unknown to me, it does not work. Here's what's in the console: INFO:apscheduler.scheduler:Adding job tentatively -- it will be properly scheduled when the scheduler starts INFO:apscheduler.scheduler:Added job "thread_maintaining_communication" to job store "default" sleep 10 secINFO:apscheduler.scheduler:Scheduler started sleep 10 sec sleep 10 sec sleep 10 sec A: I was given an answer, I hope that someone who faces the same problem will find this topic sheduler.add_job(thread_maintaining_communication,"interval", seconds=20) bot.add_custom_filter(asyncio_filters.StateFilter(bot)) sheduler.start() loop = asyncio.get_event_loop() loop.run_until_complete(bot.polling(skip_pending=True))
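As a hedged alternative to the accepted fix, the scheduler can also be created and started inside the coroutine that asyncio.run() executes, so that it binds to the running loop; this is a sketch under that assumption, not code from the original bot.

import asyncio
from apscheduler.schedulers.asyncio import AsyncIOScheduler

async def tick():
    print('There')

async def main():
    # Creating and starting the scheduler inside the running loop
    # avoids it attaching to a different event loop than asyncio.run() uses.
    scheduler = AsyncIOScheduler(timezone='Europe/Moscow')
    scheduler.add_job(tick, 'interval', seconds=20)
    scheduler.start()
    while True:
        print('sleep 10 sec')
        await asyncio.sleep(10)

asyncio.run(main())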
async apscheduler does not start the task
Trying to create a scheduler: sheduler = AsyncIOScheduler(timezone='Europe/Moscow') async def thread_maintaining_communication(): print('There') async def main(): sheduler.add_job(thread_maintaining_communication,"interval", seconds=20) sheduler.start() #await bot.infinity_polling(skip_pending=True) while True: print('sleep 10 sec') await asyncio.sleep(10) asyncio.run(main()) But for some reason unknown to me, it does not work. Here's what's in the console: INFO:apscheduler.scheduler:Adding job tentatively -- it will be properly scheduled when the scheduler starts INFO:apscheduler.scheduler:Added job "thread_maintaining_communication" to job store "default" sleep 10 secINFO:apscheduler.scheduler:Scheduler started sleep 10 sec sleep 10 sec sleep 10 sec
[ "I was given an answer, I hope that someone who faces the same problem will find this topic\nsheduler.add_job(thread_maintaining_communication,\"interval\", seconds=20)\nbot.add_custom_filter(asyncio_filters.StateFilter(bot))\nsheduler.start()\nloop = asyncio.get_event_loop() \nloop.run_until_complete(bot.polling(skip_pending=True))\n\n" ]
[ 0 ]
[]
[]
[ "apscheduler", "asynchronous", "python" ]
stackoverflow_0074641830_apscheduler_asynchronous_python.txt
Q: How can I create a new column from the combination of two columns? I want a new column that contains the number of times the same user_id and artist_id combination occurs. For example, if user_id = 0 and artist_id = 10 and this combination happens 5 times, I want to store the number 5 in each of the 5 rows in which it occurs. This code gives me the value, but I can't store it. treino.groupby(['user_id', 'artist_id']).count() A: IIUC you need a column that represents the size of each group in each row. Then you need to use groupby.transform. df["group_size"] = ( df.assign(group_size=1) .groupby(["user_id", "artist_id"])["group_size"] .transform("count") )
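A slightly shorter variant of the accepted answer, shown on toy data and assuming the dataframe is called treino as in the question; transform('size') avoids the helper column.

import pandas as pd

treino = pd.DataFrame({'user_id': [0, 0, 1], 'artist_id': [10, 10, 3]})  # toy data

# Broadcast the size of each (user_id, artist_id) group back onto every row.
treino['group_size'] = treino.groupby(['user_id', 'artist_id'])['user_id'].transform('size')
print(treino)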
How can I create a new column from the combination of two columns?
I want a new column that contains the number of times the same user_id and artist_id combination occurs. For example, if user_id = 0 and artist_id = 10 and this combination happens 5 times, I want to store the number 5 in each of the 5 rows in which it occurs. This code gives me the value, but I can't store it. treino.groupby(['user_id', 'artist_id']).count()
[ "IIUC you need a column that represents the size of each group in each row. Then you need to use groupby.transform.\ndf[\"group_size\"] = (\n df.assign(group_size=1)\n .groupby([\"user_id\", \"artist_id\"])[\"group_size\"]\n .transform(\"count\")\n)\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074646247_dataframe_pandas_python.txt
Q: Python CLI and Import Module I'm trying to figure out how to set up a python "project" as both a CLI command and an import "object". This probably has a simple answer(s), but I'm not as familiar with the technical terms so I'm not quite sure what I need to research. What I would like to have: mytool being a python "thing" (module/package?) I can run pip install -e /home/user/project/mytool/ (or whatever it needs to be) Then I can use it in these two ways: use mytool as a CLI command: CLI usage: mytool string_input CLI output: result_string import mytool in a different python script and use it there: In other script usage: import mytool result_string = mytool.process_string(string_input) This is my setup: $ tree ./mytool/ ./mytool/ β”œβ”€β”€ __init__.py β”œβ”€β”€ setup.py β”œβ”€β”€ mytool β”‚Β Β  β”œβ”€β”€ mytool_helper.py β”‚Β Β  └── mytool.py β”œβ”€β”€ mytool.egg-info β”‚Β Β  β”œβ”€β”€ dependency_links.txt β”‚Β Β  β”œβ”€β”€ entry_points.txt β”‚Β Β  β”œβ”€β”€ PKG-INFO β”‚Β Β  β”œβ”€β”€ requires.txt β”‚Β Β  β”œβ”€β”€ SOURCES.txt β”‚Β Β  └── top_level.txt β”œβ”€β”€ mytool_helper.py └── mytool.py 2 directories, 12 files I know I don't need mytool.py and mytool_helper.py in both folders, but this has been my rough trying to get something to work, before pruning. I've tried to pattern this from a couple of other python things, but I think I've just confused myself at this point. I have also read __init__.py can be empty, but it might be what I'm looking for, this is the content of __init__.py: from mytool.mytool import * contents of setup.py: """ The setup module(?) for mytool """ # To use a consistent encoding from os import path # Always prefer setuptools over distutils from setuptools import setup, find_packages version = '0.0.2' here = path.abspath(path.dirname(__file__)) setup( name='mytool', version=version, description='Do stuff with a string', packages=find_packages(), py_modules=['mytool'], classifiers=[ 'Programming Language :: Python :: 3.10' ], install_requires=[ 'argparse', 'pyyaml' ], extras_require={ }, entry_points={ 'console_scripts': [ 'mytool=mytool.mytool:main' ] } ) contents of mytool.py (simplified): import json import argparse from mytool.mytool_helper import HelperClass # various string manipulation functions #... def process_string(in_string): result = HelperClass.parse_string(in_string) # Do stuff with string return result.to_string() def main(): # This just gets the input via argparse input_string = _get_input_string() result_string = process_string(input_string) print(json.dumps(result_string, indent=4, default=str)) if __name__ == "__main__": main() The current behavior is: pip install -e /home/user/project/mytool/ CLI: mytool string_to_process In a script: from mytool import mytool result = mytool.process_string("string_to_process") How do I fix and clean my project so I can just use below while still having the CLI functionality? import mytool result = mytool.process_string("string_to_process") I don't need the helper directly "importable". A: Finally found a question that lead me to a solution, but I still welcome feedback and comments! 
This question: https://softwareengineering.stackexchange.com/q/243044 (they asked which is preferred) asks about single python file distribution where you have a file foo.py which has an internal thing (function,class,etc) called useful_thing They provide 2 file architectures so one could use from foo import useful_thing (which would also allow the functionality of: import foo foo.useful_thing ) The file structures they provide in their question are: $ tree -a ./foo_folder/ ./foo_folder/ β”œβ”€β”€ setup.py β”œβ”€β”€ README.rst β”œβ”€β”€ ...etc... └── foo.py 0 directories, 4 files and $ tree -a ./foo_folder/ ./foo_folder/ β”œβ”€β”€ setup.py β”œβ”€β”€ README.rst β”œβ”€β”€ ...etc... └── foo β”œβ”€β”€ module.py └── __init__.py 2 directories, 4 files where useful_thing is defined in module.py and the contents of __init__.py would be: from foo.module import useful_thing This question and the answer: https://softwareengineering.stackexchange.com/a/243045 (which says: use whichever works for you) Provides a couple of example packages that are distributions of just one module. Specifically https://pypi.org/project/six/ which I used the structure to get my tool working as both a CLI tool and import. This was the final result: $ tree ./mytool/ ./mytool/ β”œβ”€β”€ mytool.egg-info β”‚Β Β  β”œβ”€β”€ dependency_links.txt β”‚Β Β  β”œβ”€β”€ entry_points.txt β”‚Β Β  β”œβ”€β”€ PKG-INFO β”‚Β Β  β”œβ”€β”€ requires.txt β”‚Β Β  β”œβ”€β”€ SOURCES.txt β”‚Β Β  └── top_level.txt β”œβ”€β”€ mytool_helper.py β”œβ”€β”€ mytool.py └── setup.py 1 directory, 9 files contents of setup.py: """ The setup module for mytool """ # To use a consistent encoding from os import path # Always prefer setuptools over distutils from setuptools import setup, find_packages import mytool # NOTE: this is importing the file version = '0.0.2' here = path.abspath(path.dirname(__file__)) # NOTE: This is leftover, I should probably prune setup( name='mytool', version=version, description='Do stuff with a string', #packages=find_packages(), # NOTE: I commented this because I don't think it's technically a "package" anymore just a moduel py_modules=['mytool'], classifiers=[ 'Programming Language :: Python :: 3' ], install_requires=[ 'argparse', 'pyyaml' ], extras_require={ }, entry_points={ 'console_scripts': [ 'mytool=mytool:main' # NOTE: have to change this here because the file structure is different ] } ) contents of mytool.py (simplified): import json import argparse from mytool_helper import HelperClass # NOTE:You just import the helper file/module directly here rather than as a module as part of a package. # various string manipulation functions #... def process_string(in_string): result = HelperClass.parse_string(in_string) # Do stuff with string return result.to_string() def main(): # This just gets the input via argparse input_string = _get_input_string() result_string = process_string(input_string) print(json.dumps(result_string, indent=4, default=str)) if __name__ == "__main__": main() Which allows for it to be installed in an editable manner with pip: pip install -e /home/user/project/mytool/ The CLI usage: mytool string_to_process In script: import mytool result = mytool.process_string("string_to_process") I may have still confused terms here, but this should help someone with structure and file contents. I should probably also explain some of the other pieces, or provide links, for things like how to make the CLI part of the tool (https://setuptools.pypa.io/en/latest/userguide/entry_point.html)
Python CLI and Import Module
I'm trying to figure out how to set up a python "project" as both a CLI command and an import "object". This probably has a simple answer(s), but I'm not as familiar with the technical terms so I'm not quite sure what I need to research. What I would like to have: mytool being a python "thing" (module/package?) I can run pip install -e /home/user/project/mytool/ (or whatever it needs to be) Then I can use it in these two ways: use mytool as a CLI command: CLI usage: mytool string_input CLI output: result_string import mytool in a different python script and use it there: In other script usage: import mytool result_string = mytool.process_string(string_input) This is my setup: $ tree ./mytool/ ./mytool/ β”œβ”€β”€ __init__.py β”œβ”€β”€ setup.py β”œβ”€β”€ mytool β”‚Β Β  β”œβ”€β”€ mytool_helper.py β”‚Β Β  └── mytool.py β”œβ”€β”€ mytool.egg-info β”‚Β Β  β”œβ”€β”€ dependency_links.txt β”‚Β Β  β”œβ”€β”€ entry_points.txt β”‚Β Β  β”œβ”€β”€ PKG-INFO β”‚Β Β  β”œβ”€β”€ requires.txt β”‚Β Β  β”œβ”€β”€ SOURCES.txt β”‚Β Β  └── top_level.txt β”œβ”€β”€ mytool_helper.py └── mytool.py 2 directories, 12 files I know I don't need mytool.py and mytool_helper.py in both folders, but this has been my rough trying to get something to work, before pruning. I've tried to pattern this from a couple of other python things, but I think I've just confused myself at this point. I have also read __init__.py can be empty, but it might be what I'm looking for, this is the content of __init__.py: from mytool.mytool import * contents of setup.py: """ The setup module(?) for mytool """ # To use a consistent encoding from os import path # Always prefer setuptools over distutils from setuptools import setup, find_packages version = '0.0.2' here = path.abspath(path.dirname(__file__)) setup( name='mytool', version=version, description='Do stuff with a string', packages=find_packages(), py_modules=['mytool'], classifiers=[ 'Programming Language :: Python :: 3.10' ], install_requires=[ 'argparse', 'pyyaml' ], extras_require={ }, entry_points={ 'console_scripts': [ 'mytool=mytool.mytool:main' ] } ) contents of mytool.py (simplified): import json import argparse from mytool.mytool_helper import HelperClass # various string manipulation functions #... def process_string(in_string): result = HelperClass.parse_string(in_string) # Do stuff with string return result.to_string() def main(): # This just gets the input via argparse input_string = _get_input_string() result_string = process_string(input_string) print(json.dumps(result_string, indent=4, default=str)) if __name__ == "__main__": main() The current behavior is: pip install -e /home/user/project/mytool/ CLI: mytool string_to_process In a script: from mytool import mytool result = mytool.process_string("string_to_process") How do I fix and clean my project so I can just use below while still having the CLI functionality? import mytool result = mytool.process_string("string_to_process") I don't need the helper directly "importable".
[ "Finally found a question that lead me to a solution, but I still welcome feedback and comments!\nThis question:\nhttps://softwareengineering.stackexchange.com/q/243044\n(they asked which is preferred)\nasks about single python file distribution where you have a file foo.py which has an internal thing (function,class,etc) called useful_thing\nThey provide 2 file architectures so one could use from foo import useful_thing\n(which would also allow the functionality of:\nimport foo\nfoo.useful_thing\n\n)\nThe file structures they provide in their question are:\n$ tree -a ./foo_folder/\n./foo_folder/\nβ”œβ”€β”€ setup.py\nβ”œβ”€β”€ README.rst\nβ”œβ”€β”€ ...etc...\n└── foo.py\n\n0 directories, 4 files\n\nand\n$ tree -a ./foo_folder/\n./foo_folder/\nβ”œβ”€β”€ setup.py\nβ”œβ”€β”€ README.rst\nβ”œβ”€β”€ ...etc...\n└── foo\n β”œβ”€β”€ module.py\n └── __init__.py\n\n2 directories, 4 files\n\nwhere useful_thing is defined in module.py and the contents of __init__.py would be:\nfrom foo.module import useful_thing\n\nThis question and the answer:\nhttps://softwareengineering.stackexchange.com/a/243045\n(which says: use whichever works for you)\nProvides a couple of example packages that are distributions of just one module. Specifically https://pypi.org/project/six/ which I used the structure to get my tool working as both a CLI tool and import.\nThis was the final result:\n$ tree ./mytool/\n./mytool/\nβ”œβ”€β”€ mytool.egg-info\nβ”‚Β Β  β”œβ”€β”€ dependency_links.txt\nβ”‚Β Β  β”œβ”€β”€ entry_points.txt\nβ”‚Β Β  β”œβ”€β”€ PKG-INFO\nβ”‚Β Β  β”œβ”€β”€ requires.txt\nβ”‚Β Β  β”œβ”€β”€ SOURCES.txt\nβ”‚Β Β  └── top_level.txt\nβ”œβ”€β”€ mytool_helper.py\nβ”œβ”€β”€ mytool.py\n└── setup.py\n\n1 directory, 9 files\n\ncontents of setup.py:\n\"\"\"\nThe setup module for mytool\n\"\"\"\n\n# To use a consistent encoding\nfrom os import path\n\n# Always prefer setuptools over distutils\nfrom setuptools import setup, find_packages\n\nimport mytool # NOTE: this is importing the file\n\nversion = '0.0.2'\n\nhere = path.abspath(path.dirname(__file__)) # NOTE: This is leftover, I should probably prune\n\nsetup(\n name='mytool',\n version=version,\n\n description='Do stuff with a string',\n\n\n #packages=find_packages(), # NOTE: I commented this because I don't think it's technically a \"package\" anymore just a moduel\n py_modules=['mytool'],\n\n classifiers=[\n 'Programming Language :: Python :: 3'\n ],\n install_requires=[\n 'argparse',\n 'pyyaml'\n ],\n extras_require={\n },\n entry_points={\n 'console_scripts': [\n 'mytool=mytool:main' # NOTE: have to change this here because the file structure is different\n ]\n }\n)\n\ncontents of mytool.py (simplified):\nimport json\nimport argparse\n\nfrom mytool_helper import HelperClass # NOTE:You just import the helper file/module directly here rather than as a module as part of a package.\n\n# various string manipulation functions\n#...\n\ndef process_string(in_string):\n result = HelperClass.parse_string(in_string)\n # Do stuff with string\n return result.to_string()\n\n\ndef main():\n # This just gets the input via argparse\n input_string = _get_input_string()\n result_string = process_string(input_string)\n print(json.dumps(result_string, indent=4, default=str))\n\n\nif __name__ == \"__main__\":\n main()\n\nWhich allows for it to be installed in an editable manner with pip:\npip install -e /home/user/project/mytool/\n\nThe CLI usage: mytool string_to_process\nIn script:\n\nimport mytool\nresult = mytool.process_string(\"string_to_process\")\n\nI may have still confused terms 
here, but this should help someone with structure and file contents.\nI should probably also explain some of the other pieces, or provide links, for things like how to make the CLI part of the tool (https://setuptools.pypa.io/en/latest/userguide/entry_point.html)\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x", "python_import" ]
stackoverflow_0074647151_python_python_3.x_python_import.txt
Q: Center the flower using turtle I want to draw a flower with turtle. Although I am facing problem in centering the flower (0,0) should be flower's center or where turtle initially is spawned. How can I center it? import turtle import math turtle.speed(-1) def Flower(): global radius, num_of for i in range(num_of): turtle.setheading(i * 360/num_of) turtle.circle(radius*3.5/num_of,180) radius = 50 num_of = 10 Flower() I tried setting turtle to where it starts drawing but number of sides ruin it. A: Since the turtle draws from a edge, we need to move the turtle to compensate for the radius of the entire image. To simplify this move, we align the starting point of the image with one (X) axis. We also switch from absolute coordinates (setheading()) to relative coordinates (right()) so our initial rotational offset doesn't get lost or need to be added to every position: import turtle import math radius = 50 num_of = 13 def flower(): outer_radius = radius * 3.5 / math.pi turtle.penup() turtle.setx(-outer_radius) # assumes heading of 0 turtle.pendown() turtle.right(180 / num_of) for _ in range(num_of): turtle.right(180 - 360 / num_of) turtle.circle(radius * 3.5 / num_of, 180) turtle.speed('fastest') turtle.dot() # mark turtle starting location flower() turtle.hideturtle() turtle.done() To get the radius of the flower, we add up the diameters of all the petals and use the standard circumference to radius formula. There are probably simplifications, math-wise and code-wise, we could make.
Center the flower using turtle
I want to draw a flower with turtle. Although I am facing problem in centering the flower (0,0) should be flower's center or where turtle initially is spawned. How can I center it? import turtle import math turtle.speed(-1) def Flower(): global radius, num_of for i in range(num_of): turtle.setheading(i * 360/num_of) turtle.circle(radius*3.5/num_of,180) radius = 50 num_of = 10 Flower() I tried setting turtle to where it starts drawing but number of sides ruin it.
[ "Since the turtle draws from a edge, we need to move the turtle to compensate for the radius of the entire image. To simplify this move, we align the starting point of the image with one (X) axis. We also switch from absolute coordinates (setheading()) to relative coordinates (right()) so our initial rotational offset doesn't get lost or need to be added to every position:\nimport turtle\nimport math\n\nradius = 50\nnum_of = 13\n\ndef flower():\n outer_radius = radius * 3.5 / math.pi\n\n turtle.penup()\n turtle.setx(-outer_radius) # assumes heading of 0\n turtle.pendown()\n\n turtle.right(180 / num_of)\n\n for _ in range(num_of):\n turtle.right(180 - 360 / num_of)\n turtle.circle(radius * 3.5 / num_of, 180)\n\nturtle.speed('fastest')\nturtle.dot() # mark turtle starting location\n\nflower()\n\nturtle.hideturtle()\nturtle.done()\n\n\nTo get the radius of the flower, we add up the diameters of all the petals and use the standard circumference to radius formula. There are probably simplifications, math-wise and code-wise, we could make.\n" ]
[ 1 ]
[]
[]
[ "flower", "python", "python_turtle", "turtle_graphics" ]
stackoverflow_0074625622_flower_python_python_turtle_turtle_graphics.txt
Q: AWS SAM DockerBuildArgs It does not add them when creating the lambda image I am trying to test a lambda function locally, the function is created from the public docker image from aws, however I want to install my own python library from my github, according to the documentation AWS sam Build I have to add a variable to be taken in the Dockerfile like this: Dockerfile FROM public.ecr.aws/lambda/python:3.8 COPY lambda_preprocessor.py requirements.txt ./ RUN yum install -y git RUN python3.8 -m pip install -r requirements.txt -t . ARG GITHUB_TOKEN RUN python3.8 -m pip install git+https://${GITHUB_TOKEN}@github.com/repository/library.git -t . And to pass the GITHUB_TOKEN I can create a .json file containing the variables for the docker environment. .json file named env.json { "LambdaPreprocessor": { "GITHUB_TOKEN": "TOKEN_VALUE" } } And simply pass the file address in the sam build: sam build --use-container --container-env-var-file env.json Or directly the value without the .json with the command: sam build --use-container --container-env-var GLOBAL_ENV_VAR=TOKEN_VALUE My problem is that I don't get the GITHUB_TOKEN variable either with the .json file or by putting it directly in the command with --container-env-var GITHUB_TOKEN=TOKEN_VALUE Using sam build --use-container --container-env-var GLOBAL_ENV_VAR=TOKEN_VALUE --debug shows that it doesn't take it when creating the lambda image. The only way that has worked for me is to put the token directly in the Dockerfile not as an build argument. Promt output Building image for LambdaPreprocessor function Setting DockerBuildArgs: {} for LambdaPreprocessor function Does anyone know why this is happening, am I doing something wrong? If you need to see the template.yaml this is the lambda definition. template.yaml LambdaPreprocessor: Type: AWS::Serverless::Function Properties: PackageType: Image Architectures: - x86_64 Timeout: 180 Metadata: Dockerfile: Dockerfile DockerContext: ./lambda_preprocessor DockerTag: python3.8-v1 I'm doing it with vscode and wsl 2 with ubuntu 20.04 lts on windows 10 A: I am having this issue too. What I have learned is that in the Metadata field there is DockerBuildArgs: that you can also add. Example: Metadata: DockerBuildArgs: MY_VAR: <some variable> When I add this it does make it to the DockerBuildArgs dict.
AWS SAM DockerBuildArgs It does not add them when creating the lambda image
I am trying to test a lambda function locally, the function is created from the public docker image from aws, however I want to install my own python library from my github, according to the documentation AWS sam Build I have to add a variable to be taken in the Dockerfile like this: Dockerfile FROM public.ecr.aws/lambda/python:3.8 COPY lambda_preprocessor.py requirements.txt ./ RUN yum install -y git RUN python3.8 -m pip install -r requirements.txt -t . ARG GITHUB_TOKEN RUN python3.8 -m pip install git+https://${GITHUB_TOKEN}@github.com/repository/library.git -t . And to pass the GITHUB_TOKEN I can create a .json file containing the variables for the docker environment. .json file named env.json { "LambdaPreprocessor": { "GITHUB_TOKEN": "TOKEN_VALUE" } } And simply pass the file address in the sam build: sam build --use-container --container-env-var-file env.json Or directly the value without the .json with the command: sam build --use-container --container-env-var GLOBAL_ENV_VAR=TOKEN_VALUE My problem is that I don't get the GITHUB_TOKEN variable either with the .json file or by putting it directly in the command with --container-env-var GITHUB_TOKEN=TOKEN_VALUE Using sam build --use-container --container-env-var GLOBAL_ENV_VAR=TOKEN_VALUE --debug shows that it doesn't take it when creating the lambda image. The only way that has worked for me is to put the token directly in the Dockerfile not as an build argument. Promt output Building image for LambdaPreprocessor function Setting DockerBuildArgs: {} for LambdaPreprocessor function Does anyone know why this is happening, am I doing something wrong? If you need to see the template.yaml this is the lambda definition. template.yaml LambdaPreprocessor: Type: AWS::Serverless::Function Properties: PackageType: Image Architectures: - x86_64 Timeout: 180 Metadata: Dockerfile: Dockerfile DockerContext: ./lambda_preprocessor DockerTag: python3.8-v1 I'm doing it with vscode and wsl 2 with ubuntu 20.04 lts on windows 10
[ "I am having this issue too. What I have learned is that in the Metadata field there is DockerBuildArgs: that you can also add. Example:\n Metadata:\n DockerBuildArgs:\n MY_VAR: <some variable>\n\nWhen I add this it does make it to the DockerBuildArgs dict.\n" ]
[ 1 ]
[]
[]
[ "amazon_web_services", "aws_lambda", "aws_sam_cli", "docker", "python" ]
stackoverflow_0073507177_amazon_web_services_aws_lambda_aws_sam_cli_docker_python.txt
Q: Is there any option to reset label after printing it ? [Python, Tkinter] this is my code from tkinter import * root = Tk() root.title("MyTitle") root.iconbitmap("icon.ico") root.geometry("800x800") c = [] feature1 = IntVar() feature1.set(0) feature2 = IntVar() feature2.set(0) feature3 = IntVar() feature3.set(0) Checkbutton(root, text="Pizza", variable=feature1).pack() Checkbutton(root, text="Hamburger", variable=feature2).pack() Checkbutton(root, text="Cola", variable=feature3).pack() adej = "You've ordered" def receipt(): global c, adej if feature1.get() == 1: c.append("Pizza") if feature2.get() == 1: c.append("Hamburger") if feature3.get() == 1: c.append("Cola") adej = "You've ordered" for x in c: adej = adej + " " + x Label(root, text=adej).pack() adej = "" button1 = Button(root, text="Get Receipt", command=receipt) button1.pack() root.mainloop() edit_1: I've changed code like this but it still gives me duplicated answers as label. What i've meant as duplicate is for example if check box_1 and box_2 it returns me check box_1 and check box_2. But if i deselect check box_1 and click button it returns me check box_1, check box_2 and check box_2 (again). Here's changed code from tkinter import * root = Tk() root.title("MyTitle") root.iconbitmap("icon.ico") root.geometry("800x800") c = [] feature1 = IntVar() feature1.set(0) feature2 = IntVar() feature2.set(0) feature3 = IntVar() feature3.set(0) Checkbutton(root, text="Pizza", variable=feature1).pack() Checkbutton(root, text="Hamburger", variable=feature2).pack() Checkbutton(root, text="Cola", variable=feature3).pack() adej = "You've ordered" def receipt(): global c, adej if feature1.get() == 1: c.append("Pizza") if feature2.get() == 1: c.append("Hamburger") if feature3.get() == 1: c.append("Cola") adej = "You've ordered" for x in c: adej = adej + " " + x label1.config(text=adej) button1 = Button(root, text="Get Receipt", command=receipt) button1.pack() label1 = Label(root, text="") label1.pack() root.mainloop() edit2: Thanks to Mr. Tim Roberts for the answer. You can find his answer as marked solution below. By the meantime i've found another solution for this project with "StringVar". Here's the changed working codes i've done. from tkinter import * root = Tk() root.title("MyTitle") root.iconbitmap("icon.ico") root.geometry("250x150") # creating string variables that will be connected to check boxes feature1 = StringVar() feature2 = StringVar() feature3 = StringVar() # attaching variables to checkboxes Checkbutton(root, text="Pizza", variable=feature1, onvalue="Pizza ", offvalue="").pack() Checkbutton(root, text="Hamburger", variable=feature2, onvalue="Hamburger ", offvalue="").pack() Checkbutton(root, text="Cola", variable=feature3, onvalue="Cola ", offvalue="").pack() def receipt(): adej = "You've ordered " if len(feature1.get()+feature2.get()+feature3.get()) == 0: label1.config(text="You didn't order anything.") else: label1.config(text=adej + feature1.get() + feature2.get() + feature3.get()) button1 = Button(root, text="Get Receipt", command=receipt) button1.pack() label1 = Label(root, text="") label1.pack() root.mainloop() A: c should not be a global. You need to rebuild it from scratch every time the button is clicked. adej also does not need to be a global. This works. Also, delete the global definitions of c and adej. def receipt(): c = [] if feature1.get() == 1: c.append("Pizza") if feature2.get() == 1: c.append("Hamburger") if feature3.get() == 1: c.append("Cola") adej = "You've ordered " + ' '.join(c) label1.config(text=adej)
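A small generalisation of the accepted fix, assuming the IntVar version of the checkboxes from the question: keeping the variables in a dict means the receipt handler needs no per-checkbox if statements.

items = {'Pizza': feature1, 'Hamburger': feature2, 'Cola': feature3}

def receipt():
    # Collect the names of every ticked box, then build the label text once.
    chosen = [name for name, var in items.items() if var.get() == 1]
    if chosen:
        label1.config(text="You've ordered " + ' '.join(chosen))
    else:
        label1.config(text="You didn't order anything.")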
Is there any option to reset label after printing it ? [Python, Tkinter]
this is my code from tkinter import * root = Tk() root.title("MyTitle") root.iconbitmap("icon.ico") root.geometry("800x800") c = [] feature1 = IntVar() feature1.set(0) feature2 = IntVar() feature2.set(0) feature3 = IntVar() feature3.set(0) Checkbutton(root, text="Pizza", variable=feature1).pack() Checkbutton(root, text="Hamburger", variable=feature2).pack() Checkbutton(root, text="Cola", variable=feature3).pack() adej = "You've ordered" def receipt(): global c, adej if feature1.get() == 1: c.append("Pizza") if feature2.get() == 1: c.append("Hamburger") if feature3.get() == 1: c.append("Cola") adej = "You've ordered" for x in c: adej = adej + " " + x Label(root, text=adej).pack() adej = "" button1 = Button(root, text="Get Receipt", command=receipt) button1.pack() root.mainloop() edit_1: I've changed code like this but it still gives me duplicated answers as label. What i've meant as duplicate is for example if check box_1 and box_2 it returns me check box_1 and check box_2. But if i deselect check box_1 and click button it returns me check box_1, check box_2 and check box_2 (again). Here's changed code from tkinter import * root = Tk() root.title("MyTitle") root.iconbitmap("icon.ico") root.geometry("800x800") c = [] feature1 = IntVar() feature1.set(0) feature2 = IntVar() feature2.set(0) feature3 = IntVar() feature3.set(0) Checkbutton(root, text="Pizza", variable=feature1).pack() Checkbutton(root, text="Hamburger", variable=feature2).pack() Checkbutton(root, text="Cola", variable=feature3).pack() adej = "You've ordered" def receipt(): global c, adej if feature1.get() == 1: c.append("Pizza") if feature2.get() == 1: c.append("Hamburger") if feature3.get() == 1: c.append("Cola") adej = "You've ordered" for x in c: adej = adej + " " + x label1.config(text=adej) button1 = Button(root, text="Get Receipt", command=receipt) button1.pack() label1 = Label(root, text="") label1.pack() root.mainloop() edit2: Thanks to Mr. Tim Roberts for the answer. You can find his answer as marked solution below. By the meantime i've found another solution for this project with "StringVar". Here's the changed working codes i've done. from tkinter import * root = Tk() root.title("MyTitle") root.iconbitmap("icon.ico") root.geometry("250x150") # creating string variables that will be connected to check boxes feature1 = StringVar() feature2 = StringVar() feature3 = StringVar() # attaching variables to checkboxes Checkbutton(root, text="Pizza", variable=feature1, onvalue="Pizza ", offvalue="").pack() Checkbutton(root, text="Hamburger", variable=feature2, onvalue="Hamburger ", offvalue="").pack() Checkbutton(root, text="Cola", variable=feature3, onvalue="Cola ", offvalue="").pack() def receipt(): adej = "You've ordered " if len(feature1.get()+feature2.get()+feature3.get()) == 0: label1.config(text="You didn't order anything.") else: label1.config(text=adej + feature1.get() + feature2.get() + feature3.get()) button1 = Button(root, text="Get Receipt", command=receipt) button1.pack() label1 = Label(root, text="") label1.pack() root.mainloop()
[ "c should not be a global. You need to rebuild it from scratch every time the button is clicked. adej also does not need to be a global.\nThis works. Also, delete the global definitions of c and adej.\ndef receipt():\n c = []\n if feature1.get() == 1:\n c.append(\"Pizza\")\n if feature2.get() == 1:\n c.append(\"Hamburger\")\n if feature3.get() == 1:\n c.append(\"Cola\")\n adej = \"You've ordered \" + ' '.join(c)\n label1.config(text=adej)\n\n" ]
[ 1 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074647460_python_tkinter.txt
Q: Odoo Smartbutton access rights for different model I have added smartbutton in res.partner form view header that opens the current partner helpdesk tickets (helpdesk.ticket model). Smartbutton view (If i remove this code then button is removed and users can freely open partner form view) <odoo> <data> <record id="helpdesk_ticket_smart_button" model="ir.ui.view"> <field name="name">partner.helpdesk.ticket.smart.buttons</field> <field name="model">res.partner</field> <field name="inherit_id" ref="base.view_partner_form" /> <field name="arch" type="xml"> <div name="button_box" position="inside"> <button class="oe_stat_button" type="object" name="action_view_helpdesk_tickets" icon="fa-ticket"> <field string="Tickets" name="helpdesk_ticket_count" widget="statinfo"/> </button> </div> </field> </record> </data> </odoo> Unfortunately now, after a non-helpdesk user tries to open any partner form he gets blocked due to Access Error. What can i change so that any odoo user can open partner form view without getting blocked but only helpdesk users can see the Tickets smartbutton? Any help would be appreciated. Let me know if you need more information. A: You can set a group on your extension view: <field name="groups_id" eval="[(4,ref('helpdesk.group_helpdesk_user'))]"/> Will look like this: <odoo> <data> <record id="helpdesk_ticket_smart_button" model="ir.ui.view"> <field name="name">partner.helpdesk.ticket.smart.buttons</field> <field name="model">res.partner</field> <field name="inherit_id" ref="base.view_partner_form" /> <field name="groups_id" eval="[(4,ref('helpdesk.group_helpdesk_user'))]"/> <field name="arch" type="xml"> <div name="button_box" position="inside"> <button class="oe_stat_button" type="object" name="action_view_helpdesk_tickets" icon="fa-ticket"> <field string="Tickets" name="helpdesk_ticket_count" widget="statinfo"/> </button> </div> </field> </record> </data> </odoo> Odoo will only load the extension view for users in the groups that are set on the view. A: While the answer provided by CZoellner should indeed work, another and more robust solution would be to add a groups attribute to your button definition: <button class="oe_stat_button" type="object" name="action_view_helpdesk_tickets" groups="helpdesk.group_helpdesk_user"> <field string="Tickets" name="helpdesk_ticket_count" widget="statinfo"/> </button> The reason this solution is more robust in my opinion is because you or some of your collegues may later want to add some more changes to this view, but those changes may have to be available for everybody.
Odoo Smartbutton access rights for different model
I have added smartbutton in res.partner form view header that opens the current partner helpdesk tickets (helpdesk.ticket model). Smartbutton view (If i remove this code then button is removed and users can freely open partner form view) <odoo> <data> <record id="helpdesk_ticket_smart_button" model="ir.ui.view"> <field name="name">partner.helpdesk.ticket.smart.buttons</field> <field name="model">res.partner</field> <field name="inherit_id" ref="base.view_partner_form" /> <field name="arch" type="xml"> <div name="button_box" position="inside"> <button class="oe_stat_button" type="object" name="action_view_helpdesk_tickets" icon="fa-ticket"> <field string="Tickets" name="helpdesk_ticket_count" widget="statinfo"/> </button> </div> </field> </record> </data> </odoo> Unfortunately now, after a non-helpdesk user tries to open any partner form he gets blocked due to Access Error. What can i change so that any odoo user can open partner form view without getting blocked but only helpdesk users can see the Tickets smartbutton? Any help would be appreciated. Let me know if you need more information.
[ "You can set a group on your extension view:\n<field name=\"groups_id\" eval=\"[(4,ref('helpdesk.group_helpdesk_user'))]\"/>\nWill look like this:\n<odoo>\n <data>\n <record id=\"helpdesk_ticket_smart_button\" model=\"ir.ui.view\">\n <field name=\"name\">partner.helpdesk.ticket.smart.buttons</field>\n <field name=\"model\">res.partner</field>\n <field name=\"inherit_id\" ref=\"base.view_partner_form\" />\n <field name=\"groups_id\" eval=\"[(4,ref('helpdesk.group_helpdesk_user'))]\"/>\n <field name=\"arch\" type=\"xml\">\n <div name=\"button_box\" position=\"inside\">\n <button class=\"oe_stat_button\" type=\"object\" name=\"action_view_helpdesk_tickets\" icon=\"fa-ticket\">\n <field string=\"Tickets\" name=\"helpdesk_ticket_count\" widget=\"statinfo\"/>\n </button>\n </div>\n </field>\n </record>\n </data>\n</odoo>\n\nOdoo will only load the extension view for users in the groups that are set on the view.\n", "While the answer provided by CZoellner should indeed work, another and more robust solution would be to add a groups attribute to your button definition:\n<button class=\"oe_stat_button\" type=\"object\" name=\"action_view_helpdesk_tickets\" groups=\"helpdesk.group_helpdesk_user\">\n <field string=\"Tickets\" name=\"helpdesk_ticket_count\" widget=\"statinfo\"/>\n</button>\n\nThe reason this solution is more robust in my opinion is because you or some of your collegues may later want to add some more changes to this view, but those changes may have to be available for everybody.\n" ]
[ 1, 1 ]
[]
[]
[ "access_rights", "odoo", "odoo_14", "python" ]
stackoverflow_0074643745_access_rights_odoo_odoo_14_python.txt
Q: Python - Passing a function with multiple arguments into another function I have this method that I want to pass into another function. def get_service_enums(context, enum): svc = Service(context) return svc.get_enum(enum) I want to pass this function in as a parameter to another class. ColumnDef(enum_values=my_func) Ideally, my_func is get_service_enums. However, get_service_enums has a second parameter, enum, that I want to pass in at the same time I pass in get_service_enums. How can I do this without actually invoking get_service_enums with parentheses? A: using partial from functools to create a new function that only takes the first argument. from functools import partial def get_service_enums(context, enum): print(context, enum) partial_function = partial(get_service_enums, enum="second_thing") partial_function("first_thing") first_thing second_thing A: Does it not work for you to pass the function and its argument separately ColumnDef(enum_values=get_service_enums, enum) with the class ColumnDef in charge of passing in enum when the function is invoked? If not, functools.partial is your friend: import functools # New version of get_service_enums with enum = 42 my_func = functools.partial(get_service_enums, enum=42) my_func('hello') # hello, 42 ColumnDef(enum_values=my_func)
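If the enum value is only known at the call site, a lambda is another common way to defer the call without writing parentheses after get_service_enums; ColumnDef and the argument names are taken from the question and used purely illustratively.

my_enum = 42  # placeholder value

# The lambda captures my_enum now and still leaves context to be supplied later.
ColumnDef(enum_values=lambda context: get_service_enums(context, my_enum))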
Python - Passing a function with multiple arguments into another function
I have this method that I want to pass into another function. def get_service_enums(context, enum): svc = Service(context) return svc.get_enum(enum) I want to pass this function in as a parameter to another class. ColumnDef(enum_values=my_func) Ideally, my_func is get_service_enums. However, get_service_enums has a second parameter, enum, that I want to pass in at the same time I pass in get_service_enums. How can I do this without actually invoking get_service_enums with parentheses?
[ "using partial from functools to create a new function that only takes the first argument.\nfrom functools import partial\n\ndef get_service_enums(context, enum):\n print(context, enum)\n\npartial_function = partial(get_service_enums, enum=\"second_thing\")\npartial_function(\"first_thing\")\n\nfirst_thing second_thing\n\n", "Does it not work for you to pass the function and its argument separately\nColumnDef(enum_values=get_service_enums, enum)\n\nwith the class ColumnDef in charge of passing in enum when the function is invoked?\nIf not, functools.partial is your friend:\nimport functools\n\n# New version of get_service_enums with enum = 42\nmy_func = functools.partial(get_service_enums, enum=42)\nmy_func('hello') # hello, 42\n\nColumnDef(enum_values=my_func)\n\n" ]
[ 1, 1 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074648077_python_python_3.x.txt
Q: Cannot Keep My Datetime Data and 'No' Word in My Pandas DataFrame I have a pandas dataframe from csv and I want to clean it using Regex in Python. The data that I have look like this: Name Date Status Number A/bCDef 2022-07-11 Yes io123-07 GhIjK-l 2022-07-12 No io456-08 I'm trying to clean the dataframe so it will be easier to process, but the thing is, my code deletes the date, the word 'no', and the hyphen. This the data that I got so far: name date status number abcdef yes io ghijkl no io This is the code that I found on the internet and tried on my dataframe: def regex_values(cols): nltk.download("stopwords") stemmer = nltk.SnowballStemmer('english') stopword = set(stopwords.words('english')) cols = str(cols).lower() cols = re.sub('\[.*?\]', '', cols) cols = re.sub('https?://\S+|www\.\S+', '', cols) cols = re.sub('<.*?>+/', '', cols) cols = re.sub('[%s]' % re.escape(string.punctuation), '', cols) cols = re.sub('\n', '', cols) cols = re.sub('\w*\d\w*', '', cols) cols = re.sub(r'^\s+|\s+$', '', cols) cols = re.sub(' +', ' ', cols) cols = re.sub(r'\b(\w+)(?:\W\1\b)+', 'r\1', cols, flags = re.IGNORECASE) cols = [word for word in cols.split(' ') if word not in stopword] cols = " ".join(cols) return cols This is the pandas dataframe that I wish to have at the end: name date status number abcdef 2022-07-11 yes io123-07 ghijkl 2022-07-12 no io456-08 I'm new to Regex so I wish anyone can help me to code the right code. Or if there is a simpler way to clean my data I would much appreciate the help. Thanks in advance. A: can you try this: df = df.applymap(lambda s: s.lower() if type(s) == str else s) #lower string values df.columns = df.columns.str.lower() #lower for columns df['name']=df['name'].str.replace(r'\W+', '') #remove any non-word character #output ''' name date status number 0 abcdef 2022-07-11 yes io123-07 1 ghijkl 2022-07-12 no io456-08 '''
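One version-dependent caveat about the accepted answer, hedged because it depends on which pandas release is installed: since pandas 2.0, Series.str.replace treats the pattern as a literal string unless regex=True is passed, so the \W+ cleanup should be written explicitly as:

df['name'] = df['name'].str.replace(r'\W+', '', regex=True)  # strip non-word characters only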
Cannot Keep My Datetime Data and 'No' Word in My Pandas DataFrame
I have a pandas dataframe from csv and I want to clean it using Regex in Python. The data that I have look like this: Name Date Status Number A/bCDef 2022-07-11 Yes io123-07 GhIjK-l 2022-07-12 No io456-08 I'm trying to clean the dataframe so it will be easier to process, but the thing is, my code deletes the date, the word 'no', and the hyphen. This the data that I got so far: name date status number abcdef yes io ghijkl no io This is the code that I found on the internet and tried on my dataframe: def regex_values(cols): nltk.download("stopwords") stemmer = nltk.SnowballStemmer('english') stopword = set(stopwords.words('english')) cols = str(cols).lower() cols = re.sub('\[.*?\]', '', cols) cols = re.sub('https?://\S+|www\.\S+', '', cols) cols = re.sub('<.*?>+/', '', cols) cols = re.sub('[%s]' % re.escape(string.punctuation), '', cols) cols = re.sub('\n', '', cols) cols = re.sub('\w*\d\w*', '', cols) cols = re.sub(r'^\s+|\s+$', '', cols) cols = re.sub(' +', ' ', cols) cols = re.sub(r'\b(\w+)(?:\W\1\b)+', 'r\1', cols, flags = re.IGNORECASE) cols = [word for word in cols.split(' ') if word not in stopword] cols = " ".join(cols) return cols This is the pandas dataframe that I wish to have at the end: name date status number abcdef 2022-07-11 yes io123-07 ghijkl 2022-07-12 no io456-08 I'm new to Regex so I wish anyone can help me to code the right code. Or if there is a simpler way to clean my data I would much appreciate the help. Thanks in advance.
[ "can you try this:\ndf = df.applymap(lambda s: s.lower() if type(s) == str else s) #lower string values\ndf.columns = df.columns.str.lower() #lower for columns\ndf['name']=df['name'].str.replace(r'\\W+', '') #remove any non-word character\n\n#output\n'''\n name date status number\n0 abcdef 2022-07-11 yes io123-07\n1 ghijkl 2022-07-12 no io456-08\n'''\n\n" ]
[ 0 ]
[]
[]
[ "datetime", "nlp", "python", "python_regex" ]
stackoverflow_0074636944_datetime_nlp_python_python_regex.txt
Q: How to sum duplicate columns in dataframe and return nan if at least one value is nan I have a dataframe with duplicate columns (number not known a priori) like this example: a a a b b 0 1 1 1 1 1 1 1 nan 1 1 1 I need to be able to aggregate the columns by summing their values (by rows) and returning NaN if at least one value, in one of the columns among the duplicates, is NaN. I have tried this code: import numpy as np import pandas as pd df = pd.DataFrame([[1,1,1,1,1], [1,np.nan,1,1,1]], columns=['a','a','a','b','b']) df = df.groupby(axis=1, level=0).sum() The result i get is as follows, but it does not return NaN in the second row of column 'a'. a b 0 3 2 1 2 2 In the documentation of pandas.DataFrame.sum, there is the skipna parameter which might suit my case. But I am using the function pandas.core.groupby.GroupBy.sum which does not have this parameter, but the min_count which does what i want but the number is not known in advance and would be different for each duplicate column. For example, a min_count=3 solves the problem for column 'a', but obviously returns NaN on the whole of column 'b'. The result I want to achieve is: a b 0 3 2 1 nan 2 A: One workaround might be to use apply to get the DataFrame.sum: df.groupby(level=0, axis=1).apply(lambda x: x.sum(axis=1, skipna=False)) Output: a b 0 3.0 2.0 1 NaN 2.0 A: Another possible solution: cols, ldf = df.columns.unique(), len(df) pd.DataFrame( np.reshape([sum(df.loc[i, x]) for i in range(ldf) for x in cols], (len(cols), ldf)), columns=cols) Output: a b 0 3.0 2.0 1 NaN 2.0
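Another possible approach, sketched on the same toy dataframe: sum normally, then mask out any group that contained a NaN; this assumes only that a single NaN should blank the whole group sum, as the question asks.

import numpy as np
import pandas as pd

df = pd.DataFrame([[1, 1, 1, 1, 1], [1, np.nan, 1, 1, 1]],
                  columns=['a', 'a', 'a', 'b', 'b'])

sums = df.groupby(level=0, axis=1).sum()            # NaNs skipped here
has_nan = df.isna().groupby(level=0, axis=1).any()  # which groups had a NaN
result = sums.mask(has_nan)                         # put the NaNs back
print(result)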
How to sum duplicate columns in dataframe and return nan if at least one value is nan
I have a dataframe with duplicate columns (number not known a priori) like this example: a a a b b 0 1 1 1 1 1 1 1 nan 1 1 1 I need to be able to aggregate the columns by summing their values (by rows) and returning NaN if at least one value, in one of the columns among the duplicates, is NaN. I have tried this code: import numpy as np import pandas as pd df = pd.DataFrame([[1,1,1,1,1], [1,np.nan,1,1,1]], columns=['a','a','a','b','b']) df = df.groupby(axis=1, level=0).sum() The result i get is as follows, but it does not return NaN in the second row of column 'a'. a b 0 3 2 1 2 2 In the documentation of pandas.DataFrame.sum, there is the skipna parameter which might suit my case. But I am using the function pandas.core.groupby.GroupBy.sum which does not have this parameter, but the min_count which does what i want but the number is not known in advance and would be different for each duplicate column. For example, a min_count=3 solves the problem for column 'a', but obviously returns NaN on the whole of column 'b'. The result I want to achieve is: a b 0 3 2 1 nan 2
[ "One workaround might be to use apply to get the DataFrame.sum:\ndf.groupby(level=0, axis=1).apply(lambda x: x.sum(axis=1, skipna=False))\n\nOutput:\n a b\n0 3.0 2.0\n1 NaN 2.0\n\n", "Another possible solution:\ncols, ldf = df.columns.unique(), len(df)\n\npd.DataFrame(\n np.reshape([sum(df.loc[i, x]) for i in range(ldf) for x in cols],\n (len(cols), ldf)), \n columns=cols)\n\nOutput:\n a b\n0 3.0 2.0\n1 NaN 2.0\n\n" ]
[ 2, 0 ]
[]
[]
[ "dataframe", "numpy", "pandas", "python" ]
stackoverflow_0074646196_dataframe_numpy_pandas_python.txt
Q: Will converting to PySDL2 make my app run faster than it does under PyGame? I've written a little toy in Python using Pygame. It generates critters (a circle with a directional line, not an image) to wander around the screen. I'm interested in making it more sophisticated, but I'm running into serious performance problems. As the number of critters on the screen passes 20, the frame rate drops rapidly from 60fps as far as 11fps with 50 on the screen. I've gone over my (very simple) code a number of different ways, even profiling with cProfile, without finding any way to optimize. To make a long story somewhat less long, I think I've concluded PyGame just isn't cut out for what I'm asking it to do. Consequently, I'm looking to convert to something else. C++ is the obvious answer, but as this is just a toy I'd rather code in Python, if possible. Especially because it's already written. In looking at C++, I discovered that there's an SDL (wrapper? bindings? Not sure the term) for Python: PySDL2. Thanks for sticking with me. Now the payoff: is there any reason to believe that converting my app to use PySDL2 will make it faster? Especially considering PyGame apparently uses SDL under the hood (somehow). EDIT: As requested: import pygame from pygame import gfxdraw import pygame.locals import os import math import random import time (INSERT CONTENTS OF VECTOR.PY FROM https://gist.github.com/mcleonard/5351452 HERE) pygame.init() #some global constants BLUE = (0, 0, 255) WHITE = (255,255,255) diagnostic = False SPAWN_TIME = 1 #number of seconds between creating new critters FLOCK_LIMIT = 30 #number of critters at which the flock begins being culled GUIDs = [0] #list of guaranteed unique IDs for identifying each critter # Set the position of the OS window position = (30, 30) os.environ['SDL_VIDEO_WINDOW_POS'] = str(position[0]) + "," + str(position[1]) # Set the position, width and height of the screen [width, height] size_x = 1000 size_y = 500 size = (size_x, size_y) FRAMERATE = 60 SECS_FOR_DYING = 1 screen = pygame.display.set_mode(size) screen.set_alpha(None) pygame.display.set_caption("My Game") # Used to manage how fast the screen updates clock = pygame.time.Clock() def random_float(lower, upper): num = random.randint(lower*1000, upper*1000) return num/1000 def new_GUID(): num = GUIDs[-1] num = num + 1 while num in GUIDs: num += 1 GUIDs.append(num) return num class HeatBlock: def __init__(self,_tlx,_tly,h,w): self.tlx = int(_tlx) self.tly = int(_tly) self.height = int(h)+1 self.width = int(w) self.heat = 255.0 self.registered = False def register_tresspasser(self): self.registered = True self.heat = max(self.heat - 1, 0) def cool_down(self): if not self.registered: self.heat = min(self.heat + 0.1, 255) self.registered = False def hb_draw_self(self): screen.fill((255,int(self.heat),int(self.heat)), [self.tlx, self.tly, self.width, self.height]) class HeatMap: def __init__(self, _h, _v): self.h_freq = _h #horizontal frequency self.h_rez = size_x/self.h_freq #horizontal resolution self.v_freq = _v #vertical frequency self.v_rez = size_y/self.v_freq #vertical resolution self.blocks = [] def make_map(self): h_size = size_x/self.h_freq v_size = size_y/self.v_freq for h_count in range(0, self.h_freq): TLx = h_count * h_size #TopLeft corner, x col = [] for v_count in range(0, self.v_freq): TLy = v_count * v_size #TopLeft corner, y col.append(HeatBlock(TLx,TLy,v_size,h_size)) self.blocks.append(col) def hm_draw_self(self): for col in self.blocks: for block in col: block.cool_down() 
block.hb_draw_self() def register(self, x, y): #convert the given coordinates of the trespasser into a col/row block index col = max(int(math.floor(x / self.h_rez)),0) row = max(int(math.floor(y / self.v_rez)),0) self.blocks[col][row].register_tresspasser() class Critter: def __init__(self): self.color = (random.randint(1, 200), random.randint(1, 200), random.randint(1, 200)) self.linear_speed = random_float(20, 100) self.radius = int(round(10 * (100/self.linear_speed))) self.angular_speed = random_float(0.1, 2) self.x = int(random.randint(self.radius*2, size_x - (self.radius*2))) self.y = int(random.randint(self.radius*2, size_y - (self.radius*2))) self.orientation = Vector(0, 1).rotate(random.randint(-180, 180)) self.sensor = Vector(0, 20) self.sensor_length = 20 self.new_orientation = self.orientation self.draw_bounds = False self.GUID = new_GUID() self.condition = 0 #0 = alive, [1-fps] = dying, >fps = dead self.delete_me = False def c_draw_self(self): #if we're alive and not dying, draw our normal self if self.condition == 0: #diagnostic if self.draw_bounds: pygame.gfxdraw.rectangle(screen, [int(self.x), int(self.y), 1, 1], BLUE) temp = self.orientation * (self.linear_speed * 20) pygame.gfxdraw.line(screen, int(self.x), int(self.y), int(self.x + temp[0]), int(self.y + temp[1]), BLUE) #if there's a new orientation, match it gradually temp = self.new_orientation * self.linear_speed #draw my body pygame.gfxdraw.aacircle(screen, int(self.x), int(self.y), self.radius, self.color) #draw a line indicating my new direction pygame.gfxdraw.line(screen, int(self.x), int(self.y), int(self.x + temp[0]), int(self.y + temp[1]), BLUE) #draw my sensor (a line pointing forward) self.sensor = self.orientation.normalize() * self.sensor_length pygame.gfxdraw.line(screen, int(self.x), int(self.y), int(self.x + self.sensor[0]), int(self.y + self.sensor[1]), BLUE) #otherwise we're dying, draw our dying animation elif 1 <= self.condition <= FRAMERATE*SECS_FOR_DYING: #draw some lines in a spinningi circle for num in range(0,10): line = Vector(0, 1).rotate((num*(360/10))+(self.condition*23)) line = line*self.radius pygame.gfxdraw.line(screen, int(self.x), int(self.y), int(self.x+line[0]), int(self.y+line[1]), self.color) def print_self(self): #diagnostic print("==============") print("radius:", self.radius) print("color:", self.color) print("linear_speed:", self.linear_speed) print("angular_speed:", self.angular_speed) print("x:", self.x) print("y:", int(self.y)) print("orientation:", self.orientation) def avoid_others(self, _flock): for _critter in _flock: #if the critter isn't ME... if _critter.GUID is not self.GUID and _critter.condition == 0: #and it's touching me... if self.x - _critter.x <= self.radius + _critter.radius: me = Vector(self.x, int(self.y)) other_guy = Vector(_critter.x, _critter.y) distance = me - other_guy #give me new orientation that's away from the other guy if distance.norm() <= ((self.radius) + (_critter.radius)): new_direction = me - other_guy self.orientation = self.new_orientation = new_direction.normalize() def update_location(self, elapsed): boundary = '?' 
while boundary != 'X': boundary = self.out_of_bounds() if boundary == 'N': self.orientation = self.new_orientation = Vector(0, 1).rotate(random.randint(-20, 20)) self.y = (self.radius) + 2 elif boundary == 'S': self.orientation = self.new_orientation = Vector(0,-1).rotate(random.randint(-20, 20)) self.y = (size_y - (self.radius)) - 2 elif boundary == 'E': self.orientation = self.new_orientation = Vector(-1,0).rotate(random.randint(-20, 20)) self.x = (size_x - (self.radius)) - 2 elif boundary == 'W': self.orientation = self.new_orientation = Vector(1,0).rotate(random.randint(-20, 20)) self.x = (self.radius) + 2 point = Vector(self.x, self.y) self.x, self.y = (point + (self.orientation * (self.linear_speed*(elapsed/1000)))) boundary = self.out_of_bounds() def update_orientation(self, elapsed): #randomly choose a new direction, from time to time if random.randint(0, 100) > 98: self.choose_new_orientation() difference = self.orientation.argument() - self.new_orientation.argument() self.orientation = self.orientation.rotate((difference * (self.angular_speed*(elapsed/1000)))) def still_alive(self, elapsed): return_value = True #I am still alive if self.condition == 0: return_value = True elif self.condition <= FRAMERATE*SECS_FOR_DYING: self.condition = self.condition + (elapsed/17) return_value = True if self.condition > FRAMERATE*SECS_FOR_DYING: return_value = False return return_value def choose_new_orientation(self): if self.new_orientation: if (self.orientation.argument() - self.new_orientation.argument()) < 5: rotation = random.randint(-300, 300) self.new_orientation = self.orientation.rotate(rotation) def out_of_bounds(self): if self.x >= (size_x - (self.radius)): return 'E' elif self.y >= (size_y - (self.radius)): return 'S' elif self.x <= (0 + (self.radius)): return 'W' elif self.y <= (0 + (self.radius)): return 'N' else: return 'X' # -------- Main Program Loop ----------- # generate critters flock = [Critter()] heatMap = HeatMap(60, 40) heatMap.make_map() last_spawn = time.clock() run_time = time.perf_counter() frame_count = 0 max_time = 0 ms_elapsed = 1 avg_fps = [1] # Loop until the user clicks the close button. done = False while not done: # --- Main event loop only processes one event frame_count = frame_count + 1 for event in pygame.event.get(): if event.type == pygame.QUIT: done = True # --- Game logic should go here #check if it's time to make another critter if time.clock() - last_spawn > SPAWN_TIME: flock.append(Critter()) last_spawn = time.clock() if len(flock) >= FLOCK_LIMIT: #if we're over the flock limit, cull the herd counter = FLOCK_LIMIT for critter in flock[0:len(flock)-FLOCK_LIMIT]: #this code allows a critter to be "dying" for a while, to play an animation if critter.condition == 0: critter.condition = 1 elif not critter.still_alive(ms_elapsed): critter.delete_me = True counter = 0 #delete all the critters that have finished dying while counter < len(flock): if flock[counter].delete_me: del flock[counter] else: counter = counter+1 #----loop on all critters once, doing all functions for each critter for critter in flock: if critter.condition == 0: critter.avoid_others(flock) if critter.condition == 0: heatMap.register(critter.x, critter.y) critter.update_location(ms_elapsed) critter.update_orientation(ms_elapsed) if diagnostic: critter.print_self() #----alternately, loop for each function. 
Speed seems to be similar either way #for critter in flock: # if critter.condition == 0: # critter.update_location(ms_elapsed) #for critter in flock: # if critter.condition == 0: # critter.update_orientation(ms_elapsed) # --- Screen-clearing code goes here # Here, we clear the screen to white. Don't put other drawing commands screen.fill(WHITE) # --- Drawing code should go here #draw the heat_map heatMap.hm_draw_self() for critter in flock: critter.c_draw_self() #draw the framerate myfont = pygame.font.SysFont("monospace", 15) #average the framerate over 60 frames temp = sum(avg_fps)/float(len(avg_fps)) text = str(round(((1/temp)*1000),0))+"FPS | "+str(len(flock))+" Critters" label = myfont.render(text, 1, (0, 0, 0)) screen.blit(label, (5, 5)) # --- Go ahead and update the screen with what we've drawn. pygame.display.update() # --- Limit to 60 frames per second #only run for 30 seconds if time.perf_counter()-run_time >= 30: done = True #limit to 60fps #add this frame's time to the list avg_fps.append(ms_elapsed) #remove any old frames while len(avg_fps) > 60: del avg_fps[0] ms_elapsed = clock.tick(FRAMERATE) #track longest frame if ms_elapsed > max_time: max_time = ms_elapsed #print some stats once the program is finished print("Count:", frame_count) print("Max time since last flip:", str(max_time)+"ms") print("Total Time:", str(int(time.perf_counter()-run_time))+"s") print("Average time for a flip:", str(int(((time.perf_counter()-run_time)/frame_count)*1000))+"ms") # Close the window and quit. pygame.quit() A: Do not create the font object in each frame. Creating a font object is a very expensive operation because the font must be read from the resource and decoded. Create the font before the application loop, but use it in the application loop: myfont = pygame.font.SysFont("monospace", 15) # <-- INSERT done = False while not done: # [...] # myfont = pygame.font.SysFont("monospace", 15) <-- DELETE
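A minimal sketch in the same spirit as the answer above (the caching helper and its names are my own illustration, not something the answer prescribes): besides creating the font once, the rendered label surface can be reused between frames, since its text only changes when the averaged frame time or the flock size changes.

import pygame

pygame.init()
screen = pygame.display.set_mode((1000, 500))
myfont = pygame.font.SysFont("monospace", 15)   # created once, before the loop

_cached_text = None
_cached_label = None

def draw_stats(text):
    """Blit the stats label, re-rendering the surface only when the text changes."""
    global _cached_text, _cached_label
    if text != _cached_text:
        _cached_text = text
        _cached_label = myfont.render(text, 1, (0, 0, 0))
    screen.blit(_cached_label, (5, 5))

Per-frame object creation like the original SysFont call is usually a far bigger cost than the choice of SDL binding, so it is worth ruling out before porting to PySDL2.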
Will converting to PySDL2 make my app run faster than it does under PyGame?
I've written a little toy in Python using Pygame. It generates critters (a circle with a directional line, not an image) to wander around the screen. I'm interested in making it more sophisticated, but I'm running into serious performance problems. As the number of critters on the screen passes 20, the frame rate drops rapidly from 60fps as far as 11fps with 50 on the screen. I've gone over my (very simple) code a number of different ways, even profiling with cProfile, without finding any way to optimize. To make a long story somewhat less long, I think I've concluded PyGame just isn't cut out for what I'm asking it to do. Consequently, I'm looking to convert to something else. C++ is the obvious answer, but as this is just a toy I'd rather code in Python, if possible. Especially because it's already written. In looking at C++, I discovered that there's an SDL (wrapper? bindings? Not sure the term) for Python: PySDL2. Thanks for sticking with me. Now the payoff: is there any reason to believe that converting my app to use PySDL2 will make it faster? Especially considering PyGame apparently uses SDL under the hood (somehow). EDIT: As requested: import pygame from pygame import gfxdraw import pygame.locals import os import math import random import time (INSERT CONTENTS OF VECTOR.PY FROM https://gist.github.com/mcleonard/5351452 HERE) pygame.init() #some global constants BLUE = (0, 0, 255) WHITE = (255,255,255) diagnostic = False SPAWN_TIME = 1 #number of seconds between creating new critters FLOCK_LIMIT = 30 #number of critters at which the flock begins being culled GUIDs = [0] #list of guaranteed unique IDs for identifying each critter # Set the position of the OS window position = (30, 30) os.environ['SDL_VIDEO_WINDOW_POS'] = str(position[0]) + "," + str(position[1]) # Set the position, width and height of the screen [width, height] size_x = 1000 size_y = 500 size = (size_x, size_y) FRAMERATE = 60 SECS_FOR_DYING = 1 screen = pygame.display.set_mode(size) screen.set_alpha(None) pygame.display.set_caption("My Game") # Used to manage how fast the screen updates clock = pygame.time.Clock() def random_float(lower, upper): num = random.randint(lower*1000, upper*1000) return num/1000 def new_GUID(): num = GUIDs[-1] num = num + 1 while num in GUIDs: num += 1 GUIDs.append(num) return num class HeatBlock: def __init__(self,_tlx,_tly,h,w): self.tlx = int(_tlx) self.tly = int(_tly) self.height = int(h)+1 self.width = int(w) self.heat = 255.0 self.registered = False def register_tresspasser(self): self.registered = True self.heat = max(self.heat - 1, 0) def cool_down(self): if not self.registered: self.heat = min(self.heat + 0.1, 255) self.registered = False def hb_draw_self(self): screen.fill((255,int(self.heat),int(self.heat)), [self.tlx, self.tly, self.width, self.height]) class HeatMap: def __init__(self, _h, _v): self.h_freq = _h #horizontal frequency self.h_rez = size_x/self.h_freq #horizontal resolution self.v_freq = _v #vertical frequency self.v_rez = size_y/self.v_freq #vertical resolution self.blocks = [] def make_map(self): h_size = size_x/self.h_freq v_size = size_y/self.v_freq for h_count in range(0, self.h_freq): TLx = h_count * h_size #TopLeft corner, x col = [] for v_count in range(0, self.v_freq): TLy = v_count * v_size #TopLeft corner, y col.append(HeatBlock(TLx,TLy,v_size,h_size)) self.blocks.append(col) def hm_draw_self(self): for col in self.blocks: for block in col: block.cool_down() block.hb_draw_self() def register(self, x, y): #convert the given coordinates of the trespasser 
into a col/row block index col = max(int(math.floor(x / self.h_rez)),0) row = max(int(math.floor(y / self.v_rez)),0) self.blocks[col][row].register_tresspasser() class Critter: def __init__(self): self.color = (random.randint(1, 200), random.randint(1, 200), random.randint(1, 200)) self.linear_speed = random_float(20, 100) self.radius = int(round(10 * (100/self.linear_speed))) self.angular_speed = random_float(0.1, 2) self.x = int(random.randint(self.radius*2, size_x - (self.radius*2))) self.y = int(random.randint(self.radius*2, size_y - (self.radius*2))) self.orientation = Vector(0, 1).rotate(random.randint(-180, 180)) self.sensor = Vector(0, 20) self.sensor_length = 20 self.new_orientation = self.orientation self.draw_bounds = False self.GUID = new_GUID() self.condition = 0 #0 = alive, [1-fps] = dying, >fps = dead self.delete_me = False def c_draw_self(self): #if we're alive and not dying, draw our normal self if self.condition == 0: #diagnostic if self.draw_bounds: pygame.gfxdraw.rectangle(screen, [int(self.x), int(self.y), 1, 1], BLUE) temp = self.orientation * (self.linear_speed * 20) pygame.gfxdraw.line(screen, int(self.x), int(self.y), int(self.x + temp[0]), int(self.y + temp[1]), BLUE) #if there's a new orientation, match it gradually temp = self.new_orientation * self.linear_speed #draw my body pygame.gfxdraw.aacircle(screen, int(self.x), int(self.y), self.radius, self.color) #draw a line indicating my new direction pygame.gfxdraw.line(screen, int(self.x), int(self.y), int(self.x + temp[0]), int(self.y + temp[1]), BLUE) #draw my sensor (a line pointing forward) self.sensor = self.orientation.normalize() * self.sensor_length pygame.gfxdraw.line(screen, int(self.x), int(self.y), int(self.x + self.sensor[0]), int(self.y + self.sensor[1]), BLUE) #otherwise we're dying, draw our dying animation elif 1 <= self.condition <= FRAMERATE*SECS_FOR_DYING: #draw some lines in a spinningi circle for num in range(0,10): line = Vector(0, 1).rotate((num*(360/10))+(self.condition*23)) line = line*self.radius pygame.gfxdraw.line(screen, int(self.x), int(self.y), int(self.x+line[0]), int(self.y+line[1]), self.color) def print_self(self): #diagnostic print("==============") print("radius:", self.radius) print("color:", self.color) print("linear_speed:", self.linear_speed) print("angular_speed:", self.angular_speed) print("x:", self.x) print("y:", int(self.y)) print("orientation:", self.orientation) def avoid_others(self, _flock): for _critter in _flock: #if the critter isn't ME... if _critter.GUID is not self.GUID and _critter.condition == 0: #and it's touching me... if self.x - _critter.x <= self.radius + _critter.radius: me = Vector(self.x, int(self.y)) other_guy = Vector(_critter.x, _critter.y) distance = me - other_guy #give me new orientation that's away from the other guy if distance.norm() <= ((self.radius) + (_critter.radius)): new_direction = me - other_guy self.orientation = self.new_orientation = new_direction.normalize() def update_location(self, elapsed): boundary = '?' 
while boundary != 'X': boundary = self.out_of_bounds() if boundary == 'N': self.orientation = self.new_orientation = Vector(0, 1).rotate(random.randint(-20, 20)) self.y = (self.radius) + 2 elif boundary == 'S': self.orientation = self.new_orientation = Vector(0,-1).rotate(random.randint(-20, 20)) self.y = (size_y - (self.radius)) - 2 elif boundary == 'E': self.orientation = self.new_orientation = Vector(-1,0).rotate(random.randint(-20, 20)) self.x = (size_x - (self.radius)) - 2 elif boundary == 'W': self.orientation = self.new_orientation = Vector(1,0).rotate(random.randint(-20, 20)) self.x = (self.radius) + 2 point = Vector(self.x, self.y) self.x, self.y = (point + (self.orientation * (self.linear_speed*(elapsed/1000)))) boundary = self.out_of_bounds() def update_orientation(self, elapsed): #randomly choose a new direction, from time to time if random.randint(0, 100) > 98: self.choose_new_orientation() difference = self.orientation.argument() - self.new_orientation.argument() self.orientation = self.orientation.rotate((difference * (self.angular_speed*(elapsed/1000)))) def still_alive(self, elapsed): return_value = True #I am still alive if self.condition == 0: return_value = True elif self.condition <= FRAMERATE*SECS_FOR_DYING: self.condition = self.condition + (elapsed/17) return_value = True if self.condition > FRAMERATE*SECS_FOR_DYING: return_value = False return return_value def choose_new_orientation(self): if self.new_orientation: if (self.orientation.argument() - self.new_orientation.argument()) < 5: rotation = random.randint(-300, 300) self.new_orientation = self.orientation.rotate(rotation) def out_of_bounds(self): if self.x >= (size_x - (self.radius)): return 'E' elif self.y >= (size_y - (self.radius)): return 'S' elif self.x <= (0 + (self.radius)): return 'W' elif self.y <= (0 + (self.radius)): return 'N' else: return 'X' # -------- Main Program Loop ----------- # generate critters flock = [Critter()] heatMap = HeatMap(60, 40) heatMap.make_map() last_spawn = time.clock() run_time = time.perf_counter() frame_count = 0 max_time = 0 ms_elapsed = 1 avg_fps = [1] # Loop until the user clicks the close button. done = False while not done: # --- Main event loop only processes one event frame_count = frame_count + 1 for event in pygame.event.get(): if event.type == pygame.QUIT: done = True # --- Game logic should go here #check if it's time to make another critter if time.clock() - last_spawn > SPAWN_TIME: flock.append(Critter()) last_spawn = time.clock() if len(flock) >= FLOCK_LIMIT: #if we're over the flock limit, cull the herd counter = FLOCK_LIMIT for critter in flock[0:len(flock)-FLOCK_LIMIT]: #this code allows a critter to be "dying" for a while, to play an animation if critter.condition == 0: critter.condition = 1 elif not critter.still_alive(ms_elapsed): critter.delete_me = True counter = 0 #delete all the critters that have finished dying while counter < len(flock): if flock[counter].delete_me: del flock[counter] else: counter = counter+1 #----loop on all critters once, doing all functions for each critter for critter in flock: if critter.condition == 0: critter.avoid_others(flock) if critter.condition == 0: heatMap.register(critter.x, critter.y) critter.update_location(ms_elapsed) critter.update_orientation(ms_elapsed) if diagnostic: critter.print_self() #----alternately, loop for each function. 
Speed seems to be similar either way #for critter in flock: # if critter.condition == 0: # critter.update_location(ms_elapsed) #for critter in flock: # if critter.condition == 0: # critter.update_orientation(ms_elapsed) # --- Screen-clearing code goes here # Here, we clear the screen to white. Don't put other drawing commands screen.fill(WHITE) # --- Drawing code should go here #draw the heat_map heatMap.hm_draw_self() for critter in flock: critter.c_draw_self() #draw the framerate myfont = pygame.font.SysFont("monospace", 15) #average the framerate over 60 frames temp = sum(avg_fps)/float(len(avg_fps)) text = str(round(((1/temp)*1000),0))+"FPS | "+str(len(flock))+" Critters" label = myfont.render(text, 1, (0, 0, 0)) screen.blit(label, (5, 5)) # --- Go ahead and update the screen with what we've drawn. pygame.display.update() # --- Limit to 60 frames per second #only run for 30 seconds if time.perf_counter()-run_time >= 30: done = True #limit to 60fps #add this frame's time to the list avg_fps.append(ms_elapsed) #remove any old frames while len(avg_fps) > 60: del avg_fps[0] ms_elapsed = clock.tick(FRAMERATE) #track longest frame if ms_elapsed > max_time: max_time = ms_elapsed #print some stats once the program is finished print("Count:", frame_count) print("Max time since last flip:", str(max_time)+"ms") print("Total Time:", str(int(time.perf_counter()-run_time))+"s") print("Average time for a flip:", str(int(((time.perf_counter()-run_time)/frame_count)*1000))+"ms") # Close the window and quit. pygame.quit()
[ "Do not create the font object in each frame. Creating a font object is a very expensive operation because the font must be read from the resource and decoded. Create the font before the application loop, but use it in the application loop:\nmyfont = pygame.font.SysFont(\"monospace\", 15) # <-- INSERT\n\ndone = False\nwhile not done:\n # [...]\n\n # myfont = pygame.font.SysFont(\"monospace\", 15) <-- DELETE\n\n" ]
[ 0 ]
[]
[]
[ "pygame", "pysdl2", "python" ]
stackoverflow_0043614091_pygame_pysdl2_python.txt
Q: How to accept all data from connection in socketserver python? How to receive all data from a connection in socketserver so that it the connection does not hang on the client side class ConnectionHandler(BaseRequestHandler): def handle(self): data = b'' while 1: tmp = self.request.recv(1024) if not tmp: break data += tmp print (data.decode()) on the client side I am using char text[] = "Hello world\n"; SSL_write(ssl, text, sizeof(text)); char tmp[20]; int received = SSL_read (ssl, tmp, 20); printf("Server replied: [%s]\n", tmp); but this causes the connection not to close and the client hangs, I am sure this is the case since replacing the while loop with self.request.recv(1024) receives the client message and outputs it but what if i don't know the message size of the client A: In order to receive all data from a connection in socketserver, you can use the makefile method of the socket object. This method returns a file-like object that can be used to read data from the connection. Here is an example of how you could use this method to receive all data from the connection: class ConnectionHandler(BaseRequestHandler): def handle(self): # Use the makefile method to get a file-like object for the connection file_like_obj = self.request.makefile('rb') # Read all data from the file-like object data = file_like_obj.read() print(data.decode()) This approach allows you to read all data from the connection without having to manually manage the receive buffer. Additionally, since the makefile method returns a file-like object, you can use the familiar file operations like read, readline, and readlines to read data from the connection. However, keep in mind that using the makefile method to read data from the connection will consume the data from the receive buffer. This means that if you also want to use the recv method to read data from the connection, you will need to call the recv method before calling the makefile method. In your specific example, it looks like you are using SSL to encrypt the data that is sent over the connection. In this case, you should use the SSL_makefile method instead of the makefile method in order to get a file-like object for the connection. This method is similar to the makefile method, but it is used for SSL connections. Here is an example of how you could use the SSL_makefile method to receive all data from an SSL connection: class ConnectionHandler(BaseRequestHandler): def handle(self): # Use the SSL_makefile method to get a file-like object for the SSL connection file_like_obj = self.request.SSL_makefile('rb') # Read all data from the file-like object data = file_like_obj.read() print(data.decode()) I hope this helps. Let me know if you have any questions. A: Using self.request.recv() without an actual value on how many bytes to read seems to obtain the whole buffer from the client
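One further option, not taken from the answers above: when the client keeps the connection open, reading until EOF (recv returning b'' or file.read() completing) will always block, so the messages need some framing. Since the client already sends "Hello world\n", a newline works as the delimiter. The handler below is only a sketch with a hypothetical name, and it assumes the same server/TLS wiring as in the question.

from socketserver import BaseRequestHandler

class LineHandler(BaseRequestHandler):
    def handle(self):
        # readline() returns as soon as one full '\n'-terminated message
        # arrives, so the handler does not wait for the client to close
        # the connection before replying.
        rfile = self.request.makefile("rb")
        line = rfile.readline()
        print(line.decode().rstrip("\n"))
        self.request.sendall(b"ack\n")   # reply so the client's SSL_read returns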
How to accept all data from connection in socketserver python?
How to receive all data from a connection in socketserver so that the connection does not hang on the client side class ConnectionHandler(BaseRequestHandler): def handle(self): data = b'' while 1: tmp = self.request.recv(1024) if not tmp: break data += tmp print (data.decode()) on the client side I am using char text[] = "Hello world\n"; SSL_write(ssl, text, sizeof(text)); char tmp[20]; int received = SSL_read (ssl, tmp, 20); printf("Server replied: [%s]\n", tmp); but this causes the connection not to close and the client hangs, I am sure this is the case since replacing the while loop with self.request.recv(1024) receives the client message and outputs it but what if I don't know the message size of the client
[ "In order to receive all data from a connection in socketserver, you can use the makefile method of the socket object. This method returns a file-like object that can be used to read data from the connection. Here is an example of how you could use this method to receive all data from the connection:\nclass ConnectionHandler(BaseRequestHandler):\n\ndef handle(self):\n # Use the makefile method to get a file-like object for the connection\n file_like_obj = self.request.makefile('rb')\n # Read all data from the file-like object\n data = file_like_obj.read()\n print(data.decode())\n\nThis approach allows you to read all data from the connection without having to manually manage the receive buffer. Additionally, since the makefile method returns a file-like object, you can use the familiar file operations like read, readline, and readlines to read data from the connection.\nHowever, keep in mind that using the makefile method to read data from the connection will consume the data from the receive buffer. This means that if you also want to use the recv method to read data from the connection, you will need to call the recv method before calling the makefile method.\nIn your specific example, it looks like you are using SSL to encrypt the data that is sent over the connection. In this case, you should use the SSL_makefile method instead of the makefile method in order to get a file-like object for the connection. This method is similar to the makefile method, but it is used for SSL connections. Here is an example of how you could use the SSL_makefile method to receive all data from an SSL connection:\nclass ConnectionHandler(BaseRequestHandler):\n\ndef handle(self):\n # Use the SSL_makefile method to get a file-like object for the SSL connection\n file_like_obj = self.request.SSL_makefile('rb')\n # Read all data from the file-like object\n data = file_like_obj.read()\n print(data.decode())\n\nI hope this helps. Let me know if you have any questions.\n", "Using self.request.recv() without an actual value on how many bytes to read seems to obtain the whole buffer from the client\n" ]
[ 2, 0 ]
[]
[]
[ "python", "recv", "socketserver", "tcp" ]
stackoverflow_0074647416_python_recv_socketserver_tcp.txt
Q: MetPy geostrophic wind for WRF data Edit: I'm starting to suspect the problems arising below are due to the metadata, because even after correcting the issues raised regarding units mpcalc.geostrophic_wind(z) still issues warnings about the coordinates and ordering. Maybe the function is unable to identify the coordinates from the file? Perhaps this is because WRF output data is non-CF compliant? I would like to compute geostrophic and ageostrophic winds from WRF-ARW data using the MetPy function mpcalc.geostrophic_wind. My attempt results in a bunch of errors and I don't know what I'm doing wrong. Can someone tell me how to modify my code to get rid of these errors? Here is my attempt so far: # import numpy as np from netCDF4 import Dataset import metpy.calc as mpcalc from wrf import getvar # Open the NetCDF file filename = "wrfout_d01_2016-10-04_12:00:00" ncfile = Dataset(filename) # Extract the geopotential height and wind variables z = getvar(ncfile, "z", units="m") ua = getvar(ncfile, "ua", units="m s-1") va = getvar(ncfile, "va", units="m s-1") # Smooth height data z = mpcalc.smooth_gaussian(z, 3) # Compute the geostrophic wind geo_wind_u, geo_wind_v = mpcalc.geostrophic_wind(z) # Calculate ageostrophic wind components ageo_wind_u = ua - geo_wind_u ageo_wind_v = va - geo_wind_v # The computation of the geostrophic wind throws several warnings: >>> # Compute the geostrophic wind >>> geo_wind_u, geo_wind_v = mpcalc.geostrophic_wind(z) /mnt/.../.../metpy_en/lib/python3.9/site-packages/metpy/xarray.py:355: UserWarning: More than one time coordinate present for variable. warnings.warn('More than one ' + axis + ' coordinate present for variable' /mnt/.../.../lib/python3.9/site-packages/metpy/xarray.py:1459: UserWarning: Horizontal dimension numbers not found. Defaulting to (..., Y, X) order. warnings.warn('Horizontal dimension numbers not found. Defaulting to ' /mnt/.../.../lib/python3.9/site-packages/metpy/xarray.py:355: UserWarning: More than one time coordinate present for variable "XLAT". warnings.warn('More than one ' + axis + ' coordinate present for variable' /mnt/.../.../lib/python3.9/site-packages/metpy/xarray.py:1393: UserWarning: y and x dimensions unable to be identified. Assuming [..., y, x] dimension order. warnings.warn('y and x dimensions unable to be identified. Assuming [..., y, x] ' /mnt/.../.../lib/python3.9/site-packages/metpy/calc/basic.py:1274: UserWarning: Input over 1.5707963267948966 radians. Ensure proper units are given. warnings.warn('Input over {} radians. ' Can anyone tell me why I'm getting these warnings? 
And then trying to compute an ageostrophic wind component results in a bunch of errors: >>> # Calculate ageostrophic wind components >>> ageo_wind_u = ua - geo_wind_u Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/mnt/.../lib/python3.9/site-packages/xarray/core/_typed_ops.py", line 209, in __sub__ return self._binary_op(other, operator.sub) File "/mnt/.../lib/python3.9/site-packages/xarray/core/dataarray.py", line 4357, in _binary_op f(self.variable, other_variable) File "/mnt/.../lib/python3.9/site-packages/xarray/core/_typed_ops.py", line 399, in __sub__ return self._binary_op(other, operator.sub) File "/mnt/.../lib/python3.9/site-packages/xarray/core/variable.py", line 2639, in _binary_op f(self_data, other_data) if not reflexive else f(other_data, self_data) File "/mnt/iusers01/fatpou01/sees01/w34926hb/.conda/envs/metpy_env/lib/python3.9/site-packages/pint/facets/numpy/quantity.py", line 61, in __array_ufunc__ return numpy_wrap("ufunc", ufunc, inputs, kwargs, types) File "/mnt/.../lib/python3.9/site-packages/pint/facets/numpy/numpy_func.py", line 953, in numpy_wrap return handled[name](*args, **kwargs) File "/mnt/.../lib/python3.9/site-packages/pint/facets/numpy/numpy_func.py", line 513, in _subtract (x1, x2), output_wrap = unwrap_and_wrap_consistent_units(x1, x2) File "/mnt/.../lib/python3.9/site-packages/pint/facets/numpy/numpy_func.py", line 130, in unwrap_and_wrap_consistent_units args, _ = convert_to_consistent_units(*args, pre_calc_units=first_input_units) File "/mnt/.../lib/python3.9/site-packages/pint/facets/numpy/numpy_func.py", line 111, in convert_to_consistent_units tuple(convert_arg(arg, pre_calc_units=pre_calc_units) for arg in args), File "/mnt/.../lib/python3.9/site-packages/pint/facets/numpy/numpy_func.py", line 111, in <genexpr> tuple(convert_arg(arg, pre_calc_units=pre_calc_units) for arg in args), File "/mnt/.../lib/python3.9/site-packages/pint/facets/numpy/numpy_func.py", line 93, in convert_arg raise DimensionalityError("dimensionless", pre_calc_units) pint.errors.DimensionalityError: Cannot convert from 'dimensionless' to 'meter / second' Any help would be appreciated. (By the way, I looked at the script at https://github.com/Unidata/python-training/blob/master/pages/gallery/Ageostrophic_Wind_Example.ipynb and did not find it helpful because I'm not sure which of the data manipulations near the top I need to do for the WRF data.) A: wrfpython's getvar function, while it takes units as a parameter, only uses this (as far as I can tell) to convert values in the arrays before returning them. To use this with MetPy you need to attach proper units. I would do this using a small helper function: from metpy.units import units def metpy_getvar(file, name, units_str): return getvar(file, name, units=units_str) * units(units_str) z = metpy_getvar(ncfile, "z", units="m") ua = metpy_getvar(ncfile, "ua", units="m s-1") va = metpy_getvar(ncfile, "va", units="m s-1") That should eliminate the complaints about missing units. EDIT: Fix name collision in hastily written function. A: I've made some progress: an updated script and the resulting plot are included below. Part of the problem was that I needed to pass dx, dy, and lat into the function metpy.calc.geostrophic_wind, as they were seemingly not being read automatically from the numpy array. There are still (at least) two problems: I've passed x_dim=-2 and y_dim=-1 in an effort to set [X,Y] order. 
(The documentation here https://unidata.github.io/MetPy/latest/api/generated/metpy.calc.geostrophic_wind.html says the default is x_dim = -1 and y_dim=-2 for [...Y,X] order, but does not say what to set x_dim and y_dim to for [...X,Y] order, so I just guessed.) However, I am still getting ``UserWarning: Horizontal dimension numbers not found. Defaulting to (..., Y, X) order.'' Secondly, as you can see in the plot there is something weird going on with the geostrophic wind component at the coastlines. u-component of geostrophic wind at 300 mb Here is my current script: import numpy as np from netCDF4 import Dataset import metpy.calc as mpcalc from metpy.units import units import matplotlib.pyplot as plt from matplotlib.cm import get_cmap from wrf import getvar, interplevel, to_np, get_basemap, latlon_coords # Open the NetCDF file filename = "wrfout_d01_2016-10-04_12:00:00" ncfile = Dataset(filename) z = getvar(ncfile, "z", units="m") * units.meter # Smooth height data z = mpcalc.smooth_gaussian(z, 3) dx = 4000.0 * units.meter dy = 4000.0 * units.meter lat = getvar(ncfile, "lat") * units.degrees geo_wind_u, geo_wind_v = mpcalc.geostrophic_wind(z,dx,dy,lat,x_dim=-2,y_dim=-1) ##### p = getvar(ncfile, "pressure") z = getvar(ncfile, "z", units="m") ht_300 = interplevel(z, p, 300) #geostrophic wind components on 300 mb level geo_wind_u_300 = interplevel(geo_wind_u, p, 300) geo_wind_v_300 = interplevel(geo_wind_v, p, 300) # Get the lat/lon coordinates lats, lons = latlon_coords(ht_300) # Get the basemap object bm = get_basemap(ht_300) # Create the figure fig = plt.figure(figsize=(12,12)) ax = plt.axes() # Convert the lat/lon coordinates to x/y coordinates in the projection space x, y = bm(to_np(lons), to_np(lats)) # Add the 300 mb height contours levels = np.arange(8640., 9690., 40.) contours = bm.contour(x, y, to_np(ht_300), levels=levels, colors="black") plt.clabel(contours, inline=1, fontsize=10, fmt="%i") # Add the wind contours levels = np.arange(10, 70, 5) geo_u_contours = bm.contourf(x, y, to_np(geo_wind_u_300), levels=levels, cmap=get_cmap("YlGnBu")) plt.colorbar(geo_u_contours, ax=ax, orientation="horizontal", pad=.05, shrink=0.75) # Add the geographic boundaries bm.drawcoastlines(linewidth=0.25) bm.drawstates(linewidth=0.25) bm.drawcountries(linewidth=0.25) plt.title("300 mb height (m) and u-component of geostrophic wind (m s-1) at 1200 UTC on 04-10-2016", fontsize=12) plt.savefig('geo_u_300mb_04-10-2016_1200_smoothed.png', bbox_inches='tight') A: The data presented by raw WRF-ARW datasets and by variables extracted via wrf-python do not have metadata that interact well with MetPy's assumptions about unit attributes, coordinate variables, and grid projections (from the CF Conventions). Instead, I would recommend using xwrf, a recently released package for working with WRF data in a more CF-Conventions-friendly way. With xwrf, your example would look like: import metpy.calc as mpcalc import xarray as xr import xwrf # Open the NetCDF file filename = "wrfout_d01_2016-10-04_12:00:00" ds = xr.open_dataset(filename).xwrf.postprocess() # Extract the geopotential height and wind variables z = ds['geopotential_height'] ua = ds['wind_east'] va = ds['wind_north'] # Smooth height data z = mpcalc.smooth_gaussian(z, 3) # Compute the geostrophic wind geo_wind_u, geo_wind_v = mpcalc.geostrophic_wind(z) # Calculate ageostrophic wind components ageo_wind_u = ua - geo_wind_u ageo_wind_v = va - geo_wind_v
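A small, hedged addition to the second answer's hard-coded dx = dy = 4000 m: MetPy can derive the grid spacing from the 2-D latitude/longitude arrays that wrf.latlon_coords already provides. The helper below is a hypothetical wrapper; it assumes z carries units of meters as in that script and the MetPy 1.x signature of geostrophic_wind (height, dx, dy, latitude) used above.

import metpy.calc as mpcalc
from metpy.units import units

def geostrophic_from_latlon(z, lons, lats):
    """Geostrophic wind with dx/dy derived from 2-D lat/lon arrays
    (e.g. the output of wrf.latlon_coords) instead of a hard-coded 4000 m."""
    dx, dy = mpcalc.lat_lon_grid_deltas(lons, lats)
    lat = lats * units.degrees
    return mpcalc.geostrophic_wind(z, dx, dy, lat)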
MetPy geostrophic wind for WRF data
Edit: I'm starting to suspect the problems arising below are due to the metadata, because even after correcting the issues raised regarding units mpcalc.geostrophic_wind(z) still issues warnings about the coordinates and ordering. Maybe the function is unable to identify the coordinates from the file? Perhaps this is because WRF output data is non-CF compliant? I would like to compute geostrophic and ageostrophic winds from WRF-ARW data using the MetPy function mpcalc.geostrophic_wind. My attempt results in a bunch of errors and I don't know what I'm doing wrong. Can someone tell me how to modify my code to get rid of these errors? Here is my attempt so far: # import numpy as np from netCDF4 import Dataset import metpy.calc as mpcalc from wrf import getvar # Open the NetCDF file filename = "wrfout_d01_2016-10-04_12:00:00" ncfile = Dataset(filename) # Extract the geopotential height and wind variables z = getvar(ncfile, "z", units="m") ua = getvar(ncfile, "ua", units="m s-1") va = getvar(ncfile, "va", units="m s-1") # Smooth height data z = mpcalc.smooth_gaussian(z, 3) # Compute the geostrophic wind geo_wind_u, geo_wind_v = mpcalc.geostrophic_wind(z) # Calculate ageostrophic wind components ageo_wind_u = ua - geo_wind_u ageo_wind_v = va - geo_wind_v # The computation of the geostrophic wind throws several warnings: >>> # Compute the geostrophic wind >>> geo_wind_u, geo_wind_v = mpcalc.geostrophic_wind(z) /mnt/.../.../metpy_en/lib/python3.9/site-packages/metpy/xarray.py:355: UserWarning: More than one time coordinate present for variable. warnings.warn('More than one ' + axis + ' coordinate present for variable' /mnt/.../.../lib/python3.9/site-packages/metpy/xarray.py:1459: UserWarning: Horizontal dimension numbers not found. Defaulting to (..., Y, X) order. warnings.warn('Horizontal dimension numbers not found. Defaulting to ' /mnt/.../.../lib/python3.9/site-packages/metpy/xarray.py:355: UserWarning: More than one time coordinate present for variable "XLAT". warnings.warn('More than one ' + axis + ' coordinate present for variable' /mnt/.../.../lib/python3.9/site-packages/metpy/xarray.py:1393: UserWarning: y and x dimensions unable to be identified. Assuming [..., y, x] dimension order. warnings.warn('y and x dimensions unable to be identified. Assuming [..., y, x] ' /mnt/.../.../lib/python3.9/site-packages/metpy/calc/basic.py:1274: UserWarning: Input over 1.5707963267948966 radians. Ensure proper units are given. warnings.warn('Input over {} radians. ' Can anyone tell me why I'm getting these warnings? 
And then trying to compute an ageostrophic wind component results in a bunch of errors: >>> # Calculate ageostrophic wind components >>> ageo_wind_u = ua - geo_wind_u Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/mnt/.../lib/python3.9/site-packages/xarray/core/_typed_ops.py", line 209, in __sub__ return self._binary_op(other, operator.sub) File "/mnt/.../lib/python3.9/site-packages/xarray/core/dataarray.py", line 4357, in _binary_op f(self.variable, other_variable) File "/mnt/.../lib/python3.9/site-packages/xarray/core/_typed_ops.py", line 399, in __sub__ return self._binary_op(other, operator.sub) File "/mnt/.../lib/python3.9/site-packages/xarray/core/variable.py", line 2639, in _binary_op f(self_data, other_data) if not reflexive else f(other_data, self_data) File "/mnt/iusers01/fatpou01/sees01/w34926hb/.conda/envs/metpy_env/lib/python3.9/site-packages/pint/facets/numpy/quantity.py", line 61, in __array_ufunc__ return numpy_wrap("ufunc", ufunc, inputs, kwargs, types) File "/mnt/.../lib/python3.9/site-packages/pint/facets/numpy/numpy_func.py", line 953, in numpy_wrap return handled[name](*args, **kwargs) File "/mnt/.../lib/python3.9/site-packages/pint/facets/numpy/numpy_func.py", line 513, in _subtract (x1, x2), output_wrap = unwrap_and_wrap_consistent_units(x1, x2) File "/mnt/.../lib/python3.9/site-packages/pint/facets/numpy/numpy_func.py", line 130, in unwrap_and_wrap_consistent_units args, _ = convert_to_consistent_units(*args, pre_calc_units=first_input_units) File "/mnt/.../lib/python3.9/site-packages/pint/facets/numpy/numpy_func.py", line 111, in convert_to_consistent_units tuple(convert_arg(arg, pre_calc_units=pre_calc_units) for arg in args), File "/mnt/.../lib/python3.9/site-packages/pint/facets/numpy/numpy_func.py", line 111, in <genexpr> tuple(convert_arg(arg, pre_calc_units=pre_calc_units) for arg in args), File "/mnt/.../lib/python3.9/site-packages/pint/facets/numpy/numpy_func.py", line 93, in convert_arg raise DimensionalityError("dimensionless", pre_calc_units) pint.errors.DimensionalityError: Cannot convert from 'dimensionless' to 'meter / second' Any help would be appreciated. (By the way, I looked at the script at https://github.com/Unidata/python-training/blob/master/pages/gallery/Ageostrophic_Wind_Example.ipynb and did not find it helpful because I'm not sure which of the data manipulations near the top I need to do for the WRF data.)
[ "wrfpython's getvar function, while it takes units as a parameter, only uses this (as far as I can tell) to convert values in the arrays before returning them. To use this with MetPy you need to attach proper units. I would do this using a small helper function:\nfrom metpy.units import units\n\ndef metpy_getvar(file, name, units_str):\n return getvar(file, name, units=units_str) * units(units_str)\n\nz = metpy_getvar(ncfile, \"z\", units=\"m\")\nua = metpy_getvar(ncfile, \"ua\", units=\"m s-1\")\nva = metpy_getvar(ncfile, \"va\", units=\"m s-1\")\n\nThat should eliminate the complaints about missing units.\nEDIT: Fix name collision in hastily written function.\n", "I've made some progress: an updated script and the resulting plot are included below. Part of the problem was that I needed to pass dx, dy, and lat into the function metpy.calc.geostrophic_wind, as they were seemingly not being read automatically from the numpy array.\nThere are still (at least) two problems:\nI've passed x_dim=-2 and y_dim=-1 in an effort to set [X,Y] order. (The documentation here https://unidata.github.io/MetPy/latest/api/generated/metpy.calc.geostrophic_wind.html says the default is x_dim = -1 and y_dim=-2 for [...Y,X] order, but does not say what to set x_dim and y_dim to for [...X,Y] order, so I just guessed.) However, I am still getting ``UserWarning: Horizontal dimension numbers not found. Defaulting to (..., Y, X) order.''\nSecondly, as you can see in the plot there is something weird going on with the geostrophic wind component at the coastlines.\nu-component of geostrophic wind at 300 mb\nHere is my current script:\nimport numpy as np\nfrom netCDF4 import Dataset\nimport metpy.calc as mpcalc\nfrom metpy.units import units\nimport matplotlib.pyplot as plt\nfrom matplotlib.cm import get_cmap\n\nfrom wrf import getvar, interplevel, to_np, get_basemap, latlon_coords\n\n# Open the NetCDF file\nfilename = \"wrfout_d01_2016-10-04_12:00:00\"\nncfile = Dataset(filename)\n\nz = getvar(ncfile, \"z\", units=\"m\") * units.meter\n\n# Smooth height data\nz = mpcalc.smooth_gaussian(z, 3)\n\ndx = 4000.0 * units.meter\ndy = 4000.0 * units.meter\n\nlat = getvar(ncfile, \"lat\") * units.degrees\n\ngeo_wind_u, geo_wind_v = mpcalc.geostrophic_wind(z,dx,dy,lat,x_dim=-2,y_dim=-1)\n\n#####\n\np = getvar(ncfile, \"pressure\")\nz = getvar(ncfile, \"z\", units=\"m\")\n\nht_300 = interplevel(z, p, 300)\n\n#geostrophic wind components on 300 mb level\ngeo_wind_u_300 = interplevel(geo_wind_u, p, 300)\ngeo_wind_v_300 = interplevel(geo_wind_v, p, 300)\n\n# Get the lat/lon coordinates\nlats, lons = latlon_coords(ht_300)\n\n# Get the basemap object\nbm = get_basemap(ht_300)\n\n# Create the figure\nfig = plt.figure(figsize=(12,12))\nax = plt.axes()\n\n# Convert the lat/lon coordinates to x/y coordinates in the projection space\nx, y = bm(to_np(lons), to_np(lats))\n\n# Add the 300 mb height contours\nlevels = np.arange(8640., 9690., 40.)\ncontours = bm.contour(x, y, to_np(ht_300), levels=levels, colors=\"black\")\nplt.clabel(contours, inline=1, fontsize=10, fmt=\"%i\")\n\n# Add the wind contours\nlevels = np.arange(10, 70, 5)\ngeo_u_contours = bm.contourf(x, y, to_np(geo_wind_u_300), levels=levels, cmap=get_cmap(\"YlGnBu\"))\nplt.colorbar(geo_u_contours, ax=ax, orientation=\"horizontal\", pad=.05, shrink=0.75)\n\n# Add the geographic boundaries\nbm.drawcoastlines(linewidth=0.25)\nbm.drawstates(linewidth=0.25)\nbm.drawcountries(linewidth=0.25)\n\nplt.title(\"300 mb height (m) and u-component of geostrophic wind (m s-1) at 1200 UTC on 
04-10-2016\", fontsize=12)\n\nplt.savefig('geo_u_300mb_04-10-2016_1200_smoothed.png', bbox_inches='tight')\n\n", "The data presented by raw WRF-ARW datasets and by variables extracted via wrf-python do not have metadata that interact well with MetPy's assumptions about unit attributes, coordinate variables, and grid projections (from the CF Conventions). Instead, I would recommend using xwrf, a recently released package for working with WRF data in a more CF-Conventions-friendly way. With xwrf, your example would look like:\nimport metpy.calc as mpcalc\nimport xarray as xr\nimport xwrf\n\n# Open the NetCDF file\nfilename = \"wrfout_d01_2016-10-04_12:00:00\"\nds = xr.open_dataset(filename).xwrf.postprocess()\n\n# Extract the geopotential height and wind variables\nz = ds['geopotential_height']\nua = ds['wind_east']\nva = ds['wind_north']\n\n# Smooth height data\nz = mpcalc.smooth_gaussian(z, 3)\n\n# Compute the geostrophic wind\ngeo_wind_u, geo_wind_v = mpcalc.geostrophic_wind(z)\n\n# Calculate ageostrophic wind components\nageo_wind_u = ua - geo_wind_u\nageo_wind_v = va - geo_wind_v\n\n" ]
[ 2, 0, 0 ]
[]
[]
[ "metpy", "python", "python_xarray" ]
stackoverflow_0074615766_metpy_python_python_xarray.txt
Q: assignment to variable using exponents showing answer which is confusing a = 1 b = 0 a = a ^ b b = a ^ b a a = a ^ b print(a, b) Can someone shed some light on this? I see the answer is (0, 1) but why? The first line would make a = 1 making it (1,0), second line would make it (1,1) so I'm thinking the third line would make it 1 ^ 1 which = 1, but it's showing (0, 1). What am I not understanding? Thank you in advance. A: In Python, the exponent operator is not ^ but **. The ^ operator is actually the bitwise XOR operator.
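To make the answer concrete, here is the same code traced with ^ read as bitwise XOR (note that the bare a on the third line of the snippet is just a no-op expression): the three assignments form the classic XOR swap, which is why the two values end up exchanged rather than exponentiated.

a = 1
b = 0
a = a ^ b    # 1 ^ 0 -> a = 1
b = a ^ b    # 1 ^ 0 -> b = 1
a            # bare expression, changes nothing
a = a ^ b    # 1 ^ 1 -> a = 0
print(a, b)  # prints: 0 1  (a and b have swapped values)

print(2 ** 3)  # exponentiation in Python uses **, so this prints 8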
assignment to variable using exponents showing answer which is confusing
a = 1 b = 0 a = a ^ b b = a ^ b a a = a ^ b print(a, b) Can someone shed some light on this? I see the answer is (0, 1) but why? The first line would make a = 1 making it (1,0), second line would make it (1,1) so I'm thinking the third line would make it 1 ^ 1 which = 1, but it's showing (0, 1). What am I not understanding? Thank you in advance.
[ "In python, the exponent operator is not ^ but **. The ^ operator is actually the bitwise XOR operator.\n" ]
[ 0 ]
[]
[]
[ "exponent", "python", "variable_assignment" ]
stackoverflow_0074648216_exponent_python_variable_assignment.txt
Q: Python Error ModuleNotFoundError: No module named 'transformers' I'm getting the below error when running 'import transformers', even though I have installed it in the same virtual env. I'm using python 3.8 ModuleNotFoundError: No module named 'transformers' Error: enter image description here I have uninstalled it and reinstalled it using 'pip3 install transformers' from the python cmd line. Then I tried to uninstall it again, and reinstalled it in jupyter notebook using '!pip install transformers'; the result shows ' Installing collected packages: transformers Successfully installed transformers-4.24.0 ' I can also verify directly in Jupyter Notebook: enter image description here The transformers install succeeds in Jupyter Notebook, but after that I still get the ModuleNotFoundError (I have tried restarting the kernel too) enter image description here A: It's resolved now. I just tried to use %pip install transformers==3.4.0, instead of !pip install transformers==3.4.0 in the jupyter notebook, and it worked. I can proceed with the project for now. Although I don't know what I did wrong in my python command line earlier that caused the inconsistency. Will open a new thread.
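A hedged note on why %pip worked where !pip did not (the environment details are not shown, so this is only the usual explanation): !pip runs pip in a subshell whose python/pip may resolve to a different interpreter than the notebook kernel, while the %pip magic installs into the kernel's own environment. A quick check from a notebook cell:

import sys
print(sys.executable)   # the interpreter actually running this kernel

# Install into that same interpreter's environment (equivalent in effect to %pip)
import subprocess
subprocess.check_call([sys.executable, "-m", "pip", "install", "transformers"])

import transformers
print(transformers.__version__)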
Python Error ModuleNotFoundError: No module named 'transformers'
I'm getting the below error when running 'import transformers', even though I have installed it in the same virtual env. I'm using python 3.8 ModuleNotFoundError: No module named 'transformers' Error: enter image description here I have uninstalled it and reinstalled it using 'pip3 install transformers' from the python cmd line. Then I tried to uninstall it again, and reinstalled it in jupyter notebook using '!pip install transformers'; the result shows ' Installing collected packages: transformers Successfully installed transformers-4.24.0 ' I can also verify directly in Jupyter Notebook: enter image description here The transformers install succeeds in Jupyter Notebook, but after that I still get the ModuleNotFoundError (I have tried restarting the kernel too) enter image description here
[ "its resolved now. I just tried to use %pip install transformers==3.4.0, instead of !pip install transformers==3.4.0 in jupyter book, and it worked. I can proceed with the project for now. Although I don't know what I did wrong in my python command line earlier that caused the inconsistency. Will open a new thread.\n" ]
[ 0 ]
[]
[]
[ "jupyter_notebook", "python" ]
stackoverflow_0074607244_jupyter_notebook_python.txt
Q: Python Discord bot not responding in servers I have run into a very strange problem and I would appreciate any help that comes my way So I made a discord bot using discord.py library and hosted it on Heroku. It was working perfectly well. Until recently had to take it down for some development. Now when I uploaded it again it does not work. Here is a summary of what is going on The bot does have online status on Discord. The bot responds perfectly normal in DMs. The bot does not receive and respond to any message in any server. When I run the exact same code file on my PC it works perfectly normal but when I run it on Heroku I get this issue I dont think the code has anything to do with it and I'm thinking it has something to do with Heroku. Can anyone help me out. Here is the main code in case if anyone needs it import os,random import discord from discord.ui import Button, View from discord.ext import commands from dotenv import load_dotenv from speech_recognition_module import recognize_speech from RPS import RPS import yt_dlp from youtubesearchpython import VideosSearch import json from requests import get load_dotenv() TOKEN = os.getenv('DISCORD_TOKEN') intents = discord.Intents.default() client = commands.Bot(command_prefix="!",intents=intents) #------------Bot Starting-------------------------- @client.event async def on_ready(): guilds = client.guilds print( f'{client.user} is connected to the following guild:\n' # f'{guild.name}(id: {guild.id})' ) for guild in guilds: print(f"{guild.name}") #--------- Hello command to display a list of available commands----------------- @client.command() async def hello(ctx): username = str(ctx.author).split("#")[0] embedVar = discord.Embed(title="List of Avaiable Commands", description='''Here is a list of currently available commands: **General Interaction with the Bot:** !hello - display a list of available commands !bye - a nice lil goodbye message from CodTheBot !listen - listens to your audio, converts it into text and sends it to the server **Rock, Papers, Scissors Commands:** !playRPS - play a Rock, Paper, Scissors game **TicTacToe Commands:** !tictactoe @user1 @user2 - runs a tictactoe game between the tagged users !place *n* - marks your choice at the nth value box **Music Commands:** !join - CodTheBot joins the vc !play *songname* - plays the desired song !pause - pause the song !resume - resume the song !disconnect - CodTheBot leaves the vc **Memes Commands:** !meme - displays a random meme from reddit''',color=0xFF0000) await ctx.send(f"Hello I am CODtheBOT. You must be {username}") await ctx.send(embed=embedVar) #----------------- Bye Command--------------------------------------- @client.command() async def bye(ctx): username = str(ctx.author).split("#")[0] await ctx.send(f'See you later {username}!') #----------------Listen Command to listen to audio ,convert it to text and send it to server--- @client.command() async def listen(ctx): username = str(ctx.author).split("#")[0] msg = await ctx.send(f'Go on I am listening') text = recognize_speech() await msg.edit("Hey Everyone!") await ctx.send(f'{username} said: {text}') #--------------------Rock Paper Scissors Game-------------- @client.command() async def playRPS(ctx): username = str(ctx.author).split("#")[0] await ctx.send(f'Hey {username} Lets Play. 
Make your choice') button_Rock = Button(emoji="✊") button_Paper = Button(emoji="βœ‹") button_Scissor = Button(emoji="✌") async def btn_rock_callback(interaction, custom_id="rock"): user2, field, game_stat = RPS(custom_id) embedVar = discord.Embed( description=f"You chose Rock\nI choose {user2}", color=0x00ff00) await interaction.response.edit_message(content=field, embed=embedVar, view=None) embedVar = discord.Embed(title=game_stat["message"], description=game_stat["description"], color=game_stat["color"]) await ctx.send(embed=embedVar) async def btn_paper_callback(interaction, custom_id="paper"): user2, field, game_stat = RPS(custom_id) embedVar = discord.Embed( description=f"You chose Paper\nI choose {user2}", color=0x00ff00) await interaction.response.edit_message(content=field, embed=embedVar, view=None) embedVar = discord.Embed(title=game_stat["message"], description=game_stat["description"], color=game_stat["color"]) await ctx.send(embed=embedVar) async def btn_scissor_callback(interaction, custom_id="scissor"): user2, field, game_stat = RPS(custom_id) embedVar = discord.Embed( description=f"You chose Scissors\nI choose {user2}", color=0x00ff00) await interaction.response.edit_message(content=field, embed=embedVar, view=None) embedVar = discord.Embed(title=game_stat["message"], description=game_stat["description"], color=game_stat["color"]) await ctx.send(embed=embedVar) view_var = View() view_var.add_item(button_Rock) view_var.add_item(button_Paper) view_var.add_item(button_Scissor) start_field= ":right_fist: :left_fist:" await ctx.send(start_field,view=view_var) button_Rock.callback = btn_rock_callback button_Paper.callback = btn_paper_callback button_Scissor.callback = btn_scissor_callback #----------Tic Tac Toe Game------------------------------------------ @client.command() async def tictactoe(ctx, p1: discord.Member, p2: discord.Member): global player1 global player2 global turn global game_over game_over = True global count if game_over: global board board = [":white_large_square:"]*9 game_over = False count = 0 player1 = p1 player2 = p2 #Print blank board line = "" for x in range(len(board)): if x==2 or x==5 or x==8: line += " " + board[x] await ctx.send(line) line = "" else: line += " " + board[x] #Determine who goes first num = random.randint(1,2) if num == 1: turn = player1 await ctx.send("It is <@" + str(player1.id) + ">'s turn.") else: turn = player2 await ctx.send("It is <@" + str(player2.id) + ">'s turn.") else: await ctx.send("A game is already in progress. 
Please finish it.") #----------------Tic Tac Toe Placement Handling ---------------------- @client.command() async def place(ctx, position: int): global turn global count def checkwin(winning_conditions, mark,board): global game_over for condition in winning_conditions: if board[condition[0]] == mark and board[condition[1]] == mark and board[condition[2]] == mark: game_over = True winning_conditions = [ [0,1,2], [3,4,5], [6,7,8], [0,3,6], [1,4,7], [2,5,8], [0,4,8], [2,4,6], ] if not game_over: mark = "" if turn == ctx.author: if turn == player1: mark = ":regional_indicator_x:" else: mark = ":o2:" if 0 < position < 10 and board[position-1] == ":white_large_square:": board[position-1] = mark count += 1 # will try to make this into a function line = "" for x in range(len(board)): if x==2 or x==5 or x==8: line += " " + board[x] await ctx.send(line) line = "" else: line += " " + board[x] checkwin(winning_conditions, mark,board) if game_over: await ctx.send(mark + " wins!") elif count >= 9: await ctx.send("It's a tie!") #Switch turns if turn == player1: turn = player2 elif turn == player2: turn = player1 else: await ctx.send("Please choose an integer between 1 and 9 and an unmarked tile.") else: await ctx.send("It is not your turn.") else: await ctx.send("Please start a new game first.") #----------------Tic Tac Toe Error Handling ---------------------- @tictactoe.error async def tictactoe_error(ctx, error): if isinstance(error, commands.MissingRequiredArgument): await ctx.send("Please mention two players for this command.") elif isinstance(error, commands.BadArgument): await ctx.send("Please make sure to mention/ping players i.e. <@playerid>") @place.error async def place_error(ctx, error): if isinstance(error, commands.MissingRequiredArgument): await ctx.send("Please enter a position to mark.") elif isinstance(error, commands.BadArgument): await ctx.send("Please make sure to enter an integer.") #---------------------------------Music player--------------------------------------- @client.command() async def join(ctx): if ctx.author.voice is None: await ctx.send("You're not in a voice channel") voice_channel = ctx.author.voice.channel if ctx.voice_client is None: await voice_channel.connect() else: await ctx.voice_client.move_to(voice_channel) @client.command() async def disconnect(ctx): await ctx.voice_client.disconnect() @client.command() async def play(ctx,url): ctx.voice_client.stop() # FFMPEG_OPTIONS = {'before_options': '-reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 5', # 'options': '-vn'} YDL_OPTIONS = {'format': 'bestaudio'} vc = ctx.voice_client with yt_dlp.YoutubeDL(YDL_OPTIONS) as ydl: url = VideosSearch(url, limit = 2) y=url.result() x=y['result'][0]['link'] print(x) info = ydl.extract_info(x, download = False) url2 = info ['formats'][0]['url'] print (url2) source = await discord.FFmpegOpusAudio.from_probe(url2)# **FFMPEG_OPTIONS) vc.play(source) embed_var=discord.Embed(title="Now playing - ", description=y['result'][0]['title'], color=0x00FF00) embed_var.set_image(url=y['result'][0]['thumbnails'][0]['url']) await ctx.send(embed=embed_var) @client.command() async def pause(ctx): ctx.voice_client.pause() await ctx.send("Paused ⏸️") @client.command() async def resume(ctx): ctx.voice_client.resume() await ctx.send("Resumed ▢️") #----------------- Random Meme Gennerator----------------------------- @client.command() async def meme(ctx): content = get("https://meme-api.herokuapp.com/gimme").text data = json.loads(content) meme = discord.Embed(title=f"{data['title']}", color = 
0x00FF00) meme.set_image(url=f"{data['url']}") await ctx.reply(embed=meme) client.run(TOKEN) A: You need to enable message Intents in the Discord site, then specify the messages intent in the code. That should work for you.
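A sketch of what the answer above is pointing at, assuming discord.py 2.x (which the discord.ui Button/View imports suggest): discord.Intents.default() leaves message_content disabled, and Discord only delivers message content without that intent in DMs or when the bot is mentioned, which matches the "works in DMs, silent in servers" symptom. The intent must be enabled both in code and under Bot -> Privileged Gateway Intents in the Developer Portal.

import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True   # also toggle "Message Content Intent" in the Developer Portal

client = commands.Bot(command_prefix="!", intents=intents)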
Python Discord bot not responding in servers
I have run into a very strange problem and I would appreciate any help that comes my way So I made a discord bot using discord.py library and hosted it on Heroku. It was working perfectly well. Until recently had to take it down for some development. Now when I uploaded it again it does not work. Here is a summary of what is going on The bot does have online status on Discord. The bot responds perfectly normal in DMs. The bot does not receive and respond to any message in any server. When I run the exact same code file on my PC it works perfectly normal but when I run it on Heroku I get this issue I dont think the code has anything to do with it and I'm thinking it has something to do with Heroku. Can anyone help me out. Here is the main code in case if anyone needs it import os,random import discord from discord.ui import Button, View from discord.ext import commands from dotenv import load_dotenv from speech_recognition_module import recognize_speech from RPS import RPS import yt_dlp from youtubesearchpython import VideosSearch import json from requests import get load_dotenv() TOKEN = os.getenv('DISCORD_TOKEN') intents = discord.Intents.default() client = commands.Bot(command_prefix="!",intents=intents) #------------Bot Starting-------------------------- @client.event async def on_ready(): guilds = client.guilds print( f'{client.user} is connected to the following guild:\n' # f'{guild.name}(id: {guild.id})' ) for guild in guilds: print(f"{guild.name}") #--------- Hello command to display a list of available commands----------------- @client.command() async def hello(ctx): username = str(ctx.author).split("#")[0] embedVar = discord.Embed(title="List of Avaiable Commands", description='''Here is a list of currently available commands: **General Interaction with the Bot:** !hello - display a list of available commands !bye - a nice lil goodbye message from CodTheBot !listen - listens to your audio, converts it into text and sends it to the server **Rock, Papers, Scissors Commands:** !playRPS - play a Rock, Paper, Scissors game **TicTacToe Commands:** !tictactoe @user1 @user2 - runs a tictactoe game between the tagged users !place *n* - marks your choice at the nth value box **Music Commands:** !join - CodTheBot joins the vc !play *songname* - plays the desired song !pause - pause the song !resume - resume the song !disconnect - CodTheBot leaves the vc **Memes Commands:** !meme - displays a random meme from reddit''',color=0xFF0000) await ctx.send(f"Hello I am CODtheBOT. You must be {username}") await ctx.send(embed=embedVar) #----------------- Bye Command--------------------------------------- @client.command() async def bye(ctx): username = str(ctx.author).split("#")[0] await ctx.send(f'See you later {username}!') #----------------Listen Command to listen to audio ,convert it to text and send it to server--- @client.command() async def listen(ctx): username = str(ctx.author).split("#")[0] msg = await ctx.send(f'Go on I am listening') text = recognize_speech() await msg.edit("Hey Everyone!") await ctx.send(f'{username} said: {text}') #--------------------Rock Paper Scissors Game-------------- @client.command() async def playRPS(ctx): username = str(ctx.author).split("#")[0] await ctx.send(f'Hey {username} Lets Play. 
Make your choice') button_Rock = Button(emoji="✊") button_Paper = Button(emoji="βœ‹") button_Scissor = Button(emoji="✌") async def btn_rock_callback(interaction, custom_id="rock"): user2, field, game_stat = RPS(custom_id) embedVar = discord.Embed( description=f"You chose Rock\nI choose {user2}", color=0x00ff00) await interaction.response.edit_message(content=field, embed=embedVar, view=None) embedVar = discord.Embed(title=game_stat["message"], description=game_stat["description"], color=game_stat["color"]) await ctx.send(embed=embedVar) async def btn_paper_callback(interaction, custom_id="paper"): user2, field, game_stat = RPS(custom_id) embedVar = discord.Embed( description=f"You chose Paper\nI choose {user2}", color=0x00ff00) await interaction.response.edit_message(content=field, embed=embedVar, view=None) embedVar = discord.Embed(title=game_stat["message"], description=game_stat["description"], color=game_stat["color"]) await ctx.send(embed=embedVar) async def btn_scissor_callback(interaction, custom_id="scissor"): user2, field, game_stat = RPS(custom_id) embedVar = discord.Embed( description=f"You chose Scissors\nI choose {user2}", color=0x00ff00) await interaction.response.edit_message(content=field, embed=embedVar, view=None) embedVar = discord.Embed(title=game_stat["message"], description=game_stat["description"], color=game_stat["color"]) await ctx.send(embed=embedVar) view_var = View() view_var.add_item(button_Rock) view_var.add_item(button_Paper) view_var.add_item(button_Scissor) start_field= ":right_fist: :left_fist:" await ctx.send(start_field,view=view_var) button_Rock.callback = btn_rock_callback button_Paper.callback = btn_paper_callback button_Scissor.callback = btn_scissor_callback #----------Tic Tac Toe Game------------------------------------------ @client.command() async def tictactoe(ctx, p1: discord.Member, p2: discord.Member): global player1 global player2 global turn global game_over game_over = True global count if game_over: global board board = [":white_large_square:"]*9 game_over = False count = 0 player1 = p1 player2 = p2 #Print blank board line = "" for x in range(len(board)): if x==2 or x==5 or x==8: line += " " + board[x] await ctx.send(line) line = "" else: line += " " + board[x] #Determine who goes first num = random.randint(1,2) if num == 1: turn = player1 await ctx.send("It is <@" + str(player1.id) + ">'s turn.") else: turn = player2 await ctx.send("It is <@" + str(player2.id) + ">'s turn.") else: await ctx.send("A game is already in progress. 
Please finish it.") #----------------Tic Tac Toe Placement Handling ---------------------- @client.command() async def place(ctx, position: int): global turn global count def checkwin(winning_conditions, mark,board): global game_over for condition in winning_conditions: if board[condition[0]] == mark and board[condition[1]] == mark and board[condition[2]] == mark: game_over = True winning_conditions = [ [0,1,2], [3,4,5], [6,7,8], [0,3,6], [1,4,7], [2,5,8], [0,4,8], [2,4,6], ] if not game_over: mark = "" if turn == ctx.author: if turn == player1: mark = ":regional_indicator_x:" else: mark = ":o2:" if 0 < position < 10 and board[position-1] == ":white_large_square:": board[position-1] = mark count += 1 # will try to make this into a function line = "" for x in range(len(board)): if x==2 or x==5 or x==8: line += " " + board[x] await ctx.send(line) line = "" else: line += " " + board[x] checkwin(winning_conditions, mark,board) if game_over: await ctx.send(mark + " wins!") elif count >= 9: await ctx.send("It's a tie!") #Switch turns if turn == player1: turn = player2 elif turn == player2: turn = player1 else: await ctx.send("Please choose an integer between 1 and 9 and an unmarked tile.") else: await ctx.send("It is not your turn.") else: await ctx.send("Please start a new game first.") #----------------Tic Tac Toe Error Handling ---------------------- @tictactoe.error async def tictactoe_error(ctx, error): if isinstance(error, commands.MissingRequiredArgument): await ctx.send("Please mention two players for this command.") elif isinstance(error, commands.BadArgument): await ctx.send("Please make sure to mention/ping players i.e. <@playerid>") @place.error async def place_error(ctx, error): if isinstance(error, commands.MissingRequiredArgument): await ctx.send("Please enter a position to mark.") elif isinstance(error, commands.BadArgument): await ctx.send("Please make sure to enter an integer.") #---------------------------------Music player--------------------------------------- @client.command() async def join(ctx): if ctx.author.voice is None: await ctx.send("You're not in a voice channel") voice_channel = ctx.author.voice.channel if ctx.voice_client is None: await voice_channel.connect() else: await ctx.voice_client.move_to(voice_channel) @client.command() async def disconnect(ctx): await ctx.voice_client.disconnect() @client.command() async def play(ctx,url): ctx.voice_client.stop() # FFMPEG_OPTIONS = {'before_options': '-reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 5', # 'options': '-vn'} YDL_OPTIONS = {'format': 'bestaudio'} vc = ctx.voice_client with yt_dlp.YoutubeDL(YDL_OPTIONS) as ydl: url = VideosSearch(url, limit = 2) y=url.result() x=y['result'][0]['link'] print(x) info = ydl.extract_info(x, download = False) url2 = info ['formats'][0]['url'] print (url2) source = await discord.FFmpegOpusAudio.from_probe(url2)# **FFMPEG_OPTIONS) vc.play(source) embed_var=discord.Embed(title="Now playing - ", description=y['result'][0]['title'], color=0x00FF00) embed_var.set_image(url=y['result'][0]['thumbnails'][0]['url']) await ctx.send(embed=embed_var) @client.command() async def pause(ctx): ctx.voice_client.pause() await ctx.send("Paused ⏸️") @client.command() async def resume(ctx): ctx.voice_client.resume() await ctx.send("Resumed ▢️") #----------------- Random Meme Gennerator----------------------------- @client.command() async def meme(ctx): content = get("https://meme-api.herokuapp.com/gimme").text data = json.loads(content) meme = discord.Embed(title=f"{data['title']}", color = 
0x00FF00) meme.set_image(url=f"{data['url']}") await ctx.reply(embed=meme) client.run(TOKEN)
[ "You need to enable message Intents in the Discord site, then specify the messages intent in the code. That should work for you.\n" ]
[ 0 ]
[]
[]
[ "asynchronous", "discord.py", "heroku", "python" ]
stackoverflow_0071721558_asynchronous_discord.py_heroku_python.txt
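The fix described in the answer above amounts to two changes: switching on the privileged Message Content intent for this bot in the Discord Developer Portal, and requesting that intent when the bot is constructed. A minimal sketch, assuming discord.py 2.x and reusing the "!" prefix from the question:

import discord
from discord.ext import commands

# The Message Content intent must also be enabled for this bot in the
# Discord Developer Portal, or the flag below has no effect.
intents = discord.Intents.default()
intents.message_content = True   # lets prefix commands read message text in servers
client = commands.Bot(command_prefix="!", intents=intents)

This also matches the symptom in the question: without the intent, message content is withheld only in servers, so commands keep working in DMs.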
Q: How can I extract a set of 2D slices from a larger 2D numpy array? If I have a large 2D numpy array and 2 arrays which correspond to the x and y indices I want to extract, It's easy enough: h = np.arange(49).reshape(7,7) # h = [[0, 1, 2, 3, 4, 5, 6], # [7, 8, 9, 10, 11, 12, 13], # [14, 15, 16, 17, 18, 19, 20], # [21, 22, 23, 24, 25, 26, 27], # [28, 29, 30, 31, 32, 33, 34], # [35, 36, 37, 38, 39, 40, 41], # [42, 43, 44, 45, 46, 47, 48]] x_indices = np.array([1,3,4]) y_indices = np.array([2,3,5]) reduced_h = h[x_indices, y_indices] #reduced_h = [ 9, 24, 33] However, I would like to, for each x,y pair cut out a square (denoted by 'a' - the number of indices in each direction from the centre) surrounding this 'coordinate' and return an array of these little 2D arrays. For example, for h, x,y_indices as above and a=1: reduced_h = [[[1,2,3],[8,9,10],[15,16,17]], [[16,17,18],[23,24,25],[30,31,32]], [[25,26,27],[32,33,34],[39,40,41]]] i.e one 3x3 array for each x-y index pair corresponding to the 3x3 square of elements centred on the x-y index. In general, this should return a numpy array which has shape (len(x_indices),2a+1, 2a+1) By analogy to reduced_h[0] = h[x_indices[0]-1:x_indices[0]+1 , y_indices[0]-1:y_indices[0]+1] = h[1-1:1+1 , 2-1:2+1] = h[0:2, 1:3] my first try was the following: h[x_indices-a : x_indices+a, y_indices-a : y_indices+a] However, perhaps unsurprisingly, slicing between the arrays fails. So the obvious next thing to try is to create this slice manually. np.arange seems to struggle with this but linspace works: a=1 xrange = np.linspace(x_indices-a, x_indices+a, 2*a+1, dtype=int) # xrange = [ [0, 2, 3], [1, 3, 4], [2, 4, 5] ] yrange = np.linspace(y_indices-a, y_indices+a, 2*a+1, dtype=int) Now can try h[xrange,yrange] but this unsurprisingly does this element-wise meaning I get only one (2a+1)x(2a+1) array (the same dimensions as xrange and yrange). It there a way to, for every index, take the right slices from these ranges (without loops)? Or is there a way to make the broadcast work initially without having to set up linspace explicitly? Thanks A: You can index np.lib.stride_tricks.sliding_window_view using your x and y indices: import numpy as np h = np.arange(49).reshape(7,7) x_indices = np.array([1,3,4]) y_indices = np.array([2,3,5]) a = 1 window = (2*a+1, 2*a+1) out = np.lib.stride_tricks.sliding_window_view(h, window)[x_indices-a, y_indices-a] out: array([[[ 1, 2, 3], [ 8, 9, 10], [15, 16, 17]], [[16, 17, 18], [23, 24, 25], [30, 31, 32]], [[25, 26, 27], [32, 33, 34], [39, 40, 41]]]) Note that you may need to pad h first to handle windows around your coordinates that reach "outside" h.
How can I extract a set of 2D slices from a larger 2D numpy array?
If I have a large 2D numpy array and 2 arrays which correspond to the x and y indices I want to extract, It's easy enough: h = np.arange(49).reshape(7,7) # h = [[0, 1, 2, 3, 4, 5, 6], # [7, 8, 9, 10, 11, 12, 13], # [14, 15, 16, 17, 18, 19, 20], # [21, 22, 23, 24, 25, 26, 27], # [28, 29, 30, 31, 32, 33, 34], # [35, 36, 37, 38, 39, 40, 41], # [42, 43, 44, 45, 46, 47, 48]] x_indices = np.array([1,3,4]) y_indices = np.array([2,3,5]) reduced_h = h[x_indices, y_indices] #reduced_h = [ 9, 24, 33] However, I would like to, for each x,y pair cut out a square (denoted by 'a' - the number of indices in each direction from the centre) surrounding this 'coordinate' and return an array of these little 2D arrays. For example, for h, x,y_indices as above and a=1: reduced_h = [[[1,2,3],[8,9,10],[15,16,17]], [[16,17,18],[23,24,25],[30,31,32]], [[25,26,27],[32,33,34],[39,40,41]]] i.e one 3x3 array for each x-y index pair corresponding to the 3x3 square of elements centred on the x-y index. In general, this should return a numpy array which has shape (len(x_indices),2a+1, 2a+1) By analogy to reduced_h[0] = h[x_indices[0]-1:x_indices[0]+1 , y_indices[0]-1:y_indices[0]+1] = h[1-1:1+1 , 2-1:2+1] = h[0:2, 1:3] my first try was the following: h[x_indices-a : x_indices+a, y_indices-a : y_indices+a] However, perhaps unsurprisingly, slicing between the arrays fails. So the obvious next thing to try is to create this slice manually. np.arange seems to struggle with this but linspace works: a=1 xrange = np.linspace(x_indices-a, x_indices+a, 2*a+1, dtype=int) # xrange = [ [0, 2, 3], [1, 3, 4], [2, 4, 5] ] yrange = np.linspace(y_indices-a, y_indices+a, 2*a+1, dtype=int) Now can try h[xrange,yrange] but this unsurprisingly does this element-wise meaning I get only one (2a+1)x(2a+1) array (the same dimensions as xrange and yrange). It there a way to, for every index, take the right slices from these ranges (without loops)? Or is there a way to make the broadcast work initially without having to set up linspace explicitly? Thanks
[ "You can index np.lib.stride_tricks.sliding_window_view using your x and y indices:\nimport numpy as np\n\nh = np.arange(49).reshape(7,7)\n\nx_indices = np.array([1,3,4])\ny_indices = np.array([2,3,5])\n\na = 1\nwindow = (2*a+1, 2*a+1)\n\nout = np.lib.stride_tricks.sliding_window_view(h, window)[x_indices-a, y_indices-a]\n\nout:\narray([[[ 1, 2, 3],\n [ 8, 9, 10],\n [15, 16, 17]],\n\n [[16, 17, 18],\n [23, 24, 25],\n [30, 31, 32]],\n\n [[25, 26, 27],\n [32, 33, 34],\n [39, 40, 41]]])\n\nNote that you may need to pad h first to handle windows around your coordinates that reach \"outside\" h.\n" ]
[ 4 ]
[]
[]
[ "array_broadcasting", "numpy", "numpy_slicing", "python" ]
stackoverflow_0074646902_array_broadcasting_numpy_numpy_slicing_python.txt
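The answer above notes that windows centred near the border of h need padding. One way to handle that, sketched here with made-up edge coordinates, is to pad by a on every side and drop the -a offset from the indexing, since the padding shifts every window centre by exactly a:

import numpy as np

h = np.arange(49).reshape(7, 7)
x_indices = np.array([0, 3, 6])   # hypothetical coordinates touching the edges
y_indices = np.array([0, 3, 6])
a = 1

h_pad = np.pad(h, a, mode="edge")   # replicate border values outward
windows = np.lib.stride_tricks.sliding_window_view(h_pad, (2 * a + 1, 2 * a + 1))
out = windows[x_indices, y_indices]   # the +a shift from padding cancels the -a offset
print(out.shape)   # (3, 3, 3)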
Q: Subseting python dataframe using position values from lists I have a dataframe with raw data and I would like to select different range of rows for each column, using two different lists: one containing the first row position to select and the other the last. INPUT | Index | Column A | Column B | |:--------:|:--------:|:--------:| | 1 | 2 | 8 | | 2 | 4 | 9 | | 3 | 1 | 7 | first_position=[1,2] last_position=[2,3] EXPECTED OUTPUT | Index | Column A | Column B | |:--------:|:--------:|:--------:| | 1 | 2 | 9 | | 2 | 4 | 7 | Which function can I use? Thanks! I tried df.filter but I think it does not accept list as input. A: Basically, as far as I can see, you have two meaningful columns in your DataFrame. Thus, I would suggest using "Index" column as the index indeed: df.set_index(df.columns[0], inplace=True) That way you might use .loc: df_out = pd.concat( [ df.loc[first_position, "Column A"].reset_index(drop=True), df.loc[last_position, "Column B"].reset_index(drop=True) ], axis=1 ) However, having indexes stored in separate lists you would need to watch them yourselves, which may be not too convenient. Instead, I would re-organize it with slicing: df_out = pd.concat( [ df[["Column A"]][:-1].reset_index(drop=True), df[["Column B"]][1:].reset_index(drop=True) ], axis=1 ) In either cases, index is being destroyed. If that matters, then the scenario without .reset_index(drop=True) would be required.
Subseting python dataframe using position values from lists
I have a dataframe with raw data and I would like to select different range of rows for each column, using two different lists: one containing the first row position to select and the other the last. INPUT | Index | Column A | Column B | |:--------:|:--------:|:--------:| | 1 | 2 | 8 | | 2 | 4 | 9 | | 3 | 1 | 7 | first_position=[1,2] last_position=[2,3] EXPECTED OUTPUT | Index | Column A | Column B | |:--------:|:--------:|:--------:| | 1 | 2 | 9 | | 2 | 4 | 7 | Which function can I use? Thanks! I tried df.filter but I think it does not accept list as input.
[ "Basically, as far as I can see, you have two meaningful columns in your DataFrame.\nThus, I would suggest using \"Index\" column as the index indeed:\ndf.set_index(df.columns[0], inplace=True)\n\nThat way you might use .loc:\ndf_out = pd.concat(\n [\n df.loc[first_position, \"Column A\"].reset_index(drop=True),\n df.loc[last_position, \"Column B\"].reset_index(drop=True)\n ],\n axis=1\n)\n\nHowever, having indexes stored in separate lists you would need to watch them yourselves, which may be not too convenient.\nInstead, I would re-organize it with slicing:\ndf_out = pd.concat(\n [\n df[[\"Column A\"]][:-1].reset_index(drop=True),\n df[[\"Column B\"]][1:].reset_index(drop=True)\n ],\n axis=1\n)\n\nIn either cases, index is being destroyed. If that matters, then the scenario without .reset_index(drop=True) would be required.\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python", "subset" ]
stackoverflow_0074647945_dataframe_pandas_python_subset.txt
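If the two lists in the question above are meant to drive each column independently (rows first_position[i] through last_position[i] for the i-th column), the same concat idea generalises. A sketch, assuming the positions refer to the labels shown in the Index column:

import pandas as pd

df = pd.DataFrame({"Column A": [2, 4, 1], "Column B": [8, 9, 7]}, index=[1, 2, 3])
first_position = [1, 2]
last_position = [2, 3]

out = pd.concat(
    [
        df[col].loc[first:last].reset_index(drop=True)   # label slicing is inclusive
        for col, first, last in zip(df.columns, first_position, last_position)
    ],
    axis=1,
)
print(out)   # Column A -> 2, 4 and Column B -> 9, 7, matching the expected output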
Q: Python: Unable to call function within a seperate function? (undefined name 'getItemClassiness') For some reason the getClassiness Function does not work as it is not able to call the helper function getItemClassiness. Is there any reason this might be? Thanks! class Classy(object): def __init__(self): self.items = [] def addItem(self, item): self.items.append(item) def getItemClassiness(item): if item == "tophat": return 2 if item == "bowtie": return 4 if item == "monocle": return 5 return 0 def getClassiness(self): total = 0 for item in self.items: x = getItemClassiness(item) total += x return total # Test cases me = Classy() # Should be 0 print(me.getClassiness()) # Should be 2 me.addItem("tophat") print(me.getClassiness()) me.addItem("bowtie") me.addItem("jacket") me.addItem("monocle") print(me.getClassiness()) # Should be 11 me.addItem("bowtie\n") print(me.getClassiness()) # Should be 15 You can use this class to represent how classy someone or something is. "Classy" is interchangable with "fancy". If you add fancy-looking items, you will increase your "classiness". Create a function in "Classy" that takes a string as input and adds it to the "items" list. Another method should calculate the "classiness" value based on the items. The following items have classiness points associated with them: "tophat" = 2 "bowtie" = 4 "monocle" = 5 Everything else has 0 points. Use the test cases below to guide you! A: In line 21 call for a class method is made without using the self keyword. x = self.getItemClassiness(item) Similarly on line 8 in self keyword is required with as parameter for function definition of getItemClassiness def getItemClassiness(self, item): A: You should declare getItemClassiness as a static method because it doesn't require a specific instance. Then you can call the function as you would an instance method. @staticmethod def getItemClassiness(item): ... def getClassiness(self): ... for item in self.items: x = self.getItemClassiness(item) But still it won't give you 15 for the last test case, because "bowtie" != "bowtie\n". If you intend to ignore white space at the start or the end of the string, use str.strip(). A: Here is what I did using Static Method. Got the right output in Test Cases. class Classy(object): def __init__(self): self.items = [] def addItem(self, item): self.items.append(item) @staticmethod def getItemClassiness(item): if item == "tophat": return 2 if item == "bowtie": return 4 if item == "monocle": return 5 return 0 def getClassiness(self): total = 0 for item in self.items: x = self.getItemClassiness(item) total += x return total
Python: Unable to call function within a seperate function? (undefined name 'getItemClassiness')
For some reason the getClassiness Function does not work as it is not able to call the helper function getItemClassiness. Is there any reason this might be? Thanks! class Classy(object): def __init__(self): self.items = [] def addItem(self, item): self.items.append(item) def getItemClassiness(item): if item == "tophat": return 2 if item == "bowtie": return 4 if item == "monocle": return 5 return 0 def getClassiness(self): total = 0 for item in self.items: x = getItemClassiness(item) total += x return total # Test cases me = Classy() # Should be 0 print(me.getClassiness()) # Should be 2 me.addItem("tophat") print(me.getClassiness()) me.addItem("bowtie") me.addItem("jacket") me.addItem("monocle") print(me.getClassiness()) # Should be 11 me.addItem("bowtie\n") print(me.getClassiness()) # Should be 15 You can use this class to represent how classy someone or something is. "Classy" is interchangable with "fancy". If you add fancy-looking items, you will increase your "classiness". Create a function in "Classy" that takes a string as input and adds it to the "items" list. Another method should calculate the "classiness" value based on the items. The following items have classiness points associated with them: "tophat" = 2 "bowtie" = 4 "monocle" = 5 Everything else has 0 points. Use the test cases below to guide you!
[ "In line 21 call for a class method is made without using the self keyword.\n x = self.getItemClassiness(item)\n\nSimilarly on line 8 in self keyword is required with as parameter for function definition of getItemClassiness\ndef getItemClassiness(self, item):\n\n", "You should declare getItemClassiness as a static method because it doesn't require a specific instance. Then you can call the function as you would an instance method.\n @staticmethod\n def getItemClassiness(item):\n ...\n \n \n def getClassiness(self):\n ...\n for item in self.items:\n x = self.getItemClassiness(item)\n\nBut still it won't give you 15 for the last test case, because \"bowtie\" != \"bowtie\\n\". If you intend to ignore white space at the start or the end of the string, use str.strip().\n", "Here is what I did using Static Method. Got the right output in Test Cases.\nclass Classy(object):\ndef __init__(self):\n self.items = []\n \ndef addItem(self, item):\n self.items.append(item)\n \n@staticmethod\ndef getItemClassiness(item):\n if item == \"tophat\":\n return 2\n if item == \"bowtie\":\n return 4\n if item == \"monocle\":\n return 5\n return 0\n\n\ndef getClassiness(self):\n total = 0\n for item in self.items:\n x = self.getItemClassiness(item)\n total += x\n return total\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "function", "python" ]
stackoverflow_0072455810_function_python.txt
Q: If a logging warning occurs before __main__, how is it being called? I'm working on a corporate python program which uses buck build. When I run part of the program, abc.py (via a .par file), then it runs the program starting with if __name__ == "__main__" etc. However, I'm trying to find source of logging warnings that occur before any of the contents of the if __name__ ... are run. Or perhaps keywords to understand this better. What could be running before the main file? A: Your mistake is thinking execution starts with if __name__ == "__main__":. That check is a guard that prevents the guarded code from executing when imported as a module, rather than run as the main script. The unguarded code always runs, regardless of how the module is loaded, so if the guard is at the bottom of the file, the rest of the file is run before it is reached. So if you have a module: # foo.py print("foo") And a script: # main.py import foo def main(): print("Main") if __name__ == '__main__': main() and you run python3 main.py, execution of main.py begins with import foo, which ends up printing foo immediately. Then main.py defines the main function (defining functions without calling them is still executing code; if Python had skipped to the guarded code, main wouldn't exist), then the guard is checked, the check passes, and main is called. Basically every bit of top-level code (typically imports, global constants, functions, and class definitions) is executed before the guarded block is reached; normally that top-level code does nothing visible, but if an imported module tried to do some real work and logged during the process, it would take effect before you ever reached your guarded "main script" code.
If a logging warning occurs before __main__, how is it being called?
I'm working on a corporate python program which uses buck build. When I run part of the program, abc.py (via a .par file), then it runs the program starting with if __name__ == "__main__" etc. However, I'm trying to find source of logging warnings that occur before any of the contents of the if __name__ ... are run. Or perhaps keywords to understand this better. What could be running before the main file?
[ "Your mistake is thinking execution starts with if __name__ == \"__main__\":. That check is a guard that prevents the guarded code from executing when imported as a module, rather than run as the main script. The unguarded code always runs, regardless of how the module is loaded, so if the guard is at the bottom of the file, the rest of the file is run before it is reached.\nSo if you have a module:\n# foo.py\nprint(\"foo\")\n\nAnd a script:\n# main.py\nimport foo\n\ndef main():\n print(\"Main\")\n\nif __name__ == '__main__':\n main()\n\nand you run python3 main.py, execution of main.py begins with import foo, which ends up printing foo immediately. Then main.py defines the main function (defining functions without calling them is still executing code; if Python had skipped to the guarded code, main wouldn't exist), then the guard is checked, the check passes, and main is called. Basically every bit of top-level code (typically imports, global constants, functions, and class definitions) is executed before the guarded block is reached; normally that top-level code does nothing visible, but if an imported module tried to do some real work and logged during the process, it would take effect before you ever reached your guarded \"main script\" code.\n" ]
[ 1 ]
[]
[]
[ "buck", "python" ]
stackoverflow_0074648292_buck_python.txt
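A two-file sketch of the situation in the record above (both file names are made up) makes the timing visible: the warning is emitted while the import machinery runs the imported module's top-level code, before the guarded block of the main script is ever reached.

# noisy.py -- hypothetical imported module; its top level runs at import time
import logging
logging.getLogger(__name__).warning("emitted during import")

# main script (sketch)
import noisy   # the warning above is printed here, before anything below runs

def main():
    print("real work")

if __name__ == "__main__":
    main()

So the place to look for the source of such warnings is the modules imported at the top of the script, and whatever they import in turn.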
Q: Convert Pandas DataFrame WITHOUT connecting to a SQL database All solutions I have seen require connecting to a SQL database, which IS NOT the goal of this question. The Goal Is To Convert A DataFrame To A String Capturing How To Re-Create The DataFrame That I Can Save As A Valid .sql File Let's say I have a simple pandas DataFrame: df = pd.DataFrame({{'hello'}:[1], {'world}:[2]}) ...and I wanted to automatically convert it into a .sql file that could be executed to generate the table, so something like: #psuedocode py_script.output_file_sql('my_table') return """CREATE TABLE my_table ( hello integer, world integer );"" Problem: I can't find the documentation for pandas conversion into an .sql without actually connecting to a database. If I use sqlalchemy, then run a query with information_schema.columns or \d table_name that doesn't seem to work. Any suggestions? A: you need to map all the datatypes correctly, i only used a sample to show you how top start. But to be correct you need to rebuild all https://www.postgresql.org/docs/current/sql-createtable.html if you want to have all options So i repeat my comment, best is to backup your database on database server with a backup tool, and use hat instead. import pandas as pd df = pd.DataFrame({'hello':[1], 'world':[2]}) df.name = 'Ones' indextext = "hello" def typeconversion(x): return { 'int64': 'bigint ', 'float64': 'FLOAT' }[x] def get_sql(df,Indexx_table): STR_sql = "CREATE TABLE " + df.name + "( " for (col1, col2) in zip(df.columns, df.dtypes): STR_sql += col1 + " " + typeconversion(col2.name) + ',' #remove last comma STR_sql = STR_sql[:-1] if Indexx_table: STR_sql += ", PRIMARY KEY (" + Indexx_table + ")" STR_sql += ")" return STR_sql print(get_sql(df,indextext)) result is CREATE TABLE Ones( hello bigint ,world bigint , PRIMARY KEY (hello))
Convert Pandas DataFrame WITHOUT connecting to a SQL database
All solutions I have seen require connecting to a SQL database, which IS NOT the goal of this question. The Goal Is To Convert A DataFrame To A String Capturing How To Re-Create The DataFrame That I Can Save As A Valid .sql File Let's say I have a simple pandas DataFrame: df = pd.DataFrame({{'hello'}:[1], {'world}:[2]}) ...and I wanted to automatically convert it into a .sql file that could be executed to generate the table, so something like: #psuedocode py_script.output_file_sql('my_table') return """CREATE TABLE my_table ( hello integer, world integer );"" Problem: I can't find the documentation for pandas conversion into an .sql without actually connecting to a database. If I use sqlalchemy, then run a query with information_schema.columns or \d table_name that doesn't seem to work. Any suggestions?
[ "you need to map all the datatypes correctly, i only used a sample to show you how top start.\nBut to be correct you need to rebuild all https://www.postgresql.org/docs/current/sql-createtable.html if you want to have all options\nSo i repeat my comment, best is to backup your database on database server with a backup tool, and use hat instead.\nimport pandas as pd\ndf = pd.DataFrame({'hello':[1], 'world':[2]})\ndf.name = 'Ones'\nindextext = \"hello\"\ndef typeconversion(x):\n return {\n 'int64': 'bigint ',\n 'float64': 'FLOAT'\n }[x]\n\ndef get_sql(df,Indexx_table):\n\n STR_sql = \"CREATE TABLE \" + df.name + \"( \"\n\n for (col1, col2) in zip(df.columns, df.dtypes): \n STR_sql += col1 + \" \" + typeconversion(col2.name) + ','\n #remove last comma\n STR_sql = STR_sql[:-1]\n if Indexx_table:\n STR_sql += \", PRIMARY KEY (\" + Indexx_table + \")\"\n STR_sql += \")\"\n return STR_sql\n\nprint(get_sql(df,indextext))\n\nresult is\nCREATE TABLE Ones( hello bigint ,world bigint , PRIMARY KEY (hello))\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "postgresql", "python", "sql" ]
stackoverflow_0074645609_pandas_postgresql_python_sql.txt
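For the narrower goal in the question above, getting only the CREATE TABLE text without opening a connection, pandas itself ships a helper under pandas.io.sql. It is not part of the documented public API, so this sketch should be treated as version-dependent:

import pandas as pd

df = pd.DataFrame({"hello": [1], "world": [2]})

ddl = pd.io.sql.get_schema(df, "my_table")   # builds the DDL string with no database connection
print(ddl)
with open("my_table.sql", "w") as f:
    f.write(ddl + ";\n")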
Q: Generic is an abstract class in python? I'm trying to create a base class that works for any CRUD in applications and I've seen the following implementation: ModelType = TypeVar("ModelType", bound=Base) CreateSchemaType = TypeVar("CreateSchemaType", bound=BaseModel) UpdateSchemaType = TypeVar("UpdateSchemaType", bound=BaseModel) class CRUDBase(Generic[ModelType, CreateSchemaType, UpdateSchemaType]): def __init__(self, model: Type[ModelType]): """CRUD object with default methods to Create, Read, Update, Delete (CRUD). **Parameters** * `model`: A SQLAlchemy model class * `schema`: A Pydantic model (schema) class """ self.model = model '...crud methods'` Generic is a way to define an abstract class, containing the objects that are specified (in this case [ModelType,CreateSchemaType, UpdateSchemaType]) or what is the use of generic? A: If your question is about the purpose of typing.Generic I would suggest you read through PEP 484. It has a section dedicated to user defined generic classes with some examples specifically for this, but the entire document is worthwhile reading IMHO. If you are unsure about the entire concept of generic types, the Wikipedia articles on parametric polymorphism and generic programming might even be worth skimming. In very simple terms, you can use the Generic base class to define your own classes in a generic way, i.e. parameterized with regards to one or more type arguments. These type parameters are constructed via typing.TypeVar. It is worth noting that in most cases these things are only relevant for the purposes of type safety, which is not really enforced by Python itself. Instead, static type checkers (most IDEs have them built-in) deal with those things. As to your example, it is entirely possible to define your class in a generic way and thus improve your own coding experience because you'll likely get useful auto-suggestions and warnings by your IDE. This only really becomes useful, if you use the type variables somewhere else throughout your class. Example: from typing import Generic, TypeVar class Base: pass ModelType = TypeVar("ModelType", bound=Base) class CRUDBase(Generic[ModelType]): def __init__(self, model: type[ModelType]): self.model = model def get_model_instance(self) -> ModelType: return self.model() class Sub(Base): def foo(self) -> None: print("hi mom") if __name__ == "__main__": crud_a = CRUDBase(Sub) crud_b = CRUDBase(Base) a = crud_a.get_model_instance() b = crud_b.get_model_instance() a.foo() b.foo() # error A good type checker like mypy will know, that a is of the Sub type and thus has the foo method, whereas b is of the Base type and therefore does not. And it will know so just from the way we annotated CRUDBase. PyCharm for example struggles with this (don't know why). Fortunately, we can help it out with explicit type annotations because we defined CRUDBase as a generic type: ... crud_a: CRUDBase[Sub] = CRUDBase(Sub) crud_b: CRUDBase[Base] = CRUDBase(Base) a = crud_a.get_model_instance() b = crud_b.get_model_instance() a.foo() b.foo() # PyCharm marks `foo` and complains Notice that so far there is nothing particularly "abstract" about our CRUDBase. It is fully self-contained and functioning like this (albeit not very useful yet). If you want a class to be an abstract base and not be directly instantiated, the abc module provides you with the tools you need. You can define an abstract base class and abstract methods on it that all subclasses must implement. But I think it is clear now that this is a different concept. 
The idea is that an abstract class is not intended to be instantiated, only its subclasses are. I hope this helps point you in a few useful directions.
Generic is an abstract class in python?
I'm trying to create a base class that works for any CRUD in applications and I've seen the following implementation: ModelType = TypeVar("ModelType", bound=Base) CreateSchemaType = TypeVar("CreateSchemaType", bound=BaseModel) UpdateSchemaType = TypeVar("UpdateSchemaType", bound=BaseModel) class CRUDBase(Generic[ModelType, CreateSchemaType, UpdateSchemaType]): def __init__(self, model: Type[ModelType]): """CRUD object with default methods to Create, Read, Update, Delete (CRUD). **Parameters** * `model`: A SQLAlchemy model class * `schema`: A Pydantic model (schema) class """ self.model = model '...crud methods'` Generic is a way to define an abstract class, containing the objects that are specified (in this case [ModelType,CreateSchemaType, UpdateSchemaType]) or what is the use of generic?
[ "If your question is about the purpose of typing.Generic I would suggest you read through PEP 484. It has a section dedicated to user defined generic classes with some examples specifically for this, but the entire document is worthwhile reading IMHO. If you are unsure about the entire concept of generic types, the Wikipedia articles on parametric polymorphism and generic programming might even be worth skimming.\nIn very simple terms, you can use the Generic base class to define your own classes in a generic way, i.e. parameterized with regards to one or more type arguments. These type parameters are constructed via typing.TypeVar. It is worth noting that in most cases these things are only relevant for the purposes of type safety, which is not really enforced by Python itself. Instead, static type checkers (most IDEs have them built-in) deal with those things.\nAs to your example, it is entirely possible to define your class in a generic way and thus improve your own coding experience because you'll likely get useful auto-suggestions and warnings by your IDE. This only really becomes useful, if you use the type variables somewhere else throughout your class. Example:\nfrom typing import Generic, TypeVar\n\n\nclass Base:\n pass\n\n\nModelType = TypeVar(\"ModelType\", bound=Base)\n\n\nclass CRUDBase(Generic[ModelType]):\n def __init__(self, model: type[ModelType]):\n self.model = model\n\n def get_model_instance(self) -> ModelType:\n return self.model()\n\n\nclass Sub(Base):\n def foo(self) -> None:\n print(\"hi mom\")\n\n\nif __name__ == \"__main__\":\n crud_a = CRUDBase(Sub)\n crud_b = CRUDBase(Base)\n a = crud_a.get_model_instance()\n b = crud_b.get_model_instance()\n a.foo()\n b.foo() # error\n\nA good type checker like mypy will know, that a is of the Sub type and thus has the foo method, whereas b is of the Base type and therefore does not. And it will know so just from the way we annotated CRUDBase.\nPyCharm for example struggles with this (don't know why). Fortunately, we can help it out with explicit type annotations because we defined CRUDBase as a generic type:\n...\n crud_a: CRUDBase[Sub] = CRUDBase(Sub)\n crud_b: CRUDBase[Base] = CRUDBase(Base)\n a = crud_a.get_model_instance()\n b = crud_b.get_model_instance()\n a.foo()\n b.foo() # PyCharm marks `foo` and complains\n\nNotice that so far there is nothing particularly \"abstract\" about our CRUDBase. It is fully self-contained and functioning like this (albeit not very useful yet).\nIf you want a class to be an abstract base and not be directly instantiated, the abc module provides you with the tools you need. You can define an abstract base class and abstract methods on it that all subclasses must implement. But I think it is clear now that this is a different concept. The idea is that an abstract class is not intended to be instantiated, only its subclasses are.\nI hope this helps point you in a few useful directions.\n" ]
[ 0 ]
[]
[]
[ "abstract_class", "crud", "generics", "python", "type_variables" ]
stackoverflow_0074647999_abstract_class_crud_generics_python_type_variables.txt
Q: Margins in PyQtGraph's GraphicsLayout Having a simple graphics layout with PyQtGraph: from pyqtgraph.Qt import QtGui, QtCore import pyqtgraph as pg app = QtGui.QApplication([]) view = pg.GraphicsView() l = pg.GraphicsLayout(border='g') view.setCentralItem(l) view.show() view.resize(800,600) l.addPlot(0, 0) l.addPlot(1, 0) l.layout.setSpacing(0.) l.setContentsMargins(0., 0., 0., 0.) if __name__ == '__main__': import sys if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'): QtGui.QApplication.instance().exec_() Whichs outputs: How could I get rid of the small margins which are between the green external line and the window borders? I could do the trick and use l.setContentsMargins(-10., -10., -10., -10.), and that works: But it seems to me like a dirty trick and there should be another parameter which is setting that margin. Could this be possible? Is there another margin parameter which I could set to 0 to get the same results? A: I think this might be a Qt bug. There's an easy workaround: l = pg.GraphicsLayout() l.layout.setContentsMargins(0, 0, 0, 0) To understand this, let's look at a modified example: from pyqtgraph.Qt import QtGui, QtCore import pyqtgraph as pg app = QtGui.QApplication([]) view = pg.GraphicsView() view.show() view.resize(800,600) class R(QtGui.QGraphicsWidget): # simple graphics widget that draws a rectangle around its geometry def boundingRect(self): return self.mapRectFromParent(self.geometry()).normalized() def paint(self, p, *args): p.setPen(pg.mkPen('y')) p.drawRect(self.boundingRect()) l = QtGui.QGraphicsGridLayout() r1 = R() r2 = R() r3 = R() r1.setLayout(l) l.addItem(r2, 0, 0) l.addItem(r3, 1, 0) view.scene().addItem(r1) In this example, calling l.setContentsMargins(...) has the expected effect, but calling r1.setContentsMargins(...) does not. The Qt docs suggest that the effect should have been the same, though: http://qt-project.org/doc/qt-4.8/qgraphicswidget.html#setContentsMargins A: For anyone going through this in 2022, use a pg.GraphicsLayoutWidget : # GraphicsLayoutWidget is now recommended w = pg.GraphicsLayoutWidget(border=(30,20,255)) win.centralWidget.layout.setContentsMargins(0,0,0,0) win.centralWidget.layout.setSpacing(0) Notice how there is no spacing between each blue border of each plot :
Margins in PyQtGraph's GraphicsLayout
Having a simple graphics layout with PyQtGraph: from pyqtgraph.Qt import QtGui, QtCore import pyqtgraph as pg app = QtGui.QApplication([]) view = pg.GraphicsView() l = pg.GraphicsLayout(border='g') view.setCentralItem(l) view.show() view.resize(800,600) l.addPlot(0, 0) l.addPlot(1, 0) l.layout.setSpacing(0.) l.setContentsMargins(0., 0., 0., 0.) if __name__ == '__main__': import sys if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'): QtGui.QApplication.instance().exec_() Whichs outputs: How could I get rid of the small margins which are between the green external line and the window borders? I could do the trick and use l.setContentsMargins(-10., -10., -10., -10.), and that works: But it seems to me like a dirty trick and there should be another parameter which is setting that margin. Could this be possible? Is there another margin parameter which I could set to 0 to get the same results?
[ "I think this might be a Qt bug. There's an easy workaround:\nl = pg.GraphicsLayout()\nl.layout.setContentsMargins(0, 0, 0, 0)\n\nTo understand this, let's look at a modified example:\nfrom pyqtgraph.Qt import QtGui, QtCore\nimport pyqtgraph as pg\n\napp = QtGui.QApplication([])\nview = pg.GraphicsView()\nview.show()\nview.resize(800,600)\n\nclass R(QtGui.QGraphicsWidget):\n # simple graphics widget that draws a rectangle around its geometry\n def boundingRect(self):\n return self.mapRectFromParent(self.geometry()).normalized()\n def paint(self, p, *args):\n p.setPen(pg.mkPen('y'))\n p.drawRect(self.boundingRect())\n\nl = QtGui.QGraphicsGridLayout()\nr1 = R()\nr2 = R()\nr3 = R()\nr1.setLayout(l)\nl.addItem(r2, 0, 0)\nl.addItem(r3, 1, 0)\n\nview.scene().addItem(r1)\n\nIn this example, calling l.setContentsMargins(...) has the expected effect, but calling r1.setContentsMargins(...) does not. The Qt docs suggest that the effect should have been the same, though: http://qt-project.org/doc/qt-4.8/qgraphicswidget.html#setContentsMargins\n", "For anyone going through this in 2022, use a pg.GraphicsLayoutWidget :\n# GraphicsLayoutWidget is now recommended\nw = pg.GraphicsLayoutWidget(border=(30,20,255))\nwin.centralWidget.layout.setContentsMargins(0,0,0,0)\nwin.centralWidget.layout.setSpacing(0)\n\nNotice how there is no spacing between each blue border of each plot :\n\n" ]
[ 2, 0 ]
[]
[]
[ "pyqt", "pyqtgraph", "python" ]
stackoverflow_0027092164_pyqt_pyqtgraph_python.txt
Q: How to make program sleep until next day I need my code to stop and wait until the next day. The time does not matter, I just need it to continue when the date changes. currentDate = datetime.datetime.now() future = datetime.datetime(currentDate.year, currentDate.month, (currentDate.day + 1)) time.sleep((future-currentDate).total_seconds()) The code pauses but does not continue after A: Two options here with comments. First do imports import datetime import time one uses a while loop - probably not a good solution but highlights one way to wait for a condition to be met. def loop_until_tomorrow(): """ Will use a while loop to iterate until tomorrow """ #get current date currentDate = datetime.datetime.now().date() # loop attempts times = 0 # this will loop infiniatly if condition is never met while True: # increment by one each iteration times += 1 #get date now now = datetime.datetime.now().date() if currentDate != now: # return when condition met print("\nDay has changed") return else: # print attempts and sleep here to avoid program hanging print(f"Attempt: {times}".ljust(13) + " - Not tomorrow yet!", end="\r") time.sleep(5) the other - sleeps for the amount of seconds from now till tomorrow def sleep_until_tomorrow(): """wait till tomorrow using time.sleep""" #get date now now = datetime.datetime.now() #get tomorrows date tomorrow_date = now.date() + datetime.timedelta(days=1) #set to datetime tomorrow_datetime = datetime.datetime(year=tomorrow_date.year, month=tomorrow_date.month, day=tomorrow_date.day, hour=0, minute=0, second=0) #get seconds seconds_til_tomorrow = (tomorrow_datetime-now).total_seconds() #sleep time.sleep(seconds_til_tomorrow) A: You can use schedule for that purpose, which will give you the flexibility to refactore the code when needed without having to write a chunck of code. from schedule import every, repeat, run_pending import time #just to give you the idea on how to implement the module. @repeat(every().day.at("7:15")) def remind_me_its_a_new_day(): print("Hey there it's a new day! ") while True: run_pending() time.sleep(1)
How to make program sleep until next day
I need my code to stop and wait until the next day. The time does not matter, I just need it to continue when the date changes. currentDate = datetime.datetime.now() future = datetime.datetime(currentDate.year, currentDate.month, (currentDate.day + 1)) time.sleep((future-currentDate).total_seconds()) The code pauses but does not continue after
[ "Two options here with comments.\nFirst do imports\nimport datetime\nimport time\n\n\none uses a while loop - probably not a good solution but highlights one way to wait for a condition to be met.\n\ndef loop_until_tomorrow():\n \"\"\" Will use a while loop to iterate until tomorrow \"\"\"\n\n #get current date\n currentDate = datetime.datetime.now().date()\n # loop attempts \n times = 0 \n # this will loop infiniatly if condition is never met \n while True:\n # increment by one each iteration\n times += 1\n #get date now\n now = datetime.datetime.now().date()\n if currentDate != now:\n # return when condition met \n print(\"\\nDay has changed\")\n return \n else: \n # print attempts and sleep here to avoid program hanging \n print(f\"Attempt: {times}\".ljust(13) + \" - Not tomorrow yet!\", end=\"\\r\")\n time.sleep(5)\n\n\nthe other - sleeps for the amount of seconds from now till tomorrow\n\ndef sleep_until_tomorrow():\n\n \"\"\"wait till tomorrow using time.sleep\"\"\"\n \n #get date now\n now = datetime.datetime.now()\n #get tomorrows date \n tomorrow_date = now.date() + datetime.timedelta(days=1)\n #set to datetime\n tomorrow_datetime = datetime.datetime(year=tomorrow_date.year, month=tomorrow_date.month, day=tomorrow_date.day, hour=0, minute=0, second=0)\n #get seconds\n seconds_til_tomorrow = (tomorrow_datetime-now).total_seconds()\n #sleep\n time.sleep(seconds_til_tomorrow)\n\n", "You can use schedule for that purpose, which will give you the flexibility to refactore the code when needed without having to write a chunck of code.\nfrom schedule import every, repeat, run_pending\nimport time\n\n#just to give you the idea on how to implement the module. \n@repeat(every().day.at(\"7:15\"))\ndef remind_me_its_a_new_day():\n print(\"Hey there it's a new day! \")\n\nwhile True:\n run_pending()\n time.sleep(1)\n\n" ]
[ 2, 1 ]
[]
[]
[ "python", "sleep", "time" ]
stackoverflow_0074647866_python_sleep_time.txt
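One weakness of the snippet in the question above is that currentDate.day + 1 is not a valid day on the last day of a month; working with a date plus a timedelta, as the second function in the first answer does, avoids that. A compact variant of the same idea using datetime.combine:

import datetime
import time

now = datetime.datetime.now()
next_midnight = datetime.datetime.combine(now.date() + datetime.timedelta(days=1),
                                          datetime.time.min)
time.sleep((next_midnight - now).total_seconds())   # wakes just after the date changes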
Q: Substract 2 Columns and show the current Value on a new one I have this DataFrame: Names Account_1 Account_2 ID_Movement Less_1 Less_2 Peter 35 70 Movement_1 0 5 Peter 35 70 Movement_2 6 0 Peter 35 70 Movement_3 1 0 Peter 35 70 Movement_4 0 2 Jhon 55 60 Movement_5 6 0 Jhon 55 60 Movement_6 0 2 Jhon 55 60 Movement_7 0 3 Jhon 55 60 Movement_8 12 0 Jhon 55 60 Movement_9 6 0 William 34 88 Movement_10 0 8 William 34 88 Movement_11 0 9 William 34 88 Movement_12 0 5 I was trying to create a new column with the current value of the account of each person with this code: s = (df['Account_1']).sub(df['Less_1']).groupby(df['Names']).cumsum() df2['New_Account1'] = s Desired Output: Names Account 1 Account 2 ID_Movement Less_1 Less_2 New_Account1 New_Account2 Peter 35 70 Movement_1 0 11 35 59 Peter 35 70 Movement_2 6 0 29 59 Peter 35 70 Movement_3 6 0 23 59 Peter 35 70 Movement_4 0 4 23 55 Jhon 55 60 Movement_5 6 0 49 60 Jhon 55 60 Movement_6 0 14 49 46 Jhon 55 60 Movement_7 0 13 49 33 Jhon 55 60 Movement_8 12 0 37 33 Jhon 55 60 Movement_9 6 0 31 33 William 34 88 Movement_10 12 0 22 88 William 34 88 Movement_11 0 9 22 79 William 34 88 Movement_12 0 5 22 74 A: Use groupby.cumsum and subtraction with to_numpy(): df[['New_Account1', 'New_Account2']] = (df[['Account_1', 'Account_2']] - df.groupby('Names')[['Less_1', 'Less_2']] .cumsum().to_numpy() ) Output: Names Account_1 Account_2 ID_Movement Less_1 Less_2 New_Account1 New_Account2 0 Peter 35 70 Movement_1 0 5 35 65 1 Peter 35 70 Movement_2 6 0 29 65 2 Peter 35 70 Movement_3 1 0 28 65 3 Peter 35 70 Movement_4 0 2 28 63 4 Jhon 55 60 Movement_5 6 0 49 60 5 Jhon 55 60 Movement_6 0 2 49 58 6 Jhon 55 60 Movement_7 0 3 49 55 7 Jhon 55 60 Movement_8 12 0 37 55 8 Jhon 55 60 Movement_9 6 0 31 55 9 William 34 88 Movement_10 0 8 34 80 10 William 34 88 Movement_11 0 9 34 71 11 William 34 88 Movement_12 0 5 34 66 A: Try to use lambda df['New_Account1'] = df.apply(lambda row : (row['Account 1'] - row['Less_1']), axis = 1)
Substract 2 Columns and show the current Value on a new one
I have this DataFrame: Names Account_1 Account_2 ID_Movement Less_1 Less_2 Peter 35 70 Movement_1 0 5 Peter 35 70 Movement_2 6 0 Peter 35 70 Movement_3 1 0 Peter 35 70 Movement_4 0 2 Jhon 55 60 Movement_5 6 0 Jhon 55 60 Movement_6 0 2 Jhon 55 60 Movement_7 0 3 Jhon 55 60 Movement_8 12 0 Jhon 55 60 Movement_9 6 0 William 34 88 Movement_10 0 8 William 34 88 Movement_11 0 9 William 34 88 Movement_12 0 5 I was trying to create a new column with the current value of the account of each person with this code: s = (df['Account_1']).sub(df['Less_1']).groupby(df['Names']).cumsum() df2['New_Account1'] = s Desired Output: Names Account 1 Account 2 ID_Movement Less_1 Less_2 New_Account1 New_Account2 Peter 35 70 Movement_1 0 11 35 59 Peter 35 70 Movement_2 6 0 29 59 Peter 35 70 Movement_3 6 0 23 59 Peter 35 70 Movement_4 0 4 23 55 Jhon 55 60 Movement_5 6 0 49 60 Jhon 55 60 Movement_6 0 14 49 46 Jhon 55 60 Movement_7 0 13 49 33 Jhon 55 60 Movement_8 12 0 37 33 Jhon 55 60 Movement_9 6 0 31 33 William 34 88 Movement_10 12 0 22 88 William 34 88 Movement_11 0 9 22 79 William 34 88 Movement_12 0 5 22 74
[ "Use groupby.cumsum and subtraction with to_numpy():\ndf[['New_Account1', 'New_Account2']] = (df[['Account_1', 'Account_2']]\n - df.groupby('Names')[['Less_1', 'Less_2']]\n .cumsum().to_numpy()\n )\n\nOutput:\n Names Account_1 Account_2 ID_Movement Less_1 Less_2 New_Account1 New_Account2\n0 Peter 35 70 Movement_1 0 5 35 65\n1 Peter 35 70 Movement_2 6 0 29 65\n2 Peter 35 70 Movement_3 1 0 28 65\n3 Peter 35 70 Movement_4 0 2 28 63\n4 Jhon 55 60 Movement_5 6 0 49 60\n5 Jhon 55 60 Movement_6 0 2 49 58\n6 Jhon 55 60 Movement_7 0 3 49 55\n7 Jhon 55 60 Movement_8 12 0 37 55\n8 Jhon 55 60 Movement_9 6 0 31 55\n9 William 34 88 Movement_10 0 8 34 80\n10 William 34 88 Movement_11 0 9 34 71\n11 William 34 88 Movement_12 0 5 34 66\n\n", "Try to use lambda\ndf['New_Account1'] = df.apply(lambda row : (row['Account 1'] - row['Less_1']), axis = 1)\n\n" ]
[ 3, 0 ]
[]
[]
[ "dataframe", "multiple_columns", "pandas", "python" ]
stackoverflow_0074647640_dataframe_multiple_columns_pandas_python.txt
Q: saving from api to s3 bucket I'm trying to get the below python code to save the csv from an api to an amazon s3 bucket using bot03 and python, but I can't see where I'm going wrong. When I execute the code I don't get any error but the file never appear in the s3 bucket. import boto3 from botocore.exceptions import ClientError file_name = "test.csv" bucket = "mybucket" def main(): url = "https://api0.solar.sheffield.ac.uk/pvlive/v3/pes/10?start=2021-01-01T00:00:00&end=2021-07-06T00:00:00&data_format=csv" x = requests.get(url,headers={'Content-Type': 'application/json', 'Accept': 'application/json', 'Accept-Encoding': 'gzip, deflate',}) s3 = bot03.client("s3") with open("test.csv","rb") as file2: s3.upload_fileobj(x.content, bucket, "test.cvc") any tips/advice would be appreciated. I'm a python/aws newbie so apologies if a basic question A: I used this code to acheive what I need file_name = "test.csv" bucket = "my_bucket" def main(): url = "https://api0.solar.sheffield.ac.uk/pvlive/v3/pes/10?start=2021-01-01T00:00:00&end=2021-07-06T00:00:00&data_format=csv" x = requests.get(url,headers={'Content-Type': 'application/json', 'Accept': 'application/json', 'Accept-Encoding': 'gzip, deflate',}) s3_resource = boto3.resource('s3') s3_resource.Object(bucket, 'snowflake/csv/df1.csv').put(Body=x.content) if __name__ == "__main__": main()
saving from api to s3 bucket
I'm trying to get the below python code to save the csv from an api to an amazon s3 bucket using bot03 and python, but I can't see where I'm going wrong. When I execute the code I don't get any error but the file never appear in the s3 bucket. import boto3 from botocore.exceptions import ClientError file_name = "test.csv" bucket = "mybucket" def main(): url = "https://api0.solar.sheffield.ac.uk/pvlive/v3/pes/10?start=2021-01-01T00:00:00&end=2021-07-06T00:00:00&data_format=csv" x = requests.get(url,headers={'Content-Type': 'application/json', 'Accept': 'application/json', 'Accept-Encoding': 'gzip, deflate',}) s3 = bot03.client("s3") with open("test.csv","rb") as file2: s3.upload_fileobj(x.content, bucket, "test.cvc") any tips/advice would be appreciated. I'm a python/aws newbie so apologies if a basic question
[ "I used this code to acheive what I need\nfile_name = \"test.csv\"\nbucket = \"my_bucket\"\n\ndef main():\n url = \"https://api0.solar.sheffield.ac.uk/pvlive/v3/pes/10?start=2021-01-01T00:00:00&end=2021-07-06T00:00:00&data_format=csv\"\n x = requests.get(url,headers={'Content-Type': 'application/json', 'Accept': 'application/json', 'Accept-Encoding': 'gzip, deflate',})\n\n\n\n s3_resource = boto3.resource('s3')\n s3_resource.Object(bucket, 'snowflake/csv/df1.csv').put(Body=x.content)\n\n\nif __name__ == \"__main__\":\n main()\n\n" ]
[ 1 ]
[ "per the OP(Original Post),\ndid you try\n(line 11) s3 = boto3.client(\"s3\") -- OP: bot03.client(\"s3\")\n" ]
[ -1 ]
[ "amazon_s3", "amazon_web_services", "api", "python" ]
stackoverflow_0068350137_amazon_s3_amazon_web_services_api_python.txt
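Besides the bot03/boto3 typo flagged in the comment above and the missing call to main() that the answer adds, the original snippet hands raw bytes to upload_fileobj, which expects a file-like object. Two alternatives, sketched with a made-up URL and bucket name:

import io
import boto3
import requests

url = "https://example.com/data.csv"   # hypothetical endpoint
resp = requests.get(url)
resp.raise_for_status()

s3 = boto3.client("s3")
# wrap the downloaded bytes so upload_fileobj gets the file-like object it expects...
s3.upload_fileobj(io.BytesIO(resp.content), "mybucket", "test.csv")
# ...or hand the bytes straight to put_object, which accepts a bytes Body
s3.put_object(Bucket="mybucket", Key="test.csv", Body=resp.content)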
Q: Creating functions to read file in python This a sample txt file called "price_file.txt": Apple,$2.55 Banana,$5.79 Carrot,$8.19 Dragon Fruit,$8.24 Eggs,$1.44 Hamburger Buns,$1.89 Ice Pops,$4.42 This is a function to allow the user to read the file: def addpricefile (price_file): # input: price file txt # output: item mapped to its price in a dictionary global item_to_price for next_line in price_file: item,price = next_line.strip().split(',') item_to_price[item]= float(price[1:]) #map item to price return item_to_price p = input ("Enter price file: ") price_file2 = open(p, "r") price_file = price_file2.readlines() for next_line in price_file: addpricefile(price_file2) print(item_to_price) price_file2.close() However, I get an empty dictionary as the output. How do I fix this? A: Try this code, I was a bit confused by what you had there but you can simplify the operation a bit. This will achieve the same result. I hope this helps you solve your problem. def openAndSeperate(filename): with open(filename,'r') as file: priceList = {} for i in file: i = i.strip('\n').split(',') priceList[i[0]] = float(str(i[1])[1:]) return priceList def main(): filename = 'price_file.txt'#input('Enter File Name: \n') priceList = openAndSeperate(filename) print(priceList) if __name__ == '__main__': main()
Creating functions to read file in python
This a sample txt file called "price_file.txt": Apple,$2.55 Banana,$5.79 Carrot,$8.19 Dragon Fruit,$8.24 Eggs,$1.44 Hamburger Buns,$1.89 Ice Pops,$4.42 This is a function to allow the user to read the file: def addpricefile (price_file): # input: price file txt # output: item mapped to its price in a dictionary global item_to_price for next_line in price_file: item,price = next_line.strip().split(',') item_to_price[item]= float(price[1:]) #map item to price return item_to_price p = input ("Enter price file: ") price_file2 = open(p, "r") price_file = price_file2.readlines() for next_line in price_file: addpricefile(price_file2) print(item_to_price) price_file2.close() However, I get an empty dictionary as the output. How do I fix this?
[ "Try this code, I was a bit confused by what you had there but you can simplify the operation a bit. This will achieve the same result. I hope this helps you solve your problem.\ndef openAndSeperate(filename):\n with open(filename,'r') as file:\n priceList = {}\n for i in file:\n i = i.strip('\\n').split(',')\n priceList[i[0]] = float(str(i[1])[1:])\n return priceList\n\n\ndef main():\n filename = 'price_file.txt'#input('Enter File Name: \\n')\n priceList = openAndSeperate(filename)\n print(priceList)\n\n\n\nif __name__ == '__main__':\n main()\n\n" ]
[ 1 ]
[]
[]
[ "function", "python" ]
stackoverflow_0074648240_function_python.txt
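The empty dictionary in the question above comes from the file handle being exhausted: readlines() consumes price_file2, and addpricefile is then called (once per line, needlessly) with that same spent handle, so its inner loop never runs. Passing the open file, or the list of lines, to the function a single time is enough; a sketch:

item_to_price = {}

def addpricefile(lines):
    for next_line in lines:
        item, price = next_line.strip().split(",")
        item_to_price[item] = float(price[1:])   # drop the leading "$"
    return item_to_price

with open("price_file.txt") as price_file:
    addpricefile(price_file)
print(item_to_price)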
Q: Same code in C++ and Python calculates diff values after a lot of loops, is it the nature of float/double? I am writing a driver model using C++ and Python to compare the performance. The simulation gives data like width, position, speed, etc. and the driver model does some calculations to decide if it needs to brake or not. Both models have the same variables and calculations, but after looping over 500 times first divergence surface and in the end, result in different brake responses. I am aware that floating point error is a thing, but shouldn't it be the same for both languages? I checked C++ and Python if they have IEEE 754 and it seems to be the case. I will attach my check at the end of the question. Is there something I don't know about float calculations or I do sth wrong in my codes? What did I expect: That I get the same response for both driver models. C++ Code: void DM::runAccumulator() { // storing values from previous run double prevVTheta = vTheta; double prevVActivation = vActivation; // shorter handle for convinience and to stay true to given Modell double egoSpeed = data.egoSpeed; double egoAcc = data.egoAcc; double egoLength = data.egoLength; double targetSpeed = data.targetSpeed; double targetAcc = data.targetAcc; double targetWidth = data.targetWidth; double targetLength = data.targetLength; // calculate positional data double targetPosRear = data.targetPosX - targetLength/2.0; double egoPosFront = data.egoPosX + egoLength/2.0; double distEgotoTarget = targetPosRear - egoPosFront; // optical size of target vehicle vTheta = 2.0 * std::atan(targetWidth / (2.0 * prevDistEgoToTarget)); prevDistEgoToTarget = distEgotoTarget; vThetaDot = (vTheta - prevVTheta)/timestep; vp = vThetaDot/vTheta; if (log = true) { this -> file << std::fixed << std::setprecision(2) << data.timestamp << ',' << std::setprecision(6) << targetPosRear << ',' << egoPosFront << ',' << prevDistEgoToTarget << ',' << vTheta << ',' << vThetaDot << ',' << vp << '\n'; } Python: def runAccumulator(self): #storing values from previous run prevVTheta = self.vTheta prevVActivation = self.vActivation #shorter handle for convinience and to stay true to given Modell egoSpeed = self.data.egoSpeed egoAcc = self.data.egoAcc egoLength = self.data.egoLength targetSpeed = self.data.targetSpeed targetAcc = self.data.targetAcc targetWidth = self.data.targetWidth targetLength = self.data.targetLength # calculate positional data targetPosRear = self.data.targetPosX - targetLength/2.0 egoPosFront = self.data.egoPosX + egoLength/2.0 distEgotoTarget = targetPosRear - egoPosFront # optical size target vehicle and looming self.vTheta = 2.0*arctan(targetWidth/(2.0*self.prevDistEgoToTarget)) self.prevDistEgoToTarget = distEgotoTarget self.vThetaDot = (self.vTheta - prevVTheta)/self.timestep self.vp = self.vThetaDot/self.vTheta # Driver Model update self.vEpsilon = self.vp - self.vpp activationChange = (self.accuK * self.vEpsilon - self.accuM - self.accuC * prevVActivation) * self.timestep self.vActivation = max(0.0, prevVActivation-activationChange) # log some parameters if self.log == True: logString = str(self.data.timestamp) + ',' logString += str(targetPosRear) + ',' #'{:.7f}'.format(targetPosRear) + ',' logString += str(egoPosFront) + ',' #'{:.7f}'.format(egoPosFront) + ',' logString += str(self.prevDistEgoToTarget) + ',' #'{:.7f}'.format(targetPosRear - egoPosFront) + ','; logString += str(self.vTheta) + ',' #'{:.7f}'.format(self.vTheta) + ',' logString +=str(self.vThetaDot) + ',' # '{:.7f}'.format(self.vThetaDot) + ',' 
logString += str(self.vp) + '\n' #'{:.7f}'.format(self.vp) + '\n' self.file.write(logString) C++ IEEE 754 Check: #include <cfloat> #include <iomanip> #include <iostream> int main() { int w = 16; std::cout << std::left; // std::cout << std::setprecision(53); # define COUT(x) std::cout << std::setw(w) << #x << " = " << x << '\n' COUT( FLT_RADIX ); COUT( DECIMAL_DIG ); COUT( FLT_DECIMAL_DIG ); COUT( DBL_DECIMAL_DIG ); COUT( LDBL_DECIMAL_DIG ); COUT( FLT_MIN ); COUT( DBL_MIN ); COUT( LDBL_MIN ); COUT( FLT_TRUE_MIN ); COUT( DBL_TRUE_MIN ); COUT( LDBL_TRUE_MIN ); COUT( FLT_MAX ); COUT( DBL_MAX ); COUT( LDBL_MAX ); COUT( FLT_EPSILON ); COUT( DBL_EPSILON ); COUT( LDBL_EPSILON ); COUT( FLT_DIG ); COUT( DBL_DIG ); COUT( LDBL_DIG ); COUT( FLT_MANT_DIG ); COUT( DBL_MANT_DIG ); COUT( LDBL_MANT_DIG ); COUT( FLT_MIN_EXP ); COUT( DBL_MIN_EXP ); COUT( LDBL_MIN_EXP ); COUT( FLT_MIN_10_EXP ); COUT( DBL_MIN_10_EXP ); COUT( LDBL_MIN_10_EXP ); COUT( FLT_MAX_EXP ); COUT( DBL_MAX_EXP ); COUT( LDBL_MAX_EXP ); COUT( FLT_MAX_10_EXP ); COUT( DBL_MAX_10_EXP ); COUT( LDBL_MAX_10_EXP ); COUT( FLT_ROUNDS ); COUT( FLT_EVAL_METHOD ); COUT( FLT_HAS_SUBNORM ); COUT( DBL_HAS_SUBNORM ); COUT( LDBL_HAS_SUBNORM ); } EDIT: Minimal Working examples (Removed the logging but left the parsing, since I believe it is good to have he actual data from the sim) It has been some time, since I tried different ways to work around this issue. Sadly, everything I tried didn't work out, so here I am finally following up with the minimal working example. There is one .csv file with the data from the simulation and a minimal version for both, C++ and Python. To clarify, I have the driver model and would like them to break at the same time. For this, vp, vTheta and vThetaDot should be roughly the same. As it is now, vp is 0.0825 in Pyhton and 0.00054 in C++... 
Python minimal working example from numpy import nan, arctan2 from csv import DictReader class DM(): def __init__(self): self.vTheta: float = nan self.vThetaDot: float = nan self.prevDistEgoToTarget: float = nan self.timestep: float = 0.01 self.vpp: float = nan def runAccumulator(self, data): #storing values from previous run prevVTheta = self.vTheta length = 5.4 width = 2.0 # calculate positional data targetPosRear = float(data["targetPosX"]) - length/2.0 # adapt arctan2 input egoPosFront = float(data["egoPosX"]) + length/2.0 distEgotoTarget = targetPosRear - egoPosFront # optical size target vehicle and looming self.vTheta = 2.0*arctan2(width,2.0*self.prevDistEgoToTarget) self.vThetaDot = (self.vTheta - prevVTheta)/self.timestep self.vp = self.vThetaDot/self.vTheta self.prevDistEgoToTarget = distEgotoTarget def main(): with open("../log/sample.csv", 'r') as f: dictReader = DictReader(f) listOfDict = list(dictReader) dm = DM() for i in range(700): dm.runAccumulator(listOfDict[i]) if __name__ == "__main__": main() C++ minimal working example #include <limits> #include <cmath> #include <fstream> #include <sstream> #include <iostream> #include <vector> class DM { private: /* data */ public: DM(/* args */); ~DM(); double vTheta = std::numeric_limits<double>::quiet_NaN(); double vThetaDot = std::numeric_limits<double>::quiet_NaN(); double prevDistEgoToTarget = std::numeric_limits<double>::quiet_NaN(); double timestep = 0.01; double vp = std::numeric_limits<double>::quiet_NaN(); void runAccumulator(std::vector<std::string> data); std::ifstream sampleCSV; void readFileIntoString(const std::string& path); }; DM::DM(/* args */) { } DM::~DM() { } void DM::runAccumulator(std::vector<std::string> data) { double prevVTheta = vTheta; double length = 5.4; double width = 2.0; // calculate positional data double targetPosX = stod(data[3]) - length/2.0; double egoPosX = stod(data[2]) + length/2.0; double distEgoToTarget = targetPosX - egoPosX; vTheta = 2.0* std::atan2(width,distEgoToTarget); vThetaDot = (vTheta-prevVTheta)/timestep; vp = vThetaDot/vTheta; prevDistEgoToTarget = distEgoToTarget; std::cout << vp << std::endl; } void DM::readFileIntoString(const std::string& path) { this -> sampleCSV.open(path); if (!sampleCSV.is_open()) { std::cerr << "Could not open the file - '" << path << "'" << std::endl; exit(EXIT_FAILURE); } std::string temp; getline(this->sampleCSV, temp); } std::vector<std::string> split (std::string s, std::string delimiter) { size_t pos_start = 0, pos_end, delim_len = delimiter.length(); std::string token; std::vector<std::string> res; while ((pos_end = s.find (delimiter, pos_start)) != std::string::npos) { token = s.substr (pos_start, pos_end - pos_start); pos_start = pos_end + delim_len; res.push_back(token); } res.push_back(s.substr (pos_start)); return res; } int main(int argc, char const *argv[]) { DM dm; dm.readFileIntoString("../log/sample.csv"); std::string buffer; std::vector<std::string> linevec; for (size_t i = 0; i < 700; i++) { getline(dm.sampleCSV, buffer); linevec = split(buffer, ","); dm.runAccumulator(linevec); } return 0; } CSV: https://pastebin.com/8gBm5z8q A: As a rule of thumb, if you every find yourself in a situation, where your code does not work because of floating point arithmetic, it is very likely that at least one of the following sentences applies to you: You work in a very niche field of research. You have a bug in your code. You are dealing with a mathematically ill-posed problem. In your case, you have a bug in your code. 
Your C++ code and your Python code do not do the same thing: You have an index error (you should get targetPosX from data[4] not from data[3]) In your Python code, the second argument to the arctan2 function is 2.0*self.prevDistEgoToTarget, while in the C++ code it's distEgoToTarget. Your Python code does one more iteration than your C++ code. These errors can easily be found by printing all intermediate values in just one iteration. You don't even need a debugger for this. If you fix these issues, you get a vp value of 0.08624797012674965 from Python and 0.086247970126749646 from C++ after 700 iterations. I would argue that the difference is negligible. A: IEEE754 guarantees a few things such as: The format of FP values Basic operations (+,-,*,/ and sqrt) are exact. The result is as if you did the calculation with infinite precision, and then rounded the result. What IEEE754 does not guarantee is that other functions such as arctan are rounded in the same way. And therefore, Python and C++ can reasonably differ by 1 ULP. [background] The problem with "exact rounding" is that you need an algorithm to do it correctly. For the 5 basic functions, there were well-known, efficient algorithms at the time IEEE754 was drafted. Since then, efficient algorithms have been discovered for more functions, but there are many, many functions that still do not have such algorithms. In particular, many functions still use tables and interpolation, and that's almost automatically inexact.
Same code in C++ and Python calculates diff values after a lot of loops, is it the nature of float/double?
I am writing a driver model using C++ and Python to compare the performance. The simulation gives data like width, position, speed, etc. and the driver model does some calculations to decide if it needs to brake or not. Both models have the same variables and calculations, but after looping over 500 times first divergence surface and in the end, result in different brake responses. I am aware that floating point error is a thing, but shouldn't it be the same for both languages? I checked C++ and Python if they have IEEE 754 and it seems to be the case. I will attach my check at the end of the question. Is there something I don't know about float calculations or I do sth wrong in my codes? What did I expect: That I get the same response for both driver models. C++ Code: void DM::runAccumulator() { // storing values from previous run double prevVTheta = vTheta; double prevVActivation = vActivation; // shorter handle for convinience and to stay true to given Modell double egoSpeed = data.egoSpeed; double egoAcc = data.egoAcc; double egoLength = data.egoLength; double targetSpeed = data.targetSpeed; double targetAcc = data.targetAcc; double targetWidth = data.targetWidth; double targetLength = data.targetLength; // calculate positional data double targetPosRear = data.targetPosX - targetLength/2.0; double egoPosFront = data.egoPosX + egoLength/2.0; double distEgotoTarget = targetPosRear - egoPosFront; // optical size of target vehicle vTheta = 2.0 * std::atan(targetWidth / (2.0 * prevDistEgoToTarget)); prevDistEgoToTarget = distEgotoTarget; vThetaDot = (vTheta - prevVTheta)/timestep; vp = vThetaDot/vTheta; if (log = true) { this -> file << std::fixed << std::setprecision(2) << data.timestamp << ',' << std::setprecision(6) << targetPosRear << ',' << egoPosFront << ',' << prevDistEgoToTarget << ',' << vTheta << ',' << vThetaDot << ',' << vp << '\n'; } Python: def runAccumulator(self): #storing values from previous run prevVTheta = self.vTheta prevVActivation = self.vActivation #shorter handle for convinience and to stay true to given Modell egoSpeed = self.data.egoSpeed egoAcc = self.data.egoAcc egoLength = self.data.egoLength targetSpeed = self.data.targetSpeed targetAcc = self.data.targetAcc targetWidth = self.data.targetWidth targetLength = self.data.targetLength # calculate positional data targetPosRear = self.data.targetPosX - targetLength/2.0 egoPosFront = self.data.egoPosX + egoLength/2.0 distEgotoTarget = targetPosRear - egoPosFront # optical size target vehicle and looming self.vTheta = 2.0*arctan(targetWidth/(2.0*self.prevDistEgoToTarget)) self.prevDistEgoToTarget = distEgotoTarget self.vThetaDot = (self.vTheta - prevVTheta)/self.timestep self.vp = self.vThetaDot/self.vTheta # Driver Model update self.vEpsilon = self.vp - self.vpp activationChange = (self.accuK * self.vEpsilon - self.accuM - self.accuC * prevVActivation) * self.timestep self.vActivation = max(0.0, prevVActivation-activationChange) # log some parameters if self.log == True: logString = str(self.data.timestamp) + ',' logString += str(targetPosRear) + ',' #'{:.7f}'.format(targetPosRear) + ',' logString += str(egoPosFront) + ',' #'{:.7f}'.format(egoPosFront) + ',' logString += str(self.prevDistEgoToTarget) + ',' #'{:.7f}'.format(targetPosRear - egoPosFront) + ','; logString += str(self.vTheta) + ',' #'{:.7f}'.format(self.vTheta) + ',' logString +=str(self.vThetaDot) + ',' # '{:.7f}'.format(self.vThetaDot) + ',' logString += str(self.vp) + '\n' #'{:.7f}'.format(self.vp) + '\n' self.file.write(logString) C++ IEEE 754 Check: 
#include <cfloat> #include <iomanip> #include <iostream> int main() { int w = 16; std::cout << std::left; // std::cout << std::setprecision(53); # define COUT(x) std::cout << std::setw(w) << #x << " = " << x << '\n' COUT( FLT_RADIX ); COUT( DECIMAL_DIG ); COUT( FLT_DECIMAL_DIG ); COUT( DBL_DECIMAL_DIG ); COUT( LDBL_DECIMAL_DIG ); COUT( FLT_MIN ); COUT( DBL_MIN ); COUT( LDBL_MIN ); COUT( FLT_TRUE_MIN ); COUT( DBL_TRUE_MIN ); COUT( LDBL_TRUE_MIN ); COUT( FLT_MAX ); COUT( DBL_MAX ); COUT( LDBL_MAX ); COUT( FLT_EPSILON ); COUT( DBL_EPSILON ); COUT( LDBL_EPSILON ); COUT( FLT_DIG ); COUT( DBL_DIG ); COUT( LDBL_DIG ); COUT( FLT_MANT_DIG ); COUT( DBL_MANT_DIG ); COUT( LDBL_MANT_DIG ); COUT( FLT_MIN_EXP ); COUT( DBL_MIN_EXP ); COUT( LDBL_MIN_EXP ); COUT( FLT_MIN_10_EXP ); COUT( DBL_MIN_10_EXP ); COUT( LDBL_MIN_10_EXP ); COUT( FLT_MAX_EXP ); COUT( DBL_MAX_EXP ); COUT( LDBL_MAX_EXP ); COUT( FLT_MAX_10_EXP ); COUT( DBL_MAX_10_EXP ); COUT( LDBL_MAX_10_EXP ); COUT( FLT_ROUNDS ); COUT( FLT_EVAL_METHOD ); COUT( FLT_HAS_SUBNORM ); COUT( DBL_HAS_SUBNORM ); COUT( LDBL_HAS_SUBNORM ); } EDIT: Minimal Working examples (Removed the logging but left the parsing, since I believe it is good to have he actual data from the sim) It has been some time, since I tried different ways to work around this issue. Sadly, everything I tried didn't work out, so here I am finally following up with the minimal working example. There is one .csv file with the data from the simulation and a minimal version for both, C++ and Python. To clarify, I have the driver model and would like them to break at the same time. For this, vp, vTheta and vThetaDot should be roughly the same. As it is now, vp is 0.0825 in Pyhton and 0.00054 in C++... Python minimal working example from numpy import nan, arctan2 from csv import DictReader class DM(): def __init__(self): self.vTheta: float = nan self.vThetaDot: float = nan self.prevDistEgoToTarget: float = nan self.timestep: float = 0.01 self.vpp: float = nan def runAccumulator(self, data): #storing values from previous run prevVTheta = self.vTheta length = 5.4 width = 2.0 # calculate positional data targetPosRear = float(data["targetPosX"]) - length/2.0 # adapt arctan2 input egoPosFront = float(data["egoPosX"]) + length/2.0 distEgotoTarget = targetPosRear - egoPosFront # optical size target vehicle and looming self.vTheta = 2.0*arctan2(width,2.0*self.prevDistEgoToTarget) self.vThetaDot = (self.vTheta - prevVTheta)/self.timestep self.vp = self.vThetaDot/self.vTheta self.prevDistEgoToTarget = distEgotoTarget def main(): with open("../log/sample.csv", 'r') as f: dictReader = DictReader(f) listOfDict = list(dictReader) dm = DM() for i in range(700): dm.runAccumulator(listOfDict[i]) if __name__ == "__main__": main() C++ minimal working example #include <limits> #include <cmath> #include <fstream> #include <sstream> #include <iostream> #include <vector> class DM { private: /* data */ public: DM(/* args */); ~DM(); double vTheta = std::numeric_limits<double>::quiet_NaN(); double vThetaDot = std::numeric_limits<double>::quiet_NaN(); double prevDistEgoToTarget = std::numeric_limits<double>::quiet_NaN(); double timestep = 0.01; double vp = std::numeric_limits<double>::quiet_NaN(); void runAccumulator(std::vector<std::string> data); std::ifstream sampleCSV; void readFileIntoString(const std::string& path); }; DM::DM(/* args */) { } DM::~DM() { } void DM::runAccumulator(std::vector<std::string> data) { double prevVTheta = vTheta; double length = 5.4; double width = 2.0; // calculate positional data double 
targetPosX = stod(data[3]) - length/2.0; double egoPosX = stod(data[2]) + length/2.0; double distEgoToTarget = targetPosX - egoPosX; vTheta = 2.0* std::atan2(width,distEgoToTarget); vThetaDot = (vTheta-prevVTheta)/timestep; vp = vThetaDot/vTheta; prevDistEgoToTarget = distEgoToTarget; std::cout << vp << std::endl; } void DM::readFileIntoString(const std::string& path) { this -> sampleCSV.open(path); if (!sampleCSV.is_open()) { std::cerr << "Could not open the file - '" << path << "'" << std::endl; exit(EXIT_FAILURE); } std::string temp; getline(this->sampleCSV, temp); } std::vector<std::string> split (std::string s, std::string delimiter) { size_t pos_start = 0, pos_end, delim_len = delimiter.length(); std::string token; std::vector<std::string> res; while ((pos_end = s.find (delimiter, pos_start)) != std::string::npos) { token = s.substr (pos_start, pos_end - pos_start); pos_start = pos_end + delim_len; res.push_back(token); } res.push_back(s.substr (pos_start)); return res; } int main(int argc, char const *argv[]) { DM dm; dm.readFileIntoString("../log/sample.csv"); std::string buffer; std::vector<std::string> linevec; for (size_t i = 0; i < 700; i++) { getline(dm.sampleCSV, buffer); linevec = split(buffer, ","); dm.runAccumulator(linevec); } return 0; } CSV: https://pastebin.com/8gBm5z8q
[ "As a rule of thumb, if you every find yourself in a situation, where your code does not work because of floating point arithmetic, it is very likely that at least one of the following sentences applies to you:\n\nYou work in a very niche field of research.\nYou have a bug in your code.\nYou are dealing with a mathematically ill-posed problem.\n\nIn your case, you have a bug in your code. Your C++ code and your Python code do not do the same thing:\n\nYou have an index error (you should get targetPosX from data[4] not from data[3])\nIn your Python code, the second argument to the arctan2 function is 2.0*self.prevDistEgoToTarget, while in the C++ code it's distEgoToTarget.\nYour Python code does one more iteration than your C++ code.\n\nThese errors can easily be found by printing all intermediate values in just one iteration. You don't even need a debugger for this. If you fix these issues, you get a vp value of 0.08624797012674965 from Python and 0.086247970126749646 from C++ after 700 iterations. I would argue that the difference is negligible.\n", "IEEE754 guarantees a few things such as:\n\nThe format of FP values\nBasic operations (+,-,*,/ and sqrt) are exact. The result is as if you did the calculation with infinite precision, and then rounded the result.\n\nWhat IEEE754 does not guarantee is that other functions such as arctan are rounded in the same way. And therefore, Python and C++ can reasonably differ by 1 ULP.\n[background]\nThe problem with \"exact rounding\" is that you need an algorithm to do it correctly. For the 5 basic functions, there were well-known, efficient algorithms at the time IEEE754 was drafted. Since then, efficient algorithms have been discovered for more functions, but there are many, many functions that still do not have such algorithms. In particular, many functions still use tables and interpolation, and that's almost automatically inexact.\n" ]
[ 1, 0 ]
[]
[]
[ "c++", "floating_point", "python", "python_3.x" ]
stackoverflow_0074401499_c++_floating_point_python_python_3.x.txt
Q: create a new dataframe from selecting specific rows from existing dataframe python i have a table in my pandas dataframe. df id count price 1 2 100 2 7 25 3 3 720 4 7 221 5 8 212 6 2 200 i want to create a new dataframe(df2) from this, selecting rows where count is 2 and price is 100,and count is 7 and price is 221 my output should be df2 = id count price 1 2 100 4 7 221 i am trying using df[df['count'] == '2' & df['price'] == '100'] but getting error TypeError: cannot compare a dtyped [object] array with a scalar of type [bool] A: You nedd add () because & has higher precedence than ==: df3 = df[(df['count'] == '2') & (df['price'] == '100')] print (df3) id count price 0 1 2 100 If need check multiple values use isin: df4 = df[(df['count'].isin(['2','7'])) & (df['price'].isin(['100', '221']))] print (df4) id count price 0 1 2 100 3 4 7 221 But if check numeric, use: df3 = df[(df['count'] == 2) & (df['price'] == 100)] print (df3) df4 = df[(df['count'].isin([2,7])) & (df['price'].isin([100, 221]))] print (df4) A: if you want to do by index id you could do: new_df = (df.iloc[[1]]).append(df.iloc[[6]])
create a new dataframe from selecting specific rows from existing dataframe python
I have a table in my pandas dataframe df:
id count price
1  2     100
2  7     25
3  3     720
4  7     221
5  8     212
6  2     200
I want to create a new dataframe (df2) from this, selecting rows where count is 2 and price is 100, and count is 7 and price is 221. My output should be:
df2 =
id count price
1  2     100
4  7     221
I am trying df[df['count'] == '2' & df['price'] == '100'] but getting the error TypeError: cannot compare a dtyped [object] array with a scalar of type [bool]
[ "You nedd add () because & has higher precedence than ==:\ndf3 = df[(df['count'] == '2') & (df['price'] == '100')]\nprint (df3)\n id count price\n0 1 2 100\n\nIf need check multiple values use isin:\ndf4 = df[(df['count'].isin(['2','7'])) & (df['price'].isin(['100', '221']))]\nprint (df4)\n id count price\n0 1 2 100\n3 4 7 221\n\nBut if check numeric, use:\ndf3 = df[(df['count'] == 2) & (df['price'] == 100)]\nprint (df3)\n\ndf4 = df[(df['count'].isin([2,7])) & (df['price'].isin([100, 221]))]\nprint (df4)\n\n", "if you want to do by index id you could do:\nnew_df = (df.iloc[[1]]).append(df.iloc[[6]])\n" ]
[ 18, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0040885318_pandas_python.txt
Q: How to skip Index count using enumerate when value is zero? I have currently one dict with key-value and one list with values. e.g. data = { 'total': { '06724': 0, '06725': 0, '06726': 0, '06727': 0, '06712': 22, '06713': 35, '06714': 108, '06715': 70, '06716': 0, '06717': 24, '06718': 0, '06719': 0, '06720': 0, '06709': 75, '06710': 123, '06711': 224, '06708': 28, '06723': 0, '06721': 0, '06722': 0 }, 'item_number': ['1', '2', '3', '4', '5', '6', '7', '8', '9'] } for Index, value in enumerate(data['total'].values()): if value and value != '0': print(data['item_number'][Index], value) What I am trying to do is that I want to remove all values in 'total' that has the value 0 meaning that it would only end up being 9 numbers which adds up to the item_number amount. What I am trying to achieve is that I want print out: Expected: { '1': 22, '2': 35, '3': 108, '4': 70, '5': 24, '6': 75, '7': 123, '8': 224, '9': 28 } where key is the item_number and the value is total. However the code I am currently trying gives me the error: print(data['item_number'][Index], value) IndexError: list index out of range which I believe is due to the Index increasing for each loop. I wonder how can I skip the counting increase if the value is 0? A: If I understand you correctly: out = dict( zip(data["item_number"], (v for v in data["total"].values() if v != 0)) ) print(out) Prints: { "1": 22, "2": 35, "3": 108, "4": 70, "5": 24, "6": 75, "7": 123, "8": 224, "9": 28, } A: Either use a count variable instead of enumerate(), or filter before enumerating. Count variable index = 0 for value in data['total'].values(): if value != 0: print((data['item_number'][index], value)) index += 1 Output: ('1', 22) ('2', 35) ('3', 108) ('4', 70) ('5', 24) ('6', 75) ('7', 123) ('8', 224) ('9', 28) Filter first for index, value in enumerate(v for v in data['total'].values() if v != 0): print((data['item_number'][index], value)) (Same output) And this can be simplified further using zip() like in Andrej's answer. Note: I'm not using a dict here for simplicity and because your code as posted doesn't use one. I think it's obvious how you would incorporate one though. A: There are two tasks at hand here: Filter out the zero values from data["total"] Create a dictionary, where keys are the elements of data["item_number"], and values are from the filtered collection we created above. So, let's do that: filtered_values = [val for val in data["total"].values() if val] # [22, 35, 108, 70, 24, 75, 123, 224, 28] new_dict = dict(zip(data["item_number"], filtered_values)) Which gives the required new_dict: {'1': 22, '2': 35, '3': 108, '4': 70, '5': 24, '6': 75, '7': 123, '8': 224, '9': 28} You can save one loop through your data["total"] if you define filtered_values as a generator expression, or combine the two lines: new_dict = dict(zip(data["item_number"], (val for val in data["total"].values() if val)))
How to skip Index count using enumerate when value is zero?
I currently have one dict with key-value pairs and one list with values, e.g.
data = {
    'total': {
        '06724': 0, '06725': 0, '06726': 0, '06727': 0, '06712': 22,
        '06713': 35, '06714': 108, '06715': 70, '06716': 0, '06717': 24,
        '06718': 0, '06719': 0, '06720': 0, '06709': 75, '06710': 123,
        '06711': 224, '06708': 28, '06723': 0, '06721': 0, '06722': 0
    },
    'item_number': ['1', '2', '3', '4', '5', '6', '7', '8', '9']
}

for Index, value in enumerate(data['total'].values()):
    if value and value != '0':
        print(data['item_number'][Index], value)
What I am trying to do is remove all values in 'total' that have the value 0, meaning it would only end up being 9 numbers, which matches the item_number count. What I want to print out is:
Expected:
{
    '1': 22, '2': 35, '3': 108, '4': 70, '5': 24,
    '6': 75, '7': 123, '8': 224, '9': 28
}
where the key is the item_number and the value is the total. However, the code I am currently trying gives me the error:
print(data['item_number'][Index], value)
IndexError: list index out of range
which I believe is due to the Index increasing on each loop. How can I skip the count increase if the value is 0?
[ "If I understand you correctly:\nout = dict(\n zip(data[\"item_number\"], (v for v in data[\"total\"].values() if v != 0))\n)\nprint(out)\n\nPrints:\n{\n \"1\": 22,\n \"2\": 35,\n \"3\": 108,\n \"4\": 70,\n \"5\": 24,\n \"6\": 75,\n \"7\": 123,\n \"8\": 224,\n \"9\": 28,\n}\n\n", "Either use a count variable instead of enumerate(), or filter before enumerating.\nCount variable\nindex = 0\nfor value in data['total'].values():\n if value != 0:\n print((data['item_number'][index], value))\n index += 1\n\nOutput:\n('1', 22)\n('2', 35)\n('3', 108)\n('4', 70)\n('5', 24)\n('6', 75)\n('7', 123)\n('8', 224)\n('9', 28)\n\nFilter first\nfor index, value in enumerate(v for v in data['total'].values() if v != 0):\n print((data['item_number'][index], value))\n\n(Same output)\nAnd this can be simplified further using zip() like in Andrej's answer.\n\nNote: I'm not using a dict here for simplicity and because your code as posted doesn't use one. I think it's obvious how you would incorporate one though.\n", "There are two tasks at hand here:\n\nFilter out the zero values from data[\"total\"]\nCreate a dictionary, where keys are the elements of data[\"item_number\"], and values are from the filtered collection we created above.\n\nSo, let's do that:\nfiltered_values = [val for val in data[\"total\"].values() if val]\n# [22, 35, 108, 70, 24, 75, 123, 224, 28]\n\nnew_dict = dict(zip(data[\"item_number\"], filtered_values))\n\nWhich gives the required new_dict:\n{'1': 22,\n '2': 35,\n '3': 108,\n '4': 70,\n '5': 24,\n '6': 75,\n '7': 123,\n '8': 224,\n '9': 28}\n\nYou can save one loop through your data[\"total\"] if you define filtered_values as a generator expression, or combine the two lines:\nnew_dict = dict(zip(data[\"item_number\"], (val for val in data[\"total\"].values() if val)))\n\n" ]
[ 1, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0074648412_python.txt
Q: Error when trying to update pip Good morning, I use Linux Void with Openbox and SpaceFm. I would like to update the pip, but uninstalling the installed version (9.0.3) fails: is there a solution? I also tried the "python3 -m pip install --upgrade pip" command with the same result. This is the terminal output. Thank you. $ pip install --upgrade pip Collecting pip Downloading https://files.pythonhosted.org/packages/0f/74/ecd13431bcc456ed390b44c8a6e917c1820365cbebcb6a8974d1cd045ab4/pip-10.0.1-py2.py3-none-any.whl (1.3MB) 100% |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1.3MB 280kB/s Installing collected packages: pip Found existing installation: pip 9.0.3 Uninstalling pip-9.0.3: Exception: Traceback (most recent call last): File "/usr/lib/python3.6/shutil.py", line 544, in move os.rename(src, real_dst) OSError: [Errno 18] Collegamento tra dispositivi non valido: '/usr/lib/python3.6/site-packages/pip' -> '/tmp/pip-xsteet9f-uninstall/usr/lib/python3.6/site-packages/pip' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/lib/python3.6/site-packages/pip/basecommand.py", line 215, in main status = self.run(options, args) File "/usr/lib/python3.6/site-packages/pip/commands/install.py", line 342, in run prefix=options.prefix_path, File "/usr/lib/python3.6/site-packages/pip/req/req_set.py", line 778, in install requirement.uninstall(auto_confirm=True) File "/usr/lib/python3.6/site-packages/pip/req/req_install.py", line 754, in uninstall paths_to_remove.remove(auto_confirm) File "/usr/lib/python3.6/site-packages/pip/req/req_uninstall.py", line 115, in remove renames(path, new_path) File "/usr/lib/python3.6/site-packages/pip/utils/__init__.py", line 267, in renames shutil.move(old, new) File "/usr/lib/python3.6/shutil.py", line 556, in move rmtree(src) File "/usr/lib/python3.6/shutil.py", line 480, in rmtree _rmtree_safe_fd(fd, path, onerror) File "/usr/lib/python3.6/shutil.py", line 418, in _rmtree_safe_fd _rmtree_safe_fd(dirfd, fullname, onerror) File "/usr/lib/python3.6/shutil.py", line 438, in _rmtree_safe_fd onerror(os.unlink, fullname, sys.exc_info()) File "/usr/lib/python3.6/shutil.py", line 436, in _rmtree_safe_fd os.unlink(name, dir_fd=topfd) PermissionError: [Errno 13] Permesso negato: 'freeze.py' You are using pip version 9.0.3, however version 10.0.1 is available. You should consider upgrading via the 'pip install --upgrade pip' command. A: Try using "sudo" or any user with enough privileges like "root" A: Try putting "--user" at the end of the command
Error when trying to update pip
Good morning, I use Linux Void with Openbox and SpaceFm. I would like to update the pip, but uninstalling the installed version (9.0.3) fails: is there a solution? I also tried the "python3 -m pip install --upgrade pip" command with the same result. This is the terminal output. Thank you. $ pip install --upgrade pip Collecting pip Downloading https://files.pythonhosted.org/packages/0f/74/ecd13431bcc456ed390b44c8a6e917c1820365cbebcb6a8974d1cd045ab4/pip-10.0.1-py2.py3-none-any.whl (1.3MB) 100% |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1.3MB 280kB/s Installing collected packages: pip Found existing installation: pip 9.0.3 Uninstalling pip-9.0.3: Exception: Traceback (most recent call last): File "/usr/lib/python3.6/shutil.py", line 544, in move os.rename(src, real_dst) OSError: [Errno 18] Collegamento tra dispositivi non valido: '/usr/lib/python3.6/site-packages/pip' -> '/tmp/pip-xsteet9f-uninstall/usr/lib/python3.6/site-packages/pip' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/lib/python3.6/site-packages/pip/basecommand.py", line 215, in main status = self.run(options, args) File "/usr/lib/python3.6/site-packages/pip/commands/install.py", line 342, in run prefix=options.prefix_path, File "/usr/lib/python3.6/site-packages/pip/req/req_set.py", line 778, in install requirement.uninstall(auto_confirm=True) File "/usr/lib/python3.6/site-packages/pip/req/req_install.py", line 754, in uninstall paths_to_remove.remove(auto_confirm) File "/usr/lib/python3.6/site-packages/pip/req/req_uninstall.py", line 115, in remove renames(path, new_path) File "/usr/lib/python3.6/site-packages/pip/utils/__init__.py", line 267, in renames shutil.move(old, new) File "/usr/lib/python3.6/shutil.py", line 556, in move rmtree(src) File "/usr/lib/python3.6/shutil.py", line 480, in rmtree _rmtree_safe_fd(fd, path, onerror) File "/usr/lib/python3.6/shutil.py", line 418, in _rmtree_safe_fd _rmtree_safe_fd(dirfd, fullname, onerror) File "/usr/lib/python3.6/shutil.py", line 438, in _rmtree_safe_fd onerror(os.unlink, fullname, sys.exc_info()) File "/usr/lib/python3.6/shutil.py", line 436, in _rmtree_safe_fd os.unlink(name, dir_fd=topfd) PermissionError: [Errno 13] Permesso negato: 'freeze.py' You are using pip version 9.0.3, however version 10.0.1 is available. You should consider upgrading via the 'pip install --upgrade pip' command.
[ "Try using \"sudo\" or any user with enough privileges like \"root\"\n", "Try putting \"--user\" at the end of the command\n" ]
[ 0, 0 ]
[]
[]
[ "linux", "pip", "python", "python_3.x" ]
stackoverflow_0050113257_linux_pip_python_python_3.x.txt
Q: Continue running code when KeyError: "['Column'] not in index" occurs for null values? I get KeyError: "['marketCap'] not in index" when there is no data in the "marketCap" column for a particular symbol. How do I put "Null" when there's no data so the code can continue running and not error out? import pandas as pd from yahooquery import Ticker symbols = ['MSFT','GOOG','AAPL'] #I have to put 75,000+ symbols here. header = ["regularMarketPrice", "marketCap"] for tick in symbols: faang = Ticker(tick) faang.price df = pd.DataFrame(faang.price).T df.to_csv('output.csv', mode='a', index=True, header=False, columns=header) A: Got this working: import pandas as pd from yahooquery import Ticker symbols = ['MSFT','GOOG','AAPL'] #I have to put 75,000+ symbols here. header = ["regularMarketPrice", "marketCap"] for tick in symbols: faang = Ticker(tick) faang.price df = pd.DataFrame(faang.price,{"regularMarketPrice":[1],"marketCap":[2]}).T #adding the {...} fixed a 'scalar value' error as well. try: df.to_csv('mktcaptest.csv', mode='a', index=True, header=False, columns=header) except KeyError: continue
Continue running code when KeyError: "['Column'] not in index" occurs for null values?
I get KeyError: "['marketCap'] not in index" when there is no data in the "marketCap" column for a particular symbol. How do I put "Null" when there's no data so the code can continue running and not error out? import pandas as pd from yahooquery import Ticker symbols = ['MSFT','GOOG','AAPL'] #I have to put 75,000+ symbols here. header = ["regularMarketPrice", "marketCap"] for tick in symbols: faang = Ticker(tick) faang.price df = pd.DataFrame(faang.price).T df.to_csv('output.csv', mode='a', index=True, header=False, columns=header)
[ "Got this working:\nimport pandas as pd\nfrom yahooquery import Ticker\n\nsymbols = ['MSFT','GOOG','AAPL'] #I have to put 75,000+ symbols here.\nheader = [\"regularMarketPrice\", \"marketCap\"]\n\nfor tick in symbols:\n faang = Ticker(tick)\n faang.price\n df = pd.DataFrame(faang.price,{\"regularMarketPrice\":[1],\"marketCap\":[2]}).T \n#adding the {...} fixed a 'scalar value' error as well.\n try:\n df.to_csv('mktcaptest.csv', mode='a', index=True, header=False, columns=header)\n except KeyError:\n continue\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "csv", "dataframe", "pandas", "python" ]
stackoverflow_0074645315_arrays_csv_dataframe_pandas_python.txt
Q: How to read an excel file with data and some empty cells in panda's python? I have an excel file with huge dataset. I tried to read the excel file using the below command using pandas. df = pd.read_csv(f'{cwd}/data.csv', keep_default_na=False, header=None) print(df) However the empty rows found in the csv file is missing in the output. I get something like below. Input: Output from the code: 1 1 2 2 3 3 4 4 5 5 6 6 A: You need to specify the parameter skip_blank_lines=False from pandas.read_csv. Here's a fixed version of your code: import pandas as pd df = pd.read_csv(f'{cwd}/data.csv', header=None, na_filter=False, skip_blank_lines=False) df Outputs: Or: import pandas as pd df = pd.read_csv(f'{cwd}/data.csv', header=None, skip_blank_lines=False) df Outputs:
How to read an excel file with data and some empty cells in panda's python?
I have an excel file with a huge dataset. I tried to read the file using the command below in pandas.
df = pd.read_csv(f'{cwd}/data.csv', keep_default_na=False, header=None)
print(df)
However, the empty rows found in the csv file are missing in the output. I get something like below.
Input:
Output from the code:
1 1
2 2
3 3
4 4
5 5
6 6
[ "You need to specify the parameter skip_blank_lines=False from pandas.read_csv. Here's a fixed version of your code:\nimport pandas as pd\n\ndf = pd.read_csv(f'{cwd}/data.csv', header=None, na_filter=False, skip_blank_lines=False)\ndf\n\nOutputs:\n\nOr:\nimport pandas as pd\n\ndf = pd.read_csv(f'{cwd}/data.csv', header=None, skip_blank_lines=False)\ndf\n\nOutputs:\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074648226_pandas_python.txt