Q: Can setup.py / pip require a certain version of another package IF that package is already installed?

I have two Python packages (locust-swarm and locust-plugins). Neither has a strict requirement on the other, but they can work together, and my users install them separately. Sometimes there is a breaking change in one or the other, and I want to make sure nobody installs incompatible versions (by updating package A but not package B, for example). Is there a way to specify a minimum version of this "pseudo-dependency" and fail the install if it is not satisfied? A check that is only done if the other package is already installed. I do not want to add one package as a dependency of the other and force users of package A to install package B just to be able to handle this case. Probably this question has been asked before, but I couldn't find an answer.

A: I think you can do this in your A/setup.py file (and the same in your B/setup.py file, just change package_B_name to package_A_name):

import importlib.util

spec = importlib.util.find_spec(package_B_name)
if spec is not None:
    requirements_list.append(f'{package_B_name}>={package_B_version}')

Here package_B_name, package_B_version, and requirements_list are placeholders; requirements_list is the list you later pass to setup(install_requires=...). Note that this only works on Python 3.3+ and only for source distributions. It will not work when installing from a binary wheel (.whl), because setup.py is not executed on the user's machine in that case.

A: If you know how to convert a requirements.txt into a setup.py accordingly, try this, for example:

my-package>=minimum.version

This requirement is satisfied when the installed package is at or above the given version, which is what makes it a minimum version.

A (downvoted): I don't know if I understand the question correctly, but you can specify the minimum required version in the install_requires list in the setup function, like so:

install_requires=['locust-swarm >= 1.2', 'locust-plugins >= 1.1']

I hope this answers your question; if it doesn't, let me know and I will look into it further.
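Pulling the first answer's idea together, here is a minimal, self-contained setup.py sketch. The package names and the version bound are illustrative assumptions, not the questioner's real values; note that find_spec() takes the import name (locust_plugins) while the requirement string uses the distribution name (locust-plugins).

import importlib.util
from setuptools import setup, find_packages

install_requires = []  # the package's unconditional dependencies go here

# Pin the companion package only if the user already has it installed.
if importlib.util.find_spec("locust_plugins") is not None:
    install_requires.append("locust-plugins>=2.0")  # hypothetical minimum version

setup(
    name="locust-swarm",
    version="1.0.0",  # hypothetical
    packages=find_packages(),
    install_requires=install_requires,
)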
[ "pip", "python", "python_packaging", "setup.py" ]
stackoverflow_0074041392_pip_python_python_packaging_setup.py.txt
Q: Separating .txt file with Python

I have to separate a .txt file into small pieces, based on a matched value. For example, I have a .txt file that looks like:

Names Age Country
Mark 19 USA
John 19 UK
Elon 20 CAN
Dominic 21 USA
Andreas 21 UK

I have to extract all rows with the same value of "Age" and copy them to another file or perform some other action. How can this be done with Python? I have never done that before. Thank you in advance :) I am asking because I have no idea how it should be done. The expected result is to have this data separated:

Names Age Country
Mark 19 USA
John 19 UK

Names Age Country
Elon 20 CAN

Names Age Country
Dominic 21 USA
Andreas 21 UK

A: Here is a possible solution. Note that file objects have write() and writelines() but no writeline() method, and that the grouped rows are kept as the original line strings so they can be written back out directly:

with open('yourfile.txt') as infile:
    header = next(infile)
    ages = {}

    for line in infile:
        name, age, country = line.rsplit(' ', 2)
        if age not in ages:
            ages[age] = []
        ages[age].append(line)  # keep the original line so it can be written back verbatim

for age in ages:
    with open(f'age-{age}.txt', 'w') as agefile:
        agefile.write(header)
        agefile.writelines(ages[age])

For the sample you posted, the code above will leave you with files named age-19.txt, age-20.txt, and age-21.txt, with the contents separated by age, as you requested.

A (downvoted): If you have them all in a list you can use something like this...

alltext = ["Names Age Country", "Mark 21 USA", "John 21 UK", "Elon 20 CAN", "Dominic 21 USA", "Andreas 21 UK"]

Canada = [alltext[0]]     # Creates a list with your column header
NotCanada = [alltext[0]]  # Creates a list with your column header

for row in alltext[1:]:
    x = row.split()
    if x[2] == "CAN":
        Canada.append(row)
    else:
        NotCanada.append(row)

print(Canada)
print(NotCanada)

Will print two different lists of your separated players:

['Names Age Country', 'Elon 20 CAN']
['Names Age Country', 'Mark 21 USA', 'John 21 UK', 'Dominic 21 USA', 'Andreas 21 UK']
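If bringing in pandas is acceptable, the same grouping can be written more compactly. This sketch assumes the columns are strictly whitespace-separated and that names contain no spaces:

import pandas as pd

df = pd.read_csv('yourfile.txt', sep=r'\s+')
for age, group in df.groupby('Age'):
    # one output file per distinct Age value, mirroring the answer above
    group.to_csv(f'age-{age}.txt', sep=' ', index=False)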
[ "file", "python", "txt" ]
stackoverflow_0074635111_file_python_txt.txt
Q: Why is Beautifulsoup's selector showing error while the Scrapy's response.css working absolutely fine?

I am trying to scrape a div tag whose id attribute equals today's date. I created a BeautifulSoup object and used the select method, but it raises an error. My code:

import requests
from bs4 import BeautifulSoup
from datetime import date

res = requests.get('https://sports.ndtv.com/fifa-world-cup-2022/schedules-fixtures')
soup = BeautifulSoup(res.text, 'html.parser')
date_today = date.today()
d1 = date_today.strftime("%d-%m-%Y")
cont = soup.select('div#' + d1)

This is raising the error:

raise SelectorSyntaxError(msg, self.pattern, index)
soupsieve.util.SelectorSyntaxError: Malformed id selector at position 3
line 1:
div#30-11-2022

While when I use the Scrapy shell:

response.css('div#' + d1)[0].css('span.location::text').get()
'Ahmad Bin Ali Stadium, Al Rayyan'

it works perfectly fine. Can anyone please suggest what am I doing wrong? Thanks

A: bs4 [or rather soupsieve] rejects this selector because a CSS identifier is not allowed to begin with a digit, so an id selector like #30-11-2022 is malformed per the CSS specification (Scrapy's selector backend happens to be more lenient about this). You can get around it by using an attribute selector instead:

cont = soup.select(f'div[id="{d1}"]')

should work - give it a try.
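Another way to sidestep CSS selector parsing entirely is BeautifulSoup's keyword-argument filtering. A sketch under the same page-structure assumptions as the question:

import requests
from bs4 import BeautifulSoup
from datetime import date

res = requests.get('https://sports.ndtv.com/fifa-world-cup-2022/schedules-fixtures')
soup = BeautifulSoup(res.text, 'html.parser')
d1 = date.today().strftime("%d-%m-%Y")

cont = soup.find('div', id=d1)  # attribute matching, no CSS grammar involved
if cont is not None:
    locations = [span.get_text() for span in cont.select('span.location')]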
[ "beautifulsoup", "css", "html", "python", "scrapy" ]
stackoverflow_0074620048_beautifulsoup_css_html_python_scrapy.txt
Q: Python inspect.stack is slow

I was just profiling my Python program to see why it seemed to be rather slow. I discovered that the majority of its running time was spent in the inspect.stack() method (for outputting debug messages with modules and line numbers), at 0.005 seconds per call. This seems rather high; is inspect.stack really this slow, or could something be wrong with my program?

A: inspect.stack() does two things:

- collect the stack by asking the interpreter for the stack frame from the caller (sys._getframe(1)) then following all the .f_back references. This is cheap.
- per frame, collect the filename, line number, and source file context (the source file line plus some extra lines around it if requested). The latter requires reading the source file for each stack frame. This is the expensive step.

To switch off the file context loading, set the context parameter to 0:

inspect.stack(0)

Even with context set to 0, you still incur some filesystem access per frame, as the filename is determined and verified to exist for each frame.

A: inspect.stack(0) can be faster than inspect.stack(). Even so, it is fastest to avoid calling it altogether, and perhaps use a pattern such as this instead:

frame = inspect.currentframe()
while frame:
    if has_what_i_want(frame):  # customize
        return what_i_want(frame)  # customize
    frame = frame.f_back

Note that the last frame.f_back is None, and the loop will then end. sys._getframe(1) should obviously not be used because it is an internal method. As an alternative, inspect.getouterframes(inspect.currentframe()) can be looped over, but this is expected to be slower than the above approach.

A: Here's a concrete example building on the other answers, showing how to efficiently walk back up the stack to find the typical caller information (filename, line number, function name) incorporated into debug messages.

import sys
from collections import namedtuple


FrameInfo = namedtuple('FrameInfo', ['filename', 'lineno', 'function'])


def frame_info(walkback=0):
    # NOTE: sys._getframe() is a tiny bit faster than inspect.currentframe()
    # Although the function name is prefixed with an underscore, it is
    # documented and fine to use assuming we are running under CPython:
    #
    # https://docs.python.org/3/library/sys.html#sys._getframe
    #
    frame = sys._getframe().f_back

    for __ in range(walkback):
        f_back = frame.f_back
        if not f_back:
            break

        frame = f_back

    return FrameInfo(frame.f_code.co_filename, frame.f_lineno, frame.f_code.co_name)
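A quick way to see the difference on your own machine is a micro-benchmark sketch like the one below; the absolute numbers will vary with the interpreter, the disk cache, and how deep the call stack is at the measurement point:

import inspect
import sys
import timeit

print(timeit.timeit(lambda: inspect.stack(), number=1000))    # reads source files
print(timeit.timeit(lambda: inspect.stack(0), number=1000))   # skips file context
print(timeit.timeit(lambda: sys._getframe(0), number=1000))   # raw frame access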
[ "inspect", "introspection", "python" ]
stackoverflow_0017407119_inspect_introspection_python.txt
Q: Selecting specific column tags to display from a set of rows using BeautifulSoup and Flask

I'm trying to create a table that retrieves the most recent n injuries from the CBS NFL injuries page, just to add an aesthetic to a project. I have no problem scraping the data and separating it into rows or individual columns, but I've spent 2 full days trying to find answers and fix this, so I need to move onward.

What I am seeing: (screenshot omitted)

I would like to see (for each row):

ARI Budda Baker SS Ankle
ARI Markus Golden OLB Illness
etc.

My current Python app.py code:

@app.route("/", methods=("GET", "POST"), strict_slashes=False)
def index():
    # Parsing code will go here
    if request.method == "POST":
        try:
            global url, specific_element
            url = "https://www.cbssports.com/nfl/injuries/daily"
            only_tr = SoupStrainer('tr')
            source = requests.get(url).text
            soup = BeautifulSoup(source, 'lxml', parse_only=only_tr)
            specific_element = soup.find_all('tr', limit=16)[1:]
            return render_template("index.html", results=specific_element)
        except Exception as e:
            flash(e, 'danger')
    return render_template('index.html')

if __name__ == "__main__":
    app.run()

My HTML:

<div class="w3-third">
  <div class="w3-card-4 w3-container" style="min-height:500px">
    <div class="col-md-8">
      <h3>15 Most Recent NFL Injuries</h3>
      <div class="bg-white shadow p-4 rounded results">
        {% if results %}
          {% for result in results %}
            <p> {{ result | striptags }} </p>
          {% endfor %}
        {% endif %}
      </div>
    </div>
  </div>
</form>
</div>
</div>

I've tried to separate the individual values to create a dataframe using each column in a dict, as such:

# Scrape CBS NFL Daily Injuries
executable_path = {'executable_path': ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path, headless=True)

# Visit the webpage
url = "https://www.cbssports.com/nfl/injuries/daily"
browser.visit(url)

# Convert the browser html to a soup object
html = browser.html
soup = BeautifulSoup(html, 'lxml')

# Create empty lists
player = []
position = []
injury = []
team = []
logo = []

# Add try/except for error handling
try:
    specific_element = soup.select('tr.TableBase-bodyTr')
    # Find all of the tr rows
    rows = soup.findAll('tr', limit=21)[1:]  # the 0th tr is headers
except AttributeError:
    return None, None

# Get info from each row
for i in range(len(rows)):
    player.append(specific_element[i].find('span', class_='CellPlayerName--long').get_text())
    position.append(specific_element[i].find('td', class_='TableBase-bodyTd').next_sibling.next_sibling.get_text().strip())
    injury.append(specific_element[i].find('td', class_='TableBase-bodyTd').next_sibling.next_sibling.next_sibling.get_text().strip())
    team.append(specific_element[i].find('span', class_='TeamName').get_text())
    logo.append(specific_element[i].find('img', class_='TeamLogo-image').get('src'))

recent_injuries = pd.DataFrame({
    'Team': team,
    'Player': player,
    'Position': position,
    'Injury': injury
})

return recent_injuries.to_html(classes="table table-hover")

This returned a dataframe with the correct values in Jupyter, but when I tried to place the dataframe into the HTML table, it formatted it with a lot of brackets. I also tried using find_all('td'), which does return all of the correct information, but separates each value by row. I wanted to use the 'tr' because all of the information is there, but I don't know how to remove the lines I don't want, or how to separate the values that I do.

A: The issue with the doubling of the name was a dynamic change with the webpage size: the page provides a shortened name at small sizes and a longer name when larger, but both appear under the same tag, tr.select('td')[1]. By specifically selecting the span with class_='CellPlayerName--long', I was able to extract only the longer version of the name. The updated working code follows:

Flask:

@app.route("/", methods=("GET", "POST"), strict_slashes=False)
def index():
    url = "https://www.cbssports.com/nfl/injuries/daily"
    source = requests.get(url)
    soup = BeautifulSoup(source.text, 'html.parser')

    table_data = []
    trs = soup.select('tr.TableBase-bodyTr')

    for tr in trs[1:16]:
        row = []
        row.append(tr.select('td')[0].text)                               # team
        row.append(tr.find('span', class_='CellPlayerName--long').text)  # player (long form only)
        row.append(tr.select('td')[2].text)                               # position
        row.append(tr.select('td')[3].text)                               # injury
        table_data.append(row)

    return render_template('index.html', data=table_data)

HTML:

<div class="w3-third">
  <div class="w3-card-4 w3-container" style="min-height:500px;width:97%">
    <div class="w3-col s4">
      <h3 class="text-nowrap">15 Most Recent Injuries</h3>
      <table class="table table-striped text-nowrap">
        <thead class="table-header">
          <th scope="col">Team</th>
          <th scope="col">Player</th>
          <th scope="col">Position</th>
          <th scope="col">Injury</th>
        </thead>
        <tbody>
          {% for element in data %}
          <tr>
            <th class="font-weight-light" scope="col">{{element[0]}}</th>
            <th class="font-weight-light" scope="col">{{element[1]}}</th>
            <th class="font-weight-light" scope="col">{{element[2]}}</th>
            <th class="font-weight-light" scope="col">{{element[3]}}</th>
          </tr>
          {% endfor %}
        </tbody>
      </table>
    </div>
  </div>
</div>

CSS: All CSS here is from w3, so no code needs to be added to the style.css file:

<link rel="stylesheet" href="https://www.w3schools.com/w3css/4/w3.css">

The updated table: (screenshot omitted)
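As a side note, for simple static tables pandas can sometimes shortcut the row-by-row extraction entirely. Whether this works here depends on the page serving plain HTML to non-browser clients, so treat it as a sketch to try rather than a guaranteed fix:

import pandas as pd

# read_html needs lxml (or html5lib) installed and may be blocked by the site
tables = pd.read_html("https://www.cbssports.com/nfl/injuries/daily")
recent = tables[0].head(15)  # first table on the page, first 15 rows
print(recent.to_string(index=False))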
[ "beautifulsoup", "flask", "html", "python" ]
stackoverflow_0074484725_beautifulsoup_flask_html_python.txt
Q: How to remove the special character '^' in a python string without removing whitespace with it

I've been wondering how to remove the special character '^' in a Python string; it seems like it doesn't behave like the other special characters. I was trying to remove some special characters in a dataframe by using the code below:

def remove_special_characters(text, remove_digits=True):
    text = re.sub(r'[^a-zA-z0-9\s]+', '', text)
    return text

df['review'] = df['review'].apply(remove_special_characters)

but the symbol '^' is still appearing in my data. Do you know some code to remove it, please?

A: The problem is the character range A-z in your class: it runs from 'A' (0x41) all the way to 'z' (0x7A), so it also matches the characters between 'Z' and 'a', namely [ \ ] ^ _ and the backtick. Since '^' is therefore inside the negated class, re.sub never removes it. Capitalize the Z:

r'[^a-zA-Z0-9\s]+'

But the use case you're tackling is already addressed by translate(), without any need to resort to power tools like regexes.

https://docs.python.org/3/library/stdtypes.html#str.maketrans

You're incurring the cost of parsing / compiling the regex N times, when a single time would suffice. Consider defining this once:

pattern = re.compile(r'[^a-zA-Z0-9\s]+')
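Two short sketches of the answer's suggestions, using made-up sample data (only the column name 'review' comes from the question):

import re
import pandas as pd

# 1) str.translate: simple and fast when the unwanted characters can be listed.
table = str.maketrans('', '', '^#@!')      # third argument = characters to delete
print('ab^cd#'.translate(table))           # -> abcd

# 2) Compile the corrected pattern once and reuse it across the whole column.
pattern = re.compile(r'[^a-zA-Z0-9\s]+')
df = pd.DataFrame({'review': ['great^ product!!', 'so so :^)']})
df['review'] = df['review'].str.replace(pattern, '', regex=True)
print(df)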
[ "dataframe", "python", "string", "symbols" ]
stackoverflow_0074635432_dataframe_python_string_symbols.txt
Q: How to "stretch" out a bounding box given from minAreaRect function in openCV? I wish to run a line detector between two known points on an image but firstly I need to widen the area around the line so my line detector has more area to work with. The main issue it stretch the area around line with respect to the line slope. For instance: white line generated form two points with black bounding box. I tried manualy manipulating the box array: input_to_min_area = np.array([[660, 888], [653, 540]]) # this works instead of contour as an input to minAreaRect rect = cv.minAreaRect(input_to_min_area) box = cv.boxPoints(rect) box[[0, 3], 0] += 20 box[[1, 2], 0] -= 20 box = np.int0(box) cv.drawContours(self.images[0], [box], 0, (0, 255, 255), 2) But that doesn't work for any line slope. From vertical to this angle everything is fine, but for the horizontal lines doesn't work. What would be a simpler solution that works for any line slope? A: A minAreaRect() gives you a center point, the size of the rectangle, and an angle. You could just add to the shorter side length of the rectangle. Then you have a description of a "wider rectangle". You can then do with it whatever you want, such as call boxPoints() on it. padding = 42 rect = cv.minAreaRect(input_to_min_area) (center, (w,h), angle) = rect # take it apart if w < h: # we don't know which side is longer, add to shorter side w += padding else: h += padding rect = (center, (w,h), angle) # rebuild A box around your two endpoints, widened: A: We may add the padding in the axis that is perpendicular to the angle of the " minAreaRect". Get the angle, and convert to radians angle = np.deg2rad(rect[2]) # Angle of minAreaRect Padding in each direction is perpendicular to the angle of minAreaRect pad_x = 20*np.sin(angle) pad_y = 20*np.cos(angle) Add the padding in both axes: Assume the order of the point in box is sorted according to the angle of rect (I don't know if it's always true - sorting the points may be required). box[[0, 3], 0] -= pad_x box[[1, 2], 0] += pad_x box[[0, 3], 1] += pad_y box[[1, 2], 1] -= pad_y box = np.int0(box) Code sample: import cv2 import numpy as np img = cv2.imread('sketch.png') #input_to_min_area = np.array([[584, 147], [587, 502]]) # this works instead of contour as an input to minAreaRect #input_to_min_area = np.array([[109, 515], [585, 144]]) # this works instead of contour as an input to minAreaRect input_to_min_area = np.array([[80, 103], [590, 502]]) # this works instead of contour as an input to minAreaRect rect = cv2.minAreaRect(input_to_min_area) box = cv2.boxPoints(rect) angle = np.deg2rad(rect[2]) # Angle of minAreaRect # Padding in each direction is perpendicular to the angle of minAreaRect pad_x = 20*np.sin(angle) pad_y = 20*np.cos(angle) box[[0, 3], 0] -= pad_x box[[1, 2], 0] += pad_x box[[0, 3], 1] += pad_y box[[1, 2], 1] -= pad_y box = np.int0(box) cv2.drawContours(img, [box], 0, (0, 255, 255), 2) cv2.imshow('img', img) cv2.waitKey() cv2.destroyAllWindows() Sample output: We may also want to expand the box in the parallel direction. I am still not sure about the signs... 
In this case it's simpler to update input_to_min_area: import cv2 import numpy as np img = cv2.imread('sketch.png') #input_to_min_area = np.array([[584, 147], [587, 502]]) # this works instead of contour as an input to minAreaRect #input_to_min_area = np.array([[109, 515], [585, 144]]) # this works instead of contour as an input to minAreaRect input_to_min_area = np.array([[80, 103], [590, 502]]) # this works instead of contour as an input to minAreaRect rect = cv2.minAreaRect(input_to_min_area) angle = np.deg2rad(rect[2]) # Angle of minAreaRect pad_x = int(round(20*np.cos(angle))) pad_y = int(round(20*np.sin(angle))) tmp_to_min_area = np.array([[input_to_min_area[0, 0]+pad_x, input_to_min_area[0, 1]+pad_y], [input_to_min_area[1, 0]-pad_x, input_to_min_area[1, 1]-pad_y]]) rect = cv2.minAreaRect(tmp_to_min_area) box = cv2.boxPoints(rect) angle = np.deg2rad(rect[2]) # Angle of minAreaRect # Padding in each direction is perpendicular to the angle of minAreaRect pad_x = 20*np.sin(angle) pad_y = 20*np.cos(angle) box[[0, 3], 0] -= pad_x box[[1, 2], 0] += pad_x box[[0, 3], 1] += pad_y box[[1, 2], 1] -= pad_y box = np.int0(box) cv2.drawContours(img, [box], 0, (0, 255, 255), 2) cv2.line(img, (tmp_to_min_area[0, 0], tmp_to_min_area[0, 1]), (tmp_to_min_area[1, 0], tmp_to_min_area[1, 1]), (255, 0, 0), 2) cv2.imshow('img', img) cv2.waitKey() cv2.destroyAllWindows() Output:
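Building on the first answer, a compact variant pads both dimensions of the rectangle at once, so the band is widened and also extended past the two endpoints. The endpoints here are the question's; the padding value is an assumption:

import cv2
import numpy as np

pts = np.array([[660, 888], [653, 540]], dtype=np.float32)
(center, (w, h), angle) = cv2.minAreaRect(pts)

pad = 20
w, h = w + 2 * pad, h + 2 * pad   # grow every side of the box by `pad` pixels

box = cv2.boxPoints((center, (w, h), angle)).astype(int)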
How to "stretch" out a bounding box given from minAreaRect function in openCV?
I wish to run a line detector between two known points on an image but firstly I need to widen the area around the line so my line detector has more area to work with. The main issue it stretch the area around line with respect to the line slope. For instance: white line generated form two points with black bounding box. I tried manualy manipulating the box array: input_to_min_area = np.array([[660, 888], [653, 540]]) # this works instead of contour as an input to minAreaRect rect = cv.minAreaRect(input_to_min_area) box = cv.boxPoints(rect) box[[0, 3], 0] += 20 box[[1, 2], 0] -= 20 box = np.int0(box) cv.drawContours(self.images[0], [box], 0, (0, 255, 255), 2) But that doesn't work for any line slope. From vertical to this angle everything is fine, but for the horizontal lines doesn't work. What would be a simpler solution that works for any line slope?
[ "A minAreaRect() gives you a center point, the size of the rectangle, and an angle.\nYou could just add to the shorter side length of the rectangle. Then you have a description of a \"wider rectangle\". You can then do with it whatever you want, such as call boxPoints() on it.\npadding = 42\n\nrect = cv.minAreaRect(input_to_min_area)\n\n(center, (w,h), angle) = rect # take it apart\n\nif w < h: # we don't know which side is longer, add to shorter side\n w += padding\nelse:\n h += padding\n\nrect = (center, (w,h), angle) # rebuild\n\nA box around your two endpoints, widened:\n\n", "We may add the padding in the axis that is perpendicular to the angle of the \"\nminAreaRect\".\n\nGet the angle, and convert to radians\n angle = np.deg2rad(rect[2]) # Angle of minAreaRect\n\n\nPadding in each direction is perpendicular to the angle of minAreaRect\n pad_x = 20*np.sin(angle)\n pad_y = 20*np.cos(angle)\n\n\nAdd the padding in both axes:\nAssume the order of the point in box is sorted according to the angle of rect (I don't know if it's always true - sorting the points may be required).\n box[[0, 3], 0] -= pad_x\n box[[1, 2], 0] += pad_x\n box[[0, 3], 1] += pad_y\n box[[1, 2], 1] -= pad_y\n box = np.int0(box)\n\n\n\n\nCode sample:\nimport cv2\nimport numpy as np\n\nimg = cv2.imread('sketch.png')\n\n#input_to_min_area = np.array([[584, 147], [587, 502]]) # this works instead of contour as an input to minAreaRect\n#input_to_min_area = np.array([[109, 515], [585, 144]]) # this works instead of contour as an input to minAreaRect\ninput_to_min_area = np.array([[80, 103], [590, 502]]) # this works instead of contour as an input to minAreaRect\n\nrect = cv2.minAreaRect(input_to_min_area)\nbox = cv2.boxPoints(rect)\n\nangle = np.deg2rad(rect[2]) # Angle of minAreaRect\n\n# Padding in each direction is perpendicular to the angle of minAreaRect\npad_x = 20*np.sin(angle)\npad_y = 20*np.cos(angle)\n\nbox[[0, 3], 0] -= pad_x\nbox[[1, 2], 0] += pad_x\nbox[[0, 3], 1] += pad_y\nbox[[1, 2], 1] -= pad_y\nbox = np.int0(box)\n\ncv2.drawContours(img, [box], 0, (0, 255, 255), 2)\n\ncv2.imshow('img', img)\ncv2.waitKey()\ncv2.destroyAllWindows()\n\n\nSample output:\n\n\nWe may also want to expand the box in the parallel direction.\nI am still not sure about the signs...\nIn this case it's simpler to update input_to_min_area:\nimport cv2\nimport numpy as np\n\nimg = cv2.imread('sketch.png')\n\n#input_to_min_area = np.array([[584, 147], [587, 502]]) # this works instead of contour as an input to minAreaRect\n#input_to_min_area = np.array([[109, 515], [585, 144]]) # this works instead of contour as an input to minAreaRect\ninput_to_min_area = np.array([[80, 103], [590, 502]]) # this works instead of contour as an input to minAreaRect\n\nrect = cv2.minAreaRect(input_to_min_area)\n\nangle = np.deg2rad(rect[2]) # Angle of minAreaRect\npad_x = int(round(20*np.cos(angle)))\npad_y = int(round(20*np.sin(angle)))\ntmp_to_min_area = np.array([[input_to_min_area[0, 0]+pad_x, input_to_min_area[0, 1]+pad_y], [input_to_min_area[1, 0]-pad_x, input_to_min_area[1, 1]-pad_y]])\nrect = cv2.minAreaRect(tmp_to_min_area)\n\nbox = cv2.boxPoints(rect)\n\nangle = np.deg2rad(rect[2]) # Angle of minAreaRect\n\n# Padding in each direction is perpendicular to the angle of minAreaRect\npad_x = 20*np.sin(angle)\npad_y = 20*np.cos(angle)\n\nbox[[0, 3], 0] -= pad_x\nbox[[1, 2], 0] += pad_x\nbox[[0, 3], 1] += pad_y\nbox[[1, 2], 1] -= pad_y\n\nbox = np.int0(box)\n\ncv2.drawContours(img, [box], 0, (0, 255, 255), 2)\n\ncv2.line(img, (tmp_to_min_area[0, 0], 
tmp_to_min_area[0, 1]), (tmp_to_min_area[1, 0], tmp_to_min_area[1, 1]), (255, 0, 0), 2)\n\ncv2.imshow('img', img)\ncv2.waitKey()\ncv2.destroyAllWindows()\n\n\nOutput:\n\n" ]
[ 3, 2 ]
[]
[]
[ "geometry", "image", "image_processing", "opencv", "python" ]
stackoverflow_0074633504_geometry_image_image_processing_opencv_python.txt
Q: Print a specific key from json file

I'm trying to print a specific key from a dictionary (key:value) stored in a JSON file (below). I tried this code, but it prints everything.

reda.json:

[{"alice": 24, "bob": 27}, {"carl": 33}, {"carl": 55}, {"user": "user2"}, {"user": "user2"}, {"user": "123"}]

Python:

import json

filename = 'reda.json'
json_data = json.load(open(filename))

if type(json_data) is dict:
    json_data = [json_data]

for i in json_data:
    print(i)

A: You could turn the list of dicts into a single dict. The downside is that duplicates would be squished, so e.g. "carl" would map to just a single number. As it stands, you probably want to see all of carl's values, using something like this:

json_data = json.load(open('reda.json'))
for d in json_data:
    print(d)

k = "carl"
print(f"\nHere is {k}:")
for d in json_data:
    if k in d:
        print(k, d[k])

To see if e.g. "carl" is in the data, use this:

def contains_favorite_key(d: dict, k="carl"):
    return k in d

if any(map(contains_favorite_key, json_data)):
    print("found at least one occurrence!")

To say "bye, bye, Carl!" use del:

k = "carl"
assert k in d
print(d[k])

del d[k]

print(d[k])  # Notice that this now reports KeyError.
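The answer's first suggestion, merging the list of dicts into one dict, would look something like this sketch; later duplicates overwrite earlier ones, so only one "carl" value survives:

import json

with open('reda.json') as f:
    json_data = json.load(f)

merged = {}
for d in json_data:
    merged.update(d)   # duplicate keys: the last occurrence wins

print(merged.get('carl'))  # -> 55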
[ "list", "python" ]
stackoverflow_0074635498_list_python.txt
Q: Speeding up python computation time (solving differential equations)

Some time ago I was assigned a project to find the position relative to time of a simulated pendulum on a freely moving cart. I managed to derive some equations describing this motion, and I tried to simulate it in Python to make sure they are correct. The program can run and plot the position correctly, but it is quite slow, especially when I try to plot with higher accuracy. How can I improve this program? Any tips are greatly appreciated.

The program:

from scipy.integrate import quad
from scipy.optimize import fsolve
import numpy as np
import matplotlib.pyplot as plt

# These values can be changed
masstot = 5
mass = 2
g = 9.8
l = 9.8
wan = (g/l)**(1/2)
vuk = 0.1
oug = 1

def afad(lah):  # Find first constant
    wan = 1
    vuk = 0.1
    oug = 1
    kan = (12*(lah**4)*((3*(vuk**2)-(wan**2))))-((16*((wan**2)-(vuk**2))-(5*oug**2))*(lah**2))+(4*(oug**2))
    return (kan)

solua = fsolve(afad, 1)
intsolua = sum(solua)

def kfad(solua, wan, vuk):  # Find second constant
    res = ((wan**2)-(vuk**2)-((2*(solua**2)*((2*(vuk**2))+(wan**2)))/((5*(solua**2))+4)))**(1/2)
    return (res)

ksol = kfad(solua, wan, vuk)

def deg(t, solua, vuk, ksol):  # Find angle of pendulum relative to time
    res = 2*np.arctan(solua*np.exp(-1*vuk*t)*np.sin(ksol*t))
    return (res)

def chandeg(t, solua, vuk, ksol):  # Find velocity of pendulum relative to time
    res = (((-2*solua*vuk*np.exp(vuk*t)*np.sin(ksol*t))+(2*solua*ksol*np.exp(vuk*t)*np.cos(ksol*t)))/(np.exp(2*vuk*t)+((solua**2)*(np.sin(ksol*t)**2))))
    return (res)

xs = np.linspace(0, 60, 20)  # Value can be changed to alter plotting accuracy and length

def dinte1(deg, bond, solua, vuk, ksol):  # used to plot angle at a certain time
    res = []
    for x in (bond):
        res.append(deg(x, solua, vuk, ksol))
    return res

def dinte2(chandeg, bond, solua, vuk, ksol):  # used to plot angular velocity at a certain time
    res = []
    for x in (bond):
        res.append(chandeg(x, solua, vuk, ksol))
    return res

def dinte(a, bond, mass, l, solua, vuk, ksol, g, masstot):  # used to plot acceleration of system at a certain time
    res = []
    for x in (bond):
        res.append(a(x, mass, l, solua, vuk, ksol, g, masstot))
    return res

def a(t, mass, l, solua, vuk, ksol, g, masstot):  # define acceleration of system relative to time
    return (((mass*l*(chandeg(t, solua, vuk, ksol)**2))+(mass*g*np.cos(deg(t, solua, vuk, ksol))))*np.sin(deg(t, solua, vuk, ksol))/masstot)

def j(t):
    return sum(a(t, mass, l, intsolua, vuk, ksol, g, masstot))

def f(ub):
    return quad(lambda ub: quad(j, 0, ub)[0], 0, ub)[0]

def int2(f, bond):  # Integrates system acceleration twice to get position relative to time
    res = []
    for x in (bond):
        res.append(f(x))
    print(res)
    return res

plt.plot(xs, int2(f, xs))  # This part of the program runs quite slowly
#plt.plot(xs, dinte(a, xs, mass, l, solua, vuk, ksol, g, masstot))
#plt.plot(xs, dinte2(chandeg, xs, solua, vuk, ksol))
#plt.plot(xs, dinte1(deg, xs, solua, vuk, ksol))
plt.show()

I ran the program; it runs correctly, just very slowly. Disclaimer: I am new to Python and SciPy, so it's probably a very inefficient program.

A: You can try to calculate values only once and then reuse them.

from scipy.integrate import quad
from scipy.optimize import fsolve
import numpy as np
import matplotlib.pyplot as plt

# These values can be changed
masstot = 5
mass = 2
g = 9.8
l = 9.8
wan = (g/l)**(1/2)
vuk = 0.1
oug = 1

def afad(lah):  # Find first constant
    wan = 1
    vuk = 0.1
    oug = 1
    kan = (12*(lah**4)*((3*(vuk**2)-(wan**2))))-((16*((wan**2)-(vuk**2))-(5*oug**2))*(lah**2))+(4*(oug**2))
    return (kan)

solua = fsolve(afad, 1)[0]

def kfad(solua, wan, vuk):  # Find second constant
    res = ((wan**2)-(vuk**2)-((2*(solua**2)*((2*(vuk**2))+(wan**2)))/((5*(solua**2))+4)))**(1/2)
    return (res)

ksol = kfad(solua, wan, vuk)

res_a = {}
def a(t):  # define acceleration of system relative to time
    if t in res_a:
        return res_a[t]

    vuk_t = vuk * t
    macro1 = 2 * solua * np.exp(vuk_t)
    ksol_t = ksol * t
    sin_ksol_t = np.sin(ksol_t)
    deg = 2 * np.arctan(solua * np.exp(-1 * vuk_t) * sin_ksol_t)
    chandeg = macro1 * (-vuk * sin_ksol_t + ksol * np.cos(ksol_t)) / (np.exp(2*vuk_t) + ((solua**2) * sin_ksol_t**2))

    res = (((l * (chandeg**2)) + (g * np.cos(deg))) * mass * np.sin(deg) / masstot)

    res_a[t] = res
    return res

res_j = {}
def j(t):
    if t in res_j:
        return res_j[t]

    res = a(t)

    res_j[t] = res
    return res

def f(ub):
    return quad(lambda ub: quad(j, 0, ub)[0], 0, ub)[0]

def int2(bond):
    res = []
    for x in (bond):
        res.append(f(x))
    print(res)
    return res

xs = np.linspace(0, 60, 20)  # Value can be changed to alter plotting accuracy and length
plt.plot(xs, int2(xs))  # This part of the program runs quite slowly
plt.show()

This example looks to be 6x faster.

A: An alternative to @IvanPerehiniak's solution is to use a JIT compiler like Numba to do many low-level optimizations that the CPython interpreter does not. Indeed, numerically intensive pure-Python code running on CPython is generally very inefficient. Numpy can provide relatively good performance for large arrays, but it is very slow for small ones, and this code uses a lot of small arrays and pure-Python scalar operations. Numba is not a silver bullet, though: it just mitigates much of the overhead of Numpy and CPython. You still have to optimize the code further if you want it very fast. Hopefully, this method can be combined with the one of @IvanPerehiniak (though the memoization dictionary need not be global, which is cumbersome in many cases). Note that Numba can pre-compute global constants for you. Compilation happens during the first call, or earlier when the function has a user-defined explicit signature.

import numba as nb
from scipy.integrate import quad
from scipy.optimize import fsolve
import numpy as np
import matplotlib.pyplot as plt
import scipy

# These values can be changed
masstot = 5.0
mass = 2.0
g = 9.8
l = 9.8
wan = (g/l)**(1/2)
vuk = 0.1
oug = 1.0

@nb.njit
def afad(lah):  # Find first constant
    wan = 1.0
    vuk = 0.1
    oug = 1.0
    kan = (12*(lah**4)*((3*(vuk**2)-(wan**2))))-((16*((wan**2)-(vuk**2))-(5*oug**2))*(lah**2))+(4*(oug**2))
    return (kan)

solua = fsolve(afad, 1)

intsolua = np.sum(solua)

@nb.njit
def kfad(solua, wan, vuk):  # Find second constant
    res = ((wan**2)-(vuk**2)-((2*(solua**2)*((2*(vuk**2))+(wan**2)))/((5*(solua**2))+4)))**(1/2)
    return (res)

ksol = kfad(solua, wan, vuk)

@nb.njit
def deg(t, solua, vuk, ksol):  # Find angle of pendulum relative to time
    res = 2*np.arctan(solua*np.exp(-1*vuk*t)*np.sin(ksol*t))
    return (res)

@nb.njit
def chandeg(t, solua, vuk, ksol):  # Find velocity of pendulum relative to time
    res = (((-2*solua*vuk*np.exp(vuk*t)*np.sin(ksol*t))+(2*solua*ksol*np.exp(vuk*t)*np.cos(ksol*t)))/(np.exp(2*vuk*t)+((solua**2)*(np.sin(ksol*t)**2))))
    return (res)

xs = np.linspace(0, 60, 20)  # Value can be changed to alter plotting accuracy and length

@nb.njit
def dinte1(deg, bond, solua, vuk, ksol):  # used to plot angle at a certain time
    res = []
    for x in (bond):
        res.append(deg(x, solua, vuk, ksol))
    return res

@nb.njit
def dinte2(chandeg, bond, solua, vuk, ksol):  # used to plot angular velocity at a certain time
    res = []
    for x in (bond):
        res.append(chandeg(x, solua, vuk, ksol))
    return res

@nb.njit
def dinte(a, bond, mass, l, solua, vuk, ksol, g, masstot):  # used to plot acceleration of system at a certain time
    res = []
    for x in (bond):
        res.append(a(x, mass, l, solua, vuk, ksol, g, masstot))
    return res

@nb.njit
def a(t, mass, l, solua, vuk, ksol, g, masstot):  # define acceleration of system relative to time
    return (((mass*l*(chandeg(t, solua, vuk, ksol)**2))+(mass*g*np.cos(deg(t, solua, vuk, ksol))))*np.sin(deg(t, solua, vuk, ksol))/masstot)

# See: https://stackoverflow.com/questions/71244504/reducing-redundancy-for-calculating-large-number-of-integrals-numerically/71245570#71245570
@nb.cfunc('float64(float64)')
def j(t):
    return np.sum(a(t, mass, l, intsolua, vuk, ksol, g, masstot))
j = scipy.LowLevelCallable(j.ctypes)

# Cannot be jitted due to "quad"
def f(ub):
    return quad(lambda ub: quad(j, 0, ub)[0], 0, ub)[0]

# Cannot be jitted due to "f" not being jitted
def int2(f, bond):  # Integrates system acceleration twice to get position relative to time
    res = []
    for x in (bond):
        res.append(f(x))
    print(res)
    return res

plt.plot(xs, int2(f, xs))  # This part of the program runs quite slowly
#plt.plot(xs, dinte(a, xs, mass, l, solua, vuk, ksol, g, masstot))
#plt.plot(xs, dinte2(chandeg, xs, solua, vuk, ksol))
#plt.plot(xs, dinte1(deg, xs, solua, vuk, ksol))
plt.show()

Here are the results:

Initial solution: 35.5 s
Ivan Perehiniak's solution: 5.9 s
This solution (first run): 3.1 s
This solution (second run): 1.5 s

This solution is slower the first time the script is run because the JIT needs to compile all the functions. Subsequent calls to the functions are significantly faster. In fact, int2 takes only 0.5 seconds on my machine the second time.
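If trapezoidal accuracy is acceptable for plotting, the nested quad() calls can be avoided entirely by sampling the acceleration once on a fine grid and integrating cumulatively twice, since position is the double time-integral of acceleration. A sketch reusing the definitions above (the grid size is an assumption):

import numpy as np
from scipy.integrate import cumulative_trapezoid

t = np.linspace(0, 60, 2000)
# same quantity the original j(t) computes, sampled once per grid point
acc = np.array([np.sum(a(x, mass, l, intsolua, vuk, ksol, g, masstot)) for x in t])
vel = cumulative_trapezoid(acc, t, initial=0.0)  # first integral: velocity
pos = cumulative_trapezoid(vel, t, initial=0.0)  # second integral: position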
Speeding up python computation time (solving differential equations)
so some time ago i was assigned a project to find the position relative to time of a simulated pendulum on a free moving cart, i managed to calculate some equations to describe this motion and i tried to simulate it in python to make sure it is correct. The program i made can run and plot its position correctly, but it is quite slow especially when i try to plot it with higher accuracy. How can i improve this program, any tips is greatly appreciated. the program : from scipy.integrate import quad from scipy.optimize import fsolve import numpy as np import matplotlib.pyplot as plt # These values can be changed masstot = 5 mass = 2 g= 9.8 l = 9.8 wan = (g/l)**(1/2) vuk = 0.1 oug = 1 def afad(lah): # Find first constant wan = 1 vuk = 0.1 oug = 1 kan = (12*(lah**4)*((3*(vuk**2)-(wan**2))))-((16*((wan**2)-(vuk**2))-(5*oug**2))*(lah**2))+(4*(oug**2)) return (kan) solua = fsolve(afad, 1) intsolua = sum(solua) def kfad(solua, wan, vuk): # Find second constant res = ((wan**2)-(vuk**2)-((2*(solua**2)*((2*(vuk**2))+(wan**2)))/((5*(solua**2))+4)))**(1/2) return (res) ksol = kfad(solua, wan, vuk) def deg(t, solua, vuk, ksol): # Find angle of pendulum relative to time res = 2*np.arctan(solua*np.exp(-1*vuk*t)*np.sin(ksol*t)) return(res) def chandeg(t, solua, vuk, ksol): # Find velocity of pendulum relative to time res = (((-2*solua*vuk*np.exp(vuk*t)*np.sin(ksol*t))+(2*solua*ksol*np.exp(vuk*t)*np.cos(ksol*t)))/(np.exp(2*vuk*t)+((solua**2)*(np.sin(ksol*t)**2)))) return(res) xs = np.linspace(0, 60, 20) # Value can be changed to alter plotting accuracy and length def dinte1(deg, bond, solua, vuk, ksol): # used to plot angle at at a certain time res = [] for x in (bond): res.append(deg(x, solua, vuk, ksol)) return res def dinte2(chandeg, bond, solua, vuk, ksol): # used to plot angular velocity at a certain time res = [] for x in (bond): res.append(chandeg(x, solua, vuk, ksol)) return res def dinte(a, bond, mass, l, solua, vuk, ksol, g, masstot ): # used to plot acceleration of system at certain time res = [] for x in (bond): res.append(a(x, mass, l, solua, vuk, ksol, g, masstot)) return res def a(t, mass, l, solua, vuk, ksol, g, masstot): # define acceleration of system to time return (((mass*l*(chandeg(t, solua, vuk, ksol)**2))+(mass*g*np.cos(deg(t, solua, vuk, ksol))))*np.sin(deg(t, solua, vuk, ksol))/masstot) def j(t): return sum(a(t, mass, l, intsolua, vuk, ksol, g, masstot)) def f(ub): return quad(lambda ub: quad(j, 0, ub)[0], 0, ub)[0] def int2(f, bond): # Integrates system acceleration twice to get posistion relative to time res = [] for x in (bond): res.append(f(x)) print(res) return res plt.plot(xs, int2(f, xs)) # This part of the program runs quite slowly #plt.plot(xs, dinte(a, xs, mass, l, solua, vuk, ksol, g, masstot)) #plt.plot(xs, dinte2(chandeg, xs, solua, vuk, ksol)) #plt.plot(xs, dinte1(deg, xs, solua, vuk, ksol)) plt.show() Ran the program, it can run relatively well just very slowly. Disclaimer that i am new at using python and scipy so it's probably a very inneficient program.
[ "You can try to calculate values only once and then reuse them.\nfrom scipy.integrate import quad\nfrom scipy.optimize import fsolve\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# These values can be changed\nmasstot = 5\nmass = 2\ng = 9.8\nl = 9.8\nwan = (g/l)**(1/2)\nvuk = 0.1\noug = 1\n\ndef afad(lah): # Find first constant\n wan = 1\n vuk = 0.1\n oug = 1\n kan = (12*(lah**4)*((3*(vuk**2)-(wan**2))))-((16*((wan**2)-(vuk**2))-(5*oug**2))*(lah**2))+(4*(oug**2))\n return (kan)\n\nsolua = fsolve(afad, 1)[0]\n\ndef kfad(solua, wan, vuk): # Find second constant\n res = ((wan**2)-(vuk**2)-((2*(solua**2)*((2*(vuk**2))+(wan**2)))/((5*(solua**2))+4)))**(1/2)\n return (res)\n\nksol = kfad(solua, wan, vuk)\n\n\n\nres_a = {}\ndef a(t): # define acceleration of system to time\n if t in res_a:\n return res_a[t]\n\n vuk_t = vuk * t\n macro1 = 2 * solua * np.exp(vuk_t)\n ksol_t = ksol * t\n sin_ksol_t = np.sin(ksol_t)\n deg = 2 * np.arctan(solua * np.exp(-1 * vuk_t) * sin_ksol_t)\n chandeg = macro1 * (-vuk * sin_ksol_t + ksol * np.cos(ksol_t)) / (np.exp(2*vuk_t) + ((solua**2) * sin_ksol_t**2))\n\n res = (((l * (chandeg**2)) + (g * np.cos(deg))) * mass * np.sin(deg) / masstot)\n\n res_a[t] = res\n return res\n\nres_j = {}\ndef j(t):\n if t in res_j:\n return res_j[t]\n\n res = a(t)\n\n res_j[t] = res\n return res\n\ndef f(ub):\n return quad(lambda ub: quad(j, 0, ub)[0], 0, ub)[0]\n\ndef int2(bond):\n res = []\n for x in (bond):\n res.append(f(x))\n print(res)\n\n return res\n\nxs = np.linspace(0, 60, 20) # Value can be changed to alter plotting accuracy and length\nplt.plot(xs, int2(xs)) # This part of the program runs quite slowly\nplt.show()\n\nThis example looks to be 6x faster.\n", "An alternative solution to the one of @IvanPerehiniak, is to use a JIT compiler like Numba so to do many low-level optimization that the CPython interpreter do not. Indeed, numerically intensive pure-Python code running on CPython are generally very inefficient. Numpy can provide relatively good performance for large arrays but it is very slow for small one. The thing is you use a lot of small arrays and pure-Python scalar operations. Numba is not a silver-bullet though: it just mitigate many overhead from Numpy and CPython. You still have to optimize the code further if you want to get a very fast code. Hopefully, this method can be combined with the one of @IvanPerehiniak (though the dictionary need not to be global which is cumbersome in many cases). Note Numba can pre-compute global constants for you. 
The compilation time is done during the first call or when the function has a user-defined explicit signature.\nimport numba as nb\nfrom scipy.integrate import quad\nfrom scipy.optimize import fsolve\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport scipy\n\n# These values can be changed\nmasstot = 5.0\nmass = 2.0\ng= 9.8\nl = 9.8\nwan = (g/l)**(1/2)\nvuk = 0.1\noug = 1.0\n\[email protected]\ndef afad(lah): # Find first constant\n wan = 1.0\n vuk = 0.1\n oug = 1.0\n kan = (12*(lah**4)*((3*(vuk**2)-(wan**2))))-((16*((wan**2)-(vuk**2))-(5*oug**2))*(lah**2))+(4*(oug**2))\n return (kan)\n\nsolua = fsolve(afad, 1)\n\nintsolua = np.sum(solua) \n\[email protected]\ndef kfad(solua, wan, vuk): # Find second constant\n res = ((wan**2)-(vuk**2)-((2*(solua**2)*((2*(vuk**2))+(wan**2)))/((5*(solua**2))+4)))**(1/2)\n return (res)\n\nksol = kfad(solua, wan, vuk)\n\[email protected]\ndef deg(t, solua, vuk, ksol): # Find angle of pendulum relative to time\n res = 2*np.arctan(solua*np.exp(-1*vuk*t)*np.sin(ksol*t))\n return(res)\n\[email protected]\ndef chandeg(t, solua, vuk, ksol): # Find velocity of pendulum relative to time\n res = (((-2*solua*vuk*np.exp(vuk*t)*np.sin(ksol*t))+(2*solua*ksol*np.exp(vuk*t)*np.cos(ksol*t)))/(np.exp(2*vuk*t)+((solua**2)*(np.sin(ksol*t)**2))))\n return(res)\n\nxs = np.linspace(0, 60, 20) # Value can be changed to alter plotting accuracy and length\n\[email protected]\ndef dinte1(deg, bond, solua, vuk, ksol): # used to plot angle at at a certain time\n res = []\n for x in (bond):\n res.append(deg(x, solua, vuk, ksol))\n return res\n\[email protected]\ndef dinte2(chandeg, bond, solua, vuk, ksol): # used to plot angular velocity at a certain time\n res = []\n for x in (bond):\n res.append(chandeg(x, solua, vuk, ksol))\n return res\n\[email protected]\ndef dinte(a, bond, mass, l, solua, vuk, ksol, g, masstot ): # used to plot acceleration of system at certain time\n res = []\n for x in (bond):\n res.append(a(x, mass, l, solua, vuk, ksol, g, masstot))\n return res\n\[email protected]\ndef a(t, mass, l, solua, vuk, ksol, g, masstot): # define acceleration of system to time\n return (((mass*l*(chandeg(t, solua, vuk, ksol)**2))+(mass*g*np.cos(deg(t, solua, vuk, ksol))))*np.sin(deg(t, solua, vuk, ksol))/masstot)\n\n# See: https://stackoverflow.com/questions/71244504/reducing-redundancy-for-calculating-large-number-of-integrals-numerically/71245570#71245570\[email protected]('float64(float64)')\ndef j(t):\n return np.sum(a(t, mass, l, intsolua, vuk, ksol, g, masstot))\nj = scipy.LowLevelCallable(j.ctypes)\n\n# Cannot be jitted due to \"quad\"\ndef f(ub):\n return quad(lambda ub: quad(j, 0, ub)[0], 0, ub)[0]\n\n# Cannot be jitted due to \"f\" not being jitted\ndef int2(f, bond): # Integrates system acceleration twice to get posistion relative to time\n res = []\n for x in (bond):\n res.append(f(x))\n print(res)\n return res\n\nplt.plot(xs, int2(f, xs)) # This part of the program runs quite slowly\n#plt.plot(xs, dinte(a, xs, mass, l, solua, vuk, ksol, g, masstot))\n#plt.plot(xs, dinte2(chandeg, xs, solua, vuk, ksol))\n#plt.plot(xs, dinte1(deg, xs, solua, vuk, ksol))\nplt.show()\n\nHere are results:\nInitial solution: 35.5 s\nIvan Perehiniak's solution: 5.9 s\nThis solution (first run): 3.1 s\nThis solution (second run): 1.5 s\n\nThis solution is slower the first time the script is run because the JIT needs to compile all the functions the first time. Subsequent calls to the functions are significantly faster. 
In fact, int2 takes only 0.5 seconds on my machine the second time.\n" ]
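A note on the dictionary-based caching used in the first answer: Python's functools provides memoization out of the box, so the manual result dictionaries can be dropped. A minimal sketch with a toy stand-in for the expensive a(t) (the real body is the acceleration formula above); lru_cache only requires the argument to be hashable, which floats are:

from functools import lru_cache
import numpy as np

@lru_cache(maxsize=None)          # unbounded memo cache keyed by t
def a(t):
    # stand-in for the expensive acceleration formula above
    return np.sin(t) * np.exp(-0.1 * t)

a(1.0)   # computed on the first call
a(1.0)   # returned from the cache on repeated calls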
[ 2, 2 ]
[]
[]
[ "differential_equations", "numpy", "python", "scipy", "simulation" ]
stackoverflow_0074634028_differential_equations_numpy_python_scipy_simulation.txt
Q: Python pytube calculate download speed and elapsed time So i have a download callback function def downloadCallback(stream, chunk, file_handle, bytes_remaining): fileSize = stream.filesize bytes_downloaded = fileSize - bytes_remaining percentage = round((bytes_downloaded / fileSize) * 100, 2) print(f"{percentage}% Downloaded", end="\r") So far I have been able to get the percentage. but no luck when it comes to getting the download speed and elapsed time. This callback is being called continuously on this code yt = YouTube(link, on_progress_callback=downloadCallback) streamVideo = yt.streams.first() streamVideo.download() Feel free to mark this as duplicate because I've also seen a lot of questions regarding this problem. But most of them is just confusing. I actually want someone that would explain the formula to me in layman's term A: This method is very simple and will not give exact values, but it is quite close. You must first take the time value before starting the download. Then in the function "downloadCallback" you return to take the value of time. Subtracting this value from the value taken before starting the download, we will have the elapsed time since the start of the download. Now we can calculate download speed, simply by dividing the number of bytes downloaded by the elapsed time. If you want the value in Mega bytes, you must divide by 1024 twice. To calculate the remaining time, just divide the bytes remaining to be downloaded by the speed: from pytube import YouTube from datetime import datetime download_start_time = datetime.now() def downloadCallback(stream, chunk, bytes_remaining): global download_start_time seconds_since_download_start = (datetime.now()- download_start_time).total_seconds() total_size = stream.filesize bytes_downloaded = total_size - bytes_remaining percentage_of_completion = bytes_downloaded / total_size * 100 speed = round(((bytes_downloaded / 1024) / 1024) / seconds_since_download_start, 2) seconds_left = round(((bytes_remaining / 1024) / 1024) / float(speed), 2) print("percentage_of_completion:", round(percentage_of_completion, 2), "%") print("seconds_since_download_start:", round(seconds_since_download_start, 2), "seconds") print("speed:", round(speed, 2), "Mbps") print("seconds_left:", round(seconds_left, 2), "seconds") print() def main(): global download_start_time chunk_size = 1024 url = "https://youtu.be/BBnomwpF_uY" yt = YouTube(url) video = yt.streams.get_highest_resolution() yt.register_on_progress_callback(downloadCallback) print(f"Fetching \"{video.title}\"..") print(f"Fetching successful\n") print(f"Information: \n" f"File size: {round(video.filesize * 0.000001, 2)} MegaBytes\n" f"Highest Resolution: {video.resolution}\n" f"Author: {yt.author}") print("Views: {:,}\n".format(yt.views)) print(f"Downloading \"{video.title}\"..") download_start_time = datetime.now() video.download() main()
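Two small notes on the callback approach above: dividing bytes by 1024 twice gives megabytes per second (MiB/s), not Mbps (megabits) as the printout labels it, and time.monotonic() is safer than datetime.now() for measuring elapsed time. A minimal sketch of the same idea, assuming pytube's (stream, chunk, bytes_remaining) callback signature used above:

import time

download_start = time.monotonic()   # set just before calling .download()

def progress(stream, chunk, bytes_remaining):
    elapsed = time.monotonic() - download_start        # seconds elapsed
    done = stream.filesize - bytes_remaining           # bytes received
    speed = done / elapsed if elapsed > 0 else 0.0     # bytes per second
    eta = bytes_remaining / speed if speed > 0 else float("inf")
    print(f"{done / stream.filesize:6.1%}  "
          f"{speed / 1024 / 1024:5.2f} MiB/s  ETA {eta:5.1f}s", end="\r")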
Python pytube calculate download speed and elapsed time
So I have a download callback function def downloadCallback(stream, chunk, file_handle, bytes_remaining): fileSize = stream.filesize bytes_downloaded = fileSize - bytes_remaining percentage = round((bytes_downloaded / fileSize) * 100, 2) print(f"{percentage}% Downloaded", end="\r") So far I have been able to get the percentage, but no luck when it comes to getting the download speed and elapsed time. This callback is being called continuously by this code: yt = YouTube(link, on_progress_callback=downloadCallback) streamVideo = yt.streams.first() streamVideo.download() Feel free to mark this as a duplicate because I've also seen a lot of questions regarding this problem, but most of them are just confusing. I actually want someone to explain the formula to me in layman's terms.
[ "This method is very simple and will not give exact values, but it is quite close.\nYou must first take the time value before starting the download.\nThen in the function \"downloadCallback\" you return to take the value of time. Subtracting this value from the value taken before starting the download, we will have the elapsed time since the start of the download.\nNow we can calculate download speed, simply by dividing the number of bytes downloaded by the elapsed time. If you want the value in Mega bytes, you must divide by 1024 twice.\nTo calculate the remaining time, just divide the bytes remaining to be downloaded by the speed:\n from pytube import YouTube\n from datetime import datetime\n\n download_start_time = datetime.now()\n\n def downloadCallback(stream, chunk, bytes_remaining):\n global download_start_time\n seconds_since_download_start = (datetime.now()- download_start_time).total_seconds() \n total_size = stream.filesize\n bytes_downloaded = total_size - bytes_remaining\n percentage_of_completion = bytes_downloaded / total_size * 100\n speed = round(((bytes_downloaded / 1024) / 1024) / seconds_since_download_start, 2) \n seconds_left = round(((bytes_remaining / 1024) / 1024) / float(speed), 2)\n print(\"percentage_of_completion:\", round(percentage_of_completion, 2), \"%\")\n print(\"seconds_since_download_start:\", round(seconds_since_download_start, 2), \"seconds\")\n print(\"speed:\", round(speed, 2), \"Mbps\")\n print(\"seconds_left:\", round(seconds_left, 2), \"seconds\")\n print()\n\n\n def main():\n global download_start_time\n chunk_size = 1024\n url = \"https://youtu.be/BBnomwpF_uY\"\n yt = YouTube(url)\n video = yt.streams.get_highest_resolution()\n yt.register_on_progress_callback(downloadCallback)\n print(f\"Fetching \\\"{video.title}\\\"..\")\n print(f\"Fetching successful\\n\")\n print(f\"Information: \\n\"\n f\"File size: {round(video.filesize * 0.000001, 2)} MegaBytes\\n\"\n f\"Highest Resolution: {video.resolution}\\n\"\n f\"Author: {yt.author}\")\n print(\"Views: {:,}\\n\".format(yt.views))\n\n print(f\"Downloading \\\"{video.title}\\\"..\")\n\n download_start_time = datetime.now()\n video.download()\n\n main()\n\n" ]
[ 0 ]
[]
[]
[ "python", "pytube" ]
stackoverflow_0058256277_python_pytube.txt
Q: Python behave fixture on feature level not loaded This is somewhat related to this question, but I have some further problems in this minimal example below. For a feature test I prepared a fixture which backs up a file which shall be modified during the test run (e.g. a line is appended). After the test run this fixture restores the original file. Project Files: └───features │ environment.py │ modify_file.feature │ └───steps file_ops.py #!/usr/bin/env python # FILE: features/environment.py import logging from behave import fixture from behave.runner import Context logger = logging.getLogger(__name__) @fixture def backup_file(context: Context): """ A file will be modified during the feature test. This fixture shall backup the file before the feature test and restore the backup after the test. """ file = Path.home() / "important.txt" backup_suffix = ".backup" file.touch() file.replace(file.with_suffix(backup_suffix)) logger.info("File backed up") yield file.with_suffix(backup_suffix).replace(file) logger.info("File restored") # FILE: features/modify_file.feature @fixture.backup.file Feature: Modify file @wip Scenario: Append a line to a file Given the file exists When I append a line to the file Then the line appears at the end of the file #!/usr/bin/env python # File features/steps/file_ops.py from pathlib import Path from behave import given from behave import when from behave import then from behave.runner import Context import logging logger = logging.getLogger(__name__) file = Path.home() / "important.txt" @given("the file exists") def step_impl(context: Context): logger.info(f"Touching file") file.touch() @when("I append a line to the file") def step_impl(context: Context): logger.info(f"Appending a line to file") context.appended = "Test line appended\n" with open(file, mode="a") as f: f.write(context.appended) @then("the line appears at the end of the file") def step_impl(context: Context): logger.info(f"Checking if line was appended") with open(file, mode="r") as f: for line in f: pass logger.info(f"Last line is '{line.strip()}'") assert line == context.appended I want to apply the fixture at the feature level before all scenarios are run. The file shall be restored after all scenarios have run. However, this is apparently not the case. When I run behave -w (no log capture, wip tags only), I don't see any log lines from the fixture being output and also with every run I see another line appended to the file. This means the file is not being backed up and restored. The fixture is not applied even if in the I move the fixture down to the Scenario level modify_file.feature file. Can you help me understand what is going on here? I'm also curious why the fixture tag is used with dot notation (@fixture.backup.file) rather than (fixture.backup_file) as this would be similar to the actual function name. There is no explanation of this in the behave documentation. A: I also had trouble setting up a fixture because the docs aren't super clear that you have to explicitly enable them. It isn't enough to have the @fixture decoration in features/environment.py. You also have to call use_fixture(). 
For example, inside before_tag(), like this: def before_tag(context, tag): if tag == "fixture.backup.file": use_fixture(backup_file, context) This example helped me figure that out: https://github.com/behave/behave/blob/main/features/fixture.feature#L16 A: Here is how you can do it without @fixture, just by using the before_tag hook: #!/usr/bin/env python # FILE: features/environment.py import logging from behave.runner import Context from behave.model import Tag from pathlib import Path logger = logging.getLogger(__name__) def rename_file(origin: Path, target: Path) -> tuple[Path, Path]: """ Will rename the `origin` to `target`, replaces existing `target` """ origin.touch() backup = origin.replace(target) logger.info(f"File {str(origin)} renamed to {str(target)}") return origin, backup def before_tag(context: Context, tag: Tag): if tag == "backup.file.important.txt": file = Path.home() / "important.txt" backup = file.with_suffix(".backup") context.file, context.backup_file = rename_file(file, backup) context.add_cleanup(rename_file, context.backup_file, context.file) #!/usr/bin/env python # File features/steps/file_ops.py from pathlib import Path from behave import given from behave import when from behave import then from behave.runner import Context import logging logger = logging.getLogger(__name__) @given("the file exists") def step_impl(context: Context): logger.info(f"Touching file") context.file.touch() @when("I append a line to the file") def step_impl(context: Context): logger.info(f"Appending a line to file") context.appended = "Test line appended\n" with open(context.file, mode="a") as f: f.write(context.appended) @then("the line appears at the end of the file") def step_impl(context: Context): logger.info(f"Checking if line was appended") with open(context.file, mode="r") as f: for n, line in enumerate(f, start=1): logger.info(f"Line {n} is {line}") logger.info(f"Last line is '{line.strip()}'") assert line == context.appended But using a fixture and the fixture registry as described in the Realistic example in behave's documentation is superior, as you will likely end up having many more setup-cleanup pairs later on.
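Extending the before_tag answer above: when more than one tagged fixture exists, behave's documentation recommends a small fixture registry so before_tag stays a single lookup. A minimal sketch (the commented-out second entry is a hypothetical placeholder):

from behave import use_fixture

fixture_registry = {
    "fixture.backup.file": backup_file,            # the fixture defined above
    # "fixture.browser.firefox": browser_firefox,  # hypothetical extra fixture
}

def before_tag(context, tag):
    fixture_func = fixture_registry.get(tag)
    if fixture_func is not None:
        use_fixture(fixture_func, context)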
Python behave fixture on feature level not loaded
This is somewhat related to this question, but I have some further problems in this minimal example below. For a feature test I prepared a fixture which backs up a file which shall be modified during the test run (e.g. a line is appended). After the test run this fixture restores the original file. Project Files: └───features │ environment.py │ modify_file.feature │ └───steps file_ops.py #!/usr/bin/env python # FILE: features/environment.py import logging from behave import fixture from behave.runner import Context logger = logging.getLogger(__name__) @fixture def backup_file(context: Context): """ A file will be modified during the feature test. This fixture shall backup the file before the feature test and restore the backup after the test. """ file = Path.home() / "important.txt" backup_suffix = ".backup" file.touch() file.replace(file.with_suffix(backup_suffix)) logger.info("File backed up") yield file.with_suffix(backup_suffix).replace(file) logger.info("File restored") # FILE: features/modify_file.feature @fixture.backup.file Feature: Modify file @wip Scenario: Append a line to a file Given the file exists When I append a line to the file Then the line appears at the end of the file #!/usr/bin/env python # File features/steps/file_ops.py from pathlib import Path from behave import given from behave import when from behave import then from behave.runner import Context import logging logger = logging.getLogger(__name__) file = Path.home() / "important.txt" @given("the file exists") def step_impl(context: Context): logger.info(f"Touching file") file.touch() @when("I append a line to the file") def step_impl(context: Context): logger.info(f"Appending a line to file") context.appended = "Test line appended\n" with open(file, mode="a") as f: f.write(context.appended) @then("the line appears at the end of the file") def step_impl(context: Context): logger.info(f"Checking if line was appended") with open(file, mode="r") as f: for line in f: pass logger.info(f"Last line is '{line.strip()}'") assert line == context.appended I want to apply the fixture at the feature level before all scenarios are run. The file shall be restored after all scenarios have run. However, this is apparently not the case. When I run behave -w (no log capture, wip tags only), I don't see any log lines from the fixture being output and also with every run I see another line appended to the file. This means the file is not being backed up and restored. The fixture is not applied even if in the I move the fixture down to the Scenario level modify_file.feature file. Can you help me understand what is going on here? I'm also curious why the fixture tag is used with dot notation (@fixture.backup.file) rather than (fixture.backup_file) as this would be similar to the actual function name. There is no explanation of this in the behave documentation.
[ "I also had trouble setting up a fixture because the docs aren't super clear that you have to explicitly enable them. It isn't enough to have the @fixture decoration in features/environment.py. You also have to call use_fixture(). For example, inside before_tag(), like this:\ndef before_tag(context, tag):\n if tag == \"fixture.backup.file\":\n use_fixture(backup_file, context)\n\nThis example helped me figure that out:\nhttps://github.com/behave/behave/blob/main/features/fixture.feature#L16\n", "Here is how you can do it without @fixture, just by using the before_tag hook:\n#!/usr/bin/env python\n# FILE: features/environment.py\n\nimport logging\nfrom behave.runner import Context\nfrom behave.model import Tag\nfrom pathlib import Path\n\nlogger = logging.getLogger(__name__)\n\n\ndef rename_file(origin: Path, target: Path) -> tuple[Path, Path]:\n \"\"\"\n Will rename the `origin` to `target`, replaces existing `target`\n \"\"\"\n origin.touch()\n backup = origin.replace(target)\n logger.info(f\"File {str(origin)} renamed to {str(target)}\")\n return origin, backup\n\n\ndef before_tag(context: Context, tag: Tag):\n if tag == \"backup.file.important.txt\":\n file = Path.home() / \"important.txt\"\n backup = file.with_suffix(\".backup\")\n context.file, context.backup_file = rename_file(file, backup)\n context.add_cleanup(rename_file, context.backup_file, context.file)\n\n#!/usr/bin/env python\n# File features/steps/file_ops.py\n\nfrom pathlib import Path\nfrom behave import given\nfrom behave import when\nfrom behave import then\nfrom behave.runner import Context\nimport logging\n\nlogger = logging.getLogger(__name__)\n\n\n@given(\"the file exists\")\ndef step_impl(context: Context):\n logger.info(f\"Touching file\")\n context.file.touch()\n\n\n@when(\"I append a line to the file\")\ndef step_impl(context: Context):\n logger.info(f\"Appending a line to file\")\n context.appended = \"Test line appended\\n\"\n with open(context.file, mode=\"a\") as f:\n f.write(context.appended)\n\n\n@then(\"the line appears at the end of the file\")\ndef step_impl(context: Context):\n logger.info(f\"Checking if line was appended\")\n with open(context.file, mode=\"r\") as f:\n for n, line in enumerate(f, start=1):\n logger.info(f\"Line {n} is {line}\")\n logger.info(f\"Last line is '{line.strip()}'\")\n assert line == context.appended\n\nBut using a fixture and the fixture registry as described in the Realistic example in behave's documentation is superior, as you will likely end up having many more setup-cleanup pairs later on.\n" ]
[ 1, 0 ]
[]
[]
[ "python", "python_behave" ]
stackoverflow_0071777018_python_python_behave.txt
Q: Multiple waitKey calls not working well with cv2 I discovered that more than one waitKey call in an opencv program makes it lag out, and all the calls do not get registered properly. You sometimes have to hold some keys for over 4 seconds in order for their code to execute. Said faulty calls work like this: if cv2.waitKey(1) == 100: show_crop = not show_crop if cv2.waitKey(1) == 99: show_cv = not show_cv if cv2.waitKey(1) == 116: show_curr_track = not show_curr_track The program detects none of the calls at the desired button press; instead you need to hold the said button for multiple seconds before its code executes. How can I fix this problem? A: I ran into this issue in my program, and decided to answer this question Q&A style! I came up with a very simple workaround. First, use a single waitKey call to get the key required as so- inp = waitKey(1) Now, create a dictionary with its keys as the ordinals of the buttons you're pressing, and values as the code you want to execute (use ; for multiline code, or break the code into a separate function)- d_exec = { 27: "cap.release();cv2.destroyAllWindows();break;", 100:"show_crop = not show_crop", 99:"show_cv = not show_cv", 116:"show_curr_track = not show_curr_track", 115:"save()" } Here, 27 is Esc, 99 is c, 116 is t, 115 is s on the keyboard for my system and so on. You can also use the ord function if you don't know the actual integer values for keys. Finally, you can use just the single waitKey function in conjunction with your dictionary, and the inbuilt exec function as follows- inp = cv2.waitKey(1) if inp in d_exec: exec(d_exec[inp]) inp = None Here, the exec function takes in a string as an input, and runs it as python code. NOTE: Use ; for multiline code (as shown in dictionary line 1), or break it into single line function calls. Make sure to set inp back to None or some other out of dictionary value, so that the same code doesn't accidentally execute on the next iteration of the main loop. You can also use the more traditional if else series beneath inp = waitKey(1), but the dictionary method looks cleaner to me :)
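A variation on the dictionary dispatch above: mapping key codes to plain callables instead of source strings avoids exec entirely, which is easier to debug. A minimal sketch (the state dictionary and the surrounding window loop are illustrative):

import cv2

state = {"show_crop": True, "show_cv": True, "show_curr_track": True}

handlers = {  # key code -> callable; no exec needed
    ord('d'): lambda: state.update(show_crop=not state["show_crop"]),
    ord('c'): lambda: state.update(show_cv=not state["show_cv"]),
    ord('t'): lambda: state.update(show_curr_track=not state["show_curr_track"]),
}

key = cv2.waitKey(1) & 0xFF   # single waitKey per loop; mask to 8 bits
if key in handlers:
    handlers[key]()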
Multiple waitKey calls not working well with cv2
I discovered that more than one waitKey calls in an opencv program make it lag out, and all the calls do not get registered properly. You sometimes have to hold some keys for over 4 seconds in order for their code to execute. Said faulty calls work like this: if cv2.waitKey(1) == 100: show_crop = not show_crop if cv2.waitKey(1) == 99: show_cv = not show_cv if cv2.waitKey(1) == 116: show_curr_track = not show_curr_track The program detects none of the calls at the desired button press, instead you need to hold the said button for multiple seconds, before its code executes. How can I fix this problem?
[ "I ran into this issue in my program, and decided to answer this question Q&A style!\nI came up with a very simple workaround.\nFirst, use a single waitKey call to get the key required as so-\ninp = waitKey(1)\nNow, create a dictionary with its keys as the ordinals of the buttons you're pressing, and values as the code you want to execute (use ; for multiline code, or break the code into a separate function)-\nd_exec = {\n27: \"cap.release();cv2.destroyAllWindows();break;\",\n100:\"show_crop = not show_crop\", \n99:\"show_cv = not show_cv\", \n116:\"show_curr_track = not show_curr_track\", \n115:\"save()\"\n}\n\nHere, 27 is Esc,\n99 is c,\n116 is t,\n115 is s on the keyboard for my system and so on. You can also use the ord function if you don't know the actual integer values for keys.\nFinally, you can use just the single waitKey function in conjugation with your dictionary, and the inbuilt exec function as follows-\ninp = cv2.waitKey(1)\nif inp in d_exec:\n exec(d_exec[inp])\ninp = None\n\nHere, the exec function takes in a string as an input, and runs it as python code.\nNOTE: Use ; for multiline code (as shown in dictionary line 1), or break it into single line function calls.\nMake sure to set inp back to None or some other out of dictionary value, so that the same code doesn't accidentally execute on the next iteration of the main loop.\nYou can also use the more traditional if else series beneath inp = waitKey(1), but the dictionary method looks more cleaner to me :)\n" ]
[ 0 ]
[]
[]
[ "opencv", "python" ]
stackoverflow_0074635429_opencv_python.txt
Q: How do I fill in the rest of an image with a certain color after rotating an ROI with scikit-image? I have the following image: I am attempting to use handwritten OCR to capture this number, and for some images, I need to manually rotate the image. The code I am using to rotate this image is the following: When I execute this code, the following image is the result: I want the white background surrounding the 1 to completely encase the 1. I do not want the upper left and upper right of the image to have the black areas (triangles). I have tried to adjust the resizing, but this does not seem to help. Does anyone have an idea as to what I can do to prevent this? A: As already pointed out by fmw42 in the comments, the API you are using has optional arguments to deal with this. Please peruse the docs for the API you use: skimage.transform.rotate The docs offer a mode, which says how to fill in those pixels that don't come from the source image. The modes replicate/edge and constant should be of interest to you. edge Pads with the edge values of array. constant (default) Pads with a constant value, in conjunction with the cval argument (default 0, i.e. black).
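To make the mode/cval advice above concrete, the white fill is one argument away. A minimal sketch assuming a float image scaled to [0, 1] (for a uint8 image, pass preserve_range=True and cval=255 instead):

import numpy as np
from skimage.transform import rotate

image = np.zeros((64, 64))                      # stand-in for the digit ROI
rotated = rotate(image, angle=15, resize=True,  # angle value is illustrative
                 mode='constant', cval=1.0)     # fill exposed corners white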
How do I fill in the rest of an image with a certain color after rotating an ROI with scikit-image?
I have the following image: I am attempting to use handwritten OCR to capture this number, and for some images, I need to manually rotate the image. The code I am using to rotate this image is the following: When I execute this code, the following image is the result: I want the white background surrounding the 1 to completely encase the 1. I do not want the upper left and upper right of the image to have the black areas (triangles). I have tried to adjust the resizing, but this does not seem to help. Does anyone have an idea as to what I can do to prevent this?
[ "As already pointed out by fmw42 in the comments, the API you are using has optional arguments to deal with this.\nPlease peruse the docs for the API you use: skimage.transform.rotate\nThe docs offer a mode, which says how to fill in those pixels that don't come from the source image.\nThe modes replicate/edge and constant should be of interest to you.\n\nedge Pads with the edge values of array.\nconstant (default) Pads with a constant value, in conjunction with the cval argument (default 0, i.e. black).\n\n" ]
[ 1 ]
[]
[]
[ "image_processing", "opencv", "python", "scikit_image" ]
stackoverflow_0074617780_image_processing_opencv_python_scikit_image.txt
Q: 'RecursionError' in a for loop I have tried to implement a flatten function to even flatten strings but got an error for Recursion. Could someone help resolve this puzzle? def flatten(items): for x in items: if isinstance(x, Iterable): yield from flatten(x) else: yield x items = [2, [3, 4, [5, 6], 7], 8, 'abc'] for x in flatten(items): print(x) I was expecting to print '2, 3, 4, 5, 6, 7, 8, a, b, c'; but instead, I got '2, 3, 4, 5, 6, 7, 8 and a RecursionError. I think the 'abc' is also 'Iterable', so why the code doesn't work? Thank you! A: The problem as jasonharper pointed is that 'a' is an iterable element which contains 'a' and so on. You can however, rewrite the code with another if before the yield from flatten(x) something like from collections.abc import Iterable def flatten(items): for x in items: if isinstance(x, Iterable): if len(x)==1: yield next(iter(x)) else: yield from flatten(x) else: yield x items = [2, [3, 4, [5, 6], 7], 8, 'abc'] for x in flatten(items): print(x) A: This happens because you're exceeding the limit of the call stack I won't get into the nitty-gritty here but, you can read this article: https://towardsdatascience.com/python-stack-frames-and-tail-call-optimization-4d0ea55b0542 Recursion is a tricky problem to get right, it's often best to avoid it when possible in my own personal opinion. If you refactor your code to use a minimal amount of recursion and use the built-in iter() function on string values it works without exiting the call stack like so. from collections.abc import Iterable def flatten(items): for x in items: if isinstance(x, str): yield from iter(x) elif isinstance(x, Iterable): yield from flatten(x) else: yield x items = [2, [3, 4, [5, 6], 7], 8, 'abc'] for x in flatten(items): print(x)
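Worth spelling out why the recursion blows up: a one-character string such as 'a' is itself an Iterable whose single element is again 'a', so flatten recurses on it forever. When strings should stay whole rather than be split into characters (the more common requirement), the canonical guard treats str and bytes as atoms. A minimal sketch:

from collections.abc import Iterable

def flatten(items):
    for x in items:
        if isinstance(x, Iterable) and not isinstance(x, (str, bytes)):
            yield from flatten(x)   # recurse into real containers only
        else:
            yield x                 # strings, bytes and scalars pass through

print(list(flatten([2, [3, 4, [5, 6], 7], 8, 'abc'])))
# [2, 3, 4, 5, 6, 7, 8, 'abc']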
'RecursionError' in a for loop
I have tried to implement a flatten function to even flatten strings, but got a RecursionError. Could someone help resolve this puzzle? def flatten(items): for x in items: if isinstance(x, Iterable): yield from flatten(x) else: yield x items = [2, [3, 4, [5, 6], 7], 8, 'abc'] for x in flatten(items): print(x) I was expecting it to print '2, 3, 4, 5, 6, 7, 8, a, b, c'; but instead, I got '2, 3, 4, 5, 6, 7, 8' and a RecursionError. I think 'abc' is also an Iterable, so why doesn't the code work? Thank you!
[ "The problem as jasonharper pointed is that 'a' is an iterable element which contains 'a' and so on. You can however, rewrite the code with another if before the yield from flatten(x) something like\nfrom collections.abc import Iterable\ndef flatten(items):\n for x in items:\n if isinstance(x, Iterable):\n if len(x)==1:\n yield next(iter(x))\n else: \n yield from flatten(x)\n else:\n yield x\n\nitems = [2, [3, 4, [5, 6], 7], 8, 'abc']\n\nfor x in flatten(items):\n print(x)\n\n", "This happens because you're exceeding the limit of the call stack I won't get into the nitty-gritty here but, you can read this article: https://towardsdatascience.com/python-stack-frames-and-tail-call-optimization-4d0ea55b0542\nRecursion is a tricky problem to get right, it's often best to avoid it when possible in my own personal opinion. If you refactor your code to use a minimal amount of recursion and use the built-in iter() function on string values it works without exiting the call stack like so.\n from collections.abc import Iterable\n\ndef flatten(items):\n for x in items:\n if isinstance(x, str):\n yield from iter(x)\n elif isinstance(x, Iterable):\n yield from flatten(x)\n else: \n yield x\n\nitems = [2, [3, 4, [5, 6], 7], 8, 'abc']\n\nfor x in flatten(items):\n print(x)\n\n" ]
[ 2, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074633320_python.txt
Q: Replace the whole string in dataframe if it matches the pattern I want to replace the value in df if this value contains a partial string. My solution: stimuli_dict = {'121': 'mp4', '212': 'mp3'} stimuli_dict = {r"^{}".format(k): v for k, v in stimuli_dict.items()} df['stimulus'] = df['stimulus'].replace(stimuli_dict, regex=True) But it replaces only the partial string in a column. I want to replace the entire string with a dictionary value A: If you can add .* to your keys, it would solve your problem. It is about regular expressions. You need to tell it that we are looking for something that starts with 121 and that the rest is not important. stimuli_dict = {'121.*': 'mp4', '212.*': 'mp3'} "." means any character, "*" means the previous character repeated between zero and unlimited times. For further explanation, you can check regular expressions.
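A self-contained sketch of the suggested fix, keeping the original ^ anchor together with the answer's .* so only values starting with the key are replaced (column values are made up):

import pandas as pd

df = pd.DataFrame({'stimulus': ['121_trial_a', '212_trial_b', 'other']})
stimuli_dict = {r'^121.*': 'mp4', r'^212.*': 'mp3'}   # anchored full-match patterns
df['stimulus'] = df['stimulus'].replace(stimuli_dict, regex=True)
print(df['stimulus'].tolist())   # ['mp4', 'mp3', 'other']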
Replace the whole string in dataframe if it matches the pattern
I want to replace the value in df if this value contains a partial string. My solution: stimuli_dict = {'121': 'mp4', '212': 'mp3'} stimuli_dict = {r"^{}".format(k): v for k, v in stimuli_dict.items()} df['stimulus'] = df['stimulus'].replace(stimuli_dict, regex=True) But it replaces only the partial string in a column. I want to replace the entire string with a dictionary value
[ "if you can add .* into your keys it would solve your problem. It is about regular expression. You need to tell that we are looking for something starts with 121 and rest is not important.\nstimuli_dict = {'121.*': 'mp4', '212.*': 'mp3'}\n\n\".\" means any character, \"*\" means previous character between zero and unlimited times. for further explanations, you can check regular expressions.\n" ]
[ 2 ]
[]
[]
[ "dataframe", "python" ]
stackoverflow_0074635367_dataframe_python.txt
Q: MQTT payload parsing in Python I am using Paho library to receive MQTT data. I saved the data in a file. The data in the file reads: EP]�gr:G�2D��?G��D0uG�:G`�D�龹�:G�9D����R��A[[B���A�@ZBʟ�A��ZB"j�AʆYBIC�B�A��A���BM���ffNk>>>] In binary format, it converts to: b'<<<[\x16\x00\x00\x00\x00\xb8PG\x00\x90\xdeE-&4\x90\x99\x03\x00\x00\x00\x0fQG\x000\xf0E\x9d\x89\x98\xe7\xbf\x03\x00\x00\x00tQG\x00\x90\xe6E\xbd`Kq\xbc\x03\x00\x00\x00BQG\x00\xd8\xf5E\xdb\x11\x82\xcb\xb9\x03\x00\x00\x00\x00\x00B\x00\x00\x80?\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00@B\x00\x00\x88A:f\x00\x00\x00\x00\x00\x00\xa1J\xb1A\xbc\xee+B:\xc6\xe9Ad\xc9\xe3A*+\xd2A\xe7|\x07B\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00dBff.A\x00\x00\xf4A\xcb\xa1\x8bA\xcf\xf7\x9cBMUUwe\xf5\x17\x00\x00\xd8\xc0>>>]' I am unable to understand the format of this data. Is there anyway to convert this to ASCII or utf-8 format or any other readable format? Am I doing something wrong in receiving the data? My receive callback function is as follows: def on_message(client, userdata, message): f = open("test.txt","wb") f.write((message.payload)) f.close() print("message received " ,message.payload) print("message topic=",message.topic) print("message qos=",message.qos) print("message retain flag=",message.retain) A: message.payload.decode() Is what your looking for
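Given the \x.. escapes and the <<<[ ... >>>] framing in the dump above, the payload looks like packed binary records, so .decode() will mostly produce mojibake. If the publisher packs little-endian integers/floats — an assumption; the real layout has to come from the sender — the struct module is the right tool. A minimal sketch with a fabricated frame:

import struct

payload = b'<<<[' + struct.pack('<If', 22, 1.5) + b'>>>]'  # fabricated frame
body = payload[4:-4]                    # strip the <<<[ ... >>>] framing
count, value = struct.unpack('<If', body)
print(count, value)                     # 22 1.5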
MQTT payload parsing in Python
I am using Paho library to receive MQTT data. I saved the data in a file. The data in the file reads: EP]�gr:G�2D��?G��D0uG�:G`�D�龹�:G�9D����R��A[[B���A�@ZBʟ�A��ZB"j�AʆYBIC�B�A��A���BM���ffNk>>>] In binary format, it converts to: b'<<<[\x16\x00\x00\x00\x00\xb8PG\x00\x90\xdeE-&4\x90\x99\x03\x00\x00\x00\x0fQG\x000\xf0E\x9d\x89\x98\xe7\xbf\x03\x00\x00\x00tQG\x00\x90\xe6E\xbd`Kq\xbc\x03\x00\x00\x00BQG\x00\xd8\xf5E\xdb\x11\x82\xcb\xb9\x03\x00\x00\x00\x00\x00B\x00\x00\x80?\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00@B\x00\x00\x88A:f\x00\x00\x00\x00\x00\x00\xa1J\xb1A\xbc\xee+B:\xc6\xe9Ad\xc9\xe3A*+\xd2A\xe7|\x07B\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00dBff.A\x00\x00\xf4A\xcb\xa1\x8bA\xcf\xf7\x9cBMUUwe\xf5\x17\x00\x00\xd8\xc0>>>]' I am unable to understand the format of this data. Is there anyway to convert this to ASCII or utf-8 format or any other readable format? Am I doing something wrong in receiving the data? My receive callback function is as follows: def on_message(client, userdata, message): f = open("test.txt","wb") f.write((message.payload)) f.close() print("message received " ,message.payload) print("message topic=",message.topic) print("message qos=",message.qos) print("message retain flag=",message.retain)
[ "message.payload.decode() Is what your looking for\n" ]
[ 0 ]
[]
[]
[ "binary", "mqtt", "paho", "parsing", "python" ]
stackoverflow_0066345180_binary_mqtt_paho_parsing_python.txt
Q: python calling empty list invalid syntax I am trying to create an empty list and for some reason it is telling me it's invalid syntax? it also flags the next line with the same error, saying that while count<amount: is invalid. am i wrong for thinking this doesnt make sense? using vsc. thanks in advance. my code looks like this. list=[] count=0 while count < amount : s=int(input"enter a number:") list.append(s) count= count+1 i tried to use list={}, list=() even though i know those are wrong. it also flags lines like list4=[1,3] ?? A: amount is not defined. Define it with a number like 5 and try then. You also need to make sure the variable list is called something else, it is a python-reserved word. Lastly, make sure the input function has parenthesis () around it. e.x. input("enter number: ") A: In python the indent is 4 spaces. You need to change the variable name "list" because it is a built in name in python. You need to put brackets after input. amount = 5 numberList = [] count = 0 while count < amount: s = int(input("enter a number:")) numberList.append(s) count += 1 A: I think the problem there is that you're using input() wrong, it should be: int(input("Enter a number: ")) Also, I am assuming that you defined amount earlier in your code, otherwise you will need to in order for your code to work :D I also saw a comment saying that list is a python reserved function: It is, however you can use it as a variable name and it will not return an error :) Have a good day :D
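On the "list is a built-in name" point raised in the answers above: assigning to list raises no error, it silently shadows the builtin for the rest of the scope, which is why renaming the variable matters. A short sketch:

list = []               # legal, but shadows the builtin type
# list((1, 2, 3))       # would now raise: TypeError: 'list' object is not callable
del list                # remove the shadowing name...
print(list((1, 2, 3)))  # ...and the builtin works again: [1, 2, 3]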
python calling empty list invalid syntax
I am trying to create an empty list and for some reason it is telling me it's invalid syntax. It also flags the next line with the same error, saying that while count<amount: is invalid. Am I wrong for thinking this doesn't make sense? I am using VS Code. Thanks in advance. My code looks like this: list=[] count=0 while count < amount : s=int(input"enter a number:") list.append(s) count= count+1 I tried to use list={}, list=() even though I know those are wrong. It also flags lines like list4=[1,3].
[ "amount is not defined. Define it with a number like 5 and try then.\nYou also need to make sure the variable list is called something else, it is a python-reserved word.\nLastly, make sure the input function has parenthesis () around it. e.x. input(\"enter number: \")\n", "In python the indent is 4 spaces.\nYou need to change the variable name \"list\" because it is a built in name in python.\nYou need to put brackets after input.\namount = 5\n\nnumberList = []\ncount = 0\nwhile count < amount:\n s = int(input(\"enter a number:\"))\n numberList.append(s)\n count += 1\n\n", "I think the problem there is that you're using input() wrong, it should be:\nint(input(\"Enter a number: \"))\n\nAlso, I am assuming that you defined amount earlier in your code, otherwise you will need to in order for your code to work :D\nI also saw a comment saying that list is a python reserved function: It is, however you can use it as a variable name and it will not return an error :)\nHave a good day :D\n" ]
[ 2, 2, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074635614_python.txt
Q: create a program that computes the average of a collection of values entered by the user 2. In this exercise you will create a program that computes the average of a collection of values entered by the user. The user will enter 0 as a sentinel value to indicate that no further values will be provided. Your program should display an appropriate error message if the first value entered by the user is 0. Hint: Because the 0 marks the end of the input, it should not be included in the average. In Python, is this correct? while True: x= int(input('enter a first number')) if (x==0): print(' zero should not be included in the average') y=int(input('enter a second number')) if (y==0): break i = (x+y)/2 print(i) A: Ok, you want a program that can calculate the average of some numbers (of any length), so I have some tips for you: you have a while loop to repeat something (here, getting a number), so get user input only once (in the while loop) you need a list to add all user inputs to, so create a list before the while loop (data = []) you have a while loop and you need a break condition (equal to 0), so check it in your loop (after getting the number from the user) after getting each number, you must add it to your list (simply google how to add an element to a list in python) if the input is 0, break (as you did in your code)... but before that calculate the average... the formula is the sum of the numbers divided by the length of the numbers (both have a builtin function in python; if you don't know them, simply google it... you get it as the first result) you want to know if 0 is the first number... check it in your main if (number == 0). I mean check whether the length of the list is 0 or not, and if it is 0, print an error and break. Be successful... and if you have any questions, comment here :)
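Putting the tips above together, a minimal version of the exercise could look like this (prompt wording is illustrative):

values = []
while True:
    n = int(input('enter a number (0 to stop): '))
    if n == 0:
        break
    values.append(n)

if values:                                   # at least one value before the 0
    print('average:', sum(values) / len(values))
else:                                        # the first input was the sentinel
    print('error: the first value entered must not be 0')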
create a program that computes the average of a collection of values entered by the user
2. In this exercise you will create a program that computes the average of a collection of values entered by the user. The user will enter 0 as a sentinel value to indicate that no further values will be provided. Your program should display an appropriate error message if the first value entered by the user is 0. Hint: Because the 0 marks the end of the input, it should not be included in the average. In Python, is this correct? while True: x= int(input('enter a first number')) if (x==0): print(' zero should not be included in the average') y=int(input('enter a second number')) if (y==0): break i = (x+y)/2 print(i)
[ "Ok, you want a program that can calculate the average of some numbers (in any length), so I have some tips for you:\n\nyou have a while loop to repeat something (there, getting number), so get user input only once (in while loop)\nyou need a list, to add all user inputs to it... so create a list before wile loop (data = [])\nyou have a while loop and you need a break condition (equal to 0), so check it in your loop (after getting number from user)\nafter getting each number, you must add it to your list (simply, google how to add an element to a list in python)\nif input is 0, so break (as you did in your code)... but before that calculate average... the formula is sum of numbers divided by length of numbers (both of them have a builtin function in python, if you don't know, simply google it... you get as first result)\nyou want to know if 0 is first number... check it in your main if (number == 0). I mean check whether length of list is 0 or not, and if it is 0, print an error and break.\n\nbe successful... and if you have any question, comment it here :)\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074635199_python.txt
Q: Pairing bluetooth devices with Passkey/Password in python - RFCOMM (Linux) I am working on a Python script to search for bluetooth devices and connect them using RFCOMM. This devices has Passkey/Password. I am using PyBlueZ and, as far as I know, this library cannot handle Passkey/Password connections (Python PyBluez connecting to passkey protected device). I am able to discover the devices and retrieve their names and addresses: nearby_devices = bluetooth.discover_devices(duration=4,lookup_names=True, flush_cache=True, lookup_class=False) But if tried to connect to a specific device using: s = bluetooth.BluetoothSocket(bluetooth.RFCOMM) s.connect((addr,port)) I get an error 'Device or resource busy (16)'. I tried some bash commands using the hcitool and bluetooth-agent, but I need to do the connection programmatically. I was able to connect to my device using the steps described here: How to pair a bluetooth device from command line on Linux. I want to ask if someone has connected to a bluetooth device with Passkey/Password using Python. I am thinking about to use the bash commands in Python using subprocess.call(), but I am not sure if it is a good idea. Thanks for any help. A: Finally I am able to connect to a device using PyBlueZ. I hope this answer will help others in the future. I tried the following: First, import the modules and discover the devices. import bluetooth, subprocess nearby_devices = bluetooth.discover_devices(duration=4,lookup_names=True, flush_cache=True, lookup_class=False) When you discover the device you want to connect, you need to know port, the address and passkey. With that information do the next: name = name # Device name addr = addr # Device Address port = 1 # RFCOMM port passkey = "1111" # passkey of the device you want to connect # kill any "bluetooth-agent" process that is already running subprocess.call("kill -9 `pidof bluetooth-agent`",shell=True) # Start a new "bluetooth-agent" process where XXXX is the passkey status = subprocess.call("bluetooth-agent " + passkey + " &",shell=True) # Now, connect in the same way as always with PyBlueZ try: s = bluetooth.BluetoothSocket(bluetooth.RFCOMM) s.connect((addr,port)) except bluetooth.btcommon.BluetoothError as err: # Error handler pass Now, you are connected!! You can use your socket for the task you need: s.recv(1024) # Buffer size s.send("Hello World!") Official PyBlueZ documentation is available here A: Is there a way to connect two phones via Bluetooth , the script should be running on a Linux host. Any suggestions of using pybluez or any other APIs? I have seen some examples where a Linux host is used as Client and is connect a phone (which is a server), but here I'm want to use Linux host as just a device to run the script and make two phones connect via Bluetooth.
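One hardening of the accepted approach above: shell=True with string interpolation is fragile, and the same two steps work with argument lists. A sketch assuming the same bluetooth-agent binary used in the answer is installed:

import subprocess

passkey = "1111"
# stop any agent already running (replaces: kill -9 `pidof bluetooth-agent`)
subprocess.call(["pkill", "-9", "bluetooth-agent"])
# start a fresh agent in the background with the passkey (replaces the '&')
agent = subprocess.Popen(["bluetooth-agent", passkey])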
Pairing bluetooth devices with Passkey/Password in python - RFCOMM (Linux)
I am working on a Python script to search for bluetooth devices and connect them using RFCOMM. These devices have a Passkey/Password. I am using PyBlueZ and, as far as I know, this library cannot handle Passkey/Password connections (Python PyBluez connecting to passkey protected device). I am able to discover the devices and retrieve their names and addresses: nearby_devices = bluetooth.discover_devices(duration=4,lookup_names=True, flush_cache=True, lookup_class=False) But if I tried to connect to a specific device using: s = bluetooth.BluetoothSocket(bluetooth.RFCOMM) s.connect((addr,port)) I get an error 'Device or resource busy (16)'. I tried some bash commands using the hcitool and bluetooth-agent, but I need to do the connection programmatically. I was able to connect to my device using the steps described here: How to pair a bluetooth device from command line on Linux. I want to ask if someone has connected to a bluetooth device with Passkey/Password using Python. I am thinking about using the bash commands in Python using subprocess.call(), but I am not sure if it is a good idea. Thanks for any help.
[ "Finally I am able to connect to a device using PyBlueZ. I hope this answer will help others in the future. I tried the following:\nFirst, import the modules and discover the devices.\nimport bluetooth, subprocess\nnearby_devices = bluetooth.discover_devices(duration=4,lookup_names=True,\n flush_cache=True, lookup_class=False)\n\nWhen you discover the device you want to connect, you need to know port, the address and passkey. With that information do the next:\nname = name # Device name\naddr = addr # Device Address\nport = 1 # RFCOMM port\npasskey = \"1111\" # passkey of the device you want to connect\n\n# kill any \"bluetooth-agent\" process that is already running\nsubprocess.call(\"kill -9 `pidof bluetooth-agent`\",shell=True)\n\n# Start a new \"bluetooth-agent\" process where XXXX is the passkey\nstatus = subprocess.call(\"bluetooth-agent \" + passkey + \" &\",shell=True)\n\n# Now, connect in the same way as always with PyBlueZ\ntry:\n s = bluetooth.BluetoothSocket(bluetooth.RFCOMM)\n s.connect((addr,port))\nexcept bluetooth.btcommon.BluetoothError as err:\n # Error handler\n pass\n\nNow, you are connected!! You can use your socket for the task you need:\ns.recv(1024) # Buffer size\ns.send(\"Hello World!\")\n\nOfficial PyBlueZ documentation is available here\n", "Is there a way to connect two phones via Bluetooth , the script should be running on a Linux host. Any suggestions of using pybluez or any other APIs?\nI have seen some examples where a Linux host is used as Client and is connect a phone (which is a server), but here I'm want to use Linux host as just a device to run the script and make two phones connect via Bluetooth.\n" ]
[ 16, 0 ]
[]
[]
[ "bluetooth", "linux", "pybluez", "python" ]
stackoverflow_0037465157_bluetooth_linux_pybluez_python.txt
Q: Load JSON data into postgres table using airflow I have an Airflow DAG that runs a spark file (reads two parquet files, performs transformations on them, and loads the data into a single JSON file). Now the data from this JSON file needs to be pushed into a Postgres table. At first, I was having trouble reading the JSON, but then I found a way to read the JSON as a whole list of multiple dictionaries. But I don't know how to load this data into the Postgres table. Here is my DAG snippet: import os, json from airflow import DAG from datetime import datetime, timedelta from airflow.operators.bash_operator import BashOperator from airflow.providers.postgres.operators.postgres import PostgresOperator from airflow.providers.postgres.hooks.postgres import PostgresHook from airflow.operators.python_operator import PythonOperator def read_json_file(filename): # function that I found online to read JSON with open(filename, "r") as r: response = r.read() response = response.replace('\n', '') response = response.replace('}{', '},{') response = "[" + response + "]" return json.loads(response) def load_data(ds, **kwargs): path_to_json = '/path/to/json/staging/day=20220815/' json_files = [pos_json for pos_json in os.listdir(path_to_json) if pos_json.endswith('.json')] filename = path_to_json+str(json_files[0]) doc = read_json_file(filename) date_id = [doc[i]['day'] for i in range(len(doc))] interact_id = [doc[i]['interact_id'] for i in range(len(doc))] case_id = [doc[i]['case_id'] for i in range(len(doc))] # 1 topic_id = [doc[i]['topic_id'] for i in range(len(doc))] create_date = [doc[i]['create_date'] for i in range(len(doc))] end_date = [doc[i]['end_date'] for i in range(len(doc))] topic_start_time = [doc[i]['topic_start_time'] for i in range(len(doc))] title = [doc[i]['title'] for i in range(len(doc))] direction = [doc[i]['direction'] for i in range(len(doc))] notes = [doc[i]['notes'] for i in range(len(doc))] _type_ = [doc[i]['_type_'] for i in range(len(doc))] reason = [doc[i]['reason'] for i in range(len(doc))] result = [doc[i]['result'] for i in range(len(doc))] # 2 msisdn = [doc[i]['msisdn'] for i in range(len(doc))] price_plan = [doc[i]['x_price_plan'] for i in range(len(doc))] cust_type = [doc[i]['cust_type'] for i in range(len(doc))] # 3 credit_limit = [doc[i]['credit_limit'] for i in range(len(doc))] # 4 unit = [doc[i]['unit'] for i in range(len(doc))] supervisor = [doc[i]['supervisor'] for i in range(len(doc))] sdc = [doc[i]['sdc'] for i in range(len(doc))] # 5 dealer_id = [doc[i]['dealer_id'] for i in range(len(doc))] # 6 year = [doc[i]['year'] for i in range(len(doc))] month = [doc[i]['month'] for i in range(len(doc))] subs_no = [doc[i]['subs_no'] for i in range(len(doc))] # 7 cust_bill_cycle = [doc[i]['cust_bill_cycle'] for i in range(len(doc))] # 8 row = (date_id,interact_id,case_id,topic_id,create_date,end_date,topic_start_time,title,\ direction,notes,_type_,reason,result,msisdn,price_plan,cust_type,credit_limit,\ unit,supervisor,sdc,dealer_id,year,month,subs_no,cust_bill_cycle) insert_cmd = """ INSERT INTO table_name (date_id,interact_id,case_id,topic_id,create_date,end_date,topic_start_time,title, direction,notes,_type_,reason,result,msisdn,price_plan,cust_type,credit_limit, unit,supervisor,sdc,dealer_id,year,month,subs_no,cust_bill_cycle) VALUES (%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s); """ pg_hook = PostgresHook(postgres_conn_id='postgres_default', sql=insert_cmd) for d in entry_data: pg_hook.run(insert_cmd, parameters=row) default_args = { 
'retries': 3, } with DAG ( dag_id='final_DAG', schedule_interval='0 0 * * *', start_date= datetime(2022, 11, 30), catchup=False, default_args=default_args ) as dag: execute_spark = BashOperator( task_id='execute_spark', bash_command=""" cd python3 path/to/spark_notebook.py """ ) load_data_task = PythonOperator( task_id='load_data_task', provide_context=True, python_callable=load_data, dag=dag) execute_spark >> load_data_task When the load_data_task is triggered, I get this error listed in my logs: psycopg2.errors.DatatypeMismatch: column "date_id" is of type date but expression is of type text[] I understand what the error is saying, but don't know how to deal with it. How can I get this thing done? A: The problem statement provided has multiple issues. The statement would benefit from the addition of, an example of what the json file or doc variable looks like the table definition for the table_name table code is missing the definition of entry_data The following solutions applies assumptions due to the missing information mentioned and uses a limited example. The error message appears to be saying that the date_id column in the table_name PostGRES table is of type DATE. Whereas the python variable named date_id is a list of strings (or in PostGRES terms data type text[]). It looks like all of the python variables input into the row variable are a lists. This is not a correct format to use for the SQL insert statement. Part 0. Assumptions Assumption 1 - doc looks like this [{ "day":"2022-11-30", "interact_id":"8675309", "case_id":"12345", "topic_id":"09876", "create_date":"2022-01-01", "end_date":"2022-12-05" }, { "day":"2022-11-29", "interact_id":"8675307", "case_id":"12344", "topic_id":"08888", "create_date":"2022-02-02", "end_date":"2023-01-05" }] Assumption 2 - table_name column data types are the following table_name column_name data_type table_name date_id DATE table_name interact_id TEXT table_name case_id TEXT table_name topic_id TEXT table_name create_date TEXT table_name end_date TEXT Look this up for your table using the following command, SELECT table_name, column_name, data_type FROM information_schema.columns WHERE table_name = 'table_name'; Part 1. Get rid of the python lists for each variable. This solution loops through the json and inserts into the sql table for each item. # esablish postgres connection pg_hook = PostgresHook(postgres_conn_id='postgres_default') insert_cmd = """ INSERT INTO table_name (date_id,interact_id,case_id,topic_id,create_date,end_date) VALUES(%s,%s,%s,%s,%s,%s); """ # load file doc = read_json_file(filename) # loop through items in doc for i in range(len(doc)): date_id = i['day'] interact_id = i['interact_id'] case_id = i['case_id'] topic_id = i['topic_id'] create_date = i['create_date'] end_date = i['end_date'] row = (date_id, interact_id, case_id, topic_id, create_date, end_date) # insert item to table pg_hook.run(insert_cmd, parameters=row) Part 2. Ensure each variable matches the data type that PostGRES expects The PostGRES DATE type format accepts many different input types: https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-DATETIME-DATE-TABLE yyyy-mm-dd is the recommended DATE format. So we will continue this solution with the assumption that is the format used by the table_name table To fix the error, the python date_id variable will need to be reformatted to a python datetime data type using the python datetime library. 
The python datetime format definition '%Y/%m/%d' defines the yyyy-mm-dd datetime format instead of this date_id = i['day'] use this to convert the string to a datetime type date_id = datetime.strptime(i['day'], '%Y/%m/%d') more about datetime.strptime function here: https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior
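As a follow-up to the per-row pg_hook.run loop above: PostgresHook inherits insert_rows from Airflow's DbApiHook, which parameterizes and batches the INSERTs for you. A minimal sketch reusing the parsed doc list and the column names from the answer:

from airflow.providers.postgres.hooks.postgres import PostgresHook

pg_hook = PostgresHook(postgres_conn_id='postgres_default')
target_fields = ['date_id', 'interact_id', 'case_id',
                 'topic_id', 'create_date', 'end_date']
rows = [(item['day'], item['interact_id'], item['case_id'],
         item['topic_id'], item['create_date'], item['end_date'])
        for item in doc]                     # doc parsed as shown above
pg_hook.insert_rows(table='table_name', rows=rows,
                    target_fields=target_fields, commit_every=1000)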
Load JSON data into postgres table using airflow
I have an Airflow DAG that runs a spark file (reads two parquet files, performs transformations on them, and loads the data into a single JSON file). Now the data from this JSON file needs to be pushed into a Postgres table. At first, I was having trouble reading the JSON, but then I found a way to read the JSON as a whole list of multiple dictionaries. But I don't know how to load this data into the Postgres table. Here is my DAG snippet: import os, json from airflow import DAG from datetime import datetime, timedelta from airflow.operators.bash_operator import BashOperator from airflow.providers.postgres.operators.postgres import PostgresOperator from airflow.providers.postgres.hooks.postgres import PostgresHook from airflow.operators.python_operator import PythonOperator def read_json_file(filename): # function that I found online to read JSON with open(filename, "r") as r: response = r.read() response = response.replace('\n', '') response = response.replace('}{', '},{') response = "[" + response + "]" return json.loads(response) def load_data(ds, **kwargs): path_to_json = '/path/to/json/staging/day=20220815/' json_files = [pos_json for pos_json in os.listdir(path_to_json) if pos_json.endswith('.json')] filename = path_to_json+str(json_files[0]) doc = read_json_file(filename) date_id = [doc[i]['day'] for i in range(len(doc))] interact_id = [doc[i]['interact_id'] for i in range(len(doc))] case_id = [doc[i]['case_id'] for i in range(len(doc))] # 1 topic_id = [doc[i]['topic_id'] for i in range(len(doc))] create_date = [doc[i]['create_date'] for i in range(len(doc))] end_date = [doc[i]['end_date'] for i in range(len(doc))] topic_start_time = [doc[i]['topic_start_time'] for i in range(len(doc))] title = [doc[i]['title'] for i in range(len(doc))] direction = [doc[i]['direction'] for i in range(len(doc))] notes = [doc[i]['notes'] for i in range(len(doc))] _type_ = [doc[i]['_type_'] for i in range(len(doc))] reason = [doc[i]['reason'] for i in range(len(doc))] result = [doc[i]['result'] for i in range(len(doc))] # 2 msisdn = [doc[i]['msisdn'] for i in range(len(doc))] price_plan = [doc[i]['x_price_plan'] for i in range(len(doc))] cust_type = [doc[i]['cust_type'] for i in range(len(doc))] # 3 credit_limit = [doc[i]['credit_limit'] for i in range(len(doc))] # 4 unit = [doc[i]['unit'] for i in range(len(doc))] supervisor = [doc[i]['supervisor'] for i in range(len(doc))] sdc = [doc[i]['sdc'] for i in range(len(doc))] # 5 dealer_id = [doc[i]['dealer_id'] for i in range(len(doc))] # 6 year = [doc[i]['year'] for i in range(len(doc))] month = [doc[i]['month'] for i in range(len(doc))] subs_no = [doc[i]['subs_no'] for i in range(len(doc))] # 7 cust_bill_cycle = [doc[i]['cust_bill_cycle'] for i in range(len(doc))] # 8 row = (date_id,interact_id,case_id,topic_id,create_date,end_date,topic_start_time,title,\ direction,notes,_type_,reason,result,msisdn,price_plan,cust_type,credit_limit,\ unit,supervisor,sdc,dealer_id,year,month,subs_no,cust_bill_cycle) insert_cmd = """ INSERT INTO table_name (date_id,interact_id,case_id,topic_id,create_date,end_date,topic_start_time,title, direction,notes,_type_,reason,result,msisdn,price_plan,cust_type,credit_limit, unit,supervisor,sdc,dealer_id,year,month,subs_no,cust_bill_cycle) VALUES (%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s); """ pg_hook = PostgresHook(postgres_conn_id='postgres_default', sql=insert_cmd) for d in entry_data: pg_hook.run(insert_cmd, parameters=row) default_args = { 'retries': 3, } with DAG ( dag_id='final_DAG', 
schedule_interval='0 0 * * *', start_date= datetime(2022, 11, 30), catchup=False, default_args=default_args ) as dag: execute_spark = BashOperator( task_id='execute_spark', bash_command=""" cd python3 path/to/spark_notebook.py """ ) load_data_task = PythonOperator( task_id='load_data_task', provide_context=True, python_callable=load_data, dag=dag) execute_spark >> load_data_task When the load_data_task is triggered, I get this error listed in my logs: psycopg2.errors.DatatypeMismatch: column "date_id" is of type date but expression is of type text[] I understand what the error is saying, but don't know how to deal with it. How can I get this thing done?
[ "The problem statement provided has multiple issues. The statement would benefit from the addition of,\n\nan example of what the json file or doc variable looks like\nthe table definition for the table_name table\ncode is missing the definition of entry_data\n\nThe following solutions applies assumptions due to the missing information mentioned and uses a limited example.\n\nThe error message appears to be saying that the date_id column in the table_name PostGRES table is of type DATE. Whereas the python variable named date_id is a list of strings (or in PostGRES terms data type text[]).\nIt looks like all of the python variables input into the row variable are a lists. This is not a correct format to use for the SQL insert statement.\n\nPart 0. Assumptions\nAssumption 1 - doc looks like this\n\n[{\n\"day\":\"2022-11-30\",\n\"interact_id\":\"8675309\",\n\"case_id\":\"12345\",\n\"topic_id\":\"09876\",\n\"create_date\":\"2022-01-01\",\n\"end_date\":\"2022-12-05\"\n},\n{\n\"day\":\"2022-11-29\",\n\"interact_id\":\"8675307\",\n\"case_id\":\"12344\",\n\"topic_id\":\"08888\",\n\"create_date\":\"2022-02-02\",\n\"end_date\":\"2023-01-05\"\n}]\n\nAssumption 2 - table_name column data types are the following\n\n\n\n\ntable_name\ncolumn_name\ndata_type\n\n\n\n\ntable_name\ndate_id\nDATE\n\n\ntable_name\ninteract_id\nTEXT\n\n\ntable_name\ncase_id\nTEXT\n\n\ntable_name\ntopic_id\nTEXT\n\n\ntable_name\ncreate_date\nTEXT\n\n\ntable_name\nend_date\nTEXT\n\n\n\n\nLook this up for your table using the following command,\nSELECT \n table_name, \n column_name, \n data_type \nFROM \n information_schema.columns\nWHERE \n table_name = 'table_name';\n\nPart 1. Get rid of the python lists for each variable.\nThis solution loops through the json and inserts into the sql table for each item.\n# esablish postgres connection\npg_hook = PostgresHook(postgres_conn_id='postgres_default')\ninsert_cmd = \"\"\"\n INSERT INTO table_name (date_id,interact_id,case_id,topic_id,create_date,end_date)\n VALUES(%s,%s,%s,%s,%s,%s);\n \"\"\"\n# load file\ndoc = read_json_file(filename)\n\n# loop through items in doc\nfor i in range(len(doc)):\n date_id = i['day']\n interact_id = i['interact_id']\n case_id = i['case_id']\n topic_id = i['topic_id']\n create_date = i['create_date']\n end_date = i['end_date']\n row = (date_id, interact_id, case_id, topic_id, create_date, end_date)\n\n # insert item to table\n pg_hook.run(insert_cmd, parameters=row)\n\nPart 2. Ensure each variable matches the data type that PostGRES expects\nThe PostGRES DATE type format accepts many different input types: https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-DATETIME-DATE-TABLE\nyyyy-mm-dd is the recommended DATE format. So we will continue this solution with the assumption that is the format used by the table_name table\nTo fix the error, the python date_id variable will need to be reformatted to a python datetime data type using the python datetime library.\nThe python datetime format definition '%Y/%m/%d' defines the yyyy-mm-dd datetime format\ninstead of this\ndate_id = i['day']\n\nuse this to convert the string to a datetime type\ndate_id = datetime.strptime(i['day'], '%Y/%m/%d')\n\nmore about datetime.strptime function here: https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior\n" ]
[ 1 ]
[]
[]
[ "airflow", "json", "postgresql", "python" ]
stackoverflow_0074633594_airflow_json_postgresql_python.txt
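The per-row pg_hook.run() loop in the answer above issues one statement per item. Airflow's PostgresHook also inherits insert_rows from DbApiHook, which batches the same insert; a minimal sketch, reusing the read_json_file helper and doc structure assumed in the answer, with the date string parsed once per item (the '%Y-%m-%d' format is the same assumption as above):

from datetime import datetime
from airflow.providers.postgres.hooks.postgres import PostgresHook

pg_hook = PostgresHook(postgres_conn_id='postgres_default')
doc = read_json_file(filename)  # helper assumed from the answer above

# build plain tuples, converting the date string up front
rows = [
    (datetime.strptime(item['day'], '%Y-%m-%d').date(),
     item['interact_id'], item['case_id'], item['topic_id'],
     item['create_date'], item['end_date'])
    for item in doc
]
pg_hook.insert_rows(
    table='table_name',
    rows=rows,
    target_fields=['date_id', 'interact_id', 'case_id',
                   'topic_id', 'create_date', 'end_date'],
)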
Q: how to get a json object via its position in python I have an array of json objects and I want to obtain a parameter of the last json object, but when I do it with the code that I will leave below, I get the last character of the string from the end_date parameter of all objects. How can I always get the end_date of the last json object? I hope you can help me. The array has the following structure: json = [ {'id':1,'name':'name1','init_date':'date','end_date':'date'}, {'id':2,'name':'name2','init_date':'date','end_date':'date'}, {'id':3,'name':'name3','init_date':'date','end_date':'date'}, {'id':4,'name':'name4','init_date':'date','end_date':'date'} ] My code: tk = token['token_type'] + " " + token['access_token'] url_enterprise = "url" response_monitor = requests.get(url_enterprise,headers={'Authorization': tk}).json() for i in reponse_monitor: if 'detail' not in response_monitor: print(i[end_date][-1]) A: You can simply use the following to return the "end_date" of the last json object: json =[{'id':1,'name':'name1','init_date':'date','end_date':'date'}, {'id':2,'name':'name2','init_date':'date','end_date':'date'}, {'id':3,'name':'name3','init_date':'date','end_date':'date'}, {'id':4,'name':'name4','init_date':'date','end_date':'date'}] print(json[-1]['end_date'])
how to get a json object via its position in python
I have an array of json objects and I want to obtain a parameter of the last json object, but when I do it with the code that I will leave below, I get the last character of the string from the end_date parameter of all objects. How can I always get the end_date of the last json object? I hope you can help me. The array has the following structure: json = [ {'id':1,'name':'name1','init_date':'date','end_date':'date'}, {'id':2,'name':'name2','init_date':'date','end_date':'date'}, {'id':3,'name':'name3','init_date':'date','end_date':'date'}, {'id':4,'name':'name4','init_date':'date','end_date':'date'} ] My code: tk = token['token_type'] + " " + token['access_token'] url_enterprise = "url" response_monitor = requests.get(url_enterprise,headers={'Authorization': tk}).json() for i in reponse_monitor: if 'detail' not in response_monitor: print(i[end_date][-1])
[ "Simply you can use the following to return \"end_date\" of last json object:\njson =[{'id':1,'name':'name1','init_date':'date','end_date':'date'}, \n{'id':2,'name':'name2','init_date':'date','end_date':'date'}, \n{'id':3,'name':'name3','init_date':'date','end_date':'date'}, \n{'id':4,'name':'name4','init_date':'date','end_date':'date'}] \n\nprint(json[-1]['end_date'])\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "json", "python" ]
stackoverflow_0074635724_arrays_json_python.txt
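The bug in the question is worth spelling out: inside the loop, i is already a dict, so i['end_date'][-1] applies [-1] to the string value, not to the list. A tiny sketch contrasting the two:

doc = [{'end_date': '2022-01-01'}, {'end_date': '2022-12-05'}]

for i in doc:
    print(i['end_date'][-1])   # '1', '5' -- [-1] indexes the *string*

print(doc[-1]['end_date'])     # '2022-12-05' -- [-1] indexes the *list*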
Q: Finding root of a function with two outputs specified in return statement I am currently writing a code in Python where the objective is to find the root of the output of a function with respect to input variable x. The code looks like this: def Compound_Correlation_Function(x): # Here comes a long part of the code... Equity_Solve = Tranches.loc[0, 'Par_Spread_bps'] - Market_Data.iloc[0,0] Mezzanine_Solve = Tranches.loc[1, 'Par_Spread_bps'] - Market_Data.iloc[1,0] return Equity_Solve, Mezzanine_Solve Correlation_Value = optimize.root(Compound_Correlation_Function, x0 = 0.3) As can be seen in the code block above, there are two outputs specified: Equity_Solve Mezzanine_Solve I now want to find the root for both outputs separately. If I comment out the Mezzanine_Solve part in the return statement, then the the optimize procedure gives me the solution I want. Obviously, I want to automate my code as much as possible. Is it possible to specify the output for which I want to find the root in the optimize statement? I tried the following, without success: Correlation_Value = optimize.root(Compound_Correlation_Function[0], x0 = 0.3) Correlation_Value = optimize.root(Compound_Correlation_Function(x)[0], x0 = 0.3) Correlation_Value = optimize.root(Compound_Correlation_Function()[], x0 = 0.3) Any help is appreciated. Thank you in advance! A: I think the problem is that your function returns a tuple of numbers, but root is expecting a single number. Assuming you want to solve each equation separately, then you could include an argument in Compound_Correlation_Function to switch between the functions: def Compound_Correlation_Function(x, return_equity=True): # Here comes a long part of the code... if return_equity: Equity_Solve = Tranches.loc[0, 'Par_Spread_bps'] - Market_Data.iloc[0,0] return Equity_Solve else: Mezzanine_Solve = Tranches.loc[1, 'Par_Spread_bps'] - Market_Data.iloc[1,0] return Mezzanine_Solve Then pass the return_equity argument in as an extra argument via args, i.e. call root(Compound_Correlation_Function, x0=0.3, args=(True,)) to solve Equity_Solve, and set args=(False,) to solve Mezzanine_Solve. You could also define a function wrapper that calls Compound_Correlation_Function and returns only one of the values. A: surely you're overthinking it. Just define two new functions: def equity_solve(x): return Compound_Correlation_Function(x)[0] def mezzanine_solve(x): return Compound_Correlation_Function(x)[1]
Finding root of a function with two outputs specified in return statement
I am currently writing a code in Python where the objective is to find the root of the output of a function with respect to input variable x. The code looks like this: def Compound_Correlation_Function(x): # Here comes a long part of the code... Equity_Solve = Tranches.loc[0, 'Par_Spread_bps'] - Market_Data.iloc[0,0] Mezzanine_Solve = Tranches.loc[1, 'Par_Spread_bps'] - Market_Data.iloc[1,0] return Equity_Solve, Mezzanine_Solve Correlation_Value = optimize.root(Compound_Correlation_Function, x0 = 0.3) As can be seen in the code block above, there are two outputs specified: Equity_Solve Mezzanine_Solve I now want to find the root for both outputs separately. If I comment out the Mezzanine_Solve part in the return statement, then the the optimize procedure gives me the solution I want. Obviously, I want to automate my code as much as possible. Is it possible to specify the output for which I want to find the root in the optimize statement? I tried the following, without success: Correlation_Value = optimize.root(Compound_Correlation_Function[0], x0 = 0.3) Correlation_Value = optimize.root(Compound_Correlation_Function(x)[0], x0 = 0.3) Correlation_Value = optimize.root(Compound_Correlation_Function()[], x0 = 0.3) Any help is appreciated. Thank you in advance!
[ "I think the problem is that your function returns a tuple of numbers, but root is expecting a single number.\nAssuming you want to solve each equation separately, then you could include an argument in Compound_Correlation_Function to switch between the functions:\ndef Compound_Correlation_Function(x, return_equity=True):\n \n # Here comes a long part of the code...\n \n if return_equity:\n Equity_Solve = Tranches.loc[0, 'Par_Spread_bps'] - Market_Data.iloc[0,0]\n return Equity_Solve\n else:\n Mezzanine_Solve = Tranches.loc[1, 'Par_Spread_bps'] - Market_Data.iloc[1,0]\n return Mezzanine_Solve\n\nThen pass the return_equity argument in as an extra argument via args, i.e. call\nroot(Compound_Correlation_Function, x0=0.3, args=(True,))\n\nto solve Equity_Solve, and set args=(False,) to solve Mezzanine_Solve.\nYou could also define a function wrapper that calls Compound_Correlation_Function and returns only one of the values.\n", "surely you're overthinking it. Just define two new functions:\ndef equity_solve(x):\n return Compound_Correlation_Function(x)[0]\n\ndef mezzanine_solve(x):\n return Compound_Correlation_Function(x)[1]\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "scipy_optimize" ]
stackoverflow_0074635091_python_scipy_optimize.txt
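The wrappers from the second answer can also be written inline with lambdas; a sketch, assuming Compound_Correlation_Function is defined as in the question:

from scipy import optimize

# select output 0 (equity) or output 1 (mezzanine) of the tuple
equity_root = optimize.root(
    lambda x: Compound_Correlation_Function(x)[0], x0=0.3)
mezzanine_root = optimize.root(
    lambda x: Compound_Correlation_Function(x)[1], x0=0.3)

print(equity_root.x, mezzanine_root.x)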
Q: SLURM Array Job BASH scripting within python subprocess Update: I was able to get a variable assignment from SLURM_JOB_ID with this line. JOBID=`echo ${SLURM_JOB_ID}` However, I haven't yet gotten SLURM_ARRAY_JOB_ID to assign itself to JOBID. Due to needing to support existing HPC workflows. I have a need to pass a bash script within a python subprocess. It was working great with openpbs, now I need to convert it to SLURM. I have it largely working in SLURM hosted on Ubuntu 20.04 except that the job array is not being populated. Below is a code snippet greatly stripped down to what's relevant. The specific question I have is. Why are the lines JOBID=${SLURM_JOB_ID} and JOBID=${SLURM_ARRAY_JOB_ID} are not getting their assignments? I've tried using a heredoc and various bashisms without success. The code certainly can be cleaner, it's the result of multiple people without a common standard. These are relevant Accessing task id for array jobs Handling bash system variables and slurm environmental variables in a wrapper script sbatch_arguments = "#SBATCH --array=1-{}".format(get_instance_count()) proc = Popen('ssh ${USER}@server_hostname /apps/workflows/slurm_wrapper.sh sbatch', shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, close_fds=True) job_string = """#!/bin/bash -x #SBATCH --job-name=%(name)s #SBATCH -t %(walltime)s #SBATCH --cpus-per-task %(processors)s #SBATCH --mem=%(memory)s %(sbatch_args)s # Assign JOBID if [ %(num_jobs)s -eq 1 ]; then JOBID=${SLURM_JOB_ID} else JOBID=${SLURM_ARRAY_JOB_ID} fi exit ${returnCode} """ % ({"walltime": walltime ,"processors": total_cores ,"binary": self.binary_name ,"name": ''.join(x for x in self.binary_name if x.isalnum()) ,"memory": memory ,"num_jobs": self.get_instance_count() ,"sbatch_args": sbatch_arguments }) # Send job_string to sbatch stdout, stderr = proc.communicate(input=job_string) A: Following up on this. I sovled it by passing SBATCH directives as args to the sbatch command sbatch_args = """--job-name=%(name)s --time=%(walltime)s --partition=defq --cpus-per-task=%(processors)s --mem=%(memory)s""" % ( {"walltime": walltime ,"processors": cores ,"name": ''.join(x for x in self.binary_name if x.isalnum()) ,"memory": memory }) # Open a pipe to the sbatch command. {tee /home/ahs/schuec1/_stderr_slurmqueue | sbatch; } # The SLURM variables SLURM_ARRAY_* do not exist until after sbatch is called. # Popen.communicate has BASH interpret all variables at the same time the script is sent. # Because of that, the job array needs to be declared prior to the rest of the BASH script. # It seems further that all SBATCH directives are not being evaultated when passed via a string with .communicate # due to this, all SBATCH directives will be passed as arguments to the slurm_wrapper.sh as the first command to the Popen pipe. proc = Popen('ssh ${USER}@ch3lahpcgw1.corp.cat.com /apps/workflows/slurm_wrapper.sh sbatch %s' % sbatch_args, shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, close_fds=True, executable='/bin/bash')
SLURM Array Job BASH scripting within python subprocess
Update: I was able to get a variable assignment from SLURM_JOB_ID with this line. JOBID=`echo ${SLURM_JOB_ID}` However, I haven't yet gotten SLURM_ARRAY_JOB_ID to assign itself to JOBID. Due to needing to support existing HPC workflows. I have a need to pass a bash script within a python subprocess. It was working great with openpbs, now I need to convert it to SLURM. I have it largely working in SLURM hosted on Ubuntu 20.04 except that the job array is not being populated. Below is a code snippet greatly stripped down to what's relevant. The specific question I have is. Why are the lines JOBID=${SLURM_JOB_ID} and JOBID=${SLURM_ARRAY_JOB_ID} are not getting their assignments? I've tried using a heredoc and various bashisms without success. The code certainly can be cleaner, it's the result of multiple people without a common standard. These are relevant Accessing task id for array jobs Handling bash system variables and slurm environmental variables in a wrapper script sbatch_arguments = "#SBATCH --array=1-{}".format(get_instance_count()) proc = Popen('ssh ${USER}@server_hostname /apps/workflows/slurm_wrapper.sh sbatch', shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, close_fds=True) job_string = """#!/bin/bash -x #SBATCH --job-name=%(name)s #SBATCH -t %(walltime)s #SBATCH --cpus-per-task %(processors)s #SBATCH --mem=%(memory)s %(sbatch_args)s # Assign JOBID if [ %(num_jobs)s -eq 1 ]; then JOBID=${SLURM_JOB_ID} else JOBID=${SLURM_ARRAY_JOB_ID} fi exit ${returnCode} """ % ({"walltime": walltime ,"processors": total_cores ,"binary": self.binary_name ,"name": ''.join(x for x in self.binary_name if x.isalnum()) ,"memory": memory ,"num_jobs": self.get_instance_count() ,"sbatch_args": sbatch_arguments }) # Send job_string to sbatch stdout, stderr = proc.communicate(input=job_string)
[ "Following up on this. I sovled it by passing SBATCH directives as args to the sbatch command\n sbatch_args = \"\"\"--job-name=%(name)s --time=%(walltime)s --partition=defq --cpus-per-task=%(processors)s --mem=%(memory)s\"\"\" % (\n {\"walltime\": walltime\n ,\"processors\": cores\n ,\"name\": ''.join(x for x in self.binary_name if x.isalnum())\n ,\"memory\": memory\n })\n\n # Open a pipe to the sbatch command. {tee /home/ahs/schuec1/_stderr_slurmqueue | sbatch; }\n # The SLURM variables SLURM_ARRAY_* do not exist until after sbatch is called.\n # Popen.communicate has BASH interpret all variables at the same time the script is sent.\n # Because of that, the job array needs to be declared prior to the rest of the BASH script.\n\n # It seems further that all SBATCH directives are not being evaultated when passed via a string with .communicate\n # due to this, all SBATCH directives will be passed as arguments to the slurm_wrapper.sh as the first command to the Popen pipe.\n\n proc = Popen('ssh ${USER}@ch3lahpcgw1.corp.cat.com /apps/workflows/slurm_wrapper.sh sbatch %s' % sbatch_args,\n shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE,\n close_fds=True,\n executable='/bin/bash')\n\n" ]
[ 0 ]
[]
[]
[ "hpc", "python", "slurm", "subprocess", "ubuntu" ]
stackoverflow_0074323372_hpc_python_slurm_subprocess_ubuntu.txt
Q: Negative lookbehind + Non capturing group (?<!")https:\/\/t.me\/(c)?\/?([\+a-zA-Z0-9]+)\/?([0-9]*)? I want to find all telegram links without quotation marks (") but I don't want the leading negative lookbehind to be a group, how can I do this? I tried the following but it didn't work. This code works but i want the initial negative lookbehind not to create group. My steps: (?:(?<!")) not worked, (?<!(?:")) not worked either Examples: https://t.me/+AjFb2c8u85UfYrY0 -> True (1 group -> +AjFb2c8u85UfYrY0) (not two groups) "https://t.me/+AjFb2c8u85UfYrY0 -> False A: This code: const regex = /(?<!")https:\/\/t.me\/(c)?\/?([\+a-zA-Z0-9]+)\/?([0-9]*)?/ const text = 'https://t.me/+AjFb2c8u85UfYrY0' const [fullMatch, ...groups] = text.match(regex); console.log(groups); Returns [undefined, "+AjFb2c8u85UfYrY0", undefined] So you might think the first undefined is because of your negative lookbehind, but it's not! The first undefined is the result of the (c)? capturing group. So if you capture groups on a url with the /c/ pattern, here is what you'll get: const regex = /(?<!")https:\/\/t.me\/(c)?\/?([\+a-zA-Z0-9]+)\/?([0-9]*)?/ const text = 'https://t.me/c/+AjFb2c8u85UfYrY0' const [fullMatch, ...groups] = text.match(regex); console.log(groups); So there is nothing you need to do for the initial negative lookbehind not to create group. It is already working.
Negative lookbehind + Non capturing group
(?<!")https:\/\/t.me\/(c)?\/?([\+a-zA-Z0-9]+)\/?([0-9]*)? I want to find all telegram links without quotation marks (") but I don't want the leading negative lookbehind to be a group, how can I do this? I tried the following but it didn't work. This code works but i want the initial negative lookbehind not to create group. My steps: (?:(?<!")) not worked, (?<!(?:")) not worked either Examples: https://t.me/+AjFb2c8u85UfYrY0 -> True (1 group -> +AjFb2c8u85UfYrY0) (not two groups) "https://t.me/+AjFb2c8u85UfYrY0 -> False
[ "This code:\n\n\nconst regex = /(?<!\")https:\\/\\/t.me\\/(c)?\\/?([\\+a-zA-Z0-9]+)\\/?([0-9]*)?/\nconst text = 'https://t.me/+AjFb2c8u85UfYrY0'\nconst [fullMatch, ...groups] = text.match(regex);\nconsole.log(groups);\n\n\n\nReturns [undefined, \"+AjFb2c8u85UfYrY0\", undefined]\nSo you might think the first undefined is because of your negative lookbehind, but it's not! The first undefined is the result of the (c)? capturing group.\nSo if you capture groups on a url with the /c/ pattern, here is what you'll get:\n\n\nconst regex = /(?<!\")https:\\/\\/t.me\\/(c)?\\/?([\\+a-zA-Z0-9]+)\\/?([0-9]*)?/\nconst text = 'https://t.me/c/+AjFb2c8u85UfYrY0'\nconst [fullMatch, ...groups] = text.match(regex);\nconsole.log(groups);\n\n\n\nSo there is nothing you need to do for the initial negative lookbehind not to create group. It is already working.\n" ]
[ 0 ]
[]
[]
[ "hyperlink", "python", "regex", "regex_group", "telegram" ]
stackoverflow_0074634950_hyperlink_python_regex_regex_group_telegram.txt
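The answer demonstrates the point in JavaScript; since the question is tagged python, here is the same check with the re module (a sketch -- the dot in t.me is additionally escaped here, which the original pattern left unescaped):

import re

pattern = re.compile(r'(?<!")https://t\.me/(c)?/?([+a-zA-Z0-9]+)/?([0-9]*)?')

m = pattern.search('https://t.me/+AjFb2c8u85UfYrY0')
print(m.group(2))   # '+AjFb2c8u85UfYrY0' -- from the 2nd of exactly 3
                    # capturing groups; the lookbehind contributes none

print(pattern.search('"https://t.me/+AjFb2c8u85UfYrY0'))   # None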
Q: How python jira lib to change issue's resolution I try to update an issue's resolution through python jira lib, but get below error. >>> jp=JiraProject('VCART', 'https://jira.microhard.com') >>> _issue=jp.issue('VCART-4046') >>> _issue.update({'Resolution': {'name': 'Done'}}) Traceback (most recent call last): File "<console>", line 1, in <module> File "/usr/local/lib/python3.6/dist-packages/jira/resources.py", line 485, in update super(Issue, self).update(async_=async_, jira=jira, notify=notify, fields=data) File "/usr/local/lib/python3.6/dist-packages/jira/resources.py", line 233, in update self.self + querystring, data=data) File "/usr/local/lib/python3.6/dist-packages/jira/resilientsession.py", line 157, in put return self.__verb('PUT', url, **kwargs) File "/usr/local/lib/python3.6/dist-packages/jira/resilientsession.py", line 147, in __verb raise_on_error(response, verb=verb, **kwargs) File "/usr/local/lib/python3.6/dist-packages/jira/resilientsession.py", line 57, in raise_on_error r.status_code, error, r.url, request=request, response=r, **kwargs) jira.exceptions.JIRAError: JiraError HTTP 400 url: https://jira.microhard.com/rest/api/2/issue/4559852 text: Field 'Resolution' cannot be set. It is not on the appropriate screen, or unknown. response headers = {'Content-Type': 'application/json;charset=UTF-8', 'Transfer-Encoding': 'chunked', 'Connection': 'close', 'X-AREQUESTID': '407x17688874x7', 'X-ASESSIONID': '1l8dfq5', 'X-ANODEID': 'jira1prda2', 'Referrer-Policy': 'strict-origin-when-cross-origin', 'X-XSS-Protection': '1; mode=block', 'X-Content-Type-Options': 'nosniff', 'Strict-Transport-Security': 'max-age=31536000', 'X-Seraph-LoginReason': 'OK', 'X-RateLimit-Limit': '500', 'X-RateLimit-Remaining': '499', 'X-RateLimit-FillRate': '40', 'X-RateLimit-Interval-Seconds': '5', 'Retry-After': '0', 'X-AUSERNAME': 'buildaudit', 'Cache-Control': 'no-cache, no-store, no-transform', 'Content-Encoding': 'gzip', 'Vary': 'User-Agent', 'Date': 'Wed, 30 Nov 2022 14:47:38 GMT'} response text = {"errorMessages":[],"errors":{"Resolution":"Field 'Resolution' cannot be set. It is not on the appropriate screen, or unknown."}} Below neither works _issue.update({"fields": {"resolution": {"name": "Closed"}}}) The status of current issue is Reopened A: In the UI there is the long-standing Atlassian-suggested approach of adding a global self-transition to all statuses in a workflow, withe a Resolution screen. Then bulk transition the issues back to the same status. More info at https://confluence.atlassian.com/cloudkb/best-practices-on-using-the-resolution-field-968660796.html#Bestpracticesonusingthe%22Resolution%22field-Thefollowinginstructionsaremeanttocorrectpreviousissuesthathavetheresolutionfieldincorrectlyset. Then there is the way that admins with ScriptRunner do it, using the built-in script to fix resolutions. To script this with an external call is harder. I guess reopening the issue and then setting the resolution when closing it again is possible, though slow
How python jira lib to change issue's resolution
I try to update an issue's resolution through python jira lib, but get below error. >>> jp=JiraProject('VCART', 'https://jira.microhard.com') >>> _issue=jp.issue('VCART-4046') >>> _issue.update({'Resolution': {'name': 'Done'}}) Traceback (most recent call last): File "<console>", line 1, in <module> File "/usr/local/lib/python3.6/dist-packages/jira/resources.py", line 485, in update super(Issue, self).update(async_=async_, jira=jira, notify=notify, fields=data) File "/usr/local/lib/python3.6/dist-packages/jira/resources.py", line 233, in update self.self + querystring, data=data) File "/usr/local/lib/python3.6/dist-packages/jira/resilientsession.py", line 157, in put return self.__verb('PUT', url, **kwargs) File "/usr/local/lib/python3.6/dist-packages/jira/resilientsession.py", line 147, in __verb raise_on_error(response, verb=verb, **kwargs) File "/usr/local/lib/python3.6/dist-packages/jira/resilientsession.py", line 57, in raise_on_error r.status_code, error, r.url, request=request, response=r, **kwargs) jira.exceptions.JIRAError: JiraError HTTP 400 url: https://jira.microhard.com/rest/api/2/issue/4559852 text: Field 'Resolution' cannot be set. It is not on the appropriate screen, or unknown. response headers = {'Content-Type': 'application/json;charset=UTF-8', 'Transfer-Encoding': 'chunked', 'Connection': 'close', 'X-AREQUESTID': '407x17688874x7', 'X-ASESSIONID': '1l8dfq5', 'X-ANODEID': 'jira1prda2', 'Referrer-Policy': 'strict-origin-when-cross-origin', 'X-XSS-Protection': '1; mode=block', 'X-Content-Type-Options': 'nosniff', 'Strict-Transport-Security': 'max-age=31536000', 'X-Seraph-LoginReason': 'OK', 'X-RateLimit-Limit': '500', 'X-RateLimit-Remaining': '499', 'X-RateLimit-FillRate': '40', 'X-RateLimit-Interval-Seconds': '5', 'Retry-After': '0', 'X-AUSERNAME': 'buildaudit', 'Cache-Control': 'no-cache, no-store, no-transform', 'Content-Encoding': 'gzip', 'Vary': 'User-Agent', 'Date': 'Wed, 30 Nov 2022 14:47:38 GMT'} response text = {"errorMessages":[],"errors":{"Resolution":"Field 'Resolution' cannot be set. It is not on the appropriate screen, or unknown."}} Below neither works _issue.update({"fields": {"resolution": {"name": "Closed"}}}) The status of current issue is Reopened
[ "In the UI there is the long-standing Atlassian-suggested approach of adding a global self-transition to all statuses in a workflow, withe a Resolution screen. Then bulk transition the issues back to the same status. More info at\nhttps://confluence.atlassian.com/cloudkb/best-practices-on-using-the-resolution-field-968660796.html#Bestpracticesonusingthe%22Resolution%22field-Thefollowinginstructionsaremeanttocorrectpreviousissuesthathavetheresolutionfieldincorrectlyset.\nThen there is the way that admins with ScriptRunner do it, using the built-in script to fix resolutions.\nTo script this with an external call is harder. I guess reopening the issue and then setting the resolution when closing it again is possible, though slow\n" ]
[ 0 ]
[]
[]
[ "jira", "python", "python_jira" ]
stackoverflow_0074629832_jira_python_python_jira.txt
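For the "reopen, then close again with a resolution" route the answer ends on, the python jira library exposes workflow transitions directly; a sketch, where the transition name 'Close Issue' and the resolution 'Done' are workflow-specific assumptions to replace with whatever jira.transitions() reports:

from jira import JIRA

jira = JIRA('https://jira.example.com', basic_auth=('user', 'api_token'))
issue = jira.issue('VCART-4046')

# inspect which transitions this workflow allows from the current status
for t in jira.transitions(issue):
    print(t['id'], t['name'])

# resolution can only be set on a transition whose screen includes it
jira.transition_issue(issue, transition='Close Issue',
                      fields={'resolution': {'name': 'Done'}})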
Q: Change date format of these string using Python I have a string from a pdf that I want to transform it to the date format that I want to work with later, the string is 05Dec22 how can I change it to 12/05/2022? import datetime date1 = '05Dec22' date1 = datetime.datetime.strptime(date1, '%d%m%Y').strftime('%m/%d/%y') date1 = str(date1) This is what i tried so far A: If you execute the code you'll get the following error, ValueError: time data '05Dec22' does not match format '%d%m%Y' this is because your time string is not in the specified format given ('%d%m%Y'). You can search for tables on the internet which show the placeholders that represent a certain formatting, if you look at the one provided here, you'll see that the formatting your string has is '%d%b%y', in this case, the %b placeholder represents the abbreviated month name and the %y placeholder is the year without century, just as your example string. Now, if you fix that in your code, import datetime date1 = '05Dec22' date1 = datetime.datetime.strptime(date1, '%d%b%y').strftime('%m/%d/%Y') date1 = str(date1) you'll get the desired result. Note that you also have to change the output format in strftime. As I said before, the %y placeholder is the year without century. For you to get the year including the century, you have to use %Y.
Change date format of these string using Python
I have a string from a pdf that I want to transform to the date format that I want to work with later. The string is 05Dec22; how can I change it to 12/05/2022? import datetime date1 = '05Dec22' date1 = datetime.datetime.strptime(date1, '%d%m%Y').strftime('%m/%d/%y') date1 = str(date1) This is what I tried so far
[ "If you execute the code you'll get the following error,\nValueError: time data '05Dec22' does not match format '%d%m%Y'\n\nthis is because your time string is not in the specified format given ('%d%m%Y'). You can search for tables on the internet which show the placeholders that represent a certain formatting, if you look at the one provided here, you'll see that the formatting your string has is '%d%b%y', in this case, the %b placeholder represents the abbreviated month name and the %y placeholder is the year without century, just as your example string. Now, if you fix that in your code,\nimport datetime\n\n\ndate1 = '05Dec22'\ndate1 = datetime.datetime.strptime(date1, '%d%b%y').strftime('%m/%d/%Y')\ndate1 = str(date1)\n\nyou'll get the desired result.\nNote that you also have to change the output format in strftime. As I said before, the %y placeholder is the year without century. For you to get the year including the century, you have to use %Y.\n" ]
[ 0 ]
[]
[]
[ "datetime", "python", "string" ]
stackoverflow_0074635764_datetime_python_string.txt
Q: Why is this function not returning the list of values? By printing the array "total", I can see that the values are appending correctly. And yet when I print(linked_list_values(a)), it returns None. a = Node(5) b = Node(3) c = Node(9) total = [] def linked_list_values(head): print(total) if head == None: return None total.append(head.num) linked_list_values(head.next) print(linked_list_values(a)) A: The function returns None because you never have a return statement in it. It does mutate total, but it gets mutated in-place. Try printing the value of total after the function runs. >>> linked_list_values(a) None >>> total [5, 3, 9] # Assuming a.next == b and b.next == c A: Your function isn't returning anything for the statement at the bottom to print meaning Python is gonna read the return value of your function as None. A simple fix would be to add a return statement after the recursive call. a = Node(5) b = Node(3) c = Node(9) total = [] def linked_list_values(head): if head == None: return None total.append(head.num) linked_list_values(head.next) return total print(linked_list_values(a))
Why is this function not returning the list of values?
By printing the array "total", I can see that the values are appending correctly. And yet when I print(linked_list_values(a)), it returns None. a = Node(5) b = Node(3) c = Node(9) total = [] def linked_list_values(head): print(total) if head == None: return None total.append(head.num) linked_list_values(head.next) print(linked_list_values(a))
[ "The function returns None because you never have a return statement in it. It does mutate total, but it gets mutated in-place.\nTry printing the value of total after the function runs.\n>>> linked_list_values(a)\nNone\n>>> total\n[5, 3, 9] # Assuming a.next == b and b.next == c\n\n", "Your function isn't returning anything for the statement at the bottom to print meaning Python is gonna read the return value of your function as None. A simple fix would be to add a return statement after the recursive call.\na = Node(5)\nb = Node(3) \nc = Node(9) \ntotal = [] \ndef linked_list_values(head): \n if head == None: \n return None \n total.append(head.num) \n linked_list_values(head.next)\n return total \nprint(linked_list_values(a))\n\n" ]
[ 2, 0 ]
[]
[]
[ "linked_list", "python" ]
stackoverflow_0074635810_linked_list_python.txt
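Both answers keep the module-level total list, which silently accumulates values across calls; a sketch of a variant that builds and returns the list from the recursion itself, assuming the same Node class from the question:

def linked_list_values(head):
    if head is None:
        return []
    # prepend this node's value to whatever the rest of the list yields
    return [head.num] + linked_list_values(head.next)

print(linked_list_values(a))  # [5, 3, 9], assuming a.next == b and b.next == c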
Q: Counting drop on an image I have image of drops and I want to calculate the number of it. Here is the original image : And Here after threshold application : i tried a lot of fonction on OpenCV and it's never right. Do you have any ideas on how to do ? Thanks The best I got, was by using : (img_morph is my binairized image) rbc_bw = label(img_morph) rbc_props = regionprops(rbc_bw) fig, ax = plt.subplots(figsize=(18, 8)) ax.imshow(img_morph) rbc_count = 0 for i, prop in enumerate(filter(lambda x: x.area > 250, rbc_props)): y1, x1, y2, x2 = (prop.bbox[0], prop.bbox[1], prop.bbox[2], prop.bbox[3]) width = x2 - x1 height = y2 - y1 r = plt.Rectangle((x1, y1), width = width, height=height, color='b', fill=False) ax.add_patch(r) rbc_count += 1 print('Red Blood Cell Count:', rbc_count) plt.show() And all my circles are detected here but also the gap in between. A more difficult image : A: Core idea: matchTemplate. Approach: pick a template manually from the picture histogram equalization for badly lit inputs (or always) matchTemplate with suitable matching mode also using copyMakeBorder to catch instances clipping the border thresholding and non-maximum suppression I'll skip the boring parts and use the first example input. Manually picked template: scores = cv.matchTemplate(haystack, template, cv.TM_CCOEFF_NORMED) Thresholding and NMS: levelmask = (scores >= 0.3) localmax = cv.dilate(scores, None, iterations=26) localmax = (scores == localmax) candidates = levelmask & localmax (nlabels, labels, stats, centroids) = cv.connectedComponentsWithStats(candidates.astype(np.uint8), connectivity=8) print(nlabels-1, "found") # background counted too # and then draw a circle for each centroid except label 0 And that finds 766 instances. I see a few false negatives (missed) and saw a false positive too once, but that looks like less than 1%.
Counting drop on an image
I have an image of drops and I want to calculate the number of them. Here is the original image: And here after threshold application: I tried a lot of functions in OpenCV and it's never right. Do you have any ideas on how to do this? Thanks The best I got was by using : (img_morph is my binarized image) rbc_bw = label(img_morph) rbc_props = regionprops(rbc_bw) fig, ax = plt.subplots(figsize=(18, 8)) ax.imshow(img_morph) rbc_count = 0 for i, prop in enumerate(filter(lambda x: x.area > 250, rbc_props)): y1, x1, y2, x2 = (prop.bbox[0], prop.bbox[1], prop.bbox[2], prop.bbox[3]) width = x2 - x1 height = y2 - y1 r = plt.Rectangle((x1, y1), width = width, height=height, color='b', fill=False) ax.add_patch(r) rbc_count += 1 print('Red Blood Cell Count:', rbc_count) plt.show() And all my circles are detected here, but also the gaps in between. A more difficult image :
[ "Core idea: matchTemplate.\nApproach:\n\npick a template manually from the picture\n\nhistogram equalization for badly lit inputs (or always)\n\n\nmatchTemplate with suitable matching mode\n\nalso using copyMakeBorder to catch instances clipping the border\n\n\nthresholding and non-maximum suppression\n\nI'll skip the boring parts and use the first example input.\nManually picked template:\n\nscores = cv.matchTemplate(haystack, template, cv.TM_CCOEFF_NORMED)\n\nThresholding and NMS:\nlevelmask = (scores >= 0.3)\n\nlocalmax = cv.dilate(scores, None, iterations=26)\nlocalmax = (scores == localmax)\n\ncandidates = levelmask & localmax\n\n(nlabels, labels, stats, centroids) = cv.connectedComponentsWithStats(candidates.astype(np.uint8), connectivity=8)\nprint(nlabels-1, \"found\") # background counted too\n# and then draw a circle for each centroid except label 0\n\nAnd that finds 766 instances. I see a few false negatives (missed) and saw a false positive too once, but that looks like less than 1%.\n\n" ]
[ 2 ]
[]
[]
[ "detection", "opencv", "python" ]
stackoverflow_0074633013_detection_opencv_python.txt
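The answer above skips the preprocessing steps it lists; a sketch assembling the whole pipeline under stated assumptions -- the file name, the template crop coordinates, the border size, and the 0.3 / 26 constants are all placeholders to tune per image:

import cv2 as cv
import numpy as np

haystack = cv.imread('drops.png', cv.IMREAD_GRAYSCALE)
haystack = cv.equalizeHist(haystack)            # evens out uneven lighting

template = haystack[300:340, 400:440]           # hypothetical manual crop
th, tw = template.shape

# pad so drops clipping the image border can still score a full match
pad = max(th, tw)
haystack = cv.copyMakeBorder(haystack, pad, pad, pad, pad, cv.BORDER_REPLICATE)

scores = cv.matchTemplate(haystack, template, cv.TM_CCOEFF_NORMED)

levelmask = scores >= 0.3                        # score threshold
localmax = cv.dilate(scores, None, iterations=26)
candidates = levelmask & (scores == localmax)    # non-maximum suppression

n, labels, stats, centroids = cv.connectedComponentsWithStats(
    candidates.astype(np.uint8), connectivity=8)
print(n - 1, 'found')                            # label 0 is the background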
Q: How Do For loop to make short my statement I think it would be a simple question but I really stuck on it! how can I use for loop to make my statement more complex and short? I need the output be the exactly same format this is a code courses_data = pd.read_csv('.........') selected_features = ['course_name','course_link','university_name','course_type', 'university_logo', 'time_required', 'course_language', 'course_subtitles', 'course_skills', 'course_rating', 'category', 'sub_category', 'course_level'] combined_features = courses_data['course_name']+' '+courses_data['course_link']+' '+courses_data['university_name']+' '+courses_data['course_type']+' '+courses_data['university_logo']+' '+courses_data['time_required']+' '+courses_data['course_language']+' '+courses_data['course_subtitles']+' '+courses_data['course_skills']+' '+courses_data['course_rating']+' '+courses_data['category']+' '+courses_data['sub_category']+' '+courses_data['course_level'] print(combined_features) A: Something along these lines should work: combined_features = courses_data[selected_features[0]] for feature in selected_features[1:]: combined_features += ' ' + courses_data[feature] A: Since you are dealing with a pandas dataframe you could just do df2 = courses_data[selected_features].copy() print(df2) OR df2 = courses_data.filter(selected_features, axis=1) print(df2) Source A: combined_features = "" selected_features = ['course_name','course_link','university_name','course_type', 'university_logo', 'time_required', 'course_language', 'course_subtitles', 'course_skills', 'course_rating', 'category', 'sub_category', 'course_level'] for feature in selected_features: combined_features += courses_data[feature] + " " print(combined_features)
How do I use a for loop to make my statement shorter
I think it would be a simple question, but I'm really stuck on it! How can I use a for loop to make my statement less complex and shorter? I need the output to be in exactly the same format. This is the code courses_data = pd.read_csv('.........') selected_features = ['course_name','course_link','university_name','course_type', 'university_logo', 'time_required', 'course_language', 'course_subtitles', 'course_skills', 'course_rating', 'category', 'sub_category', 'course_level'] combined_features = courses_data['course_name']+' '+courses_data['course_link']+' '+courses_data['university_name']+' '+courses_data['course_type']+' '+courses_data['university_logo']+' '+courses_data['time_required']+' '+courses_data['course_language']+' '+courses_data['course_subtitles']+' '+courses_data['course_skills']+' '+courses_data['course_rating']+' '+courses_data['category']+' '+courses_data['sub_category']+' '+courses_data['course_level'] print(combined_features)
[ "Something along these lines should work:\ncombined_features = courses_data[selected_features[0]]\nfor feature in selected_features[1:]:\n combined_features += ' ' + courses_data[feature]\n\n", "Since you are dealing with a pandas dataframe you could just do\ndf2 = courses_data[selected_features].copy()\nprint(df2)\n\nOR\ndf2 = courses_data.filter(selected_features, axis=1)\nprint(df2)\n\nSource\n", "combined_features = \"\"\nselected_features = ['course_name','course_link','university_name','course_type',\n 'university_logo', 'time_required', 'course_language',\n 'course_subtitles', 'course_skills', 'course_rating',\n 'category', 'sub_category', 'course_level']\n\nfor feature in selected_features:\n combined_features += courses_data[feature] + \" \"\n\nprint(combined_features)\n\n" ]
[ 1, 1, 1 ]
[]
[]
[ "dataframe", "for_loop", "python" ]
stackoverflow_0074635839_dataframe_for_loop_python.txt
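The loop can also be avoided entirely; pandas can join the selected columns row-wise in one expression (a sketch continuing from the question's courses_data and selected_features, with astype(str) as a guard in case any column is not already a string):

combined_features = (
    courses_data[selected_features]
    .astype(str)
    .agg(' '.join, axis=1)   # join the 13 fields of each row with spaces
)
print(combined_features)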
Q: Python lambda log format with multline output to cloudwatch I can override the default Lambda python log format like so: LOG_FORMAT = '[%(levelname)s] %(asctime)s.%(msecs)dZ [%(filename)s] [%(funcName)s] %(message)s' DATETIME_FORMAT = '%Y-%m-%dT%H:%M:%S' logging.basicConfig(format=LOG_FORMAT, level=logging.INFO, datefmt=DATETIME_FORMAT, force=True) LOG = logging.getLogger() But I've noticed that the log lines will print onto separate lines in cloudwatch based on newline characters. If I try and log a formatted XML document (to Cloudwatch) I end up, effectively, with new log lines for every line in the XML document. But prior to modifying the default format the XML document would appear as a single log line, that I could easily copy & paste out of Cloudwatch Prior to format change: [INFO] 2022-11-30T02:16:54.345Z dc2518d8-60f3-461c-812b-c70b1b836592 SNS message: <?xml version="1.0"?> <Document> ... </Document> In cloudwatch clicking on the default output displays the entire document, where it can be easily copied. After format change: [INFO] 2022-11-29T11:40:47.563Z [function.py] [handler] SNS message: 2022-11-29T11:40:47.563Z <?xml version="1.0"?> 2022-11-29T11:40:47.563Z <Document> 2022-11-29T11:40:47.563Z ... 2022-11-29T11:40:47.563Z</Document> Is the default formatter parsing the string and replacing newline characters? I know \r is treated differently to \n in CloudWatch. This is the Lamba Python runtime bootstrap code: https://github.com/aws/aws-lambda-python-runtime-interface-client/blob/main/awslambdaric/bootstrap.py A: Figured out a workaround by editing the existing formatter: LOG = logging.getLogger() LOG.setLevel(logging.INFO) log_handler = LOG.handlers[0] log_handler.setFormatter(logging.Formatter('[%(levelname)s] %(asctime)s.%(msecs)dZ [%(filename)s] [%(funcName)s] %(message)s\n'))
Python lambda log format with multline output to cloudwatch
I can override the default Lambda python log format like so: LOG_FORMAT = '[%(levelname)s] %(asctime)s.%(msecs)dZ [%(filename)s] [%(funcName)s] %(message)s' DATETIME_FORMAT = '%Y-%m-%dT%H:%M:%S' logging.basicConfig(format=LOG_FORMAT, level=logging.INFO, datefmt=DATETIME_FORMAT, force=True) LOG = logging.getLogger() But I've noticed that the log lines will print onto separate lines in cloudwatch based on newline characters. If I try and log a formatted XML document (to Cloudwatch) I end up, effectively, with new log lines for every line in the XML document. But prior to modifying the default format the XML document would appear as a single log line, that I could easily copy & paste out of Cloudwatch Prior to format change: [INFO] 2022-11-30T02:16:54.345Z dc2518d8-60f3-461c-812b-c70b1b836592 SNS message: <?xml version="1.0"?> <Document> ... </Document> In cloudwatch clicking on the default output displays the entire document, where it can be easily copied. After format change: [INFO] 2022-11-29T11:40:47.563Z [function.py] [handler] SNS message: 2022-11-29T11:40:47.563Z <?xml version="1.0"?> 2022-11-29T11:40:47.563Z <Document> 2022-11-29T11:40:47.563Z ... 2022-11-29T11:40:47.563Z</Document> Is the default formatter parsing the string and replacing newline characters? I know \r is treated differently to \n in CloudWatch. This is the Lamba Python runtime bootstrap code: https://github.com/aws/aws-lambda-python-runtime-interface-client/blob/main/awslambdaric/bootstrap.py
[ "Figured out a workaround by editing the existing formatter:\nLOG = logging.getLogger()\nLOG.setLevel(logging.INFO)\nlog_handler = LOG.handlers[0]\nlog_handler.setFormatter(logging.Formatter('[%(levelname)s] %(asctime)s.%(msecs)dZ [%(filename)s] [%(funcName)s] %(message)s\\n'))\n\n" ]
[ 0 ]
[]
[]
[ "aws_lambda", "python" ]
stackoverflow_0074633534_aws_lambda_python.txt
Q: Element wise between a 2-D numpy array and a list I'm trying to apply a function def lead(x,n): if n>0: x = np.roll(x,-n) x[-n:]=1 return x to each element of Qxx, a 2-D numpy array (121,121), BUT WITH ROLLING the "n" argument from a list [0,1,2,3,4,....121] for example and in a element wise way. the following code is working but SLOW ! xx = [[lead(qx,n) for n in range(len(qx))] for qx in Qxx] how can I do it with apply_long_axis or map or... smtg like : xx = np.apply_along_axis(lead,1,arr = Qxx,n=range(121)) thanks A: This seems to be much faster: list(map(lambda x: lead(Qxx[x], x), range(121))) Performance: The OP's solution: %%timeit [[lead(qx,n) for n in range(len(qx))] for qx in Qxx] 178 ms ± 3.32 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) My solution: %%timeit list(map(lambda x: lead(Qxx[x], x), range(121))) 1.63 ms ± 60.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) Data: Qxx = np.array(np.tile(np.arange(121), 121)).reshape((121, 121))
Element wise between a 2-D numpy array and a list
I'm trying to apply a function def lead(x,n): if n>0: x = np.roll(x,-n) x[-n:]=1 return x to each element of Qxx, a 2-D numpy array (121,121), BUT WITH ROLLING the "n" argument from a list [0,1,2,3,4,....121] for example and in an element-wise way. The following code is working but SLOW! xx = [[lead(qx,n) for n in range(len(qx))] for qx in Qxx] How can I do it with apply_along_axis or map or... something like: xx = np.apply_along_axis(lead,1,arr = Qxx,n=range(121)) Thanks
[ "This seems to be much faster:\nlist(map(lambda x: lead(Qxx[x], x), range(121)))\n\nPerformance:\nThe OP's solution:\n%%timeit\n\n[[lead(qx,n) for n in range(len(qx))] for qx in Qxx]\n\n178 ms ± 3.32 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\nMy solution:\n%%timeit\n\nlist(map(lambda x: lead(Qxx[x], x), range(121)))\n\n1.63 ms ± 60.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n\nData:\nQxx = np.array(np.tile(np.arange(121), 121)).reshape((121, 121))\n\n" ]
[ 1 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0074635544_numpy_python.txt
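The map in the answer still calls lead once per row from Python; the same per-row shift can be done in one vectorized step with fancy indexing (a sketch, assuming Qxx is square as in the question):

import numpy as np

N = len(Qxx)
n = np.arange(N)

cols = (n[None, :] + n[:, None]) % N   # row i is read with a left roll of i
xx = Qxx[n[:, None], cols]             # fancy indexing copies, like np.roll
xx[n[None, :] >= N - n[:, None]] = 1   # entries that wrapped around -> 1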
Q: How do you segment the darker spots from the blurry gray corners? I'm trying to segment the dark grayish spots from the blurry gray areas on the corner, I did binary thresholding and morphological operations and it works great from the blobs in the middle, but the corners I'm having a bit of trouble. black and white blob image # Binary Thresholding ret,threshImg = cv2.threshold(denoiseImg, 220, 255,cv2.THRESH_BINARY) threshImg = cv2.bitwise_not(threshImg) # Morphological Operation # Initialization of kernel size kernel2 = np.ones((2,2), np.uint8) kernel5 = np.ones((5,5), np.uint8) # Morphological Dilation dilationImg = cv2.dilate(threshImg, kernel2, iterations = 1) # # Morphological Closing closingImg = cv2.morphologyEx(dilationImg, cv2.MORPH_CLOSE, kernel5) # uses closing to fill gaps in the foreground closingImg = cv2.bitwise_not(closingImg) This is the result. segmented blob A: You can do division normalization in Python/OpenCV to mitigate some of that issue. You basically blur the image and then divide the image by the blurred version. Input: import cv2 import numpy as np # read the image img = cv2.imread('dark_spots.jpg') # convert to gray gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY) # blur smooth = cv2.GaussianBlur(gray, None, sigmaX=10, sigmaY=10) # divide gray by morphology image division = cv2.divide(gray, smooth, scale=255) # save results cv2.imwrite('dark_spots_division.jpg',division) # show results cv2.imshow('smooth', smooth) cv2.imshow('division', division) cv2.waitKey(0) cv2.destroyAllWindows() Results:
How do you segment the darker spots from the blurry gray corners?
I'm trying to segment the dark grayish spots from the blurry gray areas on the corner. I did binary thresholding and morphological operations and it works great for the blobs in the middle, but I'm having a bit of trouble with the corners. black and white blob image # Binary Thresholding ret,threshImg = cv2.threshold(denoiseImg, 220, 255,cv2.THRESH_BINARY) threshImg = cv2.bitwise_not(threshImg) # Morphological Operation # Initialization of kernel size kernel2 = np.ones((2,2), np.uint8) kernel5 = np.ones((5,5), np.uint8) # Morphological Dilation dilationImg = cv2.dilate(threshImg, kernel2, iterations = 1) # # Morphological Closing closingImg = cv2.morphologyEx(dilationImg, cv2.MORPH_CLOSE, kernel5) # uses closing to fill gaps in the foreground closingImg = cv2.bitwise_not(closingImg) This is the result. segmented blob
[ "You can do division normalization in Python/OpenCV to mitigate some of that issue. You basically blur the image and then divide the image by the blurred version.\nInput:\n\nimport cv2\nimport numpy as np\n\n# read the image\nimg = cv2.imread('dark_spots.jpg')\n\n# convert to gray\ngray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)\n\n# blur\nsmooth = cv2.GaussianBlur(gray, None, sigmaX=10, sigmaY=10)\n\n# divide gray by morphology image\ndivision = cv2.divide(gray, smooth, scale=255)\n\n# save results\ncv2.imwrite('dark_spots_division.jpg',division)\n\n# show results\ncv2.imshow('smooth', smooth) \ncv2.imshow('division', division) \ncv2.waitKey(0)\ncv2.destroyAllWindows()\n\nResults:\n\n" ]
[ 2 ]
[]
[]
[ "image_processing", "image_segmentation", "opencv", "python" ]
stackoverflow_0074635041_image_processing_image_segmentation_opencv_python.txt
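After the division step, the flattened background makes a global threshold viable; a possible follow-up continuing from the answer's division image, using Otsu so no manual threshold value is needed:

# dark spots become the foreground (inverted binary + Otsu threshold)
_, mask = cv2.threshold(division, 0, 255,
                        cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
cv2.imwrite('dark_spots_mask.jpg', mask)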
Q: Python: How to get the name of an Enum? A coworker who is on vacation has code that is similar to, for example: from enum import Enum class MyEnum(Enum): A = 1 B = 2 def lookup(enum_type: Enum, value: str) -> Any: try: return enum_type[value] except ValueError: # PROBLEM IS HERE enum_name = ??? raise ConfigurationError(enum_name, value) Given an Enum like this, is there any way to retrieve its name? In this case, I would like to have enum_name = 'MyEnum'. We could do some parsing if necessary, but it would be very handy to just be able to get the name of the Enum. In addition, PyCharm is giving me a warning on the lookup in: return enum_type[value] with suggestion: Ignore an unresolved reference enum.Enum.__getitem__ Any help to clean this up would be appreciated. We are using Python 3.10. Any suggestions? A: The name of the enum would be enum_type.__name__. Be aware that the square bracket look-up (i.e. enum_type[value]) is actually looking up by member name, not member value. Member value would be enum_type(value).
Python: How to get the name of an Enum?
A coworker who is on vacation has code that is similar to, for example: from enum import Enum class MyEnum(Enum): A = 1 B = 2 def lookup(enum_type: Enum, value: str) -> Any: try: return enum_type[value] except ValueError: # PROBLEM IS HERE enum_name = ??? raise ConfigurationError(enum_name, value) Given an Enum like this, is there any way to retrieve its name? In this case, I would like to have enum_name = 'MyEnum'. We could do some parsing if necessary, but it would be very handy to just be able to get the name of the Enum. In addition, PyCharm is giving me a warning on the lookup in: return enum_type[value] with suggestion: Ignore an unresolved reference enum.Enum.__getitem__ Any help to clean this up would be appreciated. We are using Python 3.10. Any suggestions?
[ "The name of the enum would be enum_type.__name__.\nBe aware that the square bracket look-up (i.e. enum_type[value]) is actually looking up by member name, not member value. Member value would be enum_type(value).\n" ]
[ 1 ]
[]
[]
[ "enums", "python", "python_3.x" ]
stackoverflow_0074635885_enums_python_python_3.x.txt
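Putting both corrections together -- note that enum name lookups raise KeyError, not ValueError, so the original except clause would never fire, and annotating the parameter as Type[Enum] rather than Enum should also clear the PyCharm warning about __getitem__. A sketch, where ConfigurationError stands in for the caller's own exception from the question:

from enum import Enum
from typing import Any, Type

class ConfigurationError(Exception):   # stand-in for the caller's class
    pass

class MyEnum(Enum):
    A = 1
    B = 2

def lookup(enum_type: Type[Enum], value: str) -> Any:
    try:
        return enum_type[value]   # by member *name*; enum_type(value) is by value
    except KeyError:
        raise ConfigurationError(enum_type.__name__, value)

print(lookup(MyEnum, 'A'))   # MyEnum.A
lookup(MyEnum, 'C')          # raises ConfigurationError('MyEnum', 'C')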
Q: Question about getting global coordinates of lidar point cloud from relative in Webots I need to do custom mapping of surroundings with lidar using mobile robot in Webots. What I use for that: GPS for getting robot position. Compass for getting direction robot. Lidar for getting info about surroundings. Maybe someone familiar with Webots and can show basic code example or explain the math behind it or there is a method that I missed in Webots? I did translation and rotation of relative points from lidar, which worked well when robot is on flat surface (2D rotation). But no matter how much I tried I can't figure out how to get accurate global coordinates from point cloud relative points, when robot is even a bit tilted (3D rotation). My guess is that it suppose to use spatial transformation matrices, but I not sure how to use Webots Compass values in rotation matrix. A: After getting some useful info in StackExchange. Basic Example of solution on Python: from scipy.spatial.transform import Rotation as Rotation RobotPoint = gps.getValues() STR = Rotation.from_quat(InertialUnit.getQuaternion()) for RelativeCloudPoint in lidar.getPointCloud(): Point2 = STR.apply(RelativeCloudPoint) GlobalCloudPoint = RelativeCloudPoint + RobotPoint Using InternalUnit to get Quaternion for spartial rotation matrix. Then apply it to relative coordinates. After that add to it real robot coordinates from GPS. In the end you will get global coordinates of points you need.
Question about getting global coordinates of lidar point cloud from relative in Webots
I need to do custom mapping of surroundings with lidar using a mobile robot in Webots. What I use for that: GPS for getting the robot position. Compass for getting the robot direction. Lidar for getting info about surroundings. Maybe someone familiar with Webots can show a basic code example or explain the math behind it, or there is a method that I missed in Webots? I did translation and rotation of relative points from lidar, which worked well when the robot is on a flat surface (2D rotation). But no matter how much I tried, I can't figure out how to get accurate global coordinates from point cloud relative points when the robot is even a bit tilted (3D rotation). My guess is that it's supposed to use spatial transformation matrices, but I'm not sure how to use the Webots Compass values in a rotation matrix.
[ "After getting some useful info in StackExchange.\nBasic Example of solution on Python:\nfrom scipy.spatial.transform import Rotation as Rotation\n\nRobotPoint = gps.getValues()\nSTR = Rotation.from_quat(InertialUnit.getQuaternion())\nfor RelativeCloudPoint in lidar.getPointCloud():\n Point2 = STR.apply(RelativeCloudPoint)\n GlobalCloudPoint = RelativeCloudPoint + RobotPoint\n\nUsing InternalUnit to get Quaternion for spartial rotation matrix. Then apply it to relative coordinates. After that add to it real robot coordinates from GPS. In the end you will get global coordinates of points you need.\n" ]
[ 0 ]
[]
[]
[ "compass", "coordinate_transformation", "lidar", "python", "webots" ]
stackoverflow_0074619579_compass_coordinate_transformation_lidar_python_webots.txt
Q: How to keep VARCHAR in the DB (MYSQL), but ENUM in the sqlalchemy model I want to add a new Int column to my MYSQL DB, so that in the sqlalchemy ORM it will be converted to an ENUM. For example, let's say I have this enum: class employee_type(Enum): Full_time = 1 Part_time = 2 Student = 3 I want to keep in the DB those params - 1,2,3..., but when developers will write code that involves this model - they will just use the Enum, without having to go through getter and setter functions. So they will be able to do - instance_of_model.employee_type and get an Enum. And - new_instance = model_name(employee_type=Employee_type.Full_time..) How should I define my sqlalchemy model so it will work? (I've heard of hybrid types but not sure it will work here) Thanks! A: Apparently the answer is super simple(!), there is nothing special we need to do - SQLAlchemy support it by itself. Meaning - you can set the specific column to be INT in the DB, but enum in the model, and when querying the DB SQLAlchemy will convert it by itself. same goes when inserting to the DB :) I used it with declarative enum, it means it's values are strings (and it has a function - from_string()). So once I used VARCHAR columns, it worked like a charm! A: Sqlalchemy checks the db if native type enum is available. If the db does not support this, the default type is VARCHAR. So you can Enum(MyEnum, native_enum=False)
How to keep VARCHAR in the DB (MYSQL), but ENUM in the sqlalchemy model
I want to add a new Int column to my MYSQL DB, so that in the sqlalchemy ORM it will be converted to an ENUM. For example, let's say I have this enum: class employee_type(Enum): Full_time = 1 Part_time = 2 Student = 3 I want to keep in the DB those params - 1,2,3..., but when developers will write code that involves this model - they will just use the Enum, without having to go through getter and setter functions. So they will be able to do - instance_of_model.employee_type and get an Enum. And - new_instance = model_name(employee_type=Employee_type.Full_time..) How should I define my sqlalchemy model so it will work? (I've heard of hybrid types but not sure it will work here) Thanks!
[ "Apparently the answer is super simple(!), there is nothing special we need to do - SQLAlchemy support it by itself.\nMeaning - you can set the specific column to be INT in the DB, but enum in the model, and when querying the DB SQLAlchemy will convert it by itself. same goes when inserting to the DB :)\nI used it with declarative enum, it means it's values are strings (and it has a function - from_string()).\nSo once I used VARCHAR columns, it worked like a charm! \n", "Sqlalchemy checks the db if native type enum is available.\nIf the db does not support this, the default type is VARCHAR.\nSo you can\nEnum(MyEnum, native_enum=False)\n\n" ]
[ 0, 0 ]
[]
[]
[ "enums", "mysql", "orm", "python", "sqlalchemy" ]
stackoverflow_0049802164_enums_mysql_orm_python_sqlalchemy.txt
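A minimal model sketch of the second answer's native_enum=False route -- the column is created as VARCHAR holding the member names, while Python code only ever sees the Enum (table and class names here are illustrative):

import enum
from sqlalchemy import Column, Integer, Enum, create_engine
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class EmployeeType(enum.Enum):
    Full_time = 1
    Part_time = 2
    Student = 3

class Employee(Base):
    __tablename__ = 'employee'
    id = Column(Integer, primary_key=True)
    # stored as VARCHAR of member names in the DB, surfaced as EmployeeType
    employee_type = Column(Enum(EmployeeType, native_enum=False))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Employee(employee_type=EmployeeType.Full_time))
    session.commit()
    print(session.query(Employee).first().employee_type)  # EmployeeType.Full_time

Keeping the integer values 1/2/3 in an INT column, as the question originally asked, would instead need a small TypeDecorator that maps the enum to its value on the way in and back on the way out.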
Q: HDF5 multidimmensional array storage I got this very simple pandas dataframe with a multidimmensional array: df_foo = pd.DataFrame({ 'Value': [[1, 2], [3, 4], [5, 6]] }) Here is what's happening when I try to store it in an hdf5 file : # Using HDFStore: h5 = HDFStore('foo.h5') h5.put('foo', df_foo, format='table', data_columns=True) #TypeError: Cannot serialize the column [Value] because its data contents are not [string] but [mixed] object dtype # Using H5py: h5 = h5py.File('foo.h5','w') h5.create_dataset('foo', data=df_foo) #TypeError: Object dtype dtype('O') has no native HDF5 equivalent I can't find here or on other forums or documentation a satisfactory response to help me. How can I store a multidimmensional array in an hdf5 file ? A: You can't store a multidimensional array in a pandas Series. So when you create your "array" in your example, you're actually creating a pandas column with object dtype, where each element is a python list. One option for storing a MultiDimensional array as HDF5 is by using xarray, another pydata project which extends the pandas concept of labeled indices, but to N dimensions. The xarray equivalent of your example goes something like this: import xarray as xr ds = xr.Dataset({ "Value": (("x", "y"), [[1, 2], [3, 4], [5, 6]]), }) ds.to_netcdf("foo.h5", engine="h5netcdf") This creates a valid HDF5 file using the NetCDF4 standard, using the h5netcdf package, which is built on h5py.
HDF5 multidimensional array storage
I got this very simple pandas dataframe with a multidimensional array: df_foo = pd.DataFrame({ 'Value': [[1, 2], [3, 4], [5, 6]] }) Here is what's happening when I try to store it in an hdf5 file: # Using HDFStore: h5 = HDFStore('foo.h5') h5.put('foo', df_foo, format='table', data_columns=True) #TypeError: Cannot serialize the column [Value] because its data contents are not [string] but [mixed] object dtype # Using H5py: h5 = h5py.File('foo.h5','w') h5.create_dataset('foo', data=df_foo) #TypeError: Object dtype dtype('O') has no native HDF5 equivalent I can't find a satisfactory answer here, on other forums, or in the documentation. How can I store a multidimensional array in an hdf5 file?
[ "You can't store a multidimensional array in a pandas Series. So when you create your \"array\" in your example, you're actually creating a pandas column with object dtype, where each element is a python list.\nOne option for storing a MultiDimensional array as HDF5 is by using xarray, another pydata project which extends the pandas concept of labeled indices, but to N dimensions.\nThe xarray equivalent of your example goes something like this:\nimport xarray as xr\n\nds = xr.Dataset({\n \"Value\": ((\"x\", \"y\"), [[1, 2], [3, 4], [5, 6]]),\n})\n\nds.to_netcdf(\"foo.h5\", engine=\"h5netcdf\")\n\nThis creates a valid HDF5 file using the NetCDF4 standard, using the h5netcdf package, which is built on h5py.\n" ]
[ 0 ]
[]
[]
[ "hdf5", "pandas", "python" ]
stackoverflow_0074635832_hdf5_pandas_python.txt
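The direct h5py route from the question also works once the object column is converted into a real 2-D array first (a sketch; this only applies when every row's list has the same length):

import h5py
import numpy as np
import pandas as pd

df_foo = pd.DataFrame({'Value': [[1, 2], [3, 4], [5, 6]]})

arr = np.array(df_foo['Value'].tolist())   # object column -> (3, 2) int array
with h5py.File('foo.h5', 'w') as h5:
    h5.create_dataset('foo', data=arr)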
Q: File indexing issue in python for this function, i need to traverse through a file and count each line based on certain signifiers. If that certain signifier is present in the line, i need to add the string as a key to the dictionary and increment its value by one each time its present. I am not outright looking for the answer, I am just lost as to what I have done wrong and where I can proceed from here. Both of the counter variables and the dictionary are returning empty. I need them to return the values based on what is present on a given file. file line example: RT @taylorswift13: Feeling like the luckiest person alive to get to take these brilliant artists out on tour w/ me: @paramore, @beabad00bee & @OwennMusic. I can’t WAIT to see you. It’s been a long time coming code: def top_retweeted(tweets_file_name, num_top_retweeted): total_tweets = 0 total_retweets = 0 retweets_users = {} f_read = open(tweets_file_name, "r") f_write = open(tweets_file_name, "w") lines = f_read.readlines() for line in lines: total_tweets =+1 elements = line.split(":") for element in elements: if "RT" in element: total_retweets =+1 user_name = element.split() retweet_users[user_name]=+1 print("There were " + str(total_tweets) + " tweets in the file, " + str(total_retweets) + " of which were retweets") return retweets_user A: f_read = open(tweets_file_name, "r") f_write = open(tweets_file_name, "w") You're opening the file for reading and then also opening it for writing, which destroys the existing contents.
File indexing issue in python
For this function, I need to traverse through a file and count each line based on certain signifiers. If that certain signifier is present in the line, I need to add the string as a key to the dictionary and increment its value by one each time it's present. I am not outright looking for the answer, I am just lost as to what I have done wrong and where I can proceed from here. Both of the counter variables and the dictionary are returning empty. I need them to return the values based on what is present in a given file. file line example: RT @taylorswift13: Feeling like the luckiest person alive to get to take these brilliant artists out on tour w/ me: @paramore, @beabad00bee & @OwennMusic. I can’t WAIT to see you. It’s been a long time coming code: def top_retweeted(tweets_file_name, num_top_retweeted): total_tweets = 0 total_retweets = 0 retweets_users = {} f_read = open(tweets_file_name, "r") f_write = open(tweets_file_name, "w") lines = f_read.readlines() for line in lines: total_tweets =+1 elements = line.split(":") for element in elements: if "RT" in element: total_retweets =+1 user_name = element.split() retweet_users[user_name]=+1 print("There were " + str(total_tweets) + " tweets in the file, " + str(total_retweets) + " of which were retweets") return retweets_user
[ "f_read = open(tweets_file_name, \"r\")\nf_write = open(tweets_file_name, \"w\")\n\nYou're opening the file for reading and then also opening it for writing, which destroys the existing contents.\n" ]
[ 0 ]
[]
[]
[ "dictionary", "file", "python", "traversal" ]
stackoverflow_0074635723_dictionary_file_python_traversal.txt
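Note: besides the destructive "w" handle called out in the answer, the question's code has two more bugs: =+1 assigns positive one instead of incrementing (it should be +=1), and element.split() returns a list, which cannot be used as a dictionary key. A minimal corrected sketch, assuming every retweet line starts with "RT @":

def top_retweeted(tweets_file_name):
    total_tweets = 0
    total_retweets = 0
    retweet_users = {}
    with open(tweets_file_name, "r") as f:  # read only; no write handle
        for line in f:
            total_tweets += 1
            if line.startswith("RT @"):
                total_retweets += 1
                user_name = line.split(":")[0][4:]  # text between "RT @" and the first ":"
                retweet_users[user_name] = retweet_users.get(user_name, 0) + 1
    print("There were " + str(total_tweets) + " tweets in the file, "
          + str(total_retweets) + " of which were retweets")
    return retweet_users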
Q: If statement requires float, terminal returns error if datatype is string I am a beginner programmer, working on a project for an online course. I am trying to build a tip calculator. I want it to take input from the user for three values: Bill total, how many are splitting the bill, and the percent they would wish to tip. My conditional statement only has one if: if meal_price >= 0.01: example(example) else: example(example) There are no elifs, only an else clause, stating to the user to enter only a numerical value. The program is designed to loop if the else clause runs, or continue if the 'if' condition is met. I would like this program to be completely user-friendly and run regardless of what is typed in. But instead of the else clause being run when a user enters a string value, the terminal returns an error. How would I check the datatype the user enters, and run my conditional statement based off of that instead of the literal user response? Note, I've tried: if isinstance(meal_price, float): Converting the user input into a string, but then the conditional statement becomes the problem Thank you all for the help. I started my coding journey about 3 months ago and I am trying to learn as much as I can. Any feedback or criticism is GREATLY appreciated. def calculation(): tip_percent = percentage / 100 tip_amount = meal_price * tip_percent meal_and_tip = tip_amount + meal_price total_to_return = meal_and_tip / to_split return total_to_return print("\nWelcome to the \"Bill Tip Calculator\"!") print("All you need to do is enter the bill, the amount of people splitting it, and the percent you would like to tip.\n") while True: print("First, what was the total for the bill?") meal_price = float(input("Bill (Numerical values only): ")) if meal_price >= 0.01: meal_price2 = str(meal_price) print("\nPerfect. The total is " + "$" + meal_price2 + ".") while True: print("\nHow many people are splitting the bill?") to_split = int(input("People: ")) if to_split >= 1: to_split2 = str(to_split) print("\nAwesome, there is", "\"" + to_split2 + "\"", "person(s) paying.") while True: print("\nWhat percent would you like to tip?") percentage = float(input("Percentage (Numerical values only, include decimals): ")) if percentage >= 0: percentage2 = str(percentage) print("\nGot it.", percentage2 + '%.') calculation() total = str(calculation()) #total2 = str(total) print("\n\nEach person pays", "$" + total + ".") exit() else: print("\nPlease enter only a numerical value. No decimals or special characters.") else: print("\nPlease respond with a numerical value greater than 0.\n") else: print("Please remember to enter only a numerical value.\n") Included image snapshot in case copy & paste isn't accurate. A: The user's input will be a string, so you need to check if the parse to the float was successful.
You can do that with a try/except, and then loop back over asking for more input: print("First, what was the total for the bill?") meal_price = None while meal_price == None: try: meal_price = float(input("Bill (Numerical values only):")) except ValueError: print("That didn't look like a number, please try again") print(meal_price) A: Adding on to @OliverRadini's answer, you use the same structure a lot for each of your inputs that could be generalized into a single function like so def get_input(prompt, datatype): value = input(prompt) try: a = datatype(value) return a except: print("Input failed, please use {:s}".format(str(datatype))) return get_input(prompt, datatype) a = get_input("Bill total: ", float) print(a) A: Perhaps the main point of confusion is that input() will always return what the user enters as a string. Therefore trying to check whether meal_price is something other than a string will always fail. Only some strings can be converted into floats - if you try on an inappropriate string, an exception (specifically, a ValueError) will be raised. So this is where you need to learn a bit about exception handling. Try opening with this block: meal_price = None while meal_price is None: try: meal_price = float(input("Bill (Numerical values only): ")) except ValueError: print("Please remember to enter only a numerical value.\n") This will try to execute your statement, but in the event it encounters a value error, you tell it not to raise the exception, but to instead print a message (and the loop restarts until they get it right, or a different kind of error that you haven't handled occurs). A: Thank you all! After looking into your comments and making the amendments, my program works perfectly! You lot rock!! def calculation(): tip_percent = tip / 100 tip_amount = bill * tip_percent meal_and_tip = tip_amount + bill total_to_return = meal_and_tip / to_split return total_to_return def user_input(prompt, datatype): value = input(prompt) try: input_to_return = datatype(value) return input_to_return except ValueError: print("Input failed, please use {:s}".format(str(datatype))) return user_input(prompt, datatype) print("\nWelcome to the \"Bill Tip Calculator\"!") print("\nAll you need to do is:\n1.) Enter your bill\n2.) Enter the amount of people (if bill is being split)\n3.) Enter the amount you would like to tip.") print("\n\n1.) What was the total for the bill?") bill = user_input("Total Bill: ", float) print("\nAwesome, the total for your meal was " + "$" + str(bill) + ".") print("\n\n2.) How many people are splitting the bill?") to_split = user_input("Number of People: ", int) print("\nSo the bill is divided", str(to_split), "way(s).") print("\n\n3.) What percent of the bill would you like to leave as a tip? (Enter a numeral value only. No special characters.)") tip = user_input("Tip: ", int) print("\nYou would like to tip", str(tip) + "%! Nice!") total = calculation() print("\n\n\n\nYour total is " + "$" + str(total), "each! Thank you for using the \"Bill Tip Calculator\"!")
If statement requires float, terminal returns error if datatype is string
I am a beginner programmer, working on a project for an online course. I am trying to build a tip calculator. I want it to take input from the user for three values: Bill total, how many are splitting the bill, and the percent they would wish to tip. My conditional statement only has one if: if meal_price >= 0.01: example(example) else: example(example) There are no elifs, only an else clause, stating to the user to enter only a numerical value. The program is designed to loop if the else clause runs, or continue if the 'if' condition is met. I would like this program to be completely user-friendly and run regardless of what is typed in. But instead of the else clause being run when a user enters a string value, the terminal returns an error. How would I check the datatype the user enters, and run my conditional statement based off of that instead of the literal user response? Note, I've tried: if isinstance(meal_price, float): Converting the user input into a string, but then the conditional statement becomes the problem Thank you all for the help. I started my coding journey about 3 months ago and I am trying to learn as much as I can. Any feedback or criticism is GREATLY appreciated. def calculation(): tip_percent = percentage / 100 tip_amount = meal_price * tip_percent meal_and_tip = tip_amount + meal_price total_to_return = meal_and_tip / to_split return total_to_return print("\nWelcome to the \"Bill Tip Calculator\"!") print("All you need to do is enter the bill, the amount of people splitting it, and the percent you would like to tip.\n") while True: print("First, what was the total for the bill?") meal_price = float(input("Bill (Numerical values only): ")) if meal_price >= 0.01: meal_price2 = str(meal_price) print("\nPerfect. The total is " + "$" + meal_price2 + ".") while True: print("\nHow many people are splitting the bill?") to_split = int(input("People: ")) if to_split >= 1: to_split2 = str(to_split) print("\nAwesome, there is", "\"" + to_split2 + "\"", "person(s) paying.") while True: print("\nWhat percent would you like to tip?") percentage = float(input("Percentage (Numerical values only, include decimals): ")) if percentage >= 0: percentage2 = str(percentage) print("\nGot it.", percentage2 + '%.') calculation() total = str(calculation()) #total2 = str(total) print("\n\nEach person pays", "$" + total + ".") exit() else: print("\nPlease enter only a numerical value. No decimals or special characters.") else: print("\nPlease respond with a numerical value greater than 0.\n") else: print("Please remember to enter only a numerical value.\n") Included image snapshot in case copy & paste isn't accurate.
[ "The user's input will be a string, so you need to check if the parse to the float was successful. You can do that with a try/except, and then loop back over asking for more input:\nprint(\"First, what was the total for the bill?\")\n\nmeal_price = None\nwhile meal_price == None:\n try:\n meal_price = float(input(\"Bill (Numerical values only):\"))\n except ValueError:\n print(\"That didn't look like a number, please try again\")\nprint(meal_price)\n\n", "Adding on to @OliverRadini's answer, you use the same structure a lot for each of your inputs that could be generalized into a single function like so\ndef get_input(prompt, datatype):\n value = input(prompt)\n try:\n a = datatype(value)\n return a\n except:\n print(\"Input failed, please use {:s}\".format(str(datatype)))\n return get_input(prompt, datatype)\n\na = get_input(\"Bill total: \", float)\nprint(a)\n\n", "Perhaps the main point of confusion is that input() will always return what the user enters as a string.\nTherefore trying to check whether meal_price is something other than a string will always fail.\nOnly some strings can be converted into floats - if you try on an inappropriate string, an exception (specifically, a ValueError) will be raised.\nSo this is where you need to learn a bit about exception handling. Try opening with this block:\nmeal_price = None\n\nwhile meal_price is None:\n\n try:\n meal_price = float(input(\"Bill (Numerical values only): \"))\n except ValueError:\n print(\"Please remember to enter only a numerical value.\\n\")\n\nThis will try to execute your statement, but in the event it encounters a value error, you tell it not to raise the exception, but to instead print a message (and the loop restarts until they get it right, or a different kind of error that you haven't handled occurs).\n", "Thank you all! After looking into your comments and making the amendments, my program works perfectly! You lot rock!!\ndef calculation():\n tip_percent = tip / 100\n tip_amount = bill * tip_percent\n meal_and_tip = tip_amount + bill\n total_to_return = meal_and_tip / to_split\n return total_to_return\n\ndef user_input(prompt, datatype):\n value = input(prompt)\n try:\n input_to_return = datatype(value)\n return input_to_return\n except ValueError:\n print(\"Input failed, please use {:s}\".format(str(datatype)))\n return user_input(prompt, datatype)\n\n\nprint(\"\\nWelcome to the \\\"Bill Tip Calculator\\\"!\")\nprint(\"\\nAll you need to do is:\\n1.) Enter your bill\\n2.) Enter the amount of \npeople (if bill is being split)\\n3.) Enter the amount you would like to \ntip.\")\nprint(\"\\n\\n1.) What was the total for the bill?\")\nbill = user_input(\"Total Bill: \", float)\nprint(\"\\nAwesome, the total for your meal was \" + \"$\" + str(bill) + \".\")\nprint(\"\\n\\n2.) How many people are splitting the bill?\")\nto_split = user_input(\"Number of People: \", int)\nprint(\"\\nSo the bill is divided\", str(to_split), \"way(s).\")\nprint(\"\\n\\n3.) What percent of the bill would you like to leave as a tip? \n(Enter a numeral value only. No special characters.)\")\ntip = user_input(\"Tip: \", int)\nprint(\"\\nYou would like to tip\", str(tip) + \"%! Nice!\")\ntotal = calculation()\nprint(\"\\n\\n\\n\\nYour total is \" + \"$\" + str(total), \"each! Thank you for using \nthe \\\"Bill Tip Calculator\\\"!\")\n\n" ]
[ 0, 0, 0, 0 ]
[]
[]
[ "if_statement", "python" ]
stackoverflow_0074634781_if_statement_python.txt
Q: Visual Studio stalling Python extension on M1 MacBook I tried to install the Microsoft Python extension, both normally and manually, but neither worked. I have installed Python 3.9.7 on my M1 MacBook. After I click "installing", the following error message appears: And in the log: Also when I tried to install manually via the VSIX file: In the log it appears: What is going on? A: There are many possible causes for XHR errors. You can refer to this article, and I think the easiest way is to restart or reinstall vscode.
Visual Studio stalling Python extension on M1 MacBook
I tried to install the Microsoft Python extension, both normally and manually, but neither worked. I have installed Python 3.9.7 on my M1 MacBook. After I click "installing", the following error message appears: And in the log: Also when I tried to install manually via the VSIX file: In the log it appears: What is going on?
[ "There are many possible causes for XHR errors. You can refer to this article, and I think the easiest way is to restart or reinstall vscode.\n" ]
[ 0 ]
[]
[]
[ "python", "visual_studio_code" ]
stackoverflow_0074630383_python_visual_studio_code.txt
Q: Plotly Dash / Python -- Interaction(s) between Dropdown, Graph and Rangeslider I've been getting into Python as a means to visualize data. I'm still very much of a novice. To practice I'm working with the gapminder dataset in Plotly Express in Jupyter Notebook. Been stuck on something I can't quite wrap my head around. I have this container for a graph: dcc.Graph(id='the_graph') I've managed to create a dcc.RangeSlider that interacts with my linegraph. It looks something like this: dcc.RangeSlider( id='the_year', min=data['year'].min(), max=data['year'].max(), value=[data['year'].min(), data['year'].max()], step=None, marks={str(x):str(x) for x in data['year'].unique()} ) So what I did here was provide the Slider with everything it needs. Next I created the dcc.Dropdown which looks something like this: dcc.Dropdown( id='the_country', placeholder='Select country/countries', options=[ {'label': x, 'value': x} for x in data['country'].sort_values().unique() ], multi=True ) Here I wanted to fill the dropdown with all the countries in my data in alphabetical order. That also worked after figuring out how to use a loop to make my life a bit easier. So far so good. But here is where it goes wrong. I'm able to adjust the graph based on the Slider, but the Dropdown does nothing and I fail to understand why. @app.callback( Output('the_graph', 'figure'), [Input('the_year', 'value'), Input('the_country', 'value')] ) def update_graph(sel_year, sel_country): dff = data[(data['year']>=sel_year[0]) & (data['year']<=sel_year[1])] graph = px.line( dff, x=dff['year'], y=dff['pop'], color=dff['country'], markers=True, ) return graph In the callback, I found out multiple inputs are an option when put into a list. So I gave it a go: The output should be put into 'the_graph', which it does. The dff line is to filter based on year selected in the Slider. If I understood correctly, a function can have more than one argument and is processed left to right. What I expected to happen was that since I'm taking the value of my Slider and the value of my Dropdown, it would 'filter' the graph based on those selections. However, only the Slider seems to work. Is my approach wrong? Am I missing something obvious here? I feel it should be possible to have multiple things (slider AND dropdown) decide what is shown in a graph. Any help / guidance would be very much appreciated. Have a great day! A: I think you should add conditions for Dropdown. Something as below: @app.callback( Output('the_graph', 'figure'), [Input('the_year', 'value'), Input('the_country', 'value')] ) def update_graph(sel_year, sel_country): dff = data[(data['year']>=sel_year[0]) & (data['year']<=sel_year[1])] if sel_country == []: dff_2 = dff.copy() elif sel_country != []: dff_2 = dff[dff['country'].isin(sel_country)] graph = px.line( dff_2, x=dff_2['year'], y=ddff_2['pop'], color=dff_2['country'], markers=True, ) return graph
Plotly Dash / Python -- Interaction(s) between Dropdown, Graph and Rangeslider
I've been getting into Python as a means to visualize data. I'm still very much of a novice. To practice I'm working with the gapminder dataset in Plotly Express in Jupyter Notebook. Been stuck on something I can't quite wrap my head around. I have this container for a graph: dcc.Graph(id='the_graph') I've managed to create a dcc.RangeSlider that interacts with my linegraph. It looks something like this: dcc.RangeSlider( id='the_year', min=data['year'].min(), max=data['year'].max(), value=[data['year'].min(), data['year'].max()], step=None, marks={str(x):str(x) for x in data['year'].unique()} ) So what I did here was provide the Slider with everything it needs. Next I created the dcc.Dropdown which looks something like this: dcc.Dropdown( id='the_country', placeholder='Select country/countries', options=[ {'label': x, 'value': x} for x in data['country'].sort_values().unique() ], multi=True ) Here I wanted to fill the dropdown with all the countries in my data in alphabetical order. That also worked after figuring out how to use a loop to make my life a bit easier. So far so good. But here is where it goes wrong. I'm able to adjust the graph based on the Slider, but the Dropdown does nothing and I fail to understand why. @app.callback( Output('the_graph', 'figure'), [Input('the_year', 'value'), Input('the_country', 'value')] ) def update_graph(sel_year, sel_country): dff = data[(data['year']>=sel_year[0]) & (data['year']<=sel_year[1])] graph = px.line( dff, x=dff['year'], y=dff['pop'], color=dff['country'], markers=True, ) return graph In the callback, I found out multiple inputs are an option when put into a list. So I gave it a go: The output should be put into 'the_graph', which it does. The dff line is to filter based on year selected in the Slider. If I understood correctly, a function can have more than one argument and is processed left to right. What I expected to happen was that since I'm taking the value of my Slider and the value of my Dropdown, it would 'filter' the graph based on those selections. However, only the Slider seems to work. Is my approach wrong? Am I missing something obvious here? I feel it should be possible to have multiple things (slider AND dropdown) decide what is shown in a graph. Any help / guidance would be very much appreciated. Have a great day!
[ "I think you should add conditions for Dropdown. Something as below:\[email protected](\n Output('the_graph', 'figure'),\n [Input('the_year', 'value'),\n Input('the_country', 'value')]\n)\n\ndef update_graph(sel_year, sel_country):\n dff = data[(data['year']>=sel_year[0]) & (data['year']<=sel_year[1])]\n if sel_country == []:\n dff_2 = dff.copy()\n elif sel_country != []:\n dff_2 = dff[dff['country'].isin(sel_country)]\n \n graph = px.line(\n dff_2,\n x=dff_2['year'],\n y=ddff_2['pop'],\n color=dff_2['country'],\n markers=True,\n )\n \n return graph\n\n" ]
[ 0 ]
[]
[]
[ "plotly_dash", "python" ]
stackoverflow_0074627025_plotly_dash_python.txt
Q: Is there a faster way of evaluating every combination of booleans in an if statement in python? If I have 4 booleans e.g if ((a(x) == True) and (b(x) == True) and (c(x) == True) and (d(x) == True) then I want to do something different for each combination including when only 3 of them are true (including which ones), 2..., then only each 1... etc... Is there a quicker way than writing a bunch of elifs? Possibly using a loop A: You could build a lookup table using a dict: lookup = {(True, True, True, True): func_1, (True, True, True, False): func_2, (True, True, False, True): func_3, ... etc. } func = lookup[a(x), b(x), c(x), d(x)] func() A: You can count the number of True booleans using arithmetic operators numTrue = (a(x) == True) + (b(x) == True) + (c(x) == True) + (d(x) == True) if numTrue==4: # foo elif numTrue==3: # bar elif numTrue==2: # ... elif numTrue==1: #... else: # ... Note that I kept your redundant structure x==True, which, from a boolean point a view is just a convoluted version of x. But at least, now, I am sure that your booleans (since I know nothing of a(x) and others) are True or False, not other truthiness are falseness. A: including when only 3 of them are true This suggests you'd like to know that N of the 4 values are true: vals = a(x), b(x), c(x), d(x) num_true = sum(map(bool, vals)) Given such a vector of booleans, there are 2 ** 4 == 16 possibilities. We can put 16 values into a dict named d and go from there. k = " ".join(map(str, map(int, map(bool, vals)))) print(d[k]) If the identities don't matter to you, then sort so we have fewer than 16 possibilities: k = " ".join(sorted(map(str, map(int, map(bool, vals))))) A: The solution with the lookup table is great but if you wanted to do something that takes less memory and around the same speed. You can use the if statement results to build on each other this way you'd only have 2^4(16) cases i.e: if (a(x)): if(b(x): if(c(x)): if(d(x)): #do x else: #do y else: if (d(x)): #do z else: #do w #...
Is there a faster way of evaluating every combination of booleans in an if statement in python?
If I have 4 booleans e.g if ((a(x) == True) and (b(x) == True) and (c(x) == True) and (d(x) == True) then I want to do something different for each combination including when only 3 of them are true (including which ones), 2..., then only each 1... etc... Is there a quicker way than writing a bunch of elifs? Possibly using a loop
[ "You could build a lookup table using a dict:\nlookup = {(True, True, True, True): func_1,\n (True, True, True, False): func_2,\n (True, True, False, True): func_3,\n ... etc.\n }\nfunc = lookup[a(x), b(x), c(x), d(x)]\nfunc()\n\n", "You can count the number of True booleans using arithmetic operators\nnumTrue = (a(x) == True) + (b(x) == True) + (c(x) == True) + (d(x) == True)\nif numTrue==4:\n # foo\nelif numTrue==3:\n # bar\nelif numTrue==2:\n # ...\nelif numTrue==1:\n #...\nelse:\n # ...\n\nNote that I kept your redundant structure x==True, which, from a boolean point a view is just a convoluted version of x. But at least, now, I am sure that your booleans (since I know nothing of a(x) and others) are True or False, not other truthiness are falseness.\n", "\nincluding when only 3 of them are true\n\nThis suggests you'd like to know that N of the 4 values are true:\nvals = a(x), b(x), c(x), d(x)\nnum_true = sum(map(bool, vals))\n\n\nGiven such a vector of booleans, there are 2 ** 4 == 16 possibilities.\nWe can put 16 values into a dict named d and go from there.\nk = \" \".join(map(str, map(int, map(bool, vals))))\nprint(d[k])\n\n\nIf the identities don't matter to you,\nthen sort so we have fewer than 16 possibilities:\nk = \" \".join(sorted(map(str, map(int, map(bool, vals)))))\n\n", "The solution with the lookup table is great but if you wanted to do something that takes less memory and around the same speed. You can use the if statement results to build on each other this way you'd only have 2^4(16) cases i.e:\nif (a(x)):\n if(b(x):\n if(c(x)):\n if(d(x)):\n #do x\n else:\n #do y\n else:\n if (d(x)):\n #do z\n else:\n #do w\n #...\n\n" ]
[ 2, 0, 0, 0 ]
[]
[]
[ "boolean_expression", "if_statement", "python" ]
stackoverflow_0074635930_boolean_expression_if_statement_python.txt
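Note: a hedged variant of the lookup-table answer that supplies a default, so an unlisted combination cannot raise a KeyError — all_true, none_true, and fallback are placeholder functions for illustration only:

flags = (bool(a(x)), bool(b(x)), bool(c(x)), bool(d(x)))
actions = {
    (True, True, True, True): all_true,
    (False, False, False, False): none_true,
    # ...one entry per combination you want to handle specially
}
actions.get(flags, fallback)()  # fallback covers every combination not listed above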
Q: Section postgresql not found in the database.ini file I'm trying to create tables in my database (postgresql 9.6) and when I launch my python script to do so, it returns me an error of the following type: "Section postgresql not found in the $FILEDIR/database.ini file" It seems like the parser cannot read the section, but I don't understand why. This is my config method: def config(filename='$FILEDIR/database.ini', section='postgresql'): parser = ConfigParser() parser.read(filename) db = {} if parser.has_section(section): params = parser.items(section) for param in params: db[param[0]] = param[1] else: raise Exception('Section {0} not found in the {1} file'.format(section, filename)) return db Database.ini: [postgresql] host=localhost database=mydatabase user=myuser password=mypassword I've tried the answers in this following thread but it does not help me at all. Anyone knows the cause? I'm using python 2.7 and I've executed "pip install config" and "pip install configparser" for dependencies. A: I had the same issue aswell, it was cured by placing the whole file path into the kwarg in config: def config(filename='/Users/gramb0t/Desktop/python-postgre/data/database.ini', section='postgresql'): A: #just remove the $FILEDIR. it worked for me. from configparser import ConfigParser def config(filename="database.ini", section="postgresql"): # create a parser parser = ConfigParser() # read config file parser.read(filename) # get section, default to postgresql db = {} if parser.has_section(section): params = parser.items(section) for param in params: db[param[0]] = param[1] else: raise Exception( "Section {0} not found in the {1} file".format(section, filename) ) return db A: the problem in path for solve this you can use os to get dir to database.ini as following example import os BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) params = config(filename=BASE_DIR+'\memory\database.ini', section='postgresql_pc__server') A: # The thing is, you made a mistake. You don’t need to add or invent anything, just do this: # def config(filename=r'/database.ini', section='postgresql'): parser = ConfigParser() parser.read(filename) # получить раздел, по умолчанию postgresql db = {} if parser.has_section(section): print(section) params = parser.items(section) for param in params: db[param[0]] = param[1] else: raise Exception("Раздел {0} не найдено в {1} файл".format(section, filename)) return db ## but if you are in the main directory then (r, '/') you most likely need to remove and leave everything as in the documentation ##
Section postgresql not found in the database.ini file
I'm trying to create tables in my database (postgresql 9.6) and when I launch my python script to do so, it returns an error of the following type: "Section postgresql not found in the $FILEDIR/database.ini file" It seems like the parser cannot read the section, but I don't understand why. This is my config method: def config(filename='$FILEDIR/database.ini', section='postgresql'): parser = ConfigParser() parser.read(filename) db = {} if parser.has_section(section): params = parser.items(section) for param in params: db[param[0]] = param[1] else: raise Exception('Section {0} not found in the {1} file'.format(section, filename)) return db Database.ini: [postgresql] host=localhost database=mydatabase user=myuser password=mypassword I've tried the answers in the following thread but it does not help me at all. Does anyone know the cause? I'm using python 2.7 and I've executed "pip install config" and "pip install configparser" for dependencies.
[ "I had the same issue aswell, it was cured by placing the whole file path into the kwarg in config:\ndef config(filename='/Users/gramb0t/Desktop/python-postgre/data/database.ini', section='postgresql'):\n", "#just remove the $FILEDIR. it worked for me.\n\nfrom configparser import ConfigParser\n\ndef config(filename=\"database.ini\", section=\"postgresql\"):\n # create a parser\n parser = ConfigParser()\n # read config file\n parser.read(filename)\n\n # get section, default to postgresql\n db = {}\n if parser.has_section(section):\n params = parser.items(section)\n for param in params:\n db[param[0]] = param[1]\n else:\n raise Exception(\n \"Section {0} not found in the {1} file\".format(section, filename)\n )\n\n return db\n\n", "the problem in path\nfor solve this you can use os to get dir to database.ini as following example\nimport os\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\nparams = config(filename=BASE_DIR+'\\memory\\database.ini', section='postgresql_pc__server')\n\n", "# The thing is, you made a mistake. You don’t need to add or invent anything, just do this: #\n\ndef config(filename=r'/database.ini', section='postgresql'):\n parser = ConfigParser()\n parser.read(filename)\n # получить раздел, по умолчанию postgresql\n db = {}\n\n if parser.has_section(section):\n print(section)\n params = parser.items(section)\n\n for param in params:\n db[param[0]] = param[1]\n else:\n raise Exception(\"Раздел {0} не найдено в {1} файл\".format(section, filename))\n return db\n \n## but if you are in the main directory then (r, '/') you most likely need to remove and leave everything as in the documentation ##\n\n" ]
[ 7, 3, 2, 0 ]
[]
[]
[ "configparser", "python", "python_2.7" ]
stackoverflow_0049406058_configparser_python_python_2.7.txt
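Note: the root cause of the empty-section error is usually that parser.read() resolves a relative filename against the current working directory, not the script's directory. A minimal sketch that builds the path from the script's own location (assuming Python 3; on the question's Python 2.7 the import would be from ConfigParser instead):

import os
from configparser import ConfigParser

def config(section='postgresql'):
    ini_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'database.ini')
    parser = ConfigParser()
    if not parser.read(ini_path):  # read() returns the list of files it actually parsed
        raise IOError('{0} could not be read'.format(ini_path))
    if not parser.has_section(section):
        raise Exception('Section {0} not found in the {1} file'.format(section, ini_path))
    return dict(parser.items(section))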
Q: How to write one array value at a time (dataframe to csv)? This is working great, but I have thousands of rows to write to csv. It takes hours to finish and sometimes my connection will drop and prevent the query from finishing. import pandas as pd from yahooquery import Ticker symbols = ['AAPL','GOOG','MSFT'] faang = Ticker(symbols) faang.summary_detail df = pd.DataFrame(faang.summary_detail).T df.to_csv('output.csv', mode='a', index=True, header=True) Above is only three symbols: symbols = ['AAPL','GOOG','MSFT'], but imagine there are 50,000 symbols. What I am currently doing is breaking it down into 500 symbols at a time: import pandas as pd from yahooquery import Ticker symbols = ['AAPL','GOOG','MSFT'] #imagine here are 500 symbols. faang = Ticker(symbols) faang.summary_detail df = pd.DataFrame(faang.summary_detail).T df.to_csv('summary_detailsample.csv', mode='a', index=True, header=True) symbols = ['BABA','AMD','NVDA'] #imagine here are 500 symbols. faang = Ticker(symbols) faang.summary_detail df = pd.DataFrame(faang.summary_detail).T df.to_csv('output.csv', mode='a', index=True, header=True) #Repeat the last five lines 100+ times for 50,000 symbols (500 symbols x 100 blocks of code). So the last five lines of code I copy 100+ times to append/write all the symbols' data. It works great, but I would like to not have 500 lines of code. I would like it to append a record one symbol at a time and throw all the 50,000 symbols in there one time (not have to copy code over and over). Perhaps most importantly I would like the first symbol's column headers to be followed by the rest of the symbols. Some of the symbols will have 20 columns and others will have 15 or so. The data ends up not matching. The rows won't match other rows, etc. A: Try the below, looping through the list of tickers and appending each dataframe to the CSV as you loop. import pandas as pd from yahooquery import Ticker symbols = []  # all of your symbols here for tick in symbols: faang = Ticker(tick) faang.summary_detail df = pd.DataFrame(faang.summary_detail).T df.to_csv('summary_detailsample.csv', mode='a', index=True, header=False)
How to write one array value at a time (dataframe to csv)?
This is working great, but I have thousands of rows to write to csv. It takes hours to finish and sometimes my connection will drop and prevent the query from finishing. import pandas as pd from yahooquery import Ticker symbols = ['AAPL','GOOG','MSFT'] faang = Ticker(symbols) faang.summary_detail df = pd.DataFrame(faang.summary_detail).T df.to_csv('output.csv', mode='a', index=True, header=True) Above is only three symbols: symbols = ['AAPL','GOOG','MSFT'], but imagine there are 50,000 symbols. What I am currently doing is breaking it down into 500 symbols at a time: import pandas as pd from yahooquery import Ticker symbols = ['AAPL','GOOG','MSFT'] #imagine here are 500 symbols. faang = Ticker(symbols) faang.summary_detail df = pd.DataFrame(faang.summary_detail).T df.to_csv('summary_detailsample.csv', mode='a', index=True, header=True) symbols = ['BABA','AMD','NVDA'] #imagine here are 500 symbols. faang = Ticker(symbols) faang.summary_detail df = pd.DataFrame(faang.summary_detail).T df.to_csv('output.csv', mode='a', index=True, header=True) #Repeat the last five lines 100+ times for 50,000 symbols (500 symbols x 100 blocks of code). So the last five lines of code I copy 100+ times to append/write all the symbols' data. It works great, but I would like to not have 500 lines of code. I would like it to append a record one symbol at a time and throw all the 50,000 symbols in there one time (not have to copy code over and over). Perhaps most importantly I would like the first symbol's column headers to be followed by the rest of the symbols. Some of the symbols will have 20 columns and others will have 15 or so. The data ends up not matching. The rows won't match other rows, etc.
[ "Try the below looping through the list of tickers, appending the dataframes as you loop onto the CSV.\nimport pandas as pd\nfrom yahooquery import Ticker\n\n\nsymbols = [#All Of Your Symbols Here]\nfor tick in symbols:\n faang = Ticker(tick)\n faang.summary_detail\n df = pd.DataFrame(faang.summary_detail).T\n \n df.to_csv('summary_detailsample.csv', mode='a', index=True, header=False)\n\n" ]
[ 2 ]
[]
[]
[ "arrays", "csv", "dataframe", "pandas", "python" ]
stackoverflow_0074635011_arrays_csv_dataframe_pandas_python.txt
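Note: a sketch combining two refinements — requesting symbols in batches (Ticker accepts a list, as the question itself shows, so one request per 500 symbols means far fewer round-trips than one per ticker) and writing the CSV header only on the first batch. Column mismatches across batches can still occur when symbols return different fields:

import pandas as pd
from yahooquery import Ticker

batch_size = 500
for i in range(0, len(symbols), batch_size):
    batch = symbols[i:i + batch_size]
    df = pd.DataFrame(Ticker(batch).summary_detail).T
    df.to_csv('output.csv', mode='a', index=True, header=(i == 0))  # header once, then append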
Q: Add a "|" symbol while grouping a data frame by multiple columns with python pandas I am just starting learning pandas package and have been asked to group a data frame by multiple columns ('BRANCH_NO', 'CUSTOMER_NO') since the combination of them forms a unique value in the data frame while adding a "|" symbol between the values in other columns that have the same combination of 'BRANCH_NO' and 'CUSTOMER_NO'. I wonder if I should use for loop to loop through each column to achieve this. Thanks. person_in_charge_raw = pd.DataFrame({'BRANCH_NO':['123','123','123','123','124','124'], 'CUSTOMER_NO':['001', '001', '001', '001','001','001'], 'DEPARTMENT_NO':['A01','B01','C01','D01', 'A01','B01'], 'STAFF_ID':['S001','S002','S003', 'S004', 'S001', 'S002']}) final_result = pd.DataFrame({'BRANCH_NO':['123', '124'], 'CUSTOMER_NO':['001', '001'], 'DEPARMENT_NO':['A01 | B01 | C01 | D01', 'A01 | B01'], 'STAFF_ID':['S001 | S002 | S003 | S004', 'S001 | S002']}) Here is the screenshot of the desired result. A: g = person_in_charge_raw.groupby(['BRANCH_NO', 'CUSTOMER_NO']) g.agg('|'.join).reset_index()
Add a "|" symbol while grouping a data frame by multiple columns with python pandas
I am just starting to learn the pandas package and have been asked to group a data frame by multiple columns ('BRANCH_NO', 'CUSTOMER_NO') since the combination of them forms a unique value in the data frame, while adding a "|" symbol between the values in other columns that have the same combination of 'BRANCH_NO' and 'CUSTOMER_NO'. I wonder if I should use a for loop to loop through each column to achieve this. Thanks. person_in_charge_raw = pd.DataFrame({'BRANCH_NO':['123','123','123','123','124','124'], 'CUSTOMER_NO':['001', '001', '001', '001','001','001'], 'DEPARTMENT_NO':['A01','B01','C01','D01', 'A01','B01'], 'STAFF_ID':['S001','S002','S003', 'S004', 'S001', 'S002']}) final_result = pd.DataFrame({'BRANCH_NO':['123', '124'], 'CUSTOMER_NO':['001', '001'], 'DEPARTMENT_NO':['A01 | B01 | C01 | D01', 'A01 | B01'], 'STAFF_ID':['S001 | S002 | S003 | S004', 'S001 | S002']}) Here is the screenshot of the desired result.
[ "g = person_in_charge_raw.groupby(['BRANCH_NO', 'CUSTOMER_NO'])\ng.agg('|'.join).reset_index()\n\n" ]
[ 2 ]
[]
[]
[ "pandas", "pipe", "python" ]
stackoverflow_0074636052_pandas_pipe_python.txt
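Note: the expected output in the question uses " | " with surrounding spaces, and as_index=False keeps BRANCH_NO and CUSTOMER_NO as ordinary columns rather than an index — a minimal sketch:

final_result = (person_in_charge_raw
                .groupby(['BRANCH_NO', 'CUSTOMER_NO'], as_index=False)
                .agg(' | '.join))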
Q: Scikit learn not importing in vscode from sklearn.metrics.pairwise import cosine_similarity I have tried "pip install scikit-learn" and "pip install sklearn" so many times. It is showing reportMissingImports error A: Do you have multiple python environments on your machine? Make sure you are using the one you have sklearn installed on. You can use the following code to check the interpreter you are using, and then use the obtained path to install the sklearn package for the current environment. import sys print(sys.executable) Or select the interpreter that has installed the sklearn package in the Select Interpreter panel(Ctrl+Shift+P --> Python:Select Interpreter), and then create a new terminal activation environment.
Scikit learn not importing in vscode
from sklearn.metrics.pairwise import cosine_similarity I have tried "pip install scikit-learn" and "pip install sklearn" so many times. It is showing reportMissingImports error
[ "Do you have multiple python environments on your machine? Make sure you are using the one you have sklearn installed on.\nYou can use the following code to check the interpreter you are using, and then use the obtained path to install the sklearn package for the current environment.\nimport sys\nprint(sys.executable)\n\n\nOr select the interpreter that has installed the sklearn package in the Select Interpreter panel(Ctrl+Shift+P --> Python:Select Interpreter), and then create a new terminal activation environment.\n\n" ]
[ 0 ]
[]
[]
[ "python", "scikit_learn", "visual_studio_code" ]
stackoverflow_0074627459_python_scikit_learn_visual_studio_code.txt
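Note: once sys.executable has identified the interpreter VS Code is using, installing with that same interpreter guarantees the package lands in the matching environment:

python -m pip install scikit-learn

Run this in the VS Code terminal after selecting the interpreter (or replace python with the full path that sys.executable printed).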
Q: await Faust Agent ask() never receive from yield generator Hi I am trying to integrate faust with fastapi endpoints following this example: toh995/fastapi-faust-example I am working with a simple DummyOrder model class DummyOrder(faust.Record,MyAvroModel,serializer='avro_order_codec'): order_id: str amount: int I have a faust agent that yields balance @app.agent(test_topic) async def table_balance(orders: faust.Stream): async for order in orders.group_by(DummyOrder.order_id): print (f'order id: {order.order_id} has balance of {balance_order_table[order.order_id]}') yield balance_order_table[order.order_id] For fastapi, I have @fastapi_app.on_event("startup") async def startup(): #set up the faust app worker.faust_app_for_api('faust') faust_app = worker.get_faust_app() print('starting client') #start the faust app in client mode asyncio.create_task( faust_app.start_client() ) print('Client Created') @fastapi_app.get("/") async def entrypoint(): from order.infrastructure.faust_app.tasks import order_balance print("getting balance") balance = await order_balance.table_balance.ask(DummyOrder(order_id='AB001', amount=0)) print(balance) return balance if __name__ == '__main__': uvicorn.run("fast_api_app:fastapi_app", host="0.0.0.0", port=3000) Then I ran both the faust worker and fastapi, with the following faust.App configuration for the main faust worker app = faust.App( id=faust_app_id, broker=[ f'kafka://{self.bootstrap_server}' ], broker_credentials=faust.SASLCredentials( username=self.username, password=self.password, ssl_context=self.ssl_settings ), autodiscover=True, origin="order.infrastructure.faust_app", #mandatory if autodiscover is enabled value_serializer='raw', ##need to set to 3 in order for faust to work. it will create a new topic ## <faust-id>-__assignor-__leader topic topic_replication_factor=3, topic_disable_leader=False, topic_allow_declare = True, ) for fastapi, I have the following configuration. I include a loop argument that looks for the current event loop by using asyncio.get_running_loop() app_api = faust.App( id=faust_app_id, broker=[ f'kafka://{self.bootstrap_server}' ], broker_credentials=faust.SASLCredentials( username=self.username, password=self.password, ssl_context=self.ssl_settings ), autodiscover=True, origin="order.infrastructure.faust_app", #mandatory if autodiscover is enabled loop=asyncio.get_running_loop(), value_serializer='raw', reply_to="faust_reply_topic" ) The problem is when the entrypoint() is triggered by hitting the root url of fastapi, the process sends out the message to the worker without any issue. The worker console log shows the agent stream is being triggered and executed without any problem [2022-04-15 09:31:24,975] [53402] [WARNING] order id: AB001 has balance of 0 Then the whole app just hangs on here. fastapi never receives anything from awaiting the agent that is supposed to yield balance_order_table[order.order_id]. I am working on this project with confluent cloud + self-hosted kafka cluster and both seemed to display the same behaviour. A: This is caused by faust not waiting for agent/table initialisation in client-only mode. Just replace FastAPI's app startup handler with something like this: @app.on_event("startup") async def startup(): # set up the faust app faust_app = worker.set_faust_app_for_api() await faust_app.start_client() await asyncio.sleep(5.0) # wait for agents and table to initialise await faust_app.topics.on_client_only_start() # resubscribe to topics Please note that in case you're using the faust-streaming fork please make sure to use a version newer than 5th of October 2022 as it had another code issue preventing this code from working.
Just replace FastAPI's app startup handler with something like that: @app.on_event("startup") async def startup(): # set up the faust app faust_app = worker.set_faust_app_for_api() await faust_app.start_client() await asyncio.sleep(5.0) # wait for agents and table to initialise await faust_app.topics.on_client_only_start() # resubscribe to topics Please note that in case you're using faust-streaming fork please make sure to use a version newer than 5th of October 2022 as it had another code issue preventing this code to work.
await Faust Agent ask() never receive from yield generator
Hi I am trying to integrate faust with fastapi endpoints following this example: toh995/fastapi-faust-example I am working with a simple DummyOrder model class DummyOrder(faust.Record,MyAvroModel,serializer='avro_order_codec'): order_id: str amount: int I have a faust agent that yields balance @app.agent(test_topic) async def table_balance(orders: faust.Stream): async for order in orders.group_by(DummyOrder.order_id): print (f'order id: {order.order_id} has balance of {balance_order_table[order.order_id]}') yield balance_order_table[order.order_id] For fastapi, I have @fastapi_app.on_event("startup") async def startup(): #set up the faust app worker.faust_app_for_api('faust') faust_app = worker.get_faust_app() print('starting client') #start the faust app in client mode asyncio.create_task( faust_app.start_client() ) print('Client Created') @fastapi_app.get("/") async def entrypoint(): from order.infrastructure.faust_app.tasks import order_balance print("getting balance") balance = await order_balance.table_balance.ask(DummyOrder(order_id='AB001', amount=0)) print(balance) return balance if __name__ == '__main__': uvicorn.run("fast_api_app:fastapi_app", host="0.0.0.0", port=3000) Then I ran both the faust worker and fastapi, with the following faust.App configuration for the main faust worker app = faust.App( id=faust_app_id, broker=[ f'kafka://{self.bootstrap_server}' ], broker_credentials=faust.SASLCredentials( username=self.username, password=self.password, ssl_context=self.ssl_settings ), autodiscover=True, origin="order.infrastructure.faust_app", #mandatory if autodiscover is enabled value_serializer='raw', ##need to set to 3 in order for faust to work. it will create a new topic ## <faust-id>-__assignor-__leader topic topic_replication_factor=3, topic_disable_leader=False, topic_allow_declare = True, ) for fastapi, I have the following configuration. I include a loop argument that looks for the current event loop by using asyncio.get_running_loop() app_api = faust.App( id=faust_app_id, broker=[ f'kafka://{self.bootstrap_server}' ], broker_credentials=faust.SASLCredentials( username=self.username, password=self.password, ssl_context=self.ssl_settings ), autodiscover=True, origin="order.infrastructure.faust_app", #mandatory if autodiscover is enabled loop=asyncio.get_running_loop(), value_serializer='raw', reply_to="faust_reply_topic" ) The problem is when the entrypoint() is triggered by hitting the root url of fastapi, the process sends out the message to the worker without any issue. The worker console log shows the agent stream is being triggered and executed without any problem [2022-04-15 09:31:24,975] [53402] [WARNING] order id: AB001 has balance of 0 Then the whole app just hangs on here. fastapi never receives anything from awaiting the agent that is supposed to yield balance_order_table[order.order_id]. I am working on this project with confluent cloud + self-hosted kafka cluster and both seemed to display the same behaviour.
[ "This is caused by faust not waiting for agent/table initialisation in client-only mode. Just replace FastAPI's app startup handler with something like that:\[email protected]_event(\"startup\")\nasync def startup():\n # set up the faust app\n faust_app = worker.set_faust_app_for_api()\n await faust_app.start_client()\n await asyncio.sleep(5.0) # wait for agents and table to initialise \n await faust_app.topics.on_client_only_start() # resubscribe to topics\n\nPlease note that in case you're using faust-streaming fork please make sure to use a version newer than 5th of October 2022 as it had another code issue preventing this code to work.\n" ]
[ 0 ]
[]
[]
[ "apache_kafka_streams", "async_await", "fastapi", "faust", "python" ]
stackoverflow_0071879233_apache_kafka_streams_async_await_fastapi_faust_python.txt
Q: When loading data from a .txt file, why does one of my columns in a MySQL database format a date value properly and another column does not? I have a .txt file where the fields are terminated by pipes. The file is in an S3 bucket and I have written a script to load the data from the file to a MySQL database. I have just about everything working properly, but I have come across a problem that I am stuck on. The issue is in formatting date values. The strange thing is I have two columns that are both the same date format which is: DD-Mmm-YY (01-Jan-96), and originally they were both being loaded to the database as 0000-00-00. I have been successful in formatting one of the columns, but I can't seem to properly format the second one. In the CREATE TABLE statement they are both DATE values with DEFAULT NULL. So, both columns have been created the same and both columns are in the same format in my .txt file. When listing my column names in my LOAD DATA LOCAL INFILE statement I have used variables for both, and the formatting of the dates is being done in a SET statement. This is what I have tried, as well as a lot of variations of this. import mysql.connector conn = mysql.connector.connect( user='username', password='pw', host='hostname', database='db', allow_local_infile=True ) cursor = conn.cursor() sql = """LOAD DATA LOCAL INFILE '/myfile.txt' INTO TABLE tablename FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' IGNORE 1 LINES (column1, column2, @s_date_value, column4, @o_date_value, column5) SET S_DATE = date_format(str_to_date(@s_date_value, '%d-%b-%y'), '%d-%b-%y'), O_DATE = date_format(str_to_date(@o_date_value, '%d-%b-%y'), '%d-%b-%y');""" cursor.execute(sql) conn.commit() conn.close() So, the S_DATE is loading correctly and the O_DATE written like this will load as 0000-00-00. If I write the line starting with O_DATE without the date_format I am able to get the actual dates loaded in the format of YYYY-MM-DD, like so: O_DATE = str_to_date(@o_date_value, '%d-%b-%y');""" If I write it with just date_format I get all NULL values, like so: O_DATE = date_format(@o_date_value, '%d-%b-%y');""" One thing I do not understand is why I even need str_to_date since they were created as DATE values in the first place. But, the combination of date_format and str_to_date is working for the S_DATE. I also considered the issue having something to do with listing more than one query in the SET statement, but I seemed to find through my research that it was acceptable, and I tried to just have the O_DATE in the SET statement without the S_DATE and I got the same results. I have also tried combining them in an UPDATE statement and executing it with cursor.execute(). Another solution I tried was to write the SET statement using regex, but that was unsuccessful as well. This is my first post on Stack Overflow so please let me know if additional info is needed. If anybody could please offer some help I would greatly appreciate it! A: I was able to solve it by creating an ALTER statement and first making that column TEXT then combining date_format and str_to_date like I did on S_DATE. Still not really sure why I had to do that for one column and not the other.
When loading data from a .txt file, why does one of my columns in a MySQL database format a date value properly and another column does not?
I have a .txt file where the fields are terminated by pipes. The file is in an S3 bucket and I have written a script to load the data from the file to a MySQL database. I have just about everything working properly, but I have come across a problem that I am stuck on. The issue is in formatting date values. The strange thing is I have two columns that are both the same date format which is: DD-Mmm-YY (01-Jan-96), and originally they were both being loaded to the database as 0000-00-00. I have been successful in formatting one of the columns, but I can't seem to properly format the second one. In the CREATE TABLE statement they are both DATE values with DEFAULT NULL. So, both columns have been created the same and both columns are in the same format in my .txt file. When listing my column names in my LOAD DATA LOCAL INFILE statement I have used variables for both, and the formatting of the dates is being done in a SET statement. This is what I have tried, as well as a lot of variations of this. import mysql.connector conn = mysql.connector.connect( user='username', password='pw', host='hostname', database='db', allow_local_infile=True ) cursor = conn.cursor() sql = """LOAD DATA LOCAL INFILE '/myfile.txt' INTO TABLE tablename FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' IGNORE 1 LINES (column1, column2, @s_date_value, column4, @o_date_value, column5) SET S_DATE = date_format(str_to_date(@s_date_value, '%d-%b-%y'), '%d-%b-%y'), O_DATE = date_format(str_to_date(@o_date_value, '%d-%b-%y'), '%d-%b-%y');""" cursor.execute(sql) conn.commit() conn.close() So, the S_DATE is loading correctly and the O_DATE written like this will load as 0000-00-00. If I write the line starting with O_DATE without the date_format I am able to get the actual dates loaded in the format of YYYY-MM-DD, like so: O_DATE = str_to_date(@o_date_value, '%d-%b-%y');""" If I write it with just date_format I get all NULL values, like so: O_DATE = date_format(@o_date_value, '%d-%b-%y');""" One thing I do not understand is why I even need str_to_date since they were created as DATE values in the first place. But, the combination of date_format and str_to_date is working for the S_DATE. I also considered the issue having something to do with listing more than one query in the SET statement, but I seemed to find through my research that it was acceptable, and I tried to just have the O_DATE in the SET statement without the S_DATE and I got the same results. I have also tried combining them in an UPDATE statement and executing it with cursor.execute(). Another solution I tried was to write the SET statement using regex, but that was unsuccessful as well. This is my first post on Stack Overflow so please let me know if additional info is needed. If anybody could please offer some help I would greatly appreciate it!
[ "I was able to solve it by creating an ALTER statement and first making that column TEXT then combining date_format and str_to_date like I did on S_DATE. Still not really sure why I had to do that for one column and not the other.\n" ]
[ 0 ]
[]
[]
[ "date_format", "load_data_infile", "mysql", "python", "str_to_date" ]
stackoverflow_0074634670_date_format_load_data_infile_mysql_python_str_to_date.txt
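Note: str_to_date() alone already yields a proper DATE value, whereas wrapping it in date_format() produces a display string like '01-Jan-96' that a DATE column cannot reliably ingest — a plausible reason for the 0000-00-00 values. A minimal sketch of the simpler SET clause, under that assumption:

sql = """LOAD DATA LOCAL INFILE '/myfile.txt' INTO TABLE tablename
    FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' IGNORE 1 LINES
    (column1, column2, @s_date_value, column4, @o_date_value, column5)
    SET S_DATE = str_to_date(@s_date_value, '%d-%b-%y'),
        O_DATE = str_to_date(@o_date_value, '%d-%b-%y');"""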
Q: I want to fetch data from a website and put in MySQL workbench, but it's not working First-time programmer here, please don't be harsh on me. I want to fetch data from the URLs and put it inside a MySQL Workbench database. It says that it's working, but it's not doing so. What is wrong with the script? # GET ALL WorldRecords from https://api.isuresults.eu/records import requests import pandas as pd from pandas.io.json import json_normalize from helper_db import make_db_connection engine = make_db_connection def get_isu_worldrecord_db(engine): URL = "https://api.isuresults.eu/records/?type=WR" df_final=pd.DataFrame() for i in range(1,20): params = {'page': i} api = requests.get(url=URL, params=params) data = api.json() df = json_normalize(data,'results') df_final=df_final.append(df,ignore_index=True,sort=False) df_final=df_final.drop(['laps'], axis=1) df_final.to_sql("Tester", con=engine,if_exists="replace", chunksize=1000) return A: You define this method, but you don't actually run it. Add another line at the end: get_isu_worldrecord_db(engine)
I want to fetch data from a website and put in MySQL workbench, but it's not working
First-time programmer here, please don't be harsh on me. I want to fetch data from the URLs and put it inside a MySQL Workbench database. It says that it's working, but it's not doing so. What is wrong with the script? # GET ALL WorldRecords from https://api.isuresults.eu/records import requests import pandas as pd from pandas.io.json import json_normalize from helper_db import make_db_connection engine = make_db_connection def get_isu_worldrecord_db(engine): URL = "https://api.isuresults.eu/records/?type=WR" df_final=pd.DataFrame() for i in range(1,20): params = {'page': i} api = requests.get(url=URL, params=params) data = api.json() df = json_normalize(data,'results') df_final=df_final.append(df,ignore_index=True,sort=False) df_final=df_final.drop(['laps'], axis=1) df_final.to_sql("Tester", con=engine,if_exists="replace", chunksize=1000) return
[ "You define this method, but you don't really run it.\nAdd another line at the last:\nget_isu_worldrecord_db(engine)\n\n" ]
[ 1 ]
[]
[]
[ "mysql_workbench", "python", "visual_studio_code" ]
stackoverflow_0074632136_mysql_workbench_python_visual_studio_code.txt
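Note: there is a second likely bug in the question's script — engine = make_db_connection assigns the function object itself instead of calling it. Assuming make_db_connection() returns a SQLAlchemy engine, the end of the script should read:

engine = make_db_connection()
get_isu_worldrecord_db(engine)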
Q: how to remove duplicate entries from json file using python? How to remove duplicate entries from a JSON file using python? I have a JSON file that looks like this: appreciate some one can help to provide a solution for fixing it json_data = [ { "authType": "ldap", "password": "", "permissions": [ { "collections": [ "aks9099", "aks9099", "aks9098", "aks9100", "aks9100", "aks9101", "aks9102", "aks9103", "aks9103" ], "project": "Central Project" } ], "role": "devSecOps", "username": "[email protected]" }, { "authType": "ldap", "password": "", "permissions": [ { "collections": [ "aks9099", "aks9098", "aks9098", "aks9100", "aks9101", "aks9102", "aks9102", "aks9103" ], "project": "Central Project" } ], "role": "devSecOps", "username": "[email protected]" }, { "authType": "ldap", "password": "", "permissions": [ { "collections": [ "aks9099", "aks9098", "aks9100", "aks9100", "aks9101", "aks9102", "aks9102", "aks9103" ], "project": "Central Project" } ], "role": "devSecOps", "username": "[email protected]" } ] I would like to remove duplicate entries from the list and expected result should be looks like this: Appreciate you can help to provide a solution for fixing it json_data = [ { "authType": "ldap", "password": "", "permissions": [ { "collections": [ "aks9099", "aks9098", "aks9100", "aks9101", "aks9102", "aks9103" ], "project": "Central Project" } ], "role": "devSecOps", "username": "[email protected]" }, { "authType": "ldap", "password": "", "permissions": [ { "collections": [ "aks9099", "aks9098", "aks9100", "aks9101", "aks9102", "aks9103" ], "project": "Central Project" } ], "role": "devSecOps", "username": "[email protected]" }, { "authType": "ldap", "password": "", "permissions": [ { "collections": [ "aks9099", "aks9098", "aks9100", "aks9101", "aks9102", "aks9103" ], "project": "Central Project" } ], "role": "devSecOps", "username": "[email protected]" } ] A: Does the following solve your problem? new_list=[] for i in json_data: if not i in new_list: new_list.append(i) print(new_list) A: Even though OP asked to do this in Python, this can readily be done in jq using function unique with a single update assignment: $ jq '.[].permissions[].collections |= unique' json.txt [ { "authType": "ldap", "password": "", "permissions": [ { "collections": [ "aks9098", "aks9099", "aks9100", "aks9101", "aks9102", "aks9103" ], "project": "Central Project" } ], "role": "devSecOps", "username": "[email protected]" }, { "authType": "ldap", "password": "", "permissions": [ { "collections": [ "aks9098", "aks9099", "aks9100", "aks9101", "aks9102", "aks9103" ], "project": "Central Project" } ], "role": "devSecOps", "username": "[email protected]" }, { "authType": "ldap", "password": "", "permissions": [ { "collections": [ "aks9098", "aks9099", "aks9100", "aks9101", "aks9102", "aks9103" ], "project": "Central Project" } ], "role": "devSecOps", "username": "[email protected]" } ] To invoke this in Python, one could do this: import subprocess return_obj = subprocess.run(["jq", ".[].permissions[].collections |= unique","json.txt"], stdout=subprocess.PIPE) json_data = return_obj.stdout.decode()
how to remove duplicate entries from json file using python?
How to remove duplicate entries from a JSON file using python? I have a JSON file that looks like the one below; I'd appreciate it if someone could help with a solution to fix it. json_data = [ { "authType": "ldap", "password": "", "permissions": [ { "collections": [ "aks9099", "aks9099", "aks9098", "aks9100", "aks9100", "aks9101", "aks9102", "aks9103", "aks9103" ], "project": "Central Project" } ], "role": "devSecOps", "username": "[email protected]" }, { "authType": "ldap", "password": "", "permissions": [ { "collections": [ "aks9099", "aks9098", "aks9098", "aks9100", "aks9101", "aks9102", "aks9102", "aks9103" ], "project": "Central Project" } ], "role": "devSecOps", "username": "[email protected]" }, { "authType": "ldap", "password": "", "permissions": [ { "collections": [ "aks9099", "aks9098", "aks9100", "aks9100", "aks9101", "aks9102", "aks9102", "aks9103" ], "project": "Central Project" } ], "role": "devSecOps", "username": "[email protected]" } ] I would like to remove the duplicate entries from each collections list; the expected result should look like this: json_data = [ { "authType": "ldap", "password": "", "permissions": [ { "collections": [ "aks9099", "aks9098", "aks9100", "aks9101", "aks9102", "aks9103" ], "project": "Central Project" } ], "role": "devSecOps", "username": "[email protected]" }, { "authType": "ldap", "password": "", "permissions": [ { "collections": [ "aks9099", "aks9098", "aks9100", "aks9101", "aks9102", "aks9103" ], "project": "Central Project" } ], "role": "devSecOps", "username": "[email protected]" }, { "authType": "ldap", "password": "", "permissions": [ { "collections": [ "aks9099", "aks9098", "aks9100", "aks9101", "aks9102", "aks9103" ], "project": "Central Project" } ], "role": "devSecOps", "username": "[email protected]" } ]
[ "Does the following solve your problem?\nnew_list=[]\nfor i in json_data:\n if not i in new_list:\n new_list.append(i)\nprint(new_list)\n\n", "Even though OP asked to do this in Python, this can readily be done in jq using function unique with a single update assignment:\n$ jq '.[].permissions[].collections |= unique' json.txt \n[\n {\n \"authType\": \"ldap\",\n \"password\": \"\",\n \"permissions\": [\n {\n \"collections\": [\n \"aks9098\",\n \"aks9099\",\n \"aks9100\",\n \"aks9101\",\n \"aks9102\",\n \"aks9103\"\n ],\n \"project\": \"Central Project\"\n }\n ],\n \"role\": \"devSecOps\",\n \"username\": \"[email protected]\"\n },\n {\n \"authType\": \"ldap\",\n \"password\": \"\",\n \"permissions\": [\n {\n \"collections\": [\n \"aks9098\",\n \"aks9099\",\n \"aks9100\",\n \"aks9101\",\n \"aks9102\",\n \"aks9103\"\n ],\n \"project\": \"Central Project\"\n }\n ],\n \"role\": \"devSecOps\",\n \"username\": \"[email protected]\"\n },\n {\n \"authType\": \"ldap\",\n \"password\": \"\",\n \"permissions\": [\n {\n \"collections\": [\n \"aks9098\",\n \"aks9099\",\n \"aks9100\",\n \"aks9101\",\n \"aks9102\",\n \"aks9103\"\n ],\n \"project\": \"Central Project\"\n }\n ],\n \"role\": \"devSecOps\",\n \"username\": \"[email protected]\"\n }\n]\n\nTo invoke this in Python, one could do this:\nimport subprocess\nreturn_obj = subprocess.run([\"jq\", \".[].permissions[].collections |= unique\",\"json.txt\"], stdout=subprocess.PIPE)\njson_data = return_obj.stdout.decode()\n\n" ]
[ 0, 0 ]
[]
[]
[ "json", "python", "python_3.x" ]
stackoverflow_0074635590_json_python_python_3.x.txt
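If the original order of the collection entries matters, here is a sketch of a pure-Python alternative that de-duplicates each list in place. Unlike jq's unique, which sorts, dict.fromkeys preserves insertion order (guaranteed since Python 3.7):

# De-duplicate every "collections" list in place, keeping the first occurrence of each item.
for entry in json_data:
    for perm in entry["permissions"]:
        perm["collections"] = list(dict.fromkeys(perm["collections"]))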
Q: How to effectively use parallelization with Ray in Python? I'm trying to learn how to use the Ray API and am comparing it with my joblib code. However, I don't know how to use it effectively (my machine has 16 CPUs). Am I doing something incorrectly? If not, why is Ray so much slower? import ray from joblib import Parallel, delayed num_cpus = 16 @ray.remote(num_cpus=num_cpus) def square(x): return x * x def square2(x): return x * x Ray: %%time # Launch parallel square tasks. futures = [square.remote(x=i) for i in range(1000)] # Retrieve results. print(len(ray.get(futures))) # CPU times: user 310 ms, sys: 79.7 ms, total: 390 ms # Wall time: 612 ms Joblib: %%time futures = Parallel(n_jobs=num_cpus)(delayed(square2)(x=i) for i in range(1000)) print(len(futures)) # CPU times: user 92.5 ms, sys: 21.4 ms, total: 114 ms # Wall time: 106 ms A: The Ray scheduler decides how many Ray tasks run concurrently based on their num_cpus value (along with other resource types for more advanced use cases). By default, this value is set to 1, meaning that you can run parallel tasks up to the total number of cores. By setting it to 16, you are telling Ray that each task requires all 16 CPUs to run, so essentially you are running the square tasks sequentially. Try running it again with just a plain @ray.remote! You may also want to warm up Ray by running a few times within the same script, since there is some cost from process startup at the beginning. Finally, in an actual workload, you would probably want to do more than multiply two integers together. Each task will finish nearly instantaneously, so you will see more overhead than benefit from the extra work of distributing the execution. There's some good info on this anti-design pattern here.
How to effectively use parallelization with Ray in Python?
I'm trying to learn how to use the Ray API and am comparing it with my joblib code. However, I don't know how to use it effectively (my machine has 16 CPUs). Am I doing something incorrectly? If not, why is Ray so much slower? import ray from joblib import Parallel, delayed num_cpus = 16 @ray.remote(num_cpus=num_cpus) def square(x): return x * x def square2(x): return x * x Ray: %%time # Launch parallel square tasks. futures = [square.remote(x=i) for i in range(1000)] # Retrieve results. print(len(ray.get(futures))) # CPU times: user 310 ms, sys: 79.7 ms, total: 390 ms # Wall time: 612 ms Joblib: %%time futures = Parallel(n_jobs=num_cpus)(delayed(square2)(x=i) for i in range(1000)) print(len(futures)) # CPU times: user 92.5 ms, sys: 21.4 ms, total: 114 ms # Wall time: 106 ms
[ "The Ray scheduler decides how many Ray tasks run concurrently based on their num_cpus value (along with other resource types for more advanced use cases). By default, this value is set to 1, meaning that you can run parallel tasks up to the total number of cores. By setting it to 16, you are telling Ray that each task requires all 16 CPUs to run, so essentially you are running the square tasks sequentially. Try running it again with just a plain @ray.remote!\nYou may also want to warm up Ray by running a few times within the same script, since there is some cost from process startup at the beginning.\nFinally, in an actual workload, you would probably want to do more than multiply two integers together. Each task will finish nearly instantaneously, so you will see more overhead than benefit from the extra work of distributing the execution. There's some good info on this anti-design pattern here.\n" ]
[ 2 ]
[]
[]
[ "joblib", "parallel_processing", "python", "ray" ]
stackoverflow_0074635033_joblib_parallel_processing_python_ray.txt
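A sketch of the corrected benchmark suggested by the answer, assuming a 16-core machine: with the default of one CPU per task, up to 16 square tasks run concurrently instead of one at a time.

import ray

ray.init()  # warm up the cluster once, outside the timed section

@ray.remote  # default num_cpus=1, so up to 16 tasks run in parallel on 16 cores
def square(x):
    return x * x

futures = [square.remote(i) for i in range(1000)]
print(len(ray.get(futures)))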
Q: Python click: subcommand from partial func Say I have a function created not by def but by a partial() call (or even just by assignment). In the example below, how can I add bar as a click sub-command to the cli group? I can't use the decorator approach (as with foo). My failed approaches are shown below in-line. import functools import click @click.group() def cli(): pass @cli.command() def foo(myname="foo"): print(f"I am {myname}") bar = functools.partial(foo, myname="bar") # this has no effect # cli.command(bar) # results in: AttributeError: 'functools.partial' object has no attribute 'name' # cli.add_command(bar) # results in: AttributeError: 'functools.partial' object has no attribute 'hidden' # cli.add_command(bar, name="bar") if __name__ == "__main__": cli() UPDATE: Actually, it looks like the partial is the culprit here. This answer in a different but related thread, points out that partial objects are "missing certain attributes, specifically __module__ and __name__". A: I think you're missing the fact that the @command decorator turns the foo function into a Command that uses the original foo as callback. The original function is still accessible as foo.callback but foo is a Command. Still, you can't use a partial object as callback because it lacks __name__, but you can work around that by passing the name explicitly: bar = functools.partial(foo.callback, myname="bar") cli.command("bar")(bar) # Alternative: cli.add_command(click.Command("bar", callback=bar))
Python click: subcommand from partial func
Say I have a function created not by def but by a partial() call (or even just by assignment). In the example below, how can I add bar as a click sub-command to the cli group? I can't use the decorator approach (as with foo). My failed approaches are shown below in-line. import functools import click @click.group() def cli(): pass @cli.command() def foo(myname="foo"): print(f"I am {myname}") bar = functools.partial(foo, myname="bar") # this has no effect # cli.command(bar) # results in: AttributeError: 'functools.partial' object has no attribute 'name' # cli.add_command(bar) # results in: AttributeError: 'functools.partial' object has no attribute 'hidden' # cli.add_command(bar, name="bar") if __name__ == "__main__": cli() UPDATE: Actually, it looks like the partial is the culprit here. This answer in a different but related thread, points out that partial objects are "missing certain attributes, specifically __module__ and __name__".
[ "I think you're missing the fact that the @command decorator turns the foo function into a Command that uses the original foo as callback. The original function is still accessible as foo.callback but foo is a Command. Still, you can't use a partial object as callback because it lacks __name__, but you can work around that passing the name explicitly:\nbar = functools.partial(foo.callback, myname=\"bar\")\ncli.command(\"bar\")(bar)\n\n# Alternative:\ncli.add_command(click.Command(\"bar\", callback=bar))\n\n" ]
[ 1 ]
[]
[]
[ "python", "python_click" ]
stackoverflow_0074239499_python_python_click.txt
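Putting the answer together, a minimal runnable sketch. The assumption here (taken from the answer, not verified against every click version) is that giving the name explicitly keeps click from ever reading __name__ off the partial:

import functools
import click

@click.group()
def cli():
    pass

@cli.command()
def foo(myname="foo"):
    print(f"I am {myname}")

# Wrap the underlying callback, not the Command object, and name the new command explicitly.
bar = functools.partial(foo.callback, myname="bar")
cli.add_command(click.Command("bar", callback=bar))

if __name__ == "__main__":
    cli()  # running `python script.py bar` should print "I am bar"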
Q: No error in code but Login is not working for python selenium simple script from selenium import webdriver from selenium.webdriver.common.by import By driver = webdriver.Chrome() driver.get("https://opensource-demo.orangehrmlive.com/web/index.php/auth/login") driver.implicitly_wait(5) username ="//input[@placeholder='Username']" password ="//input[@placeholder='Password']" driver.find_element(By.XPATH, username).send_keys("Admin") driver.find_element(By.XPATH, password).send_keys("admin123") driver.find_element(By.XPATH, "//button[normalize-space()='Login']").click() print("Test pass") The script above does not call driver.close(), yet Chrome closes automatically and the logged-in page never opens. I'm not sure what mistake I'm making here. I expect to see the logged-in page after the credentials are submitted, but Chrome closes instead. A: Add the below code and try: from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options options = Options() options.add_experimental_option("detach", True) driver = webdriver.Chrome(service=Service(<chromedriver.exe path>), options=options) driver.get(<URL>)
No error in code but Login is not working for python selenium simple script
from selenium import webdriver from selenium.webdriver.common.by import By driver = webdriver.Chrome() driver.get("https://opensource-demo.orangehrmlive.com/web/index.php/auth/login") driver.implicitly_wait(5) username ="//input[@placeholder='Username']" password ="//input[@placeholder='Password']" driver.find_element(By.XPATH, username).send_keys("Admin") driver.find_element(By.XPATH, password).send_keys("admin123") driver.find_element(By.XPATH, "//button[normalize-space()='Login']").click() print("Test pass") The script above does not call driver.close(), yet Chrome closes automatically and the logged-in page never opens. I'm not sure what mistake I'm making here. I expect to see the logged-in page after the credentials are submitted, but Chrome closes instead.
[ "Add the below code and try:\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\n\noptions = Options()\noptions.add_experimental_option(\"detach\", True)\n\ndriver = webdriver.Chrome(service=Service(<chromedriver.exe path>), options=options)\n\ndriver.get(<URL>)\n\n" ]
[ 0 ]
[]
[]
[ "python", "selenium" ]
stackoverflow_0074636120_python_selenium.txt
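A sketch of the full script with the detach option applied. On Selenium 4.6+ the driver binary is resolved automatically by Selenium Manager, so no explicit chromedriver path should be needed:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_experimental_option("detach", True)  # keep Chrome open after the script exits

driver = webdriver.Chrome(options=options)  # Selenium 4.6+ locates chromedriver itself
driver.get("https://opensource-demo.orangehrmlive.com/web/index.php/auth/login")
driver.implicitly_wait(5)
driver.find_element(By.XPATH, "//input[@placeholder='Username']").send_keys("Admin")
driver.find_element(By.XPATH, "//input[@placeholder='Password']").send_keys("admin123")
driver.find_element(By.XPATH, "//button[normalize-space()='Login']").click()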
Q: Pandas creating a new table and convert it to wide format I have this data frame. Type Generation Grass 1 Grass 1 Fire 1 Fire 1 Grass 2 Grass 3 I am trying to create a new column where it adds the number of same types corresponding to its generation number, and reshape the data into wide format. looking like; Type Generation 1 Generation 2 Generation 3 Grass 2 1 1 Fire 2 0 0 I have sliced columns from the original data frame: df_Type = df2[['Type 1', 'Generation']].copy() print(df_Type) and I was trying to create a new column to count but this did not work. Type_Generation = df_Type.groupby('Generation').agg(no_types = ('Type 1', 'sum')) print(Type_Generation) is there a more efficient way of reshaping the data? A: Use crosstab: pd.crosstab(df['Type'], df['Generation']).rename(columns=lambda x: f'Generation_{x}') result: Generation Generation_1 Generation_2 Generation_3 Type Fire 2 0 0 Grass 2 1 1 Or you can use add_prefix instead of rename: pd.crosstab(df['Type'], df['Generation']).add_prefix('Generation_') And you can change the order of the index with reindex: pd.crosstab(df['Type'], df['Generation']).add_prefix('Generation_').reindex(df['Type'].unique()) result: Generation Generation_1 Generation_2 Generation_3 Type Grass 2 1 1 Fire 2 0 0
Pandas creating a new table and convert it to wide format
I have this data frame. Type Generation Grass 1 Grass 1 Fire 1 Fire 1 Grass 2 Grass 3 I am trying to create a new column where it adds the number of same types corresponding to its generation number, and reshape the data into wide format. looking like; Type Generation 1 Generation 2 Generation 3 Grass 2 1 1 Fire 2 0 0 I have sliced columns from the original data frame: df_Type = df2[['Type 1', 'Generation']].copy() print(df_Type) and I was trying to create a new column to count but this did not work. Type_Generation = df_Type.groupby('Generation').agg(no_types = ('Type 1', 'sum')) print(Type_Generation) is there a more efficient way of reshaping the data?
[ "crosstab\npd.crosstab(df['Type'], df['Generation']).rename(columns=lambda x: f'Generation_{x}')\n\nresult:\nGeneration Generation_1 Generation_2 Generation_3\nType \nFire 2 0 0\nGrass 2 1 1\n\n\nor you can use add_prefix instead rename\npd.crosstab(df['Type'], df['Generation']).add_prefix('Generation_')\n\n\nand you can change order of index by reindex\npd.crosstab(df['Type'], df['Generation']).add_prefix('Generation_').reindex(df['Type'].unique())\n\nresult:\nGeneration Generation_1 Generation_2 Generation_3\nType \nGrass 2 1 1\nFire 2 0 0\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "pivot", "python" ]
stackoverflow_0074636192_pandas_pivot_python.txt
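An equivalent without crosstab, sketched with groupby: size counts the rows per (Type, Generation) pair, and unstack pivots Generation out into columns.

# Count rows per (Type, Generation) pair, then pivot Generation into columns.
wide = (df.groupby(['Type', 'Generation']).size()
          .unstack(fill_value=0)          # missing pairs become 0 instead of NaN
          .add_prefix('Generation '))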
Q: How to get reproducible weights initialization in Keras? I set both numpy and tensorflow random seeds as suggested Generate some data - this part is reproducible, gives same results always Create a simple network and make a prediction (without training, just with random weights) - prediction is different every time import numpy as np from tensorflow.keras.layers import Dense, Dropout from tensorflow.keras import Sequential, optimizers import tensorflow as tf np.random.seed(32) tf.set_random_seed(33) random_data = np.random.rand(10, 2048) print(random_data[:,0]) def make_classifier(): model = Sequential() model.add(Dense(1024, activation='relu', input_dim=2048)) model.add(Dropout(0.5)) model.add(Dense(1, activation='sigmoid')) model.compile(optimizer=optimizers.RMSprop(lr=2e-4), loss='binary_crossentropy') return model model = make_classifier() # model.summary() model.predict(random_data) When I re-run the whole cell, the print statement always outputs [0.85888927 0.23846818 0.17757634 0.07244977 0.71119893 0.09223853 0.86074647 0.31838194 0.7568638 0.38197083]. However the prediction is different every time: array([[0.5825965 ], [0.8677979 ], [0.70151913], [0.64572096], [0.78101623], [0.76483005], [0.7946336 ], [0.6281612 ], [0.8208673 ], [0.8273002 ]], dtype=float32) array([[0.51012236], [0.6562015 ], [0.5593666 ], [0.686155 ], [0.6488372 ], [0.5966359 ], [0.6236731 ], [0.58099884], [0.68447435], [0.58886844]], dtype=float32) And so on. How do I get reproducible prediction for a just-initialized network? When does the weights initialization happen exactly? when I compile the model or?.. A: I've been struggling a lot with this and it turns out there are quite a few points that have to be set in order to achieve complete consistency for every case: First of all, make sure the data (and the order of the data) that you feed to your model is consistent. Then, for the model weights initialization: 1) numpy random seed import numpy as np np.random.seed(1) 2) tensorflow random seed import tensorflow as tf tf.set_random_seed(2) 3) python random seed import random random.seed(3) In addition, you have to set two (if you have multiprocessing capabilities) arguments on model.fit. These are not often mentioned in the answers I've seen around: model.fit(..., shuffle=False, use_multiprocessing=False) Only then have I achieved complete consistency in training runs. Hope that helps people! A: tf.keras.initializers objects have a seed argument for reproducible initialization. import tensorflow as tf import numpy as np initializer = tf.keras.initializers.GlorotUniform(seed=42) for _ in range(10): print(np.round(initializer((4,)), 3)) [-0.377 -0.003 0.373 -0.831] [-0.377 -0.003 0.373 -0.831] [-0.377 -0.003 0.373 -0.831] [-0.377 -0.003 0.373 -0.831] [-0.377 -0.003 0.373 -0.831] [-0.377 -0.003 0.373 -0.831] [-0.377 -0.003 0.373 -0.831] [-0.377 -0.003 0.373 -0.831] [-0.377 -0.003 0.373 -0.831] [-0.377 -0.003 0.373 -0.831] In a Keras layer, you can use it like this: tf.keras.layers.Dense(1024, activation='relu', input_dim=2048, kernel_initializer=tf.keras.initializers.GlorotUniform(seed=42)) A: I think a better practice is tf.keras.utils.set_random_seed( seed ) ref: https://www.tensorflow.org/api_docs/python/tf/keras/utils/set_random_seed
How to get reproducible weights initialization in Keras?
I set both numpy and tensorflow random seeds as suggested Generate some data - this part is reproducible, gives same results always Create a simple network and make a prediction (without training, just with random weights) - prediction is different every time import numpy as np from tensorflow.keras.layers import Dense, Dropout from tensorflow.keras import Sequential, optimizers import tensorflow as tf np.random.seed(32) tf.set_random_seed(33) random_data = np.random.rand(10, 2048) print(random_data[:,0]) def make_classifier(): model = Sequential() model.add(Dense(1024, activation='relu', input_dim=2048)) model.add(Dropout(0.5)) model.add(Dense(1, activation='sigmoid')) model.compile(optimizer=optimizers.RMSprop(lr=2e-4), loss='binary_crossentropy') return model model = make_classifier() # model.summary() model.predict(random_data) When I re-run the whole cell, the print statement always outputs [0.85888927 0.23846818 0.17757634 0.07244977 0.71119893 0.09223853 0.86074647 0.31838194 0.7568638 0.38197083]. However the prediction is different every time: array([[0.5825965 ], [0.8677979 ], [0.70151913], [0.64572096], [0.78101623], [0.76483005], [0.7946336 ], [0.6281612 ], [0.8208673 ], [0.8273002 ]], dtype=float32) array([[0.51012236], [0.6562015 ], [0.5593666 ], [0.686155 ], [0.6488372 ], [0.5966359 ], [0.6236731 ], [0.58099884], [0.68447435], [0.58886844]], dtype=float32) And so on. How do I get reproducible prediction for a just-initialized network? When does the weights initialization happen exactly? when I compile the model or?..
[ "I've been struggling a lot with this and turns out there are quite a few points that have to be set in order to achieve complete consistency for every case:\nFirst of all, make sure the data (and the order of the data) that you feed to your model is consistent. Then, for the model weights initialization:\n1)numpy random seed\nimport numpy as np\nnp.seed(1)\n\n2)tensor flow random seed\nimport tensorflow as tf\ntf.set_random_seed(2)\n\n3)python random seed\nimport random\nrandom.seed(3)\n\nAdditionally to that, you have to set two (if you have multiprocessing capabilities) arguments to model.fit. These ones are not often mentioned on the answers I've seen around:\nmodel.fit(..., shuffle=False, use_multiprocessing=False)\n\nOnly then I have achieved complete consistency in training runs.\nHope that helps people!\n", "tf.keras.initializers objects have a seed argument for reproducible initialization.\nimport tensorflow as tf\nimport numpy as np\n\ninitializer = tf.keras.initializers.GlorotUniform(seed=42)\n\nfor _ in range(10):\n print(np.round(initializer((4,)), 3))\n\n[-0.377 -0.003 0.373 -0.831]\n[-0.377 -0.003 0.373 -0.831]\n[-0.377 -0.003 0.373 -0.831]\n[-0.377 -0.003 0.373 -0.831]\n[-0.377 -0.003 0.373 -0.831]\n[-0.377 -0.003 0.373 -0.831]\n[-0.377 -0.003 0.373 -0.831]\n[-0.377 -0.003 0.373 -0.831]\n[-0.377 -0.003 0.373 -0.831]\n[-0.377 -0.003 0.373 -0.831]\n\nIn a Keras layer, you can use it like this:\ntf.keras.layers.Dense(1024, \n activation='relu', \n input_dim=2048,\n kernel_initializer=tf.keras.initializers.GlorotUniform(seed=42))\n\n", "I think a better practice is\ntf.keras.utils.set_random_seed(\n seed\n)\n\nref: https://www.tensorflow.org/api_docs/python/tf/keras/utils/set_random_seed\n" ]
[ 2, 1, 0 ]
[]
[]
[ "keras", "python", "reproducible_research", "tensorflow" ]
stackoverflow_0065794491_keras_python_reproducible_research_tensorflow.txt
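A sketch combining the question's setup with the last answer, assuming TF 2.7+ where set_random_seed seeds Python's random, NumPy and TensorFlow in a single call (make_classifier and random_data as defined in the question):

import tensorflow as tf

tf.keras.utils.set_random_seed(42)   # seeds random, numpy and tensorflow together (TF 2.7+)
model = make_classifier()            # weights are drawn at layer construction time
preds = model.predict(random_data)   # identical across fresh runs of the script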
Q: How to add value into sub-list Looking for a way to add the value 555 right after 6000, so the list becomes the following, and to display it. mylist = [4, 11, [300, 400, [5000, 6000, 555], 500], 30, 40] The original list is below. mylist = [4, 11, [300, 400, [5000, 6000], 500], 30, 40] A: You can try appending to the corresponding sub-list by index: mylist[2][2].append(555)
How to add value into sub-list
Looking for a way to add the value 555 right after 6000, so the list becomes the following, and to display it. mylist = [4, 11, [300, 400, [5000, 6000, 555], 500], 30, 40] The original list is below. mylist = [4, 11, [300, 400, [5000, 6000], 500], 30, 40]
[ "Can you try adding append to corresponding index\nmylist[2][2].append(555)\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074636215_python.txt
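If the value must land right after 6000 regardless of where 6000 sits in the sub-list, a small variation is to insert at the computed index instead of appending:

mylist = [4, 11, [300, 400, [5000, 6000], 500], 30, 40]
inner = mylist[2][2]                       # the innermost sub-list
inner.insert(inner.index(6000) + 1, 555)   # [5000, 6000, 555]
print(mylist)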
Q: ValueError: not enough values to unpack (expected 2, got 1) in a for loop I am trying to make a folder structure, but I am having problems using the for loop with 3 entries over a .items(). dirs = df_vol_erp.groupby(['country', 'primary_volcano_type'])['volcano_name_x'].apply(list) #for pais, (tipos, nombres) in dirs.items(): for pais, tipos, nombres in dirs.items(): path_pais = os.path.join(new_path, str(pais)) if not os.path.exists(path_pais): os.makedirs(os.path.join(path_pais), exist_ok=True) for tipo in tipos: path_tipos = os.path.join(path_pais, str(tipo)) if not os.path.exists(path_tipos): os.makedirs(os.path.join(path_tipos), exist_ok=True) for nombre in nombres: path_nombre = os.path.join(path_tipos, str(nombre)) if not os.path.exists(path_nombre): os.makedirs(os.path.join(path_nombre), exist_ok=True) I have this code, and when I run it I get ValueError: not enough values to unpack (expected 2, got 1). I also tried: for pais, values in dirs.items(): tipos, nombres = values What can I do? A: Assign two variables, then in the loop body you can split the second variable into two more variables. for pais, value in dirs.items(): tipos, nombres = value
ValueError: not enough values to unpack (expected 2, got 1) in a for loop
I am trying to make a folder structure, but I am having problems using the for loop with 3 entries over a .items(). dirs = df_vol_erp.groupby(['country', 'primary_volcano_type'])['volcano_name_x'].apply(list) #for pais, (tipos, nombres) in dirs.items(): for pais, tipos, nombres in dirs.items(): path_pais = os.path.join(new_path, str(pais)) if not os.path.exists(path_pais): os.makedirs(os.path.join(path_pais), exist_ok=True) for tipo in tipos: path_tipos = os.path.join(path_pais, str(tipo)) if not os.path.exists(path_tipos): os.makedirs(os.path.join(path_tipos), exist_ok=True) for nombre in nombres: path_nombre = os.path.join(path_tipos, str(nombre)) if not os.path.exists(path_nombre): os.makedirs(os.path.join(path_nombre), exist_ok=True) I have this code, and when I run it I get ValueError: not enough values to unpack (expected 2, got 1). I also tried: for pais, values in dirs.items(): tipos, nombres = values What can I do?
[ "Assign two variables, then in the loop body you can split the second variable into two more variables.\nfor pais, value in dirs.items():\n tipos, nombres = value\n\n" ]
[ 0 ]
[]
[]
[ "for_loop", "items", "python", "valueerror" ]
stackoverflow_0074636264_for_loop_items_python_valueerror.txt
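One thing worth noting here: grouping by two keys gives the resulting Series a MultiIndex, so items() yields ((country, type), names) pairs, and the tuple key itself is what needs unpacking. A sketch reusing the names from the question (os.makedirs with exist_ok=True also makes the os.path.exists checks unnecessary):

for (pais, tipo), nombres in dirs.items():          # the key is a (country, type) tuple
    path_tipos = os.path.join(new_path, str(pais), str(tipo))
    os.makedirs(path_tipos, exist_ok=True)          # creates intermediate dirs as needed
    for nombre in nombres:
        os.makedirs(os.path.join(path_tipos, str(nombre)), exist_ok=True)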
Q: calling child class method from parent class file in python parent.py: class A(object): def methodA(self): print("in methodA") child.py: from parent import A class B(A): def methodb(self): print("am in methodb") Is there any way to call methodb() in parent.py? A: Doing this would only make sense if A is an abstract base class, meaning that A is only meant to be used as a base for other classes, not instantiated directly. If that were the case, you would define methodB on class A, but leave it unimplemented: class A(object): def methodA(self): print("in methodA") def methodB(self): raise NotImplementedError("Must override methodB") from parent import A class B(A): def methodB(self): print("am in methodB") This isn't strictly necessary. If you don't declare methodB anywhere in A, and instantiate B, you'd still be able to call methodB from the body of methodA, but it's a bad practice; it's not clear where methodA is supposed to come from, or that child classes need to override it. If you want to be more formal, you can use the Python abc module to declare A as an abstract base class. from abc import ABC, abstractmethod class A(ABC): def methodA(self): print("in methodA") @abstractmethod def methodB(self): raise NotImplementedError("Must override methodB") Or if using Python 2.x: from abc import ABCMeta, abstractmethod class A(object): __metaclass__ = ABCMeta def methodA(self): print("in methodA") @abstractmethod def methodB(self): raise NotImplementedError("Must override methodB") Using this will actually prevent you from instantiating A or any class that inherits from A without overriding methodB. For example, if B looked like this: class B(A): pass You'd get an error trying to instantiate it: Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: Can't instantiate abstract class B with abstract methods methodB The same would happen if you tried instantiating A. A: You can do something like this: class A(): def foo(self): self.testb() class B(A): def testb(self): print('lol, it works') b = B() b.foo() Which would return this of course: lol, it works Note that in fact there is no call from the parent; there is just a call of the function foo from an instance of the child class, and this instance has inherited foo from the parent, i.e. this is impossible: a=A() a.foo() will produce: AttributeError: A instance has no attribute 'testb' because >>> dir(A) ['__doc__', '__module__', 'foo'] >>> dir(B) ['__doc__', '__module__', 'foo', 'testb'] What I wanted to show is that you can create an instance of the child class, and it will have all the methods and parameters from both the parent and its own classes. A: There are three approaches/ways to do this! But I highly recommend approach #3, because composition/decoupling has certain benefits in terms of design patterns. (GOF) ## approach 1 inheritance class A(): def methodA(self): print("in methodA") def call_methodB(self): self.methodb() class B(A): def methodb(self): print("am in methodb") b=B() b.call_methodB() ## approach 2 using abstract method (classes still highly coupled) from abc import ABC, abstractmethod class A(ABC): def methodA(self): print("in methodA") @abstractmethod def methodb(self): pass class B(A): def methodb(self): print("am in methodb") b=B() b.methodb() ## approach 3 the recommended way! Composition class A(): def __init__(self, message): self.message=message def methodA(self): print(self.message) class B(): def __init__(self,messageB, messageA): self.message=messageB self.a=A(messageA) def methodb(self): print(self.message) def methodA(self): print(self.a.message) b=B("am in methodb", "am in methodA") b.methodb() b.methodA() A: You could use the function anywhere so long as it was attached to an object, which it appears to be from your sample. If you have a B object, then you can use its methodb() function from absolutely anywhere. parent.py: class A(object): def methoda(self): print("in methoda") def aFoo(obj): obj.methodb() child.py from parent import A class B(A): def methodb(self): print("am in methodb") You can see how this works after you import: >>> from parent import aFoo >>> from child import B >>> obj = B() >>> aFoo(obj) am in methodb Granted, you will not be able to create a new B object from inside parent.py, but you will still be able to use its methods if it's passed in to a function in parent.py somehow. A: If both classes are in the same .py file, then you can directly call the child class method from the parent class. It gave me a warning, but it ran well. class A(object): def methodA(self): print("in methodA") self.methodb() class B(A): def methodb(self): print("am in methodb") A: You can certainly do this - parent.py class A(object): def __init__(self,obj): self.obj_B = obj def test(self): self.obj_B.methodb() child.py from parent import A class B(A): def __init__(self,id): self.id = id super().__init__(self) def methodb(self): print("in method b with id:",self.id) Now if you want to call it from a class B object b1 = B(1) b1.test() >>> in method b with id: 1 Or if you want to call it from a class A object b2 = B(2) a = A(b2) a.test() >>> in method b with id: 2 You can even make new objects in the super class by invoking class dict objects of the object passed to the super class from the child class.
calling child class method from parent class file in python
parent.py: class A(object): def methodA(self): print("in methodA") child.py: from parent import A class B(A): def methodb(self): print("am in methodb") Is there any way to call methodb() in parent.py?
[ "Doing this would only make sense if A is an abstract base class, meaning that A is only meant to be used as a base for other classes, not instantiated directly. If that were the case, you would define methodB on class A, but leave it unimplemented:\nclass A(object):\n def methodA(self):\n print(\"in methodA\")\n\n def methodB(self):\n raise NotImplementedError(\"Must override methodB\")\n\n\nfrom parent import A\nclass B(A):\n def methodB(self):\n print(\"am in methodB\")\n\nThis isn't strictly necessary. If you don't declare methodB anywhere in A, and instantiate B, you'd still be able to call methodB from the body of methodA, but it's a bad practice; it's not clear where methodA is supposed to come from, or that child classes need to override it.\nIf you want to be more formal, you can use the Python abc module to declare A as an abstract base class.\nfrom abc import ABC, abstractmethod\n\nclass A(ABC):\n\n def methodA(self):\n print(\"in methodA\")\n\n @abstractmethod\n def methodB(self):\n raise NotImplementedError(\"Must override methodB\")\n\nOr if using Python 2.x:\nfrom abc import ABCMeta, abstractmethod\n\nclass A(object):\n __metaclass__ = ABCMeta\n\n def methodA(self):\n print(\"in methodA\")\n\n @abstractmethod\n def methodB(self):\n raise NotImplementedError(\"Must override methodB\")\n\nUsing this will actually prevent you from instantiating A or any class that inherits from A without overriding methodB. For example, if B looked like this:\nclass B(A):\n pass\n\nYou'd get an error trying to instantiate it:\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: Can't instantiate abstract class B with abstract methods methodB\n\nThe same would happen if you tried instantiating A.\n", "You can do something like this:\nclass A():\n def foo(self):\n self.testb()\n\nclass B(A):\n def testb(self):\n print('lol, it works')\nb = B()\nb.foo()\n\nWhich would return this of course:\nlol, it works\n\nNote, that in fact there is no call from parent, there is just call of function foo from instance of child class, this instance has inherited foo from parent, i.e. this is impossible:\na=A()\na.foo()\n\nwill produce:\n AttributeError: A instance has no attribute 'testb'\nbecause\n>>> dir(A)\n['__doc__', '__module__', 'foo']\n>>> dir(B)\n['__doc__', '__module__', 'foo', 'testb']\n\nWhat I've wanted to show that you can create instance of child class, and it will have all methods and parameters from both parent and it's own classes. \n", "There are three approaches/ways to do this ! but I highly recommend to use the approach #3 because composition/decoupling has certain benefits in terms of design pattern. (GOF)\n## approach 1 inheritance \nclass A():\n def methodA(self):\n print(\"in methodA\")\n def call_mehtodB(self):\n self.methodb()\n\nclass B(A):\n def methodb(self):\n print(\"am in methodb\")\n\nb=B()\nb.call_mehtodB()\n\n\n## approach 2 using abstract method still class highly coupled\nfrom abc import ABC, abstractmethod\nclass A(ABC):\n def methodA(self):\n print(\"in methodA\")\n @abstractmethod \n def methodb(self):\n pass\n\nclass B(A):\n\n def methodb(self):\n print(\"am in methodb\")\n\nb=B()\nb.methodb()\n\n#approach 3 the recommended way ! 
Composition \n\nclass A():\n def __init__(self, message):\n self.message=message\n\n def methodA(self):\n print(self.message)\n\nclass B():\n def __init__(self,messageB, messageA):\n self.message=messageB\n self.a=A(messageA)\n\n def methodb(self):\n print(self.message)\n\n def methodA(self):\n print(self.a.message)\n\nb=B(\"am in methodb\", \"am in methodA\")\nb.methodb()\nb.methodA()\n\n", "You could use the function anywhere so long as it was attached to an object, which it appears to be from your sample. If you have a B object, then you can use its methodb() function from absolutely anywhere.\nparent.py:\nclass A(object):\n def methoda(self):\n print(\"in methoda\")\n\ndef aFoo(obj):\n obj.methodb()\n\nchild.py\nfrom parent import A\nclass B(A):\n def methodb(self):\n print(\"am in methodb\")\n\nYou can see how this works after you import:\n>>> from parent import aFoo\n>>> from child import B\n>>> obj = B()\n>>> aFoo(obj)\nam in methodb\n\nGranted, you will not be able to create a new B object from inside parent.py, but you will still be able to use its methods if it's passed in to a function in parent.py somehow.\n", "If the both class in same .py file then you can directly call child class method from parents class.\nIt gave me warning but it run well.\nclass A(object):\ndef methodA(self):\n\n print(\"in methodA\")\n\n Self.methodb()\n\nclass B(A):\ndef methodb(self):\n\n print(\"am in methodb\")\n\n", "You can certainly do this -\nparent.py\nclass A(object):\n def __init__(self,obj):\n self.obj_B = obj\n\n def test(self): \n self.obj_B.methodb()\n\nchild.py\nfrom parent import A\n\nclass B(A):\n def __init__(self,id):\n self.id = id\n super().__init__(self)\n \n def methodb(self):\n print(\"in method b with id:\",self.id)\n\nNow if you want to call it from class B object\n\nb1 = B(1)\nb1.test()\n\n>>> in method b with id: 1\n\nOr if you want to call it from class A object\nb2 = B(2)\na = A(b2)\na.test()\n\n>>> in method b with id: 2\n\nYou can even make new objects in super class by invoking class dict objects of the object passed to super class from child class.\n" ]
[ 46, 20, 2, 1, 0, 0 ]
[]
[]
[ "class", "inheritance", "parent", "python" ]
stackoverflow_0025062114_class_inheritance_parent_python.txt
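A compact sketch of the abstract-base-class pattern from the first answer, showing the parent dispatching to the child at runtime:

from abc import ABC, abstractmethod

class A(ABC):
    def methodA(self):
        print("in methodA")
        self.methodb()          # resolved against the concrete subclass at runtime

    @abstractmethod
    def methodb(self): ...

class B(A):
    def methodb(self):
        print("am in methodb")

B().methodA()  # prints "in methodA" then "am in methodb"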
Q: Finding difference between list items list = [4, 7, 11, 15] I'm trying to create a function to loop through list items, and find the difference between list[1] and list[0], and then list[2] and list[1], and then list[3] and list[2]... and so on for the entirety of the list. I am thinking of using a for loop but there might be a better way. Thanks. output would be: list_diff = [3, 4, 4] def difference(list): for items in list(): or def difference(list): list_diff.append(list[1] - list[0]) list_diff.append(list[2] - list[1]) etc. ... A: If you are in Python 3.10+ you could try pairwise: And you should try NOT to use the built-in list as the variable name. It's quite easy and straightforward to make this one-line into a function. from itertools import pairwise >>>[b-a for a, b in pairwise(lst)] # List Comprehension [3, 4, 4] # Or just zip() diffs = [b-a for a, b in zip(lst, lst[1:]) ] # no import A: You can simply loop for each item starting from element 1: def diff(source): return [source[i] - source[i - 1] for i in range(1, len(source))] print(diff([4, 7, 11, 15])) # [3, 4, 4] A: num_list = [4, 7, 11, 15] def difference(numbers): diff_list = [] for i in range(1, len(numbers)): diff_list.append(numbers[i] - numbers[i - 1]) return diff_list print(difference(num_list)) # [3, 4, 4]
Finding difference between list items
list = [4, 7, 11, 15] I'm trying to create a function to loop through list items, and find the difference between list[1] and list[0], and then list[2] and list[1], and then list[3] and list[2]... and so on for the entirety of the list. I am thinking of using a for loop but there might be a better way. Thanks. output would be: list_diff = [3, 4, 4] def difference(list): for items in list(): or def difference(list): list_diff.append(list[1] - list[0]) list_diff.append(list[2] - list[1]) etc. ...
[ "If you are in Python 3.10+ you could try pairwise:\nAnd you should try NOT to use the built-in list as the variable name.\nIt's quite easy and straightforward to make this one-line into a function.\n\nfrom itertools import pairwise\n\n>>>[b-a for a, b in pairwise(lst)] # List Comprehension\n[3, 4, 4]\n\n# Or just zip()\ndiffs = [b-a for a, b in zip(lst, lst[1:]) ] # no import \n\n\n", "You can simply loop for each item starting from element 1:\ndef diff(source):\n return [source[i] - source[i - 1] for i in range(1, len(source))]\n\nprint(diff([4, 7, 11, 15])) # [3, 4, 4]\n\n", "num_list = [4, 7, 11, 15]\n\ndef difference(numbers):\n diff_list = []\n\n for i in range(1, len(numbers)):\n diff_list.append(numbers[i] - numbers[i - 1])\n\n return diff_list\n\n\nprint(difference(num_list)) # [3, 4, 4]\n\n" ]
[ 3, 0, 0 ]
[]
[]
[ "for_loop", "list", "python" ]
stackoverflow_0074636222_for_loop_list_python.txt
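For numeric data there is also numpy.diff, sketched here as a one-call alternative to the loop-based versions above:

import numpy as np

nums = [4, 7, 11, 15]
print(np.diff(nums).tolist())  # [3, 4, 4]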
Q: why Index error at / list index out of range? Hello, I'm having a problem with the index view in my views.py. Apparently the page hits "list index out of range" for a reason I don't understand; it worked perfectly fine before I tried to make the page for adding images as avatars. The index.html is in a templates folder inside the app folder, and I created a media folder outside the app for media storage. This is the views.py: from django.http import HttpResponse from django.shortcuts import render from django.views.generic import ListView from django.views.generic.detail import DetailView from django.views.generic.edit import UpdateView, DeleteView, CreateView from django.contrib.auth.views import LoginView, LogoutView from django.contrib.auth.decorators import login_required from django.contrib.auth.mixins import LoginRequiredMixin from .forms import PosteoForm, SignUpForm, UserEditForm from .models import Posteo, Avatar from django.urls import reverse_lazy # Create your views here. def mostrar_index(request): imagenes = Avatar.objects.filter(user=request.user.id) return render(request, 'index.html', {'url': imagenes[0].images.url}) def mostrar_gallery(request): return render(request,'gallery.html') def mostrar_contact(request): return render(request,'contact.html') def cursoPost(request): return render(request,'Posts.html') @login_required def crear_post(request): if request.method == 'POST': posteo = PosteoForm(request.POST) print('posteo') if posteo.is_valid(): data = posteo.cleaned_data posteo = Posteo (titulo=data['titulo'], texto=data['texto']) posteo.save() return render(request,'index.html') else: posteo = PosteoForm() print('formulario') return render(request,'Posts.html',{'posteo':posteo}) def buscar_post(request): return render(request,'buscador.html') def buscador (request): if request.GET.get ('titulo', False): titulo = request.GET ['titulo'] post = Posteo.objects.filter(titulo__icontains=titulo) return render (request, 'buscador.html',{'post':post}) else: respuesta = 'no hay datos' return render (request, 'buscador.html', {'respuesta':respuesta}) class PosteoList(ListView): model = Posteo template_name = 'mostrar_post.html' class PosteoDetail(DetailView): model = Posteo template_name = 'posteo_detalle.html' def base(request): return render(request,'base.html') class PosteoDeleteView(LoginRequiredMixin, DeleteView): model = Posteo template_name = 'post_confirm_delete.html' success_url = '/mostrarPost' class PosteoUpdateView(LoginRequiredMixin, UpdateView): model = Posteo template_name = 'modificar_post.html' success_url = '/mostrarPost' fields =['titulo', 'subtitulo','texto','nombre', 'email'] class SignUpView(CreateView): form_class = SignUpForm success_url = reverse_lazy('index') template_name = "registro.html" def editar_usuario(request): usuario = request.User if request.method == 'POST': usuario_form = UserEditForm(request.POST) if usuario_form.is_valid(): informacion = usuario_form.cleaned_data usuario.username = informacion['username'] usuario.email = informacion['email'] usuario.password1 = informacion['password1'] usuario.password2 = informacion ['password2'] usuario.save() return render(request,'inicio.html') else: usuario_form = UserEditForm(initial={'username': usuario.username, 'email': usuario.email}) return render(request, 'admin_update.html', {'form': usuario_form, 'usuario': usuario}) class AdminLoginView(LoginView): template_name = 'login.html' class AdminLogoutView(LogoutView): template_name = 'logout.html' and this is the index.html: <!DOCTYPE html> <html lang="en">
<head> {% load static %} <!-- basic --> <meta charset="utf-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <!-- mobile metas --> <meta name="viewport" content="width=device-width, initial-scale=1"> <meta name="viewport" content="initial-scale=1, maximum-scale=1"> <!-- site metas --> <title>Entro</title> <meta name="keywords" content=""> <meta name="description" content=""> <meta name="author" content=""> <!-- fevicon --> <!-- bootstrap css --> <link rel="stylesheet" href={% static "css/bootstrap.min.css" %}> <!-- style css --> <link rel="stylesheet" href={% static "css/style.css" %}> <!-- Responsive--> <link rel="stylesheet" href={% static "css/responsive.css" %}> <!-- Scrollbar Custom CSS --> <link rel="stylesheet" href={% static "css/jquery.mCustomScrollbar.min.css" %}> <!-- Tweaks for older IEs--> <link rel="stylesheet" href="https://netdna.bootstrapcdn.com/font-awesome/4.0.3/css/font-awesome.css"> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/fancybox/2.1.5/jquery.fancybox.min.css" media="screen"> <!--[if lt IE 9]> <script src="https://oss.maxcdn.com/html5shiv/3.7.3/html5shiv.min.js"></script> <script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script><![endif]--> </head> <!-- body --> <body class="main-layout"> <!-- loader --> <div class="loader_bg"> <div class="loader"><img src={% static "images/loading.gif" %} alt="#" /></div> </div> <!-- end loader --> <!-- header --> <header> <!-- header inner --> <div class="header-top"> <div class="header"> <div class="container"> <div class="row"> <div class="col-xl-2 col-lg-2 col-md-2 col-sm-3 col logo_section"> <div class="full"> <div class="center-desk"> <!-- <div class="logo"> <a href="index.html"><img src={% static "images/logo.png" %} alt="#" /></a> </div> --> <!-- boton izquierdo de ticket --> </div> </div> </div> <div class="col-xl-10 col-lg-10 col-md-10 col-sm-9"> <div class="menu-area"> <div class="limit-box"> <nav class="main-menu "> <ul class="menu-area-main"> <li class="active"> <a href="{% url 'index' %}">Inicio</a> </li> <li> <a href="{% url 'mostrarpost' %}">Mostrar Posts</a> </li> <li> <a href="{% url 'crear' %}">Crear Posts</a> </li> <li> <a href="{% url 'buscar' %}">Buscar Posts</a> </li> <li> <a href="{% url 'Galeria'%}">Galeria</a> </li> <li> <a href="{% url 'base'%}">base</a> </li> <li> <a href="{% url 'Sign Up'%}">Usuario</a> </li> <li> <a class="last_manu" href="{% url 'buscar' %}"><img src={% static "images/search_icon.png" %} alt="#" /></a> </li> {% if request.user.is_authenticated %} <li> <a href="{% url 'Logout'%}">Logout</a> </li> {% else %} <li> <a href="{% url 'Login'%}">Login</a> </li> {% endif %} </ul> </nav> </div> </div> </div> </div> </div> </div> <!-- end header inner --> <!-- end header --> <section class="slider_section"> <div id="myCarousel" class="carousel slide" data-ride="carousel"> <ol class="carousel-indicators"> <li data-target="#myCarousel" data-slide-to="0" class="active"></li> <li data-target="#myCarousel" data-slide-to="1"></li> <li data-target="#myCarousel" data-slide-to="2"></li> </ol> <div class="carousel-inner"> <div class="carousel-item active"> <div class="container"> <div class="carousel-caption"> <div class="row"> <div class="col-md-12"> <div class="text-bg"> {% if request.user.is_authenticated %} <img src="{{url}}" alt=""> <h1>Hola {{user.username}} como te va? </h1> <p>Si deseas deslogearte haz click aqui! 
</p> <li> <a href="{% url 'Logout'%}">Logout</a> </li> {% else %} <span>La web</span> <h1>de musica para vos</h1> <p>Esta es una web diseñada para dejar al alcance de tus manos lo que necesitas para tu musica</p> <li> <a href="{% url 'Login'%}">Login</a> </li> {% endif %} </div> </div> </div> </div> </div> </div> <div class="carousel-item"> <div class="container "> <div class="carousel-caption"> <div class="row"> <div class="col-md-12"> <div class="text-bg"> <span>La web</span> <h1>de musica para vos</h1> <p>Haz click en los botones para crear algun posteo o ver los posteos existentes</p> <a href="{% url 'crear' %}">Crea un post</a> <a href="{% url 'CursoPost' %}">Post Musicales </a> </div> </div> </div> </div> </div> </div> <div class="carousel-item"> <div class="container"> <div class="carousel-caption "> <div class="row"> <div class="col-md-12"> <div class="text-bg"> <span>La web</span> <h1>de musica para vos</h1> <p>y algo más..</p> <a href="{% url 'crear' %}">Crea un post</a> <a href="{% url 'CursoPost' %}">Post Musicales </a> </div> </div> </div> </div> </div> </div> </div> <a class="carousel-control-prev" href="#myCarousel" role="button" data-slide="prev"> <i class="fa fa-long-arrow-left" aria-hidden="true"></i> </a> <a class="carousel-control-next" href="#myCarousel" role="button" data-slide="next"> <i class="fa fa-long-arrow-right" aria-hidden="true"></i> </a> </div> </section> </div> </header> <!-- about --> <div id="about" class="about"> <div class="container"> <div class="row display_boxflex"> <div class="col-xl-6 col-lg-6 col-md-6 col-sm-12"> <div class="about-box"> <h2>Acerca de Nosotros</h2> <p>Somos dos estudiantes del curso de Python de Coder House y este es el blog de presentacion de proyecto final</p> <a href="Javascript:void(0)">Leer más</a> </div> </div> <div class="col-xl-6 col-lg-6 col-md-6 col-sm-12"> <div class="about-box"> <figure><img src={% static "images/about.png" %} alt="#" /></figure> </div> </div> </div> </div> </div> <!-- end abouts --> <!-- upcoming --> <!-- end upcoming --> {% block codigoDinamico %} {% endblock %} <!-- Gallery --> <!-- <div id="gallery" class="Gallery"> <div class="container"> <div class="row display_boxflex"> <div class="col-xl-8 col-lg-8 col-md-8 col-sm-12"> <div class="row"> <div class="col-xl-6 col-lg-6 col-md-6 col-sm-12 margi_bott"> <div class="Gallery_img"> <figure><img src={% static "images/Gallery1.jpg"%} alt="#"/></figure> </div> <div class="hover_box"> <ul class="icon_hu"> <h3>Music night</h3> <li><a href="#"><img src={% static "icon/h.png"%} alt="#"/></a></li> <li><a href="#"><img src={% static "icon/h.png"%} alt="#"/></a></li> </ul> </div> </div> <div class="col-xl-6 col-lg-6 col-md-6 col-sm-12 margi_bott"> <div class="Gallery_img"> <figure><img src={% static "images/Gallery2.jpg"%} alt="#"/></figure> </div> <div class="hover_box"> <ul class="icon_hu"> <h3>Music night</h3> <li><a href="#"><img src={% static "icon/h.png"%} alt="#"/></a></li> <li><a href="#"><img src={% static "icon/h.png"%} alt="#"/></a></li> </ul> </div> </div> <div class="col-xl-6 col-lg-6 col-md-6 col-sm-12 margi_bott1"> <div class="Gallery_img"> <figure><img src={% static "images/Gallery3.jpg"%} alt="#"/></figure> </div> <div class="hover_box"> <ul class="icon_hu"> <h3>Music night</h3> <li><a href="#"><img src={% static "icon/h.png"%} alt="#"/></a></li> <li><a href="#"><img src={% static "icon/h.png"%} alt="#"/></a></li> </ul> </div> </div> <div class="col-xl-6 col-lg-6 col-md-6 col-sm-12"> <div class="Gallery_img"> <figure><img src={% static 
"images/Gallery4.jpg"%} alt="#"/></figure> </div> <div class="hover_box"> <ul class="icon_hu"> <h3>Music night</h3> <li><a href="#"><img src={% static "icon/h.png"%} alt="#"/></a></li> <li><a href="#"><img src={% static "icon/h.png"%} alt="#"/></a></li> </ul> </div> </div> </div> </div> <div class="col-xl-4 col-lg-4 col-md-4 col-sm-12"> <div class="Gallery_text"> <div class="titlepage"> <h2>Gallery</h2> </div> <p>It is a long established fact that a reader will be distracted by the readable content of a page when looking at its layout. The point of using Lorem Ipsum is that it has a more-or-less normal distribution of letters, as opposed to usin</p> <a href="Javascript:void(0)">Read More</a> </div> </div> </div> </div> </div> --> <!-- end Gallery --> <!-- footer --> <footr> <div class="footer "> <div class="container"> <div class="row"> <div class="col-md-12"> <form class="contact_bg"> <div class="row"> <div class="col-md-12"> <div class="titlepage"> <h2>Contactanos</h2> </div> <div class="col-md-12"> <input class="contactus" placeholder="Tu nombre" type="text" name="Your Name"> </div> <div class="col-md-12"> <input class="contactus" placeholder="Tu Email" type="text" name="Your Email"> </div> <div class="col-md-12"> <input class="contactus" placeholder="Numero de telefono" type="text" name="Your Phone"> </div> <div class="col-md-12"> <textarea class="textarea" placeholder="Mensaje" type="text" name="Message"></textarea> </div> <div class="col-xl-12 col-lg-12 col-md-12 col-sm-12"> <button class="send">Enviar</button> </div> </div> </div> </form> </div> <div class="col-md-12 border_top"> <form class="news"> <h3>Newsletter</h3> <input class="newslatter" placeholder="ENTER YOUR MAIL" type="text" name=" ENTER YOUR MAIL"> <button class="submit">Subscribe</button> </form> </div> <div class="col-xl-12 col-lg-12 col-md-12 col-sm-12 "> <div class="row"> <!-- <div class="col-xl-12 col-lg-12 col-md-12 col-sm-12 "> <div class="address"> <ul class="loca"> <li> <a href="#"><img src={% static "icon/loc.png" %}alt="#" /></a>Locations <li> <a href="#"><img src={% static "icon/call.png"%} alt="#" /></a>+12586954775 </li> <li> <a href="#"><img src={% static "icon/email.png"%} alt="#" /></a>[email protected] </li> </ul> --> </div> </div> <div class="col-xl-12 col-lg-12 col-md-12 col-sm-12 "> <ul class="social_link"> <li><a href="#"><i class="fa fa-facebook" aria-hidden="true"></i></a></li> <li><a href="#"><i class="fa fa-twitter" aria-hidden="true"></i></a></li> <li><a href="#"><i class="fa fa-linkedin-square" aria-hidden="true"></i></a></li> <li><a href="#"><i class="fa fa-instagram" aria-hidden="true"></i></a></li> </ul> </div> </div> </div> </div> </div> <div class="container"> <div class="copyright"> <p>Copyright 2019 All Right Reserved By <a href="https://html.design/">Free html Templates</a></p> </div> </div> </div> </footr> <!-- end footer --> <!-- Javascript files--> <script src={% static "js/jquery.min.js"%}></script> <script src={% static "js/popper.min.js"%}></script> <script src={% static "js/bootstrap.bundle.min.js"%}></script> <script src={% static "js/jquery-3.0.0.min.js"%}></script> <script src={% static "js/plugin.js"%}></script> <!-- sidebar --> <script src={% static "js/jquery.mCustomScrollbar.concat.min.js"%}></script> <script src={% static "js/custom.js"%}></script> <script src="https:cdnjs.cloudflare.com/ajax/libs/fancybox/2.1.5/jquery.fancybox.min.js"></script> </body> </html> A: def mostrar_index(request): imagenes = Avatar.objects.filter(user=request.user.id) return render(request, 
'index.html', {'avatar': imagenes.first()}) {% if request.user.is_authenticated %} {% if avatar %} <img src="{{avatar.images.url}}" alt=""> {% else %} <img src="{% static 'default/image.ext' %}" alt=""> {% endif %} <h1>Hola {{user.username}} como te va? </h1> <p>Si deseas deslogearte haz click aqui! </p> <li> <a href="{% url 'Logout'%}">Logout</a> </li> {% else %} ... {% endif %}
why Index error at / list index out of range?
hello im having a problem with the index code in my views.py, aparrently the HTML index is out of range for some reason i dont understand, because before i wanted to make the page to add images as avatars work perfectly fine the index .html is in a templates folder , inside the APP folder, and i created a media folder outside of the APP for storage of media this is the views.py from django.http import HttpResponse from django.shortcuts import render from django.views.generic import ListView from django.views.generic.detail import DetailView from django.views.generic.edit import UpdateView, DeleteView, CreateView from django.contrib.auth.views import LoginView, LogoutView from django.contrib.auth.decorators import login_required from django.contrib.auth.mixins import LoginRequiredMixin from .forms import PosteoForm, SignUpForm, UserEditForm from .models import Posteo, Avatar from django.urls import reverse_lazy # Create your views here. def mostrar_index(request): imagenes = Avatar.objects.filter(user=request.user.id) return render(request, 'index.html', {'url': imagenes[0].images.url}) def mostrar_gallery(request): return render(request,'gallery.html') def mostrar_contact(request): return render(request,'contact.html') def cursoPost(request): return render(request,'Posts.html') @login_required def crear_post(request): if request.method == 'POST': posteo = PosteoForm(request.POST) print('posteo') if posteo.is_valid(): data = posteo.cleaned_data posteo = Posteo (titulo=data['titulo'], texto=data['texto']) posteo.save() return render(request,'index.html') else: posteo = PosteoForm() print('formulario') return render(request,'Posts.html',{'posteo':posteo}) def buscar_post(request): return render(request,'buscador.html') def buscador (request): if request.GET.get ('titulo', False): titulo = request.GET ['titulo'] post = Posteo.objects.filter(titulo__icontains=titulo) return render (request, 'buscador.html',{'post':post}) else: respuesta = 'no hay datos' return render (request, 'buscador.html', {'respuesta':respuesta}) class PosteoList(ListView): model = Posteo template_name = 'mostrar_post.html' class PosteoDetail(DetailView): model = Posteo template_name = 'posteo_detalle.html' def base(request): return render(request,'base.html') class PosteoDeleteView(LoginRequiredMixin, DeleteView): model = Posteo template_name = 'post_confirm_delete.html' success_url = '/mostrarPost' class PosteoUpdateView(LoginRequiredMixin, UpdateView): model = Posteo template_name = 'modificar_post.html' success_url = '/mostrarPost' fields =['titulo', 'subtitulo','texto','nombre', 'email'] class SignUpView(CreateView): form_class = SignUpForm success_url = reverse_lazy('index') template_name = "registro.html" def editar_usuario(request): usuario = request.User if request.method == 'POST': usuario_form = UserEditForm(request.POST) if usuario_form.is_valid(): informacion = usuario_form.cleaned_data usuario.username = informacion['username'] usuario.email = informacion['email'] usuario.password1 = informacion['password1'] usuario.password2 = informacion ['password2'] usuario.save() return render(request,'inicio.html') else: usuario_form = UserEditForm(initial={'username': usuario.username, 'email': usuario.email}) return render(request, 'admin_update.html', {'form': usuario_form, 'usuario': usuario}) class AdminLoginView(LoginView): template_name = 'login.html' class AdminLogoutView(LogoutView): template_name = 'logout.html' the index.html <!DOCTYPE html> <html lang="en"> <head> {% load static %} <!-- basic --> <meta 
charset="utf-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <!-- mobile metas --> <meta name="viewport" content="width=device-width, initial-scale=1"> <meta name="viewport" content="initial-scale=1, maximum-scale=1"> <!-- site metas --> <title>Entro</title> <meta name="keywords" content=""> <meta name="description" content=""> <meta name="author" content=""> <!-- fevicon --> <!-- bootstrap css --> <link rel="stylesheet" href={% static "css/bootstrap.min.css" %}> <!-- style css --> <link rel="stylesheet" href={% static "css/style.css" %}> <!-- Responsive--> <link rel="stylesheet" href={% static "css/responsive.css" %}> <!-- Scrollbar Custom CSS --> <link rel="stylesheet" href={% static "css/jquery.mCustomScrollbar.min.css" %}> <!-- Tweaks for older IEs--> <link rel="stylesheet" href="https://netdna.bootstrapcdn.com/font-awesome/4.0.3/css/font-awesome.css"> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/fancybox/2.1.5/jquery.fancybox.min.css" media="screen"> <!--[if lt IE 9]> <script src="https://oss.maxcdn.com/html5shiv/3.7.3/html5shiv.min.js"></script> <script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script><![endif]--> </head> <!-- body --> <body class="main-layout"> <!-- loader --> <div class="loader_bg"> <div class="loader"><img src={% static "images/loading.gif" %} alt="#" /></div> </div> <!-- end loader --> <!-- header --> <header> <!-- header inner --> <div class="header-top"> <div class="header"> <div class="container"> <div class="row"> <div class="col-xl-2 col-lg-2 col-md-2 col-sm-3 col logo_section"> <div class="full"> <div class="center-desk"> <!-- <div class="logo"> <a href="index.html"><img src={% static "images/logo.png" %} alt="#" /></a> </div> --> <!-- boton izquierdo de ticket --> </div> </div> </div> <div class="col-xl-10 col-lg-10 col-md-10 col-sm-9"> <div class="menu-area"> <div class="limit-box"> <nav class="main-menu "> <ul class="menu-area-main"> <li class="active"> <a href="{% url 'index' %}">Inicio</a> </li> <li> <a href="{% url 'mostrarpost' %}">Mostrar Posts</a> </li> <li> <a href="{% url 'crear' %}">Crear Posts</a> </li> <li> <a href="{% url 'buscar' %}">Buscar Posts</a> </li> <li> <a href="{% url 'Galeria'%}">Galeria</a> </li> <li> <a href="{% url 'base'%}">base</a> </li> <li> <a href="{% url 'Sign Up'%}">Usuario</a> </li> <li> <a class="last_manu" href="{% url 'buscar' %}"><img src={% static "images/search_icon.png" %} alt="#" /></a> </li> {% if request.user.is_authenticated %} <li> <a href="{% url 'Logout'%}">Logout</a> </li> {% else %} <li> <a href="{% url 'Login'%}">Login</a> </li> {% endif %} </ul> </nav> </div> </div> </div> </div> </div> </div> <!-- end header inner --> <!-- end header --> <section class="slider_section"> <div id="myCarousel" class="carousel slide" data-ride="carousel"> <ol class="carousel-indicators"> <li data-target="#myCarousel" data-slide-to="0" class="active"></li> <li data-target="#myCarousel" data-slide-to="1"></li> <li data-target="#myCarousel" data-slide-to="2"></li> </ol> <div class="carousel-inner"> <div class="carousel-item active"> <div class="container"> <div class="carousel-caption"> <div class="row"> <div class="col-md-12"> <div class="text-bg"> {% if request.user.is_authenticated %} <img src="{{url}}" alt=""> <h1>Hola {{user.username}} como te va? </h1> <p>Si deseas deslogearte haz click aqui! 
</p> <li> <a href="{% url 'Logout'%}">Logout</a> </li> {% else %} <span>La web</span> <h1>de musica para vos</h1> <p>Esta es una web diseñada para dejar al alcance de tus manos lo que necesitas para tu musica</p> <li> <a href="{% url 'Login'%}">Login</a> </li> {% endif %} </div> </div> </div> </div> </div> </div> <div class="carousel-item"> <div class="container "> <div class="carousel-caption"> <div class="row"> <div class="col-md-12"> <div class="text-bg"> <span>La web</span> <h1>de musica para vos</h1> <p>Haz click en los botones para crear algun posteo o ver los posteos existentes</p> <a href="{% url 'crear' %}">Crea un post</a> <a href="{% url 'CursoPost' %}">Post Musicales </a> </div> </div> </div> </div> </div> </div> <div class="carousel-item"> <div class="container"> <div class="carousel-caption "> <div class="row"> <div class="col-md-12"> <div class="text-bg"> <span>La web</span> <h1>de musica para vos</h1> <p>y algo más..</p> <a href="{% url 'crear' %}">Crea un post</a> <a href="{% url 'CursoPost' %}">Post Musicales </a> </div> </div> </div> </div> </div> </div> </div> <a class="carousel-control-prev" href="#myCarousel" role="button" data-slide="prev"> <i class="fa fa-long-arrow-left" aria-hidden="true"></i> </a> <a class="carousel-control-next" href="#myCarousel" role="button" data-slide="next"> <i class="fa fa-long-arrow-right" aria-hidden="true"></i> </a> </div> </section> </div> </header> <!-- about --> <div id="about" class="about"> <div class="container"> <div class="row display_boxflex"> <div class="col-xl-6 col-lg-6 col-md-6 col-sm-12"> <div class="about-box"> <h2>Acerca de Nosotros</h2> <p>Somos dos estudiantes del curso de Python de Coder House y este es el blog de presentacion de proyecto final</p> <a href="Javascript:void(0)">Leer más</a> </div> </div> <div class="col-xl-6 col-lg-6 col-md-6 col-sm-12"> <div class="about-box"> <figure><img src={% static "images/about.png" %} alt="#" /></figure> </div> </div> </div> </div> </div> <!-- end abouts --> <!-- upcoming --> <!-- end upcoming --> {% block codigoDinamico %} {% endblock %} <!-- Gallery --> <!-- <div id="gallery" class="Gallery"> <div class="container"> <div class="row display_boxflex"> <div class="col-xl-8 col-lg-8 col-md-8 col-sm-12"> <div class="row"> <div class="col-xl-6 col-lg-6 col-md-6 col-sm-12 margi_bott"> <div class="Gallery_img"> <figure><img src={% static "images/Gallery1.jpg"%} alt="#"/></figure> </div> <div class="hover_box"> <ul class="icon_hu"> <h3>Music night</h3> <li><a href="#"><img src={% static "icon/h.png"%} alt="#"/></a></li> <li><a href="#"><img src={% static "icon/h.png"%} alt="#"/></a></li> </ul> </div> </div> <div class="col-xl-6 col-lg-6 col-md-6 col-sm-12 margi_bott"> <div class="Gallery_img"> <figure><img src={% static "images/Gallery2.jpg"%} alt="#"/></figure> </div> <div class="hover_box"> <ul class="icon_hu"> <h3>Music night</h3> <li><a href="#"><img src={% static "icon/h.png"%} alt="#"/></a></li> <li><a href="#"><img src={% static "icon/h.png"%} alt="#"/></a></li> </ul> </div> </div> <div class="col-xl-6 col-lg-6 col-md-6 col-sm-12 margi_bott1"> <div class="Gallery_img"> <figure><img src={% static "images/Gallery3.jpg"%} alt="#"/></figure> </div> <div class="hover_box"> <ul class="icon_hu"> <h3>Music night</h3> <li><a href="#"><img src={% static "icon/h.png"%} alt="#"/></a></li> <li><a href="#"><img src={% static "icon/h.png"%} alt="#"/></a></li> </ul> </div> </div> <div class="col-xl-6 col-lg-6 col-md-6 col-sm-12"> <div class="Gallery_img"> <figure><img src={% static 
"images/Gallery4.jpg"%} alt="#"/></figure> </div> <div class="hover_box"> <ul class="icon_hu"> <h3>Music night</h3> <li><a href="#"><img src={% static "icon/h.png"%} alt="#"/></a></li> <li><a href="#"><img src={% static "icon/h.png"%} alt="#"/></a></li> </ul> </div> </div> </div> </div> <div class="col-xl-4 col-lg-4 col-md-4 col-sm-12"> <div class="Gallery_text"> <div class="titlepage"> <h2>Gallery</h2> </div> <p>It is a long established fact that a reader will be distracted by the readable content of a page when looking at its layout. The point of using Lorem Ipsum is that it has a more-or-less normal distribution of letters, as opposed to usin</p> <a href="Javascript:void(0)">Read More</a> </div> </div> </div> </div> </div> --> <!-- end Gallery --> <!-- footer --> <footr> <div class="footer "> <div class="container"> <div class="row"> <div class="col-md-12"> <form class="contact_bg"> <div class="row"> <div class="col-md-12"> <div class="titlepage"> <h2>Contactanos</h2> </div> <div class="col-md-12"> <input class="contactus" placeholder="Tu nombre" type="text" name="Your Name"> </div> <div class="col-md-12"> <input class="contactus" placeholder="Tu Email" type="text" name="Your Email"> </div> <div class="col-md-12"> <input class="contactus" placeholder="Numero de telefono" type="text" name="Your Phone"> </div> <div class="col-md-12"> <textarea class="textarea" placeholder="Mensaje" type="text" name="Message"></textarea> </div> <div class="col-xl-12 col-lg-12 col-md-12 col-sm-12"> <button class="send">Enviar</button> </div> </div> </div> </form> </div> <div class="col-md-12 border_top"> <form class="news"> <h3>Newsletter</h3> <input class="newslatter" placeholder="ENTER YOUR MAIL" type="text" name=" ENTER YOUR MAIL"> <button class="submit">Subscribe</button> </form> </div> <div class="col-xl-12 col-lg-12 col-md-12 col-sm-12 "> <div class="row"> <!-- <div class="col-xl-12 col-lg-12 col-md-12 col-sm-12 "> <div class="address"> <ul class="loca"> <li> <a href="#"><img src={% static "icon/loc.png" %}alt="#" /></a>Locations <li> <a href="#"><img src={% static "icon/call.png"%} alt="#" /></a>+12586954775 </li> <li> <a href="#"><img src={% static "icon/email.png"%} alt="#" /></a>[email protected] </li> </ul> --> </div> </div> <div class="col-xl-12 col-lg-12 col-md-12 col-sm-12 "> <ul class="social_link"> <li><a href="#"><i class="fa fa-facebook" aria-hidden="true"></i></a></li> <li><a href="#"><i class="fa fa-twitter" aria-hidden="true"></i></a></li> <li><a href="#"><i class="fa fa-linkedin-square" aria-hidden="true"></i></a></li> <li><a href="#"><i class="fa fa-instagram" aria-hidden="true"></i></a></li> </ul> </div> </div> </div> </div> </div> <div class="container"> <div class="copyright"> <p>Copyright 2019 All Right Reserved By <a href="https://html.design/">Free html Templates</a></p> </div> </div> </div> </footr> <!-- end footer --> <!-- Javascript files--> <script src={% static "js/jquery.min.js"%}></script> <script src={% static "js/popper.min.js"%}></script> <script src={% static "js/bootstrap.bundle.min.js"%}></script> <script src={% static "js/jquery-3.0.0.min.js"%}></script> <script src={% static "js/plugin.js"%}></script> <!-- sidebar --> <script src={% static "js/jquery.mCustomScrollbar.concat.min.js"%}></script> <script src={% static "js/custom.js"%}></script> <script src="https:cdnjs.cloudflare.com/ajax/libs/fancybox/2.1.5/jquery.fancybox.min.js"></script> </body> </html>
[ "def mostrar_index(request):\n imagenes = Avatar.objects.filter(user=request.user.id)\n return render(request, 'index.html', {'avatar': imagenes})\n\n{% if request.user.is_authenticated %}\n {% if avatar %}\n <img src=\"{{avatar.images.url}}\" alt=\"\">\n {% else %}\n <img src=\"{% static 'default/image.ext' %}\" alt=\"\">\n {% endif %}\n <h1>Hola {{user.username}} como te va? </h1>\n <p>Si deseas deslogearte haz click aqui! </p>\n <li> <a href=\"{% url 'Logout'%}\">Logout</a> </li>\n{% else %}\n...\n{% endif %}\n\n" ]
[ 0 ]
[]
[]
[ "css", "django", "html", "python" ]
stackoverflow_0074635380_css_django_html_python.txt
Q: How do I install pandas datareader on windows when hit with this error? I am trying to install pandas datareader, but I am hit with this error: Collecting pandas-datareader Using cached pandas_datareader-0.10.0-py3-none-any.whl (109 kB) Collecting lxml Using cached lxml-4.9.1.tar.gz (3.4 MB) Preparing metadata (setup.py) ... done Requirement already satisfied: pandas>=0.23 in c:\users\marcu\appdata\local\programs\python\python311\lib\site-packages (from pandas-datareader) (1.5.2) Requirement already satisfied: requests>=2.19.0 in c:\users\marcu\appdata\local\programs\python\python311\lib\site-packages (from pandas-datareader) (2.28.1) Requirement already satisfied: python-dateutil>=2.8.1 in c:\users\marcu\appdata\local\programs\python\python311\lib\site-packages (from pandas>=0.23->pandas-datareader) (2.8.2) Requirement already satisfied: pytz>=2020.1 in c:\users\marcu\appdata\local\programs\python\python311\lib\site-packages (from pandas>=0.23->pandas-datareader) (2022.6) Requirement already satisfied: numpy>=1.21.0 in c:\users\marcu\appdata\local\programs\python\python311\lib\site-packages (from pandas>=0.23->pandas-datareader) (1.23.5) Requirement already satisfied: charset-normalizer<3,>=2 in c:\users\marcu\appdata\local\programs\python\python311\lib\site-packages (from requests>=2.19.0->pandas-datareader) (2.1.1) Requirement already satisfied: idna<4,>=2.5 in c:\users\marcu\appdata\local\programs\python\python311\lib\site-packages (from requests>=2.19.0->pandas-datareader) (3.4) Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\users\marcu\appdata\local\programs\python\python311\lib\site-packages (from requests>=2.19.0->pandas-datareader) (1.26.13) Requirement already satisfied: certifi>=2017.4.17 in c:\users\marcu\appdata\local\programs\python\python311\lib\site-packages (from requests>=2.19.0->pandas-datareader) (2022.9.24) Requirement already satisfied: six>=1.5 in c:\users\marcu\appdata\local\programs\python\python311\lib\site-packages (from python-dateutil>=2.8.1->pandas>=0.23->pandas-datareader) (1.16.0) Installing collected packages: lxml, pandas-datareader DEPRECATION: lxml is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559 Running setup.py install for lxml ... error error: subprocess-exited-with-error × Running setup.py install for lxml did not run successfully. │ exit code: 1 ╰─> [96 lines of output] Building lxml version 4.9.1. Building without Cython. Building against pre-built libxml2 andl libxslt libraries running install C:\Users\marcu\AppData\Local\Programs\Python\Python311\Lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. 
warnings.warn( running build running build_py creating build creating build\lib.win-amd64-cpython-311 creating build\lib.win-amd64-cpython-311\lxml copying src\lxml\builder.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\cssselect.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\doctestcompare.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\ElementInclude.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\pyclasslookup.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\sax.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\usedoctest.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\_elementpath.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\__init__.py -> build\lib.win-amd64-cpython-311\lxml creating build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\__init__.py -> build\lib.win-amd64-cpython-311\lxml\includes creating build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\builder.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\clean.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\defs.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\diff.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\ElementSoup.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\formfill.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\html5parser.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\soupparser.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\usedoctest.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\_diffcommand.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\_html5builder.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\_setmixin.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\__init__.py -> build\lib.win-amd64-cpython-311\lxml\html creating build\lib.win-amd64-cpython-311\lxml\isoschematron copying src\lxml\isoschematron\__init__.py -> build\lib.win-amd64-cpython-311\lxml\isoschematron copying src\lxml\etree.h -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\etree_api.h -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\lxml.etree.h -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\lxml.etree_api.h -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\includes\c14n.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\config.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\dtdvalid.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\etreepublic.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\htmlparser.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\relaxng.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\schematron.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\tree.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\uri.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xinclude.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xmlerror.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xmlparser.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xmlschema.pxd -> 
build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xpath.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xslt.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\__init__.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\etree_defs.h -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\lxml-version.h -> build\lib.win-amd64-cpython-311\lxml\includes creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\rng copying src\lxml\isoschematron\resources\rng\iso-schematron.rng -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\rng creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl copying src\lxml\isoschematron\resources\xsl\RNG2Schtrn.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl copying src\lxml\isoschematron\resources\xsl\XSD2Schtrn.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_abstract_expand.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_dsdl_include.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_message.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_skeleton_for_xslt1.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_svrl_for_xslt1.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\readme.txt -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 running build_ext building 'lxml.etree' extension creating build\temp.win-amd64-cpython-311 creating build\temp.win-amd64-cpython-311\Release creating build\temp.win-amd64-cpython-311\Release\src creating build\temp.win-amd64-cpython-311\Release\src\lxml "C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.32.31326\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -DCYTHON_CLINE_IN_TRACEBACK=0 -Isrc -Isrc\lxml\includes -IC:\Users\marcu\AppData\Local\Programs\Python\Python311\include -IC:\Users\marcu\AppData\Local\Programs\Python\Python311\Include "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.32.31326\ATLMFC\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.32.31326\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" /Tcsrc\lxml\etree.c /Fobuild\temp.win-amd64-cpython-311\Release\src\lxml\etree.obj -w cl : Command line warning D9025 : overriding '/W3' with '/w' etree.c C:\Users\marcu\AppData\Local\Programs\Python\Python311\include\pyconfig.h(59): fatal error C1083: Cannot open include file: 'io.h': No such file or directory Compile failed: command 'C:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.32.31326\\bin\\HostX86\\x64\\cl.exe' 
failed with exit code 2 creating Users creating Users\marcu creating Users\marcu\AppData creating Users\marcu\AppData\Local creating Users\marcu\AppData\Local\Temp "C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.32.31326\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -I/usr/include/libxml2 "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.32.31326\ATLMFC\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.32.31326\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" /TcC:\Users\marcu\AppData\Local\Temp\xmlXPathInit65mzfy1g.c /FoUsers\marcu\AppData\Local\Temp\xmlXPathInit65mzfy1g.obj xmlXPathInit65mzfy1g.c C:\Users\marcu\AppData\Local\Temp\xmlXPathInit65mzfy1g.c(1): fatal error C1083: Cannot open include file: 'libxml/xpath.h': No such file or directory error: command 'C:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.32.31326\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2 ********************************************************************************* Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed? ********************************************************************************* [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure × Encountered error while trying to install package. ╰─> lxml note: This is an issue with the package mentioned above, not pip. hint: See above for output from the failure. I am using python 3.11.0, and the command I am using to install pandas datareader is "pip install pandas-datareader" Thanks for the help! A: try: pip install --upgrade pip On Windows the recommended command is: python -m pip install --upgrade pip A: You're almost there. Check the error logs carefully. You'll notice it's asking for: Is libxml2 installed? Try installing that with pip then retry.
How do I install pandas-datareader on Windows when hit with this error?
I am trying to install pandas datareader, but I am hit with this error: Collecting pandas-datareader Using cached pandas_datareader-0.10.0-py3-none-any.whl (109 kB) Collecting lxml Using cached lxml-4.9.1.tar.gz (3.4 MB) Preparing metadata (setup.py) ... done Requirement already satisfied: pandas>=0.23 in c:\users\marcu\appdata\local\programs\python\python311\lib\site-packages (from pandas-datareader) (1.5.2) Requirement already satisfied: requests>=2.19.0 in c:\users\marcu\appdata\local\programs\python\python311\lib\site-packages (from pandas-datareader) (2.28.1) Requirement already satisfied: python-dateutil>=2.8.1 in c:\users\marcu\appdata\local\programs\python\python311\lib\site-packages (from pandas>=0.23->pandas-datareader) (2.8.2) Requirement already satisfied: pytz>=2020.1 in c:\users\marcu\appdata\local\programs\python\python311\lib\site-packages (from pandas>=0.23->pandas-datareader) (2022.6) Requirement already satisfied: numpy>=1.21.0 in c:\users\marcu\appdata\local\programs\python\python311\lib\site-packages (from pandas>=0.23->pandas-datareader) (1.23.5) Requirement already satisfied: charset-normalizer<3,>=2 in c:\users\marcu\appdata\local\programs\python\python311\lib\site-packages (from requests>=2.19.0->pandas-datareader) (2.1.1) Requirement already satisfied: idna<4,>=2.5 in c:\users\marcu\appdata\local\programs\python\python311\lib\site-packages (from requests>=2.19.0->pandas-datareader) (3.4) Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\users\marcu\appdata\local\programs\python\python311\lib\site-packages (from requests>=2.19.0->pandas-datareader) (1.26.13) Requirement already satisfied: certifi>=2017.4.17 in c:\users\marcu\appdata\local\programs\python\python311\lib\site-packages (from requests>=2.19.0->pandas-datareader) (2022.9.24) Requirement already satisfied: six>=1.5 in c:\users\marcu\appdata\local\programs\python\python311\lib\site-packages (from python-dateutil>=2.8.1->pandas>=0.23->pandas-datareader) (1.16.0) Installing collected packages: lxml, pandas-datareader DEPRECATION: lxml is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559 Running setup.py install for lxml ... error error: subprocess-exited-with-error × Running setup.py install for lxml did not run successfully. │ exit code: 1 ╰─> [96 lines of output] Building lxml version 4.9.1. Building without Cython. Building against pre-built libxml2 andl libxslt libraries running install C:\Users\marcu\AppData\Local\Programs\Python\Python311\Lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. 
warnings.warn( running build running build_py creating build creating build\lib.win-amd64-cpython-311 creating build\lib.win-amd64-cpython-311\lxml copying src\lxml\builder.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\cssselect.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\doctestcompare.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\ElementInclude.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\pyclasslookup.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\sax.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\usedoctest.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\_elementpath.py -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\__init__.py -> build\lib.win-amd64-cpython-311\lxml creating build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\__init__.py -> build\lib.win-amd64-cpython-311\lxml\includes creating build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\builder.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\clean.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\defs.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\diff.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\ElementSoup.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\formfill.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\html5parser.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\soupparser.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\usedoctest.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\_diffcommand.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\_html5builder.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\_setmixin.py -> build\lib.win-amd64-cpython-311\lxml\html copying src\lxml\html\__init__.py -> build\lib.win-amd64-cpython-311\lxml\html creating build\lib.win-amd64-cpython-311\lxml\isoschematron copying src\lxml\isoschematron\__init__.py -> build\lib.win-amd64-cpython-311\lxml\isoschematron copying src\lxml\etree.h -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\etree_api.h -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\lxml.etree.h -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\lxml.etree_api.h -> build\lib.win-amd64-cpython-311\lxml copying src\lxml\includes\c14n.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\config.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\dtdvalid.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\etreepublic.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\htmlparser.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\relaxng.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\schematron.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\tree.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\uri.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xinclude.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xmlerror.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xmlparser.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xmlschema.pxd -> 
build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xpath.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\xslt.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\__init__.pxd -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\etree_defs.h -> build\lib.win-amd64-cpython-311\lxml\includes copying src\lxml\includes\lxml-version.h -> build\lib.win-amd64-cpython-311\lxml\includes creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\rng copying src\lxml\isoschematron\resources\rng\iso-schematron.rng -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\rng creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl copying src\lxml\isoschematron\resources\xsl\RNG2Schtrn.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl copying src\lxml\isoschematron\resources\xsl\XSD2Schtrn.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_abstract_expand.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_dsdl_include.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_message.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_skeleton_for_xslt1.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_svrl_for_xslt1.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\readme.txt -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 running build_ext building 'lxml.etree' extension creating build\temp.win-amd64-cpython-311 creating build\temp.win-amd64-cpython-311\Release creating build\temp.win-amd64-cpython-311\Release\src creating build\temp.win-amd64-cpython-311\Release\src\lxml "C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.32.31326\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -DCYTHON_CLINE_IN_TRACEBACK=0 -Isrc -Isrc\lxml\includes -IC:\Users\marcu\AppData\Local\Programs\Python\Python311\include -IC:\Users\marcu\AppData\Local\Programs\Python\Python311\Include "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.32.31326\ATLMFC\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.32.31326\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" /Tcsrc\lxml\etree.c /Fobuild\temp.win-amd64-cpython-311\Release\src\lxml\etree.obj -w cl : Command line warning D9025 : overriding '/W3' with '/w' etree.c C:\Users\marcu\AppData\Local\Programs\Python\Python311\include\pyconfig.h(59): fatal error C1083: Cannot open include file: 'io.h': No such file or directory Compile failed: command 'C:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.32.31326\\bin\\HostX86\\x64\\cl.exe' 
failed with exit code 2 creating Users creating Users\marcu creating Users\marcu\AppData creating Users\marcu\AppData\Local creating Users\marcu\AppData\Local\Temp "C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.32.31326\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -I/usr/include/libxml2 "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.32.31326\ATLMFC\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.32.31326\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" /TcC:\Users\marcu\AppData\Local\Temp\xmlXPathInit65mzfy1g.c /FoUsers\marcu\AppData\Local\Temp\xmlXPathInit65mzfy1g.obj xmlXPathInit65mzfy1g.c C:\Users\marcu\AppData\Local\Temp\xmlXPathInit65mzfy1g.c(1): fatal error C1083: Cannot open include file: 'libxml/xpath.h': No such file or directory error: command 'C:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.32.31326\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2 ********************************************************************************* Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed? ********************************************************************************* [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure × Encountered error while trying to install package. ╰─> lxml note: This is an issue with the package mentioned above, not pip. hint: See above for output from the failure. I am using python 3.11.0, and the command I am using to install pandas datareader is "pip install pandas-datareader" Thanks for the help!
[ "try:\n\npip install --upgrade pip\n\nOn Windows the recommended command is:\n\npython -m pip install --upgrade pip\n\n", "You're almost there. Check the error logs carefully. You'll notice it's asking for:\nIs libxml2 installed?\n\nTry installing that with pip then retry.\n" ]
[ 0, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074636322_pandas_python.txt
Q: Problem with Logging Module in Google Colab I have a Python script with error handling using the logging module. Although this script works when imported into Google Colab, it doesn't log the errors in the log file. As an experiment, I tried the following script in Google Colab just to see if it writes a log at all import logging logging.basicConfig(filename="log_file_test.log", filemode='a', format='%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s', datefmt='%H:%M:%S', level=logging.DEBUG) logging.info("This is a test log ..") To my dismay, it didn't even create a log file named log_file_test.log. I tried running the same script locally and it did produce a file log_file_test.log with the following text 13:20:53,441 root INFO This is a test log .. What is it that I am missing here? For the time being, I am replacing the error logs with print statements, but I assume that there must be a workaround to this. A: Perhaps you've reconfigured your environment somehow? (Try Runtime menu -> Reset all runtimes...) Your snippet works exactly as written for me. A: logging.basicConfig can be run just once* Any subsequent call to basicConfig is ignored. * unless you are on Python 3.8+ and use the flag force=True logging.basicConfig(filename='app.log', level=logging.DEBUG, force=True, # Resets any previous configuration ) Workarounds (2) (1) You can easily reset the Colab workspace with this command exit Wait for it to come back and try your commands again. (2) But, if you plan to do the reset more than once and/or are learning to use logging, maybe it is better to use %%python magic to run the entire cell in a subprocess. See photo below. What is it that I am missing here? Deeper understanding of how logging works. It is a bit tricky, but there are many good web pages explaining the gotchas. In Colab https://realpython.com/python-logging A: [This answer][1] covers the issue. You have to: Clear your log handlers from the environment with logging.root.removeHandler Set log level with logging.getLogger('RootLogger').setLevel(logging.DEBUG). Setting level with logging.basicConfig only did not work for me.
Problem with Logging Module in Google Colab
I have a Python script with error handling using the logging module. Although this script works when imported into Google Colab, it doesn't log the errors in the log file. As an experiment, I tried the following script in Google Colab just to see if it writes a log at all import logging logging.basicConfig(filename="log_file_test.log", filemode='a', format='%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s', datefmt='%H:%M:%S', level=logging.DEBUG) logging.info("This is a test log ..") To my dismay, it didn't even create a log file named log_file_test.log. I tried running the same script locally and it did produce a file log_file_test.log with the following text 13:20:53,441 root INFO This is a test log .. What is it that I am missing here? For the time being, I am replacing the error logs with print statements, but I assume that there must be a workaround to this.
[ "Perhaps you've reconfigured your environment somehow? (Try Runtime menu -> Reset all runtimes...) Your snippets works exactly as written for me --\n\n", "logging.basicConfig can be run just once*\nAny subsequent call to basicConfig is ignored.\n* unless you are in Python 3.8 and use the flag force=True\nlogging.basicConfig(filename='app.log',\n level=logging.DEBUG,\n force=True, # Resets any previous configuration\n )\n\n\nWorkarounds (2)\n(1) You can easily reset the Colab workspace with this command\nexit\n\nWait for it to come back and try your commands again.\n(2) But, if you plan to do the reset more than once and/or are learning to use logging, maybe it is better to use %%python magic to run the entire cell in a subprocess. See photo below.\n\n\n\nWhat is it that I am missing here?\n\nDeeper understanding of how logging works. It is a bit tricky, but there are many good webs explaining the gotchas.\n\nIn Colab\nhttps://realpython.com/python-logging\n\n", "[This answer][1] cover the issue.\nYou have to:\n\nClear your log handlers from the environment with logging.root.removeHandler\nSet log level with logging.getLogger('RootLogger').setLevel(logging.DEBUG).\n\nSetting level with logging.basicConfig only did not work for me.\n" ]
[ 7, 2, 0 ]
[]
[]
[ "error_handling", "google_colaboratory", "jupyter_notebook", "logging", "python" ]
stackoverflow_0054597462_error_handling_google_colaboratory_jupyter_notebook_logging_python.txt
Q: Delete a key-value pair from a JSON file I'm trying to delete a key and its value from a JSON file. I tried the code below, but nothing changes in the file. Can anyone modify it and assist me? reda.json file [{"carl": 33}, {"carl": 55}, {"user": "user2", "id": "21780"}, {"user": "user2"}, {"user": "123"}, {"user": []}, {"user": []}] import json json_data = json.load(open('reda.json')) k = "carl" for d in json_data: if k in d: del d[k] A: When you load a JSON file into Python using json.load, it creates a copy of that JSON in Python. When that copy is changed, these changes are not reflected in the file. So what you need to do is then transfer your changed copy back to the file. This can be achieved via a method in the same json library as you're using, dump. Additionally, we need to supply an additional parameter to open to specify that we are writing to the file, not just reading. import json json_data = json.load(open('reda.json')) k = "carl" for d in json_data: if k in d: del d[k] json.dump(json_data, open('reda.json','w')) References Python 3.11.0 Documentation: json.dump Python 3.11.0 Documentation: open
Delete a key-value pair from a JSON file
I'm trying to delete a key and its value from a JSON file. I tried the code below, but nothing changes in the file. Can anyone modify it and assist me? reda.json file [{"carl": 33}, {"carl": 55}, {"user": "user2", "id": "21780"}, {"user": "user2"}, {"user": "123"}, {"user": []}, {"user": []}] import json json_data = json.load(open('reda.json')) k = "carl" for d in json_data: if k in d: del d[k]
[ "When you load a JSON file into Python using json.load, it creates a copy of that JSON in Python. When that copy is changed, these changes are not reflected in the file.\nSo what you need to do is then transfer your changed copy back to the file.\nThis can be achieved via a method in the same json library as you're using, dump. Additionally, we need to supply an additional parameter to open to specify that we are writing to the file, not just reading.\nimport json\n\njson_data = json.load(open('reda.json'))\nk = \"carl\"\nfor d in json_data:\n if k in d:\n del d[k]\n\njson.dump(json_data, open('reda.json','w'))\n\nReferences\n\nPython 3.11.0 Documentation: json.dump\nPython 3.11.0 Documentation: open\n\n" ]
[ 2 ]
[]
[]
[ "dictionary", "python", "python_3.x" ]
stackoverflow_0074636162_dictionary_python_python_3.x.txt
Q: Using global variables in a function How do I create or use a global variable inside a function? How do I use a global variable that was defined in one function inside other functions? Failing to use the global keyword where appropriate often causes UnboundLocalError. The precise rules for this are explained at UnboundLocalError on local variable when reassigned after first use. Generally, please close other questions as a duplicate of that question when an explanation is sought, and this question when someone simply needs to know the global keyword. A: You can use a global variable within other functions by declaring it as global within each function that assigns a value to it: globvar = 0 def set_globvar_to_one(): global globvar # Needed to modify global copy of globvar globvar = 1 def print_globvar(): print(globvar) # No need for global declaration to read value of globvar set_globvar_to_one() print_globvar() # Prints 1 Since it's unclear whether globvar = 1 is creating a local variable or changing a global variable, Python defaults to creating a local variable, and makes you explicitly choose the other behavior with the global keyword. See other answers if you want to share a global variable across modules. A: If I'm understanding your situation correctly, what you're seeing is the result of how Python handles local (function) and global (module) namespaces. Say you've got a module like this: # sample.py _my_global = 5 def func1(): _my_global = 42 def func2(): print _my_global func1() func2() You might expecting this to print 42, but instead it prints 5. As has already been mentioned, if you add a 'global' declaration to func1(), then func2() will print 42. def func1(): global _my_global _my_global = 42 What's going on here is that Python assumes that any name that is assigned to, anywhere within a function, is local to that function unless explicitly told otherwise. If it is only reading from a name, and the name doesn't exist locally, it will try to look up the name in any containing scopes (e.g. the module's global scope). When you assign 42 to the name _my_global, therefore, Python creates a local variable that shadows the global variable of the same name. That local goes out of scope and is garbage-collected when func1() returns; meanwhile, func2() can never see anything other than the (unmodified) global name. Note that this namespace decision happens at compile time, not at runtime -- if you were to read the value of _my_global inside func1() before you assign to it, you'd get an UnboundLocalError, because Python has already decided that it must be a local variable but it has not had any value associated with it yet. But by using the 'global' statement, you tell Python that it should look elsewhere for the name instead of assigning to it locally. (I believe that this behavior originated largely through an optimization of local namespaces -- without this behavior, Python's VM would need to perform at least three name lookups each time a new name is assigned to inside a function (to ensure that the name didn't already exist at module/builtin level), which would significantly slow down a very common operation.) A: You may want to explore the notion of namespaces. In Python, the module is the natural place for global data: Each module has its own private symbol table, which is used as the global symbol table by all functions defined in the module. 
Thus, the author of a module can use global variables in the module without worrying about accidental clashes with a user’s global variables. On the other hand, if you know what you are doing you can touch a module’s global variables with the same notation used to refer to its functions, modname.itemname. A specific use of global-in-a-module is described here - How do I share global variables across modules?, and for completeness the contents are shared here: The canonical way to share information across modules within a single program is to create a special configuration module (often called config or cfg). Just import the configuration module in all modules of your application; the module then becomes available as a global name. Because there is only one instance of each module, any changes made to the module object get reflected everywhere. For example: File: config.py x = 0 # Default value of the 'x' configuration setting File: mod.py import config config.x = 1 File: main.py import config import mod print config.x A: Python uses a simple heuristic to decide which scope it should load a variable from, between local and global. If a variable name appears on the left hand side of an assignment, but is not declared global, it is assumed to be local. If it does not appear on the left hand side of an assignment, it is assumed to be global. >>> import dis >>> def foo(): ... global bar ... baz = 5 ... print bar ... print baz ... print quux ... >>> dis.disassemble(foo.func_code) 3 0 LOAD_CONST 1 (5) 3 STORE_FAST 0 (baz) 4 6 LOAD_GLOBAL 0 (bar) 9 PRINT_ITEM 10 PRINT_NEWLINE 5 11 LOAD_FAST 0 (baz) 14 PRINT_ITEM 15 PRINT_NEWLINE 6 16 LOAD_GLOBAL 1 (quux) 19 PRINT_ITEM 20 PRINT_NEWLINE 21 LOAD_CONST 0 (None) 24 RETURN_VALUE >>> See how baz, which appears on the left side of an assignment in foo(), is the only LOAD_FAST variable. A: If you want to refer to a global variable in a function, you can use the global keyword to declare which variables are global. You don't have to use it in all cases (as someone here incorrectly claims) - if the name referenced in an expression cannot be found in local scope or scopes in the functions in which this function is defined, it is looked up among global variables. However, if you assign to a new variable not declared as global in the function, it is implicitly declared as local, and it can overshadow any existing global variable with the same name. Also, global variables are useful, contrary to some OOP zealots who claim otherwise - especially for smaller scripts, where OOP is overkill. A: If I create a global variable in one function, how can I use that variable in another function? We can create a global with the following function: def create_global_variable(): global global_variable # must declare it to be a global first # modifications are thus reflected on the module's global scope global_variable = 'Foo' Writing a function does not actually run its code. So we call the create_global_variable function: >>> create_global_variable() Using globals without modification You can just use it, so long as you don't expect to change which object it points to: For example, def use_global_variable(): return global_variable + '!!!' and now we can use the global variable: >>> use_global_variable() 'Foo!!!' 
Modification of the global variable from inside a function To point the global variable at a different object, you are required to use the global keyword again: def change_global_variable(): global global_variable global_variable = 'Bar' Note that after writing this function, the code actually changing it has still not run: >>> use_global_variable() 'Foo!!!' So after calling the function: >>> change_global_variable() we can see that the global variable has been changed. The global_variable name now points to 'Bar': >>> use_global_variable() 'Bar!!!' Note that "global" in Python is not truly global - it's only global to the module level. So it is only available to functions written in the modules in which it is global. Functions remember the module in which they are written, so when they are exported into other modules, they still look in the module in which they were created to find global variables. Local variables with the same name If you create a local variable with the same name, it will overshadow a global variable: def use_local_with_same_name_as_global(): # bad name for a local variable, though. global_variable = 'Baz' return global_variable + '!!!' >>> use_local_with_same_name_as_global() 'Baz!!!' But using that misnamed local variable does not change the global variable: >>> use_global_variable() 'Bar!!!' Note that you should avoid using the local variables with the same names as globals unless you know precisely what you are doing and have a very good reason to do so. I have not yet encountered such a reason. We get the same behavior in classes A follow on comment asks: what to do if I want to create a global variable inside a function inside a class and want to use that variable inside another function inside another class? Here I demonstrate we get the same behavior in methods as we do in regular functions: class Foo: def foo(self): global global_variable global_variable = 'Foo' class Bar: def bar(self): return global_variable + '!!!' Foo().foo() And now: >>> Bar().bar() 'Foo!!!' But I would suggest instead of using global variables you use class attributes, to avoid cluttering the module namespace. Also note we don't use self arguments here - these could be class methods (handy if mutating the class attribute from the usual cls argument) or static methods (no self or cls). A: In addition to already existing answers and to make this more confusing: In Python, variables that are only referenced inside a function are implicitly global. If a variable is assigned a new value anywhere within the function’s body, it’s assumed to be a local. If a variable is ever assigned a new value inside the function, the variable is implicitly local, and you need to explicitly declare it as ‘global’. Though a bit surprising at first, a moment’s consideration explains this. On one hand, requiring global for assigned variables provides a bar against unintended side-effects. On the other hand, if global was required for all global references, you’d be using global all the time. You’d have to declare as global every reference to a built-in function or to a component of an imported module. This clutter would defeat the usefulness of the global declaration for identifying side-effects. Source: What are the rules for local and global variables in Python?. A: With parallel execution, global variables can cause unexpected results if you don't understand what is happening. Here is an example of using a global variable within multiprocessing. 
We can clearly see that each process works with its own copy of the variable:

import multiprocessing
import os
import random
import sys
import time

def worker(new_value):
    old_value = get_value()
    set_value(random.randint(1, 99))
    print('pid=[{pid}] '
          'old_value=[{old_value:2}] '
          'new_value=[{new_value:2}] '
          'get_value=[{get_value:2}]'.format(
          pid=str(os.getpid()),
          old_value=old_value,
          new_value=new_value,
          get_value=get_value()))

def get_value():
    global global_variable
    return global_variable

def set_value(new_value):
    global global_variable
    global_variable = new_value

global_variable = -1

print('before set_value(), get_value() = [%s]' % get_value())
set_value(new_value=-2)
print('after set_value(), get_value() = [%s]' % get_value())

processPool = multiprocessing.Pool(processes=5)
processPool.map(func=worker, iterable=range(15))

Output:

before set_value(), get_value() = [-1]
after set_value(), get_value() = [-2]
pid=[53970] old_value=[-2] new_value=[ 0] get_value=[23]
pid=[53971] old_value=[-2] new_value=[ 1] get_value=[42]
pid=[53970] old_value=[23] new_value=[ 4] get_value=[50]
pid=[53970] old_value=[50] new_value=[ 6] get_value=[14]
pid=[53971] old_value=[42] new_value=[ 5] get_value=[31]
pid=[53972] old_value=[-2] new_value=[ 2] get_value=[44]
pid=[53973] old_value=[-2] new_value=[ 3] get_value=[94]
pid=[53970] old_value=[14] new_value=[ 7] get_value=[21]
pid=[53971] old_value=[31] new_value=[ 8] get_value=[34]
pid=[53972] old_value=[44] new_value=[ 9] get_value=[59]
pid=[53973] old_value=[94] new_value=[10] get_value=[87]
pid=[53970] old_value=[21] new_value=[11] get_value=[21]
pid=[53971] old_value=[34] new_value=[12] get_value=[82]
pid=[53972] old_value=[59] new_value=[13] get_value=[ 4]
pid=[53973] old_value=[87] new_value=[14] get_value=[70]

A: As it turns out, the answer is always simple.
Here is a small sample module with a simple way to show it in a main definition:

def five(enterAnumber, sumation):
    global helper
    helper = enterAnumber + sumation

def isTheNumber():
    return helper

Here is how to show it in a main definition:

import TestPy

def main():
    atest = TestPy
    atest.five(5, 8)
    print(atest.isTheNumber())

if __name__ == '__main__':
    main()

This simple code works just like that, and it will execute. I hope it helps.

A: What you are saying is to use the method like this:

globvar = 5

def f():
    var = globvar
    print(var)

f()  # Prints 5

But the better way is to use the global keyword, like this:

globvar = 5

def f():
    global globvar
    print(globvar)

f()  # Prints 5

Both give the same output.

A: You need to declare the global variable in every function where you assign to it.
As follows:

var = "test"

def printGlobalText():
    global var  # We explicitly tell Python to use the global version
    var = "global from printGlobalText fun."
    print("var from printGlobalText: " + var)

def printLocalText():
    # We do NOT ask for the global version, so the assignment
    # creates a local variable
    var = "local version from printLocalText fun"
    print("var from printLocalText: " + var)

printGlobalText()
printLocalText()
"""
Output Result:
var from printGlobalText: global from printGlobalText fun.
var from printLocalText: local version from printLocalText
[Finished in 0.1s]
"""

A: Try this:

def x1():
    global x
    x += 1
    print('x1: ', x)

def x2():
    global x
    x = x + 1
    print('x2: ', x)

x = 5
print('x: ', x)
x1()
x2()

# Output:
# x: 5
# x1: 6
# x2: 7

A: You're not actually storing the global in a local variable, just creating a local reference to the same object that your original global reference refers to. Remember that pretty much everything in Python is a name referring to an object, and nothing gets copied in usual operation.

If you didn't have to explicitly specify when an identifier was to refer to a predefined global, then you'd presumably have to explicitly specify when an identifier is a new local variable instead (for example, with something like the 'var' command seen in JavaScript). Since local variables are more common than global variables in any serious and non-trivial system, Python's system makes more sense in most cases.

You could have a language which attempted to guess, using a global variable if it existed or creating a local variable if it didn't. However, that would be very error-prone. For example, importing another module could inadvertently introduce a global variable by that name, changing the behaviour of your program.

A: In case you have a local variable with the same name, you might want to use the globals() function.

globals()['your_global_var'] = 42

A: Following on, and as an add-on: keep all global variables in one dedicated file, declared there, and import it wherever they are needed:

File initval.py:

Stocksin = 300
Prices = []

File getstocks.py:

import initval as iv

def getmystocks():
    iv.Stocksin = getstockcount()

def getmycharts():
    for ic in range(iv.Stocksin):
        ...

A: Writing to explicit elements of a global array apparently does not need the global declaration, though writing to it "wholesale" does have that requirement:

import numpy as np

hostValue = 3.14159
hostArray = np.array([2., 3.])
hostMatrix = np.array([[1.0, 0.0], [0.0, 1.0]])

def func1():
    global hostValue  # mandatory, else local.
    hostValue = 2.0

def func2():
    global hostValue  # mandatory, else UnboundLocalError.
    hostValue += 1.0

def func3():
    global hostArray  # mandatory, else local.
    hostArray = np.array([14., 15.])

def func4():  # no need for globals
    hostArray[0] = 123.4

def func5():  # no need for globals
    hostArray[1] += 1.0

def func6():  # no need for globals
    hostMatrix[1][1] = 12.

def func7():  # no need for globals
    hostMatrix[0][0] += 0.33

func1()
print("After func1(), hostValue = ", hostValue)
func2()
print("After func2(), hostValue = ", hostValue)
func3()
print("After func3(), hostArray = ", hostArray)
func4()
print("After func4(), hostArray = ", hostArray)
func5()
print("After func5(), hostArray = ", hostArray)
func6()
print("After func6(), hostMatrix = \n", hostMatrix)
func7()
print("After func7(), hostMatrix = \n", hostMatrix)

A: I'm adding this as I haven't seen it in any of the other answers and it might be useful for someone struggling with something similar. The globals() function returns a mutable global symbol dictionary where you can "magically" make data available for the rest of your code.
For example:

from pickle import load

def loaditem(name):
    with open(r"C:\pickle\file\location" + r"\{}.dat".format(name), "rb") as openfile:
        globals()[name] = load(openfile)
    return True

and

from pickle import dump

def dumpfile(name):
    with open(name + ".dat", "wb") as outfile:
        dump(globals()[name], outfile)
    return True

Will just let you dump/load variables out of and into the global namespace. Super convenient, no muss, no fuss. Pretty sure it's Python 3 only.

A: Reference the module namespace where you want the change to show up.
In this example, runner is using max from the file config. I want my test to change the value of max when runner is using it. For the patch in step 2 to work, runner must pull max into its own namespace:

main/config.py

max = 15000

main/runner.py

from main.config import max  # max now lives in runner's namespace

def check_threads():
    return max < thread_count

tests/runner_test.py

from main import runner  # <----- 1. add file
from main.runner import check_threads

class RunnerTest(unittest):
    def test_threads(self):
        runner.max = 0  # <----- 2. set global
        check_threads()

A:

global_var = 10  # will be considered as a global variable

def func_1():
    global global_var  # access the variable using the global keyword
    global_var += 1

def func_2():
    global global_var
    global_var *= 2
    print(f"func_2: {global_var}")

func_1()
func_2()
print("Global scope:", global_var)  # will print 22

Explanation:
global_var is a global variable, and all functions and classes can access that variable.
func_1() accessed that global variable using the keyword global, which points to the variable written in the global scope. If I hadn't written the global keyword, the variable global_var inside func_1 would be considered a local variable, usable only inside the function. Then inside func_1, I incremented that global variable by 1.
The same happened in func_2().
After calling func_1 and func_2, you'll see that global_var has changed.

A: Globals are fine - except with multiprocessing

Globals combined with multiprocessing are troublesome across platforms/environments, with Windows/macOS on the one side and Linux on the other.
I will show you this with a simple example pointing out a problem which I ran into some time ago.
If you want to understand why things are different on Windows/macOS and Linux, you need to know that the default mechanism to start a new process on ...

Windows/macOS is 'spawn'
Linux is 'fork'

They differ in memory allocation and initialisation (but I don't go into this here).
Let's have a look at the problem/example ...

import multiprocessing

counter = 0

def do(task_id):
    global counter
    counter += 1
    print(f'task {task_id}: counter = {counter}')

if __name__ == '__main__':

    pool = multiprocessing.Pool(processes=4)
    task_ids = list(range(4))
    pool.map(do, task_ids)

Windows
If you run this on Windows (and I suppose on macOS too), you get the following output ...

task 0: counter = 1
task 1: counter = 2
task 2: counter = 3
task 3: counter = 4

Linux
If you run this on Linux, you get the following instead.

task 0: counter = 1
task 1: counter = 1
task 2: counter = 1
task 3: counter = 1

A: There are 2 ways to declare a variable as global:

1. assign to the variable inside a function and use a global statement

def declare_a_global_variable():
    global global_variable_1
    global_variable_1 = 1

# Call the function so the global variable is created
declare_a_global_variable()

2. assign to the variable outside functions:

global_variable_2 = 2

Now we can use these declared global variables in the other functions:

def declare_a_global_variable():
    global global_variable_1
    global_variable_1 = 1

# Call the function so the global variable is created
declare_a_global_variable()
global_variable_2 = 2

def print_variables():
    print(global_variable_1)
    print(global_variable_2)

print_variables()  # prints 1 & 2

Note 1:
If you want to change a global variable inside another function like update_variables(), you should use a global statement in that function before assigning the variable:

global_variable_1 = 1
global_variable_2 = 2

def update_variables():
    global global_variable_1
    global_variable_1 = 11
    global_variable_2 = 12  # will update just locally for this function

update_variables()
print(global_variable_1)  # prints 11
print(global_variable_2)  # prints 2

Note 2:
There is an exception to note 1 for list and dictionary variables when not using a global statement inside a function:

# declaring some global variables
variable = 'peter'
list_variable_1 = ['a', 'b']
list_variable_2 = ['c', 'd']

def update_global_variables():
    """without using a global statement"""
    variable = 'PETER'  # won't update in global scope
    list_variable_1 = ['A', 'B']  # won't update in global scope
    list_variable_2[0] = 'C'  # updated in global scope, surprisingly, this way
    list_variable_2[1] = 'D'  # updated in global scope, surprisingly, this way

update_global_variables()

print('variable is: %s' % variable)  # prints peter
print('list_variable_1 is: %s' % list_variable_1)  # prints ['a', 'b']
print('list_variable_2 is: %s' % list_variable_2)  # prints ['C', 'D']

A: Though this has been answered, I am giving the solution again, as I prefer a single line.
This is if you wish to create a global variable within a function:

def someFunc():
    x = 20
    globals()['y'] = 50

someFunc()  # invoking the function so that variable y is created globally
print(y)  # output 50
print(x)  # NameError: name 'x' is not defined, as x was defined locally within the function

A: Like this code:

myVar = 12

def myFunc():
    myVar += 12  # raises UnboundLocalError: the assignment makes myVar local

Key:
If you declare a variable outside the function, it becomes global.
If you declare a variable inside the function, it becomes local.
If you want to declare a global variable inside the function, use the keyword global before the variable you want to declare:

myVar = 124

def myFunc():
    global myVar2
    myVar2 = 100

myFunc()
print(myVar2)

and then 100 is printed.

A:

Initialized = 0  # this Initialized is a global variable

def Initialize():
    print("Initialized!")
    Initialized = 1  # this is a local variable; assigning 1 here does not touch the global

while Initialized == 0:
    # Here we compare the global variable Initialized with 0, so the while
    # condition is true and Initialize() gets called. The loop is infinite,
    # because the global never changes; only a global statement followed by
    # Initialized = 1 inside the function would terminate it.
    Initialize()
else:
    print("Lets do something else now!")
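One related case worth a short sketch: a function nested inside another function that needs to rebind a variable of the enclosing function rather than a module-level global. Python 3's nonlocal keyword plays the same role there that global plays for module scope:

def counter():
    count = 0  # lives in the enclosing function's scope, not at module level

    def increment():
        nonlocal count  # rebinds the enclosing variable, analogous to global
        count += 1
        return count

    return increment

step = counter()
print(step())  # 1
print(step())  # 2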
Using global variables in a function
How do I create or use a global variable inside a function? How do I use a global variable that was defined in one function inside other functions? Failing to use the global keyword where appropriate often causes UnboundLocalError. The precise rules for this are explained at UnboundLocalError on local variable when reassigned after first use. Generally, please close other questions as a duplicate of that question when an explanation is sought, and this question when someone simply needs to know the global keyword.
[ "You can use a global variable within other functions by declaring it as global within each function that assigns a value to it:\nglobvar = 0\n\ndef set_globvar_to_one():\n global globvar # Needed to modify global copy of globvar\n globvar = 1\n\ndef print_globvar():\n print(globvar) # No need for global declaration to read value of globvar\n\nset_globvar_to_one()\nprint_globvar() # Prints 1\n\nSince it's unclear whether globvar = 1 is creating a local variable or changing a global variable, Python defaults to creating a local variable, and makes you explicitly choose the other behavior with the global keyword.\nSee other answers if you want to share a global variable across modules.\n", "If I'm understanding your situation correctly, what you're seeing is the result of how Python handles local (function) and global (module) namespaces.\nSay you've got a module like this:\n# sample.py\n_my_global = 5\n\ndef func1():\n _my_global = 42\n\ndef func2():\n print _my_global\n\nfunc1()\nfunc2()\n\nYou might expecting this to print 42, but instead it prints 5. As has already been mentioned, if you add a 'global' declaration to func1(), then func2() will print 42.\ndef func1():\n global _my_global \n _my_global = 42\n\nWhat's going on here is that Python assumes that any name that is assigned to, anywhere within a function, is local to that function unless explicitly told otherwise. If it is only reading from a name, and the name doesn't exist locally, it will try to look up the name in any containing scopes (e.g. the module's global scope).\nWhen you assign 42 to the name _my_global, therefore, Python creates a local variable that shadows the global variable of the same name. That local goes out of scope and is garbage-collected when func1() returns; meanwhile, func2() can never see anything other than the (unmodified) global name. Note that this namespace decision happens at compile time, not at runtime -- if you were to read the value of _my_global inside func1() before you assign to it, you'd get an UnboundLocalError, because Python has already decided that it must be a local variable but it has not had any value associated with it yet. But by using the 'global' statement, you tell Python that it should look elsewhere for the name instead of assigning to it locally.\n(I believe that this behavior originated largely through an optimization of local namespaces -- without this behavior, Python's VM would need to perform at least three name lookups each time a new name is assigned to inside a function (to ensure that the name didn't already exist at module/builtin level), which would significantly slow down a very common operation.)\n", "You may want to explore the notion of namespaces. In Python, the module is the natural place for global data:\n\nEach module has its own private symbol table, which is used as the global symbol table by all functions defined in the module. Thus, the author of a module can use global variables in the module without worrying about accidental clashes with a user’s global variables. On the other hand, if you know what you are doing you can touch a module’s global variables with the same notation used to refer to its functions, modname.itemname.\n\nA specific use of global-in-a-module is described here - How do I share global variables across modules?, and for completeness the contents are shared here:\n\nThe canonical way to share information across modules within a single program is to create a special configuration module (often called config or cfg). 
Just import the configuration module in all modules of your application; the module then becomes available as a global name. Because there is only one instance of each module, any changes made to the module object get reflected everywhere. For example:\n\n\nFile: config.py\n\n\nx = 0 # Default value of the 'x' configuration setting\n\n\n\nFile: mod.py\n\nimport config\nconfig.x = 1\n\n\nFile: main.py\n\nimport config\nimport mod\nprint config.x\n\n", "Python uses a simple heuristic to decide which scope it should load a variable from, between local and global. If a variable name appears on the left hand side of an assignment, but is not declared global, it is assumed to be local. If it does not appear on the left hand side of an assignment, it is assumed to be global. \n>>> import dis\n>>> def foo():\n... global bar\n... baz = 5\n... print bar\n... print baz\n... print quux\n... \n>>> dis.disassemble(foo.func_code)\n 3 0 LOAD_CONST 1 (5)\n 3 STORE_FAST 0 (baz)\n\n 4 6 LOAD_GLOBAL 0 (bar)\n 9 PRINT_ITEM \n 10 PRINT_NEWLINE \n\n 5 11 LOAD_FAST 0 (baz)\n 14 PRINT_ITEM \n 15 PRINT_NEWLINE \n\n 6 16 LOAD_GLOBAL 1 (quux)\n 19 PRINT_ITEM \n 20 PRINT_NEWLINE \n 21 LOAD_CONST 0 (None)\n 24 RETURN_VALUE \n>>> \n\nSee how baz, which appears on the left side of an assignment in foo(), is the only LOAD_FAST variable.\n", "If you want to refer to a global variable in a function, you can use the global keyword to declare which variables are global. You don't have to use it in all cases (as someone here incorrectly claims) - if the name referenced in an expression cannot be found in local scope or scopes in the functions in which this function is defined, it is looked up among global variables.\nHowever, if you assign to a new variable not declared as global in the function, it is implicitly declared as local, and it can overshadow any existing global variable with the same name.\nAlso, global variables are useful, contrary to some OOP zealots who claim otherwise - especially for smaller scripts, where OOP is overkill.\n", "\nIf I create a global variable in one function, how can I use that variable in another function?\n\nWe can create a global with the following function:\ndef create_global_variable():\n global global_variable # must declare it to be a global first\n # modifications are thus reflected on the module's global scope\n global_variable = 'Foo' \n\nWriting a function does not actually run its code. So we call the create_global_variable function:\n>>> create_global_variable()\n\nUsing globals without modification\nYou can just use it, so long as you don't expect to change which object it points to: \nFor example, \ndef use_global_variable():\n return global_variable + '!!!'\n\nand now we can use the global variable:\n>>> use_global_variable()\n'Foo!!!'\n\nModification of the global variable from inside a function\nTo point the global variable at a different object, you are required to use the global keyword again:\ndef change_global_variable():\n global global_variable\n global_variable = 'Bar'\n\nNote that after writing this function, the code actually changing it has still not run:\n>>> use_global_variable()\n'Foo!!!'\n\nSo after calling the function:\n>>> change_global_variable()\n\nwe can see that the global variable has been changed. The global_variable name now points to 'Bar':\n>>> use_global_variable()\n'Bar!!!'\n\nNote that \"global\" in Python is not truly global - it's only global to the module level. So it is only available to functions written in the modules in which it is global. 
Functions remember the module in which they are written, so when they are exported into other modules, they still look in the module in which they were created to find global variables.\nLocal variables with the same name\nIf you create a local variable with the same name, it will overshadow a global variable:\ndef use_local_with_same_name_as_global():\n # bad name for a local variable, though.\n global_variable = 'Baz' \n return global_variable + '!!!'\n\n>>> use_local_with_same_name_as_global()\n'Baz!!!'\n\nBut using that misnamed local variable does not change the global variable:\n>>> use_global_variable()\n'Bar!!!'\n\nNote that you should avoid using the local variables with the same names as globals unless you know precisely what you are doing and have a very good reason to do so. I have not yet encountered such a reason.\nWe get the same behavior in classes\nA follow on comment asks:\n\nwhat to do if I want to create a global variable inside a function inside a class and want to use that variable inside another function inside another class?\n\nHere I demonstrate we get the same behavior in methods as we do in regular functions:\nclass Foo:\n def foo(self):\n global global_variable\n global_variable = 'Foo'\n\nclass Bar:\n def bar(self):\n return global_variable + '!!!'\n\nFoo().foo()\n\nAnd now:\n>>> Bar().bar()\n'Foo!!!'\n\nBut I would suggest instead of using global variables you use class attributes, to avoid cluttering the module namespace. Also note we don't use self arguments here - these could be class methods (handy if mutating the class attribute from the usual cls argument) or static methods (no self or cls).\n", "In addition to already existing answers and to make this more confusing:\n\nIn Python, variables that are only referenced inside a function are\n implicitly global. If a variable is assigned a new value anywhere\n within the function’s body, it’s assumed to be a local. If a variable\n is ever assigned a new value inside the function, the variable is\n implicitly local, and you need to explicitly declare it as ‘global’.\nThough a bit surprising at first, a moment’s consideration explains\n this. On one hand, requiring global for assigned variables provides a\n bar against unintended side-effects. On the other hand, if global was\n required for all global references, you’d be using global all the\n time. You’d have to declare as global every reference to a built-in\n function or to a component of an imported module. This clutter would\n defeat the usefulness of the global declaration for identifying\n side-effects.\n\nSource: What are the rules for local and global variables in Python?.\n", "With parallel execution, global variables can cause unexpected results if you don't understand what is happening. Here is an example of using a global variable within multiprocessing. 
We can clearly see that each process works with its own copy of the variable:\nimport multiprocessing\nimport os\nimport random\nimport sys\nimport time\n\ndef worker(new_value):\n old_value = get_value()\n set_value(random.randint(1, 99))\n print('pid=[{pid}] '\n 'old_value=[{old_value:2}] '\n 'new_value=[{new_value:2}] '\n 'get_value=[{get_value:2}]'.format(\n pid=str(os.getpid()),\n old_value=old_value,\n new_value=new_value,\n get_value=get_value()))\n\ndef get_value():\n global global_variable\n return global_variable\n\ndef set_value(new_value):\n global global_variable\n global_variable = new_value\n\nglobal_variable = -1\n\nprint('before set_value(), get_value() = [%s]' % get_value())\nset_value(new_value=-2)\nprint('after set_value(), get_value() = [%s]' % get_value())\n\nprocessPool = multiprocessing.Pool(processes=5)\nprocessPool.map(func=worker, iterable=range(15))\n\nOutput:\nbefore set_value(), get_value() = [-1]\nafter set_value(), get_value() = [-2]\npid=[53970] old_value=[-2] new_value=[ 0] get_value=[23]\npid=[53971] old_value=[-2] new_value=[ 1] get_value=[42]\npid=[53970] old_value=[23] new_value=[ 4] get_value=[50]\npid=[53970] old_value=[50] new_value=[ 6] get_value=[14]\npid=[53971] old_value=[42] new_value=[ 5] get_value=[31]\npid=[53972] old_value=[-2] new_value=[ 2] get_value=[44]\npid=[53973] old_value=[-2] new_value=[ 3] get_value=[94]\npid=[53970] old_value=[14] new_value=[ 7] get_value=[21]\npid=[53971] old_value=[31] new_value=[ 8] get_value=[34]\npid=[53972] old_value=[44] new_value=[ 9] get_value=[59]\npid=[53973] old_value=[94] new_value=[10] get_value=[87]\npid=[53970] old_value=[21] new_value=[11] get_value=[21]\npid=[53971] old_value=[34] new_value=[12] get_value=[82]\npid=[53972] old_value=[59] new_value=[13] get_value=[ 4]\npid=[53973] old_value=[87] new_value=[14] get_value=[70]\n\n", "As it turns out the answer is always simple.\nHere is a small sample module with a simple way to show it in a main definition:\ndef five(enterAnumber,sumation):\n global helper\n helper = enterAnumber + sumation\n\ndef isTheNumber():\n return helper\n\nHere is how to show it in a main definition:\nimport TestPy\n\ndef main():\n atest = TestPy\n atest.five(5,8)\n print(atest.isTheNumber())\n\nif __name__ == '__main__':\n main()\n\nThis simple code works just like that, and it will execute. 
I hope it helps.\n", "What you are saying is to use the method like this:\nglobvar = 5\n\ndef f():\n var = globvar\n print(var)\n\nf() # Prints 5\n\nBut the better way is to use the global variable like this:\nglobvar = 5\ndef f():\n global globvar\n print(globvar)\nf() #prints 5\n\nBoth give the same output.\n", "You need to reference the global variable in every function you want to use.\nAs follows:\nvar = \"test\"\n\ndef printGlobalText():\n global var #wWe are telling to explicitly use the global version\n var = \"global from printGlobalText fun.\"\n print \"var from printGlobalText: \" + var\n\ndef printLocalText():\n #We are NOT telling to explicitly use the global version, so we are creating a local variable\n var = \"local version from printLocalText fun\"\n print \"var from printLocalText: \" + var\n\nprintGlobalText()\nprintLocalText()\n\"\"\"\nOutput Result:\nvar from printGlobalText: global from printGlobalText fun.\nvar from printLocalText: local version from printLocalText\n[Finished in 0.1s]\n\"\"\"\n\n", "Try this:\ndef x1():\n global x\n x += 1\n print('x1: ', x)\n\ndef x2():\n global x\n x = x+1\n print('x2: ', x)\n\nx = 5\nprint('x: ', x)\nx1()\nx2()\n\n# Output:\n# x: 5\n# x1: 6\n# x2: 7\n\n", "You're not actually storing the global in a local variable, just creating a local reference to the same object that your original global reference refers to. Remember that pretty much everything in Python is a name referring to an object, and nothing gets copied in usual operation.\nIf you didn't have to explicitly specify when an identifier was to refer to a predefined global, then you'd presumably have to explicitly specify when an identifier is a new local variable instead (for example, with something like the 'var' command seen in JavaScript). Since local variables are more common than global variables in any serious and non-trivial system, Python's system makes more sense in most cases.\nYou could have a language which attempted to guess, using a global variable if it existed or creating a local variable if it didn't. However, that would be very error-prone. 
For example, importing another module could inadvertently introduce a global variable by that name, changing the behaviour of your program.\n", "In case you have a local variable with the same name, you might want to use the globals() function.\nglobals()['your_global_var'] = 42\n\n", "Following on and as an add on, use a file to contain all global variables all declared locally and then import as:\nFile initval.py:\nStocksin = 300\nPrices = []\n\nFile getstocks.py:\nimport initval as iv\n\ndef getmystocks(): \n iv.Stocksin = getstockcount()\n\n\ndef getmycharts():\n for ic in range(iv.Stocksin):\n\n", "Writing to explicit elements of a global array does not apparently need the global declaration, though writing to it \"wholesale\" does have that requirement:\nimport numpy as np\n\nhostValue = 3.14159\nhostArray = np.array([2., 3.])\nhostMatrix = np.array([[1.0, 0.0],[ 0.0, 1.0]])\n\ndef func1():\n global hostValue # mandatory, else local.\n hostValue = 2.0\n\ndef func2():\n global hostValue # mandatory, else UnboundLocalError.\n hostValue += 1.0\n\ndef func3():\n global hostArray # mandatory, else local.\n hostArray = np.array([14., 15.])\n\ndef func4(): # no need for globals\n hostArray[0] = 123.4\n\ndef func5(): # no need for globals\n hostArray[1] += 1.0\n\ndef func6(): # no need for globals\n hostMatrix[1][1] = 12.\n\ndef func7(): # no need for globals\n hostMatrix[0][0] += 0.33\n\nfunc1()\nprint \"After func1(), hostValue = \", hostValue\nfunc2()\nprint \"After func2(), hostValue = \", hostValue\nfunc3()\nprint \"After func3(), hostArray = \", hostArray\nfunc4()\nprint \"After func4(), hostArray = \", hostArray\nfunc5()\nprint \"After func5(), hostArray = \", hostArray\nfunc6()\nprint \"After func6(), hostMatrix = \\n\", hostMatrix\nfunc7()\nprint \"After func7(), hostMatrix = \\n\", hostMatrix\n\n", "I'm adding this as I haven't seen it in any of the other answers and it might be useful for someone struggling with something similar. The globals() function returns a mutable global symbol dictionary where you can \"magically\" make data available for the rest of your code. \nFor example:\nfrom pickle import load\ndef loaditem(name):\n with open(r\"C:\\pickle\\file\\location\"+\"\\{}.dat\".format(name), \"rb\") as openfile:\n globals()[name] = load(openfile)\n return True\n\nand \nfrom pickle import dump\ndef dumpfile(name):\n with open(name+\".dat\", \"wb\") as outfile:\n dump(globals()[name], outfile)\n return True\n\nWill just let you dump/load variables out of and into the global namespace. Super convenient, no muss, no fuss. Pretty sure it's Python 3 only.\n", "Reference the class namespace where you want the change to show up. \nIn this example, runner is using max from the file config. I want my test to change the value of max when runner is using it.\nmain/config.py\nmax = 15000\n\nmain/runner.py\nfrom main import config\ndef check_threads():\n return max < thread_count \n\ntests/runner_test.py\nfrom main import runner # <----- 1. add file\nfrom main.runner import check_threads\nclass RunnerTest(unittest):\n def test_threads(self):\n runner.max = 0 # <----- 2. 
set global \n check_threads()\n\n", "global_var = 10 # will be considered as a global variable\n\n\ndef func_1():\n global global_var # access variable using variable keyword\n global_var += 1\n\n\ndef func_2():\n global global_var\n global_var *= 2\n print(f\"func_2: {global_var}\")\n\n\nfunc_1()\nfunc_2()\nprint(\"Global scope:\", global_var) # will print 22\n\nExplanation:\nglobal_var is a global variable and all functions and classes can access that variable.\nThe func_1() accessed that global variable using the keyword global which points to the variable which is written in the global scope. If I didn't write the global keyword the variable global_var inside func_1 is considered a local variable that is only usable inside the function. Then inside func_1, I have incremented that global variable by 1.\nThe same happened in func_2().\nAfter calling func_1 and func_2, you'll see the global_var is changed\n", "Globals are fine - Except with Multiprocessing\nGlobals in connection with multiprocessing on different platforms/envrionments \nas Windows/Mac OS on the one side and Linux on the other are troublesome.\nI will show you this with a simple example pointing out a problem which I run into some time ago. \nIf you want to understand, why things are different on Windows/MacOs and Linux you \nneed to know that, the default mechanism to start a new process on ...\n\nWindows/MacOs is 'spawn'\nLinux is 'fork'\n\nThey are different in Memory allocation an initialisation ... (but I don't go into this\nhere). \nLet's have a look at the problem/example ...\nimport multiprocessing\n\ncounter = 0\n\ndef do(task_id):\n global counter\n counter +=1\n print(f'task {task_id}: counter = {counter}')\n\nif __name__ == '__main__':\n\n pool = multiprocessing.Pool(processes=4)\n task_ids = list(range(4))\n pool.map(do, task_ids)\n\nWindows\nIf you run this on Windows (And I suppose on MacOS too), you get the following output ...\ntask 0: counter = 1\ntask 1: counter = 2\ntask 2: counter = 3\ntask 3: counter = 4\n\nLinux\nIf you run this on Linux, you get the following instead. \ntask 0: counter = 1\ntask 1: counter = 1\ntask 2: counter = 1\ntask 3: counter = 1\n\n", "There are 2 ways to declare a variable as global:\n1. assign variable inside functions and use global line\ndef declare_a_global_variable():\n global global_variable_1\n global_variable_1 = 1\n\n# Note to use the function to global variables\ndeclare_a_global_variable() \n\n2. 
assign variable outside functions:\nglobal_variable_2 = 2\n\nNow we can use these declared global variables in the other functions:\ndef declare_a_global_variable():\n global global_variable_1\n global_variable_1 = 1\n\n# Note to use the function to global variables\ndeclare_a_global_variable() \nglobal_variable_2 = 2\n\ndef print_variables():\n print(global_variable_1)\n print(global_variable_2)\nprint_variables() # prints 1 & 2\n\nNote 1:\nIf you want to change a global variable inside another function like update_variables() you should use global line in that function before assigning the variable:\nglobal_variable_1 = 1\nglobal_variable_2 = 2\n\ndef update_variables():\n global global_variable_1\n global_variable_1 = 11\n global_variable_2 = 12 # will update just locally for this function\n\nupdate_variables()\nprint(global_variable_1) # prints 11\nprint(global_variable_2) # prints 2\n\nNote 2:\nThere is a exception for note 1 for list and dictionary variables while not using global line inside a function:\n# declaring some global variables\nvariable = 'peter'\nlist_variable_1 = ['a','b']\nlist_variable_2 = ['c','d']\n\ndef update_global_variables():\n \"\"\"without using global line\"\"\"\n variable = 'PETER' # won't update in global scope\n list_variable_1 = ['A','B'] # won't update in global scope\n list_variable_2[0] = 'C' # updated in global scope surprisingly this way\n list_variable_2[1] = 'D' # updated in global scope surprisingly this way\n\nupdate_global_variables()\n\nprint('variable is: %s'%variable) # prints peter\nprint('list_variable_1 is: %s'%list_variable_1) # prints ['a', 'b']\nprint('list_variable_2 is: %s'%list_variable_2) # prints ['C', 'D']\n\n", "Though this has been answered, I am giving solution again as I prefer single line\nThis is if you wish to create global variable within function\ndef someFunc():\n x=20\n globals()['y']=50\nsomeFunc() # invoking function so that variable Y is created globally \nprint(y) # output 50\nprint(x) #NameError: name 'x' is not defined as x was defined locally within function\n\n", "Like this code:\nmyVar = 12\n\ndef myFunc():\n myVar += 12\n\nKey:\nIf you declare a variable outside the strings, it become global.\nIf you declare a variable inside the strings, it become local.\nIf you want to declare a global variable inside the strings, use the keyword global before the variable you want to declare:\nmyVar = 124\ndef myFunc():\n global myVar2\n myVar2 = 100\nmyFunc()\nprint(myVar2)\n\nand then you have 100 in the document.\n", "Initialized = 0 #Here This Initialized is global variable \n\ndef Initialize():\n print(\"Initialized!\")\n Initialized = 1 #This is local variable and assigning 1 to local variable\nwhile Initialized == 0: \n\nHere we are comparing global variable Initialized that 0, so while loop condition got true\n Initialize()\n\nFunction will get called.Loop will be infinite\n#if we do Initialized=1 then loop will terminate \n\nelse:\n print(\"Lets do something else now!\")\n\n" ]
[ 5007, 874, 267, 115, 74, 68, 57, 41, 35, 33, 30, 30, 27, 23, 20, 17, 9, 8, 8, 7, 6, 5, 1, 0 ]
[ "if you want to access global var you just add global keyword inside your function\nex:\nglobal_var = 'yeah'\ndef someFunc():\n global global_var;\n print(nam_of_var)\n\n" ]
[ -1 ]
[ "global_variables", "python", "scope" ]
stackoverflow_0000423379_global_variables_python_scope.txt
Q: pandas create new column based on divide column by another and check that I not divide by 0
I want to create a new column based on a division of two different columns, but make sure that I do not divide by 0; if the price is 0, set the result to None.
If I try to just divide, I get 'inf' where the price is 0:

df['new'] = df['memory'] / df['price']

   id  memory    price
0   0    7568   751.64
1   1   53759   885.17
2   2   41140  1067.78
3   3   10558        0
4   4   44436  1023.13

I didn't find a way to add that condition.

A: To avoid division by zero, simply skip the rows where the price is zero. Please take a look at the following example.
I hope this helps.
Best regards

import pandas as pd

data = {'id': [0, 1, 2, 3, 4], 'memory': [7568, 53759, 41140, 10558, 44436], 'price': [751.64, 885.17, 1067.78, 0, 1023.13]}
df = pd.DataFrame(data)

# adding a new column and setting the values to "none"
df['new'] = "none"

for i in range(len(df)):
    if df.iat[i, 2] != 0:
        df.iat[i, 3] = df.iat[i, 1] / df.iat[i, 2]

print(df)
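A vectorized sketch of the same idea, assuming the question's column names: replacing zero prices with NaN before dividing makes pandas produce NaN instead of inf, with no explicit loop.

import numpy as np
import pandas as pd

data = {'id': [0, 1, 2, 3, 4],
        'memory': [7568, 53759, 41140, 10558, 44436],
        'price': [751.64, 885.17, 1067.78, 0, 1023.13]}
df = pd.DataFrame(data)

# zero prices become NaN, so the division yields NaN rather than inf
df['new'] = df['memory'] / df['price'].replace(0, np.nan)
print(df)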
pandas create new column based on divide column by another and check that I not divide by 0
I want to create a new column based on a division of two different columns, but make sure that I do not divide by 0, if the price is 0 set it to none. if I try to just divide I get 'inf' where the price is 0: df['new'] = df['memory'] / df['price'] id memory price 0 0 7568 751.64 1 1 53759 885.17 2 2 41140 1067.78 3 3 10558 0 4 4 44436 1023.13 I didn't find a way to add condition
[ "To avoid division by zero, I would avoid dividing the values by zero. Please take a look at the following example.\nI hope this helps.\nBest regards\nimport pandas as pd\n\ndata = {'id': [0, 1, 2, 3, 4], 'memory': [7568, 53759, 41140, 10558, 44436], 'price': [751.64, 885.17, 1067.78, 0, 1023.13]}\ndf = pd.DataFrame(data)\n\n# adding a new column and setting the values to \"none\"\ndf['new'] = \"none\"\n\nfor i in range(len(df)):\n if df.iat[i,2] != 0:\n df.iat[i, 3] = df.iat[i, 1] / df.iat[i, 2]\n \nprint(df)\n\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074636048_python.txt
Q: Plotly Scatter plot: how to create a scatter or line plot for only one group
My question might seem very easy, but I am having a difficult time understanding how to create a scatter plot or line plot for only one group of values.
For example, my data frame has 3 columns. My table looks like the following:

fruit   lb  price
orange   1  1.4
orange   2  1.7
apple    3  2.1
apple    1  1.4
kiwi     2  1.1

I want to create a scatter plot that has lb as the x axis and price as the y axis. However, I only want to make the plot for the orange category. What parameter should I use to specify the orange category?
What I have now is this:

px.scatter(df, x=df.lb, y=df.price)

A: Adding a user selection dropdown will accomplish your goal. Use a graph object to draw one trace per type of fruit and control its visibility. The dropdown then offers 'ALL' plus each individual type; each button takes a list of show/hide flags as input, so the dropdown selection toggles which traces are visible. Please refer to the examples in the reference.

import plotly.graph_objects as go

fig = go.Figure()

for f in df['fruit'].unique():
    dff = df.query('fruit == @f')
    fig.add_trace(go.Scatter(mode='markers', x=dff.lb, y=dff.price, name=f, visible=True))

fig.update_layout(
    updatemenus=[
        dict(
            active=0,
            buttons=list([
                dict(label="ALL",
                     method="update",
                     args=[{"visible": [True, True, True]},
                           {"title": "All fruit"}]),
                dict(label="Orange",
                     method="update",
                     args=[{"visible": [True, False, False]},
                           {"title": "Orange"}]),
                dict(label="Apple",
                     method="update",
                     args=[{"visible": [False, True, False]},
                           {"title": "Apple"}]),
                dict(label="Kiwi",
                     method="update",
                     args=[{"visible": [False, False, True]},
                           {"title": "Kiwi"}]),
            ]),
        )
    ])

fig.show()
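If no interactive toggle is needed, a simpler sketch (assuming the question's column names) is to filter the DataFrame down to the one category before handing it to plotly express:

import plotly.express as px

# keep only the rows for the category of interest
oranges = df[df['fruit'] == 'orange']
fig = px.scatter(oranges, x='lb', y='price', title='orange only')
fig.show()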
Plotly Scatter plot: how to create a scatter or line plot for only one group
My question might seem very easy, but I am having a difficult time understanding how to create a scatter plot or line plot for only one group of values. For example, my data frame, has 3 columns. My table looks like the following: fruit lb price orange 1 1.4 orange 2 1.7 apple 3 2.1 apple 1 1.4 kiwi 2 1.1 I want to create a scatter plot that has the lb as the x axis and price as the y axis. However, I only want to make the plot only for the orange category. What parameter should I use to specify the orange category? What I have now is this: px.scatter(df, x=df.lb, y=df.price)
[ "Adding a user selection dropdown will accomplish your goal. Use a graph object to draw a graph for each type of fruit and show the Show/Hide setting. All and only each type will be available as a type of dropdown. Give the list of Show/Hide as input for the button. Now, the drop-down selection will toggle between show and hide. Please refer to the examples in the reference.\nimport plotly.graph_objects as go\n\nfig = go.Figure()\n\nfor f in df['fruit'].unique():\n dff = df.query('fruit == @f')\n fig.add_trace(go.Scatter(mode='markers', x=dff.lb, y=dff.price, name=f, visible=True))\n \nfig.update_layout(\n updatemenus=[\n dict(\n active=0,\n buttons=list([\n dict(label=\"ALL\",\n method=\"update\",\n args=[{\"visible\": [True, True, True]},\n {\"title\": \"All fruit\"}]),\n dict(label=\"Orange\",\n method=\"update\",\n args=[{\"visible\": [True, False, False]},\n {\"title\": \"Orange\"}]),\n dict(label=\"Apple\",\n method=\"update\",\n args=[{\"visible\": [False, True, False]},\n {\"title\": \"Apple\"}]),\n dict(label=\"Kiwi\",\n method=\"update\",\n args=[{\"visible\": [False, False, True]},\n {\"title\": \"Kiwi\"}]),\n ]),\n )\n ])\n\nfig.show()\n\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "plotly", "python" ]
stackoverflow_0074632288_pandas_plotly_python.txt
Q: AttributeError: Can only use .str accessor with string values, which use np.object_ dtype in pandas
The str.replace method returns an attribute error.

dc_listings['price'].str.replace(',', '')

AttributeError: Can only use .str accessor with string values, which use np.object_ dtype in pandas

Here are the top 5 rows of my price column.
This Stack Overflow thread recommends checking whether my column has NaN values, but none of the values in my column are NaN.

A: As the error states, you can only use .str with string columns, and you have a float64. There won't be any commas in a float, so what you have won't really do anything, but in general, you could cast it first:

dc_listings['price'].astype(str).str.replace...

For example:

In [18]: df
Out[18]:
          a         b         c         d         e
0  0.645821  0.152197  0.006956  0.600317  0.239679
1  0.865723  0.176842  0.226092  0.416990  0.290406
2  0.046243  0.931584  0.020109  0.374653  0.631048
3  0.544111  0.967388  0.526613  0.794931  0.066736
4  0.528742  0.670885  0.998077  0.293623  0.351879

In [19]: df['a'].astype(str).str.replace("5", " hi ")
Out[19]:
0    0.64 hi 8208 hi hi 4779467
1         0.86 hi 7231174332336
2           0.04624337481411367
3       0. hi 44111244991 hi 194
4       0. hi 287421814241892
Name: a, dtype: object

A: Two ways:

1. You can convert the values with map(str) to fix this error:

dc_listings['price'].map(str).str.replace(',', '')

2. And if map doesn't work, you can alternatively use apply(str) as shown below:

dc_listings['price'].apply(str).str.replace(',', '')

A: If price is of dtype float64, then the data is not a string.
You can try dc_listings['price'].apply(function)

A: Randy has the solution to handle your problem with changing the whole column into str type. But when you have a non-str-type value (like NA, a list, a dict, a custom class) inside that column and want to filter those special values in the future, I suggest you create your own function and then apply it to the str values only, like this:

dc_listings['price'] = dc_listings['price'].apply(
    lambda x: x.replace(',', '') if type(x) is str else x
)

or more clearly, using def:

def replace_substring_or_return_value(value):
    if type(value) is str:
        return value.replace(',', '')
    else:
        return value

dc_listings['price'] = dc_listings['price'].apply(
    replace_substring_or_return_value
)

Although this might be bad practice, because you should use the same data type for every value in a column.
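If the column is ultimately meant to be numeric, a sketch going the other direction (assuming the dc_listings frame from the question): strip commas only from the strings, then coerce the whole column to a numeric dtype.

import pandas as pd

cleaned = dc_listings['price'].apply(
    lambda x: x.replace(',', '') if isinstance(x, str) else x
)
# errors='coerce' turns anything unparseable into NaN instead of raising
dc_listings['price'] = pd.to_numeric(cleaned, errors='coerce')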
AttributeError: Can only use .str accessor with string values, which use np.object_ dtype in pandas
Str.replace method returns an attribute error. dc_listings['price'].str.replace(',', '') AttributeError: Can only use .str accessor with string values, which use np.object_ dtype in pandas Here are the top 5 rows of my price column. This stack overflow thread recommends to check if my column has NAN values but non of the values in my column are NAN.
[ "As the error states, you can only use .str with string columns, and you have a float64. There won't be any commas in a float, so what you have won't really do anything, but in general, you could cast it first:\ndc_listings['price'].astype(str).str.replace...\n\nFor example:\nIn [18]: df\nOut[18]:\n a b c d e\n0 0.645821 0.152197 0.006956 0.600317 0.239679\n1 0.865723 0.176842 0.226092 0.416990 0.290406\n2 0.046243 0.931584 0.020109 0.374653 0.631048\n3 0.544111 0.967388 0.526613 0.794931 0.066736\n4 0.528742 0.670885 0.998077 0.293623 0.351879\n\nIn [19]: df['a'].astype(str).str.replace(\"5\", \" hi \")\nOut[19]:\n0 0.64 hi 8208 hi hi 4779467\n1 0.86 hi 7231174332336\n2 0.04624337481411367\n3 0. hi 44111244991 hi 194\n4 0. hi 287421814241892\nName: a, dtype: object\n\n", "Two ways:\n\nYou can use series to fix this error.\ndc_listings['price'].series.str.replace(',', '')\n\n\n\n\n\nAnd if series doesn't work you can also alteratively use apply(str) as shown below:\ndc_listings['price'].apply(str).str.replace(',', '')\n\n\n\n", "If price is a dtype float 64 then the data is not a string.\nYou can try dc_listings['price'].apply(function)\n", "Randy has the solution to handle your problem with changing the whole column into str type. But when you have non-str-type value (like NA, list, dict, a custom class) inside that column and wanted to filter those special values in the future, i suggest you create your own function and then apply it to the str value only, like this:\ndc_listings['price'] = dc_listings['price'].apply(\n lambda x: x.replace(',', '') if type(x) is str else x\n)\n\nor more clearly, using def :\ndef replace_substring_or_return_value(value):\n if type(value) is str: return x.replace(',', '')\n else: return value\n\ndc_listings['price'] = dc_listings['price'].apply(\n replace_substring_or_return_value\n)\n\nALTHOUGH this might be a bad practice, because you should use the same data type for every value in a column\n" ]
[ 148, 14, 0, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0052065909_pandas_python.txt
Q: How to add my own parameters into pymongo find function
I'm building a Python application that allows you to query data from MongoDB based on the start time and end time that the user puts in. I have been able to connect to MongoDB and put data there. I just can't seem to get the query right. I will show only the function in question, because I know that connecting to the database isn't the problem, only the query.

def query_from_to(self, begin, end):
    self.collection.find("$and" : [ { "x" : {"$gte": begin } }, { "x" : {"$lte": end } } ])

Is this even possible?

A: Put this format in your function. find() expects a single filter document (a dict), and both bounds can go under the same field name, so try:

collection.find({
    "x": {
        "$gte": begin,
        "$lte": end
    }
})
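Wired into the method from the question, a minimal working sketch looks like this; find() returns a cursor, which the caller can iterate or wrap in list():

def query_from_to(self, begin, end):
    # one filter document; no explicit $and is needed for two bounds on one field
    return self.collection.find({"x": {"$gte": begin, "$lte": end}})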
How to add my own parameters into pymongo find function
Im building a python application that allows you to query data from mongoDB based on the start time and end time that the user puts in. I have been able to connect to mongoDB and put data there. I just cant seem to get the query right. I will show only the function in question because I know that connecting to the database isn't the problem, only the query. def query_from_to(self, begin, end): self.collection.find("$and" : [ { "x" : {"$gte": begin } }, { "x" : {"$lte": end } } ]) Is this even possible?
[ "Put this format in your function try:\ncollection.find([\n {\n \"$Date\" : {\n \"$gte\": begin, \n \"$lte\": end \n } \n }\n ])\n\n" ]
[ 1 ]
[]
[]
[ "pymongo", "python" ]
stackoverflow_0074636478_pymongo_python.txt
Q: service account does not have storage.objects.get access for Google Cloud Storage
I have created a service account in Google Cloud Console and selected the role Storage / Storage Admin (i.e. full control of GCS resources).

gcloud projects get-iam-policy my_project seems to indicate that the role was actually selected:

- members:
  - serviceAccount:my_sa@my_project.iam.gserviceaccount.com
  role: roles/storage.admin
- members:
  - serviceAccount:my_sa@my_project.iam.gserviceaccount.com
  role: roles/storage.objectAdmin
- members:
  - serviceAccount:my_sa@my_project.iam.gserviceaccount.com
  role: roles/storage.objectCreator

And the documentation clearly indicates that the role roles/storage.admin comprises the permissions storage.objects.* (as well as storage.buckets.*).
But when I try using that service account in conjunction with the Google Cloud Storage Client Library for Python, I receive this error message:

my_sa@my_project.iam.gserviceaccount.com does not have storage.objects.get access to my_project/my_bucket.

So why would the selected role not be sufficient in this context?

A: The problem was apparently that the service account was associated with too many roles, perhaps as a result of previous configuration attempts.
These steps resolved the issue:

removed all (three) roles for the offending service account (member) my_sa under IAM & Admin / IAM
deleted my_sa under IAM & Admin / Service accounts
recreated my_sa (again with role Storage / Storage Admin)

Effects are like this:

my_sa shows up with one role (Storage Admin) under IAM & Admin / IAM
my_sa shows up as member under Storage / Browser / my_bucket / Edit bucket permissions

A: It's worth noting that you need to wait up to a few minutes for permissions to start working in case you just assigned them. At least that's what happened to me after:

gcloud projects add-iam-policy-binding xxx --member "serviceAccount:[email protected]" --role "roles/storage.objectViewer"

A: Go to your bucket's permissions section and open the add-permissions section for your bucket. For example, the insufficient service account that gcloud tells you about might be:

[email protected]

Add this service account as a member, then grant these roles:

Cloud Storage - Storage Admin
Cloud Storage - Storage Object Admin
Cloud Storage - Storage Object Creator

Then you should have sufficient permissions to make changes on your bucket.

A: For me, it was because I deployed with "default-bucket" as the parameter needed for the storage emulator.

admin.storage().bucket('default-bucket'); // do not deploy that

To fix it, I set the default bucket name at the initialization of the Firebase admin SDK.

const admin = require('firebase-admin');

const config = process.env.FUNCTIONS_EMULATOR ? {
  storageBucket: 'default-bucket',
} : {
  storageBucket: 'YOUR_FIREBASE_STORAGE_BUCKET',
};

admin.initializeApp(config);

const bucket = admin.storage().bucket();

A: I just realized this sometimes happens when you are first creating the Firebase/Firestore/Storage project.
If you got this error in your first installation/deploy/setup, just wait 1 minute and try again.
It seems some delays in the Google Cloud deploys/serving are responsible for this.

A: I got this error when I copied a cloud function from another project, because I forgot to update the storage bucket. Silly mistake.

admin.initializeApp({
  storageBucket: "gs://*****.appspot.com",
});

A: In my case, after the service account was created, the interface returned the error "service account does not have storage.objects.get access for Google Cloud Storage". But when I tried again the next day, everything was fine :)
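To rule out a credential mix-up (authenticating as one account while the roles were granted to another), here is a small sketch with the Python client; the key file name and object listing are assumptions, not from the question:

from google.cloud import storage

# The client now acts with exactly the roles granted to this service account
client = storage.Client.from_service_account_json("my_sa_key.json")
bucket = client.bucket("my_bucket")
for blob in client.list_blobs(bucket, max_results=5):
    print(blob.name)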
service account does not have storage.objects.get access for Google Cloud Storage
I have created a service account in Google Cloud Console and selected role Storage / Storage Admin (i.e. full control of GCS resources). gcloud projects get-iam-policy my_project seems to indicate that the role was actually selected: - members: - serviceAccount:my_sa@my_project.iam.gserviceaccount.com role: roles/storage.admin - members: - serviceAccount:my_sa@my_project.iam.gserviceaccount.com role: roles/storage.objectAdmin - members: - serviceAccount:my_sa@my_project.iam.gserviceaccount.com role: roles/storage.objectCreator And documentation clearly indicates that role roles/storage.admin comprises permissions storage.objects.* (as well as storage.buckets.*). But when I try using that service account in conjunction with the Google Cloud Storage Client Library for Python, I receive this error message: my_sa@my_project.iam.gserviceaccount.com does not have storage.objects.get access to my_project/my_bucket. So why would the selected role not be sufficient in this context?
[ "The problem was apparently that the service account was associated with too many roles, perhaps as a results of previous configuration attempts.\nThese steps resolved the issue:\n\nremoved all (three) roles for the offending service account (member) my_sa under IAM & Admin / IAM\ndeleted my_sa under IAM & Admin / Service accounts\nrecreated my_sa (again with role Storage / Storage Admin)\n\nEffects are like this:\n\nmy_sa shows up with one role (Storage Admin) under IAM & Admin / IAM\nmy_sa shows up as member under Storage / Browser / my_bucket / Edit bucket permissions\n\n", "It's worth noting, that you need to wait up to a few minutes for permissions to be working in case you just assigned them. At least that's what happened to me after:\ngcloud projects add-iam-policy-binding xxx --member\n\"serviceAccount:[email protected]\" --role \"roles/storage.objectViewer\"\n\n", "Go to your bucket's permissions section and open add permissions section for your bucket. For example, insufficient service, which gcloud tells you, is;\[email protected] \n\nAdd this service as user then give these roles;\n\nCloud Storage - Storage Admin\nCloud Storage - Storage Object Admin\nCloud Storage - Storage Object Creator\n\nThen you should have sufficient permissions to make changes on your bucket.\n", "For me, it was because deployed with the \"default-bucket\" as parameter needed for the storage emulator.\nadmin.storage().bucket('default-bucket'); // do not deploy that\n\nTo fix it, I set the default bucket name at the initialization of the firebase admin.\nconst admin = require('firebase-admin');\n\nconst config = process.env.FUNCTIONS_EMULATOR ? {\n storageBucket: 'default-bucket',\n} : {\n storageBucket: 'YOUT_FIREBASE_STORAGE_BUCKET',\n};\n\nadmin.initializeApp(config);\n\nconst bucket = admin.storage().bucket();\n\n", "I just realized this happens some times when you are just creating the Firebase/Firestore/Storage project by first time.\nIf you got this error in your first installation/deploy/setup, just wait 1 minute and try again.\nSeems like some delays in the Google Cloud deploys/serving are responsible of this.\n", "I got this error when I copied a cloud function from another project because I forgot to update the storage bucket. Silly mistake.\nadmin.initializeApp({\n storageBucket: \"gs://*****.appspot.com\",\n});\n\n", "in my case, after the service account is created, interface returns error: \"service account does not have storage.objects.get access for Google Cloud Storage\".\nBut, When I tried again the next day, everything was fine :)\n" ]
[ 24, 18, 16, 1, 1, 0, 0 ]
[]
[]
[ "google_cloud_platform", "google_cloud_storage", "python", "service_accounts" ]
stackoverflow_0051410633_google_cloud_platform_google_cloud_storage_python_service_accounts.txt
Q: How to understand the flaw in my simple three part python code?
My Python exercise in 'classes' is as follows:

You have been recruited by your friend, a linguistics enthusiast, to create a utility tool that can perform analysis on a given piece of text. Complete the class "analyzedText" with the following methods:

Constructor (__init__) - This method should take the argument text, make it lowercase, and remove all punctuation. Assume only the following punctuation is used: period (.), exclamation mark (!), comma (,), and question mark (?). Assign this newly formatted text to a new attribute called fmtText.

freqAll - This method should create and return a dictionary of all unique words in the text along with the number of times they occur in the text. Each key in the dictionary should be the unique word appearing in the text and the associated value should be the number of times it occurs in the text. Create this dictionary from the fmtText attribute.

This was my code:

class analysedText(object):

    def __init__(self, text):
        formattedText = text.replace('.', ' ').replace(',', ' ').replace('!', ' ').replace('?', ' ')
        formattedText = formattedText.lower()
        self.fmtText = formattedText

    def freqAll(self):
        wordList = self.fmtText.split(' ')

        wordDict = {}
        for word in set(wordList):
            wordDict[word] = wordList(word)

        return wordDict

I get errors on both of these and I can't seem to figure it out after a lot of little adjustments. I suspect the issue in the first part is when I try to assign a value to the newly formatted text, but I cannot think of a workable solution. As for the second part, I am at a complete loss - I was wrongly confident my answer was correct, but I received a fail error when I ran it through the classroom's code cell to test it.

A: On the assumption that by 'errors' you mean a TypeError, this is caused by line 13, wordDict[word] = wordList(word).
wordList is a list, and by using the ()/brackets you're telling Python that you want to call that list as a function. Which it cannot do.
According to your task, you are to instead find the occurrences of words in the list, which you could achieve with the .count() method. This method returns the total number of occurrences of an element in a list. (Feel free to read more about it here.)
With this modification (this is assuming you want wordDict to contain a dictionary with the word as the key and the occurrence count as the value), your freqAll function would look something like this:

def freqAll(self):
    wordList = self.fmtText.split()

    wordDict = {}
    for word in set(wordList):
        wordDict[word] = wordList.count(word)  # wordList.count(word) returns the number of times the string word appears as an element in wordList

    return wordDict

Although you could also achieve this same task with a class known as collections.Counter (of course this means you have to import collections), which you can read more about here.
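A compact sketch of the collections.Counter route the answer mentions, which collapses the counting loop into one pass:

from collections import Counter

def freqAll(self):
    # Counter builds the word -> count mapping directly from the word list
    return dict(Counter(self.fmtText.split()))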
How to understand the flaw in my simple three part python code?
My Python exercise in 'classes' is as follows: You have been recruited by your friend, a linguistics enthusiast, to create a utility tool that can perform analysis on a given piece of text. Complete the class "analyzedText" with the following methods: Constructor (_init_) - This method should take the argument text, make is lowercase and remove all punctuation. Assume only the following punctuation is used: period (.), exclamation mark (!), comma (,), and question mark (?). Assign this newly formatted text to a new attribute called fmtText. freqAll - This method should create and return dictionary of all unique words in the text along with the number of times they occur in the text. Each key in the dictionary should be the unique word appearing in the text and the associated value should be the number of times it occurs in the text. Create this dictionary from the fmtText attribute. This was my code: class analysedText(object) def __init__ (self, text): formattedText = text.replace('.',' ').replace(',',' ').replace('!',' ').replace('?',' ') formattedText = formattedText.lower() self.fmtText = formattedText def freqAll(self): wordList = self.fmtText.split(' ') wordDict = {} for word in set(wordList): wordDict[word] = wordList(word) return wordDict I get errors on both of these and I can't seem to figure it out after a lot of little adjustments. I suspect the issue in the first part is when I try to assign a value to the newly formatted text but I cannot think of a workable solution. As for the second part, I am at a complete loss - I was wrongfully confident my answer was correct but I received a fail error when I ran it through the classroom's code cell to test it.
[ "On the assumption that by 'errors' you mean a TypeError, this is caused because of line 13, wordDict[word] = wordList(word).\nwordList is a list, and by using the ()/brackets you're telling Python that you want to call that list as a function. Which it cannot do.\nAccording to your task, you are to instead find the occurrences of words in the list, which you could achieve with the .count() method. This method basically returns the total number of occurrences of an element in a list. (Feel free to read more about it here)\nWith this modification, (this is assuming you want wordDict to contain a dictionary with the word as the key, and the occurrence as the value) your freqAll function would look something like this:\ndef freqAll(self):\n wordList = self.fmtText.split()\n\n wordDict = {}\n for word in set(wordList):\n wordDict[word] = wordList.count(word) # wordList.count(word) returns the number of times the string word appears as an element in wordList\n\n return wordDict\n\nAlthough you could also achieve this same task with a class known as collections.Counter, (of course this means you have to import collections) which you can read more about here\n" ]
[ 1 ]
[]
[]
[ "class", "coursera_api", "python" ]
stackoverflow_0074635479_class_coursera_api_python.txt
Q: Use of "DGLGraph.apply_edges" and "DGLGraph.send_and_recv" API (to compute messages) as a replacement of "DGLGraph.send" and "DGLGraph.recv I'm using DGL (Python package dedicated to deep learning on graphs) for training of defining a graph, defining Graph Convolutional Network (GCN) and train. I faced a problem which I’m dealing with for two weeks. I developed my GCN code based on the link below: enter link description here I’m facing an error for this part of the above mentioned code: class GCNLayer(nn.Module): def init(self, in_feats, out_feats): super(GCNLayer, self).init() self.linear = nn.Linear(in_feats, out_feats) def forward(self, g, inputs): # g is the graph and the inputs is the input node features # first set the node features g.ndata['h'] = inputs # trigger message passing on all edges g.send(g.edges(), gcn_message) # trigger aggregation at all nodes g.recv(g.nodes(), gcn_reduce) # get the result node features h = g.ndata.pop('h') # perform linear transformation return self.linear(h) I’m getting an error below: dgl._ffi.base.DGLError: DGLGraph.send is deprecated. As a replacement, use DGLGraph.apply_edges API to compute messages as edge data. Then use DGLGraph.send_and_recv and set the message function as dgl.function.copy_e to conduct message aggregation* As it is guided in the error, I wonder to know how can I use DGLGraph.apply_edges instead of DGLGraph.send? In "DGLGraph.send" command we have 2 arguments "g.edges()" and "gcn_message". How these arguments can be converted to the arguments required for "DGLGraph.apply_edges" which are (func, edges=‘ALL’, etype=None, inplace=False ) (According to this link? Also, the same question for "DGLGraph.send_and_recv". In "DGLGraph.recv" we had 2 arguments "g.nodes()" and "gcn_reduce". How these arguments can be converted to the arguments required for "DGLGraph.send_and_recv" which are "(edges, message_func, reduce_func, apply_node_func=None, etype=None, inplace=False)" (According to this link)? I would be very grateful if you can help me with this big challenge. Thank you A: DGLGraph.apply_edges(func, edges='ALL', etype=None, inplace=False) is used to update edge features using the function 'func' on all the edges in 'edges'. DGLGraph.send_and_recv(edges, message_func, reduce_func, apply_node_func=None, etype=None, inplace=False) is used to pass messages, reduce messages and update the node features for all the edges in 'edges'. To get your forward method to work you can update your code as below def forward(self, g, inputs): g.ndata['h'] = inputs g.send_and_recv(g.edges(), fn.copy_src("h", "m"), fn.sum("m", "h")) h = g.ndata.pop("h") return self.linear(h) You can use your own message_func (message generation) and reduce_func (message aggregation) to fit your purpose. A: try code below, it may solve your problem def forward(self, g, inputs): g.ndata['h'] = inputs g.send_and_recv(g.edges(), gcn_message, gcn_reduce) h = g.ndata.pop('h') return self.linear(h)
Use of "DGLGraph.apply_edges" and "DGLGraph.send_and_recv" API (to compute messages) as a replacement of "DGLGraph.send" and "DGLGraph.recv
I'm using DGL (Python package dedicated to deep learning on graphs) for training of defining a graph, defining Graph Convolutional Network (GCN) and train. I faced a problem which I’m dealing with for two weeks. I developed my GCN code based on the link below: enter link description here I’m facing an error for this part of the above mentioned code: class GCNLayer(nn.Module): def init(self, in_feats, out_feats): super(GCNLayer, self).init() self.linear = nn.Linear(in_feats, out_feats) def forward(self, g, inputs): # g is the graph and the inputs is the input node features # first set the node features g.ndata['h'] = inputs # trigger message passing on all edges g.send(g.edges(), gcn_message) # trigger aggregation at all nodes g.recv(g.nodes(), gcn_reduce) # get the result node features h = g.ndata.pop('h') # perform linear transformation return self.linear(h) I’m getting an error below: dgl._ffi.base.DGLError: DGLGraph.send is deprecated. As a replacement, use DGLGraph.apply_edges API to compute messages as edge data. Then use DGLGraph.send_and_recv and set the message function as dgl.function.copy_e to conduct message aggregation* As it is guided in the error, I wonder to know how can I use DGLGraph.apply_edges instead of DGLGraph.send? In "DGLGraph.send" command we have 2 arguments "g.edges()" and "gcn_message". How these arguments can be converted to the arguments required for "DGLGraph.apply_edges" which are (func, edges=‘ALL’, etype=None, inplace=False ) (According to this link? Also, the same question for "DGLGraph.send_and_recv". In "DGLGraph.recv" we had 2 arguments "g.nodes()" and "gcn_reduce". How these arguments can be converted to the arguments required for "DGLGraph.send_and_recv" which are "(edges, message_func, reduce_func, apply_node_func=None, etype=None, inplace=False)" (According to this link)? I would be very grateful if you can help me with this big challenge. Thank you
[ "DGLGraph.apply_edges(func, edges='ALL', etype=None, inplace=False) is used to update edge features using the function 'func' on all the edges in 'edges'.\nDGLGraph.send_and_recv(edges, message_func, reduce_func, apply_node_func=None, etype=None, inplace=False) is used to pass messages, reduce messages and update the node features for all the edges in 'edges'.\nTo get your forward method to work you can update your code as below\ndef forward(self, g, inputs):\n g.ndata['h'] = inputs\n g.send_and_recv(g.edges(), fn.copy_src(\"h\", \"m\"), fn.sum(\"m\", \"h\"))\n h = g.ndata.pop(\"h\")\n\n return self.linear(h)\n\nYou can use your own message_func (message generation) and reduce_func (message aggregation) to fit your purpose.\n", "try code below, it may solve your problem\ndef forward(self, g, inputs):\n g.ndata['h'] = inputs\n g.send_and_recv(g.edges(), gcn_message, gcn_reduce)\n h = g.ndata.pop('h')\n return self.linear(h)\n\n" ]
[ 0, 0 ]
[]
[]
[ "deep_learning", "dgl", "graph", "graph_neural_network", "python" ]
stackoverflow_0071848343_deep_learning_dgl_graph_graph_neural_network_python.txt
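For readers who want to keep custom message/reduce functions rather than DGL's built-ins, here is a hedged sketch of user-defined functions compatible with send_and_recv. The field names 'h' and 'm' are carried over from the question; the torch dependency is an assumption based on the nn.Module code shown.

import torch

def gcn_message(edges):
    # message UDF: copy each edge's source-node feature 'h' into mailbox field 'm'
    return {'m': edges.src['h']}

def gcn_reduce(nodes):
    # reduce UDF: sum the messages each node received in its mailbox
    return {'h': torch.sum(nodes.mailbox['m'], dim=1)}

# usage inside forward(), as in the second answer:
# g.send_and_recv(g.edges(), gcn_message, gcn_reduce)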
Q: how to compare two dictionaries and display the values that differs in template? I try to compare two dictionaries and if on key, value differs from the other dictionary then print the difference key, value in red. I think my views.py is correct. But how to show the difference in the template? So I have views.py: def data_compare(): fruits = { "appel": 3962.00, "waspeen": 3304.07, "ananas": 24, } set1 = set([(k, v) for k, v in fruits.items()]) return set1 def data_compare2(): fruits2 = { "appel": 3962.00, "waspeen": 3304.07, "ananas": 30, } set2 = set([(k, v) for k, v in fruits2.items()]) return set2 def data_combined(request): data1 = data_compare() data2 = data_compare2() diff_set = list(data1 - data2) + list(data2 - data1) print(data1) return render(request, "main/data_compare.html", context={"data1": data1, "data2": data2, "diff_set": diff_set}) and template: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Document</title> </head> <body> <div class="container center"> {% for key, value in data1 %} <span {% if diff_set %} style="color: red;">{% endif %} {{ key }}: {{value}}</span><br> {% endfor %} </div> <div class="container center"> {% for key, value in data2 %} <span {% if diff_set %} style="color: red;">{% endif %}{{ key }}: {{value}}</span><br> {% endfor %} </div> </body> </html> I did a print(diff_set) and that shows: [('ananas', 24), ('ananas', 30)] so that is correct But everything is now red. and only in this case ananas has to be red Question: how to return the key, value from a dictionary that differes from the other dictionary in red? A: Looks dictdiff might be useful in your case. The following example is not the same as your output, but I hope it is useful. import dictdiffer fruits = { "appel": 3962.00, "waspeen": 3304.07, "ananas": 24, } fruits2 = { "appel": 3962.00, "waspeen": 3304.07, "ananas": 30, } diff = list(dictdiffer.diff(first=fruits, second=fruits2)) print(diff) # [('change', 'ananas', (24, 30))] A: Following the same principle from my last answer. Since you are trying to compare two dictionaries with same keys, you can just iterate over both of them at once, compare the values and if they differ append the key to a condition list: def compare_data(request): fruits = {"appel": 3962.00,"waspeen": 3304.07,"ananas": 24,} fruits2 = {"appel": 3962.00,"waspeen": 3304.07,"ananas": 30,} diff_set = [] for k, v in fruits.items(): if fruits[k] != fruits2[k]: diff_set.append(k) context = { 'fruits': fruits, 'fruits2': fruits2, 'diff_set': diff_set, } return render(request, 'main/data_compare.html', context) template.html: {% extends 'base.html' %} {% block content %} <div class="container center"> {% for key, value in fruits.items %} <span {% if key in diff_set %} style="color: red;" {% endif %}>{{ key }}: {{value}}</span><br> {% endfor %} </div> <div class="container center"> {% for key, value in fruits2.items %} <span {% if key in diff_set %} style="color: red;"{% endif %}>{{ key }}: {{value}}</span><br> {% endfor %} </div> {% endblock %} Edit In your case, everything is showing red because of your IF statement: {% if diff_set %}...{% endif %} Which checks if the 'diff_set' variable contains any values. It does, so it returns True every iteration. 
In order to use your 'diff_set' data structure: [('ananas', 24), ('ananas', 30)] One needs to loop through the list and check if the first value of the tuple is equal to the key. Even if you do that, with the current HTML structure it would print 'ananas' in red twice.
how to compare two dictionaries and display the values that differs in template?
I try to compare two dictionaries and if on key, value differs from the other dictionary then print the difference key, value in red. I think my views.py is correct. But how to show the difference in the template? So I have views.py: def data_compare(): fruits = { "appel": 3962.00, "waspeen": 3304.07, "ananas": 24, } set1 = set([(k, v) for k, v in fruits.items()]) return set1 def data_compare2(): fruits2 = { "appel": 3962.00, "waspeen": 3304.07, "ananas": 30, } set2 = set([(k, v) for k, v in fruits2.items()]) return set2 def data_combined(request): data1 = data_compare() data2 = data_compare2() diff_set = list(data1 - data2) + list(data2 - data1) print(data1) return render(request, "main/data_compare.html", context={"data1": data1, "data2": data2, "diff_set": diff_set}) and template: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Document</title> </head> <body> <div class="container center"> {% for key, value in data1 %} <span {% if diff_set %} style="color: red;">{% endif %} {{ key }}: {{value}}</span><br> {% endfor %} </div> <div class="container center"> {% for key, value in data2 %} <span {% if diff_set %} style="color: red;">{% endif %}{{ key }}: {{value}}</span><br> {% endfor %} </div> </body> </html> I did a print(diff_set) and that shows: [('ananas', 24), ('ananas', 30)] so that is correct But everything is now red. and only in this case ananas has to be red Question: how to return the key, value from a dictionary that differes from the other dictionary in red?
[ "Looks dictdiff might be useful in your case. The following example is not the same as your output, but I hope it is useful.\nimport dictdiffer\n\nfruits = {\n \"appel\": 3962.00,\n \"waspeen\": 3304.07,\n \"ananas\": 24,\n}\nfruits2 = {\n \"appel\": 3962.00,\n \"waspeen\": 3304.07,\n \"ananas\": 30,\n}\n\ndiff = list(dictdiffer.diff(first=fruits, second=fruits2))\nprint(diff) # [('change', 'ananas', (24, 30))]\n\n", "Following the same principle from my last answer. Since you are trying to compare two dictionaries with same keys, you can just iterate over both of them at once, compare the values and if they differ append the key to a condition list:\ndef compare_data(request):\n fruits = {\"appel\": 3962.00,\"waspeen\": 3304.07,\"ananas\": 24,}\n fruits2 = {\"appel\": 3962.00,\"waspeen\": 3304.07,\"ananas\": 30,}\n diff_set = []\n\n for k, v in fruits.items():\n if fruits[k] != fruits2[k]:\n diff_set.append(k)\n\n context = {\n 'fruits': fruits, \n 'fruits2': fruits2, \n 'diff_set': diff_set, \n }\n return render(request, 'main/data_compare.html', context)\n\ntemplate.html:\n{% extends 'base.html' %}\n\n{% block content %}\n <div class=\"container center\">\n {% for key, value in fruits.items %}\n <span {% if key in diff_set %} style=\"color: red;\" {% endif %}>{{ key }}: {{value}}</span><br>\n {% endfor %}\n </div>\n\n <div class=\"container center\">\n {% for key, value in fruits2.items %}\n <span {% if key in diff_set %} style=\"color: red;\"{% endif %}>{{ key }}: {{value}}</span><br>\n {% endfor %}\n </div>\n{% endblock %}\n\nEdit\nIn your case, everything is showing red because of your IF statement:\n{% if diff_set %}...{% endif %}\n\nWhich checks if the 'diff_set' variable contains any values. It does, so it returns True every iteration.\nIn order to use your 'diff_set' data structure:\n[('ananas', 24), ('ananas', 30)]\nOne needs to loop through the list and check if if the first value of the tuple is equal to the key value. Even if you do that, with current html structure it would print 'ananas' in red twice.\n" ]
[ 0, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074633851_django_python.txt
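The per-key loop in the second answer can be condensed to a comprehension; a sketch assuming both dictionaries share the same keys, as in the question's data.

fruits = {"appel": 3962.00, "waspeen": 3304.07, "ananas": 24}
fruits2 = {"appel": 3962.00, "waspeen": 3304.07, "ananas": 30}

# keys whose values differ between the two dicts
diff_set = [k for k in fruits if fruits[k] != fruits2.get(k)]
print(diff_set)  # ['ananas']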
Q: Nested try/except statement for method I wonder if it is possible to handle the exceptions raised when calling a method via a function (this is necessary as in the production code different objects are created depending on args passed) as in the following example. Function createObj triggers the creation of an object Obj_A based off different criteria and is supposed to handle any exceptions that may occur with Obj_A. def createObj(): try: return Obj_A() except: print("Bad boy!") Obj_A has a method that creates a nested object, in which I would like to catch exceptions and handle those at the level of createObj: class Obj_A(object): def __init__(self): pass def myFunc(self, var): return self.Obj_B(self, var) class Obj_B(object): def __init__(my, self, var): try: 1/0 except: raise ValueError("Don't divide by zero") Calling createObj works just fine. But calling createObj.myFunc('var') raises the ValueError("Don't divide by zero"). Of course, handling the error on a try: createObj().myFunc('var') except: print("Not what I need") would work, but is unfortunately not desirable for this use case. Is there a way to handle this exception on the createObj level and return Bad boy!? A: You want to handle an exception with the except clause that will be raised in the future after the corresponding try statement. It's impossible, and unnatural if it's possible. Instead, do in other way like this example. def createObj(): def handle_exception(e): print("Bad boy!") return Obj_A(handle_exception) class Obj_A(object): def __init__(self, handle_exception): self.handle_exception = handle_exception def myFunc(self, var): try: return self.Obj_B(self, var) except Exception as e: self.handle_exception(e)
Nested try/except statement for method
I wonder if it is possible to handle the exceptions raised when calling a method via a function (this is necessary as in the production code different objects are created depending on args passed) as in the following example. Function createObj triggers the creation of an object Obj_A based off different criteria and is supposed to handle any exceptions that may occur with Obj_A. def createObj(): try: return Obj_A() except: print("Bad boy!") Obj_A has a method that creates a nested object, in which I would like to catch exceptions and handle those at the level of createObj: class Obj_A(object): def __init__(self): pass def myFunc(self, var): return self.Obj_B(self, var) class Obj_B(object): def __init__(my, self, var): try: 1/0 except: raise ValueError("Don't divide by zero") Calling createObj works just fine. But calling createObj.myFunc('var') raises the ValueError("Don't divide by zero"). Of course, handling the error on a try: createObj().myFunc('var') except: print("Not what I need") would work, but is unfortunately not desirable for this use case. Is there a way to handle this exception on the createObj level and return Bad boy!?
[ "You want to handle an exception with the except clause that will be raised in the future after the corresponding try statement. It's impossible, and unnatural if it's possible.\nInstead, do in other way like this example.\ndef createObj():\n def handle_exception(e):\n print(\"Bad boy!\")\n return Obj_A(handle_exception)\n\nclass Obj_A(object):\n def __init__(self, handle_exception):\n self.handle_exception = handle_exception\n\n def myFunc(self, var):\n try:\n return self.Obj_B(self, var)\n except Exception as e:\n self.handle_exception(e)\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074635376_python_python_3.x.txt
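Besides the callback pattern in the answer above, one alternative sketch: if eagerly exercising the failing method is acceptable for the use case (an assumption not stated in the question), createObj can keep the handling by calling it inside its own try block. The class body here is a minimal stand-in for the nested Obj_B failure.

class Obj_A:
    def myFunc(self, var):
        raise ValueError("Don't divide by zero")  # stand-in for the nested Obj_B failure

def createObj():
    try:
        obj = Obj_A()
        obj.myFunc('var')  # exercise the risky path while still inside the try block
        return obj
    except ValueError:
        print("Bad boy!")

createObj()  # prints: Bad boy!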
Q: List Highest Correlation Pairs from a Large Correlation Matrix in Pandas? How do you find the top correlations in a correlation matrix with Pandas? There are many answers on how to do this with R (Show correlations as an ordered list, not as a large matrix or Efficient way to get highly correlated pairs from large data set in Python or R), but I am wondering how to do it with pandas? In my case the matrix is 4460x4460, so can't do it visually. A: You can use DataFrame.values to get an numpy array of the data and then use NumPy functions such as argsort() to get the most correlated pairs. But if you want to do this in pandas, you can unstack and sort the DataFrame: import pandas as pd import numpy as np shape = (50, 4460) data = np.random.normal(size=shape) data[:, 1000] += data[:, 2000] df = pd.DataFrame(data) c = df.corr().abs() s = c.unstack() so = s.sort_values(kind="quicksort") print so[-4470:-4460] Here is the output: 2192 1522 0.636198 1522 2192 0.636198 3677 2027 0.641817 2027 3677 0.641817 242 130 0.646760 130 242 0.646760 1171 2733 0.670048 2733 1171 0.670048 1000 2000 0.742340 2000 1000 0.742340 dtype: float64 A: @HYRY's answer is perfect. Just building on that answer by adding a bit more logic to avoid duplicate and self correlations and proper sorting: import pandas as pd d = {'x1': [1, 4, 4, 5, 6], 'x2': [0, 0, 8, 2, 4], 'x3': [2, 8, 8, 10, 12], 'x4': [-1, -4, -4, -4, -5]} df = pd.DataFrame(data = d) print("Data Frame") print(df) print() print("Correlation Matrix") print(df.corr()) print() def get_redundant_pairs(df): '''Get diagonal and lower triangular pairs of correlation matrix''' pairs_to_drop = set() cols = df.columns for i in range(0, df.shape[1]): for j in range(0, i+1): pairs_to_drop.add((cols[i], cols[j])) return pairs_to_drop def get_top_abs_correlations(df, n=5): au_corr = df.corr().abs().unstack() labels_to_drop = get_redundant_pairs(df) au_corr = au_corr.drop(labels=labels_to_drop).sort_values(ascending=False) return au_corr[0:n] print("Top Absolute Correlations") print(get_top_abs_correlations(df, 3)) That gives the following output: Data Frame x1 x2 x3 x4 0 1 0 2 -1 1 4 0 8 -4 2 4 8 8 -4 3 5 2 10 -4 4 6 4 12 -5 Correlation Matrix x1 x2 x3 x4 x1 1.000000 0.399298 1.000000 -0.969248 x2 0.399298 1.000000 0.399298 -0.472866 x3 1.000000 0.399298 1.000000 -0.969248 x4 -0.969248 -0.472866 -0.969248 1.000000 Top Absolute Correlations x1 x3 1.000000 x3 x4 0.969248 x1 x4 0.969248 dtype: float64 A: Few lines solution without redundant pairs of variables: corr_matrix = df.corr().abs() #the matrix is symmetric so we need to extract upper triangle matrix without diagonal (k = 1) sol = (corr_matrix.where(np.triu(np.ones(corr_matrix.shape), k=1).astype(bool)) .stack() .sort_values(ascending=False)) #first element of sol series is the pair with the biggest correlation Then you can iterate through names of variables pairs (which are pandas.Series multi-indexes) and theirs values like this: for index, value in sol.items(): # do some staff A: Combining some features of @HYRY and @arun's answers, you can print the top correlations for dataframe df in a single line using: df.corr().unstack().sort_values().drop_duplicates() Note: the one downside is if you have 1.0 correlations that are not one variable to itself, the drop_duplicates() addition would remove them A: I liked Addison Klinke's post the most, as being the simplest, but used Wojciech Moszczyńsk’s suggestion for filtering and charting, but extended the filter to avoid absolute values, so given a large correlation matrix, 
filter it, chart it, and then flatten it: Created, Filtered and Charted dfCorr = df.corr() filteredDf = dfCorr[((dfCorr >= .5) | (dfCorr <= -.5)) & (dfCorr !=1.000)] plt.figure(figsize=(30,10)) sn.heatmap(filteredDf, annot=True, cmap="Reds") plt.show() Function In the end, I created a small function to create the correlation matrix, filter it, and then flatten it. As an idea, it could easily be extended, e.g., asymmetric upper and lower bounds, etc. def corrFilter(x: pd.DataFrame, bound: float): xCorr = x.corr() xFiltered = xCorr[((xCorr >= bound) | (xCorr <= -bound)) & (xCorr !=1.000)] xFlattened = xFiltered.unstack().sort_values().drop_duplicates() return xFlattened corrFilter(df, .7) Follow-Up Eventually, I refined the functions # Returns correlation matrix def corrFilter(x: pd.DataFrame, bound: float): xCorr = x.corr() xFiltered = xCorr[((xCorr >= bound) | (xCorr <= -bound)) & (xCorr !=1.000)] return xFiltered # flattens correlation matrix with bounds def corrFilterFlattened(x: pd.DataFrame, bound: float): xFiltered = corrFilter(x, bound) xFlattened = xFiltered.unstack().sort_values().drop_duplicates() return xFlattened # Returns correlation for a variable from flattened correlation matrix def filterForLabels(df: pd.DataFrame, label): try: sideLeft = df[label,] except: sideLeft = pd.DataFrame() try: sideRight = df[:,label] except: sideRight = pd.DataFrame() if sideLeft.empty and sideRight.empty: return pd.DataFrame() elif sideLeft.empty: concat = sideRight.to_frame() concat.rename(columns={0:'Corr'},inplace=True) return concat elif sideRight.empty: concat = sideLeft.to_frame() concat.rename(columns={0:'Corr'},inplace=True) return concat else: concat = pd.concat([sideLeft,sideRight], axis=1) concat["Corr"] = concat[0].fillna(0) + concat[1].fillna(0) concat.drop(columns=[0,1], inplace=True) return concat A: You can do graphically according to this simple code by substituting your data. corr = df.corr() kot = corr[corr>=.9] plt.figure(figsize=(12,8)) sns.heatmap(kot, cmap="Greens") A: Use the code below to view the correlations in the descending order. # See the correlations in descending order corr = df.corr() # df is the pandas dataframe c1 = corr.abs().unstack() c1.sort_values(ascending = False) A: Lot's of good answers here. The easiest way I found was a combination of some of the answers above. corr = corr.where(np.triu(np.ones(corr.shape), k=1).astype(np.bool)) corr = corr.unstack().transpose()\ .sort_values(by='column', ascending=False)\ .dropna() A: Combining most the answers above into a short snippet: def top_entries(df): mat = df.corr().abs() # Remove duplicate and identity entries mat.loc[:,:] = np.tril(mat.values, k=-1) mat = mat[mat>0] # Unstack, sort ascending, and reset the index, so features are in columns # instead of indexes (allowing e.g. a pretty print in Jupyter). # Also rename these it for good measure. return (mat.unstack() .sort_values(ascending=False) .reset_index() .rename(columns={ "level_0": "feature_a", "level_1": "feature_b", 0: "correlation" })) A: Use itertools.combinations to get all unique correlations from pandas own correlation matrix .corr(), generate list of lists and feed it back into a DataFrame in order to use '.sort_values'. Set ascending = True to display lowest correlations on top corrank takes a DataFrame as argument because it requires .corr(). 
def corrank(X: pandas.DataFrame): import itertools df = pd.DataFrame([[(i,j),X.corr().loc[i,j]] for i,j in list(itertools.combinations(X.corr(), 2))],columns=['pairs','corr']) print(df.sort_values(by='corr',ascending=False)) corrank(X) # prints a descending list of correlation pair (Max on top) A: The following function should do the trick. This implementation Removes self correlations Removes duplicates Enables the selection of top N highest correlated features and it is also configurable so that you can keep both the self correlations as well as the duplicates. You can also to report as many feature pairs as you wish. def get_feature_correlation(df, top_n=None, corr_method='spearman', remove_duplicates=True, remove_self_correlations=True): """ Compute the feature correlation and sort feature pairs based on their correlation :param df: The dataframe with the predictor variables :type df: pandas.core.frame.DataFrame :param top_n: Top N feature pairs to be reported (if None, all of the pairs will be returned) :param corr_method: Correlation compuation method :type corr_method: str :param remove_duplicates: Indicates whether duplicate features must be removed :type remove_duplicates: bool :param remove_self_correlations: Indicates whether self correlations will be removed :type remove_self_correlations: bool :return: pandas.core.frame.DataFrame """ corr_matrix_abs = df.corr(method=corr_method).abs() corr_matrix_abs_us = corr_matrix_abs.unstack() sorted_correlated_features = corr_matrix_abs_us \ .sort_values(kind="quicksort", ascending=False) \ .reset_index() # Remove comparisons of the same feature if remove_self_correlations: sorted_correlated_features = sorted_correlated_features[ (sorted_correlated_features.level_0 != sorted_correlated_features.level_1) ] # Remove duplicates if remove_duplicates: sorted_correlated_features = sorted_correlated_features.iloc[:-2:2] # Create meaningful names for the columns sorted_correlated_features.columns = ['Feature 1', 'Feature 2', 'Correlation (abs)'] if top_n: return sorted_correlated_features[:top_n] return sorted_correlated_features A: I didn't want to unstack or over-complicate this issue, since I just wanted to drop some highly correlated features as part of a feature selection phase. So I ended up with the following simplified solution: # map features to their absolute correlation values corr = features.corr().abs() # set equality (self correlation) as zero corr[corr == 1] = 0 # of each feature, find the max correlation # and sort the resulting array in ascending order corr_cols = corr.max().sort_values(ascending=False) # display the highly correlated features display(corr_cols[corr_cols > 0.8]) In this case, if you want to drop correlated features, you may map through the filtered corr_cols array and remove the odd-indexed (or even-indexed) ones. A: I was trying some of the solutions here but then I actually came up with my own one. I hope this might be useful for the next one so I share it here: def sort_correlation_matrix(correlation_matrix): cor = correlation_matrix.abs() top_col = cor[cor.columns[0]][1:] top_col = top_col.sort_values(ascending=False) ordered_columns = [cor.columns[0]] + top_col.index.tolist() return correlation_matrix[ordered_columns].reindex(ordered_columns) A: This is a improve code from @MiFi. This one order in abs but not excluding the negative values. 
def top_correlation (df,n): corr_matrix = df.corr() correlation = (corr_matrix.where(np.triu(np.ones(corr_matrix.shape), k=1).astype(np.bool)) .stack() .sort_values(ascending=False)) correlation = pd.DataFrame(correlation).reset_index() correlation.columns=["Variable_1","Variable_2","Correlacion"] correlation = correlation.reindex(correlation.Correlacion.abs().sort_values(ascending=False).index).reset_index().drop(["index"],axis=1) return correlation.head(n) top_correlation(ANYDATA,10) A: simple is better from collections import defaultdict res = defaultdict(dict) corr = returns.corr().replace(1, -1) names = list(corr) for name in names: idx = corr[name].argmax() max_pairwise_name = names[idx] res[name][max_pairwise_name] = corr.loc[max_pairwise_name, name] Now res contains the maximum pairwise correlation for each pair
List Highest Correlation Pairs from a Large Correlation Matrix in Pandas?
How do you find the top correlations in a correlation matrix with Pandas? There are many answers on how to do this with R (Show correlations as an ordered list, not as a large matrix or Efficient way to get highly correlated pairs from large data set in Python or R), but I am wondering how to do it with pandas? In my case the matrix is 4460x4460, so can't do it visually.
[ "You can use DataFrame.values to get an numpy array of the data and then use NumPy functions such as argsort() to get the most correlated pairs. \nBut if you want to do this in pandas, you can unstack and sort the DataFrame:\nimport pandas as pd\nimport numpy as np\n\nshape = (50, 4460)\n\ndata = np.random.normal(size=shape)\n\ndata[:, 1000] += data[:, 2000]\n\ndf = pd.DataFrame(data)\n\nc = df.corr().abs()\n\ns = c.unstack()\nso = s.sort_values(kind=\"quicksort\")\n\nprint so[-4470:-4460]\n\nHere is the output:\n2192 1522 0.636198\n1522 2192 0.636198\n3677 2027 0.641817\n2027 3677 0.641817\n242 130 0.646760\n130 242 0.646760\n1171 2733 0.670048\n2733 1171 0.670048\n1000 2000 0.742340\n2000 1000 0.742340\ndtype: float64\n\n", "@HYRY's answer is perfect. Just building on that answer by adding a bit more logic to avoid duplicate and self correlations and proper sorting:\nimport pandas as pd\nd = {'x1': [1, 4, 4, 5, 6], \n 'x2': [0, 0, 8, 2, 4], \n 'x3': [2, 8, 8, 10, 12], \n 'x4': [-1, -4, -4, -4, -5]}\ndf = pd.DataFrame(data = d)\nprint(\"Data Frame\")\nprint(df)\nprint()\n\nprint(\"Correlation Matrix\")\nprint(df.corr())\nprint()\n\ndef get_redundant_pairs(df):\n '''Get diagonal and lower triangular pairs of correlation matrix'''\n pairs_to_drop = set()\n cols = df.columns\n for i in range(0, df.shape[1]):\n for j in range(0, i+1):\n pairs_to_drop.add((cols[i], cols[j]))\n return pairs_to_drop\n\ndef get_top_abs_correlations(df, n=5):\n au_corr = df.corr().abs().unstack()\n labels_to_drop = get_redundant_pairs(df)\n au_corr = au_corr.drop(labels=labels_to_drop).sort_values(ascending=False)\n return au_corr[0:n]\n\nprint(\"Top Absolute Correlations\")\nprint(get_top_abs_correlations(df, 3))\n\nThat gives the following output:\nData Frame\n x1 x2 x3 x4\n0 1 0 2 -1\n1 4 0 8 -4\n2 4 8 8 -4\n3 5 2 10 -4\n4 6 4 12 -5\n\nCorrelation Matrix\n x1 x2 x3 x4\nx1 1.000000 0.399298 1.000000 -0.969248\nx2 0.399298 1.000000 0.399298 -0.472866\nx3 1.000000 0.399298 1.000000 -0.969248\nx4 -0.969248 -0.472866 -0.969248 1.000000\n\nTop Absolute Correlations\nx1 x3 1.000000\nx3 x4 0.969248\nx1 x4 0.969248\ndtype: float64\n\n", "Few lines solution without redundant pairs of variables:\ncorr_matrix = df.corr().abs()\n\n#the matrix is symmetric so we need to extract upper triangle matrix without diagonal (k = 1)\n\nsol = (corr_matrix.where(np.triu(np.ones(corr_matrix.shape), k=1).astype(bool))\n .stack()\n .sort_values(ascending=False))\n\n#first element of sol series is the pair with the biggest correlation\n\nThen you can iterate through names of variables pairs (which are pandas.Series multi-indexes) and theirs values like this:\nfor index, value in sol.items():\n # do some staff\n\n", "Combining some features of @HYRY and @arun's answers, you can print the top correlations for dataframe df in a single line using:\ndf.corr().unstack().sort_values().drop_duplicates()\n\nNote: the one downside is if you have 1.0 correlations that are not one variable to itself, the drop_duplicates() addition would remove them\n", "I liked Addison Klinke's post the most, as being the simplest, but used Wojciech Moszczyńsk’s suggestion for filtering and charting, but extended the filter to avoid absolute values, so given a large correlation matrix, filter it, chart it, and then flatten it:\nCreated, Filtered and Charted\ndfCorr = df.corr()\nfilteredDf = dfCorr[((dfCorr >= .5) | (dfCorr <= -.5)) & (dfCorr !=1.000)]\nplt.figure(figsize=(30,10))\nsn.heatmap(filteredDf, annot=True, cmap=\"Reds\")\nplt.show()\n\n\nFunction\nIn the 
end, I created a small function to create the correlation matrix, filter it, and then flatten it. As an idea, it could easily be extended, e.g., asymmetric upper and lower bounds, etc.\ndef corrFilter(x: pd.DataFrame, bound: float):\n xCorr = x.corr()\n xFiltered = xCorr[((xCorr >= bound) | (xCorr <= -bound)) & (xCorr !=1.000)]\n xFlattened = xFiltered.unstack().sort_values().drop_duplicates()\n return xFlattened\n\ncorrFilter(df, .7)\n\n\nFollow-Up\nEventually, I refined the functions\n# Returns correlation matrix\ndef corrFilter(x: pd.DataFrame, bound: float):\n xCorr = x.corr()\n xFiltered = xCorr[((xCorr >= bound) | (xCorr <= -bound)) & (xCorr !=1.000)]\n return xFiltered\n\n# flattens correlation matrix with bounds\ndef corrFilterFlattened(x: pd.DataFrame, bound: float):\n xFiltered = corrFilter(x, bound)\n xFlattened = xFiltered.unstack().sort_values().drop_duplicates()\n return xFlattened\n\n# Returns correlation for a variable from flattened correlation matrix\ndef filterForLabels(df: pd.DataFrame, label): \n try:\n sideLeft = df[label,]\n except:\n sideLeft = pd.DataFrame()\n\n try:\n sideRight = df[:,label]\n except:\n sideRight = pd.DataFrame()\n\n if sideLeft.empty and sideRight.empty:\n return pd.DataFrame()\n elif sideLeft.empty: \n concat = sideRight.to_frame()\n concat.rename(columns={0:'Corr'},inplace=True)\n return concat\n elif sideRight.empty:\n concat = sideLeft.to_frame()\n concat.rename(columns={0:'Corr'},inplace=True)\n return concat\n else:\n concat = pd.concat([sideLeft,sideRight], axis=1)\n concat[\"Corr\"] = concat[0].fillna(0) + concat[1].fillna(0)\n concat.drop(columns=[0,1], inplace=True)\n return concat\n\n", "You can do graphically according to this simple code by substituting your data.\ncorr = df.corr()\n\nkot = corr[corr>=.9]\nplt.figure(figsize=(12,8))\nsns.heatmap(kot, cmap=\"Greens\")\n\n\n", "Use the code below to view the correlations in the descending order.\n# See the correlations in descending order\n\ncorr = df.corr() # df is the pandas dataframe\nc1 = corr.abs().unstack()\nc1.sort_values(ascending = False)\n\n", "Lot's of good answers here. The easiest way I found was a combination of some of the answers above. \ncorr = corr.where(np.triu(np.ones(corr.shape), k=1).astype(np.bool))\ncorr = corr.unstack().transpose()\\\n .sort_values(by='column', ascending=False)\\\n .dropna()\n\n", "Combining most the answers above into a short snippet:\ndef top_entries(df):\n mat = df.corr().abs()\n \n # Remove duplicate and identity entries\n mat.loc[:,:] = np.tril(mat.values, k=-1)\n mat = mat[mat>0]\n\n # Unstack, sort ascending, and reset the index, so features are in columns\n # instead of indexes (allowing e.g. a pretty print in Jupyter).\n # Also rename these it for good measure.\n return (mat.unstack()\n .sort_values(ascending=False)\n .reset_index()\n .rename(columns={\n \"level_0\": \"feature_a\",\n \"level_1\": \"feature_b\",\n 0: \"correlation\"\n }))\n\n", "Use itertools.combinations to get all unique correlations from pandas own correlation matrix .corr(), generate list of lists and feed it back into a DataFrame in order to use '.sort_values'. 
Set ascending = True to display lowest correlations on top \ncorrank takes a DataFrame as argument because it requires .corr().\n def corrank(X: pandas.DataFrame):\n import itertools\n df = pd.DataFrame([[(i,j),X.corr().loc[i,j]] for i,j in list(itertools.combinations(X.corr(), 2))],columns=['pairs','corr']) \n print(df.sort_values(by='corr',ascending=False))\n\n corrank(X) # prints a descending list of correlation pair (Max on top)\n\n", "The following function should do the trick. This implementation\n\nRemoves self correlations\nRemoves duplicates\nEnables the selection of top N highest correlated features\n\nand it is also configurable so that you can keep both the self correlations as well as the duplicates. You can also to report as many feature pairs as you wish. \n\ndef get_feature_correlation(df, top_n=None, corr_method='spearman',\n remove_duplicates=True, remove_self_correlations=True):\n \"\"\"\n Compute the feature correlation and sort feature pairs based on their correlation\n\n :param df: The dataframe with the predictor variables\n :type df: pandas.core.frame.DataFrame\n :param top_n: Top N feature pairs to be reported (if None, all of the pairs will be returned)\n :param corr_method: Correlation compuation method\n :type corr_method: str\n :param remove_duplicates: Indicates whether duplicate features must be removed\n :type remove_duplicates: bool\n :param remove_self_correlations: Indicates whether self correlations will be removed\n :type remove_self_correlations: bool\n\n :return: pandas.core.frame.DataFrame\n \"\"\"\n corr_matrix_abs = df.corr(method=corr_method).abs()\n corr_matrix_abs_us = corr_matrix_abs.unstack()\n sorted_correlated_features = corr_matrix_abs_us \\\n .sort_values(kind=\"quicksort\", ascending=False) \\\n .reset_index()\n\n # Remove comparisons of the same feature\n if remove_self_correlations:\n sorted_correlated_features = sorted_correlated_features[\n (sorted_correlated_features.level_0 != sorted_correlated_features.level_1)\n ]\n\n # Remove duplicates\n if remove_duplicates:\n sorted_correlated_features = sorted_correlated_features.iloc[:-2:2]\n\n # Create meaningful names for the columns\n sorted_correlated_features.columns = ['Feature 1', 'Feature 2', 'Correlation (abs)']\n\n if top_n:\n return sorted_correlated_features[:top_n]\n\n return sorted_correlated_features\n\n\n", "I didn't want to unstack or over-complicate this issue, since I just wanted to drop some highly correlated features as part of a feature selection phase.\nSo I ended up with the following simplified solution:\n# map features to their absolute correlation values\ncorr = features.corr().abs()\n\n# set equality (self correlation) as zero\ncorr[corr == 1] = 0\n\n# of each feature, find the max correlation\n# and sort the resulting array in ascending order\ncorr_cols = corr.max().sort_values(ascending=False)\n\n# display the highly correlated features\ndisplay(corr_cols[corr_cols > 0.8])\n\nIn this case, if you want to drop correlated features, you may map through the filtered corr_cols array and remove the odd-indexed (or even-indexed) ones.\n", "I was trying some of the solutions here but then I actually came up with my own one. 
I hope this might be useful for the next one so I share it here:\ndef sort_correlation_matrix(correlation_matrix):\n cor = correlation_matrix.abs()\n top_col = cor[cor.columns[0]][1:]\n top_col = top_col.sort_values(ascending=False)\n ordered_columns = [cor.columns[0]] + top_col.index.tolist()\n return correlation_matrix[ordered_columns].reindex(ordered_columns)\n\n", "This is a improve code from @MiFi. This one order in abs but not excluding the negative values.\n def top_correlation (df,n):\n corr_matrix = df.corr()\n correlation = (corr_matrix.where(np.triu(np.ones(corr_matrix.shape), k=1).astype(np.bool))\n .stack()\n .sort_values(ascending=False))\n correlation = pd.DataFrame(correlation).reset_index()\n correlation.columns=[\"Variable_1\",\"Variable_2\",\"Correlacion\"]\n correlation = correlation.reindex(correlation.Correlacion.abs().sort_values(ascending=False).index).reset_index().drop([\"index\"],axis=1)\n return correlation.head(n)\n\ntop_correlation(ANYDATA,10)\n\n", "simple is better\nfrom collections import defaultdict\nres = defaultdict(dict)\ncorr = returns.corr().replace(1, -1)\nnames = list(corr)\n\nfor name in names:\n idx = corr[name].argmax()\n max_pairwise_name = names[idx]\n res[name][max_pairwise_name] = corr.loc[max_pairwise_name, name]\n\nNow res contains the maximum pairwise correlation for each pair\n" ]
[ 128, 71, 60, 29, 20, 14, 13, 3, 3, 2, 2, 1, 0, 0, 0 ]
[]
[]
[ "correlation", "pandas", "python" ]
stackoverflow_0017778394_correlation_pandas_python.txt
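Several answers above use np.bool, which newer NumPy releases have removed; here is a compact, hedged restatement of the upper-triangle idiom that runs on current pandas/NumPy. The random data is only illustrative.

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.normal(size=(50, 6)), columns=list("abcdef"))
corr = df.corr().abs()

# strict upper triangle (k=1 drops the self-correlation diagonal)
mask = np.triu(np.ones(corr.shape, dtype=bool), k=1)
top_pairs = corr.where(mask).stack().sort_values(ascending=False)
print(top_pairs.head(5))  # highest-correlation pairs, no duplicates or self-pairs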
Q: pynetdicom on_association_requested() get request type (C_FIND, C_ECHO etc) I'm not able to find the variable that holds the Message Type in event name, "on_association_requested" and "on_association_released" methods. If we give event.event.name it results in "EVT_REQUESTED" or "EVT_RELEASED". INCOMING DIMSE MESSAGE D: Message Type : C-ECHO RQ D: Presentation Context ID : 1 D: Message ID : 1 D: Data Set : None END OF DIMSE MESSAGE I tried to get the Message Type, I was not able to find how. A: There is no DIMSE message type during association request. Only after an association has already been through request and acceptance are DIMSE messages allowed to be sent.
pynetdicom on_association_requested() get request type (C_FIND, C_ECHO etc)
I'm not able to find the variable that holds the Message Type in event name, "on_association_requested" and "on_association_released" methods. If we give event.event.name it results in "EVT_REQUESTED" or "EVT_RELEASED". INCOMING DIMSE MESSAGE D: Message Type : C-ECHO RQ D: Presentation Context ID : 1 D: Message ID : 1 D: Data Set : None END OF DIMSE MESSAGE I tried to get the Message Type, I was not able to find how.
[ "There is no DIMSE message type during association request. Only after an association has already been through request and acceptance are DIMSE messages allowed to be sent.\n" ]
[ 0 ]
[]
[]
[ "pynetdicom", "python" ]
stackoverflow_0074611115_pynetdicom_python.txt
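As the answer notes, DIMSE message types only exist after association; in the event-driven API that replaced on_association_* (pynetdicom 1.5+), the message type is implied by which event fires. A hedged sketch only; the port choice is a placeholder and the handler return value follows pynetdicom's documented verification example.

from pynetdicom import AE, evt, VerificationPresentationContexts

def handle_echo(event):
    # fires only for a C-ECHO-RQ, i.e. after the association is established
    return 0x0000  # Success

ae = AE()
ae.supported_contexts = VerificationPresentationContexts
handlers = [(evt.EVT_C_ECHO, handle_echo)]
# ae.start_server(("127.0.0.1", 11112), block=True, evt_handlers=handlers)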
Q: How can a pandas merge preserve order? I have two DataFrames in pandas, trying to merge them. But pandas keeps changing the order. I've tried setting indexes, resetting them, no matter what I do, I can't get the returned output to have the rows in the same order. Is there a trick? Note we start out with the loans order 'a,b,c' but after the merge, it's "a,c,b". import pandas loans = [ 'a', 'b', 'c' ] states = [ 'OR', 'CA', 'OR' ] x = pandas.DataFrame({ 'loan' : loans, 'state' : states }) y = pandas.DataFrame({ 'state' : [ 'CA', 'OR' ], 'value' : [ 1, 2]}) z = x.merge(y, how='left', on='state') But now the order is no longer the original 'a,b,c'. Any ideas? I'm using pandas version 11. A: Hopefully someone will provide a better answer, but in case no one does, this will definitely work, so… Zeroth, I'm assuming you don't want to just end up sorted on loan, but to preserve whatever original order was in x, which may or may not have anything to do with the order of the loan column. (Otherwise, the problem is easier, and less interesting.) First, you're asking it to sort based on the join keys. As the docs explain, that's the default when you don't pass a sort argument. Second, if you don't sort based on the join keys, the rows will end up grouped together, such that two rows that merged from the same source row end up next to each other, which means you're still going to get a, c, b. You can work around this by getting the rows grouped together in the order they appear in the original x by just merging again with x (on either side, it doesn't really matter), or by reindexing based on x if you prefer. Like this: x.merge(x.merge(y, how='left', on='state', sort=False)) Alternatively, you can cram an x-index in there with reset_index, then just sort on that, like this: x.reset_index().merge(y, how='left', on='state', sort=False).sort('index') Either way obviously seems a bit wasteful, and clumsy… so, as I said, hopefully there's a better answer that I'm just not seeing at the moment. But if not, that works. A: I might have a much more simple solution: df_z = df_x.join(df_y.set_index('state'), on = 'state') Hope it helps A: The fastest way I've found to merge and restore order - if you are merging "left" - is to include the original order as a column in the left dataframe before merging, then use that to restore the order after merging: import pandas loans = [ 'a', 'b', 'c' ] states = [ 'OR', 'CA', 'OR' ] x = pandas.DataFrame({ 'loan' : loans, 'state' : states }) y = pandas.DataFrame({ 'state' : [ 'CA', 'OR' ], 'value' : [ 1, 2]}) import numpy as np x["Order"] = np.arange(len(x)) z = x.merge(y, how='left', on='state').set_index("Order").ix[np.arange(len(x)), :] This method is faster than sorting. Here it is as a function: def mergeLeftInOrder(x, y, on=None): x = x.copy() x["Order"] = np.arange(len(x)) z = x.merge(y, how='left', on=on).set_index("Order").ix[np.arange(len(x)), :] return z A: Pandas has a merge_ordered function, so your solution is now as simple as: z = pd.merge_ordered(x, y, on='state') A: I tried the following and it does preserve the original order of loans: z = pandas.merge(x, y, on='state', how='left') I hope it helps! Please do let me know if there are any drawbacks of my method. Thanks.
How can a pandas merge preserve order?
I have two DataFrames in pandas, trying to merge them. But pandas keeps changing the order. I've tried setting indexes, resetting them, no matter what I do, I can't get the returned output to have the rows in the same order. Is there a trick? Note we start out with the loans order 'a,b,c' but after the merge, it's "a,c,b". import pandas loans = [ 'a', 'b', 'c' ] states = [ 'OR', 'CA', 'OR' ] x = pandas.DataFrame({ 'loan' : loans, 'state' : states }) y = pandas.DataFrame({ 'state' : [ 'CA', 'OR' ], 'value' : [ 1, 2]}) z = x.merge(y, how='left', on='state') But now the order is no longer the original 'a,b,c'. Any ideas? I'm using pandas version 11.
[ "Hopefully someone will provide a better answer, but in case no one does, this will definitely work, so…\nZeroth, I'm assuming you don't want to just end up sorted on loan, but to preserve whatever original order was in x, which may or may not have anything to do with the order of the loan column. (Otherwise, the problem is easier, and less interesting.)\nFirst, you're asking it to sort based on the join keys. As the docs explain, that's the default when you don't pass a sort argument.\n\nSecond, if you don't sort based on the join keys, the rows will end up grouped together, such that two rows that merged from the same source row end up next to each other, which means you're still going to get a, c, b.\nYou can work around this by getting the rows grouped together in the order they appear in the original x by just merging again with x (on either side, it doesn't really matter), or by reindexing based on x if you prefer. Like this:\nx.merge(x.merge(y, how='left', on='state', sort=False))\n\n\nAlternatively, you can cram an x-index in there with reset_index, then just sort on that, like this:\nx.reset_index().merge(y, how='left', on='state', sort=False).sort('index')\n\n\nEither way obviously seems a bit wasteful, and clumsy… so, as I said, hopefully there's a better answer that I'm just not seeing at the moment. But if not, that works.\n", "I might have a much more simple solution:\ndf_z = df_x.join(df_y.set_index('state'), on = 'state')\n\nHope it helps\n", "The fastest way I've found to merge and restore order - if you are merging \"left\" - is to include the original order as a column in the left dataframe before merging, then use that to restore the order after merging:\nimport pandas\nloans = [ 'a', 'b', 'c' ]\nstates = [ 'OR', 'CA', 'OR' ]\nx = pandas.DataFrame({ 'loan' : loans, 'state' : states })\ny = pandas.DataFrame({ 'state' : [ 'CA', 'OR' ], 'value' : [ 1, 2]})\n\nimport numpy as np\nx[\"Order\"] = np.arange(len(x))\n\nz = x.merge(y, how='left', on='state').set_index(\"Order\").ix[np.arange(len(x)), :]\n\nThis method is faster than sorting. Here it is as a function:\ndef mergeLeftInOrder(x, y, on=None):\n x = x.copy()\n x[\"Order\"] = np.arange(len(x))\n z = x.merge(y, how='left', on=on).set_index(\"Order\").ix[np.arange(len(x)), :]\n return z\n\n", "Pandas has a merge_ordered function, so your solution is now as simple as:\nz = pd.merge_ordered(x, y, on='state')\n\n", "I tried the following and it does preserve the original order of loans:\nz = pandas.merge(x, y, on='state', how='left')\n\nI hope it helps! Please do let me know if there are any drawbacks of my method. Thanks.\n" ]
[ 27, 6, 4, 4, 0 ]
[ "Use pd.merge_ordered(), documentation here. \nFor your example,\nz = pd.merge_ordered(x, y, how='left', on='state')\n\nEDIT: Just wanted to point out that default behavior for this function is an outer merge, different from the default behavior of the more common .merge()\n" ]
[ -3 ]
[ "pandas", "python" ]
stackoverflow_0020206615_pandas_python.txt
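Two answers above rely on .ix and DataFrame.sort, both removed from modern pandas; a hedged rewrite of the order-preserving left-merge helper using current APIs.

import numpy as np
import pandas as pd

def merge_left_in_order(x: pd.DataFrame, y: pd.DataFrame, on=None) -> pd.DataFrame:
    # remember the original row order, merge, then restore it
    x = x.assign(_order=np.arange(len(x)))
    z = x.merge(y, how="left", on=on, sort=False)
    return z.sort_values("_order").drop(columns="_order").reset_index(drop=True)

x = pd.DataFrame({"loan": ["a", "b", "c"], "state": ["OR", "CA", "OR"]})
y = pd.DataFrame({"state": ["CA", "OR"], "value": [1, 2]})
print(merge_left_in_order(x, y, on="state"))  # rows stay in a, b, c order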
Q: Recursion, Fib Numbers On a call to fib(10), how many times is fib(4) computed? I can't seem to figure this out, could anyone help? def fib ( n ): if n < 3: return 1 else: return fib(n-1) + fib(n-2) Trying to figure out how many times fib(4) is computed. A: Set T(n) = the times fib(n) call fib(4) We know that T(4)=1, T(5)=1 T(n) = T(n-1)+T(n-2) So T(6) = T(5) + T(4) = 2 T(7) = T(6) + T(5) = 3 T(8) = T(7) + T(6) = 5 T(9) = T(8) + T(7) = 8 T(10) = T(9) + T(8) = 13 Also you can make some changes in your code a = 0 def fib ( n ): if(n==4): global a a=a+1 print(a) if n < 3: return 1 else: return fib(n-1) + fib(n-2) fib(10) A: You can add a counter to the function def fib (n, cntr = None): if cntr is None: cntr = {} cntr[n] = cntr.get(n, 0) + 1 # update count of current argumenet if n < 3: return 1 else: return fib(n-1, cntr) + fib(n-2, cntr) # mutate cntr in recursive calls Test cntr = {} # Initialize counter print(fib(10, cntr)) # Calculate fib(10) # Output: 55 print(cntr[4]) # get count of number of times fib(4) called # Output: 13
Recursion, Fib Numbers
On a call to fib(10), how many times is fib(4) computed? I can't seem to figure this out, could anyone help? def fib ( n ): if n < 3: return 1 else: return fib(n-1) + fib(n-2) Trying to figure out how many times fib(4) is computed.
[ "Set T(n) = the times fib(n) call fib(4)\nWe know that\nT(4)=1, T(5)=1\n\nT(n) = T(n-1)+T(n-2)\n\nSo\nT(6) = T(5) + T(4) = 2\nT(7) = T(6) + T(5) = 3\nT(8) = T(7) + T(6) = 5\nT(9) = T(8) + T(7) = 8\nT(10) = T(9) + T(8) = 13\n\nAlso you can make some changes in your code\na = 0\n\ndef fib ( n ):\n if(n==4):\n global a\n a=a+1\n print(a)\n\n if n < 3:\n return 1\n\n else:\n return fib(n-1) + fib(n-2)\n\nfib(10)\n\n", "You can add a counter to the function\ndef fib (n, cntr = None):\n if cntr is None:\n cntr = {}\n cntr[n] = cntr.get(n, 0) + 1 # update count of current argumenet\n \n if n < 3:\n return 1\n else:\n return fib(n-1, cntr) + fib(n-2, cntr) # mutate cntr in recursive calls\n\nTest\ncntr = {} # Initialize counter\nprint(fib(10, cntr)) # Calculate fib(10)\n# Output: 55\n\nprint(cntr[4]) # get count of number of times fib(4) called\n# Output: 13\n\n" ]
[ 1, 0 ]
[]
[]
[ "fibonacci", "python", "recursion" ]
stackoverflow_0074636393_fibonacci_python_recursion.txt
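A compact standalone check of the hand count above, tallying every argument the naive recursion sees; it confirms fib(4) is computed 13 times during fib(10).

from collections import defaultdict

calls = defaultdict(int)

def fib(n):
    calls[n] += 1  # tally how often each argument is evaluated
    return 1 if n < 3 else fib(n - 1) + fib(n - 2)

fib(10)
print(calls[4])  # 13, matching T(n) = T(n-1) + T(n-2) with T(4) = T(5) = 1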
Q: Get nearest low and high values across multiple dataframe columns I have a dataframe similar to this: import pandas as pd id = [1001, 1002, 1003] a = [156, 224, 67] b = [131, 203, 61] c = [97, 165, 54] d = [68, 122, 50] value = [71, 180, 66] df = pd.DataFrame({'id':id, 'a':a, 'b':b, 'c':c, 'd':d, 'value':value}) id a b c d value 1001 156 131 97 68 71 1002 224 203 165 122 180 1003 67 61 54 50 66 For each row, I would like to evaluate columns a-d and within them identify the next lowest and next highest values, as compared to value. So in this example, the expected result would look like: id a b c d value nxt_low nxt_high 1001 156 131 97 68 71 68 97 1002 224 203 165 122 180 165 203 1003 67 61 54 50 66 61 67 I have tried creating a single column with a numpy array from a-d and trying to do some operations that way, but I'm not applying it correctly and have been unable to get the desired result. Any help is greatly appreciated. A: you can get nearest low following code: df.apply(lambda x: x[x < x[-1]].max(), axis=1) output: 0 68 1 165 2 61 dtype: int64 get nearest low and high and make result to columns: df[['nxt_low', 'nxt_high']] = df.apply(lambda x: [x[x < x[-1]].max(), x[x > x[-1]].min()], axis=1, result_type='expand') df: id a b c d value nxt_low nxt_high 0 1001 156 131 97 68 71 68 97 1 1002 224 203 165 122 180 165 203 2 1003 67 61 54 50 66 61 67 If id is nearest low or high, modify code a bit. df[['nxt_low', 'nxt_high']] = df.iloc[:, 1:].apply(lambda x: [x[x < x[-1]].max(), x[x > x[-1]].min()], axis=1, result_type='expand') A: Here is a way: cols = ['a','b','c','d'] df2 = df[cols].sub(df['value'],axis=0) df = (df.assign(nxt_low = df.where(df2.lt(0)).max(axis=1), nxt_high = df.where(df2.gt(0)).min(axis=1))) Output: id a b c d value nxt_low nxt_high 0 1001 156 131 97 68 71 68.0 97.0 1 1002 224 203 165 122 180 165.0 203.0 2 1003 67 61 54 50 66 61.0 67.0
Get nearest low and high values across multiple dataframe columns
I have a dataframe similar to this: import pandas as pd id = [1001, 1002, 1003] a = [156, 224, 67] b = [131, 203, 61] c = [97, 165, 54] d = [68, 122, 50] value = [71, 180, 66] df = pd.DataFrame({'id':id, 'a':a, 'b':b, 'c':c, 'd':d, 'value':value}) id a b c d value 1001 156 131 97 68 71 1002 224 203 165 122 180 1003 67 61 54 50 66 For each row, I would like to evaluate columns a-d and within them identify the next lowest and next highest values, as compared to value. So in this example, the expected result would look like: id a b c d value nxt_low nxt_high 1001 156 131 97 68 71 68 97 1002 224 203 165 122 180 165 203 1003 67 61 54 50 66 61 67 I have tried creating a single column with a numpy array from a-d and trying to do some operations that way, but I'm not applying it correctly and have been unable to get the desired result. Any help is greatly appreciated.
[ "you can get nearest low following code:\ndf.apply(lambda x: x[x < x[-1]].max(), axis=1)\n\noutput:\n0 68\n1 165\n2 61\ndtype: int64\n\nget nearest low and high and make result to columns:\ndf[['nxt_low', 'nxt_high']] = df.apply(lambda x: [x[x < x[-1]].max(), x[x > x[-1]].min()], axis=1, result_type='expand')\n\ndf:\n id a b c d value nxt_low nxt_high\n0 1001 156 131 97 68 71 68 97\n1 1002 224 203 165 122 180 165 203\n2 1003 67 61 54 50 66 61 67\n\n\nIf id is nearest low or high, modify code a bit.\ndf[['nxt_low', 'nxt_high']] = df.iloc[:, 1:].apply(lambda x: [x[x < x[-1]].max(), x[x > x[-1]].min()], axis=1, result_type='expand')\n\n", "Here is a way:\ncols = ['a','b','c','d']\ndf2 = df[cols].sub(df['value'],axis=0)\ndf = (df.assign(nxt_low = df.where(df2.lt(0)).max(axis=1),\nnxt_high = df.where(df2.gt(0)).min(axis=1)))\n\nOutput:\n id a b c d value nxt_low nxt_high\n0 1001 156 131 97 68 71 68.0 97.0\n1 1002 224 203 165 122 180 165.0 203.0\n2 1003 67 61 54 50 66 61.0 67.0\n\n" ]
[ 2, 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074636203_dataframe_pandas_python.txt
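A vectorized alternative sketch to the row-wise apply above, using NumPy masking; it assumes every row has at least one value below and one above 'value', as in the sample data (otherwise np.nanmax/np.nanmin warn and yield NaN).

import numpy as np
import pandas as pd

df = pd.DataFrame({"id": [1001, 1002, 1003],
                   "a": [156, 224, 67], "b": [131, 203, 61],
                   "c": [97, 165, 54], "d": [68, 122, 50],
                   "value": [71, 180, 66]})

vals = df[["a", "b", "c", "d"]].to_numpy(dtype=float)
target = df["value"].to_numpy()[:, None]

# mask out the wrong side with NaN, then take row-wise extrema
df["nxt_low"] = np.nanmax(np.where(vals < target, vals, np.nan), axis=1)
df["nxt_high"] = np.nanmin(np.where(vals > target, vals, np.nan), axis=1)
print(df[["id", "value", "nxt_low", "nxt_high"]])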
Q: how to make a discord.py bot not accepts commands from dms How do I make a discord.py bot not react to commands from the bot's DMs? I only want the bot to respond to messages if they are on a specific channel on a specific server. A: If you wanted to only respond to messages on a specific channel and you know the name of the channel, you could do this: channel = discord.utils.get(ctx.guild.channels, name="channel name") channel_id = channel.id Then you would check if the id matched the one channel you wanted it to be in. To get a channel or server's id, you need to enable discord developer mode. After than you could just right click on the server or channel and copy the id. To get a server's id you need to add this piece of code as a command: @client.command(pass_context=True) async def getguild(ctx): id = ctx.message.guild.id # the guild is the server # do something with the id (print it out) After you get the server id, you can delete the method. And to check if a message is sent by a person or a bot, you could do this in the on_message method: def on_message(self, message): if (message.author.bot): # is a bot pass A: So just to make the bot not respond to DMs, add this code after each command: if message.guild: # Message comes from a server. else: # Message comes from a DM. This makes it better to separate DM from server messages. You just now have to move the "await message.channel.send" function. A: I assume that you are asking for a bot that only listens to your commands. Well, in that case, you can create a check to see if the message is sent by you or not. It can be done using, @client.event async def on_message(message): if message.author.id == <#your user id>: await message.channel.send('message detected') ...#your code A: You Can Use Simplest And Best Way @bot.command() async def check(ctx): if not isinstance(ctx.channel, discord.channel.DMChannel): Your Work...
how to make a discord.py bot not accepts commands from dms
How do I make a discord.py bot not react to commands from the bot's DMs? I only want the bot to respond to messages if they are on a specific channel on a specific server.
[ "If you wanted to only respond to messages on a specific channel and you know the name of the channel, you could do this:\nchannel = discord.utils.get(ctx.guild.channels, name=\"channel name\")\nchannel_id = channel.id\n\nThen you would check if the id matched the one channel you wanted it to be in. To get a channel or server's id, you need to enable discord developer mode. After than you could just right click on the server or channel and copy the id.\nTo get a server's id you need to add this piece of code as a command:\[email protected](pass_context=True)\nasync def getguild(ctx):\n id = ctx.message.guild.id # the guild is the server\n # do something with the id (print it out)\n\nAfter you get the server id, you can delete the method.\nAnd to check if a message is sent by a person or a bot, you could do this in the on_message method:\ndef on_message(self, message):\n if (message.author.bot):\n # is a bot\n pass\n\n\n", "So just to make the bot not respond to DMs, add this code after each command:\nif message.guild:\n # Message comes from a server.\nelse:\n # Message comes from a DM.\n\nThis makes it better to separate DM from server messages. You just now have to move the \"await message.channel.send\" function.\n", "I assume that you are asking for a bot that only listens to your commands. Well, in that case, you can create a check to see if the message is sent by you or not. It can be done using,\[email protected]\nasync def on_message(message):\nif message.author.id == <#your user id>:\n await message.channel.send('message detected')\n ...#your code\n\n", "You Can Use Simplest And Best Way\[email protected]()\nasync def check(ctx):\n if not isinstance(ctx.channel, discord.channel.DMChannel):\n Your Work...\n\n" ]
[ 1, 1, 0, 0 ]
[]
[]
[ "discord", "discord.py", "python" ]
stackoverflow_0072954461_discord_discord.py_python.txt
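The channel and guild tests above can be centralized in one global check; a hedged sketch for a recent discord.py, where the channel ID and intents choices are placeholders, not values from the question.

import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True  # needed for prefix commands in discord.py 2.x
bot = commands.Bot(command_prefix="!", intents=intents)

ALLOWED_CHANNEL_ID = 123456789012345678  # placeholder ID of the one permitted channel

@bot.check
async def only_in_allowed_channel(ctx):
    # ctx.guild is None for DMs; also pin commands to a single server channel
    return ctx.guild is not None and ctx.channel.id == ALLOWED_CHANNEL_ID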
Q: How to format a dictionary with a list of items in Python I have a python dictionary that I want to look like this: {"name": "BOB", "item1": { "item name": "bread", "quantity of item ": 10, "price of item": "3.00" }, "item2": { "item name": "milk", "quantity of item ": 15, "price of item": "9.00" } } currently it looks like this {"name": "BOB", "item1": {"item name": "bread", "quantity of item ": 10, "price of item": "3.00"}, "item2": {"item name": "milk", "quantity of item ": 15, "price of item": "9.00"}} The list of items can be different and does not have a fixed amount of items, so I would also need to to know how to do that I have tried to add new lines in the dictionaries but it would not work and it would just put '\n' in to my dictionary A: If you're trying to json.dump() it into a JSON file, using the json.dump() function you could pass in the indent argument for indentation (appears to be what you want) You can read more about it here An example: json.dump(jsonData, jsonFile, indent=2) # indentation is usually in spaces, so 2 would mean 2 spaces indentation, but you can replace it with '\t' for tabs All this does is add indentation to the file to make it easier to read
How to format a dictionary with a list of items in Python
I have a Python dictionary that I want to look like this:
{"name": "BOB",
"item1": {
    "item name": "bread",
    "quantity of item ": 10,
    "price of item": "3.00"
    },
"item2": {
    "item name": "milk",
    "quantity of item ": 15,
    "price of item": "9.00"
    }
}

Currently it looks like this:
{"name": "BOB", "item1": {"item name": "bread", "quantity of item ": 10, "price of item": "3.00"}, "item2": {"item name": "milk", "quantity of item ": 15, "price of item": "9.00"}}

The list of items can be different and does not have a fixed amount of items, so I would also need to know how to handle that.
I have tried to add new lines in the dictionaries, but it would not work and would just put '\n' into my dictionary.
[ "If you're trying to json.dump() it into a JSON file, using the json.dump() function you could pass in the indent argument for indentation (appears to be what you want) You can read more about it here\nAn example:\njson.dump(jsonData, jsonFile, indent=2) # indentation is usually in spaces, so 2 would mean 2 spaces indentation, but you can replace it with '\\t' for tabs\n\nAll this does is add indentation to the file to make it easier to read\n" ]
[ 1 ]
[]
[]
[ "dictionary", "python", "python_re" ]
stackoverflow_0074636613_dictionary_python_python_re.txt
Q: Error: InvalidSignature when trying to connect to SP-API amazon
I've been trying to connect to the Amazon API for a week now. I've got stuck on this error and, after reading the docs several times, I can't figure out what the problem is. Here is my code:
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Important

The AWS SDKs sign API requests for you using the access key that you specify when you
configure the SDK. When you use an SDK, you don’t need to learn how to sign API requests.
We recommend that you use the AWS SDKs to send API requests, instead of writing your own code.

The following example is a reference to help you get started if you have a need to write
your own code to send and sign requests. The example is for reference only and is not
maintained as functional code.
"""

# AWS Version 4 signing example

# EC2 API (DescribeRegions)

# See: http://docs.aws.amazon.com/general/latest/gr/sigv4_signing.html
# This version makes a GET request and passes the signature
# in the Authorization header.
import sys, os, base64, datetime, hashlib, hmac
import requests # pip install requests

# ************* REQUEST VALUES *************
method = 'GET'
service = 'execute-api'
host = 'sellingpartnerapi-na.amazon.com'
region = 'us-east-1'
endpoint = 'https://sellingpartnerapi-na.amazon.com'
request_parameters = 'Action=ListOrders&MarketplaceId=ATVPDKIKX0DER&Version=0'

#service = 'ec2'
#host = 'ec2.amazonaws.com'
#region = 'us-east-1'
#endpoint = 'https://ec2.amazonaws.com'
#request_parameters = 'Action=DescribeRegions&Version=2013-10-15'

# Key derivation functions. See:
# http://docs.aws.amazon.com/general/latest/gr/signature-v4-examples.html#signature-v4-examples-python
def sign(key, msg):
    return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()

def getSignatureKey(key, dateStamp, regionName, serviceName):
    kDate = sign(('AWS4' + key).encode('utf-8'), dateStamp)
    kRegion = sign(kDate, regionName)
    kService = sign(kRegion, serviceName)
    kSigning = sign(kService, 'aws4_request')
    return kSigning

# Read AWS access key from env. variables or configuration file. Best practice is NOT
# to embed credentials in code.
access_key = 'AKIEXAMPLE'
secret_key = 'SECRETEXAMPLE'
if access_key is None or secret_key is None:
    print('No access key is available.')
    sys.exit()

# Create a date for headers and the credential string
t = datetime.datetime.utcnow()
amzdate = t.strftime('%Y%m%dT%H%M%SZ')
datestamp = t.strftime('%Y%m%d') # Date w/o time, used in credential scope

# ************* TASK 1: CREATE A CANONICAL REQUEST *************
# http://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html

# Step 1 is to define the verb (GET, POST, etc.)--already done.

# Step 2: Create canonical URI--the part of the URI from domain to query
# string (use '/' if no path)
canonical_uri = '/orders/v0/orders'

# Step 3: Create the canonical query string. In this example (a GET request),
# request parameters are in the query string. Query string values must
# be URL-encoded (space=%20). The parameters must be sorted by name.
# For this example, the query string is pre-formatted in the request_parameters variable.
canonical_querystring = request_parameters

# Step 4: Create the canonical headers and signed headers. Header names
# must be trimmed and lowercase, and sorted in code point order from
# low to high. Note that there is a trailing \n.
canonical_headers = 'host:' + host + '\n' + 'x-amz-date:' + amzdate + '\n'

# Step 5: Create the list of signed headers. This lists the headers
# in the canonical_headers list, delimited with ";" and in alpha order.
# Note: The request can include any headers; canonical_headers and
# signed_headers lists those that you want to be included in the
# hash of the request. "Host" and "x-amz-date" are always required.
signed_headers = 'host;x-amz-date'

# Step 6: Create payload hash (hash of the request body content). For GET
# requests, the payload is an empty string ("").
payload_hash = hashlib.sha256(('').encode('utf-8')).hexdigest()

# Step 7: Combine elements to create canonical request
canonical_request = method + '\n' + canonical_uri + '\n' + canonical_querystring + '\n' + canonical_headers + '\n' + signed_headers + '\n' + payload_hash

# ************* TASK 2: CREATE THE STRING TO SIGN *************
# Match the algorithm to the hashing algorithm you use, either SHA-1 or
# SHA-256 (recommended)
algorithm = 'AWS4-HMAC-SHA256'
credential_scope = datestamp + '/' + region + '/' + service + '/' + 'aws4_request'
string_to_sign = algorithm + '\n' + amzdate + '\n' + credential_scope + '\n' + hashlib.sha256(canonical_request.encode('utf-8')).hexdigest()

# ************* TASK 3: CALCULATE THE SIGNATURE *************
# Create the signing key using the function defined above.
signing_key = getSignatureKey(secret_key, datestamp, region, service)

# Sign the string_to_sign using the signing_key
signature = hmac.new(signing_key, (string_to_sign).encode('utf-8'), hashlib.sha256).hexdigest()

# ************* TASK 4: ADD SIGNING INFORMATION TO THE REQUEST *************
# The signing information can be either in a query string value or in
# a header named Authorization. This code shows how to use a header.
# Create authorization header and add to request headers
authorization_header = algorithm + ' ' + 'Credential=' + access_key + '/' + credential_scope + ', ' + 'SignedHeaders=' + signed_headers + ', ' + 'Signature=' + signature

# The request can include any headers, but MUST include "host", "x-amz-date",
# and (for this scenario) "Authorization". "host" and "x-amz-date" must
# be included in the canonical_headers and signed_headers, as noted
# earlier. Order here is not significant.
# Python note: The 'host' header is added automatically by the Python 'requests' library.
headers = {'x-amz-date':amzdate, 'Authorization':authorization_header}

# ************* SEND THE REQUEST *************
request_url = endpoint + '?' + canonical_querystring

print('\nBEGIN REQUEST++++++++++++++++++++++++++++++++++++')
print('Request URL = ' + request_url)
r = requests.get(request_url, headers=headers)

print('\nRESPONSE++++++++++++++++++++++++++++++++++++')
print('Response code: %d\n' % r.status_code)
print(r.text)

My application is originally built in Java, but since I've got the same error in the Python code sample from Amazon, I'm trying to make it work first in Python.
It's also interesting that if I uncomment the code:
#service = 'ec2'
#host = 'ec2.amazonaws.com'
#region = 'us-east-1'
#endpoint = 'https://ec2.amazonaws.com'
#request_parameters = 'Action=DescribeRegions&Version=2013-10-15'

it works, but if I use my own endpoints it doesn't. I've checked everything and tried a lot of things; any idea why this is happening?
Thanks in advance for your time.
The full error msg:
{
    "errors": [
        {
            "message": "The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
            "code": "InvalidSignature"
        }
    ]
}

A: This is another solution without boto3, using only requests:
import hashlib
import hmac
import json
import logging
import sys, datetime
from collections import OrderedDict
from urllib.parse import urlencode

import requests
from bs4 import BeautifulSoup

def get_session_token_from_xml(content):
    soup = BeautifulSoup(content, "xml")
    return soup.find('SessionToken').text, soup.find('AccessKeyId').text, soup.find('SecretAccessKey').text

def set_params(action_):
    logging.info(f"Setting params according to action {action_}")
    params = dict()
    if action_ == 'AssumeRole':
        params['Version'] = '2011-06-15'
        params['Action'] = action_
        params['RoleSessionName'] = <<ROLE NAME>>
        params['RoleArn'] = <<ROLE ARN>>
        params['DurationSeconds'] = '3600'
    elif action_ == 'orders':
        params['MarketplaceIds'] = <<MARKET PLACE>>
        params['LastUpdatedAfter'] = '2022-11-27T14:00:00Z'
        params['LastUpdatedBefore'] = '2022-11-27T16:00:00Z'
    else:
        raise Exception("Action is not implemented.")
    return params

def _get_access_token(lwa_app_id, lwa_client_secret, refresh_token):
    url = "https://api.amazon.com/auth/O2/token"
    payload = f'client_id={lwa_app_id}&client_secret={lwa_client_secret}&refresh_token={refresh_token}&grant_type=refresh_token'
    headers = {
        'Host': 'api.amazon.com',
        'Content-Type': 'application/x-www-form-urlencoded',
    }
    response = requests.request("POST", url, headers=headers, data=payload)
    return response

def format_params_to_create_signature(params_to_format_):
    """
    URL encodes the parameter names and values.
    https://docs.developer.amazonservices.com/en_US/dev_guide/DG_QueryString.html
    :param params_to_format_: dict. Parameters that should be ordered in natural byte order
        and url encoded.
    :return: str.
    """
    logging.info("Format params.")
    params_in_order = OrderedDict(sorted(params_to_format_.items()))
    params_formatted = urlencode(params_in_order, doseq=True)
    return params_formatted

def sign(key, msg):
    return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()

def getSignatureKey(key, dateStamp, regionName, serviceName):
    kDate = sign(('AWS4' + key).encode('utf-8'), dateStamp)
    kRegion = sign(kDate, regionName)
    kService = sign(kRegion, serviceName)
    kSigning = sign(kService, 'aws4_request')
    return kSigning

def _get_signature_request(action, access_key, secret_key, service, host, region, endpoint,
                           method: str = 'GET', access_token: str = None, security_token: str = None):
    # ************* REQUEST VALUES *************
    params = set_params(action)
    request_parameters = format_params_to_create_signature(params)

    # Read AWS access key from env. variables or configuration file. Best practice is NOT
    # to embed credentials in code.
    if access_key is None or secret_key is None:
        raise Exception("Access key or secret key are not implemented.")

    # Create a date for headers and the credential string
    t = datetime.datetime.utcnow()
    amzdate = t.strftime('%Y%m%dT%H%M%SZ')
    datestamp = t.strftime('%Y%m%d')  # Date w/o time, used in credential scope

    # ************* TASK 1: CREATE A CANONICAL REQUEST *************
    # http://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html

    # Step 1 is to define the verb (GET, POST, etc.)--already done.

    # Step 2: Create canonical URI--the part of the URI from domain to query
    # string (use '/' if no path)
    if action == 'AssumeRole':
        canonical_uri = '/'
    else:
        canonical_uri = '/orders/v0/orders'

    # Step 3: Create the canonical query string. In this example (a GET request),
    # request parameters are in the query string. Query string values must
    # be URL-encoded (space=%20). The parameters must be sorted by name.
    canonical_querystring = request_parameters

    # Steps 4 and 5: Create the canonical headers and the list of signed headers.
    # Header names must be trimmed and lowercase, and sorted in code point order
    # from low to high. "Host" and "x-amz-date" are always required.
    if action == 'AssumeRole':
        canonical_headers = 'host:' + host + '\n' + 'x-amz-date:' + amzdate + '\n'
        signed_headers = 'host;x-amz-date'
    else:
        canonical_headers = 'host:' + host + '\n' + 'x-amz-access-token:' + \
            access_token + '\n' + 'x-amz-date:' + amzdate + '\n' + 'x-amz-security-token:' + \
            security_token + '\n'
        signed_headers = 'host;x-amz-access-token;x-amz-date;x-amz-security-token'

    # Step 6: Create payload hash (hash of the request body content). For GET
    # requests, the payload is an empty string ("").
    payload_hash = hashlib.sha256(('').encode('utf-8')).hexdigest()

    # Step 7: Combine elements to create canonical request
    canonical_request = method + '\n' + canonical_uri + '\n' + canonical_querystring + '\n' + \
        canonical_headers + '\n' + signed_headers + '\n' + payload_hash

    # ************* TASK 2: CREATE THE STRING TO SIGN *************
    algorithm = 'AWS4-HMAC-SHA256'
    credential_scope = datestamp + '/' + region + '/' + service + '/' + 'aws4_request'
    string_to_sign = algorithm + '\n' + amzdate + '\n' + credential_scope + '\n' + \
        hashlib.sha256(canonical_request.encode('utf-8')).hexdigest()

    # ************* TASK 3: CALCULATE THE SIGNATURE *************
    signing_key = getSignatureKey(secret_key, datestamp, region, service)
    signature = hmac.new(signing_key, (string_to_sign).encode('utf-8'), hashlib.sha256).hexdigest()

    # ************* TASK 4: ADD SIGNING INFORMATION TO THE REQUEST *************
    # The signing information can be either in a query string value or in a header
    # named Authorization. This code shows how to use a header.
    authorization_header = algorithm + ' ' + 'Credential=' + access_key + '/' + credential_scope + ', ' + \
        'SignedHeaders=' + signed_headers + ', ' + 'Signature=' + signature
    # Python note: The 'host' header is added automatically by the Python 'requests' library.
    if action == 'AssumeRole':
        headers = {'x-amz-date': amzdate, 'Authorization': authorization_header}
    else:
        headers = {
            'authorization': authorization_header,
            'host': host,
            'x-amz-access-token': access_token,
            'x-amz-date': amzdate,
            'x-amz-security-token': security_token
        }

    # ************* SEND THE REQUEST *************
    request_url = endpoint + '?' + canonical_querystring
    logging.info("BEGIN REQUEST++++++++++++++++++++++++++++++++++++")
    logging.info(f"Request URL = {request_url}")
    r = requests.get(request_url, headers=headers)

    logging.info('\nRESPONSE++++++++++++++++++++++++++++++++++++')
    logging.info('Response code: %d\n' % r.status_code)
    return r

The way to run it is the following:
service = 'sts'
host = 'sts.amazonaws.com'
region = 'us-east-1'
endpoint = 'https://sts.amazonaws.com'
response = _get_signature_request('AssumeRole', access_key, secret_key, service, host, region, endpoint)
access_token = json.loads(_get_access_token(lwa_app_id, lwa_client_secret, refresh_token).content)['access_token']
tmp_session_token_, tmp_access_key, tmp_secret_access_key = get_session_token_from_xml(response.content.decode('utf-8'))

After that you will have the temporary session token, the temporary access key and the temporary secret key. Finally, getting all the orders is done with the following code:
service = 'execute-api'
host = 'sellingpartnerapi-na.amazon.com'
region = 'us-east-1'
endpoint = 'https://sellingpartnerapi-na.amazon.com/orders/v0/orders'
response = _get_signature_request('orders', tmp_access_key, tmp_secret_access_key, service, host, region, endpoint,
                                  access_token=access_token, security_token=tmp_session_token_)

A: After some research and testing, I modified the Python app, and it works!
Before reading the code below, READ THIS.
You must execute pip install boto3 to make it work. Here are the docs: https://pypi.org/project/boto3/
I'm putting the credentials in a raw dict instead of following the boto3 docs structure because it was just for testing. If you want to test it with the code, just replace the credentials dict values.
Notice that it is working with the sandbox environment and the getOrders endpoint, and you must specify your own RoleSessionName.
Here is the code:
# AWS Version 4 signing example
# See: http://docs.aws.amazon.com/general/latest/gr/sigv4_signing.html
# This version makes a GET request and passes the signature
# in the Authorization header.
import sys, os, base64, datetime, hashlib, hmac
import requests # pip install requests
import boto3

credentials = {
    'lwa_refresh_token': 'whatever',
    'lwa_client_secret': 'whatever',
    'lwa_client_id': 'whatever',
    'aws_secret_access_key': 'whatever',
    'aws_access_key': 'whatever',
    'role_arn': 'whatever:role/whatever'
}

# get Access Token and assign to 'x-amz-access-token'
response = requests.post('https://api.amazon.com/auth/o2/token',
    headers={'Content-Type': 'application/x-www-form-urlencoded'},
    data={
        'grant_type': 'refresh_token',
        'refresh_token': credentials['lwa_refresh_token'],
        'client_id': credentials['lwa_client_id'],
        'client_secret': credentials['lwa_client_secret']
    }
)
credentials['x-amz-access-token'] = response.json()['access_token']

# get AWS STS Session Token and assign to 'x-amz-security-token'
sts_client = boto3.client(
    'sts',
    aws_access_key_id=credentials['aws_access_key'],
    aws_secret_access_key=credentials['aws_secret_access_key']
)

assumed_role_object = sts_client.assume_role(
    RoleArn=credentials['role_arn'],
    RoleSessionName="whatever role session name you got"
)
credentials['x-amz-security-token'] = assumed_role_object['Credentials']['SessionToken']
credentials['aws_access_key'] = assumed_role_object['Credentials']['AccessKeyId']
credentials['aws_secret_access_key'] = assumed_role_object['Credentials']['SecretAccessKey']

# ************* REQUEST VALUES *************
method = 'GET'
service = 'execute-api'
host = 'sandbox.sellingpartnerapi-na.amazon.com'
region = 'us-east-1'
endpoint = 'https://sandbox.sellingpartnerapi-na.amazon.com/orders/v0/orders'
request_parameters = 'CreatedAfter=TEST_CASE_200&MarketplaceIds=ATVPDKIKX0DER'

# Key derivation functions (same as in the question's sample).
def sign(key, msg):
    return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()

def getSignatureKey(key, dateStamp, regionName, serviceName):
    kDate = sign(('AWS4' + key).encode('utf-8'), dateStamp)
    kRegion = sign(kDate, regionName)
    kService = sign(kRegion, serviceName)
    kSigning = sign(kService, 'aws4_request')
    return kSigning

access_key = credentials['aws_access_key']
# Shouldn't this be secret_access_key rather than the security token?
secret_key = credentials['aws_secret_access_key']
if access_key is None or secret_key is None:
    print('No access key is available.')
    sys.exit()

# Create a date for headers and the credential string
t = datetime.datetime.utcnow()
amzdate = t.strftime('%Y%m%dT%H%M%SZ')
datestamp = t.strftime('%Y%m%d') # Date w/o time, used in credential scope

# ************* TASK 1: CREATE A CANONICAL REQUEST *************
# (same steps as in the question's sample, with the extra SP-API headers signed)
canonical_uri = '/orders/v0/orders'
canonical_querystring = request_parameters
canonical_headers = 'host:' + host + '\n' + 'user-agent:' + 'Ladder data ingestion' + '\n' + \
    'x-amz-access-token:' + credentials['x-amz-access-token'] + '\n' + 'x-amz-date:' + amzdate + '\n' + \
    'x-amz-security-token:' + credentials['x-amz-security-token'] + '\n'
signed_headers = 'host;user-agent;x-amz-access-token;x-amz-date;x-amz-security-token'
payload_hash = hashlib.sha256(('').encode('utf-8')).hexdigest()
canonical_request = method + '\n' + canonical_uri + '\n' + canonical_querystring + '\n' + \
    canonical_headers + '\n' + signed_headers + '\n' + payload_hash
print("My Canonical String:")
print(canonical_request + '\n')

# ************* TASK 2: CREATE THE STRING TO SIGN *************
algorithm = 'AWS4-HMAC-SHA256'
credential_scope = datestamp + '/' + region + '/' + service + '/' + 'aws4_request'
string_to_sign = algorithm + '\n' + amzdate + '\n' + credential_scope + '\n' + \
    hashlib.sha256(canonical_request.encode('utf-8')).hexdigest()
print("My String to Sign")
print(string_to_sign + '\n')

# ************* TASK 3: CALCULATE THE SIGNATURE *************
signing_key = getSignatureKey(secret_key, datestamp, region, service)
signature = hmac.new(signing_key, (string_to_sign).encode('utf-8'), hashlib.sha256).hexdigest()

# ************* TASK 4: ADD SIGNING INFORMATION TO THE REQUEST *************
authorization_header = algorithm + ' ' + 'Credential=' + access_key + '/' + credential_scope + ', ' + \
    'SignedHeaders=' + signed_headers + ', ' + 'Signature=' + signature
headers = {
    'authorization': authorization_header,
    'host': host,
    'user-agent': 'Ladder data ingestion',
    'x-amz-access-token': credentials['x-amz-access-token'],
    'x-amz-date': amzdate,
    'x-amz-security-token': credentials['x-amz-security-token']
}

# ************* SEND THE REQUEST *************
request_url = endpoint + '?' + canonical_querystring

print('\nBEGIN REQUEST++++++++++++++++++++++++++++++++++++')
print('Request URL = ' + request_url)
r = requests.get(request_url, headers=headers)

print('\nRESPONSE++++++++++++++++++++++++++++++++++++')
print('Response code: %d\n' % r.status_code)
print(r.text)

If you replace your credentials in the code and it doesn't work, you may need to regenerate them. You can leave a comment here or open a new question and link it in a comment, so I can check it.
I've also made a request to the getOrder() endpoint; let me know if you have any problem pointing to the sandbox.
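For comparison, a much shorter sketch that delegates the SigV4 computation to botocore (installed alongside boto3) instead of hand-rolling it. The credential values and the LWA access token below are placeholders; you would obtain them exactly as in the answers above (LWA token exchange plus sts.assume_role):

import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest
from botocore.credentials import Credentials

# Temporary credentials returned by sts.assume_role (placeholders).
credentials = Credentials(
    access_key="TEMP_ACCESS_KEY",
    secret_key="TEMP_SECRET_KEY",
    token="TEMP_SESSION_TOKEN",
)

url = ("https://sellingpartnerapi-na.amazon.com/orders/v0/orders"
       "?CreatedAfter=2022-11-01T00:00:00Z&MarketplaceIds=ATVPDKIKX0DER")

# Build the request, attach the LWA token, and let botocore compute the
# canonical request, string-to-sign, and Authorization header.
aws_request = AWSRequest(method="GET", url=url,
                         headers={"x-amz-access-token": "LWA_ACCESS_TOKEN"})
SigV4Auth(credentials, "execute-api", "us-east-1").add_auth(aws_request)

response = requests.get(url, headers=dict(aws_request.headers.items()))
print(response.status_code, response.text)

Because botocore sorts and encodes the query string and headers itself, this removes the most common sources of the InvalidSignature error in hand-written signing code.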
Error: InvalidSignature when trying to connect to SP-API amazon
I've been trying to connect to the Amazon API for a week now. I've got stuck on this error and, after reading the docs several times, I can't figure out what the problem is. Here is my code:
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Important

The AWS SDKs sign API requests for you using the access key that you specify when you
configure the SDK. When you use an SDK, you don’t need to learn how to sign API requests.
We recommend that you use the AWS SDKs to send API requests, instead of writing your own code.

The following example is a reference to help you get started if you have a need to write
your own code to send and sign requests. The example is for reference only and is not
maintained as functional code.
"""

# AWS Version 4 signing example

# EC2 API (DescribeRegions)

# See: http://docs.aws.amazon.com/general/latest/gr/sigv4_signing.html
# This version makes a GET request and passes the signature
# in the Authorization header.
import sys, os, base64, datetime, hashlib, hmac
import requests # pip install requests

# ************* REQUEST VALUES *************
method = 'GET'
service = 'execute-api'
host = 'sellingpartnerapi-na.amazon.com'
region = 'us-east-1'
endpoint = 'https://sellingpartnerapi-na.amazon.com'
request_parameters = 'Action=ListOrders&MarketplaceId=ATVPDKIKX0DER&Version=0'

#service = 'ec2'
#host = 'ec2.amazonaws.com'
#region = 'us-east-1'
#endpoint = 'https://ec2.amazonaws.com'
#request_parameters = 'Action=DescribeRegions&Version=2013-10-15'

# Key derivation functions. See:
# http://docs.aws.amazon.com/general/latest/gr/signature-v4-examples.html#signature-v4-examples-python
def sign(key, msg):
    return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()

def getSignatureKey(key, dateStamp, regionName, serviceName):
    kDate = sign(('AWS4' + key).encode('utf-8'), dateStamp)
    kRegion = sign(kDate, regionName)
    kService = sign(kRegion, serviceName)
    kSigning = sign(kService, 'aws4_request')
    return kSigning

# Read AWS access key from env. variables or configuration file. Best practice is NOT
# to embed credentials in code.
access_key = 'AKIEXAMPLE'
secret_key = 'SECRETEXAMPLE'
if access_key is None or secret_key is None:
    print('No access key is available.')
    sys.exit()

# Create a date for headers and the credential string
t = datetime.datetime.utcnow()
amzdate = t.strftime('%Y%m%dT%H%M%SZ')
datestamp = t.strftime('%Y%m%d') # Date w/o time, used in credential scope

# ************* TASK 1: CREATE A CANONICAL REQUEST *************
# http://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html

# Step 1 is to define the verb (GET, POST, etc.)--already done.

# Step 2: Create canonical URI--the part of the URI from domain to query
# string (use '/' if no path)
canonical_uri = '/orders/v0/orders'

# Step 3: Create the canonical query string. In this example (a GET request),
# request parameters are in the query string. Query string values must
# be URL-encoded (space=%20). The parameters must be sorted by name.
# For this example, the query string is pre-formatted in the request_parameters variable.
canonical_querystring = request_parameters

# Step 4: Create the canonical headers and signed headers. Header names
# must be trimmed and lowercase, and sorted in code point order from
# low to high. Note that there is a trailing \n.
canonical_headers = 'host:' + host + '\n' + 'x-amz-date:' + amzdate + '\n'

# Step 5: Create the list of signed headers. This lists the headers
# in the canonical_headers list, delimited with ";" and in alpha order.
# Note: The request can include any headers; canonical_headers and
# signed_headers lists those that you want to be included in the
# hash of the request. "Host" and "x-amz-date" are always required.
signed_headers = 'host;x-amz-date'

# Step 6: Create payload hash (hash of the request body content). For GET
# requests, the payload is an empty string ("").
payload_hash = hashlib.sha256(('').encode('utf-8')).hexdigest()

# Step 7: Combine elements to create canonical request
canonical_request = method + '\n' + canonical_uri + '\n' + canonical_querystring + '\n' + canonical_headers + '\n' + signed_headers + '\n' + payload_hash

# ************* TASK 2: CREATE THE STRING TO SIGN *************
# Match the algorithm to the hashing algorithm you use, either SHA-1 or
# SHA-256 (recommended)
algorithm = 'AWS4-HMAC-SHA256'
credential_scope = datestamp + '/' + region + '/' + service + '/' + 'aws4_request'
string_to_sign = algorithm + '\n' + amzdate + '\n' + credential_scope + '\n' + hashlib.sha256(canonical_request.encode('utf-8')).hexdigest()

# ************* TASK 3: CALCULATE THE SIGNATURE *************
# Create the signing key using the function defined above.
signing_key = getSignatureKey(secret_key, datestamp, region, service)

# Sign the string_to_sign using the signing_key
signature = hmac.new(signing_key, (string_to_sign).encode('utf-8'), hashlib.sha256).hexdigest()

# ************* TASK 4: ADD SIGNING INFORMATION TO THE REQUEST *************
# The signing information can be either in a query string value or in
# a header named Authorization. This code shows how to use a header.
# Create authorization header and add to request headers
authorization_header = algorithm + ' ' + 'Credential=' + access_key + '/' + credential_scope + ', ' + 'SignedHeaders=' + signed_headers + ', ' + 'Signature=' + signature

# The request can include any headers, but MUST include "host", "x-amz-date",
# and (for this scenario) "Authorization". "host" and "x-amz-date" must
# be included in the canonical_headers and signed_headers, as noted
# earlier. Order here is not significant.
# Python note: The 'host' header is added automatically by the Python 'requests' library.
headers = {'x-amz-date':amzdate, 'Authorization':authorization_header}

# ************* SEND THE REQUEST *************
request_url = endpoint + '?' + canonical_querystring

print('\nBEGIN REQUEST++++++++++++++++++++++++++++++++++++')
print('Request URL = ' + request_url)
r = requests.get(request_url, headers=headers)

print('\nRESPONSE++++++++++++++++++++++++++++++++++++')
print('Response code: %d\n' % r.status_code)
print(r.text)

My application is originally built in Java, but since I've got the same error in the Python code sample from Amazon, I'm trying to make it work first in Python.
It's also interesting that if I uncomment the code:
#service = 'ec2'
#host = 'ec2.amazonaws.com'
#region = 'us-east-1'
#endpoint = 'https://ec2.amazonaws.com'
#request_parameters = 'Action=DescribeRegions&Version=2013-10-15'

it works, but if I use my own endpoints it doesn't. I've checked everything and tried a lot of things; any idea why this is happening?
Thanks in advance for your time.
The full error msg:
{
    "errors": [
        {
            "message": "The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
            "code": "InvalidSignature"
        }
    ]
}
[ "This is another solution without boto3 and only using requests\nimport hashlib\nimport hmac\nimport logging\nfrom collections import OrderedDict\nfrom urllib.parse import urlencode\nimport defusedxml.ElementTree as ET\nfrom sdc_etl_libs.api_helpers.API import API\nimport sys, datetime, hashlib, hmac \nimport requests\nimport json\nfrom bs4 import BeautifulSoup\ndef get_session_token_from_xml(content):\n soup = BeautifulSoup(content, \"xml\")\n return soup.find('SessionToken').text, soup.find('AccessKeyId').text, soup.find('SecretAccessKey').text\n\ndef set_params(action_):\n\n logging.info(f\"Setting params according to action {action_}\")\n params = dict()\n if action_ == 'AssumeRole':\n params['Version'] = '2011-06-15'\n params['Action'] = action_\n params['RoleSessionName'] = <<ROLE NAME>>\n params['RoleArn'] = <<ROLE ARN>>\n params['DurationSeconds']='3600'\n elif action_ == 'orders':\n params['MarketplaceIds'] = <<MARKET PLACE>>\n params['LastUpdatedAfter'] = '2022-11-27T14:00:00Z'\n params['LastUpdatedBefore'] = '2022-11-27T16:00:00Z'\n else:\n raise Exception(\"Action is not implemented.\")\n return params\ndef _get_access_token(lwa_app_id, lwa_client_secret, refresh_token):\n url = \"https://api.amazon.com/auth/O2/token\"\n\n payload=f'client_id={lwa_app_id}&client_secret={lwa_client_secret}&refresh_token={refresh_token}&grant_type=refresh_token'\n headers = {\n 'Host': 'api.amazon.com',\n 'Content-Type': 'application/x-www-form-urlencoded',\n }\n\n response = requests.request(\"POST\", url, headers=headers, data=payload)\n\n return response\ndef format_params_to_create_signature(params_to_format_):\n \"\"\"\n URL encodes the parameter name and values\n https://docs.developer.amazonservices.com/en_US/dev_guide/DG_QueryString.html\n :param params_to_format_: dict. Parameters that should be ordered in natural byte order\n and url encoded.\n :return: str.\n \"\"\"\n logging.info(\"Format params.\")\n params_in_order = OrderedDict(sorted(params_to_format_.items()))\n params_formatted = urlencode(params_in_order, doseq=True)\n return params_formatted\n\ndef sign(key, msg):\n \n return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()\n\ndef getSignatureKey(key, dateStamp, regionName, serviceName):\n kDate = sign(('AWS4' + key).encode('utf-8'), dateStamp)\n kRegion = sign(kDate, regionName)\n kService = sign(kRegion, serviceName)\n kSigning = sign(kService, 'aws4_request')\n return kSigning\n\ndef _get_signature_request(action, access_key, secret_key, service, host, region, endpoint, \nmethod: str = 'GET', access_token: str = None, security_token: str = None):\n \n # ************* REQUEST VALUES *************\n params = set_params(action)\n fparams = format_params_to_create_signature(params)\n request_parameters = fparams\n\n # Read AWS access key from env. variables or configuration file. 
Best practice is NOT\n # to embed credentials in code.\n if access_key is None or secret_key is None:\n raise Exception(\"Access key or secret key are not implemented.\")\n\n # Create a date for headers and the credential string\n t = datetime.datetime.utcnow()\n amzdate = t.strftime('%Y%m%dT%H%M%SZ')\n datestamp = t.strftime('%Y%m%d') # Date w/o time, used in credential scope\n # ************* TASK 1: CREATE A CANONICAL REQUEST *************\n # http://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html\n\n # Step 1 is to define the verb (GET, POST, etc.)--already done.\n\n # Step 2: Create canonical URI--the part of the URI from domain to query \n # string (use '/' if no path)\n if action == 'AssumeRole':\n canonical_uri = '/' \n else:\n canonical_uri = '/orders/v0/orders' \n\n # Step 3: Create the canonical query string. In this example (a GET request),\n # request parameters are in the query string. Query string values must\n # be URL-encoded (space=%20). The parameters must be sorted by name.\n # For this example, the query string is pre-formatted in the request_parameters variable.\n canonical_querystring = request_parameters\n\n # Step 4: Create the canonical headers and signed headers. Header names\n # must be trimmed and lowercase, and sorted in code point order from\n # low to high. Note that there is a trailing \\n.\n\n # Step 5: Create the list of signed headers. This lists the headers\n # in the canonical_headers list, delimited with \";\" and in alpha order.\n # Note: The request can include any headers; canonical_headers and\n # signed_headers lists those that you want to be included in the \n # hash of the request. \"Host\" and \"x-amz-date\" are always required.\n if action == 'AssumeRole':\n canonical_headers = 'host:' + host + '\\n' + 'x-amz-date:' + amzdate + '\\n'\n\n signed_headers = 'host;x-amz-date'\n else:\n canonical_headers = 'host:' + host + '\\n' + 'x-amz-access-token:' + \\\n access_token + '\\n' + 'x-amz-date:' + amzdate + '\\n' + 'x-amz-security-token:' + \\\n security_token + '\\n'\n\n signed_headers = 'host;x-amz-access-token;x-amz-date;x-amz-security-token'\n \n\n # Step 6: Create payload hash (hash of the request body content). For GET\n # requests, the payload is an empty string (\"\").\n payload_hash = hashlib.sha256(('').encode('utf-8')).hexdigest()\n\n # Step 7: Combine elements to create canonical request\n canonical_request = method + '\\n' + canonical_uri + '\\n' + canonical_querystring + '\\n' + canonical_headers + '\\n' + \\\n signed_headers + '\\n' + payload_hash\n\n # ************* TASK 2: CREATE THE STRING TO SIGN*************\n # Match the algorithm to the hashing algorithm you use, either SHA-1 or\n # SHA-256 (recommended)\n algorithm = 'AWS4-HMAC-SHA256'\n credential_scope = datestamp + '/' + region + '/' + service + '/' + 'aws4_request'\n string_to_sign = algorithm + '\\n' + amzdate + '\\n' + credential_scope + '\\n' + \\\n hashlib.sha256(canonical_request.encode('utf-8')).hexdigest()\n\n # ************* TASK 3: CALCULATE THE SIGNATURE *************\n # Create the signing key using the function defined above.\n signing_key = getSignatureKey(secret_key, datestamp, region, service)\n # Sign the string_to_sign using the signing_key\n signature = hmac.new(signing_key, (string_to_sign).encode('utf-8'), hashlib.sha256).hexdigest()\n \n # ************* TASK 4: ADD SIGNING INFORMATION TO THE REQUEST *************\n # The signing information can be either in a query string value or in \n # a header named Authorization. 
This code shows how to use a header.\n # Create authorization header and add to request headers\n authorization_header = algorithm + ' ' + 'Credential=' + access_key + '/' + credential_scope + ', ' + \\\n 'SignedHeaders=' + signed_headers + ', ' + 'Signature=' + signature\n # The request can include any headers, but MUST include \"host\", \"x-amz-date\", \n # and (for this scenario) \"Authorization\". \"host\" and \"x-amz-date\" must\n # be included in the canonical_headers and signed_headers, as noted\n # earlier. Order here is not significant.\n # Python note: The 'host' header is added automatically by the Python 'requests' library.\n if action == 'AssumeRole':\n headers = {'x-amz-date':amzdate, 'Authorization':authorization_header}\n else:\n headers = {\n 'authorization': authorization_header,\n 'host': host,\n 'x-amz-access-token': access_token,\n 'x-amz-date': amzdate, \n 'x-amz-security-token': security_token\n }\n\n # ************* SEND THE REQUEST *************\n request_url = endpoint + '?' + canonical_querystring\n logging.info(f\"BEGIN REQUEST++++++++++++++++++++++++++++++++++++'\")\n logging.info(f\"Request URL = {request_url}\")\n r = requests.get(request_url, headers=headers)\n\n logging.info('\\nRESPONSE++++++++++++++++++++++++++++++++++++')\n logging.info('Response code: %d\\n' % r.status_code)\n\n return r\n\nthe way to run in is the following\nservice = 'sts'\nhost = 'sts.amazonaws.com'\nregion = 'us-east-1'\nendpoint = 'https://sts.amazonaws.com'\nresponse = _get_signature_session('AssumeRole', access_key, secret_key, service, host, region, endpoint)\naccess_token = json.loads(_get_access_token(lwa_app_id, lwa_client_secret, refresh_token).content)['access_token']\ntmp_session_token_, tmp_access_key, tmp_secret_access_key = get_session_token_from_xml(response.content.decode('utf-8'))\n\nafter that you will have the temporal session token, the temporal access key ath the temporal secret key. finally to get the all orders is in the following code\nservice = 'execute-api'\nhost = 'sellingpartnerapi-na.amazon.com'\nregion = 'us-east-1'\nendpoint = 'https://sellingpartnerapi-na.amazon.com/orders/v0/orders'\nresponse = _get_signature_session('orders', tmp_access_key, tmp_secret_access_key, service, host, region, endpoint,\n access_token = access_token, security_token = tmp_session_token_)\n\n", "After some research and testing, I modified the python app, and it works!\nBefore reading the code below, READ THIS.\nYou must execute pip install boto3 to make it work.\nHere are the docs: https://pypi.org/project/boto3/\nI'm putting the credentials in a raw dict instead of following the boto3 docs structure because it was just for testing. 
If you want to test it with the code, just replace the credentials dict values.\nNotice that it is working with the sandbox environment and getOrders endpoint, and you must specify your own RoleSessionName.\nHere is the code:\n\n# AWS Version 4 signing example\n\n# EC2 API (DescribeRegions)\n\n# See: http://docs.aws.amazon.com/general/latest/gr/sigv4_signing.html\n# This version makes a GET request and passes the signature\n# in the Authorization header.\nimport sys, os, base64, datetime, hashlib, hmac \nimport requests # pip install requests\nimport boto3\n\ncredentials = {\n \n 'lwa_refresh_token': 'whatever',\n 'lwa_client_secret': 'whatever',\n 'lwa_client_id': 'whatever',\n 'aws_secret_access_key': 'whatever',\n 'aws_access_key': 'whatever',\n 'role_arn': 'whatever:role/whatever'\n}\n\n\n# get Access Token and assign to 'x-amz-access-token'\nresponse = requests.post('https://api.amazon.com/auth/o2/token',\n headers={'Content-Type': 'application/x-www-form-urlencoded'},\n data={\n 'grant_type': 'refresh_token',\n 'refresh_token': credentials['lwa_refresh_token'],\n 'client_id': credentials['lwa_client_id'],\n 'client_secret': credentials['lwa_client_secret']\n }\n)\ncredentials['x-amz-access-token'] = response.json()['access_token']\n\n# get AWS STS Session Token and assign to 'x-amz-security-token'\nsts_client = boto3.client(\n 'sts',\n aws_access_key_id=credentials['aws_access_key'],\n aws_secret_access_key=credentials['aws_secret_access_key']\n)\n\nassumed_role_object=sts_client.assume_role(\n RoleArn=credentials['role_arn'],\n RoleSessionName=\"whatever role sesion name you got\"\n)\ncredentials['x-amz-security-token'] = assumed_role_object['Credentials']['SessionToken']\ncredentials['aws_access_key'] = assumed_role_object['Credentials']['AccessKeyId']\ncredentials['aws_secret_access_key'] = assumed_role_object['Credentials']['SecretAccessKey']\n\n# ************* REQUEST VALUES *************\nmethod = 'GET'\nservice = 'execute-api'\nhost = 'sandbox.sellingpartnerapi-na.amazon.com'\nregion = 'us-east-1'\nendpoint = 'https://sandbox.sellingpartnerapi-na.amazon.com/orders/v0/orders'\nrequest_parameters = 'CreatedAfter=TEST_CASE_200&MarketplaceIds=ATVPDKIKX0DER'\n\n# Key derivation functions. See:\n# http://docs.aws.amazon.com/general/latest/gr/signature-v4-examples.html#signature-v4-examples-python\ndef sign(key, msg):\n return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()\n\ndef getSignatureKey(key, dateStamp, regionName, serviceName):\n kDate = sign(('AWS4' + key).encode('utf-8'), dateStamp)\n kRegion = sign(kDate, regionName)\n kService = sign(kRegion, serviceName)\n kSigning = sign(kService, 'aws4_request')\n return kSigning\n\n# Read AWS access key from env. variables or configuration file. 
Best practice is NOT\n# to embed credentials in code.\naccess_key = credentials['aws_access_key']\n# No deberia de ser security-token, si no secret_access_key?¿\nsecret_key = credentials['aws_secret_access_key']\nif access_key is None or secret_key is None:\n print('No access key is available.')\n sys.exit()\n\n# Create a date for headers and the credential string\nt = datetime.datetime.utcnow()\namzdate = t.strftime('%Y%m%dT%H%M%SZ')\ndatestamp = t.strftime('%Y%m%d') # Date w/o time, used in credential scope\n\n\n# ************* TASK 1: CREATE A CANONICAL REQUEST *************\n# http://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html\n\n# Step 1 is to define the verb (GET, POST, etc.)--already done.\n\n# Step 2: Create canonical URI--the part of the URI from domain to query \n# string (use '/' if no path)\ncanonical_uri = '/orders/v0/orders' \n\n# Step 3: Create the canonical query string. In this example (a GET request),\n# request parameters are in the query string. Query string values must\n# be URL-encoded (space=%20). The parameters must be sorted by name.\n# For this example, the query string is pre-formatted in the request_parameters variable.\ncanonical_querystring = request_parameters\n\n# Step 4: Create the canonical headers and signed headers. Header names\n# must be trimmed and lowercase, and sorted in code point order from\n# low to high. Note that there is a trailing \\n.\ncanonical_headers = 'host:' + host + '\\n' + 'user-agent:' + 'Ladder data ingestion' + '\\n' + 'x-amz-access-token:' + credentials['x-amz-access-token'] + '\\n' + 'x-amz-date:' + amzdate + '\\n' + 'x-amz-security-token:' + credentials['x-amz-security-token'] + '\\n'\n \n# Step 5: Create the list of signed headers. This lists the headers\n# in the canonical_headers list, delimited with \";\" and in alpha order.\n# Note: The request can include any headers; canonical_headers and\n# signed_headers lists those that you want to be included in the \n# hash of the request. \"Host\" and \"x-amz-date\" are always required.\nsigned_headers = 'host;user-agent;x-amz-access-token;x-amz-date;x-amz-security-token'\n\n# Step 6: Create payload hash (hash of the request body content). 
For GET\n# requests, the payload is an empty string (\"\").\npayload_hash = hashlib.sha256(('').encode('utf-8')).hexdigest()\n\n# Step 7: Combine elements to create canonical request\ncanonical_request = method + '\\n' + canonical_uri + '\\n' + canonical_querystring + '\\n' + canonical_headers + '\\n' + signed_headers + '\\n' + payload_hash\nprint(\"My Canonical String:\")\nprint(canonical_request+'\\n')\n\n# ************* TASK 2: CREATE THE STRING TO SIGN*************\n# Match the algorithm to the hashing algorithm you use, either SHA-1 or\n# SHA-256 (recommended)\nalgorithm = 'AWS4-HMAC-SHA256'\ncredential_scope = datestamp + '/' + region + '/' + service + '/' + 'aws4_request'\nstring_to_sign = algorithm + '\\n' + amzdate + '\\n' + credential_scope + '\\n' + hashlib.sha256(canonical_request.encode('utf-8')).hexdigest()\nprint(\"My String to Sign\")\nprint(string_to_sign+'\\n')\n\n# ************* TASK 3: CALCULATE THE SIGNATURE *************\n# Create the signing key using the function defined above.\nsigning_key = getSignatureKey(secret_key, datestamp, region, service)\n\n# Sign the string_to_sign using the signing_key\nsignature = hmac.new(signing_key, (string_to_sign).encode('utf-8'), hashlib.sha256).hexdigest()\n\n\n# ************* TASK 4: ADD SIGNING INFORMATION TO THE REQUEST *************\n# The signing information can be either in a query string value or in \n# a header named Authorization. This code shows how to use a header.\n# Create authorization header and add to request headers\nauthorization_header = algorithm + ' ' + 'Credential=' + access_key + '/' + credential_scope + ', ' + 'SignedHeaders=' + signed_headers + ', ' + 'Signature=' + signature\n\n# The request can include any headers, but MUST include \"host\", \"x-amz-date\", \n# and (for this scenario) \"Authorization\". \"host\" and \"x-amz-date\" must\n# be included in the canonical_headers and signed_headers, as noted\n# earlier. Order here is not significant.\n# Python note: The 'host' header is added automatically by the Python 'requests' library.\nheaders = {\n 'authorization': authorization_header,\n 'host': host,\n 'user-agent': 'Ladder data ingestion',\n 'x-amz-access-token': credentials['x-amz-access-token'],\n 'x-amz-date': amzdate, \n 'x-amz-security-token': credentials['x-amz-security-token']\n}\n\n\n# ************* SEND THE REQUEST *************\nrequest_url = endpoint + '?' + canonical_querystring\n\nprint('\\nBEGIN REQUEST++++++++++++++++++++++++++++++++++++')\nprint('Request URL = ' + request_url)\nr = requests.get(request_url, headers=headers)\n\nprint('\\nRESPONSE++++++++++++++++++++++++++++++++++++')\nprint('Response code: %d\\n' % r.status_code)\nprint(r.text)\n\n\nIf you replace your credentials in the code, and it doesn't work, you may need to regenerate them. You can leave a comment here or open a new question and link it in a comment, so I can check it.\nI've also made a request to getOrder() endpoint, let me know if you have any problem pointing to sandbox.\n" ]
[ 1, 0 ]
[]
[]
[ "amazon_product_api", "api", "python" ]
stackoverflow_0074558880_amazon_product_api_api_python.txt
Q: what is this error for? 'NoneType' object has no attribute 'round'
cm = int(input("Write height in Centimeters:"))
inches = 0.394*cm
feet = 0.0328*cm
print(("The length in inches",round(inches,2))).round(inches,2)
print(("The length in feet",round(feet,2))).round(feet,2)

this is the code; it should convert cm to feet and inches, but there is an error
A: I'll first break down the error:
A NoneType object is basically the None object in Python. You could consider it to be basically an object with no value. If you're getting this error, it means you're trying to use the .round() method on something that either returns None, or a None value.
Now about your code:
You use the .round() method on a print() function, which returns None. So instead you should remove that and just have something like this:
print(("The length in inches",round(inches,2))) # Removed extra .round()
print(("The length in feet",round(feet,2)))

Full code:
cm = int(input("Write height in Centimeters:"))
inches = 0.394*cm
feet = 0.0328*cm
print(("The length in inches",round(inches,2)))
print(("The length in feet",round(feet,2)))

A: try this
cm = int(input("Write height in Centimeters:"))
inches = 0.394*cm
feet = 0.0328*cm
print(("The length in inches",round(inches,2)))
print(("The length in feet",round(feet,2)))

A: You are attempting to call a method called round() as a class method of print. print is not an object but a function which returns None, so you are attempting to call round() on None. Remove the .round(inches, 2) and .round(feet, 2) and your code should run as expected.
A: NoneType in Python is a data type that simply shows that an object has no value/has a value of None.
NoneType doesn't have a round attribute, as the error says.
In your code you are using the round attribute on the print statement:
print(("The length in inches",round(inches,2))).round(inches,2)

That is why Python gives you the error.
Your code should be:
print(("The length in inches",round(inches,2)))
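To see concretely why the attribute lookup fails, note that print() always returns None, so anything chained onto a print(...) call is an attribute lookup on None:

result = print("hello")        # print() writes to stdout and returns None
print(type(result))            # <class 'NoneType'>

inches = 39.4
print("The length in inches:", round(inches, 2))   # round() wraps the value, not the print
# print("oops").round(2)      # AttributeError: 'NoneType' object has no attribute 'round'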
what is this error for? 'NoneType' object has no attribute 'round'
cm = int(input("Write height in Centimeters:"))
inches = 0.394*cm
feet = 0.0328*cm
print(("The length in inches",round(inches,2))).round(inches,2)
print(("The length in feet",round(feet,2))).round(feet,2)

this is the code; it should convert cm to feet and inches, but there is an error
[ "I'll first break down the error:\nA NoneType object is basically the None object in Python. You could consider it to be basically an object with no value. If you're getting this error, it means you're trying to use the .round() method on something that either returns None, or a None value.\nNow about your code:\nYou use the .round() method on a print() function, which returns None. So instead you should remove that and just have something like this:\nprint((\"The length in inches\",round(inches,2))) # Removed extra .round()\nprint((\"The length in feet\",round(feet,2)))\n\nFull code:\ncm = int(input(\"Write height in Centimeters:\"))\ninches = 0.394*cm\nfeet = 0.0328*cm\nprint((\"The length in inches\",round(inches,2)))\nprint((\"The length in feet\",round(feet,2)))\n\n", "try this\ncm = int(input(\"Write height in Centimeters:\"))\ninches = 0.394*cm\nfeet = 0.0328*cm\nprint((\"The length in inches\",round(inches,2)))\nprint((\"The length in feet\",round(feet,2)))\n\n", "You are attempting to call a method called round() as a class method of print. Print is not an object but a function which returns None, so you are attempting to call round() on None. Remove the .round(inches, 2) and .round(feet, 2) and your code should run as expected.\n", "NoneType in Python is a data type that simply shows that an object has no value/has a value of None.\nNoneType does't have a round attribute as error said.\nin your code you are using round attribut in print statement:\nprint((\"The length in inches\",round(inches,2))).round(inches,2)\nthat is why python give you error.\nyour code should be:\nprint((\"The length in inches\",round(inches,2)))\n\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074636711_python.txt
Q: How can I use ParamSpec with method decorators?
I was following the example from PEP 0612 (last one in the Motivation section) to create a decorator that can add default parameters to a function. The problem is, the example provided only works for functions but not methods, because Concatenate doesn't allow inserting self anywhere in the definition.
Consider this example, as an adaptation of the one in the PEP:
def with_request(f: Callable[Concatenate[Request, P], R]) -> Callable[P, R]:
    def inner(*args: P.args, **kwargs: P.kwargs) -> R:
        return f(*args, request=Request(), **kwargs)
    return inner

class Thing:
    @with_request
    def takes_int_str(self, request: Request, x: int, y: str) -> int:
        print(request)
        return x + 7

thing = Thing()
thing.takes_int_str(1, "A")  # Invalid self argument "Thing" to attribute function "takes_int_str" with type "Callable[[str, int, str], int]"
thing.takes_int_str("B", 2)  # Argument 2 to "takes_int_str" of "Thing" has incompatible type "int"; expected "str"

Both attempts raise a mypy error because Request doesn't match self as the first argument of the method, as Concatenate said. The problem is that Concatenate doesn't allow you to append Request to the end, so something like Concatenate[P, Request] won't work either. This would be the ideal way to do it in my view, but it doesn't work because "The last parameter to Concatenate needs to be a ParamSpec".
def with_request(f: Callable[Concatenate[P, Request], R]) -> Callable[P, R]:
    ...

class Thing:
    @with_request
    def takes_int_str(self, x: int, y: str, request: Request) -> int:
        ...

Any ideas?
A: There is surprisingly little about this online. I was able to find someone else's discussion of this over at python/typing's Github, which I distilled using your example.
The crux of this solution is Callback Protocols, which are functionally equivalent to Callable, but additionally enable us to modify the return type of __get__ (essentially removing the self parameter) as is done for standard methods.
from __future__ import annotations

from typing import Any, Callable, Concatenate, Generic, ParamSpec, Protocol, TypeVar

from requests import Request

P = ParamSpec("P")
R = TypeVar("R", covariant=True)


class Method(Protocol, Generic[P, R]):
    def __get__(self, instance: Any, owner: type | None = None) -> Callable[P, R]:
        ...

    def __call__(self_, self: Any, *args: P.args, **kwargs: P.kwargs) -> R:
        ...


def request_wrapper(f: Callable[Concatenate[Any, Request, P], R]) -> Method[P, R]:
    def inner(self, *args: P.args, **kwargs: P.kwargs) -> R:
        return f(self, Request(), *args, **kwargs)

    return inner


class Thing:
    @request_wrapper
    def takes_int_str(self, request: Request, x: int, y: str) -> int:
        print(request)
        return x + 7


thing = Thing()
thing.takes_int_str(1, "a")
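A possible alternative sketch that sidesteps the callback protocol by giving self its own TypeVar inside Concatenate. Type checker support for binding a decorated Callable attribute as a method varies, so treat this as an approach to try rather than a guaranteed fix:

from typing import Callable, Concatenate, ParamSpec, TypeVar

from requests import Request

P = ParamSpec("P")
R = TypeVar("R")
S = TypeVar("S")  # stands in for the class providing `self`

def with_request(
    f: Callable[Concatenate[S, Request, P], R]
) -> Callable[Concatenate[S, P], R]:
    # The wrapper keeps `self` as the first positional argument and injects
    # the Request right after it, mirroring the wrapped signature.
    def inner(self: S, *args: P.args, **kwargs: P.kwargs) -> R:
        return f(self, Request(), *args, **kwargs)
    return inner

class Thing:
    @with_request
    def takes_int_str(self, request: Request, x: int, y: str) -> int:
        print(request)
        return x + 7

thing = Thing()
thing.takes_int_str(1, "A")  # should check as (x: int, y: str) -> int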
How can I use ParamSpec with method decorators?
I was following the example from PEP 0612 (last one in the Motivation section) to create a decorator that can add default parameters to a function. The problem is, the example provided only works for functions but not methods, because Concatenate doesn't allow inserting self anywhere in the definition.
Consider this example, as an adaptation of the one in the PEP:
def with_request(f: Callable[Concatenate[Request, P], R]) -> Callable[P, R]:
    def inner(*args: P.args, **kwargs: P.kwargs) -> R:
        return f(*args, request=Request(), **kwargs)
    return inner

class Thing:
    @with_request
    def takes_int_str(self, request: Request, x: int, y: str) -> int:
        print(request)
        return x + 7

thing = Thing()
thing.takes_int_str(1, "A")  # Invalid self argument "Thing" to attribute function "takes_int_str" with type "Callable[[str, int, str], int]"
thing.takes_int_str("B", 2)  # Argument 2 to "takes_int_str" of "Thing" has incompatible type "int"; expected "str"

Both attempts raise a mypy error because Request doesn't match self as the first argument of the method, as Concatenate said. The problem is that Concatenate doesn't allow you to append Request to the end, so something like Concatenate[P, Request] won't work either. This would be the ideal way to do it in my view, but it doesn't work because "The last parameter to Concatenate needs to be a ParamSpec".
def with_request(f: Callable[Concatenate[P, Request], R]) -> Callable[P, R]:
    ...

class Thing:
    @with_request
    def takes_int_str(self, x: int, y: str, request: Request) -> int:
        ...

Any ideas?
[ "There is surprisingly little about this online. I was able to find someone else's discussion of this over at python/typing's Github, which I distilled using your example.\nThe crux of this solution is Callback Protocols, which are functionally equivalent to Callable, but additionally enable us to modify the return type of __get__ (essentially removing the self parameter) as is done for standard methods.\nfrom __future__ import annotations\n\nfrom typing import Any, Callable, Concatenate, Generic, ParamSpec, Protocol, TypeVar\n\nfrom requests import Request\n\nP = ParamSpec(\"P\")\nR = TypeVar(\"R\", covariant=True)\n\n\nclass Method(Protocol, Generic[P, R]):\n def __get__(self, instance: Any, owner: type | None = None) -> Callable[P, R]:\n ...\n\n def __call__(self_, self: Any, *args: P.args, **kwargs: P.kwargs) -> R:\n ...\n\n\ndef request_wrapper(f: Callable[Concatenate[Any, Request, P], R]) -> Method[P, R]:\n def inner(self, *args: P.args, **kwargs: P.kwargs) -> R:\n return f(self, Request(), *args, **kwargs)\n\n return inner\n\n\nclass Thing:\n @request_wrapper\n def takes_int_str(self, request: Request, x: int, y: str) -> int:\n print(request)\n return x + 7\n\n\nthing = Thing()\nthing.takes_int_str(1, \"a\")\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.10", "typing" ]
stackoverflow_0073856901_python_python_3.10_typing.txt
Q: Using FastAPI in a sync way, how can I get the raw body of a POST request? Using FastAPI in a sync, not async mode, I would like to be able to receive the raw, unchanged body of a POST request. All examples I can find show async code, when I try it in a normal sync way, the request.body() shows up as a coroutine object. When I test it by posting some XML to this endpoint, I get a 500 "Internal Server Error". from fastapi import FastAPI, Response, Request, Body app = FastAPI() @app.get("/") def read_root(): return {"Hello": "World"} @app.post("/input") def input_request(request: Request): # how can I access the RAW request body here? body = request.body() # do stuff with the body here return Response(content=body, media_type="application/xml") Is this not possible with FastAPI? Note: a simplified input request would look like: POST http://127.0.0.1:1083/input Content-Type: application/xml <XML> <BODY>TEST</BODY> </XML> and I have no control over how input requests are sent, because I need to replace an existing SOAP API. A: Using async def endpoint If an object is a co-routine, it needs to be awaited. FastAPI is based on Starlette, and Starlette methods for returning the request body are async methods (see source code here); thus, one needs to await them (using an async def endpoint). For example: from fastapi import Request @app.post("/input") async def input_request(request: Request): return await request.body() Update 1 - Using def endpoint Alternatively, if you are confident that the incoming data is a valid JSON, you can define your endpoint with def instead, and use the Body field, as shown below (for more options on how to post JSON data, see this answer): from fastapi import Body @app.post("/input") def input_request(payload: dict = Body(...)): return payload If, however, the incoming data are in XML format, as in the example you provided, you could pass them via Files instead, as shown below—as long as you have control over how client data are sent (have a look here as well). from fastapi import File @app.post("/input") def input_request(contents: bytes = File(...)): return contents Update 2 - Using def endpoint and async dependency As described in this post, you can use an async dependency function to pull out the body from the request. You can use async dependencies on non-async (i.e., def) endpoints as well. Hence, if there is some sort of blocking code in this endpoint that prevents you from using async/await—as I am guessing this might be the reason in your case—this is the way to go. Note: I should also mention that this answer—which explains the difference between def and async def endpoints (that you might be aware of)—also provides solutions when you are required to use async def (as you might need to await for coroutines inside a route), but also have some synchronous expensive CPU-bound operation that might be blocking the server. Please have a look. Example of the approach described earlier can be found below. You can uncomment the time.sleep() line, if you would like to confirm yourself that a request won't be blocking other requests from going through, as when you declare an endpoint with normal def instead of async def, it is run in an external threadpool (regardless of the async def dependency function). 
from fastapi import FastAPI, Depends, Request import time app = FastAPI() async def get_body(request: Request): return await request.body() @app.post("/input") def input_request(body: bytes = Depends(get_body)): print("New request arrived.") #time.sleep(5) return body A: For convenience, you can simply use asgiref; this package supports async_to_sync and sync_to_async: from asgiref.sync import async_to_sync sync_body_func = async_to_sync(request.body) print(sync_body_func()) async_to_sync executes an async function in an event loop, and sync_to_async executes a sync function in a thread pool.
Using FastAPI in a sync way, how can I get the raw body of a POST request?
Using FastAPI in a sync, not async mode, I would like to be able to receive the raw, unchanged body of a POST request. All examples I can find show async code, when I try it in a normal sync way, the request.body() shows up as a coroutine object. When I test it by posting some XML to this endpoint, I get a 500 "Internal Server Error". from fastapi import FastAPI, Response, Request, Body app = FastAPI() @app.get("/") def read_root(): return {"Hello": "World"} @app.post("/input") def input_request(request: Request): # how can I access the RAW request body here? body = request.body() # do stuff with the body here return Response(content=body, media_type="application/xml") Is this not possible with FastAPI? Note: a simplified input request would look like: POST http://127.0.0.1:1083/input Content-Type: application/xml <XML> <BODY>TEST</BODY> </XML> and I have no control over how input requests are sent, because I need to replace an existing SOAP API.
[ "Using async def endpoint\nIf an object is a co-routine, it needs to be awaited. FastAPI is based on Starlette, and Starlette methods for returning the request body are async methods (see source code here); thus, one needs to await them (using an async def endpoint). For example:\nfrom fastapi import Request\n\[email protected](\"/input\")\nasync def input_request(request: Request):\n return await request.body()\n\nUpdate 1 - Using def endpoint\nAlternatively, if you are confident that the incoming data is a valid JSON, you can define your endpoint with def instead, and use the Body field, as shown below (for more options on how to post JSON data, see this answer):\nfrom fastapi import Body\n\[email protected](\"/input\")\ndef input_request(payload: dict = Body(...)):\n return payload\n\nIf, however, the incoming data are in XML format, as in the example you provided, you could pass them via Files instead, as shown below—as long as you have control over how client data are sent (have a look here as well).\nfrom fastapi import File\n\[email protected](\"/input\") \ndef input_request(contents: bytes = File(...)): \n return contents\n\nUpdate 2 - Using def endpoint and async dependency\nAs described in this post, you can use an async dependency function to pull out the body from the request. You can use async dependencies on non-async (i.e., def) endpoints as well. Hence, if there is some sort of blocking code in this endpoint that prevents you from using async/await—as I am guessing this might be the reason in your case—this is the way to go.\nNote: I should also mention that this answer—which explains the difference between def and async def endpoints (that you might be aware of)—also provides solutions when you are required to use async def (as you might need to await for coroutines inside a route), but also have some synchronous expensive CPU-bound operation that might be blocking the server. Please have a look.\nExample of the approach described earlier can be found below. You can uncomment the time.sleep() line, if you would like to confirm yourself that a request won't be blocking other requests from going through, as when you declare an endpoint with normal def instead of async def, it is run in an external threadpool (regardless of the async def dependency function).\nfrom fastapi import FastAPI, Depends, Request\nimport time\n\napp = FastAPI()\n\nasync def get_body(request: Request):\n return await request.body()\n\[email protected](\"/input\")\ndef input_request(body: bytes = Depends(get_body)):\n print(\"New request arrived.\")\n #time.sleep(5)\n return body\n\n", "For convenience, you can simply use asgiref, this package supports async_to_sync and sync_to_async:\nfrom asgiref.sync import async_to_sync\n\nsync_body_func = async_to_sync(request.body)\nprint(sync_body_func())\n\nasync_to_sync execute an async function in a eventloop, sync_to_async execute a sync function in a threadpool.\n" ]
[ 9, 0 ]
[]
[]
[ "fastapi", "python", "starlette" ]
stackoverflow_0070658748_fastapi_python_starlette.txt
Q: How can I set and append in column with conditional case in Django? I want to run this sql query in Django UPDATE `TABLE` SET `COLUMN` = (CASE WHEN `COLUMN` = "" THEN '100' ELSE CONCAT(`COLUMN`,'100') END) WHERE `SOMEID` IN [id1,id2,id3]; I tried this from django.db.models import Case, When, F Table.objects.filter(someid__in=[id1,id2,id3]). update(column= Case( When(column="",then="100"), default= column + "100", ) ) I don't know how to put concat in default here. A: You can use the F objects: from django.db.models import F Table.objects.filter(...).update(column=F("column") + "100") You don't need to check if column == "" because it won't matter when you append "100" to it.
How can I set and append in column with conditional case in Django?
I want to run this sql query in Django UPDATE `TABLE` SET `COLUMN` = (CASE WHEN `COLUMN` = "" THEN '100' ELSE CONCAT(`COLUMN`,'100') END) WHERE `SOMEID` IN [id1,id2,id3]; I tried this from django.db.models import Case, When, F Table.objects.filter(someid__in=[id1,id2,id3]). update(column= Case( When(column="",then="100"), default= column + "100", ) ) I don't know how to put concat in default here.
[ "You can use the F objects:\nfrom django.db.models import F\n\nTable.objects.filter(...).update(column=F(\"column\") + \"100\")\n\nYou don't need to check if column == \"\" because it won't matter when you append \"100\" to it.\n" ]
[ 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074636423_django_python.txt
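A note on the Case/When route the question attempted: if the conditional genuinely matters (for example, when the column can hold NULL, where SQL CONCAT behaves differently than with an empty string), a minimal sketch could look like the following. Table, column, someid and the ids are placeholders taken from the question, and output_field is included since string expressions in Case often need it:

from django.db.models import Case, CharField, F, Value, When
from django.db.models.functions import Concat

Table.objects.filter(someid__in=[id1, id2, id3]).update(
    column=Case(
        When(column="", then=Value("100")),
        default=Concat(F("column"), Value("100")),
        output_field=CharField(),
    )
)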
Q: Discord bot, current date as status I have looked a bit and tried multiple things and I'm stumped. I'm going to be hosting a Discord bot 24/7 and I want the status to display the current date and time, for example 11/30/22, 10:51 PM, in Eastern time. Thanks! I tried methods such as " activity=discord.Game(datetime.datetime.utcnow().strftime("%H:%M"))," A: You can use tasks.loop, creating a task that updates every minute and changes the bot's Status to the current time. tasks.loop is a decorator which executes the decorated function repeatedly at a defined interval. Then you just need to spawn the loop in an asynchronous context, of which I personally use setup_hook in this example. from discord.ext import tasks import datetime @tasks.loop(minutes=1) async def set_status(): # name is unimportant, but it must not take arguments name = datetime.datetime.utcnow().strftime("%H:%M") activity = discord.Game(name=name) await client.change_presence(activity=activity) @client.event async def setup_hook(): # name is important, and it must not take arguments set_status.start() Replacing client with the name of your client as appropriate (usually client or bot).
Discord bot, current date as status
I have looked a bit and tried multiple things and I'm stumped. I'm going to be hosting a Discord bot 24/7 and I want the status to display the current date and time, for example 11/30/22, 10:51 PM, in Eastern time. Thanks! I tried methods such as " activity=discord.Game(datetime.datetime.utcnow().strftime("%H:%M")),"
[ "You can use tasks.loop, creating a task that updates every minute and changes the bot's Status to the current time.\ntasks.loop is a decorator which executes the decorated function repeatedly at a defined interval. Then you just need to spawn the loop in an asynchronous context, of which I personally use setup_hook in this example.\nfrom discord.ext import tasks\nimport datetime\n\[email protected](minutes=1):\nasync def set_status(): # name is unimportant, but it must not take arguments\n name = datetime.datetime.utcnow().strftime(\"%H:%M\")\n activity = discord.Game(name=name)\n await client.change_presence(activity=activity)\n\[email protected]\nasync def setup_hook(): # name is important, and it must not take arguments\n set_status.start()\n\nReplacing client with the name of your client as appropriate (usually client or bot).\n" ]
[ 0 ]
[]
[]
[ "bots", "discord", "python" ]
stackoverflow_0074636772_bots_discord_python.txt
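Since the question asks for Eastern time formatted like 11/30/22, 10:51 PM, the name line inside the loop could build a zone-aware timestamp instead of utcnow(); a small sketch, assuming Python 3.9+ for the standard-library zoneinfo module:

import datetime
from zoneinfo import ZoneInfo

name = datetime.datetime.now(ZoneInfo("America/New_York")).strftime("%m/%d/%y, %I:%M %p")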
Q: TypeError: numpy boolean subtract, the `-` operator, is not supported in Scipy.Optimize Using scipy.optimise (code below) - for a battery optimisation problem Getting this error: TypeError: numpy boolean subtract, the - operator, is not supported, use the bitwise_xor, the ^ operator, or the logical_xor function instead. Which came from the minimize function directly, so I'm not sure exactly where its coming from. line 70, in <module> sol = minimize(objective, B0, method='SLSQP', \ File "AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\scipy\optimize\_minimize.py", line 708, in minimize res = _minimize_slsqp(fun, x0, args, jac, bounds, File "AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\scipy\optimize\_slsqp_py.py", line 418, in _minimize_slsqp a = _eval_con_normals(x, cons, la, n, m, meq, mieq) File "AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\scipy\optimize\_slsqp_py.py", line 486, in _eval_con_normals a_eq = vstack([con['jac'](x, *con['args']) File "AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\scipy\optimize\_slsqp_py.py", line 486, in <listcomp> a_eq = vstack([con['jac'](x, *con['args']) File "AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\scipy\optimize\_slsqp_py.py", line 301, in cjac return approx_derivative(fun, x, method='2-point', File "AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\scipy\optimize\_numdiff.py", line 505, in approx_derivative return _dense_difference(fun_wrapped, x0, f0, h, File "AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\scipy\optimize\_numdiff.py", line 576, in _dense_difference df = fun(x) - f0 TypeError: numpy boolean subtract, the `-` operator, is not supported, use the bitwise_xor, the `^` operator, or the logical_xor function instead. TIME = 2 MAX_BATTERY_CHARGE_RATE = 4 #kwh MAX_BATTERY_CAPACITY = 12 #kw INITIAL_BATTERY_CHARGE = 5 BUY_RATE = [0.1, 0.3] SELL_RATE = [0, 0] L = [2, 5] S = [1, 1] def objective(B): cost = 0 for i in range(TIME): isum = (L[i] - S[i] + B[i]) if isum > 0 : cost += BUY_RATE[i] * isum else: cost += SELL_RATE[i] * isum return cost # BOUNDS # cannot exceed charge rate b = (-1 * MAX_BATTERY_CHARGE_RATE, MAX_BATTERY_CHARGE_RATE) bnds = [b for i in range(TIME)] print(bnds) # CONSTRAINTS # Sum of B up to any point in time cannot be less than 0 # or greater than battery capacity def constraint1(x): for i in range(TIME + 1): array = x[:i] print(array) if (np.sum(array) < 0) or (np.sum(array) > MAX_BATTERY_CAPACITY): return False return True con1 = {'type': 'ineq', 'fun': constraint1} # Battery charge at initial time is set def constraint2(x): return x[0] == INITIAL_BATTERY_CHARGE con2 = {'type': 'eq', 'fun': constraint2} cons = [con1, con2] # SOLUTION B0 = np.ones(TIME) sol = minimize(objective, B0, method='SLSQP', \ bounds=bnds, constraints=cons) print(sol) A: Is the SLSQP method compatible with functions that return boolean values true and false? I would rework function constraint1 to return real values instead of boolean values. 
If SLSQP is compatible with boolean functions, could you point me to the documentation that states this? When I saw this message, "df = fun(x) - f0 TypeError: numpy boolean subtract," I found this odd, because why would the computation of the derivative involve subtraction of boolean values? After looking through your code, I noticed that the function constraint1 is returning boolean values true and false. Traditionally, the objective and constraint functions return real values. I would rework function constraint1 to return real values.
TypeError: numpy boolean subtract, the `-` operator, is not supported in Scipy.Optimize
Using scipy.optimise (code below) - for a battery optimisation problem Getting this error: TypeError: numpy boolean subtract, the - operator, is not supported, use the bitwise_xor, the ^ operator, or the logical_xor function instead. Which came from the minimize function directly, so I'm not sure exactly where its coming from. line 70, in <module> sol = minimize(objective, B0, method='SLSQP', \ File "AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\scipy\optimize\_minimize.py", line 708, in minimize res = _minimize_slsqp(fun, x0, args, jac, bounds, File "AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\scipy\optimize\_slsqp_py.py", line 418, in _minimize_slsqp a = _eval_con_normals(x, cons, la, n, m, meq, mieq) File "AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\scipy\optimize\_slsqp_py.py", line 486, in _eval_con_normals a_eq = vstack([con['jac'](x, *con['args']) File "AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\scipy\optimize\_slsqp_py.py", line 486, in <listcomp> a_eq = vstack([con['jac'](x, *con['args']) File "AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\scipy\optimize\_slsqp_py.py", line 301, in cjac return approx_derivative(fun, x, method='2-point', File "AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\scipy\optimize\_numdiff.py", line 505, in approx_derivative return _dense_difference(fun_wrapped, x0, f0, h, File "AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\scipy\optimize\_numdiff.py", line 576, in _dense_difference df = fun(x) - f0 TypeError: numpy boolean subtract, the `-` operator, is not supported, use the bitwise_xor, the `^` operator, or the logical_xor function instead. TIME = 2 MAX_BATTERY_CHARGE_RATE = 4 #kwh MAX_BATTERY_CAPACITY = 12 #kw INITIAL_BATTERY_CHARGE = 5 BUY_RATE = [0.1, 0.3] SELL_RATE = [0, 0] L = [2, 5] S = [1, 1] def objective(B): cost = 0 for i in range(TIME): isum = (L[i] - S[i] + B[i]) if isum > 0 : cost += BUY_RATE[i] * isum else: cost += SELL_RATE[i] * isum return cost # BOUNDS # cannot exceed charge rate b = (-1 * MAX_BATTERY_CHARGE_RATE, MAX_BATTERY_CHARGE_RATE) bnds = [b for i in range(TIME)] print(bnds) # CONSTRAINTS # Sum of B up to any point in time cannot be less than 0 # or greater than battery capacity def constraint1(x): for i in range(TIME + 1): array = x[:i] print(array) if (np.sum(array) < 0) or (np.sum(array) > MAX_BATTERY_CAPACITY): return False return True con1 = {'type': 'ineq', 'fun': constraint1} # Battery charge at initial time is set def constraint2(x): return x[0] == INITIAL_BATTERY_CHARGE con2 = {'type': 'eq', 'fun': constraint2} cons = [con1, con2] # SOLUTION B0 = np.ones(TIME) sol = minimize(objective, B0, method='SLSQP', \ bounds=bnds, constraints=cons) print(sol)
[ "Is the SLSQP method compatible with functions that return boolean values true and false? I would rework function constraint1 to return real values instead of boolean values. If SLSQP is compatible with boolean functions, could you point me to the documentation that states this?\nWhen I saw this message, \"df = fun(x) - f0 TypeError: numpy boolean subtract,\" I found this odd because why would the computation of the derivative involve substraction of boolean values. After looking through your code, I noticed that the function constraint1 is returning boolean values true and false. Traditionally, the objective and constraint functions return real values. I would rework function constraint1 to return real values.\n" ]
[ 1 ]
[]
[]
[ "optimization", "python", "scipy", "scipy_optimize", "scipy_optimize_minimize" ]
stackoverflow_0074636794_optimization_python_scipy_scipy_optimize_scipy_optimize_minimize.txt
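To make the suggested rework concrete: SLSQP treats an 'ineq' constraint as satisfied when the returned values are >= 0, and the constraint function may return an array, so the prefix-sum bounds from the question can be expressed numerically rather than as True/False. A minimal sketch using the names from the question:

import numpy as np

def constraint1(x):
    sums = np.cumsum(x)  # running battery level after each time step
    # every prefix sum must lie in [0, MAX_BATTERY_CAPACITY]; all entries must be >= 0
    return np.concatenate([sums, MAX_BATTERY_CAPACITY - sums])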
Q: How to increase the thickness of x-axis in matplotlib.plt I am trying to increase the thickness of the horizontal x-axis in my plot but I could not find a way to do it. I am able to increase the thickness of x-ticks but not the line itself. Here is my code: ax = plt.subplot(3, 1, 3) q1 = sns.pointplot(df1['Tomato'][0:191], color='#009966',errwidth = 30, scale=4.5) q2 = sns.pointplot(df1['Tomato'][192:], color='#FF6600',errwidth = 30, scale=4.5) for dots in q1.collections: color = dots.get_facecolor() dots.set_color(sns.set_hls_values(color, l=0.5)) dots.set_alpha(0.5) for line in q1.lines: line.set_alpha(0.5) ax.set(ylabel=None) ax.set(xlabel=None) ax.spines["top"].set_visible(False) ax.spines["right"].set_visible(False) #ax.spines["bottom"].set_visible(False) ax.spines["left"].set_visible(False) plt.xticks(fontweight='bold', fontsize='30') plt.xticks([0, 0.2, 0.4, 0.6, 0.8, 1.0], fontsize=30.0, fontweight='bold', family='Times New Roman') plt.yticks([]) Here is the plot generated based on the code above: A: You can try ax.spines["bottom"].set_linewidth(3).
How to increase the thickness of x-axis in matplotlib.plt
I am trying to increase the thickness of the horizontal x-axis in my plot but I could not find a way to do it. I am able to increase the thickness of x-ticks but not the line itself. Here is my code: ax = plt.subplot(3, 1, 3) q1 = sns.pointplot(df1['Tomato'][0:191], color='#009966',errwidth = 30, scale=4.5) q2 = sns.pointplot(df1['Tomato'][192:], color='#FF6600',errwidth = 30, scale=4.5) for dots in q1.collections: color = dots.get_facecolor() dots.set_color(sns.set_hls_values(color, l=0.5)) dots.set_alpha(0.5) for line in q1.lines: line.set_alpha(0.5) ax.set(ylabel=None) ax.set(xlabel=None) ax.spines["top"].set_visible(False) ax.spines["right"].set_visible(False) #ax.spines["bottom"].set_visible(False) ax.spines["left"].set_visible(False) plt.xticks(fontweight='bold', fontsize='30') plt.xticks([0, 0.2, 0.4, 0.6, 0.8, 1.0], fontsize=30.0, fontweight='bold', family='Times New Roman') plt.yticks([]) Here is the plot generated based on the code above:
[ "You can try ax.spines[\"bottom\"].set_linewidth(3).\n" ]
[ 0 ]
[]
[]
[ "jupyter_notebook", "matplotlib", "python", "seaborn" ]
stackoverflow_0074636391_jupyter_notebook_matplotlib_python_seaborn.txt
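For context, the one-liner slots in next to the spine-visibility calls already in the question; a minimal sketch, with tick_params added only if the tick marks should match the thicker axis line:

ax.spines["bottom"].set_linewidth(3)  # thicken the x-axis line itself
ax.tick_params(axis="x", width=3)     # optionally thicken the tick marks to match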
Q: pandas returning line number and type? I have a csv file, and using python get the highest average price of avocado from the data. All works fine until printing the region avocadoesDB = pd.read_csv("avocado.csv") avocadoesDB = pd.DataFrame(avocadoesDB) avocadoesDB = avocadoesDB[['AveragePrice', 'type', 'year', 'region']] regions = avocadoesDB[['AveragePrice', 'region']] regionMax = max(regions['AveragePrice']) region = regions.loc[regions['AveragePrice']==regionMax] print(f"The highest average price for both types of potatoes is ${regionMax} from {region['region']}.") Output: The highest average price for both types of potatoes is $3.25 from 14125 SanFrancisco Name: region, dtype: object. Expected: The highest average price for both types of potatoes is $3.25 from SanFrancisco. A: So I've tried the same method on a simple dataset and I seem to have made it work; here's the code snippet mx = max(df1['Salary']) plc = df1.loc[df1['Salary']==mx]['Name'] print('Max Sal : ' + str(plc.iloc[0])) Output: Max Sal : Farah According to this post on Stack Overflow, when you use df1.loc[df1['Salary']==mx]['Name'], a Series object is returned, and so to retrieve the value of the desired column, you use .iloc[0], if I understood the post correctly. So for your code, you can replace region = regions.loc[regions['AveragePrice']==regionMax] print(f"The highest average price for both types of potatoes is ${regionMax} from {region['region']}.") with region = regions.loc[regions['AveragePrice']==regionMax]['region'].iloc[0] print(f"The highest average price for both types of potatoes is ${regionMax} from {region}.") This should work. Hope this helps!
pandas returning line number and type?
I have a csv file, and using python get the highest average price of avocado from the data. All works fine until printing the region avocadoesDB = pd.read_csv("avocado.csv") avocadoesDB = pd.DataFrame(avocadoesDB) avocadoesDB = avocadoesDB[['AveragePrice', 'type', 'year', 'region']] regions = avocadoesDB[['AveragePrice', 'region']] regionMax = max(regions['AveragePrice']) region = regions.loc[regions['AveragePrice']==regionMax] print(f"The highest average price for both types of potatoes is ${regionMax} from {region['region']}.") Output: The highest average price for both types of potatoes is $3.25 from 14125 SanFrancisco Name: region, dtype: object. Expected: The highest average price for both types of potatoes is $3.25 from SanFrancisco.
[ "So i've tried to copy the similar method on a simple dataset and i've seem to make it work, here's the code snippet\nmx = max(df1['Salary'])\nplc = df.loc[df1['Salary']==mx]['Name']\nprint('Max Sal : ' + str(plc.iloc[0]))\n\nOutput:\nMax Sal : Farah\n\nAccording to this post on Stack Overflow, when you use df.loc[df1['Salary']==mx]['Name'] , A Series Object is returned, and so to retrieve the value of the desired column, you use [0], if I understood the post correctly.\nSo for your code, you can replace\nregion = regions.loc[regions['AveragePrice']==regionMax]\nprint(f\"The highest average price for both types of potatoes is ${regionMax} from {region['region']}.\")\n\n\nwith\nregion = regions.loc[regions['AveragePrice']==regionMax]['region']\n\nprint(f\"The highest average price for both types of potatoes is ${regionMax} from {region}.\")\n\n\nThis should work. Hope this helps!\n" ]
[ 0 ]
[]
[]
[ "dataframe", "python" ]
stackoverflow_0074636609_dataframe_python.txt
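An alternative that avoids indexing into a Series at all is idxmax, which returns the row label of the maximum so .loc can pull scalar values directly; a sketch using the column names from the question (if several rows tie for the maximum, this picks the first):

row = avocadoesDB.loc[avocadoesDB['AveragePrice'].idxmax()]
print(f"The highest average price for both types is ${row['AveragePrice']} from {row['region']}.")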
Q: Creating my first OOP python 'game', its called Battle Bots I'm very new to coding and have just begun OOP with python and my first task is to build a game called Battle Bots. The premise of the game is 2 bots fighting with 100 life points and each turn the bots attack one another with a randomly generated "strength." The strength is equivalent to the attack so whatever the program generates for the bot's strength is the amount that will be taken from the others life points. I'm wondering if I should create 2 separate classes for each bot or keep them together. This code is a very rough idea of what I'm starting with import random class battleBots(): def __init__(self): self.life = 50 print("Bot1 Life Points: ", self.life) def __init__(self): self.life = 50 print("Bot2 Life Points") def newLife(self): self.Newlife = self.life - self.strength return self.Newlife bot1 = battleBots() bot2 = battleBots() class battleBotsGame(): while True: print("Welcome to the Battle bots game... ") print("Bot1 Your Turn!") choice = input("Press h to hit, q to quit: ")
Creating my first OOP python 'game', its called Battle Bots
I'm very new to coding and have just begun OOP with python and my first task is to build a game called Battle Bots. The premise of the game is 2 bots fighting with 100 life points and each turn the bots attack one another with a randomly generated "strength." The strength is equivalent to the attack so whatever the program generates for the bot's strength is the amount that will be taken from the others life points. I'm wondering if I should create 2 separate classes for each bot or keep them together. This code is a very rough idea of what I'm starting with import random class battleBots(): def __init__(self): self.life = 50 print("Bot1 Life Points: ", self.life) def __init__(self): self.life = 50 print("Bot2 Life Points") def newLife(self): self.Newlife = self.life - self.strength return self.Newlife bot1 = battleBots() bot2 = battleBots() class battleBotsGame(): while True: print("Welcome to the Battle bots game... ") print("Bot1 Your Turn!") choice = input("Press h to hit, q to quit: ")
[]
[]
[ "I suggest you to learn about association, composition and aggregation and inheritance and when to use them, so that you have better understanding of relation between objects and classes at the high level.\nalso in your code you can't have two init method in one class i highly recommend to have clear understanding of python OOP with writing pseudo codes.\n" ]
[ -1 ]
[ "class", "object", "python" ]
stackoverflow_0074636709_class_object_python.txt
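A minimal sketch of the single-class design the question is weighing: one class, two instances, with the randomly generated strength doubling as the attack damage. The damage range is an assumption, not something the question specifies:

import random

class BattleBot:
    def __init__(self, name, life=100):
        self.name = name
        self.life = life

    def attack(self, other):
        strength = random.randint(1, 20)  # assumed range for the random strength
        other.life -= strength
        print(f"{self.name} hits {other.name} for {strength}; {other.name} has {other.life} life left")

bot1 = BattleBot("Bot1")
bot2 = BattleBot("Bot2")
while bot1.life > 0 and bot2.life > 0:
    bot1.attack(bot2)
    if bot2.life > 0:
        bot2.attack(bot1)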
Q: Django - Sending post request using a nested serializer with many to many relationship. Getting a [400 error code] "This field may not be null] error" I'm fairly new to Django and I'm trying to make a POST request with nested objects. This is the data that I'm sending: { "id":null, "deleted":false, "publishedOn":2022-11-28, "decoratedThumbnail":"https://t3.ftcdn.net/jpg/02/48/42/64/360_F_248426448_NVKLywWqArG2ADUxDq6QprtIzsF82dMF.jpg", "rawThumbnail":"https://t3.ftcdn.net/jpg/02/48/42/64/360_F_248426448_NVKLywWqArG2ADUxDq6QprtIzsF82dMF.jpg", "videoUrl":"https://www.youtube.com/watch?v=jNQXAC9IVRw", "title":"Video with tags", "duration":120, "visibility":1, "tags":[ { "id":null, "videoId":null, "videoTagId":42 } ] } Here's a brief diagram of the relationship of these objects on the database I want to create a video and pass in an array of nested data so that I can create multiple tags that can be associated to a video in a many to many relationship. Because of that, the 'id' field of the video will be null and the 'videoId' inside of the tag object will also be null when the data is being sent. However I keep getting a 400 (Bad request) error saying {tags: [{videoId: [This field may not be null.]}]} I'm trying to override the create method inside VideoManageSerializer so that I can extract the tags and after creating the video I can use that video to create those tags. I don't think I'm even getting to the create method part inside VideoManageSerializer as the video is not created on the database. I've been stuck on this issue for a few days. If anybody could point me in the right direction I would really appreciate it. I'm using the following serializers: class VideoManageSerializer(serializers.ModelSerializer): tags = VideoVideoTagSerializer(many=True) class Meta: model = Video fields = ('__all__') # POST def create(self, validated_data): tags = validated_data.pop('tags') video_instance = Video.objects.create(**validated_data) for tag in tags: VideoVideoTag.objects.create(video=video_instance, **tag) return video_instance class VideoVideoTagSerializer(serializers.ModelSerializer): class Meta: model = VideoVideoTag fields = ('__all__') This is the view which uses VideoManageSerializer class VideoManageViewSet(GenericViewSet, # generic view functionality CreateModelMixin, # handles POSTs RetrieveModelMixin, # handles GETs UpdateModelMixin, # handles PUTs and PATCHes ListModelMixin): serializer_class = VideoManageSerializer queryset = Video.objects.all() These are all the models that I'm using: class Video(models.Model): decoratedThumbnail = models.CharField(max_length=500, blank=True, null=True) rawThumbnail = models.CharField(max_length=500, blank=True, null=True) videoUrl = models.CharField(max_length=500, blank=True, null=True) title = models.CharField(max_length=255, blank=True, null=True) duration = models.PositiveIntegerField() visibility = models.ForeignKey(VisibilityType, models.DO_NOTHING, related_name='visibility') publishedOn = models.DateField() deleted = models.BooleanField(default=0) class Meta: managed = True db_table = 'video' class VideoTag(models.Model): name = models.CharField(max_length=100, blank=True, null=True) deleted = models.BooleanField(default=0) class Meta: managed = True db_table = 'video_tag' class VideoVideoTag(models.Model): videoId = models.ForeignKey(Video, models.DO_NOTHING, related_name='tags', db_column='videoId') videoTagId = models.ForeignKey(VideoTag, models.DO_NOTHING, related_name='video_tag', db_column='videoTagId') class Meta: managed = True 
db_table = 'video_video_tag' A: I would consider changing the serializer as below, class VideoManageSerializer(serializers.ModelSerializer): video_tag_id = serializers.PrimaryKeyRelatedField( many=True, queryset=VideoTag.objects.all(), write_only=True, ) tags = VideoVideoTagSerializer(many=True, read_only=True) class Meta: model = Video fields = "__all__" # POST def create(self, validated_data): tags = validated_data.pop("video_tag_id") video_instance = Video.objects.create(**validated_data) for tag in tags: VideoVideoTag.objects.create(videoId=video_instance, videoTagId=tag) return video_instance Things that have changed - Added a new write_only field named video_tag_id that supposed to accept "list of PKs of VideoTag". Changed the tags field to read_only so that it won't take part in the validation process, but you'll get the "nested serialized output". Changed create(...) method to cooperate with the new changes. The POST payload has been changed as below (note that tags has been removed and video_tag_id has been introduced) { "deleted":false, "publishedOn":"2022-11-28", "decoratedThumbnail":"https://t3.ftcdn.net/jpg/02/48/42/64/360_F_248426448_NVKLywWqArG2ADUxDq6QprtIzsF82dMF.jpg", "rawThumbnail":"https://t3.ftcdn.net/jpg/02/48/42/64/360_F_248426448_NVKLywWqArG2ADUxDq6QprtIzsF82dMF.jpg", "videoUrl":"https://www.youtube.com/watch?v=jNQXAC9IVRw", "title":"Video with tags", "duration":120, "visibility":1, "video_tag_id":[1,2,3] } Refs DRF: Simple foreign key assignment with nested serializer? DRF - write_only DRF - read_only
Django - Sending post request using a nested serializer with many to many relationship. Getting a [400 error code] "This field may not be null] error"
I'm fairly new to Django and I'm trying to make a POST request with nested objects. This is the data that I'm sending: { "id":null, "deleted":false, "publishedOn":2022-11-28, "decoratedThumbnail":"https://t3.ftcdn.net/jpg/02/48/42/64/360_F_248426448_NVKLywWqArG2ADUxDq6QprtIzsF82dMF.jpg", "rawThumbnail":"https://t3.ftcdn.net/jpg/02/48/42/64/360_F_248426448_NVKLywWqArG2ADUxDq6QprtIzsF82dMF.jpg", "videoUrl":"https://www.youtube.com/watch?v=jNQXAC9IVRw", "title":"Video with tags", "duration":120, "visibility":1, "tags":[ { "id":null, "videoId":null, "videoTagId":42 } ] } Here's a brief diagram of the relationship of these objects on the database I want to create a video and pass in an array of nested data so that I can create multiple tags that can be associated to a video in a many to many relationship. Because of that, the 'id' field of the video will be null and the 'videoId' inside of the tag object will also be null when the data is being sent. However I keep getting a 400 (Bad request) error saying {tags: [{videoId: [This field may not be null.]}]} I'm trying to override the create method inside VideoManageSerializer so that I can extract the tags and after creating the video I can use that video to create those tags. I don't think I'm even getting to the create method part inside VideoManageSerializer as the video is not created on the database. I've been stuck on this issue for a few days. If anybody could point me in the right direction I would really appreciate it. I'm using the following serializers: class VideoManageSerializer(serializers.ModelSerializer): tags = VideoVideoTagSerializer(many=True) class Meta: model = Video fields = ('__all__') # POST def create(self, validated_data): tags = validated_data.pop('tags') video_instance = Video.objects.create(**validated_data) for tag in tags: VideoVideoTag.objects.create(video=video_instance, **tag) return video_instance class VideoVideoTagSerializer(serializers.ModelSerializer): class Meta: model = VideoVideoTag fields = ('__all__') This is the view which uses VideoManageSerializer class VideoManageViewSet(GenericViewSet, # generic view functionality CreateModelMixin, # handles POSTs RetrieveModelMixin, # handles GETs UpdateModelMixin, # handles PUTs and PATCHes ListModelMixin): serializer_class = VideoManageSerializer queryset = Video.objects.all() These are all the models that I'm using: class Video(models.Model): decoratedThumbnail = models.CharField(max_length=500, blank=True, null=True) rawThumbnail = models.CharField(max_length=500, blank=True, null=True) videoUrl = models.CharField(max_length=500, blank=True, null=True) title = models.CharField(max_length=255, blank=True, null=True) duration = models.PositiveIntegerField() visibility = models.ForeignKey(VisibilityType, models.DO_NOTHING, related_name='visibility') publishedOn = models.DateField() deleted = models.BooleanField(default=0) class Meta: managed = True db_table = 'video' class VideoTag(models.Model): name = models.CharField(max_length=100, blank=True, null=True) deleted = models.BooleanField(default=0) class Meta: managed = True db_table = 'video_tag' class VideoVideoTag(models.Model): videoId = models.ForeignKey(Video, models.DO_NOTHING, related_name='tags', db_column='videoId') videoTagId = models.ForeignKey(VideoTag, models.DO_NOTHING, related_name='video_tag', db_column='videoTagId') class Meta: managed = True db_table = 'video_video_tag'
[ "I would consider changing the serializer as below,\nclass VideoManageSerializer(serializers.ModelSerializer):\n video_tag_id = serializers.PrimaryKeyRelatedField(\n many=True,\n queryset=VideoTag.objects.all(),\n write_only=True,\n )\n tags = VideoVideoTagSerializer(many=True, read_only=True)\n\n class Meta:\n model = Video\n fields = \"__all__\"\n\n # POST\n def create(self, validated_data):\n tags = validated_data.pop(\"video_tag_id\")\n video_instance = Video.objects.create(**validated_data)\n for tag in tags:\n VideoVideoTag.objects.create(videoId=video_instance, videoTagId=tag)\n return video_instance\n\nThings that have changed -\n\nAdded a new write_only field named video_tag_id that supposed to accept \"list of PKs of VideoTag\".\nChanged the tags field to read_only so that it won't take part in the validation process, but you'll get the \"nested serialized output\".\nChanged create(...) method to cooperate with the new changes.\nThe POST payload has been changed as below (note that tags has been removed and video_tag_id has been introduced)\n{\n \"deleted\":false,\n \"publishedOn\":\"2022-11-28\",\n \"decoratedThumbnail\":\"https://t3.ftcdn.net/jpg/02/48/42/64/360_F_248426448_NVKLywWqArG2ADUxDq6QprtIzsF82dMF.jpg\",\n \"rawThumbnail\":\"https://t3.ftcdn.net/jpg/02/48/42/64/360_F_248426448_NVKLywWqArG2ADUxDq6QprtIzsF82dMF.jpg\",\n \"videoUrl\":\"https://www.youtube.com/watch?v=jNQXAC9IVRw\",\n \"title\":\"Video with tags\",\n \"duration\":120,\n \"visibility\":1,\n \"video_tag_id\":[1,2,3]\n}\n\n\n\nRefs\n\nDRF: Simple foreign key assignment with nested serializer?\nDRF - write_only\nDRF - read_only\n\n" ]
[ 3 ]
[]
[]
[ "api", "django", "django_rest_framework", "python", "sql" ]
stackoverflow_0074606902_api_django_django_rest_framework_python_sql.txt
Q: Python macOS os module: Can't find path When running my python script for saving a .xlsx document on the local computer and trying to open the same file after with the system() function from the python os module, I catch this specific error on screen: image (Translated to English: Path not found) The code used for saving the file is wb.save() from openpyxl workbook: wb.save(saved_file := (f"{path}Skiftplan ({name} - {shift}).xlsx")) where: {path} = /skiftplaner/November/ {name} = "onsdag" {shift} = "17:00 - 20:00" And the code for opening the file is: system(f"open -a '/Applications/Microsoft Excel.app' '{realpath(saved_file)}'") Now the weird thing about this is that if I replace any character in name = "onsdag", the program works and the path can be found. ex: name = "0nsdag" will work, or name = "onsdaj", but I would like the variable name to be "onsdag". And I have, prior to the shown code, checked if the path exists, and if not then created it, which does work, but the system() function can't find the file if the {name} = "onsdag" I've tried changing the {name} variable to ex. "0nsdag", "onsdaj" and other values, which have worked. The file does get created by the wb.save() function, but can't be found by the system() function A: So following the steps from @Gordon Davisson: I tried pasting open -a '/Applications/Microsoft Excel.app' '/Users/andersballeby/Desktop/McDonalds Programmer/Main Project/McDonalds-Leader-Panel/skiftplaner/November/onsdag d. 30-11/Skiftplan (onsdag - MID - 12:00 - 17:00).xlsx' into my terminal, which returned the same error message as on the image at the top of this thread. Then I tried pasting the other command: ls '/Users/andersballeby/Desktop/McDonalds Programmer/Main Project/McDonalds-Leader-Panel/skiftplaner/November/onsdag d. 30-11/Skiftplan (onsdag - MID - 12:00 - 17:00).xlsx' which just returned the same file with no interesting properties at all. Then by removing the file and simply looking at the ls November/onsdag d.30-11/ folder, I saw this: Skiftplan (onsdag - MID - 12:00 - 17:00).xlsx Skiftplan (onsdag - MID - 12:00 - 17:Skiftplan (onsdag - MID - 12:00 - 17:~$00).xlsx This got me to look in my Finder directory folder, and I could only see the first file, not the second file. After this I tried closing all instances of Excel and somehow it just works now; if I try doing ls November/onsdag d.30-11/ again, it only returns the single file I'm looking for: Skiftplan (onsdag - MID - 12:00 - 17:00).xlsx This now went on to fully function in the program, and it's now able to open the Excel file without any errors or complications whatsoever.
Python macOS os module: Can't find path
When running my python script for saving a .xlsx document on the local computer and trying to open the same file after with the system() function from the python os module, I catch this specific error on screen: image (Translated to English: Path not found) The code used for saving the file is wb.save() from openpyxl workbook: wb.save(saved_file := (f"{path}Skiftplan ({name} - {shift}).xlsx")) where: {path} = /skiftplaner/November/ {name} = "onsdag" {shift} = "17:00 - 20:00" And the code for opening the file is: system(f"open -a '/Applications/Microsoft Excel.app' '{realpath(saved_file)}'") Now the weird thing about this is that if I replace any character in name = "onsdag", the program works and the path can be found. ex: name = "0nsdag" will work, or name = "onsdaj", but I would like the variable name to be "onsdag". And I have, prior to the shown code, checked if the path exists, and if not then created it, which does work, but the system() function can't find the file if the {name} = "onsdag" I've tried changing the {name} variable to ex. "0nsdag", "onsdaj" and other values, which have worked. The file does get created by the wb.save() function, but can't be found by the system() function
[ "So following the steps from @Gordon Davisson:\nI tried pasting open -a '/Applications/Microsoft Excel.app' '/Users/andersballeby/Desktop/McDonalds Programmer/Main Project/McDonalds-Leader-Panel/skiftplaner/November/onsdag d. 30-11/Skiftplan (onsdag - MID - 12:00 - 17:00).xlsx' into my terminal, which returned the same error message as on the image at the top of this thread.\nThen i tried pasting the other command: ls '/Users/andersballeby/Desktop/McDonalds Programmer/Main Project/McDonalds-Leader-Panel/skiftplaner/November/onsdag d. 30-11/Skiftplan (onsdag - MID - 12:00 - 17:00).xlsx' which just returned the same file with no interesting properties at all.\nThen by removing the file and simply just looking at the ls November/onsdag d.30-11/ folder, i saw this:\nSkiftplan (onsdag - MID - 12:00 - 17:00).xlsx\nSkiftplan (onsdag - MID - 12:00 - 17:Skiftplan (onsdag - MID - 12:00 - 17:~$00).xlsx\n\nThis got me to look in my finder directory folder, and I could only see the first file, not the second file.\nAfter this i tried closing all instances of excel and somehow it just works now, if i try doing ls November/onsdag d.30-11/ again, it only returns the single file im looking for:\nSkiftplan (onsdag - MID - 12:00 - 17:00).xlsx\n\nThis now went on to fully function in the program, and it's now able to open the excel file without any errors or complications whatsoever.\n" ]
[ 0 ]
[]
[]
[ "macos", "openpyxl", "operating_system", "python", "python_3.x" ]
stackoverflow_0074636105_macos_openpyxl_operating_system_python_python_3.x.txt
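Independent of the stray Excel lock file, the shell-quoting pitfalls of os.system can be sidestepped entirely by passing the arguments as a list to subprocess, which needs no quoting for paths containing spaces or parentheses; a sketch assuming the saved_file variable from the question:

import subprocess
subprocess.run(["open", "-a", "/Applications/Microsoft Excel.app", saved_file], check=True)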
Q: How do I rename parent field name and nested field value in mongodb using pymongo? I have the following document: { "dataset_path":"path_of_dataset", "project_1":{ "model_1":"path_of_model_1", "model_2":"path_of_model_2" } } I want to change "project_1" to "renamed_project_1" and "path_of_model_1" to "new_model_1_path". The resultant output should be as follows: { "dataset_path":"path_of_dataset", "renamed_project_1":{ "renamed_model_1":"new_model_1_path", "model_2":"path_of_model_2" } } Here is what I tried: db.collection.update_many({'dataset_path': 'path_to_dataset'}, {'$rename': {"project_1": "renamed_project_1"}}, {'$set': {"project_1.model_1": "new_model_1_path"}}) but the above query throws the following error: pymongo.errors.WriteError: Updating the path X would create a conflict at X. A: That's because you're trying to mutate the project_1 field twice in a single query. Mongo just doesn't know how to deal with that. You should consider splitting it into two operations: db.collection.update_many({'dataset_path': 'path_to_dataset'}, {'$rename': {"project_1": "renamed_project_1"}}) db.collection.update_many({'dataset_path': 'path_to_dataset'}, {'$set': {"renamed_project_1.model_1": "new_model_1_path"}}) A: db.collection.update_many({}, {"$rename": {"old_value": "new_value"}})
How do I rename parent field name and nested field value in mongodb using pymongo?
I have the following document: { "dataset_path":"path_of_dataset", "project_1":{ "model_1":"path_of_model_1", "model_2":"path_of_model_2" } } I want to change "project_1" to "renamed_project_1" and "path_of_model_1" to "new_model_1_path". The resultant output should be as follows: { "dataset_path":"path_of_dataset", "renamed_project_1":{ "renamed_model_1":"new_model_1_path", "model_2":"path_of_model_2" } } Here is what I tried: db.collection.update_many({'dataset_path': 'path_to_dataset'}, {'$rename': {"project_1": "renamed_project_1"}}, {'$set': {"project_1.model_1": "new_model_1_path"}}) but the above query throws the following error: pymongo.errors.WriteError: Updating the path X would create a conflict at X.
[ "That's because you're trying to mutate project_1 field two times in a single query. Mongo just doesn't know how to deal with that.\nYou should consider splitting two operations:\ndb.collection.update_many({'dataset_path': 'path_to_dataset'}, {'$rename': {\"project_1\": \"renamed_project_1\"}})\ndb.collection.update_many({'dataset_path': 'path_to_dataset'}, {'$set': {\"renamed_project_1.model_1\": \"new_model_1_path\"}})\n\n", "db.collection.update_many({}, {\"$rename\": {\"old_value\": \"new_value\"}})\n\n" ]
[ 0, 0 ]
[]
[]
[ "mongodb", "pymongo", "pymongo_3.x", "python" ]
stackoverflow_0066559676_mongodb_pymongo_pymongo_3.x_python.txt
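On MongoDB 4.2+ the two steps can also be expressed as a single update that uses an aggregation pipeline, since pipeline stages run in order within one update; a sketch using the field names from the question:

db.collection.update_many(
    {"dataset_path": "path_of_dataset"},
    [
        {"$set": {"renamed_project_1": "$project_1"}},
        {"$unset": "project_1"},
        {"$set": {"renamed_project_1.model_1": "new_model_1_path"}},
    ],
)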
Q: call method using variables in python I want to pass methods as variables. In the code below I have 3 methods that are part of the fuzz library. How can I call them using a variable name? from fuzzywuzzy import fuzz from fuzzywuzzy import process method_name1 ='token_sort_ratio' method_name2 ='partial_ratio' method_name3 ='ratio' def compare_alg(l1, l2, alg): print(fuzz.alg(l1,l2)) compare_alg("Catherine M Gitau","Catherine Gitau", method_name1) compare_alg("Catherine M Gitau","Catherine Gitau", method_name2) compare_alg("Catherine M Gitau","Catherine Gitau", method_name3) A: You can use getattr from fuzzywuzzy import fuzz from fuzzywuzzy import process method_name1 ='token_sort_ratio' method_name2 ='partial_ratio' method_name3 ='ratio' def compare_alg(l1, l2, alg): print(getattr(fuzz, alg)(l1,l2)) compare_alg("Catherine M Gitau","Catherine Gitau", method_name1) compare_alg("Catherine M Gitau","Catherine Gitau", method_name2) compare_alg("Catherine M Gitau","Catherine Gitau", method_name3)
call method using variables in python
I want to pass methods as variables. In the code below I have 3 methods that are part of the fuzz library. How can I call them using a variable name? from fuzzywuzzy import fuzz from fuzzywuzzy import process method_name1 ='token_sort_ratio' method_name2 ='partial_ratio' method_name3 ='ratio' def compare_alg(l1, l2, alg): print(fuzz.alg(l1,l2)) compare_alg("Catherine M Gitau","Catherine Gitau", method_name1) compare_alg("Catherine M Gitau","Catherine Gitau", method_name2) compare_alg("Catherine M Gitau","Catherine Gitau", method_name3)
[ "You can use getattr\nfrom fuzzywuzzy import fuzz\nfrom fuzzywuzzy import process\nmethod_name1 ='token_sort_ratio'\nmethod_name2 ='partial_ratio'\nmethod_name3 ='ratio'\n\ndef compare_alg(l1, l2, alg):\n print(getattr(fuzz, alg)(l1,l2))\n\ncompare_alg(\"Catherine M Gitau\",\"Catherine Gitau\", method_name1)\ncompare_alg(\"Catherine M Gitau\",\"Catherine Gitau\", method_name2)\ncompare_alg(\"Catherine M Gitau\",\"Catherine Gitau\", method_name3)\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074637000_python.txt
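Two small refinements to the getattr approach: it accepts a default, so a misspelled method name can fail with a clear message instead of an AttributeError, and since functions are first-class objects in Python, compare_alg could equally take the function object itself (e.g. fuzz.ratio) and call it directly. A sketch of the guarded version:

def compare_alg(l1, l2, alg):
    alg_fn = getattr(fuzz, alg, None)  # None instead of raising on a bad name
    if alg_fn is None:
        raise ValueError(f"fuzz has no scorer named {alg!r}")
    print(alg_fn(l1, l2))

compare_alg("Catherine M Gitau", "Catherine Gitau", "token_sort_ratio")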
Q: How can I pass a list to Airflow via a template? I have an airflow operator based off of BaseOperator which has libraries as one of its fields. This field takes a list of python packages that may need to be installed to run the code in the task. I would like to be able to pass that list via a template variable but have not had luck doing so. I have tried passing the list as a string and using list(eval('{{ variable_name }}')) to assign it to the value of libraries but this could not be deployed because python did not know what variable_name was. I then tried to pass the data as a list and assigned '{{ variable_name }}' to libraries but this resulted in the dag run failing because it was expecting an object and got a string. Is there a way to pass this list object to a dag via templating? A: This is now supported via the render_template_as_native_obj. Please add the following argument to your DAG object for Jinja to apply correct typing for basic python objects: render_template_as_native_obj=True A: This is not supported currently but will be supported when https://github.com/apache/airflow/pull/14603 is merged.
How can I pass a list to Airflow via a template?
I have an airflow operator based off of BaseOperator which has libraries as one of its fields. This field takes a list of python packages that may need to be installed to run the code in the task. I would like to be able to pass that list via a template variable but have not had luck doing so. I have tried passing the list as a string and using list(eval('{{ variable_name }}')) to assign it to the value of libraries but this could not be deployed because python did not know what variable_name was. I then tried to pass the data as a list and assigned '{{ variable_name }}' to libraries but this resulted in the dag run failing because it was expecting an object and got a string. Is there a way to pass this list object to a dag via templating?
[ "This is now supported via the render_template_as_native_obj.\nPlease add the following argument to your DAG object for Jinja to apply correct typing for basic python objects:\nrender_template_as_native_obj=True\n\n", "This is not supported currently but will be supported when https://github.com/apache/airflow/pull/14603 is merged.\n" ]
[ 1, 0 ]
[]
[]
[ "airflow", "python", "templating" ]
stackoverflow_0067202226_airflow_python_templating.txt
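A minimal sketch of where the flag goes, assuming Airflow 2.1+ (the version that added render_template_as_native_obj); MyOperator and the Variable name package_list are placeholders standing in for the custom operator and template variable described in the question:

from datetime import datetime
from airflow import DAG

with DAG(
    dag_id="native_templates_example",
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,
    render_template_as_native_obj=True,  # Jinja renders templates to native Python objects
) as dag:
    task = MyOperator(
        task_id="install_and_run",
        libraries="{{ var.json.package_list }}",  # renders as a real list, not a string
    )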
Q: Dotted or dashed line with Python PILLOW How to draw a dotted or dashed line or rectangle with Python PILLOW. Can anyone help me? Using openCV I can do that. But I want it using Pillow. A: Thanks to @martineau's comment, I figured out how to draw a dotted line. Here is my code. cur_x = 0 cur_y = 0 image_width = 600 for x in range(cur_x, image_width, 4): draw.line([(x, cur_y), (x + 2, cur_y)], fill=(170, 170, 170)) This will draw a dotted line of gray color. A: I decided to write up the idea I suggested in the comments - namely to draw the shapes with solid lines and then overlay a thresholded noisy image to obliterate parts of the line. I made all the noise on a smaller image and then scaled it up so that the noise was "more clumped" instead of tiny blobs. So this is just the generation of the test image: #!/usr/local/bin/python3 import numpy as np from PIL import Image, ImageDraw # Make empty black image im = Image.new('L', (640,480)) # Draw white rectangle and ellipse draw = ImageDraw.Draw(im) draw.rectangle([20,20,620,460],outline=255) draw.ellipse([100,100,540,380],outline=255) And this is generating the noise overlay and overlaying it - you can just delete this sentence and join the two lumps of code together: # Make noisy overlay, 1/4 the size, threshold at 50%, scale up to full-size noise = np.random.randint(0,256,(120,160),dtype=np.uint8) noise = (noise>128)*255 noiseim = Image.fromarray(noise.astype(np.uint8)) noiseim = noiseim.resize((640,480), resample=Image.NEAREST) # Paste the noise in, but only allowing the white shape outlines to be affected im.paste(noiseim,mask=im) im.save('result.png') The result is this: The solidly-drawn image is like this: The noise is like this: A: The following function draws a dashed line. It might be slow, but it works and I needed it. "dashlen" is the length of the dashes, in pixels. - "ratio" is the ratio of the empty space to the dash length (the higher the value the more empty space you get) import math # math has the fastest sqrt def linedashed(x0, y0, x1, y1, dashlen=4, ratio=3): dx=x1-x0 # delta x dy=y1-y0 # delta y # check whether we can avoid sqrt if dy==0: len=dx elif dx==0: len=dy else: len=math.sqrt(dx*dx+dy*dy) # length of line xa=dx/len # x add for 1px line length ya=dy/len # y add for 1px line length step=dashlen*ratio # step to the next dash a0=0 while a0<len: a1=a0+dashlen if a1>len: a1=len draw.line((x0+xa*a0, y0+ya*a0, x0+xa*a1, y0+ya*a1), fill = (0,0,0)) a0+=step A: I know this question is a bit old (4 y.o. at the time of my writing this answer), but as it happened I was in need of drawing a patterned line. So I concocted my own solution here: https://codereview.stackexchange.com/questions/281582/algorithm-to-traverse-a-path-through-several-data-points-and-draw-a-patterned-li (Sorry the solution was a bit long, better to just look there. The code works, though, that's why it's in CodeReview SE). Provide the right "pattern dictionary", where blank segments are represented by setting color to None, and you should be good to go.
Dotted or dashed line with Python PILLOW
How to draw a dotted or dashed line or rectangle with Python PILLOW. Can anyone help me? Using openCV I can do that. But I want it using Pillow.
[ "Thanks to @martineau's comment, I figured out how to draw a dotted line. Here is my code.\ncur_x = 0\ncur_y = 0\nimage_width = 600\nfor x in range(cur_x, image_width, 4):\n draw.line([(x, cur_y), (x + 2, cur_y)], fill=(170, 170, 170))\n\nThis will draw a dotted line of gray color.\n", "I decided to write up the idea I suggested in the comments - namely to draw the shapes with solid lines and then overlay a thresholded noisy image to obliterate parts of the line.\nI made all the noise on a smaller image and then scaled it up so that the noise was \"more clumped\" instead of tiny blobs.\nSo this is just the generation of the test image:\n#!/usr/local/bin/python3\n\nimport numpy as np\nfrom PIL import Image, ImageDraw\n\n# Make empty black image\nim = Image.new('L', (640,480))\n\n# Draw white rectangle and ellipse\ndraw = ImageDraw.Draw(im)\ndraw.rectangle([20,20,620,460],outline=255)\ndraw.ellipse([100,100,540,380],outline=255)\n\nAnd this is generating the noise overlay and overlaying it - you can just delete this sentence and join the two lumps of code together:\n# Make noisy overlay, 1/4 the size, threshold at 50%, scale up to full-size\nnoise = np.random.randint(0,256,(120,160),dtype=np.uint8)\nnoise = (noise>128)*255\nnoiseim = Image.fromarray(noise.astype(np.uint8))\nnoiseim = noiseim.resize((640,480), resample=Image.NEAREST)\n\n# Paste the noise in, but only allowing the white shape outlines to be affected\nim.paste(noiseim,mask=im)\nim.save('result.png')\n\nThe result is this:\n\nThe solidly-drawn image is like this:\n\nThe noise is like this:\n\n", "The following function draws a dashed line. It might be slow, but it works and I needed it. \n\"dashlen\" is the length of the dashes, in pixels. - \n\"ratio\" is the ratio of the empty space to the dash length (the higher the value the more empty space you get)\nimport math # math has the fastest sqrt\n\ndef linedashed(x0, y0, x1, y1, dashlen=4, ratio=3): \n dx=x1-x0 # delta x\n dy=y1-y0 # delta y\n # check whether we can avoid sqrt\n if dy==0: len=dx\n elif dx==0: len=dy\n else: len=math.sqrt(dx*dx+dy*dy) # length of line\n xa=dx/len # x add for 1px line length\n ya=dy/len # y add for 1px line length\n step=dashlen*ratio # step to the next dash\n a0=0\n while a0<len:\n a1=a0+dashlen\n if a1>len: a1=len\n draw.line((x0+xa*a0, y0+ya*a0, x0+xa*a1, y0+ya*a1), fill = (0,0,0))\n a0+=step \n\n", "I know this question is a bit old (4 y.o. at the time of my writing this answer), but as it happened I was in need of drawing a patterned line.\nSo I concocted my own solution here: https://codereview.stackexchange.com/questions/281582/algorithm-to-traverse-a-path-through-several-data-points-and-draw-a-patterned-li\n(Sorry the solution was a bit long, better to just look there. The code works, though, that's why it's in CodeReview SE).\nProvide the right \"pattern dictionary\", where blank segments are represented by setting color to None, and you should be good to go.\n" ]
[ 4, 1, 0, 0 ]
[]
[]
[ "python", "python_imaging_library" ]
stackoverflow_0051908563_python_python_imaging_library.txt
Q: Using Merge into to update multiple rows in snowflake DB using python I have a Snowflake table, let's call it temp, with ID as the primary key, which auto-increments on every insert. I learned that the MERGE INTO statement can be used to update multiple rows in a Snowflake table. I have a tkinter application which retrieves the user input entered on the form using the Treeview widget in Python. How can I update this table (temp) across multiple rows and various columns? I save the user-input variables with an internal function, either in a list or a tuple, and they need to be written back to the table temp. For example, if the user wants to change all five rows with ID IN (1,2,3,4,5) for the columns W1, W2, W3, how should I go about it? In the Snowflake documentation, MERGE uses a target table and a source table. Is that possible here? If so, how do I go about it? If not, which alternate method should I follow? Thanks
################# Code template: this function runs when the user selects the update button on my tkinter app ############
Sql_Update = """ Update statement goes in here with parameters """

def Update_Fn():
    updates = self.records.selection()
    All_items = [self.records.item(i, 'values') for i in updates]
    # I get all the primary key IDs that need to be updated.
    ID1 = All_items[0][0]
    ID2 = All_items[1][0]
    ID3 = All_items[2][0]
    ...............
    ## I get all the column entries that need to be updated
    var1 = W1.get()
    var2 = W2.get()
    var3 = W3.get()  # etc. Example for the first row entries

    ctx = snowflake.connector.connect(
        user="",
        password="",
        account="",
        database=""
    )
    cs = ctx.cursor()
    df = pd.read_sql(Sql_Update, ctx, params=param)
    ctx.commit()
    cs.close()
    ctx.close()

####### Trying to understand how to use the update for the multiple rows ###########
merge into temp using "source table"  -- I don't have a source table, only values in a list or a tuple
on temp.ID = var.ID                   -- value from one of the variables
when matched then update set
    temp.W1 = var2.W1,
    temp.W2 = var3.W2.....;

ID Decision   W1 W2 W3 Date       Name
1  KLT Map    5  0  2  11/30/2022 python_beginner
2  PI Errors  7  0  3  11/30/2022 python_beginner
3  KI Logs    8  8  3  11/30/2022 python_beginner
4  Non_Issues 9  8  4  11/30/2022 python_beginner
5  Tickets    87 5  1  11/30/2022 python_beginner
A: Not a fancy solution, but I am passing these update statements as a list, since I need only up to three update statements for this to work.
queries = [update1, update2, update3]
for q in queries:
    df = pd.read_sql(q, con, params=params)
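To answer the MERGE question directly: Snowflake's USING clause can take a subquery over an inline VALUES list, so no physical source table is needed. The sketch below is hedged, not a drop-in answer: it assumes the temp table and W1/W2/W3 columns from the question, the connection arguments are placeholders, and it relies on the connector's default client-side %s binding.
import snowflake.connector

rows = [(1, 5, 0, 2), (2, 7, 0, 3), (3, 8, 8, 3)]  # (ID, W1, W2, W3) per row

placeholders = ", ".join(["(%s, %s, %s, %s)"] * len(rows))
merge_sql = f"""
MERGE INTO temp t
USING (SELECT column1 AS id, column2 AS w1, column3 AS w2, column4 AS w3
       FROM VALUES {placeholders}) s
ON t.ID = s.id
WHEN MATCHED THEN UPDATE SET t.W1 = s.w1, t.W2 = s.w2, t.W3 = s.w3
"""

ctx = snowflake.connector.connect(user="", password="", account="", database="")
cs = ctx.cursor()
cs.execute(merge_sql, [v for row in rows for v in row])  # flatten the bind values
ctx.commit()
cs.close()
ctx.close()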
Using Merge into to update multiple rows in snowflake DB using python
I have a Snowflake table, let's call it temp, with ID as the primary key, which auto-increments on every insert. I learned that the MERGE INTO statement can be used to update multiple rows in a Snowflake table. I have a tkinter application which retrieves the user input entered on the form using the Treeview widget in Python. How can I update this table (temp) across multiple rows and various columns? I save the user-input variables with an internal function, either in a list or a tuple, and they need to be written back to the table temp. For example, if the user wants to change all five rows with ID IN (1,2,3,4,5) for the columns W1, W2, W3, how should I go about it? In the Snowflake documentation, MERGE uses a target table and a source table. Is that possible here? If so, how do I go about it? If not, which alternate method should I follow? Thanks
################# Code template: this function runs when the user selects the update button on my tkinter app ############
Sql_Update = """ Update statement goes in here with parameters """

def Update_Fn():
    updates = self.records.selection()
    All_items = [self.records.item(i, 'values') for i in updates]
    # I get all the primary key IDs that need to be updated.
    ID1 = All_items[0][0]
    ID2 = All_items[1][0]
    ID3 = All_items[2][0]
    ...............
    ## I get all the column entries that need to be updated
    var1 = W1.get()
    var2 = W2.get()
    var3 = W3.get()  # etc. Example for the first row entries

    ctx = snowflake.connector.connect(
        user="",
        password="",
        account="",
        database=""
    )
    cs = ctx.cursor()
    df = pd.read_sql(Sql_Update, ctx, params=param)
    ctx.commit()
    cs.close()
    ctx.close()

####### Trying to understand how to use the update for the multiple rows ###########
merge into temp using "source table"  -- I don't have a source table, only values in a list or a tuple
on temp.ID = var.ID                   -- value from one of the variables
when matched then update set
    temp.W1 = var2.W1,
    temp.W2 = var3.W2.....;

ID Decision   W1 W2 W3 Date       Name
1  KLT Map    5  0  2  11/30/2022 python_beginner
2  PI Errors  7  0  3  11/30/2022 python_beginner
3  KI Logs    8  8  3  11/30/2022 python_beginner
4  Non_Issues 9  8  4  11/30/2022 python_beginner
5  Tickets    87 5  1  11/30/2022 python_beginner
[ "Not a fancy solution, but I am passing these update statements as a list, since I need only up to three update statements for this to work.\nqueries = [update1, update2, update3]\nfor q in queries:\n    df = pd.read_sql(q, con, params=params)\n\n" ]
[ 0 ]
[]
[]
[ "python", "snowflake_cloud_data_platform" ]
stackoverflow_0074633268_python_snowflake_cloud_data_platform.txt
Q: Find strings between 2 substrings with Python I'm trying to get the string that starts with p and sits between the two substrings ds and svcp. My approach is like this:
import re

string_list = ['ds-pfoo-svcp', 'ds-abc-pbar-svcp', 'ds-abc-ptee-xyz-svcp']
for s in string_list:
    result = re.search('ds-[p](.*)-svcp', s)
    print(result.group(1))

My expected output is ['foo', 'bar', 'tee'], but I get an error for the 2nd and 3rd elements in the list (re.search returns None there, so result.group(1) raises AttributeError). How can I fix this issue? A: We can use a list comprehension along with re.search here:
import re

string_list = ['ds-pfoo-svcp', 'ds-abc-pbar-svcp','ds-abc-ptee-xyz-svcp']
output = [re.search(r'ds(?:-\w+)*?-p(\w+)-', x).group(1) for x in string_list]
print(output) # ['foo', 'bar', 'tee']
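If regex feels heavy here, plain string splitting also works. A small sketch, assuming the wanted value is the only dash-separated segment starting with p between the ds prefix and the svcp suffix (between is an illustrative helper name):
string_list = ['ds-pfoo-svcp', 'ds-abc-pbar-svcp', 'ds-abc-ptee-xyz-svcp']

def between(s):
    # drop the leading "ds" and trailing "svcp", keep the middle segments
    for part in s.split('-')[1:-1]:
        if part.startswith('p'):
            return part[1:]
    return None

print([between(s) for s in string_list])  # ['foo', 'bar', 'tee']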
Find strings between 2 substrings with Python
I'm trying to get the string that starts with p and sits between the two substrings ds and svcp. My approach is like this:
import re

string_list = ['ds-pfoo-svcp', 'ds-abc-pbar-svcp', 'ds-abc-ptee-xyz-svcp']
for s in string_list:
    result = re.search('ds-[p](.*)-svcp', s)
    print(result.group(1))

My expected output is ['foo', 'bar', 'tee'], but I get an error for the 2nd and 3rd elements in the list (re.search returns None there, so result.group(1) raises AttributeError). How can I fix this issue?
[ "We can use a list comprehension along with re.search here:\nimport re\n\nstring_list = ['ds-pfoo-svcp', 'ds-abc-pbar-svcp','ds-abc-ptee-xyz-svcp']\noutput = [re.search(r'ds(?:-\\w+)*?-p(\\w+)-', x).group(1) for x in string_list]\nprint(output) # ['foo', 'bar', 'tee']\n\n" ]
[ 3 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0074637091_python_regex.txt
Q: Why are the values of the dicts in the list not printing out? So I want to loop across several lists, each having one or more dictionaries. For example, given:
r = [{"symbol":10},{"symbol":15},{"symbol":25}]
h = [{"sy":15},{"sy":23},{"sk":64}]
i = [{"sl":45},{"sl":67},{"sl":98}]

I want it to print:
Symbol sy sl
10 15 45
15 23 67
25 64 98

I did it in Python and it worked perfectly:
for p in r,h,i:
    if p == r:
        print(p[c]["symbol"])
    elif p == h:
        print(p[c]["sy"])
    elif p == i:
        print(p[c]["sl"])

It works perfectly in Python, but when outputting in Jinja only the first word is outputted. I'm using Flask to communicate with the server side, which is written in Python. But I am having some issues doing it in Jinja. Here is my Jinja code:
{%set c = 0%}
{% for s in symbol,stockname,shares,price,total%}
<tr>
{%if s == symbol%}
<td> {{s[c].symbol}}</td>
{%if s == stockname %}
<td> {{s[c].stockname}}</td>
{%if s == shares %}
<td> {{s[c].shares}}</td>
{%if s == price %}
<td> {{s[c].price}}</td>
{%if s == total %}
<td> {{s[c]["total"]}}</td>
</tr>
{%set c = c + 1%}
{%endif%}
{%endif%}
{%endif%}
{%endif%}
{%endif%}
{%endfor%}
</table>
{% endblock %}
A: I figured it out. First I must say that I wasn't getting the output I wanted in Python. In Python, to get each list of dictionaries to print the way I wanted, I had to take this approach:
r = [{"symbol":10},{"symbol":15},{"symbol":25}]
h = [{"sy":15},{"sy":23},{"sy":64}]
i = [{"sl":45},{"sl":67},{"sl":98}]
c = 0
for p in r,h,i:
    print(r[c]["symbol"])
    print(h[c]["sy"])
    print(i[c]["sl"])
    c+=1
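A common alternative to mirroring the Python branching inside the template is to zip the parallel lists into rows before rendering, so Jinja only has to loop once. A hedged sketch with illustrative names; it assumes the third key really is sy, as in the answer, rather than the sk typo in the question:
r = [{"symbol": 10}, {"symbol": 15}, {"symbol": 25}]
h = [{"sy": 15}, {"sy": 23}, {"sy": 64}]
i = [{"sl": 45}, {"sl": 67}, {"sl": 98}]

# one tuple per table row: (symbol, sy, sl)
rows = [(a["symbol"], b["sy"], c["sl"]) for a, b, c in zip(r, h, i)]
print(rows)  # [(10, 15, 45), (15, 23, 67), (25, 64, 98)]

# pass rows to render_template(...), then in the template:
# {% for symbol, sy, sl in rows %}
#   <tr><td>{{ symbol }}</td><td>{{ sy }}</td><td>{{ sl }}</td></tr>
# {% endfor %}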
Why are the values of the dicts in the list not printing out?
So I want to loop across several lists, each having one or more dictionaries. For example, given:
r = [{"symbol":10},{"symbol":15},{"symbol":25}]
h = [{"sy":15},{"sy":23},{"sk":64}]
i = [{"sl":45},{"sl":67},{"sl":98}]

I want it to print:
Symbol sy sl
10 15 45
15 23 67
25 64 98

I did it in Python and it worked perfectly:
for p in r,h,i:
    if p == r:
        print(p[c]["symbol"])
    elif p == h:
        print(p[c]["sy"])
    elif p == i:
        print(p[c]["sl"])

It works perfectly in Python, but when outputting in Jinja only the first word is outputted. I'm using Flask to communicate with the server side, which is written in Python. But I am having some issues doing it in Jinja. Here is my Jinja code:
{%set c = 0%}
{% for s in symbol,stockname,shares,price,total%}
<tr>
{%if s == symbol%}
<td> {{s[c].symbol}}</td>
{%if s == stockname %}
<td> {{s[c].stockname}}</td>
{%if s == shares %}
<td> {{s[c].shares}}</td>
{%if s == price %}
<td> {{s[c].price}}</td>
{%if s == total %}
<td> {{s[c]["total"]}}</td>
</tr>
{%set c = c + 1%}
{%endif%}
{%endif%}
{%endif%}
{%endif%}
{%endif%}
{%endfor%}
</table>
{% endblock %}
[ "I figured it out. First I must say that I wasn't getting the output I wanted in Python.\nIn Python, to get each list of dictionaries to print the way I wanted, I had to take this approach:\nr = [{\"symbol\":10},{\"symbol\":15},{\"symbol\":25}]\nh = [{\"sy\":15},{\"sy\":23},{\"sy\":64}]\ni = [{\"sl\":45},{\"sl\":67},{\"sl\":98}]\nc = 0\nfor p in r,h,i:\n    print(r[c][\"symbol\"])\n    print(h[c][\"sy\"])\n    print(i[c][\"sl\"])\n    c+=1\n\n" ]
[ 0 ]
[]
[]
[ "flask", "jinja2", "jinjava", "python" ]
stackoverflow_0074636250_flask_jinja2_jinjava_python.txt
Q: Why won't webbrowser module open my html file in my browser I am using the Python webbrowser module to try to open an HTML file. I added a short script to fetch a website's source, allowing me to store a web page in case I ever need to view it without Wi-Fi, for instance a news article or something else. The code itself is fairly short so far, so here it is:
import requests as req
from bs4 import BeautifulSoup as bs
import webbrowser
import re

webcheck = re.compile('^(https?:\/\/)?(www.)?([a-z0-9]+\.[a-z]+)([\/a-zA-Z0-9#\-_]+\/?)*$')

#Valid URL Check
while True:
    url = input('URL (MUST HAVE HTTP://): ')
    check = webcheck.search(url)
    groups = list(check.groups())
    if check != None:
        for group in groups:
            if group == 'https://':
                groups.remove(group)
            elif group.count('/') > 0:
                groups.append(group.replace('/', '--'))
                groups.remove(group)
        filename = ''.join(groups) + '.html'
        break

#Getting Website Data
reply = req.get(url)
soup = bs(reply.text, 'html.parser')

#Writing Website
with open(filename, 'w') as file:
    file.write(reply.text)

#Open Website
webbrowser.open(filename)
webbrowser.open('https://www.youtube.com')

I added webbrowser.open('https://www.youtube.com') so that I knew the module was working, which it was, as it did open up YouTube. However, webbrowser.open(filename) doesn't do anything, yet it returns True if I assign it to a variable and print it. The HTML file itself has a period in the name, but I don't think that should matter, as I have made a file without one in the name and it won't open either. Does webbrowser need special permissions to work? I'm not sure what to do, as I've removed characters from the filename and even showed that the module is working by opening YouTube. What can I do to fix this? A: From the webbrowser documentation:

Note that on some platforms, trying to open a filename using this function, may work and start the operating system's associated program. However, this is neither supported nor portable.

So it seems that webbrowser can't do what you want. Why did you expect that it would? A: Adding file:// + the full path name does the trick, for anyone wondering.
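Expanding the second answer into something runnable: build an absolute file:// URI with pathlib, which is the portable way to hand a local file to webbrowser. A small sketch; the filename and page content are illustrative.
import webbrowser
from pathlib import Path

filename = 'saved_page.html'  # illustrative name
Path(filename).write_text('<h1>saved page</h1>', encoding='utf-8')

# resolve() makes the path absolute, which as_uri() requires
webbrowser.open(Path(filename).resolve().as_uri())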
Why won't webbrowser module open my html file in my browser
I am using the Python webbrowser module to try to open an HTML file. I added a short script to fetch a website's source, allowing me to store a web page in case I ever need to view it without Wi-Fi, for instance a news article or something else. The code itself is fairly short so far, so here it is:
import requests as req
from bs4 import BeautifulSoup as bs
import webbrowser
import re

webcheck = re.compile('^(https?:\/\/)?(www.)?([a-z0-9]+\.[a-z]+)([\/a-zA-Z0-9#\-_]+\/?)*$')

#Valid URL Check
while True:
    url = input('URL (MUST HAVE HTTP://): ')
    check = webcheck.search(url)
    groups = list(check.groups())
    if check != None:
        for group in groups:
            if group == 'https://':
                groups.remove(group)
            elif group.count('/') > 0:
                groups.append(group.replace('/', '--'))
                groups.remove(group)
        filename = ''.join(groups) + '.html'
        break

#Getting Website Data
reply = req.get(url)
soup = bs(reply.text, 'html.parser')

#Writing Website
with open(filename, 'w') as file:
    file.write(reply.text)

#Open Website
webbrowser.open(filename)
webbrowser.open('https://www.youtube.com')

I added webbrowser.open('https://www.youtube.com') so that I knew the module was working, which it was, as it did open up YouTube. However, webbrowser.open(filename) doesn't do anything, yet it returns True if I assign it to a variable and print it. The HTML file itself has a period in the name, but I don't think that should matter, as I have made a file without one in the name and it won't open either. Does webbrowser need special permissions to work? I'm not sure what to do, as I've removed characters from the filename and even showed that the module is working by opening YouTube. What can I do to fix this?
[ "From the webbrowser documentation:\n\nNote that on some platforms, trying to open a filename using this function, may work and start the operating system's associated program. However, this is neither supported nor portable.\n\nSo it seems that webbrowser can't do what you want. Why did you expect that it would?\n", "Adding file:// + the full path name does the trick, for anyone wondering.\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074636649_python.txt
Q: Why is my elif statement showing a syntax error? My son is learning Python. He's only ten and just starting out and he's gotten stuck. He has a syntax error - can anyone help please? It's on line 18. The error displayed is as follows: File "main.py", line 23 elif player_choice == "2": ^ SyntaxError: invalid syntax  Please see code below: # choose a chest import random exitChoice = ("Nothing") while exitChoice != "EXIT": print("You find four dusty treasure chests in the attic.") print("You have one rusted key which seems to be giving") print("You the urge to open the chest but there is only one key") print("and four chests, which chest will you open?") player_choice = input("Choose 1, 2, 3, or 4...") if player_choice == "1": print("The chest contains billions of dollars but the money seems to") print("be engulfed in a strange green light which happens to be") print("radioactive!") print("You start feeling light headed and then die!") print("GAME OVER!") elif player_choice == "2": print("The chest contains a golden amulet emitting a powerful energy aswell as a metal glove with black particles surrounding it. you can only choose one, what will your choice be.") box_choice = input ("Choose, amulet or glove") if box_choice == "amulet": print("The amulet contains incredible power but you coudn't handle this power so you went ballistic destroying hundreds of galaxys but you ended up destroying killing all human kind but the amulet would only work as long as earth was around so the amulet exploded destroying you and the universe ") print("GAME OVER!") elif box_choice =="glove": print("When you put the glove on black ooze came up and out the glove covering your whole arm. You started wreaking havoc on the world. However you realised what you were doing and thought to yourself why am I doing this i had a great life. Happines took over and you spent the rest of your days with your family happier then ever") print("Thanks for playing!") else: print("Error") elif player_choice =="3": print("You see a small black stone at the bottom of the chest so you pick it up. The stone turns into a black hole which destroys every thing in exsitence.") print("GAME OVER!") elif player_choice =="4": print("A genie comes out the chest and says if you can guess the number im thinking of between 1 and 10 I will grant you eternal happienes.") number = int(input("What number do you choose?" )) if number =="random.randint(1,10)": print("Well done now for your eternal happienes.") print("'Thanks for playing!") else: print("Incorrect now I must make you suffer by enflicting you with eternal pain mwahahah!") print("GAME OVER!") else: print("Sorry, you didn't enter 1, 2, 3 or 4.") exitChoice = input("press return to play again, or type EXIT to leave.") Not sure what other details I can add. A: if player_choice == "1": print("The chest contains billions of dollars but the money seems to") print("be engulfed in a strange green light which happens to be") print("radioactive!") print("You start feeling light headed and then die!") print("GAME OVER!") Only the lines that are indented underneath the if statement are part of it. So in this example, only print("The chest ..") is actually under the if statement. Once Python sees a non-indented line, it considers the if block to be terminated. So in this example, one it saw print("be engulfed..."), Python thought the if block was over. But then it saw an elif with no apparent preceding if, so it complained. Every line that is conditional on the if statement needs to be indented. 
Not just the first one. A: As John Gordon mentioned, indentation is important in Python. When you create an if-else statement, for example:
if(a<b)
{
    printf("a is less than b");
}
else{
printf("a is greater than b");
}

In this code snippet of a basic if-else statement in C, you can observe that the indentation doesn't matter, as the {} determine what code is executed or not. That's not the case for Python, however.

if a>b:
    print(a)
print(b)

Here, a is printed if a>b, but b is always printed. So if you want multiple lines to be executed in an if-else statement, you would have to change it to

if a>b:
    print(a)
    print(b)

The same rule applies for looping and other conditional statements as well. Here's a post from W3schools to know more about indentation in Python.
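Both answers come down to the same rule, so here is the question's first branch re-indented the way Python expects; a minimal sketch, with branches 3 and 4 elided and the long strings shortened:
player_choice = input("Choose 1, 2, 3, or 4...")
if player_choice == "1":
    # every line of this branch shares the same indentation,
    # so Python knows the if block is still open when elif arrives
    print("The chest contains billions of dollars but the money seems to")
    print("be engulfed in a strange green light which happens to be")
    print("radioactive!")
    print("You start feeling light headed and then die!")
    print("GAME OVER!")
elif player_choice == "2":
    print("The chest contains a golden amulet...")
else:
    print("Sorry, you didn't enter 1, 2, 3 or 4.")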
Why is my elif statement showing a syntax error?
My son is learning Python. He's only ten and just starting out and he's gotten stuck. He has a syntax error - can anyone help please? It's on line 18. The error displayed is as follows: File "main.py", line 23 elif player_choice == "2": ^ SyntaxError: invalid syntax  Please see code below: # choose a chest import random exitChoice = ("Nothing") while exitChoice != "EXIT": print("You find four dusty treasure chests in the attic.") print("You have one rusted key which seems to be giving") print("You the urge to open the chest but there is only one key") print("and four chests, which chest will you open?") player_choice = input("Choose 1, 2, 3, or 4...") if player_choice == "1": print("The chest contains billions of dollars but the money seems to") print("be engulfed in a strange green light which happens to be") print("radioactive!") print("You start feeling light headed and then die!") print("GAME OVER!") elif player_choice == "2": print("The chest contains a golden amulet emitting a powerful energy aswell as a metal glove with black particles surrounding it. you can only choose one, what will your choice be.") box_choice = input ("Choose, amulet or glove") if box_choice == "amulet": print("The amulet contains incredible power but you coudn't handle this power so you went ballistic destroying hundreds of galaxys but you ended up destroying killing all human kind but the amulet would only work as long as earth was around so the amulet exploded destroying you and the universe ") print("GAME OVER!") elif box_choice =="glove": print("When you put the glove on black ooze came up and out the glove covering your whole arm. You started wreaking havoc on the world. However you realised what you were doing and thought to yourself why am I doing this i had a great life. Happines took over and you spent the rest of your days with your family happier then ever") print("Thanks for playing!") else: print("Error") elif player_choice =="3": print("You see a small black stone at the bottom of the chest so you pick it up. The stone turns into a black hole which destroys every thing in exsitence.") print("GAME OVER!") elif player_choice =="4": print("A genie comes out the chest and says if you can guess the number im thinking of between 1 and 10 I will grant you eternal happienes.") number = int(input("What number do you choose?" )) if number =="random.randint(1,10)": print("Well done now for your eternal happienes.") print("'Thanks for playing!") else: print("Incorrect now I must make you suffer by enflicting you with eternal pain mwahahah!") print("GAME OVER!") else: print("Sorry, you didn't enter 1, 2, 3 or 4.") exitChoice = input("press return to play again, or type EXIT to leave.") Not sure what other details I can add.
[ "if player_choice == \"1\":\n    print(\"The chest contains billions of dollars but the money seems to\")\nprint(\"be engulfed in a strange green light which happens to be\")\nprint(\"radioactive!\")\nprint(\"You start feeling light headed and then die!\")\nprint(\"GAME OVER!\")\n\nOnly the lines that are indented underneath the if statement are part of it.\nSo in this example, only print(\"The chest ..\") is actually under the if statement.\nOnce Python sees a non-indented line, it considers the if block to be terminated. So in this example, once it saw print(\"be engulfed...\"), Python thought the if block was over.\nBut then it saw an elif with no apparent preceding if, so it complained.\nEvery line that is conditional on the if statement needs to be indented. Not just the first one.\n", "As John Gordon mentioned, indentation is important in Python.\nWhen you create an if-else statement, for example:\nif(a<b)\n{\n    printf(\"a is less than b\");\n}\nelse{\nprintf(\"a is greater than b\");\n}\n\nIn this code snippet of a basic if-else statement in C, you can observe that the indentation doesn't matter, as the {} determine what code is executed or not. That's not the case for Python, however.\n\nif a>b:\n    print(a)\nprint(b)\n\nHere, a is printed if a>b, but b is always printed. So if you want multiple lines to be executed in an if-else statement, you would have to change it to\nif a>b:\n    print(a)\n    print(b)\n\nThe same rule applies for looping and other conditional statements as well.\nHere's a post from W3schools to know more about indentation in Python.\n" ]
[ 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074636539_python.txt