Dataset columns:
  content             string, 85 to 101k characters
  title               string, 0 to 150 characters
  question            string, 15 to 48k characters
  answers             sequence
  answers_scores      sequence
  non_answers         sequence
  non_answers_scores  sequence
  tags                sequence
  name                string, 35 to 137 characters
Q: Drawing a huge graph with networkX and matplotlib I am drawing a graph with around 5K nodes in it using networkX and matplotlib. The GTK window by matplotlib has tools to zoom and visualise the graph. Is there any way, I can save a magnified version for proper visualisation later? import matplotlib.pyplot as plt import networkx as nx pos=nx.spring_layout(G) #G is my graph nx.draw(G,pos,node_color='#A0CBE2',edge_color='#BB0000',width=2,edge_cmap=plt.cm.Blues,with_labels=True) #plt.show() plt.savefig("graph.png", dpi=500, facecolor='w', edgecolor='w',orientation='portrait', papertype=None, format=None,transparent=False, bbox_inches=None, pad_inches=0.1) A: You have two easy options: Up the DPI plt.savefig("graph.png", dpi=1000) (larger image file size) Save as a PDF plt.savefig("graph.pdf") This is the best option, as the final graph is not rasterized. In theory, you should be able to zoom in indefinitely. A: While not in GTK, you might want to check out NetworkX Viewer. A: You may want to check this out: plt.savefig("name.svg") the quality is magnificent. although the dpi option is still alive. A: Use this plt.savefig('name.svg') and then just upload the file to Figma -https://www.figma.com
Drawing a huge graph with networkX and matplotlib
I am drawing a graph with around 5K nodes in it using networkX and matplotlib. The GTK window by matplotlib has tools to zoom and visualise the graph. Is there any way, I can save a magnified version for proper visualisation later? import matplotlib.pyplot as plt import networkx as nx pos=nx.spring_layout(G) #G is my graph nx.draw(G,pos,node_color='#A0CBE2',edge_color='#BB0000',width=2,edge_cmap=plt.cm.Blues,with_labels=True) #plt.show() plt.savefig("graph.png", dpi=500, facecolor='w', edgecolor='w',orientation='portrait', papertype=None, format=None,transparent=False, bbox_inches=None, pad_inches=0.1)
[ "You have two easy options:\nUp the DPI\nplt.savefig(\"graph.png\", dpi=1000)\n\n(larger image file size)\nSave as a PDF\nplt.savefig(\"graph.pdf\")\n\nThis is the best option, as the final graph is not rasterized. In theory, you should be able to zoom in indefinitely. \n", "While not in GTK, you might want to check out NetworkX Viewer.\n", "You may want to check this out:\nplt.savefig(\"name.svg\")\n\nthe quality is magnificent. although the dpi option is still alive.\n", "Use this plt.savefig('name.svg') and then just upload the file to Figma -https://www.figma.com\n" ]
[ 33, 2, 1, 0 ]
[]
[]
[ "graph", "matplotlib", "networkx", "python" ]
stackoverflow_0009402255_graph_matplotlib_networkx_python.txt
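A minimal, runnable sketch that puts the suggestions from the answers above together; the random graph is only a stand-in for the poster's G, and the file names and dpi value are placeholders to adjust:

import matplotlib.pyplot as plt
import networkx as nx

# Stand-in graph so the sketch runs on its own; replace with your real G.
G = nx.gnp_random_graph(200, 0.02, seed=42)
pos = nx.spring_layout(G, seed=42)

nx.draw(G, pos, node_size=10, node_color="#A0CBE2",
        edge_color="#BB0000", width=0.5, with_labels=False)

# Option 1: stay with PNG but raise the DPI (bigger file, still rasterized).
plt.savefig("graph_hires.png", dpi=1000)

# Option 2: vector formats keep every node sharp at any zoom level.
plt.savefig("graph.pdf")
plt.savefig("graph.svg")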
Q: Python Libraries are not importing into Pycharm I am using Pycharm and I have the latest versions, I reinstalled everything today. I want to install requests and BeautifulSoup and I do it with Pycharm settings and Python Interpreter. I add them there and click OK. I handle the path and all, but it keeps showing my import is unused or does not exist. When I type: import requests, it is ok for 1 second, then it goes grey and gives me an error. Does anyone have any tips on this? I have changed the path, done the environment thing and reinstalled twice. A: It's not an error, it's only a warning. Pycharm is telling you that you have imported a library but you're not using this library (request). If you use the library the warning disappears
Python Libraries are not importing into Pycharm
I am using Pycharm and I have the latest versions, I reinstalled everything today. I want to install requests and BeautifulSoup and I do it with Pycharm settings and Python Interpreter. I add them there and click OK. I handle the path and all, but it keeps showing my import is unused or does not exist. When I type: import requests, it is ok for 1 second, then it goes grey and gives me an error. Does anyone have any tips on this? I have changed the path, done the environment thing and reinstalled twice.
[ "It's not an error, it's only a warning. Pycharm is telling you that you have imported a library but you're not using this library (request). If you use the library the warning disappears\n" ]
[ 0 ]
[]
[]
[ "import", "pip", "python", "python_idle", "shared_libraries" ]
stackoverflow_0074657661_import_pip_python_python_idle_shared_libraries.txt
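To illustrate the answer's point, a two-line sketch: the import is greyed out only while nothing references it, and any real use of the module clears the warning (the URL is just a placeholder):

import requests

# As soon as the name "requests" is actually used below, PyCharm stops flagging the import.
response = requests.get("https://example.com")
print(response.status_code)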
Q: Create a new storj bucket with uplink-Python I'm trying to create a new storj bucket with uplink-python Does anyone know this error ? Thank you class Storage: def __init__(self, api_key: str, satellite: str, passphrase: str, email: str): """ account Storj """ self.api_key = api_key self.satellite = satellite self.email = email self.passphrase = passphrase self.uplink = Uplink() self.config = Config() self.access = self.uplink.config_request_access_with_passphrase(self.config, self.satellite, self.api_key, self.passphrase) self.project = self.access.open_project() <-- open the project def create_bucket(self, mybucket: str): """ Create un bucket and ingnores the error if it already exist """ print(mybucket) self.project.ensure_bucket(mybucket) def get_bucket(self, mybucket: str): """ verify bucket """ print(mybucket) return self.project.stat_bucket(mybucket) def close(self): self.project.close() storage = Storage(api_key="zzz", satellite=".storj.io:7777", passphrase="passtest", mail="[email protected]") Verify the old bucket storage.get_bucket("demo01") # it's works Create a new bucket storage.create_bucket("Hello") # internal error output: File "../test/uplink/lib/python3.9/site-packages/uplink_python/project.py", line 121, in ensure_bucket raise _storj_exception(bucket_result.error.contents.code, uplink_python.errors.InternalError: 'internal error' line 121 def ensure_bucket(self, bucket_name: str): """ function ensures that a bucket exists or creates a new one. When bucket already exists it returns a valid Bucket and no error Parameters ---------- bucket_name : str Returns ------- Bucket """ # # declare types of arguments and response of the corresponding golang function self.uplink.m_libuplink.uplink_ensure_bucket.argtypes = [ctypes.POINTER(_ProjectStruct), ctypes.c_char_p] self.uplink.m_libuplink.uplink_ensure_bucket.restype = _BucketResult # # prepare the input for the function bucket_name_ptr = ctypes.c_char_p(bucket_name.encode('utf-8')) # open bucket if doesn't exist by calling the exported golang function bucket_result = self.uplink.m_libuplink.uplink_ensure_bucket(self.project, bucket_name_ptr) # # if error occurred if bool(bucket_result.error): raise _storj_exception(bucket_result.error.contents.code, bucket_result.error.contents.message.decode("utf-8")) return self.uplink.bucket_from_result(bucket_result.bucket) The "internal error" message comes from that "raise _storj_exception.." add: There are no error creating a new bucket with web interface ref : https://storj-thirdparty.github.io/uplink-python/#/library?id=ensure_bucketbucket_name https://github.com/storj/uplink/blob/8da069b86063ee9671cc85cc44eaa6b8baf84b58/bucket.go#L97 A: Solved: an upper case letter in the bucket name causes the internal error... replace storage.create_bucket("Hello") with storage.create_bucket("hello") :) EDIT: that's why: https://forum.storj.io/t/bucket-name-uppercase/20554/2?u=mike1
Create a new storj bucket with uplink-Python
I'm trying to create a new storj bucket with uplink-python Does anyone know this error ? Thank you class Storage: def __init__(self, api_key: str, satellite: str, passphrase: str, email: str): """ account Storj """ self.api_key = api_key self.satellite = satellite self.email = email self.passphrase = passphrase self.uplink = Uplink() self.config = Config() self.access = self.uplink.config_request_access_with_passphrase(self.config, self.satellite, self.api_key, self.passphrase) self.project = self.access.open_project() <-- open the project def create_bucket(self, mybucket: str): """ Create un bucket and ingnores the error if it already exist """ print(mybucket) self.project.ensure_bucket(mybucket) def get_bucket(self, mybucket: str): """ verify bucket """ print(mybucket) return self.project.stat_bucket(mybucket) def close(self): self.project.close() storage = Storage(api_key="zzz", satellite=".storj.io:7777", passphrase="passtest", mail="[email protected]") Verify the old bucket storage.get_bucket("demo01") # it's works Create a new bucket storage.create_bucket("Hello") # internal error output: File "../test/uplink/lib/python3.9/site-packages/uplink_python/project.py", line 121, in ensure_bucket raise _storj_exception(bucket_result.error.contents.code, uplink_python.errors.InternalError: 'internal error' line 121 def ensure_bucket(self, bucket_name: str): """ function ensures that a bucket exists or creates a new one. When bucket already exists it returns a valid Bucket and no error Parameters ---------- bucket_name : str Returns ------- Bucket """ # # declare types of arguments and response of the corresponding golang function self.uplink.m_libuplink.uplink_ensure_bucket.argtypes = [ctypes.POINTER(_ProjectStruct), ctypes.c_char_p] self.uplink.m_libuplink.uplink_ensure_bucket.restype = _BucketResult # # prepare the input for the function bucket_name_ptr = ctypes.c_char_p(bucket_name.encode('utf-8')) # open bucket if doesn't exist by calling the exported golang function bucket_result = self.uplink.m_libuplink.uplink_ensure_bucket(self.project, bucket_name_ptr) # # if error occurred if bool(bucket_result.error): raise _storj_exception(bucket_result.error.contents.code, bucket_result.error.contents.message.decode("utf-8")) return self.uplink.bucket_from_result(bucket_result.bucket) The "internal error" message comes from that "raise _storj_exception.." add: There are no error creating a new bucket with web interface ref : https://storj-thirdparty.github.io/uplink-python/#/library?id=ensure_bucketbucket_name https://github.com/storj/uplink/blob/8da069b86063ee9671cc85cc44eaa6b8baf84b58/bucket.go#L97
[ "Solved:\nan upper case letter in the bucket name causes the internal error...\nreplace\nstorage.create_bucket(\"Hello\")\n\nwith\nstorage.create_bucket(\"hello\")\n\n:)\nEDIT:\nthat's why:\nhttps://forum.storj.io/t/bucket-name-uppercase/20554/2?u=mike1\n" ]
[ 0 ]
[]
[]
[ "amazon_s3", "go", "python" ]
stackoverflow_0074649328_amazon_s3_go_python.txt
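Building on the accepted answer, a small sketch of a guard that normalises bucket names before they reach ensure_bucket; the lower-casing is the only part confirmed by the answer, anything stricter would be an assumption:

def normalise_bucket_name(name: str) -> str:
    # An upper-case letter is what triggered the "internal error" in the question,
    # so lower-case the name before handing it to the uplink library.
    return name.lower()

# storage.create_bucket(normalise_bucket_name("Hello"))   # creates "hello" instead of failing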
Q: Using .expr() and arithmetics: how to add multiple (calculated) columns to dataframe within one expression So I have a spark dataframe with some columns and I want to add some new columns which are the product of the initial columns: new_col1 = col_1 * col_2 & new_col2 = col_3 * col_4. See the data frames below as an example. df= | id | col_1| col_2| col_3| col_4| |:---|:----:|:-----|:-----|:-----| |1 | a | x | d1 | u | |2 | b | y | e1 | v | |3 | c | z | f1 | w | df_new = | id | col_1| col_2| col_3| col_4| new_col1 | new_col2 | |:---|:----:|:-----|:-----|:-----|:--------:|:--------:| |1 | a | x | d1 | u | a*x | d1*u | |2 | 2 | 3 | e1 | v | 6 | e1*v | |3 | c | z | 4 | 2.5 | c*z | 10 | Of course, this would be rather straightforward using df_new = ( df .withColumn(newcol_1, col(col_1)*col(col_2)) .withColumn(newcol_2, col(col_3)*col(col_4)) ) However, the number of times that this operation is variable; so the number of new_col's is variable. Besides this happens in a join. So I would really like to do this all in 1 expression. My solution was this, I have a config file with a dictionary with columns part of the operations (this is the place where I can add more columns to be calculated) (don't mind the nesting of the dictionary) "multiplied_parameters": { "mult_parameter1": {"name": "new_col1", "col_parts": ["col_1","col_2"]}, "mult_parameter2": {"name": "new_col2", "col_parts": ["col_3, col_4"]}, }, Then I use this for loop to create an expression which produces the expression: col_1*col_2 as new_col1, ``col_3*col_4 as new_col2 newcol_lst = [] for keyval in dictionary["multiplied_parameters"].items(): newcol_lst.append( f'{"*".join(keyval[1]["col_parts"])} as {keyval[1]["name"]}' ) operation = f'{", ".join(newcol_lst)}' col_lst = ["col_1", "col_2", "col_3", "col_4"] df_new = ( df .select( *col_lst, expr(operation), ) This gives me the error. ParseException: mismatched input ',' expecting {<EOF>, '-'}(line 1, pos 33) == SQL == col_1*col_2 as new_col1, col_3*col_4 as new_col2 -----------------------^^^ So the problem is in the way that I concatenate the two operations. I also know that this the problem because when the dictionary only has 1 key (mult_parameter1) then I don't have any problem. The question is thus, in essence, how can I use .expr() with two different arithmetics to determine two different calculated columns. A: I don't think that expr can do what you are trying to do. However, you don't have to concatenate all your expressions and use a single expr, instead you can do something like this df_new = ( df .select( *(col_lst + [expr(nc) for nc in new_col_list]) ) The above code is untested, but in general a technique of creating a list of columns is common in Spark.
Using .expr() and arithmetics: how to add multiple (calculated) columns to dataframe within one expression
So I have a spark dataframe with some columns and I want to add some new columns which are the product of the initial columns: new_col1 = col_1 * col_2 & new_col2 = col_3 * col_4. See the data frames below as an example. df= | id | col_1| col_2| col_3| col_4| |:---|:----:|:-----|:-----|:-----| |1 | a | x | d1 | u | |2 | b | y | e1 | v | |3 | c | z | f1 | w | df_new = | id | col_1| col_2| col_3| col_4| new_col1 | new_col2 | |:---|:----:|:-----|:-----|:-----|:--------:|:--------:| |1 | a | x | d1 | u | a*x | d1*u | |2 | 2 | 3 | e1 | v | 6 | e1*v | |3 | c | z | 4 | 2.5 | c*z | 10 | Of course, this would be rather straightforward using df_new = ( df .withColumn(newcol_1, col(col_1)*col(col_2)) .withColumn(newcol_2, col(col_3)*col(col_4)) ) However, the number of times that this operation is variable; so the number of new_col's is variable. Besides this happens in a join. So I would really like to do this all in 1 expression. My solution was this, I have a config file with a dictionary with columns part of the operations (this is the place where I can add more columns to be calculated) (don't mind the nesting of the dictionary) "multiplied_parameters": { "mult_parameter1": {"name": "new_col1", "col_parts": ["col_1","col_2"]}, "mult_parameter2": {"name": "new_col2", "col_parts": ["col_3, col_4"]}, }, Then I use this for loop to create an expression which produces the expression: col_1*col_2 as new_col1, ``col_3*col_4 as new_col2 newcol_lst = [] for keyval in dictionary["multiplied_parameters"].items(): newcol_lst.append( f'{"*".join(keyval[1]["col_parts"])} as {keyval[1]["name"]}' ) operation = f'{", ".join(newcol_lst)}' col_lst = ["col_1", "col_2", "col_3", "col_4"] df_new = ( df .select( *col_lst, expr(operation), ) This gives me the error. ParseException: mismatched input ',' expecting {<EOF>, '-'}(line 1, pos 33) == SQL == col_1*col_2 as new_col1, col_3*col_4 as new_col2 -----------------------^^^ So the problem is in the way that I concatenate the two operations. I also know that this the problem because when the dictionary only has 1 key (mult_parameter1) then I don't have any problem. The question is thus, in essence, how can I use .expr() with two different arithmetics to determine two different calculated columns.
[ "I don't think that expr can do what you are trying to do. However, you don't have to concatenate all your expressions and use a single expr, instead you can do something like this\ndf_new = (\n df\n .select(\n *(col_lst + [expr(nc) for nc in new_col_list])\n ) \n\n\nThe above code is untested, but in general a technique of creating a list of columns is common in Spark.\n" ]
[ 0 ]
[]
[]
[ "apache_spark", "apache_spark_sql", "arithmetic_expressions", "python" ]
stackoverflow_0074630581_apache_spark_apache_spark_sql_arithmetic_expressions_python.txt
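A self-contained sketch of the answer's approach, building one expr(...) per derived column straight from a config dictionary shaped like the question's; note the second col_parts entry is written here as two separate strings, which is probably what the original config intended:

from pyspark.sql import SparkSession
from pyspark.sql.functions import expr

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [(1, 2.0, 3.0, 4.0, 5.0), (2, 1.0, 6.0, 2.0, 2.5)],
    ["id", "col_1", "col_2", "col_3", "col_4"],
)

multiplied_parameters = {
    "mult_parameter1": {"name": "new_col1", "col_parts": ["col_1", "col_2"]},
    "mult_parameter2": {"name": "new_col2", "col_parts": ["col_3", "col_4"]},
}

col_lst = ["col_1", "col_2", "col_3", "col_4"]

# One expression per new column instead of a single comma-joined string.
new_cols = [
    expr(f"{'*'.join(spec['col_parts'])} as {spec['name']}")
    for spec in multiplied_parameters.values()
]

df_new = df.select(*col_lst, *new_cols)
df_new.show()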
Q: _tkinter.TclError: image "score6" doesn't exist Hello so I've been trying to solve this problem but cant find anything I tried dictionaries and exec. How can I use string value as a variable name? I have a problem when I define a variable name in a string and try to make a button with the image it shows error - _tkinter.TclError: image "score6" doesn't exist, but if I manually type in the image variable name the error doesn't show. img = 'score' + str(correct) #here I make the variable name #the scores can be from 0-9 self.rez = Button(window, relief="sunken", image=img, bd=0, bg='#cecece',activebackground='#cecece') self.rez.place(x=520, y=330) #this is where images are defined(this is outside the class) score0 = ImageTk.PhotoImage(Image.open("scores/09.png")) score1 = ImageTk.PhotoImage(Image.open("scores/19.png")) score2 = ImageTk.PhotoImage(Image.open("scores/29.png")) score3 = ImageTk.PhotoImage(Image.open("scores/39.png")) score4 = ImageTk.PhotoImage(Image.open("scores/49.png")) score5 = ImageTk.PhotoImage(Image.open("scores/59.png")) so how can I use string value as a variable name? A: You could do this using eval img = eval('score' + str(correct)) but this is dangerous if correct is provided by the user. A better approach is to use a list images = [ImageTk.PhotoImage(Image.open("scores/09.png")), ImageTk.PhotoImage(Image.open("scores/19.png")), ImageTk.PhotoImage(Image.open("scores/29.png")), ImageTk.PhotoImage(Image.open("scores/39.png")), ImageTk.PhotoImage(Image.open("scores/49.png")), ImageTk.PhotoImage(Image.open("scores/59.png"))] img = images[correct] A: Firstly, you'll probably want to import Pathlib to work with the absolute paths to your image files from pathlib import Path Then I think it might make more sense to put your image filenames into a list... image_dir = Path(r'C:\<path>\<to>\scores') # the folder containing the images images = [ # list of individual image file names "09.png", "19.png", "29.png", "39.png", "49.png", "59.png", ... ] # etc. And then define a function that can handle fetching these images as needed def set_image(correct): # I assume 'correct' is an integer img = ImageTk.PhotoImage( Image.open( # open the correct image (-1 to accommodate zero-indexing) image_dir.joinpath(images[correct - 1]) ) ) return img Then you can update your button's image like so, for example: self.rez.configure(image=img)
_tkinter.TclError: image "score6" doesn't exist
Hello so I've been trying to solve this problem but cant find anything I tried dictionaries and exec. How can I use string value as a variable name? I have a problem when I define a variable name in a string and try to make a button with the image it shows error - _tkinter.TclError: image "score6" doesn't exist, but if I manually type in the image variable name the error doesn't show. img = 'score' + str(correct) #here I make the variable name #the scores can be from 0-9 self.rez = Button(window, relief="sunken", image=img, bd=0, bg='#cecece',activebackground='#cecece') self.rez.place(x=520, y=330) #this is where images are defined(this is outside the class) score0 = ImageTk.PhotoImage(Image.open("scores/09.png")) score1 = ImageTk.PhotoImage(Image.open("scores/19.png")) score2 = ImageTk.PhotoImage(Image.open("scores/29.png")) score3 = ImageTk.PhotoImage(Image.open("scores/39.png")) score4 = ImageTk.PhotoImage(Image.open("scores/49.png")) score5 = ImageTk.PhotoImage(Image.open("scores/59.png")) so how can I use string value as a variable name?
[ "You could do this using eval\nimg = eval('score' + str(correct))\n\nbut this is dangerous if correct is provided by the user. A better approach is to use a list\nimages = [ImageTk.PhotoImage(Image.open(\"scores/09.png\")),\n ImageTk.PhotoImage(Image.open(\"scores/19.png\")),\n ImageTk.PhotoImage(Image.open(\"scores/29.png\")),\n ImageTk.PhotoImage(Image.open(\"scores/39.png\")),\n ImageTk.PhotoImage(Image.open(\"scores/49.png\")),\n ImageTk.PhotoImage(Image.open(\"scores/59.png\"))]\n\nimg = images[correct]\n\n", "Firstly, you'll probably want to import Pathlib to work with the absolute paths to your image files\nfrom pathlib import Path\n\nThen I think it might make more sense to put your image filenames into a list...\nimage_dir = Path(r'C:\\<path>\\<to>\\scores') # the folder containing the images\nimages = [ # list of individual image file names\n \"09.png\",\n \"19.png\",\n \"29.png\",\n \"39.png\", \n \"49.png\", \n \"59.png\",\n ...\n] # etc.\n\nAnd then define a function that can handle fetching these images as needed\ndef set_image(correct): # I assume 'correct' is an integer\n img = ImageTk.PhotoImage(\n Image.open(\n # open the correct image (-1 to accommodate zero-indexing)\n image_dir.joinpath(images[correct - 1]) \n )\n )\n return img\n\nThen you can update your button's image like so, for example:\nself.rez.configure(image=img)\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074656689_python_tkinter.txt
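A variant of the list idea from the answers, using a dictionary keyed by the score; it assumes the same scores/<n>9.png files as the question and a plain Tk window rather than the poster's class:

from tkinter import Tk, Button
from PIL import Image, ImageTk

window = Tk()

# Load each image once and key it by the score it represents.
score_images = {
    n: ImageTk.PhotoImage(Image.open(f"scores/{n}9.png"))
    for n in range(6)   # scores/09.png .. scores/59.png as in the question
}

correct = 3   # whatever the game logic produced
rez = Button(window, relief="sunken", image=score_images[correct], bd=0,
             bg="#cecece", activebackground="#cecece")
rez.place(x=520, y=330)

window.mainloop()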
Q: How to add search other field in many2one? HI I have a customer field and the default search is by name, and I want to add a search by barcode as well to the customer field I have tried adding a barcode(partner_id.barcode) on the domain as below, but it still doesn't work (model = sale.order) @api.model def _name_search(self, name, args=None, operator='ilike', limit=100, name_get_uid=None): if self._context.get('sale_show_partner_name'): if operator == 'ilike' and not (name or '').strip(): domain = [] elif operator in ('ilike', 'like', '=', '=like', '=ilike'): domain = expression.AND([ args or [], ['|', '|', ('name', operator, name), ('partner_id.name', operator, name), ('partner_id.barcode', operator, name)] ]) return self._search(domain, limit=limit, access_rights_uid=name_get_uid) return super(SaleOrder, self)._name_search(name, args=args, operator=operator, limit=limit, name_get_uid=name_get_uid) I have also tried in the (res.partner) model as below. it can search customer by barcode, but cannot search customer by name : @api.model def name_search(self, name, args=None, operator='ilike', limit=100): if not self.env.context.get('display_barcode', True): return super(ResPartnerInherit, self).name_search(name, args, operator, limit) else: args = args or [] recs = self.browse() if not recs: recs = self.search([('barcode', operator, name)] + args, limit=limit) return recs.name_get() What should I do if I want to find a customer by name and barcode? If anyone knows, please let me know Best Regards A: The barcode field in res.partner is a property field and stored in ir.property model which name is Company Propeties in Odoo and you can access it with developer mode from Settings -> Technical -> Company Propeties. The _name_search method for res.partner enable you to search in any Many2one partner relation field in any model by one of these fields display_name, email, reference and vat and you can override it to add barcode as below: from odoo import api, models from odoo.osv.expression import get_unaccent_wrapper import re class ResPartner(models.Model): _inherit = 'res.partner' @api.model def _name_search(self, name, args=None, operator='ilike', limit=100, name_get_uid=None): self = self.with_user(name_get_uid) if name_get_uid else self # as the implementation is in SQL, we force the recompute of fields if necessary self.recompute(['display_name']) self.flush() print(args) if args is None: args = [] order_by_rank = self.env.context.get('res_partner_search_mode') if (name or order_by_rank) and operator in ('=', 'ilike', '=ilike', 'like', '=like'): self.check_access_rights('read') where_query = self._where_calc(args) self._apply_ir_rules(where_query, 'read') from_clause, where_clause, where_clause_params = where_query.get_sql() from_str = from_clause if from_clause else 'res_partner' where_str = where_clause and (" WHERE %s AND " % where_clause) or ' WHERE ' print(where_clause_params) # search on the name of the contacts and of its company search_name = name if operator in ('ilike', 'like'): search_name = '%%%s%%' % name if operator in ('=ilike', '=like'): operator = operator[1:] unaccent = get_unaccent_wrapper(self.env.cr) fields = self._get_name_search_order_by_fields() query = """SELECT res_partner.id FROM {from_str} LEFT JOIN ir_property trust_property ON ( trust_property.res_id = 'res.partner,'|| {from_str}."id" AND trust_property.name = 'barcode') {where} ({email} {operator} {percent} OR {display_name} {operator} {percent} OR {reference} {operator} {percent} OR {barcode} {operator} 
{percent} OR {vat} {operator} {percent}) -- don't panic, trust postgres bitmap ORDER BY {fields} {display_name} {operator} {percent} desc, {display_name} """.format(from_str=from_str, fields=fields, where=where_str, operator=operator, email=unaccent('res_partner.email'), display_name=unaccent('res_partner.display_name'), reference=unaccent('res_partner.ref'), barcode=unaccent('trust_property.value_text'), percent=unaccent('%s'), vat=unaccent('res_partner.vat'), ) where_clause_params += [search_name] * 4 # for email / display_name, reference where_clause_params += [re.sub('[^a-zA-Z0-9\-\.]+', '', search_name) or None] # for vat where_clause_params += [search_name] # for order by if limit: query += ' limit %s' where_clause_params.append(limit) print(query) print(where_clause_params) self.env.cr.execute(query, where_clause_params) return [row[0] for row in self.env.cr.fetchall()] return super(ResPartner, self)._name_search(name, args, operator=operator, limit=limit, name_get_uid=name_get_uid)
How to add search other field in many2one?
HI I have a customer field and the default search is by name, and I want to add a search by barcode as well to the customer field I have tried adding a barcode(partner_id.barcode) on the domain as below, but it still doesn't work (model = sale.order) @api.model def _name_search(self, name, args=None, operator='ilike', limit=100, name_get_uid=None): if self._context.get('sale_show_partner_name'): if operator == 'ilike' and not (name or '').strip(): domain = [] elif operator in ('ilike', 'like', '=', '=like', '=ilike'): domain = expression.AND([ args or [], ['|', '|', ('name', operator, name), ('partner_id.name', operator, name), ('partner_id.barcode', operator, name)] ]) return self._search(domain, limit=limit, access_rights_uid=name_get_uid) return super(SaleOrder, self)._name_search(name, args=args, operator=operator, limit=limit, name_get_uid=name_get_uid) I have also tried in the (res.partner) model as below. it can search customer by barcode, but cannot search customer by name : @api.model def name_search(self, name, args=None, operator='ilike', limit=100): if not self.env.context.get('display_barcode', True): return super(ResPartnerInherit, self).name_search(name, args, operator, limit) else: args = args or [] recs = self.browse() if not recs: recs = self.search([('barcode', operator, name)] + args, limit=limit) return recs.name_get() What should I do if I want to find a customer by name and barcode? If anyone knows, please let me know Best Regards
[ "The barcode field in res.partner is a property field and stored in ir.property model which name is Company Propeties in Odoo and you can access it with developer mode from Settings -> Technical -> Company Propeties.\nThe _name_search method for res.partner enable you to search in any Many2one partner relation field in any model by one of these fields display_name, email, reference and vat and you can override it to add barcode as below:\nfrom odoo import api, models\nfrom odoo.osv.expression import get_unaccent_wrapper\nimport re\n\n\nclass ResPartner(models.Model):\n _inherit = 'res.partner'\n\n @api.model\n def _name_search(self, name, args=None, operator='ilike', limit=100, name_get_uid=None):\n self = self.with_user(name_get_uid) if name_get_uid else self\n # as the implementation is in SQL, we force the recompute of fields if necessary\n self.recompute(['display_name'])\n self.flush()\n print(args)\n if args is None:\n args = []\n order_by_rank = self.env.context.get('res_partner_search_mode')\n if (name or order_by_rank) and operator in ('=', 'ilike', '=ilike', 'like', '=like'):\n self.check_access_rights('read')\n where_query = self._where_calc(args)\n self._apply_ir_rules(where_query, 'read')\n from_clause, where_clause, where_clause_params = where_query.get_sql()\n from_str = from_clause if from_clause else 'res_partner'\n where_str = where_clause and (\" WHERE %s AND \" % where_clause) or ' WHERE '\n print(where_clause_params)\n # search on the name of the contacts and of its company\n search_name = name\n if operator in ('ilike', 'like'):\n search_name = '%%%s%%' % name\n if operator in ('=ilike', '=like'):\n operator = operator[1:]\n\n unaccent = get_unaccent_wrapper(self.env.cr)\n\n fields = self._get_name_search_order_by_fields()\n\n query = \"\"\"SELECT res_partner.id\n FROM {from_str}\n LEFT JOIN ir_property trust_property ON (\n trust_property.res_id = 'res.partner,'|| {from_str}.\"id\"\n AND trust_property.name = 'barcode')\n {where} ({email} {operator} {percent}\n OR {display_name} {operator} {percent}\n OR {reference} {operator} {percent}\n OR {barcode} {operator} {percent}\n OR {vat} {operator} {percent})\n -- don't panic, trust postgres bitmap\n ORDER BY {fields} {display_name} {operator} {percent} desc,\n {display_name}\n \"\"\".format(from_str=from_str,\n fields=fields,\n where=where_str,\n operator=operator,\n email=unaccent('res_partner.email'),\n display_name=unaccent('res_partner.display_name'),\n reference=unaccent('res_partner.ref'),\n barcode=unaccent('trust_property.value_text'),\n percent=unaccent('%s'),\n vat=unaccent('res_partner.vat'), )\n\n where_clause_params += [search_name] * 4 # for email / display_name, reference\n where_clause_params += [re.sub('[^a-zA-Z0-9\\-\\.]+', '', search_name) or None] # for vat\n where_clause_params += [search_name] # for order by\n if limit:\n query += ' limit %s'\n where_clause_params.append(limit)\n print(query)\n print(where_clause_params)\n self.env.cr.execute(query, where_clause_params)\n return [row[0] for row in self.env.cr.fetchall()]\n\n return super(ResPartner, self)._name_search(name, args, operator=operator, limit=limit, name_get_uid=name_get_uid)\n\n\n" ]
[ 0 ]
[]
[]
[ "erp", "odoo", "odoo_14", "python" ]
stackoverflow_0074651848_erp_odoo_odoo_14_python.txt
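The SQL override above is the thorough route; a lighter sketch that stays in the ORM is to extend the poster's second attempt so it searches name OR barcode instead of barcode only. This assumes Odoo 14's API and that domain searches on the barcode property field work, as the poster's own snippet suggests:

from odoo import api, models


class ResPartner(models.Model):
    _inherit = 'res.partner'

    @api.model
    def name_search(self, name='', args=None, operator='ilike', limit=100):
        args = args or []
        if name:
            # Match on either the partner name or its barcode.
            domain = ['|', ('name', operator, name), ('barcode', operator, name)]
            records = self.search(domain + args, limit=limit)
            if records:
                return records.name_get()
        return super().name_search(name, args=args, operator=operator, limit=limit)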
Q: Python (NumPy): Memory efficient array multiplication with fancy indexing I'm looking to do fast matrix multiplication in python, preferably NumPy, of an array A with another array B of repeated matrices by using a third array I of indices. This can be accomplished using fancy indexing and matrix multiplication: from numpy.random import rand, randint A = rand(1000,5,5) B = rand(40000000,5,1) I = randint(low=0, high=1000, size=40000000) A[I] @ B However, this creates the intermediate array A[I] of shape (40000000, 5, 5) which overflows the memory. It seems highly inefficient to have to repeat a small set of matrices for multiplication, and this is essentially a more general version of broadcasting such as A[0:1] @ B which has no issues. Are there any alternatives? I have looked at NumPy's einsum function but have not seen any support for utilizing an index vector in the call. A: If you're open to another package, you could wrap it up with dask. from numpy.random import rand, randint from dask import array as da A = da.from_array(rand(1000,5,5)) B = da.from_array(rand(40000000,5,1)) I = da.from_array(randint(low=0, high=1000, size=40000000)) fancy = A[I] @ B After finished manipulating, then bring it into memory using fancy.compute()
Python (NumPy): Memory efficient array multiplication with fancy indexing
I'm looking to do fast matrix multiplication in python, preferably NumPy, of an array A with another array B of repeated matrices by using a third array I of indices. This can be accomplished using fancy indexing and matrix multiplication: from numpy.random import rand, randint A = rand(1000,5,5) B = rand(40000000,5,1) I = randint(low=0, high=1000, size=40000000) A[I] @ B However, this creates the intermediate array A[I] of shape (40000000, 5, 5) which overflows the memory. It seems highly inefficient to have to repeat a small set of matrices for multiplication, and this is essentially a more general version of broadcasting such as A[0:1] @ B which has no issues. Are there any alternatives? I have looked at NumPy's einsum function but have not seen any support for utilizing an index vector in the call.
[ "If you're open to another package, you could wrap it up with dask.\nfrom numpy.random import rand, randint\nfrom dask import array as da\n\nA = da.from_array(rand(1000,5,5))\nB = da.from_array(rand(40000000,5,1))\nI = da.from_array(randint(low=0, high=1000, size=40000000))\n\nfancy = A[I] @ B\n\n\nAfter finished manipulating, then bring it into memory using fancy.compute()\n" ]
[ 1 ]
[]
[]
[ "memory", "numpy", "python", "vectorization" ]
stackoverflow_0074657420_memory_numpy_python_vectorization.txt
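If adding dask is not an option, the same memory saving can be had in plain NumPy by walking through B in batches, so only a batch-sized slice of A[I] ever exists at once. The batch size below is an arbitrary assumption to tune against available RAM, and the array sizes mirror the question, so the inputs themselves still need a few GB:

import numpy as np
from numpy.random import rand, randint

A = rand(1000, 5, 5)
B = rand(40_000_000, 5, 1)
I = randint(low=0, high=1000, size=40_000_000)

out = np.empty_like(B)      # result has the same (N, 5, 1) shape as B
batch = 1_000_000           # tune to the memory you can spare

for start in range(0, len(B), batch):
    stop = start + batch
    # Only this slice of A[I] is materialised, roughly batch * 5 * 5 * 8 bytes.
    out[start:stop] = A[I[start:stop]] @ B[start:stop]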
Q: Mac run python script with double click I'm trying to make an automator application to run a python script so I can double click the icon and start the script. it doesn't give me an error but it does nothing. #!/bin/bash echo Running Script python /Desktop/test.py echo Script ended I also tried with a Shell script .sh with the same code. it was working before with the .sh until I updated to Mac OS Ventura. I also installed anaconda and python, but not sure how to point to anaconda environment. any help would be great Thank you A: A shell of Automator has different PATH, setopt, and other values from your shell in terminal. Therefore the script in your Automator won't work exactly the same in your shell. To make it work, you need to use an absolute path of the python command. Run this command in your terminal to get the absolute path. which python After, fix python in your script with the previous result. #!/bin/bash echo Running Script /Absolute/path/somewhere/python /Desktop/test.py echo Script ended To debug with showing a window of the script result Add "Run Applescript" in your automator app with the code below. It's not perfect but useful. Also, in your picture, you can see the result by click the Results button. on run {input, parameters} display dialog input as text end run
Mac run python script with double click
I'm trying to make an automator application to run a python script so I can double click the icon and start the script. it doesn't give me an error but it does nothing. #!/bin/bash echo Running Script python /Desktop/test.py echo Script ended I also tried with a Shell script .sh with the same code. it was working before with the .sh until I updated to Mac OS Ventura. I also installed anaconda and python, but not sure how to point to anaconda environment. any help would be great Thank you
[ "A shell of Automator has different PATH, setopt, and other values from your shell in terminal. Therefore the script in your Automator won't work exactly the same in your shell.\nTo make it work, you need to use an absolute path of the python command.\nRun this command in your terminal to get the absolute path.\nwhich python\n\nAfter, fix python in your script with the previous result.\n#!/bin/bash\n\necho Running Script\n\n/Absolute/path/somewhere/python /Desktop/test.py\n\necho Script ended\n\nTo debug with showing a window of the script result\nAdd \"Run Applescript\" in your automator app with the code below.\nIt's not perfect but useful.\nAlso, in your picture, you can see the result by click the Results button.\non run {input, parameters}\n display dialog input as text\nend run\n\n" ]
[ 0 ]
[]
[]
[ "automator", "macos", "python" ]
stackoverflow_0074657989_automator_macos_python.txt
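One way to get the absolute interpreter path the answer asks for, from inside the environment Automator should actually use (for example the anaconda one), is to let Python print it; note also that /Desktop/test.py in the question almost certainly needs the full /Users/<name>/Desktop/test.py form:

import sys

# Prints something like /Users/<name>/opt/anaconda3/bin/python when run from
# the anaconda environment; paste that path into the Automator shell script
# in place of the bare "python".
print(sys.executable)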
Q: Trying to apply fit_transofrm() function from sklearn.compose.ColumnTransformer class on array but getting "tuple index out of range" error I am beginner in ML/AI and trying to do pre-proccesing on my dataset of digits that I've made myself. I want to apply OneHotEncoding on my categorical variable (which is a dependent one,idk if it is important) but getting "tuple index out of range" error. I was searching on the internet and the only solution was to use reshape() function but it didn't help or may be i am not using it correctly. Here is my dataset,first 28 columns are decoded digits into 1s and 0s and the last column contains digits itself Here is my code: import numpy as np import pandas as pd import matplotlib.pyplot as plt import tensorflow as tf #Data Preprocessing dataset = pd.read_csv('dataset_cisla_polia2.csv',header = None,sep = ';') X = dataset.iloc[:, 0:28].values y = dataset.iloc[:, 29].values print(X) print(y) from sklearn.compose import ColumnTransformer from sklearn.preprocessing import OneHotEncoder ct = ColumnTransformer(transformers=[('encoder',OneHotEncoder(),[29])],remainder = 'passthrough') y = np.array(ct.fit_transform(y)) I am expecting to get variable y to be like this: digit 1 is encoded that way = [1 0 0 0 0 0 0 0 0 0 ], digit 2 is encoded that way = [0 1 0 0 0 0 0 0 0 0 ] and so on.. A: This is because ct = ColumnTransformer(transformers=[('encoder',OneHotEncoder(),[29])],remainder = 'passthrough') will one-hot encode the column of index 29. You are fit-transforming y which only has 1 column. You can change the 29 to 0. ct = ColumnTransformer(transformers=[('encoder',OneHotEncoder(),[0])],remainder = 'passthrough') Edit You also need to change the iloc to keep the numpy array as column structure. y = dataset.iloc[:, [29]].values
Trying to apply fit_transofrm() function from sklearn.compose.ColumnTransformer class on array but getting "tuple index out of range" error
I am beginner in ML/AI and trying to do pre-proccesing on my dataset of digits that I've made myself. I want to apply OneHotEncoding on my categorical variable (which is a dependent one,idk if it is important) but getting "tuple index out of range" error. I was searching on the internet and the only solution was to use reshape() function but it didn't help or may be i am not using it correctly. Here is my dataset,first 28 columns are decoded digits into 1s and 0s and the last column contains digits itself Here is my code: import numpy as np import pandas as pd import matplotlib.pyplot as plt import tensorflow as tf #Data Preprocessing dataset = pd.read_csv('dataset_cisla_polia2.csv',header = None,sep = ';') X = dataset.iloc[:, 0:28].values y = dataset.iloc[:, 29].values print(X) print(y) from sklearn.compose import ColumnTransformer from sklearn.preprocessing import OneHotEncoder ct = ColumnTransformer(transformers=[('encoder',OneHotEncoder(),[29])],remainder = 'passthrough') y = np.array(ct.fit_transform(y)) I am expecting to get variable y to be like this: digit 1 is encoded that way = [1 0 0 0 0 0 0 0 0 0 ], digit 2 is encoded that way = [0 1 0 0 0 0 0 0 0 0 ] and so on..
[ "This is because ct = ColumnTransformer(transformers=[('encoder',OneHotEncoder(),[29])],remainder = 'passthrough') will one-hot encode the column of index 29.\nYou are fit-transforming y which only has 1 column. You can change the 29 to 0.\nct = ColumnTransformer(transformers=[('encoder',OneHotEncoder(),[0])],remainder = 'passthrough')\n\nEdit\nYou also need to change the iloc to keep the numpy array as column structure.\ny = dataset.iloc[:, [29]].values\n\n" ]
[ 0 ]
[]
[]
[ "artificial_intelligence", "data_preprocessing", "machine_learning", "python", "scikit_learn" ]
stackoverflow_0074657678_artificial_intelligence_data_preprocessing_machine_learning_python_scikit_learn.txt
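A self-contained version of the fix from the answer; the tiny random frame only mimics the poster's column layout (28 binary feature columns, a spare column, then the digit label at index 29), and sparse_threshold=0 is added so the transformer returns a dense array:

import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
dataset = pd.DataFrame(np.hstack([rng.integers(0, 2, size=(6, 29)),
                                  rng.integers(0, 10, size=(6, 1))]))

X = dataset.iloc[:, 0:28].values
y = dataset.iloc[:, [29]].values        # extra brackets keep y two-dimensional

ct = ColumnTransformer(transformers=[("encoder", OneHotEncoder(), [0])],
                       remainder="passthrough", sparse_threshold=0)
y_encoded = ct.fit_transform(y)          # one indicator column per digit seen in y
print(y_encoded)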
Q: How to real count repeated string in string in python? I am sorry that I am not sure my question is correct or clear enough. However, I hope the example below can explain my question: As what you see print("abbbbbbc".count("bbb")) #–output is 2 But I want the result is 4, because bbbbbb has 6 characters and can breakdown as below: bbb--- -bbb-- --bbb- ---bbb I couldn't figure out which function I can use to solve this matter. What I did is count by for combinations, and it doesn't look right at all. Thank for helping. A: You can implement your own logic, like this: a = "abbbbbbc" b = "bbb" count = 0 for i in range(len(a) - len(b)): if a[i:i+len(b)] == b: count += 1 print(count) OR count = 0 for i in range(len(a)): if a[i:].startswith(b): count += 1 print(count) OR count = sum([1 if a[i:].startswith(b) else 0 for i in range(len(a))]) print(count)
How to real count repeated string in string in python?
I am sorry that I am not sure my question is correct or clear enough. However, I hope the example below can explain my question: As what you see print("abbbbbbc".count("bbb")) #–output is 2 But I want the result is 4, because bbbbbb has 6 characters and can breakdown as below: bbb--- -bbb-- --bbb- ---bbb I couldn't figure out which function I can use to solve this matter. What I did is count by for combinations, and it doesn't look right at all. Thank for helping.
[ "You can implement your own logic, like this:\na = \"abbbbbbc\"\nb = \"bbb\"\n\ncount = 0\nfor i in range(len(a) - len(b)):\n if a[i:i+len(b)] == b:\n count += 1\nprint(count)\n\nOR\ncount = 0\nfor i in range(len(a)):\n if a[i:].startswith(b):\n count += 1\nprint(count)\n\nOR\ncount = sum([1 if a[i:].startswith(b) else 0 for i in range(len(a))])\nprint(count)\n\n" ]
[ 1 ]
[]
[]
[ "count", "python", "string" ]
stackoverflow_0074657986_count_python_string.txt
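For completeness, the same overlapping count can be delegated to a zero-width regex lookahead, which matches at every starting position without consuming characters. Note also that the first loop in the answer stops one window early; range(len(a) - len(b) + 1) is needed to test the final position:

import re

a = "abbbbbbc"
b = "bbb"

# Each lookahead match marks a starting position of b inside a, overlaps included.
count = len(re.findall(f"(?={re.escape(b)})", a))
print(count)   # 4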
Q: Export SQL Script from SQLAlchemy I am using SQLAlchemy with ORM and DeclarativeMeta to connect to my Database. Is there a way to generate or export a .sql file that contains all the Create Tables Commands? Thank you! I tried to get that information from my Meta Object or even from my SQLAlchemy Engine but they don't hold information like that. Even the Meta.metadate._create_all() does not return a string or something else A: Found an answers in the documentation of sqlalchemy. from sqlalchemy.schema import CreateTable print(CreateTable(my_mysql_table).compile(mysql_engine)) CREATE TABLE my_table ( id INTEGER(11) NOT NULL AUTO_INCREMENT, ... )ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 SQLAlchemy Documentation!
Export SQL Script from SQLAlchemy
I am using SQLAlchemy with ORM and DeclarativeMeta to connect to my Database. Is there a way to generate or export a .sql file that contains all the Create Tables Commands? Thank you! I tried to get that information from my Meta Object or even from my SQLAlchemy Engine but they don't hold information like that. Even the Meta.metadate._create_all() does not return a string or something else
[ "Found an answers in the documentation of sqlalchemy.\nfrom sqlalchemy.schema import CreateTable\nprint(CreateTable(my_mysql_table).compile(mysql_engine))\n\nCREATE TABLE my_table (\nid INTEGER(11) NOT NULL AUTO_INCREMENT,\n...\n)ENGINE=InnoDB DEFAULT CHARSET=utf8mb4\n\nSQLAlchemy Documentation!\n" ]
[ 1 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0074657665_python_sqlalchemy.txt
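Extending the documented CreateTable trick into an actual .sql export: iterate over the declarative metadata and write every statement to a file. The model and the SQLite engine below are placeholders, and the engine's dialect decides what flavour of SQL comes out; the sketch assumes SQLAlchemy 1.4+:

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base
from sqlalchemy.schema import CreateTable

Base = declarative_base()


class User(Base):                     # stand-in for your own declarative models
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))


engine = create_engine("sqlite://")   # swap for your real engine / dialect

with open("schema.sql", "w") as f:
    for table in Base.metadata.sorted_tables:
        ddl = str(CreateTable(table).compile(engine)).strip()
        f.write(ddl + ";\n\n")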
Q: How to remove margins from PDF? (Generated using WeasyPrint) I am trying to render a PDF document within my Flask application. For this, I am using the following HTML template: <!DOCTYPE html> <html> <head> <style> @page { margin:0 } h1 { color:white; } .header{ background: #0a0045; height: 250px; } .center { position: relative; top: 50%; left: 50%; -ms-transform: translate(-50%, -50%); transform: translate(-50%, -50%); text-align:center; } </style> </head> <body> <div class="header"> <div class="center"> <h1>Name</h1> </div> </div> </body> </html> I keep getting white margins at the top and right/left of my header section: Is there a way to remove them? Edit: Below is the code used to generate the PDF file using WeasyPrint in my Flask app: def generate_pdf(id): element = Element.query.filter_by(id=id).first() attri_dict = get_element_attri_dict_for_tpl(element) html = render_template('element.html', attri_dict=attri_dict) pdf = HTML(string=html).write_pdf() destin_loc = app.config['ELEMENTS_FOLDER'] timestamp = dt.datetime.now().strftime('%Y%m%d%H%M%S') file_name = '_'.join(['new_element', timestamp]) path_to_new_file = destin_loc + '/%s.pdf' % file_name f = open(path_to_new_file, 'wb') f.write(pdf) filename = return_latest_element_path() return send_from_directory(directory=app.config['ELEMENTS_FOLDER'], filename=filename, as_attachment=True) A: Maybe you forgot " ; " or/and " mm ", it works: @page { size: A4; /* Change from the default size of A4 */ margin: 0mm; /* Set margin on each page */ } A: The weasyprint uses 3 sources of css, one of them is default user agent stylesheet (https://doc.courtbouillon.org/weasyprint/stable/api_reference.html#supported-features) That defines: body { display: block; margin: 8px; } make sure to override that margin on tag and you will loose the margin.
How to remove margins from PDF? (Generated using WeasyPrint)
I am trying to render a PDF document within my Flask application. For this, I am using the following HTML template: <!DOCTYPE html> <html> <head> <style> @page { margin:0 } h1 { color:white; } .header{ background: #0a0045; height: 250px; } .center { position: relative; top: 50%; left: 50%; -ms-transform: translate(-50%, -50%); transform: translate(-50%, -50%); text-align:center; } </style> </head> <body> <div class="header"> <div class="center"> <h1>Name</h1> </div> </div> </body> </html> I keep getting white margins at the top and right/left of my header section: Is there a way to remove them? Edit: Below is the code used to generate the PDF file using WeasyPrint in my Flask app: def generate_pdf(id): element = Element.query.filter_by(id=id).first() attri_dict = get_element_attri_dict_for_tpl(element) html = render_template('element.html', attri_dict=attri_dict) pdf = HTML(string=html).write_pdf() destin_loc = app.config['ELEMENTS_FOLDER'] timestamp = dt.datetime.now().strftime('%Y%m%d%H%M%S') file_name = '_'.join(['new_element', timestamp]) path_to_new_file = destin_loc + '/%s.pdf' % file_name f = open(path_to_new_file, 'wb') f.write(pdf) filename = return_latest_element_path() return send_from_directory(directory=app.config['ELEMENTS_FOLDER'], filename=filename, as_attachment=True)
[ "Maybe you forgot \" ; \" or/and \" mm \",\nit works:\n@page {\n size: A4; /* Change from the default size of A4 */\n margin: 0mm; /* Set margin on each page */\n }\n\n", "The weasyprint uses 3 sources of css, one of them is default user agent stylesheet\n(https://doc.courtbouillon.org/weasyprint/stable/api_reference.html#supported-features)\nThat defines:\nbody {\n display: block;\n margin: 8px;\n}\n\nmake sure to override that margin on tag and you will loose the margin.\n" ]
[ 13, 0 ]
[]
[]
[ "flask", "margin", "pdf", "python", "weasyprint" ]
stackoverflow_0058175484_flask_margin_pdf_python_weasyprint.txt
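Applied to the Flask route in the question, the two fixes can be combined by handing write_pdf an extra stylesheet that zeroes both the @page margin and the user agent's default body margin; everything below except that CSS string is a placeholder:

from weasyprint import CSS, HTML

html = "<div style='background:#0a0045;height:250px'><h1 style='color:white'>Name</h1></div>"

# Remove the page box margin and the built-in 8px body margin in one go.
no_margins = CSS(string="@page { size: A4; margin: 0mm } body { margin: 0 }")

pdf = HTML(string=html).write_pdf(stylesheets=[no_margins])
with open("new_element.pdf", "wb") as f:
    f.write(pdf)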
Q: Count the digits in a number I have written a function called count_digit: # Write a function which takes a number as an input # It should count the number of digits in the number # And check if the number is a 1 or 2-digit number then return True # Return False for any other case def count_digit(num): if (num/10 == 0): return 1 else: return 1 + count_digit(num / 10); print(count_digit(23)) I get 325 as output. Why is that and how do I correct it? A: convert the integer to a string, and then use the len() method on the converted string. Unless you also consider taking floats as input too, and not integers exclusively. A: This is a Python3 behaviour. / returns float and not integer division. Change your code to: def count_digit(num): if (num//10 == 0): return 1 else: return 1 + count_digit(num // 10) print(count_digit(23)) A: Recursive def count_digit(n): if n == 0: return 0 return count_digit(n // 10) + 1 Easy def count_digit(n): return len(str(abs(n))) A: Assuming you always send integer to function and you are asking for a mathematical answer, this might be your solution: import math def count_digits(number): return int(math.log10(abs(number))) + 1 if number else 1 if __name__ == '__main__': print(count_digits(15712)) # prints: 5 A: Here is an easy solution in python; num=23 temp=num #make a copy of integer count=0 #This while loop will run unless temp is not zero.,so we need to do something to make this temp ==0 while(temp): temp=temp//10 # "//10" this floor division by 10,remove last digit of the number.,means we are removing last digit and assigning back to the number;unless we make the number 0 count+=1 #after removing last digit from the number;we are counting it;until the number becomes Zero and while loop becomes False print(count)
Count the digits in a number
I have written a function called count_digit: # Write a function which takes a number as an input # It should count the number of digits in the number # And check if the number is a 1 or 2-digit number then return True # Return False for any other case def count_digit(num): if (num/10 == 0): return 1 else: return 1 + count_digit(num / 10); print(count_digit(23)) I get 325 as output. Why is that and how do I correct it?
[ "convert the integer to a string, and then use the len() method on the converted string. Unless you also consider taking floats as input too, and not integers exclusively.\n", "This is a Python3 behaviour. / returns float and not integer division.\nChange your code to:\ndef count_digit(num):\n if (num//10 == 0):\n return 1\n else:\n return 1 + count_digit(num // 10)\n\nprint(count_digit(23))\n\n", "Recursive\ndef count_digit(n):\n if n == 0:\n return 0\n return count_digit(n // 10) + 1\n\nEasy\ndef count_digit(n):\n return len(str(abs(n)))\n\n", "Assuming you always send integer to function and you are asking for a mathematical answer, this might be your solution:\nimport math\n\n\ndef count_digits(number):\n return int(math.log10(abs(number))) + 1 if number else 1\n\n\nif __name__ == '__main__':\n print(count_digits(15712))\n # prints: 5\n\n", "Here is an easy solution in python;\nnum=23\ntemp=num #make a copy of integer\ncount=0\n\n#This while loop will run unless temp is not zero.,so we need to do something to make this temp ==0\n\nwhile(temp): \n temp=temp//10 # \"//10\" this floor division by 10,remove last digit of the number.,means we are removing last digit and assigning back to the number;unless we make the number 0\n count+=1 #after removing last digit from the number;we are counting it;until the number becomes Zero and while loop becomes False\nprint(count)\n\n" ]
[ 1, 1, 0, 0, 0 ]
[]
[]
[ "count", "filter", "python" ]
stackoverflow_0070258942_count_filter_python.txt
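Two loose ends from the question itself: the mysterious 325 comes from float division, since num/10 never becomes exactly 0 until the value underflows, which for 23 takes roughly 325 divisions; and the exercise actually wants a boolean for 1- or 2-digit numbers. A small sketch covering that last step, reusing the len(str(...)) idea from the answers:

def count_digits(num: int) -> int:
    return len(str(abs(num)))


def is_one_or_two_digits(num: int) -> bool:
    # True exactly when the number has 1 or 2 digits, as the exercise asks.
    return count_digits(num) <= 2


print(is_one_or_two_digits(23))    # True
print(is_one_or_two_digits(325))   # False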
Q: How can I create for each category a horizontal bar plot that consists of shares I have the following df type eur_d asia_d amer_d 0 cat1 0.58 0.30 0.12 1 cat2 0.50 0.29 0.21 2 cat3 0.50 0.30 0.20 3 cat4 0.42 0.31 0.27 4 cat5 0.42 0.37 0.20 5 cat6 0.60 0.21 0.19 6 cat7 0.26 0.50 0.24 7 cat8 0.54 0.17 0.30 8 cat9 0.46 0.25 0.29 Ideally I want to create 9 horizontal bar of same length that shows the share of Europe, Asia, and America for each category with different colors. A: If you mean stacked horizontal bar chart, this can help. df.plot.barh(x="type", stacked=True, figsize=(10, 5)) plt.show()
How can I create for each category a horizontal bar plot that consists of shares
I have the following df type eur_d asia_d amer_d 0 cat1 0.58 0.30 0.12 1 cat2 0.50 0.29 0.21 2 cat3 0.50 0.30 0.20 3 cat4 0.42 0.31 0.27 4 cat5 0.42 0.37 0.20 5 cat6 0.60 0.21 0.19 6 cat7 0.26 0.50 0.24 7 cat8 0.54 0.17 0.30 8 cat9 0.46 0.25 0.29 Ideally I want to create 9 horizontal bar of same length that shows the share of Europe, Asia, and America for each category with different colors.
[ "If you mean stacked horizontal bar chart, this can help.\ndf.plot.barh(x=\"type\", stacked=True, figsize=(10, 5))\nplt.show()\n\n\n" ]
[ 2 ]
[]
[]
[ "matplotlib", "pandas", "python" ]
stackoverflow_0074658027_matplotlib_pandas_python.txt
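The answer's one-liner, made self-contained with the data from the question so it can be run as-is; the axis label and layout call are optional extras:

import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({
    "type":   [f"cat{i}" for i in range(1, 10)],
    "eur_d":  [0.58, 0.50, 0.50, 0.42, 0.42, 0.60, 0.26, 0.54, 0.46],
    "asia_d": [0.30, 0.29, 0.30, 0.31, 0.37, 0.21, 0.50, 0.17, 0.25],
    "amer_d": [0.12, 0.21, 0.20, 0.27, 0.20, 0.19, 0.24, 0.30, 0.29],
})

# Stacked horizontal bars: the shares per category sum to roughly 1, so every bar
# has (almost) the same length and the coloured segments show each region's share.
ax = df.plot.barh(x="type", stacked=True, figsize=(10, 5))
ax.set_xlabel("share")
plt.tight_layout()
plt.show()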
Q: Django: python manage.py migrate does nothing at all I just started learning django, and as i try to apply my migrations the first problem occurs. I start the server up, type python manage.py migrate and nothing happens. No error, no crash, just no response. Performing system checks... System check identified no issues (0 silenced). You have 13 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions. Run 'python manage.py migrate' to apply them. May 01, 2017 - 11:36:27 Django version 1.11, using settings 'website.settings' Starting development server at http://127.0.0.1:8000/ Quit the server with CONTROL-C. python manage.py migrate And that's the end of my terminal feed. I thought maybe it just looks like nothing happens, but no. The changes weren't applied and I can't proceed any further. Any ideas on what's going on? A: Well, you say that you first start the server and then type in the commands. That's also what the terminal feed you shared shows. Do not run the server if you want to run management commands using manage.py. Hit Ctrl+C to exit the server and then run your migration commands, it will work. A: Try: python manage.py makemigrations python manage.py migrate A: @adam-karolczak n all If there are multiple DJANGO Projects, it can happen that DJANGO_SETTINGS_MODULE is set to some other app in environment varibles, the current project manage.py will not point to current project settings thus the error. So, confirm DJANGO_SETTINGS_MODULE in fact points to the settings.py of current project. Close the project if its running viz. ctrl+C. You can also check the server is not running ( linux ) by ps -ef | grep runserver Then kill the process ids if they exist. If you confirmed settings.py in DJANGO_MODULE_SETTINGS is for the project you are having issue. Run the following it should resolve. python manage.py makemigrations python manage.py migrate Hope it helps. A: I was getting the same error running this 2 command in terminal python manage.py makemigrations python manage.py migrate and then python manage.py runserver solved my issues. Thanks A: Have you tried with parameter? python manage.py makemigrations <app_name> A: I had the same issue and the problem was that there was a pg_dump script running at the same time I was trying to migrate. After the dump was completed, migrations ran successfully. A: Check that INSTALL_APPS app exists, if not add it Checks the model for default attributes Running this 2 command in terminal python manage.py makemigrations python manage.py migration A: First exit of the present web server by typing Ctrl + C Then run python manage.py migrate The Warning is due to not configuring the initial database or migrating.
Django: python manage.py migrate does nothing at all
I just started learning django, and as i try to apply my migrations the first problem occurs. I start the server up, type python manage.py migrate and nothing happens. No error, no crash, just no response. Performing system checks... System check identified no issues (0 silenced). You have 13 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions. Run 'python manage.py migrate' to apply them. May 01, 2017 - 11:36:27 Django version 1.11, using settings 'website.settings' Starting development server at http://127.0.0.1:8000/ Quit the server with CONTROL-C. python manage.py migrate And that's the end of my terminal feed. I thought maybe it just looks like nothing happens, but no. The changes weren't applied and I can't proceed any further. Any ideas on what's going on?
[ "Well, you say that you first start the server and then type in the commands. That's also what the terminal feed you shared shows. \nDo not run the server if you want to run management commands using manage.py.\nHit Ctrl+C to exit the server and then run your migration commands, it will work.\n", "Try: \npython manage.py makemigrations\npython manage.py migrate\n\n", "@adam-karolczak n all\nIf there are multiple DJANGO Projects, it can happen that DJANGO_SETTINGS_MODULE is set to some other app in environment varibles, the current project manage.py will not point to current project settings thus the error.\nSo, confirm DJANGO_SETTINGS_MODULE in fact points to the settings.py of current project.\nClose the project if its running viz. ctrl+C. \nYou can also check the server is not running ( linux ) by\nps -ef | grep runserver\n\nThen kill the process ids if they exist.\nIf you confirmed settings.py in DJANGO_MODULE_SETTINGS is for the project you are having issue.\nRun the following it should resolve.\npython manage.py makemigrations\npython manage.py migrate\n\nHope it helps.\n", "I was getting the same error \nrunning this 2 command in terminal\n python manage.py makemigrations\n python manage.py migrate\n\nand then\n python manage.py runserver\n\nsolved my issues.\nThanks\n", "Have you tried with parameter?\npython manage.py makemigrations <app_name>\n", "I had the same issue and the problem was that there was a pg_dump script running at the same time I was trying to migrate. After the dump was completed, migrations ran successfully. \n", "\nCheck that INSTALL_APPS app exists, if not add it\n\nChecks the model for default attributes\n\nRunning this 2 command in terminal\npython manage.py makemigrations\npython manage.py migration\n\n\n", "\nFirst exit of the present web server by typing Ctrl + C\nThen run python manage.py migrate\n\nThe Warning is due to not configuring the initial database or migrating.\n" ]
[ 12, 10, 2, 2, 1, 0, 0, 0 ]
[]
[]
[ "django", "python", "python_3.5", "python_3.x" ]
stackoverflow_0043718536_django_python_python_3.5_python_3.x.txt
Q: pyqt5_tools designer.exe does not exist I have installed PyQT5 by command pip install pyqt5 pyqt5-tools. Then I want to show path for designer.exe. However I could not found that in C:\Users\User\AppData\Local\Programs\Python\Python38\Lib\site-packages\pyqt5_tools directory. These are content of that folder. A: using the pip install pyqt5-tools method I found the designer on this path: C:\Users\user\AppData\Local\Programs\Python\Python39\Lib\site-packages\qt5_applications\Qt\bin A: On my system QT Designer is saved under C:\Users\User\AppData\Local\Qt Designer EDIT: It seems like I installed QT Designer differently. You can use pip install PyQt5Designer. Then it should be in the path I gave. A: pip install pyqt5-tools Check the path your_python_installed\Lib\site-packages\qt5_applications\Qt\bin\designer.exe A: pip install pyqt5-tools I Found designer.exe in: %APPDATA%\Roaming\Python\[Version]\site-packages\qt5_applications\Qt\bin A: I found it on path venv\Lib\site-packages\qt5_applications\Qt\bin A: If this helps someone, in my case there were two "similar" of pyqt5 folders inside site-packages, ones starts with pyqt5 and others with just qt5, I found the designer app in the folder qt5_aplications. A: I searched the site-packages folder and found out that the designer.exe now exists under the ./Lib/site-packages/qt5_applications/Qt/bin I am currently using Python 3.9.13
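The answers above all boil down to locating designer.exe somewhere under site-packages (typically qt5_applications\Qt\bin). As a small sketch that is not from the answers, you can search for it programmatically on Windows instead of guessing the path; nothing beyond the standard library is assumed.

import site
from pathlib import Path

# search every site-packages directory of the current interpreter
candidates = []
for sp in site.getsitepackages():
    candidates.extend(Path(sp).rglob("designer.exe"))

for path in candidates:
    print(path)  # typically ends in qt5_applications\Qt\bin\designer.exe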
pyqt5_tools designer.exe does not exist
I have installed PyQt5 with the command pip install pyqt5 pyqt5-tools. Then I want to find the path to designer.exe. However, I could not find it in the C:\Users\User\AppData\Local\Programs\Python\Python38\Lib\site-packages\pyqt5_tools directory. These are the contents of that folder.

[ "using the pip install pyqt5-tools method I found the designer on this path:\nC:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python39\\Lib\\site-packages\\qt5_applications\\Qt\\bin\n", "On my system QT Designer is saved under C:\\Users\\User\\AppData\\Local\\Qt Designer\nEDIT:\nIt seems like I installed QT Designer differently.\nYou can use pip install PyQt5Designer.\nThen it should be in the path I gave.\n", "pip install pyqt5-tools\nCheck the path your_python_installed\\Lib\\site-packages\\qt5_applications\\Qt\\bin\\designer.exe\n", "pip install pyqt5-tools\n\nI Found designer.exe in:\n%APPDATA%\\Roaming\\Python\\[Version]\\site-packages\\qt5_applications\\Qt\\bin\n\n", "I found it on path\nvenv\\Lib\\site-packages\\qt5_applications\\Qt\\bin\n", "If this helps someone, in my case there were two \"similar\" of pyqt5 folders inside site-packages, ones starts with pyqt5 and others with just qt5, I found the designer app in the folder qt5_aplications.\n", "I searched the site-packages folder and found out that the designer.exe now exists under the ./Lib/site-packages/qt5_applications/Qt/bin \nI am currently using Python 3.9.13\n" ]
[ 10, 4, 2, 2, 1, 0, 0 ]
[]
[]
[ "pip", "python" ]
stackoverflow_0065007143_pip_python.txt
Q: How can I iterate over a dataframe containing multiple conditions (using iterrows()) and flag columns based on these conditions using np.where? I am trying to generate flags based on multiple conditions. I would like to do the following in a more iterable way: # sample dataframe data = [[1, 1980.0, 2000.0]] df = pd.DataFrame(data, columns=["Item", "year1", "start_year"]) df Item year1 year2 1 1980.0 2000.0 # assign flag based on condition df = df.assign(year_flag=lambda x: np.where(x["year1"] < 1985, True, False)) df Item year1 year2 year_flag 1 1980.0 2000.0 True The way I would like to do this is the following: # create a dataframe containing conditions and flags I'd like to generate data = [ [ "year1", "(df['year1'] < 1985)", ], [ "year2", "((df['year1'] < 1985) | ((df['start_year'] - df['year1']) < 10))", ], ] condition_df = pd.DataFrame(data, columns=["column", "condition"]) condition_df column condition year1 (df['year1'] < 1985) year2 ((df['year1'] < 1985) | ((df['start_year'] - df['year1']) < 10)) # iterate through rows in condition_df to generate conditions + flags for idx, row in condition_df.iterrows(): col = row["column"] condition = row["condition"] flag_col_name = f"{col}_flag" df = df.assign(flag_col_name=lambda df: np.where(condition, True, False)) Unfortunately this results in the following error: I am assuming this is because the condition is a string and thus 1985 is also a string (could be wrong though). Is there any way I can use this method to flag a dataframe? Or an alternative method that might be more successful? Thank you !!! A: In my opinion, a number is expected, and the resulting value is a string. I tried removing the quotes in the conditions themselves. No errors occur. I also added one more line to the dataframe for verification. The flags are displayed as they should. The last line gives: False. import pandas as pd import numpy as np data = [[1, 1980.0, 2000.0], [2, 1990, 3000.0]] df = pd.DataFrame(data, columns=["Item", "year1", "start_year"]) data = [["year1", (df['year1'] < 1985), ], ["year2", ((df['year1'] < 1985) | ((df['start_year'] - df['year1']) < 10)), ], ] condition_df = pd.DataFrame(data, columns=["column", "condition"]) for idx, row in condition_df.iterrows(): col = row["column"] condition = row["condition"] flag_col_name = f"{col}_flag" df = df.assign(flag_col_name=lambda df: np.where(condition, True, False)) print(df)
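The answer above drops the quotes so each condition is already a boolean Series. If the conditions must stay as strings (for example, loaded from a config), a hedged alternative sketch is to write them against bare column names and evaluate them with DataFrame.eval; the condition strings below are rewritten for that purpose and are not the question's originals.

import pandas as pd

df = pd.DataFrame([[1, 1980.0, 2000.0], [2, 1990.0, 1995.0]],
                  columns=["Item", "year1", "start_year"])

condition_df = pd.DataFrame(
    [["year1", "year1 < 1985"],
     ["year2", "(year1 < 1985) | ((start_year - year1) < 10)"]],
    columns=["column", "condition"],
)

for _, row in condition_df.iterrows():
    # eval resolves bare column names against df, so the conditions stay plain data
    df[f"{row['column']}_flag"] = df.eval(row["condition"])

print(df)

This keeps the condition table serializable while avoiding eval of arbitrary Python code.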
How can I iterate over a dataframe containing multiple conditions (using iterrows()) and flag columns based on these conditions using np.where?
I am trying to generate flags based on multiple conditions. I would like to do the following in a more iterable way: # sample dataframe data = [[1, 1980.0, 2000.0]] df = pd.DataFrame(data, columns=["Item", "year1", "start_year"]) df Item year1 year2 1 1980.0 2000.0 # assign flag based on condition df = df.assign(year_flag=lambda x: np.where(x["year1"] < 1985, True, False)) df Item year1 year2 year_flag 1 1980.0 2000.0 True The way I would like to do this is the following: # create a dataframe containing conditions and flags I'd like to generate data = [ [ "year1", "(df['year1'] < 1985)", ], [ "year2", "((df['year1'] < 1985) | ((df['start_year'] - df['year1']) < 10))", ], ] condition_df = pd.DataFrame(data, columns=["column", "condition"]) condition_df column condition year1 (df['year1'] < 1985) year2 ((df['year1'] < 1985) | ((df['start_year'] - df['year1']) < 10)) # iterate through rows in condition_df to generate conditions + flags for idx, row in condition_df.iterrows(): col = row["column"] condition = row["condition"] flag_col_name = f"{col}_flag" df = df.assign(flag_col_name=lambda df: np.where(condition, True, False)) Unfortunately this results in the following error: I am assuming this is because the condition is a string and thus 1985 is also a string (could be wrong though). Is there any way I can use this method to flag a dataframe? Or an alternative method that might be more successful? Thank you !!!
[ "In my opinion, a number is expected, and the resulting value is a string. I tried removing the quotes in the conditions themselves. No errors occur. I also added one more line to the dataframe for verification. The flags are displayed as they should. The last line gives: False.\nimport pandas as pd\nimport numpy as np\n\ndata = [[1, 1980.0, 2000.0], [2, 1990, 3000.0]]\n\ndf = pd.DataFrame(data, columns=[\"Item\", \"year1\", \"start_year\"])\n\n\ndata = [[\"year1\", (df['year1'] < 1985), ],\n [\"year2\", ((df['year1'] < 1985) | ((df['start_year'] - df['year1']) < 10)), ], ]\n\ncondition_df = pd.DataFrame(data, columns=[\"column\", \"condition\"])\n\n\nfor idx, row in condition_df.iterrows():\n col = row[\"column\"]\n condition = row[\"condition\"]\n\n flag_col_name = f\"{col}_flag\"\n\n df = df.assign(flag_col_name=lambda df: np.where(condition, True, False))\n print(df)\n\n" ]
[ 0 ]
[]
[]
[ "conditional_statements", "dataframe", "loops", "pandas", "python" ]
stackoverflow_0074636740_conditional_statements_dataframe_loops_pandas_python.txt
Q: Write the attributes of an object into a txt file So i am trying to write to a text file with all the attributes of an object called item. I was able to access the information with: >>>print(*vars(item).values()]) 125001 John Smith 12 First Road London N1 55 74 but when i try write it into the text file: with open('new_student_data.txt', 'w') as f: f.writelines(*vars(item).values()) it throws an error as writelines() only takes one argument. how can i write all the attributes of item to a single line in the text file? A: with open('new_student_data.txt', 'w') as f: for i in vars(item).values(): f.write(f"{i}\n") If file.writelines only takes an iterable and doesn't support *args, you can always iterate over your list and write it with file.write. A: You can just print as follows: with open('new_student_data.txt', 'w') as f: print(*vars(item).values(), file=f) A: Every class is capable of outputting a dictionary by calling __dict__ method. This way you are getting the attribute name and the value as a dictionary, then you could simply iterate through it and write it to a txt file: with open('params.txt'), 'w') as f: for param, value in params.__dict__.items(): if not param.startswith("__"): f.write(f"{param}:{value}\n") Here I am also getting rid of default class attributes that start with "__". Hope this helps
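Since the question asks for all attributes on a single line, a short sketch of that variant follows; the Student class is a hypothetical stand-in for whatever class item comes from.

# hypothetical stand-in for the question's `item`
class Student:
    def __init__(self):
        self.number = 125001
        self.name = "John Smith"

item = Student()

with open("new_student_data.txt", "w") as f:
    # one space-separated line per object, as the question asks
    f.write(" ".join(str(v) for v in vars(item).values()) + "\n")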
Write the attributes of an object into a txt file
So I am trying to write to a text file with all the attributes of an object called item. I was able to access the information with: >>>print(*vars(item).values()) 125001 John Smith 12 First Road London N1 55 74 but when I try to write it into the text file: with open('new_student_data.txt', 'w') as f: f.writelines(*vars(item).values()) it throws an error as writelines() only takes one argument. How can I write all the attributes of item to a single line in the text file?
[ "with open('new_student_data.txt', 'w') as f:\n for i in vars(item).values():\n f.write(f\"{i}\\n\")\n\nIf file.writelines only takes an iterable and doesn't support *args, you can always iterate over your list and write it with file.write.\n", "You can just print as follows:\nwith open('new_student_data.txt', 'w') as f:\n print(*vars(item).values(), file=f)\n\n", "Every class is capable of outputting a dictionary by calling __dict__ method. This way you are getting the attribute name and the value as a dictionary, then you could simply iterate through it and write it to a txt file:\nwith open('params.txt'), 'w') as f:\n for param, value in params.__dict__.items():\n if not param.startswith(\"__\"):\n f.write(f\"{param}:{value}\\n\")\n\nHere I am also getting rid of default class attributes that start with \"__\". Hope this helps\n" ]
[ 1, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0071604746_python.txt
Q: Finding max values in a text file read in using Python I have a text file containing only numbers. There are gaps in the sets of numbers and the problem asks that the file is read through, adds the numbers within each group then finds the top three values in the list and adds them together. I've found the way to read through the file and calculate the sum of the largest set but cannot find the second or third. I've pasted my code here:my coding attempt and my results text file content here: List of values in the text file A: Create a list to store the group totals. Read the file a line at a time. Try to convert each line to int. If that fails then you're at a group separator so append zero to the group_totals list Sort the list and print the last 3 items FILENAME = '/Users/dan/Desktop/day1 copy.txt' group_totals = [0] with open(FILENAME) as data: for line in data: try: group_totals[-1] += int(line) except ValueError: group_totals.append(0) print(sorted(group_totals)[-3:]) Output: [740, 1350, 2000] Note: This code implicitly assumes that there are at least 3 groups of values
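Once group_totals is built as in the answer, the original task's last step (take the top three totals and add them together) can be done with heapq.nlargest. A small sketch, with example totals rather than the real file's data:

import heapq

group_totals = [740, 1350, 2000, 515, 60]  # example values; the first three match the answer's output

top_three = heapq.nlargest(3, group_totals)
print(top_three, sum(top_three))  # [2000, 1350, 740] 4090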
Finding max values in a text file read in using Python
I have a text file containing only numbers. There are gaps between the sets of numbers, and the problem asks that the file is read through, the numbers within each group are added up, and then the top three totals are found and added together. I've found a way to read through the file and calculate the sum of the largest set, but cannot find the second or third. I've pasted my code here: my coding attempt and my results. Text file content here: List of values in the text file
[ "Create a list to store the group totals.\nRead the file a line at a time. Try to convert each line to int. If that fails then you're at a group separator so append zero to the group_totals list\nSort the list and print the last 3 items\nFILENAME = '/Users/dan/Desktop/day1 copy.txt'\n\ngroup_totals = [0]\n\nwith open(FILENAME) as data:\n for line in data:\n try:\n group_totals[-1] += int(line)\n except ValueError:\n group_totals.append(0)\n print(sorted(group_totals)[-3:])\n\nOutput:\n[740, 1350, 2000]\n\nNote:\nThis code implicitly assumes that there are at least 3 groups of values\n" ]
[ 0 ]
[]
[]
[ "file", "python" ]
stackoverflow_0074657939_file_python.txt
Q: Create Column Based On Aggregation of Other Columns - Pyspark I want to create a column whose values are equal to another column's when certain conditions are met. I want the column first to have the value of the column share when the columns gender, week and type are the same. I have the following dataframe: +------+----+----+-------------+-------------------+ |gender|week|type| share| units| +------+----+----+-------------+-------------------+ | Male| 37|Polo| 0.01| 1809.0| | Male| 37|Polo| 0.1| 2327.0| | Male| 37|Polo| 0.15| 2982.0| | Male| 37|Polo| 0.2| 3558.0| | Male| 38|Polo| 0.01| 1700.0| | Male| 38|Polo| 0.1| 2245.0| | Male| 38|Polo| 0.15| 2900.0| | Male| 38|Polo| 0.2| 3477.0| I want the output to be: +------+----+----+-------------+-------------------+---------+ |gender|week|type| share| units| first| +------+----+----+-------------+-------------------+---------+ | Male| 37|Polo| 0.01| 1809.0| 1809.0| | Male| 37|Polo| 0.1| 2327.0| 1809.0| | Male| 37|Polo| 0.15| 2982.0| 1809.0| | Male| 37|Polo| 0.2| 3558.0| 1809.0| | Male| 38|Polo| 0.01| 1700.0| 1700.0| | Male| 38|Polo| 0.1| 2245.0| 1700.0| | Male| 38|Polo| 0.15| 2900.0| 1700.0| | Male| 38|Polo| 0.2| 3477.0| 1700.0| How can I implement this? A: I found the answer out so I will be posting it here. I used a window function: m_window = Window.partitionBy(["gender","week","type"]).orderBy("share") Then I create a column using the function first and over window like this: df.withColumn("first", first("units").over(m_window))
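A fuller sketch of the accepted approach, with the imports it needs and a throwaway DataFrame built from the question's sample rows so it runs on its own (it assumes only that pyspark is installed):

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [("Male", 37, "Polo", 0.01, 1809.0),
     ("Male", 37, "Polo", 0.10, 2327.0),
     ("Male", 38, "Polo", 0.01, 1700.0),
     ("Male", 38, "Polo", 0.10, 2245.0)],
    ["gender", "week", "type", "share", "units"],
)

w = Window.partitionBy("gender", "week", "type").orderBy("share")
df.withColumn("first", F.first("units").over(w)).show()

Here first("units") picks the units value of the lowest share within each gender/week/type partition, which is the per-group value the question wants broadcast to every row.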
Create Column Based On Aggregation of Other Columns - Pyspark
I want to create a column whose values are equal to another column's when certain conditions are met. I want the column first to have the value of the column share when the columns gender, week and type are the same. I have the following dataframe: +------+----+----+-------------+-------------------+ |gender|week|type| share| units| +------+----+----+-------------+-------------------+ | Male| 37|Polo| 0.01| 1809.0| | Male| 37|Polo| 0.1| 2327.0| | Male| 37|Polo| 0.15| 2982.0| | Male| 37|Polo| 0.2| 3558.0| | Male| 38|Polo| 0.01| 1700.0| | Male| 38|Polo| 0.1| 2245.0| | Male| 38|Polo| 0.15| 2900.0| | Male| 38|Polo| 0.2| 3477.0| I want the output to be: +------+----+----+-------------+-------------------+---------+ |gender|week|type| share| units| first| +------+----+----+-------------+-------------------+---------+ | Male| 37|Polo| 0.01| 1809.0| 1809.0| | Male| 37|Polo| 0.1| 2327.0| 1809.0| | Male| 37|Polo| 0.15| 2982.0| 1809.0| | Male| 37|Polo| 0.2| 3558.0| 1809.0| | Male| 38|Polo| 0.01| 1700.0| 1700.0| | Male| 38|Polo| 0.1| 2245.0| 1700.0| | Male| 38|Polo| 0.15| 2900.0| 1700.0| | Male| 38|Polo| 0.2| 3477.0| 1700.0| How can I implement this?
[ "I found the answer out so I will be posting it here.\nI used a window function:\nm_window = Window.partitionBy([\"gender\",\"week\",\"type\"]).orderBy(\"share\")\nThen I create a column using the function first and over window like this:\ndf.withColumn(\"first\", first(\"units\").over(m_window))\n" ]
[ 1 ]
[]
[]
[ "conditional_statements", "dataframe", "pyspark", "python" ]
stackoverflow_0074655040_conditional_statements_dataframe_pyspark_python.txt
Q: Django, looping through openweather icons always displays the last icon instead of appended city icon I am trying to build out a weather app using openweather api and what I want to do is replace the icon png's with my own customized icon set. In order to do this, I have referenced the openweather api png codes as seen here: https://openweathermap.org/weather-conditions. I have written some code that states if this code equals '01d' then replace the icon code with my custom data image src. The issue is when looping through (after I have added a city), I am always appending the last image which in this case is the data code for '50n' rather than the correct weather code for that city. here is the code in my views.py: def weather(request): url = 'http://api.openweathermap.org/data/2.5/weather?q={}&units=metric&appid=<MYAPPKEY>' cities = City.objects.all() weather_data = [] for city in cities: city_weather = requests.get(url.format(city)).json() weather = { 'city' : city, 'temperature' : city_weather['main']['temp'], 'description' : city_weather['weather'][0]['description'], 'icon' : city_weather['weather'][0]['icon'], } icon = weather['icon'] if icon == '01d': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/01d.svg' elif icon == '01n': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/01n.svg' elif icon == '02d': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/02d.svg' elif icon == '02n': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/02n.svg' elif icon == '03d': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/03d.svg' elif icon == '03n': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/03n.svg' elif icon == '04d': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/04d.svg' elif icon == '04n': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/04n.svg' elif icon == '09d': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/09d.svg' elif icon == '09n': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/09n.svg' elif icon == '10d': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/10d.svg' elif icon == '10n': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/10n.svg' elif icon == '11d': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/11d.svg' elif icon == '11n': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/11n.svg' elif icon == '13d': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/13d.svg' elif icon == '13n': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/13n.svg' elif icon == '50d': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/50d.svg' elif icon == '50n': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/50n.svg' weather_data.append(weather) context = {'weather_data' : weather_data, 'icon': icon} return render(request, 'weather/weather.html', context) What am I doing wrong or am I missing something? A: icon = weather['icon'] This sets a variable icon to reference the string inside the dictionary. icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/01d.svg' This reassigns that variable to a URL string. It does NOT change the dictionary like you might think. 
context = {'weather_data' : weather_data, 'icon': icon} After the loop, you set a single value in the context which will be the last icon url. Two suggestions: Don't reassign a variable to mean something else. Use two different variables instead. So instead of icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/01d.svg' do icon_url = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/01d.svg' Better yet, store the url in each dictionary: weather['icon_url'] = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/01d.svg' Now you will have a url for the icon in each city. You can build the URL directly from the name of the icon without all the if statements. Do this just once instead of 50 times: weather['icon_url'] = f'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/{icon}.svg' Alternatively, you can do this directly in the template. Since you didn't share your template, I can't give any details beyond this vague hint.
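A condensed sketch of that last suggestion, building the icon URL straight from the icon code so the whole if/elif chain disappears. The imports and the City model location are assumed from the question's views.py, and <MYAPPKEY> is the question's placeholder:

import requests
from django.shortcuts import render

from .models import City  # assumed location of the question's City model

ICON_BASE = "https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images"

def weather(request):
    url = "http://api.openweathermap.org/data/2.5/weather?q={}&units=metric&appid=<MYAPPKEY>"
    weather_data = []
    for city in City.objects.all():
        city_weather = requests.get(url.format(city)).json()
        icon = city_weather["weather"][0]["icon"]  # e.g. "01d"
        weather_data.append({
            "city": city,
            "temperature": city_weather["main"]["temp"],
            "description": city_weather["weather"][0]["description"],
            "icon_url": f"{ICON_BASE}/{icon}.svg",  # per-city URL, no if/elif chain
        })
    return render(request, "weather/weather.html", {"weather_data": weather_data})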
Django, looping through openweather icons always displays the last icon instead of appended city icon
I am trying to build out a weather app using openweather api and what I want to do is replace the icon png's with my own customized icon set. In order to do this, I have referenced the openweather api png codes as seen here: https://openweathermap.org/weather-conditions. I have written some code that states if this code equals '01d' then replace the icon code with my custom data image src. The issue is when looping through (after I have added a city), I am always appending the last image which in this case is the data code for '50n' rather than the correct weather code for that city. here is the code in my views.py: def weather(request): url = 'http://api.openweathermap.org/data/2.5/weather?q={}&units=metric&appid=<MYAPPKEY>' cities = City.objects.all() weather_data = [] for city in cities: city_weather = requests.get(url.format(city)).json() weather = { 'city' : city, 'temperature' : city_weather['main']['temp'], 'description' : city_weather['weather'][0]['description'], 'icon' : city_weather['weather'][0]['icon'], } icon = weather['icon'] if icon == '01d': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/01d.svg' elif icon == '01n': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/01n.svg' elif icon == '02d': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/02d.svg' elif icon == '02n': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/02n.svg' elif icon == '03d': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/03d.svg' elif icon == '03n': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/03n.svg' elif icon == '04d': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/04d.svg' elif icon == '04n': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/04n.svg' elif icon == '09d': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/09d.svg' elif icon == '09n': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/09n.svg' elif icon == '10d': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/10d.svg' elif icon == '10n': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/10n.svg' elif icon == '11d': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/11d.svg' elif icon == '11n': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/11n.svg' elif icon == '13d': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/13d.svg' elif icon == '13n': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/13n.svg' elif icon == '50d': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/50d.svg' elif icon == '50n': icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/50n.svg' weather_data.append(weather) context = {'weather_data' : weather_data, 'icon': icon} return render(request, 'weather/weather.html', context) What am I doing wrong or am I missing something?
[ " icon = weather['icon']\n\nThis sets a variable icon to reference the string inside the dictionary.\n icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/01d.svg'\n\nThis reassigns that variable to a URL string. It does NOT change the dictionary like you might think.\n context = {'weather_data' : weather_data, 'icon': icon}\n\nAfter the loop, you set a single value in the context which will be the last icon url.\nTwo suggestions:\n\nDon't reassign a variable to mean something else. Use two different variables instead. So instead of\n icon = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/01d.svg'\n\ndo\n icon_url = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/01d.svg'\n\n\nBetter yet, store the url in each dictionary:\n weather['icon_url'] = 'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/01d.svg'\n\nNow you will have a url for the icon in each city.\n\nYou can build the URL directly from the name of the icon without all the if statements. Do this just once instead of 50 times:\n weather['icon_url'] = f'https://dar-group-150-holborn.s3.eu-west-2.amazonaws.com/images/{icon}.svg'\n\nAlternatively, you can do this directly in the template. Since you didn't share your template, I can't give any details beyond this vague hint.\n\n\n" ]
[ 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074658135_django_python.txt
Q: What causes this arithmetic discrepancy between numpy and MATLAB and how can I force either behavior in Python? I tried to normalize a probability distribution of the form $p_k := 2^{-k^2}$ for $k \in {1,\dots,n}$ for $n = 8$ in numpy/Python 3.8 along the following lines, using an equivalent of MATLAB's num2hex a la C++ / Python Equivalent of Matlab's num2hex. The sums of the normalized distributions differ in Python and MATLAB R2020a. If $n < 8$, there is no discrepancy. What is going on, and how can I force Python to produce the same result as MATLAB for $n > 7$? It's hard for me to tell which of these is IEEE 754 compliant (maybe both, with a discrepancy in grouping that affects a carry[?]) but I need the MATLAB behavior. I note that there are still discrepancies in rounding between numpy and MATLAB per Differences between Matlab and Numpy and Python's `round` function (which I verified myself) but not sure this has any bearing. import numpy as np import struct # for MATLAB num2hex equivalent below n = 8 unnormalizedPDF = np.array([2**-(k**2) for k in range(1,n+1)]) # MATLAB num2hex equivalent a la https://stackoverflow.com/questions/24790722/ num2hex = lambda x : hex(struct.unpack('!q', struct.pack('!d',x))[0]) hexPDF = [num2hex(unnormalizedPDF[k]/np.sum(unnormalizedPDF)) for k in range(0,n)] print(hexPDF) # ['0x3fec5862805436a4', # '0x3fbc5862805436a4', # '0x3f6c5862805436a4', # '0x3efc5862805436a4', # '0x3e6c5862805436a4', # '0x3dbc5862805436a4', # '0x3cec5862805436a4', # '0x3bfc5862805436a4'] hexPDFSum = num2hex(np.sum(unnormalizedPDF/np.sum(unnormalizedPDF))) print(hexPDFSum) # 0x3ff0000000000000 Here is the equivalent in MATLAB: n = 8; unnormalizedPDF = 2.^-((1:n).^2); num2hex(unnormalizedPDF/sum(unnormalizedPDF)) % ans = % % 8×16 char array % % '3fec5862805436a4' % '3fbc5862805436a4' % '3f6c5862805436a4' % '3efc5862805436a4' % '3e6c5862805436a4' % '3dbc5862805436a4' % '3cec5862805436a4' % '3bfc5862805436a4' num2hex(sum(unnormalizedPDF/sum(unnormalizedPDF))) % ans = % % '3fefffffffffffff' Note that the unnormalized distributions are exactly the same, but the sums of their normalizations differ by a single bit. If I use $n = 7$, everything agrees (both give 0x3fefffffffffffff), and both give the same results for $n < 7$ as well. A: According to the manual, numpy.sum uses pairwise summation to get more precision. Another common algorithm is Kahan summation. Anyway, I wouldn't count too much on Numpy and MATLAB giving the same result up to the last bit, as there might me operation reordering if computations are made in parallel. See this for the kind of problem that can arise. However, we can cheat a little bit and force Python to do the sum without the extra precision: hexPDFSum = num2hex(np.sum(np.hstack((np.reshape(unnormalizedPDF / np.sum(unnormalizedPDF), (n, 1)), np.zeros((n, 1)))), 0)[0]) hexPDFSum '0x3fefffffffffffff'
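A sketch to make the answer's point concrete (not from the answer itself): compare NumPy's pairwise sum against a plain left-to-right accumulation and a Kahan (compensated) sum. This only removes NumPy's pairwise algorithm from the picture; whether the sequential version matches MATLAB bit-for-bit still depends on MATLAB's internal summation order, so treat it as an experiment rather than a guarantee.

import numpy as np

n = 8
p = np.array([2.0 ** -(k * k) for k in range(1, n + 1)])
q = p / p.sum()

def sequential_sum(values):
    # plain left-to-right accumulation, no pairwise grouping
    total = 0.0
    for v in values:
        total += v
    return total

def kahan_sum(values):
    # compensated summation: carries the rounding error forward
    total, comp = 0.0, 0.0
    for v in values:
        y = v - comp
        t = total + y
        comp = (t - total) - y
        total = t
    return total

print(float(np.sum(q)).hex())
print(float(sequential_sum(q)).hex())
print(float(kahan_sum(q)).hex())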
What causes this arithmetic discrepancy between numpy and MATLAB and how can I force either behavior in Python?
I tried to normalize a probability distribution of the form $p_k := 2^{-k^2}$ for $k \in {1,\dots,n}$ for $n = 8$ in numpy/Python 3.8 along the following lines, using an equivalent of MATLAB's num2hex a la C++ / Python Equivalent of Matlab's num2hex. The sums of the normalized distributions differ in Python and MATLAB R2020a. If $n < 8$, there is no discrepancy. What is going on, and how can I force Python to produce the same result as MATLAB for $n > 7$? It's hard for me to tell which of these is IEEE 754 compliant (maybe both, with a discrepancy in grouping that affects a carry[?]) but I need the MATLAB behavior. I note that there are still discrepancies in rounding between numpy and MATLAB per Differences between Matlab and Numpy and Python's `round` function (which I verified myself) but not sure this has any bearing. import numpy as np import struct # for MATLAB num2hex equivalent below n = 8 unnormalizedPDF = np.array([2**-(k**2) for k in range(1,n+1)]) # MATLAB num2hex equivalent a la https://stackoverflow.com/questions/24790722/ num2hex = lambda x : hex(struct.unpack('!q', struct.pack('!d',x))[0]) hexPDF = [num2hex(unnormalizedPDF[k]/np.sum(unnormalizedPDF)) for k in range(0,n)] print(hexPDF) # ['0x3fec5862805436a4', # '0x3fbc5862805436a4', # '0x3f6c5862805436a4', # '0x3efc5862805436a4', # '0x3e6c5862805436a4', # '0x3dbc5862805436a4', # '0x3cec5862805436a4', # '0x3bfc5862805436a4'] hexPDFSum = num2hex(np.sum(unnormalizedPDF/np.sum(unnormalizedPDF))) print(hexPDFSum) # 0x3ff0000000000000 Here is the equivalent in MATLAB: n = 8; unnormalizedPDF = 2.^-((1:n).^2); num2hex(unnormalizedPDF/sum(unnormalizedPDF)) % ans = % % 8×16 char array % % '3fec5862805436a4' % '3fbc5862805436a4' % '3f6c5862805436a4' % '3efc5862805436a4' % '3e6c5862805436a4' % '3dbc5862805436a4' % '3cec5862805436a4' % '3bfc5862805436a4' num2hex(sum(unnormalizedPDF/sum(unnormalizedPDF))) % ans = % % '3fefffffffffffff' Note that the unnormalized distributions are exactly the same, but the sums of their normalizations differ by a single bit. If I use $n = 7$, everything agrees (both give 0x3fefffffffffffff), and both give the same results for $n < 7$ as well.
[ "According to the manual, numpy.sum uses pairwise summation to get more precision. Another common algorithm is Kahan summation.\nAnyway, I wouldn't count too much on Numpy and MATLAB giving the same result up to the last bit, as there might me operation reordering if computations are made in parallel. See this for the kind of problem that can arise.\nHowever, we can cheat a little bit and force Python to do the sum without the extra precision:\nhexPDFSum = num2hex(np.sum(np.hstack((np.reshape(unnormalizedPDF / np.sum(unnormalizedPDF), (n, 1)), np.zeros((n, 1)))), 0)[0])\nhexPDFSum\n'0x3fefffffffffffff'\n\n" ]
[ 3 ]
[]
[]
[ "ieee_754", "matlab", "numpy", "python", "python_3.x" ]
stackoverflow_0074658068_ieee_754_matlab_numpy_python_python_3.x.txt
Q: I made a list which i wanted it to receive data in order the server list only takes one input and use the for loop to duplicate that Server starting [Listining] Server is listning to 192.168.129.254 NEW Connection - ('192.168.129.254', 64225) connected[ACTIVE CONNECTIONS] 1 ['hello', 'hello', 'hello', 'hello', 'hello'] This 5 'hello' in the list came from one sent data from client I want the list to save 5 different inputs sent from client like: ['input1', 'input2', 'input3', 'input4', 'input5'] here is the code def handle_client(conn, addr): print(f"NEW Connection - {addr} connected") connected = True while connected: msg_length = conn.recv(HEADER).decode(FORMAT) if msg_length: msg_length = int(msg_length) msg = conn.recv(msg_length).decode(FORMAT) if msg == DISCONNECT_MSG: connected = False list1 = [] for i in range(5): values = msg list1.append(values) print(list1) conn.close() A: I want the list to save 5 different inputs sent from client Then you have to rearrange your code a bit: list1 = [] while connected: msg_length = conn.recv(HEADER).decode(FORMAT) if msg_length: msg_length = int(msg_length) msg = conn.recv(msg_length).decode(FORMAT) if msg == DISCONNECT_MSG: connected = False if len(list1) < 5: values = msg list1.append(values) print(list1)
I made a list which I wanted to receive data in order

the server list only takes one input and use the for loop to duplicate that Server starting [Listining] Server is listning to 192.168.129.254 NEW Connection - ('192.168.129.254', 64225) connected[ACTIVE CONNECTIONS] 1 ['hello', 'hello', 'hello', 'hello', 'hello'] This 5 'hello' in the list came from one sent data from client I want the list to save 5 different inputs sent from client like: ['input1', 'input2', 'input3', 'input4', 'input5'] here is the code def handle_client(conn, addr): print(f"NEW Connection - {addr} connected") connected = True while connected: msg_length = conn.recv(HEADER).decode(FORMAT) if msg_length: msg_length = int(msg_length) msg = conn.recv(msg_length).decode(FORMAT) if msg == DISCONNECT_MSG: connected = False list1 = [] for i in range(5): values = msg list1.append(values) print(list1) conn.close()
[ "\nI want the list to save 5 different inputs sent from client\n\nThen you have to rearrange your code a bit:\n list1 = []\n while connected:\n msg_length = conn.recv(HEADER).decode(FORMAT)\n if msg_length:\n msg_length = int(msg_length)\n msg = conn.recv(msg_length).decode(FORMAT)\n if msg == DISCONNECT_MSG:\n connected = False\n if len(list1) < 5:\n values = msg\n list1.append(values)\n print(list1)\n\n" ]
[ 0 ]
[]
[]
[ "list", "python", "server", "sockets" ]
stackoverflow_0074656985_list_python_server_sockets.txt
Q: Python: converting timestamp to date time not working I am requesting data from the api.etherscan.io website. For this, I require a free API key. I am getting information for the following wallet addresses 0xdafea492d9c6733ae3d56b7ed1adb60692c98bc5, 0xc508dbe4866528db024fb126e0eb97595668c288. Below is the code I am using: wallet_addresses = ['0xdafea492d9c6733ae3d56b7ed1adb60692c98bc5', '0xc508dbe4866528db024fb126e0eb97595668c288'] page_number = 0 df_main = pd.DataFrame() while True: for address in wallet_addresses: url=f'https://api.etherscan.io/api?module=account&action=txlist&address={address}&startblock=0&endblock=99999999&page={page_number}&offset=10&sort=asc&apikey={ether_api}' output = requests.get(url).text df_temp = pd.DataFrame(json.loads(output)['result']) df_temp['wallet_address'] = address df_main = df_main.append(df_temp) page_number += 1 df_main['timeStamp'] = pd.to_datetime(df_main['timeStamp'], unit='s') if min(pd.to_datetime(df_main['timeStamp']).dt.date) < datetime.date(2022, 1, 1): pass Note that you need your own (free) ether_api. What I want to do is get data from today's date, all the way back to 2022-01-01 which is what I am trying to achieve in the if statement. However, the above gives me an error: ValueError: unit='s' not valid with non-numerical val='2022-09-19 18:14:47' How can this be done? I've tried multiple methods to get pandas datetime to work, but all of them gave me errors. A: Here you go, it's working without an error: page_number = 0 df_main = pd.DataFrame() while True: for address in wallet_addresses: url=f'https://api.etherscan.io/api?module=account&action=txlist&address={address}&startblock=0&endblock=99999999&page={page_number}&offset=10&sort=asc&apikey={ether_api}' output = requests.get(url).text df_temp = pd.DataFrame(json.loads(output)['result']) df_temp['wallet_address'] = address page_number += 1 df_temp['timeStamp'] = pd.to_datetime(df_temp['timeStamp'], unit='s') df_main = df_main.append(df_temp) if min(pd.to_datetime(df_main['timeStamp']).dt.date) < datetime(2022, 1, 1).date(): pass Wrong append So, what has happened here. As suggested in the first comment under question we acknowledged the type of first record in df_main with type(df_main['timeStamp'].iloc[0]). With IPython and Jupyter-Notebook one can look what is happening with df_main just after receiving an error with it being populated on the last for loop iteration that failed. Otherwise if one uses PyCharm or any other IDE with a possibility to debug, the contents of df_main can be revealed via debug. What we were missing, is that df_main = df_main.append(df_temp) is placed in a slightly wrong place. On first iteration it works well, pd.to_datetime(df_main['timeStamp'], unit='s') gets an str type with Linux epoch and gets converted to pandas._libs.tslibs.timestamps.Timestamp. But on next iteration df_main['timeStamp'] already has the Timestamp type and it gets appended with str type, so we get a column with mixed type. E.g.: type(df_main['timeStamp'].iloc[0]) == type(df_main['timeStamp'].iloc[-1]) This results with False. Hence when trying to convert Timestamp to Timestamp one gets an error featured in question. To mitigate this we can place .append() below the conversion and do this conversion on df_temp instead of df_main, this way we will only append Timestamps to the resulting DataFrame and the code below with if clause will work fine. As a side note Another small change I've made was datetime.date(2022, 1, 1). 
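One small addition to the answer's "alternative solution": the reference epoch it builds with time.mktime and strptime can be written more directly, as in this sketch. Like mktime, .timestamp() on a naive datetime interprets it as local time, so the two should agree; note the answer uses 01/01/2021 even though the question talks about 2022-01-01.

from datetime import datetime

reference_date = int(datetime(2021, 1, 1).timestamp())  # local-time epoch, same as the mktime version
print(reference_date)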
This change was not needed, but the way one works with datetime depends on how this library was imported, so it's worth mentioning: import datetime datetime.date(2022, 1, 1) datetime.datetime(2022, 1, 1).date() from datetime import datetime datetime(2022, 1, 1).date() All the above is legit and will produce the same. On the first import module gets imported, on the second one type gets imported. Alternative solution Conversion to Timestamp takes time. If the API provides Linux epoch dates, why not use this date for comparison? Let's add this somewhere where you define wallet_addresses: reference_date = "01/01/2021" reference_date = int(time.mktime(datetime.datetime.strptime(reference_date, "%d/%m/%Y").timetuple())) This will result in 1609448400. Other stack overflow question as reference. This integer can now be compared with timestamps provided by the API. The only thing left is to cast str to int. We can have your code left intact with some minor changes at the end: << Your code without changes >> df_main['timeStamp'] = df_main['timeStamp'].astype(int) if min(df_main['timeStamp']) < reference_date: pass To make a benchmark I've changed while True: to for _ in range(0,4): to limit the infinite cycle, results are as follows: Initial solution took 11.6 s to complete Alternative solution took 8.85 s to complete It's 30% faster. Casting str to int takes less time than conversion to TimeStamps, I would call this a preferable solution. Future warning FutureWarning: The frame.append method is deprecated and will be removed from pandas in a future version. Use pandas.concat instead. It makes sense to comply with this warning. df_main = df_main.append(df_temp) has to be changed to df_main = pd.concat([df_main, df_temp]). As for current 1.5.0 version it's already deprecated. Time to upgrade!
Python: converting timestamp to date time not working
I am requesting data from the api.etherscan.io website. For this, I require a free API key. I am getting information for the following wallet addresses 0xdafea492d9c6733ae3d56b7ed1adb60692c98bc5, 0xc508dbe4866528db024fb126e0eb97595668c288. Below is the code I am using: wallet_addresses = ['0xdafea492d9c6733ae3d56b7ed1adb60692c98bc5', '0xc508dbe4866528db024fb126e0eb97595668c288'] page_number = 0 df_main = pd.DataFrame() while True: for address in wallet_addresses: url=f'https://api.etherscan.io/api?module=account&action=txlist&address={address}&startblock=0&endblock=99999999&page={page_number}&offset=10&sort=asc&apikey={ether_api}' output = requests.get(url).text df_temp = pd.DataFrame(json.loads(output)['result']) df_temp['wallet_address'] = address df_main = df_main.append(df_temp) page_number += 1 df_main['timeStamp'] = pd.to_datetime(df_main['timeStamp'], unit='s') if min(pd.to_datetime(df_main['timeStamp']).dt.date) < datetime.date(2022, 1, 1): pass Note that you need your own (free) ether_api. What I want to do is get data from today's date, all the way back to 2022-01-01 which is what I am trying to achieve in the if statement. However, the above gives me an error: ValueError: unit='s' not valid with non-numerical val='2022-09-19 18:14:47' How can this be done? I've tried multiple methods to get pandas datetime to work, but all of them gave me errors.
[ "Here you go, it's working without an error:\npage_number = 0\ndf_main = pd.DataFrame()\nwhile True:\n for address in wallet_addresses:\n url=f'https://api.etherscan.io/api?module=account&action=txlist&address={address}&startblock=0&endblock=99999999&page={page_number}&offset=10&sort=asc&apikey={ether_api}'\n output = requests.get(url).text\n df_temp = pd.DataFrame(json.loads(output)['result'])\n df_temp['wallet_address'] = address\n page_number += 1\n df_temp['timeStamp'] = pd.to_datetime(df_temp['timeStamp'], unit='s')\n df_main = df_main.append(df_temp)\n if min(pd.to_datetime(df_main['timeStamp']).dt.date) < datetime(2022, 1, 1).date():\n pass\n\n\nWrong append\nSo, what has happened here. As suggested in the first comment under question we acknowledged the type of first record in df_main with type(df_main['timeStamp'].iloc[0]). With IPython and Jupyter-Notebook one can look what is happening with df_main just after receiving an error with it being populated on the last for loop iteration that failed.\nOtherwise if one uses PyCharm or any other IDE with a possibility to debug, the contents of df_main can be revealed via debug.\nWhat we were missing, is that df_main = df_main.append(df_temp) is placed in a slightly wrong place. On first iteration it works well, pd.to_datetime(df_main['timeStamp'], unit='s') gets an str type with Linux epoch and gets converted to pandas._libs.tslibs.timestamps.Timestamp.\nBut on next iteration df_main['timeStamp'] already has the Timestamp type and it gets appended with str type, so we get a column with mixed type. E.g.:\ntype(df_main['timeStamp'].iloc[0]) == type(df_main['timeStamp'].iloc[-1])\n\nThis results with False. Hence when trying to convert Timestamp to Timestamp one gets an error featured in question.\nTo mitigate this we can place .append() below the conversion and do this conversion on df_temp instead of df_main, this way we will only append Timestamps to the resulting DataFrame and the code below with if clause will work fine.\nAs a side note\nAnother small change I've made was datetime.date(2022, 1, 1). This change was not needed, but the way one works with datetime depends on how this library was imported, so it's worth mentioning:\nimport datetime\ndatetime.date(2022, 1, 1)\ndatetime.datetime(2022, 1, 1).date()\n\nfrom datetime import datetime\ndatetime(2022, 1, 1).date()\n\nAll the above is legit and will produce the same. On the first import module gets imported, on the second one type gets imported.\n\nAlternative solution\nConversion to Timestamp takes time. If the API provides Linux epoch dates, why not use this date for comparison? Let's add this somewhere where you define wallet_addresses:\nreference_date = \"01/01/2021\"\nreference_date = int(time.mktime(datetime.datetime.strptime(reference_date, \"%d/%m/%Y\").timetuple()))\n\nThis will result in 1609448400. Other stack overflow question as reference.\nThis integer can now be compared with timestamps provided by the API. The only thing left is to cast str to int. We can have your code left intact with some minor changes at the end:\n<< Your code without changes >>\n df_main['timeStamp'] = df_main['timeStamp'].astype(int)\n if min(df_main['timeStamp']) < reference_date:\n pass\n\nTo make a benchmark I've changed while True: to for _ in range(0,4): to limit the infinite cycle, results are as follows:\n\nInitial solution took 11.6 s to complete\nAlternative solution took 8.85 s to complete\n\nIt's 30% faster. 
Casting str to int takes less time than conversion to TimeStamps, I would call this a preferable solution.\nFuture warning\nFutureWarning: The frame.append method is deprecated and will be removed from pandas in a future version. Use pandas.concat instead.\n\nIt makes sense to comply with this warning. df_main = df_main.append(df_temp) has to be changed to df_main = pd.concat([df_main, df_temp]).\nAs for current 1.5.0 version it's already deprecated. Time to upgrade!\n" ]
[ 1 ]
[]
[]
[ "dataframe", "datetime", "etherscan", "pandas", "python" ]
stackoverflow_0074657344_dataframe_datetime_etherscan_pandas_python.txt
Q: beginner troubles making guessing game. I keep getting an attribute error trying to append my guess to the guesses list. following along in a course. I was prompted to say getting warmer if the current guess was closer than the last guess. i set guesses = 0 and and within the while loop i tried to append with (guesses.append(cg)) cg = current guess import random correct = random.randint(1,100) print(correct) guesses = 0 cg = int(input('Welcome to GUESSER guess here: ')) while True: if cg > 100 or cg < 0: print('out of bounds') continue if cg == correct: print(f'It took {len(guesses)} to guess right. nice.') break if abs(cg - correct) <= 10: #first guess print('warm.') else: print('cold.') guesses.append(cg) if guesses[-2]: #after first guess if abs(correct - guesses[-2]) > abs(correct - cg): print('warmer') guesses.append(cg) else: print ('colder') guesses.append(cg) pass A: You assign an integer to guesses here: guesses = 0 So the interpreter is right saying you CANNOT append to int. Define it as a list: guesses = [] But there's more: You ask for input BEFORE the loop, so it happens only once, later the loop is infinite, cause no new input is ever provided If you need only current and previous value you don't need a list at all, rather 3 integers (current, previous, counter - to print number of guesses if you wish to do that) If you want to stick to the list you'll get an IndexError next, since there is no guesses[-2] element during 1st iteration (and you don't check the length of the list before trying to access that) Do NOT call variables like "cg" it means nothing, abbreviations depend on a context (which you might have or might not have), now it's a simple program and you can instantly see that it's probably "current_guess", but that's not the case in general, the IDE should make your life easier and give you possibility to auto insert once defined name, so if somebody says it's time consuming they are plainly wrong A: while True: guess = int(input("Enter Guess Here: \n")) if guess < 1 or guess > 100: print('OOB, try again: ') continue # compare player guess to number if guess == win_num: print(f'you did it in {len(guess_list)} congrats') break #if guess is wrong add to guess to list guess_list.append(guess) # if guess_list[-2]: if abs(win_num - guess) < abs(win_num - guess_list[-2]): print('warmer') else: print('colder') else: if abs(win_num - guess) <= 10: print('warm') else: print('cold')
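The first answer's point that no list is needed can be made concrete with a sketch that tracks only the previous guess and a counter; this is an illustrative rewrite, not the course's intended solution.

import random

correct = random.randint(1, 100)
previous = None
attempts = 0

while True:
    guess = int(input("Welcome to GUESSER, guess here: "))
    if guess < 1 or guess > 100:
        print("out of bounds")
        continue
    attempts += 1
    if guess == correct:
        print(f"It took {attempts} guesses to get it right. Nice.")
        break
    if previous is None:
        print("warm." if abs(guess - correct) <= 10 else "cold.")
    else:
        print("warmer" if abs(guess - correct) < abs(previous - correct) else "colder")
    previous = guess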
beginner troubles
making guessing game. I keep getting an attribute error trying to append my guess to the guesses list. following along in a course. I was prompted to say getting warmer if the current guess was closer than the last guess. i set guesses = 0 and and within the while loop i tried to append with (guesses.append(cg)) cg = current guess import random correct = random.randint(1,100) print(correct) guesses = 0 cg = int(input('Welcome to GUESSER guess here: ')) while True: if cg > 100 or cg < 0: print('out of bounds') continue if cg == correct: print(f'It took {len(guesses)} to guess right. nice.') break if abs(cg - correct) <= 10: #first guess print('warm.') else: print('cold.') guesses.append(cg) if guesses[-2]: #after first guess if abs(correct - guesses[-2]) > abs(correct - cg): print('warmer') guesses.append(cg) else: print ('colder') guesses.append(cg) pass
[ "You assign an integer to guesses here:\nguesses = 0\n\nSo the interpreter is right saying you CANNOT append to int. Define it as a list:\nguesses = []\n\nBut there's more:\n\nYou ask for input BEFORE the loop, so it happens only once, later the loop is infinite, cause no new input is ever provided\nIf you need only current and previous value you don't need a list at all, rather 3 integers (current, previous, counter - to print number of guesses if you wish to do that)\nIf you want to stick to the list you'll get an IndexError next, since there is no guesses[-2] element during 1st iteration (and you don't check the length of the list before trying to access that)\nDo NOT call variables like \"cg\" it means nothing, abbreviations depend on a context (which you might have or might not have), now it's a simple program and you can instantly see that it's probably \"current_guess\", but that's not the case in general, the IDE should make your life easier and give you possibility to auto insert once defined name, so if somebody says it's time consuming they are plainly wrong\n\n", "\nwhile True:\n guess = int(input(\"Enter Guess Here: \\n\"))\n \n if guess < 1 or guess > 100:\n print('OOB, try again: ')\n continue\n \n # compare player guess to number\n if guess == win_num:\n print(f'you did it in {len(guess_list)} congrats')\n break\n \n #if guess is wrong add to guess to list\n guess_list.append(guess)\n \n #\n if guess_list[-2]:\n if abs(win_num - guess) < abs(win_num - guess_list[-2]):\n print('warmer')\n else:\n print('colder')\n \n \n else:\n if abs(win_num - guess) <= 10:\n print('warm')\n else:\n print('cold')\n \n \n\n\n" ]
[ 0, 0 ]
[]
[]
[ "error_handling", "list", "python" ]
stackoverflow_0074648333_error_handling_list_python.txt
Q: Pandas - Dataframe dates subtraction I am dealing with a dataframe like this: mydata['TS_START'] 0 2022-11-09 00:00:00 1 2022-11-09 00:00:30 2 2022-11-09 00:01:00 3 2022-11-09 00:01:30 4 2022-11-09 00:02:00 ... I would like to create a new column where: mydata['delta_t'] 0 2022-11-09 00:00:30 - 2022-11-09 00:00:00 1 2022-11-09 00:01:00 - 2022-11-09 00:00:30 2 2022-11-09 00:01:30 - 2022-11-09 00:01:00 3 2022-11-09 00:02:00 - 2022-11-09 00:01:30 ... Obtaining something like this (in decimals units hour based): mydata['delta_t'] 0 30/3600 1 30/3600 2 30/3600 3 30/3600 ... I obtained this result using a for cycle, but it is very slow. I would like to obtain a faster solution, using a vectorized form. Do you have any suggestion? A: here is one way : df['date'] = pd.to_datetime(df['date']) df['delta_t'] = (df['date'] - df['date'].shift(1)).dt.total_seconds() print(df) output : >> date delta_t 0 2022-11-09 00:00:00 NaN 1 2022-11-09 00:00:30 30.0 2 2022-11-09 00:01:00 30.0 3 2022-11-09 00:01:30 30.0 4 2022-11-09 00:02:00 30.0
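Since the question asks for the gap in decimal hours (30/3600), the answer's result just needs a division by 3600; .diff() is equivalent to the shift-and-subtract used there. A small self-contained sketch:

import pandas as pd

mydata = pd.DataFrame({"TS_START": pd.to_datetime(
    ["2022-11-09 00:00:00", "2022-11-09 00:00:30", "2022-11-09 00:01:00"])})

mydata["delta_t"] = mydata["TS_START"].diff().dt.total_seconds() / 3600
print(mydata)  # first row is NaN, the rest are 30/3600, roughly 0.008333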
Pandas - Dataframe dates subtraction
I am dealing with a dataframe like this: mydata['TS_START'] 0 2022-11-09 00:00:00 1 2022-11-09 00:00:30 2 2022-11-09 00:01:00 3 2022-11-09 00:01:30 4 2022-11-09 00:02:00 ... I would like to create a new column where: mydata['delta_t'] 0 2022-11-09 00:00:30 - 2022-11-09 00:00:00 1 2022-11-09 00:01:00 - 2022-11-09 00:00:30 2 2022-11-09 00:01:30 - 2022-11-09 00:01:00 3 2022-11-09 00:02:00 - 2022-11-09 00:01:30 ... Obtaining something like this (in decimals units hour based): mydata['delta_t'] 0 30/3600 1 30/3600 2 30/3600 3 30/3600 ... I obtained this result using a for cycle, but it is very slow. I would like to obtain a faster solution, using a vectorized form. Do you have any suggestion?
[ "here is one way :\ndf['date'] = pd.to_datetime(df['date'])\n\ndf['delta_t'] = (df['date'] - df['date'].shift(1)).dt.total_seconds()\nprint(df)\n\noutput :\n>>\n date delta_t\n0 2022-11-09 00:00:00 NaN\n1 2022-11-09 00:00:30 30.0\n2 2022-11-09 00:01:00 30.0\n3 2022-11-09 00:01:30 30.0\n4 2022-11-09 00:02:00 30.0\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "date", "pandas", "python" ]
stackoverflow_0074658022_dataframe_date_pandas_python.txt
Q: Convert string to List so square bracket will be eliminated and will be a list '[1, 2]' It is a string. How do I make it List [1,2] So conversion from '[1, 2]' to [1,2] A: You have a couple of options eval eval('[1, 2]') # [1, 2] ast.literal_eval import ast ast.literal_eval('[1, 2]') # [1, 2] string parsing list(map(int, '[1, 2]'.strip('[]').split(', '))) # [1, 2]
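One more option that is not in the answers: the string happens to be valid JSON, so json.loads handles it too.

import json
json.loads('[1, 2]')
# [1, 2]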
Convert string to List so square bracket will be eliminated and will be a list
'[1, 2]' It is a string. How do I make it List [1,2] So conversion from '[1, 2]' to [1,2]
[ "You have a couple of options\neval\neval('[1, 2]')\n# [1, 2]\n\nast.literal_eval\nimport ast\nast.literal_eval('[1, 2]')\n# [1, 2]\n\nstring parsing\nlist(map(int, '[1, 2]'.strip('[]').split(', ')))\n# [1, 2]\n\n" ]
[ 2 ]
[]
[]
[ "list", "python" ]
stackoverflow_0074658231_list_python.txt
Q: Save JPEG comment using Pillow I need to save an Image in Python (created as a Numpy array) as a JPEG file, while including a "comment" in the file with some specific metadata. This metadata will be used by another (third-party) application and is a simple ASCII string. I have a sample image including such a "comment", which I can read out using Pillow (PIL), via the image.info['comment'] or the image.app['COM'] property. However, when I try a simple round-trip, i.e. loading my sample image and save it again using a different file name, the comment is no longer preserved. Equally, I found no way to include a comment in a newly created image. I am aware that EXIF tags are the preferred way to save metadata in JPEG images, but as mentioned, the third-party application only accepts this data as a "comment", not as EXIF, which I cannot change. After reading this question, I looked into the binary structure of my sample file and found the comment at the start of the file, after a few bytes of some other (meta)data. I do however not know a lot about binary file manipulation, and also I was wondering if there is a more elegant way, other than messing with the binary... EDIT: minimum example: from PIL import Image img = Image.open(path) # where path is the path to the sample image # this prints the desired metadata if it is correctly saved in loaded image print(img.info["comment"]) img.save(new_path) # save with different file name img.close() # now open to see if it has been saved correctly new_img = Image.open(new_path) print(new_img.info['comment']) # now results in KeyError I also tried img.save(new_path, info=img.info), but this does not seem to have an effect. Since img.info['comment'] appears identical to img.app['COM'], I tried img.save(new_path, app=img.app), again does not work. A: To save the "comment" metadata in the JPEG file, you can use the Image.save() method with the save_all=True and exif=img.app arguments. This will preserve the metadata in the JPEG file. Here is an example: from PIL import Image # open the image img = Image.open(path) # save the image with the comment metadata preserved img.save(new_path, save_all=True, exif=img.app) img.close() # now open the new image to see if the metadata has been preserved new_img = Image.open(new_path) print(new_img.info['comment']) You can also specify the comment metadata as a dictionary in the Image.save() method directly, instead of using the img.app property: from PIL import Image # open the image img = Image.open(path) # create a dictionary with the comment metadata comment_metadata = {'comment': "this is my comment metadata"} # save the image with the comment metadata preserved img.save(new_path, save_all=True, exif=comment_metadata) img.close() # now open the new image to see if the metadata has been preserved new_img = Image.open(new_path) print(new_img.info['comment']) A: Just been having a play with this and I couldn't see anything directly in Pillow to support this. I've found that the save() method supports a parameter called extra that can be used to pass arbitrary bytes to the output file. 
We then just need a simple method to turn a comment into a valid JPEG segment, for example: import struct from PIL import Image def make_jpeg_variable_segment(marker: int, payload: bytes) -> bytes: "make a JPEG segment from the given payload" return struct.pack('>HH', marker, 2 + len(payload)) + payload def make_jpeg_comment_segment(comment: bytes) -> bytes: "make a JPEG comment/COM segment" return make_jpeg_variable_segment(0xFFFE, comment) # open source image with Image.open("foo.jpeg") as im: # save out with new JPEG comment im.save('bar.jpeg', extra=make_jpeg_comment_segment("hello world".encode())) # read file back in to ensure comment round-trips with Image.open('bar.jpeg') as im: print(im.app['COM']) print(im.info['comment']) Note that in my initial attempts I tried appending the comment segment at the end of the file, but Pillow wouldn't load this comment even after calling the .load() method to force it to load the entire JPEG file. It would be nice if this was supported natively, but it doesn't seem to do it yet!
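Two hedged notes, not from the answers above. First, the exif=img.app / exif=dict idea in the first answer may not behave as written, since Pillow's exif parameter expects encoded EXIF bytes rather than a dict, so test it before relying on it. Second, recent Pillow releases (9.4.0 and later, if memory serves; check your version's release notes before assuming it) accept a comment argument when saving JPEGs, which writes the COM segment without hand-building it as the second answer does:

from PIL import Image

with Image.open("foo.jpeg") as im:
    # assumption: requires a Pillow version that supports JPEG comment writing
    im.save("bar.jpeg", comment=b"hello world")

with Image.open("bar.jpeg") as im:
    print(im.info.get("comment"))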
Save JPEG comment using Pillow
I need to save an Image in Python (created as a Numpy array) as a JPEG file, while including a "comment" in the file with some specific metadata. This metadata will be used by another (third-party) application and is a simple ASCII string. I have a sample image including such a "comment", which I can read out using Pillow (PIL), via the image.info['comment'] or the image.app['COM'] property. However, when I try a simple round-trip, i.e. loading my sample image and save it again using a different file name, the comment is no longer preserved. Equally, I found no way to include a comment in a newly created image. I am aware that EXIF tags are the preferred way to save metadata in JPEG images, but as mentioned, the third-party application only accepts this data as a "comment", not as EXIF, which I cannot change. After reading this question, I looked into the binary structure of my sample file and found the comment at the start of the file, after a few bytes of some other (meta)data. I do however not know a lot about binary file manipulation, and also I was wondering if there is a more elegant way, other than messing with the binary... EDIT: minimum example: from PIL import Image img = Image.open(path) # where path is the path to the sample image # this prints the desired metadata if it is correctly saved in loaded image print(img.info["comment"]) img.save(new_path) # save with different file name img.close() # now open to see if it has been saved correctly new_img = Image.open(new_path) print(new_img.info['comment']) # now results in KeyError I also tried img.save(new_path, info=img.info), but this does not seem to have an effect. Since img.info['comment'] appears identical to img.app['COM'], I tried img.save(new_path, app=img.app), again does not work.
[ "To save the \"comment\" metadata in the JPEG file, you can use the Image.save() method with the save_all=True and exif=img.app arguments. This will preserve the metadata in the JPEG file.\nHere is an example:\nfrom PIL import Image\n\n# open the image\nimg = Image.open(path)\n\n# save the image with the comment metadata preserved\nimg.save(new_path, save_all=True, exif=img.app)\nimg.close()\n\n# now open the new image to see if the metadata has been preserved\nnew_img = Image.open(new_path)\nprint(new_img.info['comment'])\n\nYou can also specify the comment metadata as a dictionary in the Image.save() method directly, instead of using the img.app property:\nfrom PIL import Image\n\n# open the image\nimg = Image.open(path)\n\n# create a dictionary with the comment metadata\ncomment_metadata = {'comment': \"this is my comment metadata\"}\n\n# save the image with the comment metadata preserved\nimg.save(new_path, save_all=True, exif=comment_metadata)\nimg.close()\n\n# now open the new image to see if the metadata has been preserved\nnew_img = Image.open(new_path)\nprint(new_img.info['comment'])\n\n", "Just been having a play with this and I couldn't see anything directly in Pillow to support this. I've found that the save() method supports a parameter called extra that can be used to pass arbitrary bytes to the output file.\nWe then just need a simple method to turn a comment into a valid JPEG segment, for example:\nimport struct\nfrom PIL import Image\n\ndef make_jpeg_variable_segment(marker: int, payload: bytes) -> bytes:\n \"make a JPEG segment from the given payload\"\n return struct.pack('>HH', marker, 2 + len(payload)) + payload\n\ndef make_jpeg_comment_segment(comment: bytes) -> bytes:\n \"make a JPEG comment/COM segment\"\n return make_jpeg_variable_segment(0xFFFE, comment)\n\n# open source image\nwith Image.open(\"foo.jpeg\") as im:\n # save out with new JPEG comment\n im.save('bar.jpeg', extra=make_jpeg_comment_segment(\"hello world\".encode()))\n\n# read file back in to ensure comment round-trips\nwith Image.open('bar.jpeg') as im:\n print(im.app['COM'])\n print(im.info['comment'])\n\nNote that in my initial attempts I tried appending the comment segment at the end of the file, but Pillow wouldn't load this comment even after calling the .load() method to force it to load the entire JPEG file.\nIt would be nice if this was supported natively, but it doesn't seem to do it yet!\n" ]
[ 1, 1 ]
[]
[]
[ "jpeg", "python", "python_imaging_library" ]
stackoverflow_0074653239_jpeg_python_python_imaging_library.txt
Q: Couldn't find ffmpeg or avconv - Python I'm working on a captcha solver and I need to use ffmpeg, though nothing works. Windows 10 user. Warning when running the code for the first time: C:\Users\user\AppData\Roaming\Python\Python310\site-packages\pydub\utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning) Then, when I tried running the script anyway and it required ffprobe, I got the following error: C:\Users\user\AppData\Roaming\Python\Python310\site-packages\pydub\utils.py:198: RuntimeWarning: Couldn't find ffprobe or avprobe - defaulting to ffprobe, but may not work warn("Couldn't find ffprobe or avprobe - defaulting to ffprobe, but may not work", RuntimeWarning) Traceback (most recent call last): File "D:\Scripts\captcha\main.py", line 164, in <module> main() File "D:\Scripts\captcha\main.py", line 155, in main captchaSolver() File "D:\Scripts\captcha\main.py", line 106, in captchaSolver sound = pydub.AudioSegment.from_mp3( File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\pydub\audio_segment.py", line 796, in from_mp3 return cls.from_file(file, 'mp3', parameters=parameters) File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\pydub\audio_segment.py", line 728, in from_file info = mediainfo_json(orig_file, read_ahead_limit=read_ahead_limit) File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\pydub\utils.py", line 274, in mediainfo_json res = Popen(command, stdin=stdin_parameter, stdout=PIPE, stderr=PIPE) File "C:\Program Files\Python310\lib\subprocess.py", line 966, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "C:\Program Files\Python310\lib\subprocess.py", line 1435, in _execute_child hp, ht, pid, tid = _winapi.CreateProcess(executable, args, FileNotFoundError: [WinError 2] The system cannot find the file specified I tried downloading it normally, editing environment variables, pasting them in the same folder as the script, installing with pip, tested ffmpeg manually to see if it works and indeed it converted a mkv to mp4, however my script has no intention of running A: As you can see by the error message, the issue is with ffprobe and not ffmpeg. Make sure that both ffprobe.exe and ffmpeg.exe are in the executable path. One option is placing ffprobe.exe and ffmpeg.exe in the same folder as the Python script (D:\Scripts\captcha\ in your case). Other option is adding the folder of ffprobe.exe and ffmpeg.exe to Windows system path. (Placing the EXE files in a folder that is already in the system path may also work). 
Debugging Pydub source code (pydub-0.25.1): The code fails in the following code: hp, ht, pid, tid = _winapi.CreateProcess(executable, args, # no special security None, None, int(not close_fds), creationflags, env, os.fspath(cwd) if cwd is not None else None, startupinfo) Where args = 'ffprobe -of json -v info -show_format -show_streams test.mp3' We are getting there from: info = mediainfo_json(orig_file, read_ahead_limit=read_ahead_limit) From: prober = get_prober_name() Here is the source code of get_prober_name method: def get_prober_name(): """ Return probe application, either avconv or ffmpeg """ if which("avprobe"): return "avprobe" elif which("ffprobe"): return "ffprobe" else: # should raise exception warn("Couldn't find ffprobe or avprobe - defaulting to ffprobe, but may not work", RuntimeWarning) return "ffprobe" The which method looks for the command is in the execution path (returns None if it is not found). As you can see from Pydub sources, ffprobe.exe should be in the execution path. Note For setting FFmpeg path we may also apply something like: import pydub pydub.AudioSegment.converter = 'c:\\FFmpeg\\bin\\ffmpeg.exe' sound = pydub.AudioSegment.from_mp3("test.mp3") But there is no such option for FFprobe. A: If you haven't installed the ffmpeg/ffprobe as @Rotem answered, you can use my ffmpeg-downloader package: pip install ffmpeg-downloader ffdl install --add-path The --add-path option adds the installed FFmpeg folder to the user's system path. Re-open the Python window and both ffmpeg and ffprobe will be available to your program.
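If editing the Windows PATH by hand is inconvenient, one workaround is to prepend the FFmpeg folder to the process's PATH before pydub is imported, so its executable lookups succeed. The folder below (C:\FFmpeg\bin) is an assumption; it must point at the directory that actually contains both ffmpeg.exe and ffprobe.exe, and test.mp3 is a placeholder file name.

import os

FFMPEG_DIR = r"C:\FFmpeg\bin"  # assumed install folder with ffmpeg.exe and ffprobe.exe
os.environ["PATH"] = FFMPEG_DIR + os.pathsep + os.environ.get("PATH", "")

# Import pydub only after PATH is fixed, since it looks the executables up when imported/used.
import pydub

sound = pydub.AudioSegment.from_mp3("test.mp3")
print(sound.duration_seconds)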
Couldn't find ffmpeg or avconv - Python
I'm working on a captcha solver and I need to use ffmpeg, though nothing works. Windows 10 user. Warning when running the code for the first time: C:\Users\user\AppData\Roaming\Python\Python310\site-packages\pydub\utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning) Then, when I tried running the script anyway and it required ffprobe, I got the following error: C:\Users\user\AppData\Roaming\Python\Python310\site-packages\pydub\utils.py:198: RuntimeWarning: Couldn't find ffprobe or avprobe - defaulting to ffprobe, but may not work warn("Couldn't find ffprobe or avprobe - defaulting to ffprobe, but may not work", RuntimeWarning) Traceback (most recent call last): File "D:\Scripts\captcha\main.py", line 164, in <module> main() File "D:\Scripts\captcha\main.py", line 155, in main captchaSolver() File "D:\Scripts\captcha\main.py", line 106, in captchaSolver sound = pydub.AudioSegment.from_mp3( File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\pydub\audio_segment.py", line 796, in from_mp3 return cls.from_file(file, 'mp3', parameters=parameters) File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\pydub\audio_segment.py", line 728, in from_file info = mediainfo_json(orig_file, read_ahead_limit=read_ahead_limit) File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\pydub\utils.py", line 274, in mediainfo_json res = Popen(command, stdin=stdin_parameter, stdout=PIPE, stderr=PIPE) File "C:\Program Files\Python310\lib\subprocess.py", line 966, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "C:\Program Files\Python310\lib\subprocess.py", line 1435, in _execute_child hp, ht, pid, tid = _winapi.CreateProcess(executable, args, FileNotFoundError: [WinError 2] The system cannot find the file specified I tried downloading it normally, editing environment variables, pasting them in the same folder as the script, installing with pip, tested ffmpeg manually to see if it works and indeed it converted a mkv to mp4, however my script has no intention of running
[ "As you can see by the error message, the issue is with ffprobe and not ffmpeg.\nMake sure that both ffprobe.exe and ffmpeg.exe are in the executable path.\n\nOne option is placing ffprobe.exe and ffmpeg.exe in the same folder as the Python script (D:\\Scripts\\captcha\\ in your case).\nOther option is adding the folder of ffprobe.exe and ffmpeg.exe to Windows system path.\n(Placing the EXE files in a folder that is already in the system path may also work).\n\n\nDebugging Pydub source code (pydub-0.25.1):\nThe code fails in the following code:\nhp, ht, pid, tid = _winapi.CreateProcess(executable, args,\n # no special security\n None, None,\n int(not close_fds),\n creationflags,\n env,\n os.fspath(cwd) if cwd is not None else None,\n startupinfo)\n\nWhere args = 'ffprobe -of json -v info -show_format -show_streams test.mp3'\nWe are getting there from:\ninfo = mediainfo_json(orig_file, read_ahead_limit=read_ahead_limit)\nFrom:\nprober = get_prober_name()\nHere is the source code of get_prober_name method:\ndef get_prober_name():\n \"\"\"\n Return probe application, either avconv or ffmpeg\n \"\"\"\n if which(\"avprobe\"):\n return \"avprobe\"\n elif which(\"ffprobe\"):\n return \"ffprobe\"\n else:\n # should raise exception\n warn(\"Couldn't find ffprobe or avprobe - defaulting to ffprobe, but may not work\", RuntimeWarning)\n return \"ffprobe\"\n\nThe which method looks for the command is in the execution path (returns None if it is not found).\nAs you can see from Pydub sources, ffprobe.exe should be in the execution path.\n\n\nNote\nFor setting FFmpeg path we may also apply something like:\n import pydub\n pydub.AudioSegment.converter = 'c:\\\\FFmpeg\\\\bin\\\\ffmpeg.exe'\n sound = pydub.AudioSegment.from_mp3(\"test.mp3\")\n\n\n\nBut there is no such option for FFprobe.\n", "If you haven't installed the ffmpeg/ffprobe as @Rotem answered, you can use my ffmpeg-downloader package:\npip install ffmpeg-downloader\nffdl install --add-path\n\nThe --add-path option adds the installed FFmpeg folder to the user's system path. Re-open the Python window and both ffmpeg and ffprobe will be available to your program.\n" ]
[ 0, 0 ]
[]
[]
[ "ffmpeg", "ffprobe", "pydub", "python", "selenium" ]
stackoverflow_0074651215_ffmpeg_ffprobe_pydub_python_selenium.txt
Q: How to store a language in a database I'm working with a couple volunteers on creating the first online dictionary for our language Tarifit (An Amazigh language spoken in Northern Morocco) I'm still a CS student learning about Python and C# currently but I also know HTML/CSS/JS and my question was what is the best way to store all the words in a database and how can the people I work with who don't know anything about programming edit the database and add more words etc... I'm already using JavaScript to work on the dictionary site but I could also use Python or any other programming language if it has a better solution for the Database. I have been looking at some SQL Databases and Redis but I don't have experience with them so idk if they will be useful to learn for this exact type of project.
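As a concrete starting point for the storage side of the question, here is one possible minimal sketch using Python's built-in sqlite3 module. The table layout (a headword, a translation and a free-text note) is only an assumption about what a dictionary entry needs, and the sample row is purely illustrative. For volunteers who do not program, a shared spreadsheet that a small script imports into this table, or an auto-generated admin interface such as Django's, are common low-friction ways to let them add words.

import sqlite3

conn = sqlite3.connect("tarifit_dictionary.db")  # placeholder database file name
conn.execute(
    """
    CREATE TABLE IF NOT EXISTS entries (
        id          INTEGER PRIMARY KEY,
        headword    TEXT NOT NULL,   -- the Tarifit word
        translation TEXT NOT NULL,   -- e.g. an English gloss
        notes       TEXT             -- pronunciation, example sentences, etc.
    )
    """
)
conn.execute(
    "INSERT INTO entries (headword, translation, notes) VALUES (?, ?, ?)",
    ("azul", "hello", "sample row, purely illustrative"),
)
conn.commit()

for row in conn.execute("SELECT headword, translation FROM entries ORDER BY headword"):
    print(row)
conn.close()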
How to store a language in a database
I'm working with a couple volunteers on creating the first online dictionary for our language Tarifit (An Amazigh language spoken in Northern Morocco) I'm still a CS student learning about Python and C# currently but I also know HTML/CSS/JS and my question was what is the best way to store all the words in a database and how can the people I work with who don't know anything about programming edit the database and add more words etc... I'm already using JavaScript to work on the dictionary site but I could also use Python or any other programming language if it has a better solution for the Database. I have been looking at some SQL Databases and Redis but I don't have experience with them so idk if they will be useful to learn for this exact type of project.
[]
[]
[ "1-) Classical solution: Rent a server, Deploy MySQL to server, Use bootstrap templates for web application design in which users add words, use PHP to connect MySQL database server (for adding and displaying words\n2-) Modern solution: Open 3 month free usage Google cloud account, create Firebase database, create program or web app and connect to firebase using credential keys with python\n", "TLDR: I recommend Python + SQLite\nReasoning for a relational DB\n\nYour data is very structured - therefore you can utilize the structure guarantees of a relational DB.\nJoins are very effective with relational DBs and can be done all in one query.\nThe size of your data should not exceed 1M rows, which is the perfect size for relational DBs.\nData is persisted on the hard drive, and you have all the nice ACID properties.\n\nReasoning for SQLite\n\nTakes about 5 Minutes to set it up.\n\nResoning for Python\nThis is entirely opinionated, in my experience the JS-database-connectors feel a lot less stable. Python is a perfect language for a beginner and is very rewarding when prototyping a project such as yours.\n\nAdditional: Redis will be overkill - that's for very high-speed applications with very small access times.\n" ]
[ -1, -1 ]
[ "database", "dictionary", "javascript", "python", "web" ]
stackoverflow_0074658078_database_dictionary_javascript_python_web.txt
Q: How to fix the error name 'phi' is not defined? I'm trying to solve the following laplace transform: f(t) = sen(ωt + φ) I wrote the following code to solve the problem import sympy as sym from sympy.abc import s,t,x,y,z from sympy.integrals import laplace_transform from sympy.integrals import inverse_laplace_transform omega = sympy.Symbol('omega', real=True) sin = sympy.sin function = (sin(omega*t + phi)) function U = laplace_transform(function, t, s) U[0] As you can see, I tried the code above to solve the problem, however, the error that the name 'phi' is not defined. Could someone give me an idea of ​​what I would have to fix to make it work? A: add a import for phi from sympy.abc import phi
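Putting the fix together, and noting that the original snippet also imports sympy under the alias sym while referring to the bare name sympy, a consistent runnable version might look like this:

import sympy as sym
from sympy.abc import s, t, phi  # phi is the name that was missing

omega = sym.Symbol("omega", real=True)
f = sym.sin(omega * t + phi)

F = sym.laplace_transform(f, t, s)
print(F[0])  # the transform itself; F also carries the convergence condition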
How to fix the error name 'phi' is not defined?
I'm trying to solve the following laplace transform: f(t) = sen(ωt + φ) I wrote the following code to solve the problem import sympy as sym from sympy.abc import s,t,x,y,z from sympy.integrals import laplace_transform from sympy.integrals import inverse_laplace_transform omega = sympy.Symbol('omega', real=True) sin = sympy.sin function = (sin(omega*t + phi)) function U = laplace_transform(function, t, s) U[0] As you can see, I tried the code above to solve the problem, however, the error that the name 'phi' is not defined. Could someone give me an idea of ​​what I would have to fix to make it work?
[ "add a import for phi\nfrom sympy.abc import phi\n\n" ]
[ 2 ]
[]
[]
[ "jupyter_notebook", "math", "python", "python_3.x", "symbolic_math" ]
stackoverflow_0074658323_jupyter_notebook_math_python_python_3.x_symbolic_math.txt
Q: How to get rid of the in place FutureWarning when setting an entire column from an array? In pandas v.1.5.0 a new warning has been added, which is shown, when a column is set from an array of different dtype. The FutureWarning informs about a planned semantic change, when using iloc: the change will be done in-place in future versions. The changelog instructs what to do to get the old behavior, but there is no hint how to handle the situation, when in-place operation is in fact the right choice. The example from the changelog: df = pd.DataFrame({'price': [11.1, 12.2]}, index=['book1', 'book2']) original_prices = df['price'] new_prices = np.array([98, 99]) df.iloc[:, 0] = new_prices df.iloc[:, 0] This is the warning, which is printed in pandas 1.5.0: FutureWarning: In a future version, df.iloc[:, i] = newvals will attempt to set the values inplace instead of always setting a new array. To retain the old behavior, use either df[df.columns[i]] = newvals or, if columns are non-unique, df.isetitem(i, newvals) How to get rid of the warning, if I don't care about in-place or not, but want to get rid of the warning? Am I supposed to change dtype explicitly? Do I really need to catch the warning every single time I need to use this feature? Isn't there a better way? A: I haven't found any better way than suppressing the warning using the warnings module: import numpy as np import pandas as pd import warnings df = pd.DataFrame({"price": [11.1, 12.2]}, index=["book1", "book2"]) original_prices = df["price"] new_prices = np.array([98, 99]) with warnings.catch_warnings(): # Setting values in-place is fine, ignore the warning in Pandas >= 1.5.0 # This can be removed, if Pandas 1.5.0 does not need to be supported any longer. # See also: https://stackoverflow.com/q/74057367/859591 warnings.filterwarnings( "ignore", category=FutureWarning, message=( ".*will attempt to set the values inplace instead of always setting a new array. " "To retain the old behavior, use either.*" ), ) df.iloc[:, 0] = new_prices df.iloc[:, 0] A: Post here because I can't comment yet. For now I think I will also suppress the warnings, because I don't want the old behavior, never expected to use it that way. And the suggested syntax has the danger to trigger the SettingWithCopyWarning warning. A: As the changelog states, the warning is printed when setting an entire column from an array with different dtype, so adjusting the dtype is one way to silence it: df = pd.DataFrame({'price': [11.1, 12.2]}, index=['book1', 'book2']) original_prices = df['price'] new_prices = np.array([98, 99]).astype(float) df.iloc[:, 0] = new_prices df.iloc[:, 0] Note the additional .astype(float). Not an ideal solution, but a solution.
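If in-place semantics genuinely do not matter, another option is to write the assignment in one of the forms the warning message itself suggests, which avoids both the warning and the explicit dtype cast (a sketch; isetitem needs pandas 1.5 or newer):

import numpy as np
import pandas as pd

df = pd.DataFrame({"price": [11.1, 12.2]}, index=["book1", "book2"])
new_prices = np.array([98, 99])

df[df.columns[0]] = new_prices  # label-based column replacement, no FutureWarning
# df.isetitem(0, new_prices)    # positional equivalent (pandas >= 1.5)

print(df["price"])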
How to get rid of the in place FutureWarning when setting an entire column from an array?
In pandas v.1.5.0 a new warning has been added, which is shown, when a column is set from an array of different dtype. The FutureWarning informs about a planned semantic change, when using iloc: the change will be done in-place in future versions. The changelog instructs what to do to get the old behavior, but there is no hint how to handle the situation, when in-place operation is in fact the right choice. The example from the changelog: df = pd.DataFrame({'price': [11.1, 12.2]}, index=['book1', 'book2']) original_prices = df['price'] new_prices = np.array([98, 99]) df.iloc[:, 0] = new_prices df.iloc[:, 0] This is the warning, which is printed in pandas 1.5.0: FutureWarning: In a future version, df.iloc[:, i] = newvals will attempt to set the values inplace instead of always setting a new array. To retain the old behavior, use either df[df.columns[i]] = newvals or, if columns are non-unique, df.isetitem(i, newvals) How to get rid of the warning, if I don't care about in-place or not, but want to get rid of the warning? Am I supposed to change dtype explicitly? Do I really need to catch the warning every single time I need to use this feature? Isn't there a better way?
[ "I haven't found any better way than suppressing the warning using the warnings module:\nimport numpy as np\nimport pandas as pd\nimport warnings\n\ndf = pd.DataFrame({\"price\": [11.1, 12.2]}, index=[\"book1\", \"book2\"])\noriginal_prices = df[\"price\"]\nnew_prices = np.array([98, 99])\nwith warnings.catch_warnings():\n # Setting values in-place is fine, ignore the warning in Pandas >= 1.5.0\n # This can be removed, if Pandas 1.5.0 does not need to be supported any longer.\n # See also: https://stackoverflow.com/q/74057367/859591\n warnings.filterwarnings(\n \"ignore\",\n category=FutureWarning,\n message=(\n \".*will attempt to set the values inplace instead of always setting a new array. \"\n \"To retain the old behavior, use either.*\"\n ),\n )\n\n df.iloc[:, 0] = new_prices\n\ndf.iloc[:, 0]\n\n", "Post here because I can't comment yet.\nFor now I think I will also suppress the warnings, because I don't want the old behavior, never expected to use it that way. And the suggested syntax has the danger to trigger the SettingWithCopyWarning warning.\n", "As the changelog states, the warning is printed when setting an entire column from an array with different dtype, so adjusting the dtype is one way to silence it:\ndf = pd.DataFrame({'price': [11.1, 12.2]}, index=['book1', 'book2'])\noriginal_prices = df['price']\nnew_prices = np.array([98, 99]).astype(float)\ndf.iloc[:, 0] = new_prices\ndf.iloc[:, 0]\n\nNote the additional .astype(float). Not an ideal solution, but a solution.\n" ]
[ 4, 0, 0 ]
[ "I am just filtering all future warnings for now:\nimport warnings\nwarnings.simplefilter(\"ignore\", category=FutureWarning)\n\n" ]
[ -2 ]
[ "pandas", "python" ]
stackoverflow_0074057367_pandas_python.txt
Q: Manually place the ticks on x-axis, at the beginning, middle and end - Matplotlib Is there a way to always place the ticks on x-axis of Matplotlib always at the beginning, middle and end of the axis instead of Matplotlib automatically placing them?For example, I have a plot shown below. matplotlib plot Is there a way to always place 25 at the very beginning, 80 in the middle and 95 at the very end? This is the code I tried in Jupyter Notebook: import matplotlib.pyplot as plt def box(ax_position, percentile_values, label, label_position): fig = plt.figure(figsize = (10, 2)) ax1 = fig.add_axes(ax_position) ax1.set_xticks(percentile_values) ax1.tick_params(axis="x",direction="out", pad=-15, colors='b') ax1.set_yticks([]) ax1.text(*label_position, label, size=8) plt.show() box((0.4, 0.5, 0.4, 0.1), (25,80,95), "CA",(0.01, 1.3)) The number of values passed in percentile_values will always be 3 and these 3 always needs to be placed at the beginning, middle, and end - but Matplotlib automatically places these ticks as per the numerical value. This is what I am looking for: what I need I tried using matplotlib.ticker.FixedLocator, but that does not help me though I can display only 3 ticks, but the position of the ticks is chosen by Matplotlib and not placed at the beginning, middle and end. A: You need to split the set_xticks() to have only the number of entries - (0,1,2) and use set_xticklables() to give the text you want to display - (25,80,95). Note that I have used numpy's linspace to get the list of numbers based on length of percentile_value. Also, I have removed the pad=-15 as you have indicated that you want the numbers below the ticks. Hope this is what you are looking for. import matplotlib.pyplot as plt import numpy as np def box(ax_position, percentile_values, label, label_position): fig = plt.figure(figsize = (10, 2)) ax1 = fig.add_axes(ax_position) ax1.set_xticks(np.linspace(0,len(percentile_values)-1,len(percentile_values))) ax1.set_xticklabels(percentile_values) ax1.tick_params(axis="x",direction="out", colors='b') ax1.set_yticks([]) ax1.text(*label_position, label, size=8) plt.show() box((0.4, 0.5, 0.4, 0.1), (25,80,95), "CA",(0.01, 1.3))
Manually place the ticks on x-axis, at the beginning, middle and end - Matplotlib
Is there a way to always place the ticks on x-axis of Matplotlib always at the beginning, middle and end of the axis instead of Matplotlib automatically placing them?For example, I have a plot shown below. matplotlib plot Is there a way to always place 25 at the very beginning, 80 in the middle and 95 at the very end? This is the code I tried in Jupyter Notebook: import matplotlib.pyplot as plt def box(ax_position, percentile_values, label, label_position): fig = plt.figure(figsize = (10, 2)) ax1 = fig.add_axes(ax_position) ax1.set_xticks(percentile_values) ax1.tick_params(axis="x",direction="out", pad=-15, colors='b') ax1.set_yticks([]) ax1.text(*label_position, label, size=8) plt.show() box((0.4, 0.5, 0.4, 0.1), (25,80,95), "CA",(0.01, 1.3)) The number of values passed in percentile_values will always be 3 and these 3 always needs to be placed at the beginning, middle, and end - but Matplotlib automatically places these ticks as per the numerical value. This is what I am looking for: what I need I tried using matplotlib.ticker.FixedLocator, but that does not help me though I can display only 3 ticks, but the position of the ticks is chosen by Matplotlib and not placed at the beginning, middle and end.
[ "You need to split the set_xticks() to have only the number of entries - (0,1,2) and use set_xticklables() to give the text you want to display - (25,80,95). Note that I have used numpy's linspace to get the list of numbers based on length of percentile_value. Also, I have removed the pad=-15 as you have indicated that you want the numbers below the ticks. Hope this is what you are looking for.\nimport matplotlib.pyplot as plt\nimport numpy as np\ndef box(ax_position, percentile_values, label, label_position):\n fig = plt.figure(figsize = (10, 2))\n ax1 = fig.add_axes(ax_position) \n ax1.set_xticks(np.linspace(0,len(percentile_values)-1,len(percentile_values)))\n ax1.set_xticklabels(percentile_values)\n ax1.tick_params(axis=\"x\",direction=\"out\", colors='b')\n ax1.set_yticks([])\n ax1.text(*label_position, label, size=8)\n plt.show()\n\nbox((0.4, 0.5, 0.4, 0.1), (25,80,95), \"CA\",(0.01, 1.3))\n\n\n" ]
[ 0 ]
[]
[]
[ "matplotlib", "python" ]
stackoverflow_0074657894_matplotlib_python.txt
Q: What have i do wrong? File "F:\2д шутер на питоне\main.py", line 402, in <module> world_data[x][y] = int(tile) ValueError: invalid literal for int() with base 10: '-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\ a fragment of the problematic code: world_data = [] for row in range(ROWS): r = [-1] * COLS world_data.append(r) with open(f'level{level}_data.csv', newline='') as csvfile: reader = csv.reader(csvfile, delimiter=',') for x, row in enumerate(reader): for y, tile in enumerate(row): world_data[x][y] = int(tile) I don't even know what the problem is A: It looks like you have tab delimiter, not comma, so replace reader = csv.reader(csvfile, delimiter=',') with reader = csv.reader(csvfile, delimiter='\t') I would note that you could replace this whole block with import numpy as np world_data = np.genfromtext(f'level{level}_data.csv', delimiter='\t')
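One small note on the NumPy one-liner at the end of the answer: the function is spelled np.genfromtxt, not genfromtext. A corrected sketch that also keeps the tile IDs as integers:

import numpy as np

level = 0  # placeholder level number
world_data = np.genfromtxt(f"level{level}_data.csv", delimiter="\t", dtype=int)
print(world_data.shape)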
What have I done wrong?
File "F:\2д шутер на питоне\main.py", line 402, in <module> world_data[x][y] = int(tile) ValueError: invalid literal for int() with base 10: '-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\ a fragment of the problematic code: world_data = [] for row in range(ROWS): r = [-1] * COLS world_data.append(r) with open(f'level{level}_data.csv', newline='') as csvfile: reader = csv.reader(csvfile, delimiter=',') for x, row in enumerate(reader): for y, tile in enumerate(row): world_data[x][y] = int(tile) I don't even know what the problem is
[ "It looks like you have tab delimiter, not comma, so replace\nreader = csv.reader(csvfile, delimiter=',')\n\nwith\nreader = csv.reader(csvfile, delimiter='\\t')\n\nI would note that you could replace this whole block with\nimport numpy as np\nworld_data = np.genfromtext(f'level{level}_data.csv', delimiter='\\t')\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074658374_python.txt
Q: Quarterly forecast Data across multiple departments I want to forecast some data, here is an example of the csv table: Time Period HR Fin Legal Leadership Overall 2021Q2 42 36 66 53 2021Q3 52 43 64 67 2021Q4 65 47 71 73 2022Q1 68 50 75 74 2022Q2 72 57 77 81 2022Q3 79 62 75 78 I want to make predictions for every quarter until the end of Q4 2023. I found an article which does something similar but doesn't have multiple value columns (Y axis) I tried tailoring my code to allow for this but I get an error. Here is my code(I've altered the contents to simplify my table, there were originally 12 columns not 5): import pandas as pd from datetime import date, timedelta import datetime import matplotlib.pyplot as plt plt.style.use('fivethirtyeight') from statsmodels.tsa.seasonal import seasonal_decompose from statsmodels.graphics.tsaplots import plot_pacf from statsmodels.tsa.arima_model import ARIMA import statsmodels.api as sm import warnings import plotly.graph_objects as go # import make_subplots function from plotly.subplots # to make grid of plots from plotly.subplots import make_subplots 'set filepath' inputfilepath = 'C:/Documents/' \ 'Forecast/Input/' \ 'Forecast Data csv.csv' df = pd.read_csv(inputfilepath) print(df) import plotly.express as px figure = px.line(df, x="Time Period", y=("Fin","Legal","Leadership","Overall"), title='Quarterly scores') figure.show() However, I am met with the following error: ValueError: All arguments should have the same length. The length of argument y is 4, whereas the length of previously-processed arguments ['Time Period'] is 6 How would I alter my code to produce a graph that contains multiple y variables (Fin, Legal, Leadership, Overall)? Additionally, this is the link to the article I found: https://thecleverprogrammer.com/2022/09/05/business-forecasting-using-python/ A: Looks like your "y" argument accepts only list [ele1, ele2], not a tuple(ele1, ele2). I changed the brackets to squares and I ran your code just fine: import plotly.express as px figure = px.line(df, x="Time Period", y=["Fin","Legal","Leadership","Overall"], title='Quarterly scores') figure.show() produces: this
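An alternative that sidesteps the wide-format y argument entirely is to melt the frame to long format and let Plotly colour the lines by department. The sketch below rebuilds the frame inline so it runs on its own; the question's table header also lists an HR column with no values, so assigning the four numbers per row to Fin, Legal, Leadership and Overall is an assumption.

import pandas as pd
import plotly.express as px

df = pd.DataFrame({
    "Time Period": ["2021Q2", "2021Q3", "2021Q4", "2022Q1", "2022Q2", "2022Q3"],
    "Fin":        [42, 52, 65, 68, 72, 79],
    "Legal":      [36, 43, 47, 50, 57, 62],
    "Leadership": [66, 64, 71, 75, 77, 75],
    "Overall":    [53, 67, 73, 74, 81, 78],
})

long_df = df.melt(id_vars="Time Period", var_name="Department", value_name="Score")
figure = px.line(long_df, x="Time Period", y="Score", color="Department",
                 title="Quarterly scores")
figure.show()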
Quarterly forecast Data across multiple departments
I want to forecast some data, here is an example of the csv table: Time Period HR Fin Legal Leadership Overall 2021Q2 42 36 66 53 2021Q3 52 43 64 67 2021Q4 65 47 71 73 2022Q1 68 50 75 74 2022Q2 72 57 77 81 2022Q3 79 62 75 78 I want to make predictions for every quarter until the end of Q4 2023. I found an article which does something similar but doesn't have multiple value columns (Y axis) I tried tailoring my code to allow for this but I get an error. Here is my code(I've altered the contents to simplify my table, there were originally 12 columns not 5): import pandas as pd from datetime import date, timedelta import datetime import matplotlib.pyplot as plt plt.style.use('fivethirtyeight') from statsmodels.tsa.seasonal import seasonal_decompose from statsmodels.graphics.tsaplots import plot_pacf from statsmodels.tsa.arima_model import ARIMA import statsmodels.api as sm import warnings import plotly.graph_objects as go # import make_subplots function from plotly.subplots # to make grid of plots from plotly.subplots import make_subplots 'set filepath' inputfilepath = 'C:/Documents/' \ 'Forecast/Input/' \ 'Forecast Data csv.csv' df = pd.read_csv(inputfilepath) print(df) import plotly.express as px figure = px.line(df, x="Time Period", y=("Fin","Legal","Leadership","Overall"), title='Quarterly scores') figure.show() However, I am met with the following error: ValueError: All arguments should have the same length. The length of argument y is 4, whereas the length of previously-processed arguments ['Time Period'] is 6 How would I alter my code to produce a graph that contains multiple y variables (Fin, Legal, Leadership, Overall)? Additionally, this is the link to the article I found: https://thecleverprogrammer.com/2022/09/05/business-forecasting-using-python/
[ "Looks like your \"y\" argument accepts only list [ele1, ele2], not a tuple(ele1, ele2). I changed the brackets to squares and I ran your code just fine:\n import plotly.express as px\n figure = px.line(df, x=\"Time Period\", \n y=[\"Fin\",\"Legal\",\"Leadership\",\"Overall\"],\n title='Quarterly scores')\n\nfigure.show()\n\nproduces:\nthis\n" ]
[ 0 ]
[]
[]
[ "matplotlib", "pandas", "python" ]
stackoverflow_0074658092_matplotlib_pandas_python.txt
Q: Snakemake not interpreting wildcard correctly? I am trying to run a snakemake file but it is producing a weird result refseq = 'refseq.fasta' reads = '_R1_001' reads2 = '_R2_001' configfile: "config.yaml" ## Add config def getsamples(): import glob test = (glob.glob("*.fastq.gz")) samples = [] for i in test: samples.append(i.rsplit('_', 2)[0]) print(samples) return(samples) def getbarcodes(): with open('unique.barcodes.txt') as file: lines = [line.rstrip() for line in file] return(lines) rule all: input: expand("called/{barcodes}{sample}_called.vcf", barcodes=getbarcodes(), sample=getsamples()), expand("mosdepth/{barcodes}{sample}.mosdepth.summary.txt", barcodes=getbarcodes(), sample=getsamples()) rule fastq_grep: input: R1 = "{sample}_R1_001.fastq.gz", R2 = "{sample}_R2_001.fastq.gz" output: "grepped/{barcodes}{sample}_R1_001.plate.fastq", "grepped/{barcodes}{sample}_R2_001.plate.fastq" shell: "fastq-grep -i '{wildcards.barcodes}' {input.R1} > {output} && fastq-grep -i '{wildcards.barcodes}' {input.R2} > {output}" I have files in my directory with *.fastq.gz on the end of them but I get this: Missing input files for rule fastq_grep: 0_R1_001.fastq.gz 0_R2_001.fastq.gz Those two files do not exist, where is it getting them from? I would expect to see a lot of fastq files that are in my directory but it is only listing one file that does not exist. A: It's a common problem due to {barcodes}{sample} pattern. Snakemake won't know where {barcodes} ends and where {sample} starts without a wildcard_constraint. Right now, snakemake is thinking that your sample wildcard is just a 0.
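A sketch of the constraint the answer refers to, placed at the top of the Snakefile so it applies to every rule. The regular expressions are assumptions (that barcodes are plain A/C/G/T strings and that sample names are drawn from a limited character set); they need adjusting to the real naming scheme.

# Snakemake syntax: global wildcard constraints so {barcodes}{sample} parses unambiguously.
wildcard_constraints:
    barcodes="[ACGT]+",        # assumed: barcodes are DNA strings
    sample="[A-Za-z0-9.-]+"    # assumed: sample names use only these characters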
Snakemake not interpreting wildcard correctly?
I am trying to run a snakemake file but it is producing a weird result refseq = 'refseq.fasta' reads = '_R1_001' reads2 = '_R2_001' configfile: "config.yaml" ## Add config def getsamples(): import glob test = (glob.glob("*.fastq.gz")) samples = [] for i in test: samples.append(i.rsplit('_', 2)[0]) print(samples) return(samples) def getbarcodes(): with open('unique.barcodes.txt') as file: lines = [line.rstrip() for line in file] return(lines) rule all: input: expand("called/{barcodes}{sample}_called.vcf", barcodes=getbarcodes(), sample=getsamples()), expand("mosdepth/{barcodes}{sample}.mosdepth.summary.txt", barcodes=getbarcodes(), sample=getsamples()) rule fastq_grep: input: R1 = "{sample}_R1_001.fastq.gz", R2 = "{sample}_R2_001.fastq.gz" output: "grepped/{barcodes}{sample}_R1_001.plate.fastq", "grepped/{barcodes}{sample}_R2_001.plate.fastq" shell: "fastq-grep -i '{wildcards.barcodes}' {input.R1} > {output} && fastq-grep -i '{wildcards.barcodes}' {input.R2} > {output}" I have files in my directory with *.fastq.gz on the end of them but I get this: Missing input files for rule fastq_grep: 0_R1_001.fastq.gz 0_R2_001.fastq.gz Those two files do not exist, where is it getting them from? I would expect to see a lot of fastq files that are in my directory but it is only listing one file that does not exist.
[ "It's a common problem due to {barcodes}{sample} pattern.\nSnakemake won't know where {barcodes} ends and where {sample} starts without a wildcard_constraint. Right now, snakemake is thinking that your sample wildcard is just a 0.\n" ]
[ 1 ]
[]
[]
[ "bash", "bioinformatics", "python", "snakemake" ]
stackoverflow_0074658260_bash_bioinformatics_python_snakemake.txt
Q: Finding specific text with selenium Okay, so I need to search in a search engine for (person's name) net worth and from the first 5 links get all the values and find the average one. So... When I search for example Elon Musk net worth and open for example the first one which happens to be Wikipedia my thought process was to search example for a string that ends in billion and get that value but there happens to be many strings that end in billion and I can't be sure if I found the one, I need. Any advice? I haven't tried anything yet. I was just brainstorming but couldn't find any solutions to my problem. A: I think you're going to have to use some sort of machine learning based text analysis to figure out if it's the context you're looking for.
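Short of full text analysis, the question's own idea can be tightened into a crude heuristic: only trust a "billion" figure that appears close to the phrase "net worth". The sketch below uses the standard re module; the sample texts stand in for page text scraped with Selenium, and the pattern is illustrative, not a robust parser.

import re
from statistics import mean

# Placeholder page texts, e.g. obtained via driver.find_element(By.TAG_NAME, "body").text
pages = [
    "Forbes estimates Elon Musk's net worth at $180 billion as of this year.",
    "His net worth was reported to be around 175.3 billion dollars.",
]

# A number followed by "billion", within the same sentence as "net worth".
pattern = re.compile(r"net worth[^.]{0,120}?([\d][\d,.]*)\s*billion", re.IGNORECASE)

values = []
for text in pages:
    values += [float(m.rstrip(".").replace(",", "")) for m in pattern.findall(text)]

if values:
    print(f"average: {mean(values):.1f} billion")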
Finding specific text with selenium
Okay, so I need to search in a search engine for (person's name) net worth and from the first 5 links get all the values and find the average one. So... When I search for example Elon Musk net worth and open for example the first one which happens to be Wikipedia my thought process was to search example for a string that ends in billion and get that value but there happens to be many strings that end in billion and I can't be sure if I found the one, I need. Any advice? I haven't tried anything yet. I was just brainstorming but couldn't find any solutions to my problem.
[ "I think you're going to have to use some sort of machine learning based text analysis to figure out if it's the context you're looking for.\n" ]
[ 0 ]
[]
[]
[ "python", "selenium", "web_scraping" ]
stackoverflow_0074657852_python_selenium_web_scraping.txt
Q: How to increase font size for Edge and Nodes using Diagrams I'm using Diagrams as code which uses the Diagrams Python module. I'm trying to increase the font size for Edge labels but can't seem to figure out how to do it. Edge only seems to accept attr instead of graph_attr so I've tried variations with no luck. Examples I've tried are: Edge(style="dotted", label="patches", attr="fontsize=20") Edge(style="dotted", label="patches", attr={"fontsize": "20"}) Edge(style="dotted", label="patches", fontsize="20") Internet(label="Internet", attr="fontsize=20") A: I'm not sure what I was doing wrong before but I was able to get the following variations working. Edge(color="black", label="texthere", fontsize="20") font = "20" Edge(color="black", label="texthere", fontsize=font) There is also a head and tail label. See the following examples: Edge(color="red", headlabel="port info", labelfontsize="20")
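For the node side of the question, and to set a default for the whole drawing, the diagrams library also accepts Graphviz attribute dictionaries (graph_attr, node_attr, edge_attr) on the Diagram itself. A sketch follows; the attribute values are examples only, and the EC2 node class is used purely because its import path is well known, so substitute the node classes from the real diagram.

from diagrams import Diagram, Edge
from diagrams.aws.compute import EC2

graph_attr = {"fontsize": "24"}  # size of the diagram label
node_attr = {"fontsize": "18"}   # default size for node labels

with Diagram("Example", show=False, graph_attr=graph_attr, node_attr=node_attr):
    source = EC2("source")
    target = EC2("target")
    source >> Edge(style="dotted", label="patches", fontsize="20") >> target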
How to increase font size for Edge and Nodes using Diagrams
I'm using Diagrams as code which uses the Diagrams Python module. I'm trying to increase the font size for Edge labels but can't seem to figure out how to do it. Edge only seems to accept attr instead of graph_attr so I've tried variations with no luck. Examples I've tried are: Edge(style="dotted", label="patches", attr="fontsize=20") Edge(style="dotted", label="patches", attr={"fontsize": "20"}) Edge(style="dotted", label="patches", fontsize="20") Internet(label="Internet", attr="fontsize=20")
[ "I'm not sure what I was doing wrong before but I was able to get the following variations working.\nEdge(color=\"black\", label=\"texthere\", fontsize=\"20\")\n\nfont = \"20\"\nEdge(color=\"black\", label=\"texthere\", fontsize=font)\n\nThere is also a head and tail label. See the following examples:\nEdge(color=\"red\", headlabel=\"port info\", labelfontsize=\"20\")\n\n" ]
[ 0 ]
[]
[]
[ "diagram", "python" ]
stackoverflow_0074648806_diagram_python.txt
Q: how to add a profile object by using objects.create method simply my error is this Exception has occurred: TypeError User() got an unexpected keyword argument 'User' here is the data I receive from the post request in view.py if request.method == "POST": student_surname = request.POST.get('student_surname') student_initials = request.POST.get('student_initials') student_birthday = request.POST.get('student_birthday') student_username = request.POST.get('student_username') student_email = request.POST.get('student_email') student_entrance = request.POST.get('student_entrance') student_contact = request.POST.get('student_contact') student_residence = request.POST.get('student_residence') student_father = request.POST.get('student_father') student_other_skills = request.POST.get('student_skills') student_sports = request.POST.get('student_sports') student_password = request.POST.get('student_password') I can create user object it's working in view.py user = User.objects.create_user( username=student_username, email=student_email, password=student_password ) some data is related to profile in view.py student_profile = User.objects.create( User=user, #Error line surname=student_surname, initials=student_initials, entrance_number=student_entrance, email=student_email, father=student_father, skills=student_other_skills, sports=student_sports, birthday=student_birthday, contact=student_contact, address=student_residence, ) student_profile.save() profile definition in models.py class Profile(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE) surname = models.CharField(max_length=50) initials = models.CharField(max_length=10, blank=True) entrance_number = models.CharField(max_length=10, blank=True) email = models.EmailField(max_length=254, blank=True) father = models.CharField(max_length=50, blank=True) skills = models.CharField(max_length=50, blank=True) sports = models.CharField(max_length=50, blank=True) birthday = models.DateField(null=True, blank=True) contact = models.CharField(max_length=100, null=True, blank=True) address = models.CharField(max_length=100, null=True, blank=True) # other fields here def __str__(self): return self.user.username I believe the error is in User = user line can somebody tell me how to initialize this profile object correctly AND save record in the database at the time of creating the user. A: student_profile = Profile.objects.create( # Profile user=user, #user surname=student_surname, initials=student_initials, entrance_number=student_entrance, email=student_email, father=student_father, skills=student_other_skills, sports=student_sports, birthday=student_birthday, contact=student_contact, address=student_residence, ) Not User model, must be Profile model, your model field is user, but you have used User
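Since the aim is to create the User and its Profile at the same time, it may also be worth wrapping both creations in one database transaction so a failure while creating the profile does not leave an orphaned user. The fragment below reuses the variables from the view above and assumes the usual from .models import Profile; note that objects.create() already saves the row, so no extra save() call is needed.

from django.contrib.auth.models import User
from django.db import transaction

from .models import Profile  # adjust to the app's actual import path

with transaction.atomic():
    user = User.objects.create_user(
        username=student_username,
        email=student_email,
        password=student_password,
    )
    Profile.objects.create(
        user=user,              # lowercase field name, as defined on the model
        surname=student_surname,
        initials=student_initials,
        # ... remaining profile fields exactly as in the view ...
    )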
how to add a profile object by using objects.create method
simply my error is this Exception has occurred: TypeError User() got an unexpected keyword argument 'User' here is the data I receive from the post request in view.py if request.method == "POST": student_surname = request.POST.get('student_surname') student_initials = request.POST.get('student_initials') student_birthday = request.POST.get('student_birthday') student_username = request.POST.get('student_username') student_email = request.POST.get('student_email') student_entrance = request.POST.get('student_entrance') student_contact = request.POST.get('student_contact') student_residence = request.POST.get('student_residence') student_father = request.POST.get('student_father') student_other_skills = request.POST.get('student_skills') student_sports = request.POST.get('student_sports') student_password = request.POST.get('student_password') I can create user object it's working in view.py user = User.objects.create_user( username=student_username, email=student_email, password=student_password ) some data is related to profile in view.py student_profile = User.objects.create( User=user, #Error line surname=student_surname, initials=student_initials, entrance_number=student_entrance, email=student_email, father=student_father, skills=student_other_skills, sports=student_sports, birthday=student_birthday, contact=student_contact, address=student_residence, ) student_profile.save() profile definition in models.py class Profile(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE) surname = models.CharField(max_length=50) initials = models.CharField(max_length=10, blank=True) entrance_number = models.CharField(max_length=10, blank=True) email = models.EmailField(max_length=254, blank=True) father = models.CharField(max_length=50, blank=True) skills = models.CharField(max_length=50, blank=True) sports = models.CharField(max_length=50, blank=True) birthday = models.DateField(null=True, blank=True) contact = models.CharField(max_length=100, null=True, blank=True) address = models.CharField(max_length=100, null=True, blank=True) # other fields here def __str__(self): return self.user.username I believe the error is in User = user line can somebody tell me how to initialize this profile object correctly AND save record in the database at the time of creating the user.
[ "student_profile = Profile.objects.create( # Profile\n user=user, #user\n surname=student_surname,\n initials=student_initials,\n entrance_number=student_entrance,\n email=student_email,\n father=student_father,\n skills=student_other_skills,\n sports=student_sports,\n birthday=student_birthday,\n contact=student_contact,\n address=student_residence,\n )\n\nNot User model, must be Profile model, your model field is user, but you have used User\n" ]
[ 1 ]
[]
[]
[ "authentication", "django", "django_models", "python", "python_3.x" ]
stackoverflow_0074658376_authentication_django_django_models_python_python_3.x.txt
Q: How does "Insert documentation comment stub" work in Pycharm for getting method parameters? I have enabled Insert documentation comment stub within Editor | General |Smart keys : But then how to get the method parameters type stubs? Adding the docstring triple quotes and then enter does open up the docstring - but with nothing in it: def get_self_join_clause(self, df, alias1='o', alias2 = 'n'): """ """ # pycharm added this automatically A: how to get the method parameters type stubs? PyCharm does generate the docstring stub with the type placeholders, but the placeholders aren't currently (using PyCharm 2022.1) populated from the __annotations__ with the types. This has been marked with the state "To be discussed" in the JetBrains bugtracker, see issues PY-23400 and PY-54930. Inclusion of the type placeholders is configured by checking File > Settings > Editor > General > Smart Keys > Python > Insert type placeholders in the documentation comment stub. It's worth noting the insert docstring stub intention is selected on the function name itself In this example with Napoleon style docstring format selected in File > Settings > Tools > Python Integrated Tools > Docstrings, the result:
How does "Insert documentation comment stub" work in Pycharm for getting method parameters?
I have enabled Insert documentation comment stub within Editor | General |Smart keys : But then how to get the method parameters type stubs? Adding the docstring triple quotes and then enter does open up the docstring - but with nothing in it: def get_self_join_clause(self, df, alias1='o', alias2 = 'n'): """ """ # pycharm added this automatically
[ "\nhow to get the method parameters type stubs?\n\nPyCharm does generate the docstring stub with the type placeholders, but the placeholders aren't currently (using PyCharm 2022.1) populated from the __annotations__ with the types. This has been marked with the state \"To be discussed\" in the JetBrains bugtracker, see issues PY-23400 and PY-54930.\nInclusion of the type placeholders is configured by checking File > Settings > Editor > General > Smart Keys > Python > Insert type placeholders in the documentation comment stub.\n\n\nIt's worth noting the insert docstring stub intention is selected on the function name itself\n\nIn this example with Napoleon style docstring format selected in File > Settings > Tools > Python Integrated Tools > Docstrings, the result:\n\n" ]
[ 1 ]
[]
[]
[ "docstring", "pycharm", "python" ]
stackoverflow_0074657042_docstring_pycharm_python.txt
Q: BeautifulSoup giving me many error lines when used I've installed beautifulsoup (file named bs4) into my pythonproject folder which is the same folder as the python file I am running. The .py file contains the following code, and for input I am using this URL to a simple page with 1 link which the code is supposed to retrieve. URL used as url input: http://data.pr4e.org/page1.htm .py code: import urllib.request, urllib.parse, urllib.error from bs4 import BeautifulSoup import ssl ctx = ssl.create_default_context() ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE url = input('Enter - ') html = urllib.request.urlopen(url, context=ctx).read() soup = BeautifulSoup(html, 'html.parser') # Retrieve all of the anchor tags tags = soup('a') for tag in tags: print(tag.get('href', None)) Though I could be wrong, it appears to me that bs4 imports correctly because my IDE program suggests BeautifulSoup when I begin typing it. After all, it is installed in the same directory as the .py file. however, It spits out the following lines of error when I run it using the previously provided url: Traceback (most recent call last): File "C:\Users\Thomas\PycharmProjects\pythonProject\main.py", line 16, in <module> soup = BeautifulSoup(html, 'html.parser') File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\__init__.py", line 215, in __init__ self._feed() File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\__init__.py", line 241, in _feed self.endData() File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\__init__.py", line 315, in endData self.object_was_parsed(o) File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\__init__.py", line 320, in object_was_parsed previous_element = most_recent_element or self._most_recent_element File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\element.py", line 1001, in __getattr__ return self.find(tag) File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\element.py", line 1238, in find l = self.find_all(name, attrs, recursive, text, 1, **kwargs) File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\element.py", line 1259, in find_all return self._find_all(name, attrs, text, limit, generator, **kwargs) File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\element.py", line 516, in _find_all strainer = SoupStrainer(name, attrs, text, **kwargs) File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\element.py", line 1560, in __init__ self.text = self._normalize_search_value(text) File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\element.py", line 1565, in _ normalize_search_value if (isinstance(value, str) or isinstance(value, collections.Callable) or hasattr(value, 'match') AttributeError: module 'collections' has no attribute 'Callable' Process finished with exit code 1 The lines being referred to in the error messages are from files inside bs4 that were downloaded as part of it. I haven't edited any of the bs4 contained files or even touched them. Can anyone help me figure out why bs4 isn't working? A: Are you using python 3.10? Looks like beautifulsoup library is using removed deprecated aliases to Collections Abstract Base Classes. More info here: https://docs.python.org/3/whatsnew/3.10.html#removed A quick fix is to paste these 2 lines just below your imports: import collections collections.Callable = collections.abc.Callable A: Andrey, i cannot comment yet. But i tried your fix and Im using Thonny and using 3.10 in terminal. But after adding the two import collections and callable lines. 
I get another error in Thonny that isn't shown in the terminal. When I run the program in the terminal it simply seems to do nothing. In Thonny it suggests that "Module has no attribute 'Callable'".
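Two practical notes on the workaround: newer beautifulsoup4 releases no longer reference collections.Callable at all, so upgrading the installed package (rather than keeping an old copy of bs4 vendored next to the script) is the cleaner long-term fix; and if the monkey-patch route is kept, it only has to run before any parsing happens, for example:

# Workaround for older bs4 copies on Python >= 3.10, where the deprecated
# collections.Callable alias was removed from the standard library.
import collections
import collections.abc

if not hasattr(collections, "Callable"):
    collections.Callable = collections.abc.Callable

from bs4 import BeautifulSoup

# Tiny inline document, just to show the patched parser working.
soup = BeautifulSoup("<a href='http://example.com/page1.htm'>link</a>", "html.parser")
print([tag.get("href") for tag in soup("a")])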
BeautifulSoup giving me many error lines when used
I've installed beautifulsoup (file named bs4) into my pythonproject folder which is the same folder as the python file I am running. The .py file contains the following code, and for input I am using this URL to a simple page with 1 link which the code is supposed to retrieve. URL used as url input: http://data.pr4e.org/page1.htm .py code: import urllib.request, urllib.parse, urllib.error from bs4 import BeautifulSoup import ssl ctx = ssl.create_default_context() ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE url = input('Enter - ') html = urllib.request.urlopen(url, context=ctx).read() soup = BeautifulSoup(html, 'html.parser') # Retrieve all of the anchor tags tags = soup('a') for tag in tags: print(tag.get('href', None)) Though I could be wrong, it appears to me that bs4 imports correctly because my IDE program suggests BeautifulSoup when I begin typing it. After all, it is installed in the same directory as the .py file. however, It spits out the following lines of error when I run it using the previously provided url: Traceback (most recent call last): File "C:\Users\Thomas\PycharmProjects\pythonProject\main.py", line 16, in <module> soup = BeautifulSoup(html, 'html.parser') File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\__init__.py", line 215, in __init__ self._feed() File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\__init__.py", line 241, in _feed self.endData() File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\__init__.py", line 315, in endData self.object_was_parsed(o) File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\__init__.py", line 320, in object_was_parsed previous_element = most_recent_element or self._most_recent_element File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\element.py", line 1001, in __getattr__ return self.find(tag) File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\element.py", line 1238, in find l = self.find_all(name, attrs, recursive, text, 1, **kwargs) File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\element.py", line 1259, in find_all return self._find_all(name, attrs, text, limit, generator, **kwargs) File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\element.py", line 516, in _find_all strainer = SoupStrainer(name, attrs, text, **kwargs) File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\element.py", line 1560, in __init__ self.text = self._normalize_search_value(text) File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\element.py", line 1565, in _ normalize_search_value if (isinstance(value, str) or isinstance(value, collections.Callable) or hasattr(value, 'match') AttributeError: module 'collections' has no attribute 'Callable' Process finished with exit code 1 The lines being referred to in the error messages are from files inside bs4 that were downloaded as part of it. I haven't edited any of the bs4 contained files or even touched them. Can anyone help me figure out why bs4 isn't working?
[ "Are you using python 3.10? Looks like beautifulsoup library is using removed deprecated aliases to Collections Abstract Base Classes. More info here: https://docs.python.org/3/whatsnew/3.10.html#removed\nA quick fix is to paste these 2 lines just below your imports:\nimport collections\ncollections.Callable = collections.abc.Callable\n\n", "Andrey, i cannot comment yet. But i tried your fix and Im using Thonny and using 3.10 in terminal. But after adding the two import collections and callable lines. i get another error in Thonny that isnt shown in terminal. when i run the program in terminal it simply seems to do nothing. In Thonny it suggests that \"Module has no attribute \"Callable\"\n" ]
[ 2, 0 ]
[]
[]
[ "beautifulsoup", "python" ]
stackoverflow_0070677261_beautifulsoup_python.txt
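A minimal sketch of the workaround from the accepted answer applied to the script in the question (assuming Python 3.10+, where the deprecated collections aliases were removed). The URL is the one from the question; a cleaner fix is usually to install a current beautifulsoup4 with pip rather than copying an old bs4 folder into the project.

import collections
import collections.abc

# Re-expose the ABC under its old name so the bundled (old) bs4 keeps working on 3.10+.
collections.Callable = collections.abc.Callable

import urllib.request
from bs4 import BeautifulSoup

html = urllib.request.urlopen('http://data.pr4e.org/page1.htm').read()
soup = BeautifulSoup(html, 'html.parser')
for tag in soup('a'):
    print(tag.get('href', None))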
Q: Why doesn't Element.attrib include namespace definitions? I'd like to create a XML namespace mapping (e.g., to use in findall calls as in the Python documentation of ElementTree). Given the definitions seem to exist as attributes of the xbrl root element, I'd have thought I could just examine the attrib attribute of the root element within my ElementTree. However, the following code from io import StringIO import xml.etree.ElementTree as ET TEST = '''<?xml version="1.0" encoding="utf-8"?> <xbrl xml:lang="en-US" xmlns="http://www.xbrl.org/2003/instance" xmlns:country="http://xbrl.sec.gov/country/2021" xmlns:dei="http://xbrl.sec.gov/dei/2021q4" xmlns:iso4217="http://www.xbrl.org/2003/iso4217" xmlns:link="http://www.xbrl.org/2003/linkbase" xmlns:nvda="http://www.nvidia.com/20220130" xmlns:srt="http://fasb.org/srt/2021-01-31" xmlns:stpr="http://xbrl.sec.gov/stpr/2021" xmlns:us-gaap="http://fasb.org/us-gaap/2021-01-31" xmlns:xbrldi="http://xbrl.org/2006/xbrldi" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> </xbrl>''' xbrl = ET.parse(StringIO(TEST)) print(xbrl.getroot().attrib) produces the following output: {'{http://www.w3.org/XML/1998/namespace}lang': 'en-US'} Why aren't any of the namespace attributes showing up in root.attrib? I'd at least expect xlmns to be in the dictionary given it has no prefix. What have I tried? The following code seems to work to generate the namespace mapping: print({prefix: uri for key, (prefix, uri) in ET.iterparse(StringIO(TEST), events=['start-ns'])}) output: {'': 'http://www.xbrl.org/2003/instance', 'country': 'http://xbrl.sec.gov/country/2021', 'dei': 'http://xbrl.sec.gov/dei/2021q4', 'iso4217': 'http://www.xbrl.org/2003/iso4217', 'link': 'http://www.xbrl.org/2003/linkbase', 'nvda': 'http://www.nvidia.com/20220130', 'srt': 'http://fasb.org/srt/2021-01-31', 'stpr': 'http://xbrl.sec.gov/stpr/2021', 'us-gaap': 'http://fasb.org/us-gaap/2021-01-31', 'xbrldi': 'http://xbrl.org/2006/xbrldi', 'xlink': 'http://www.w3.org/1999/xlink', 'xsi': 'http://www.w3.org/2001/XMLSchema-instance'} But yikes is it gross to have to parse the file twice. A: As for the answer to your specific question, why the attrib list doesn't contain the namespace prefix decls, sorry for the unquenching answer: because they're not attributes. http://www.w3.org/XML/1998/namespace is a special schema that doesn't act like the other schemas in your userspace. In that representation, xmlns:prefix="uri" is an attribute. In all other subordinate (by parsing sequence) schemas, xmlns:prefix="uri" is a special thing, a namespace prefix declaration, which is different than an attribute on a node or element. I don't have a reference for this but it holds true perfectly in at least a half dozen (correct) implementations of XML parsers that I've used, including those from IBM, Microsoft and Oracle. As for the ugliness of reparsing the file, I feel your pain but it's necessary. As tdelaney so well pointed out, you may not assume that all of your namespace decls or prefixes must be on your root element. Be prepared for the possibility of the same prefix being redefined with a different namespace on every node in your document. This may hold true and the library must correctly work with it, even if it is never the case your document (or worse, if it's never been the case so far). Consider if perhaps you are shoehorning some text processing to parse or query XML when there may be a better solution, like XPath or XQuery. 
There have been some good recent changes to Saxon, and to its Python wrappers, even though its pricing model has changed.
Why doesn't Element.attrib include namespace definitions?
I'd like to create a XML namespace mapping (e.g., to use in findall calls as in the Python documentation of ElementTree). Given the definitions seem to exist as attributes of the xbrl root element, I'd have thought I could just examine the attrib attribute of the root element within my ElementTree. However, the following code from io import StringIO import xml.etree.ElementTree as ET TEST = '''<?xml version="1.0" encoding="utf-8"?> <xbrl xml:lang="en-US" xmlns="http://www.xbrl.org/2003/instance" xmlns:country="http://xbrl.sec.gov/country/2021" xmlns:dei="http://xbrl.sec.gov/dei/2021q4" xmlns:iso4217="http://www.xbrl.org/2003/iso4217" xmlns:link="http://www.xbrl.org/2003/linkbase" xmlns:nvda="http://www.nvidia.com/20220130" xmlns:srt="http://fasb.org/srt/2021-01-31" xmlns:stpr="http://xbrl.sec.gov/stpr/2021" xmlns:us-gaap="http://fasb.org/us-gaap/2021-01-31" xmlns:xbrldi="http://xbrl.org/2006/xbrldi" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> </xbrl>''' xbrl = ET.parse(StringIO(TEST)) print(xbrl.getroot().attrib) produces the following output: {'{http://www.w3.org/XML/1998/namespace}lang': 'en-US'} Why aren't any of the namespace attributes showing up in root.attrib? I'd at least expect xlmns to be in the dictionary given it has no prefix. What have I tried? The following code seems to work to generate the namespace mapping: print({prefix: uri for key, (prefix, uri) in ET.iterparse(StringIO(TEST), events=['start-ns'])}) output: {'': 'http://www.xbrl.org/2003/instance', 'country': 'http://xbrl.sec.gov/country/2021', 'dei': 'http://xbrl.sec.gov/dei/2021q4', 'iso4217': 'http://www.xbrl.org/2003/iso4217', 'link': 'http://www.xbrl.org/2003/linkbase', 'nvda': 'http://www.nvidia.com/20220130', 'srt': 'http://fasb.org/srt/2021-01-31', 'stpr': 'http://xbrl.sec.gov/stpr/2021', 'us-gaap': 'http://fasb.org/us-gaap/2021-01-31', 'xbrldi': 'http://xbrl.org/2006/xbrldi', 'xlink': 'http://www.w3.org/1999/xlink', 'xsi': 'http://www.w3.org/2001/XMLSchema-instance'} But yikes is it gross to have to parse the file twice.
[ "As for the answer to your specific question, why the attrib list doesn't contain the namespace prefix decls, sorry for the unquenching answer: because they're not attributes.\nhttp://www.w3.org/XML/1998/namespace is a special schema that doesn't act like the other schemas in your userspace. In that representation, xmlns:prefix=\"uri\" is an attribute. In all other subordinate (by parsing sequence) schemas, xmlns:prefix=\"uri\" is a special thing, a namespace prefix declaration, which is different than an attribute on a node or element. I don't have a reference for this but it holds true perfectly in at least a half dozen (correct) implementations of XML parsers that I've used, including those from IBM, Microsoft and Oracle.\nAs for the ugliness of reparsing the file, I feel your pain but it's necessary. As tdelaney so well pointed out, you may not assume that all of your namespace decls or prefixes must be on your root element.\nBe prepared for the possibility of the same prefix being redefined with a different namespace on every node in your document. This may hold true and the library must correctly work with it, even if it is never the case your document (or worse, if it's never been the case so far).\nConsider if perhaps you are shoehorning some text processing to parse or query XML when there may be a better solution, like XPath or XQuery. There are some good recent changes to and Python wrappers for Saxon, even though their pricing model has changed.\n" ]
[ 1 ]
[]
[]
[ "elementtree", "python", "xml", "xml_namespaces" ]
stackoverflow_0074337020_elementtree_python_xml_xml_namespaces.txt
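A possible single-pass alternative to the "parse the file twice" complaint in the question: iterparse can report start-ns events and build the tree in the same pass. This is only a sketch (the helper name parse_with_nsmap is made up here); TEST refers to the XML string defined in the question.

from io import StringIO
import xml.etree.ElementTree as ET

def parse_with_nsmap(source):
    nsmap = {}
    root = None
    for event, payload in ET.iterparse(source, events=('start-ns', 'start')):
        if event == 'start-ns':
            prefix, uri = payload          # payload is a (prefix, uri) pair
            nsmap[prefix] = uri
        elif root is None:
            root = payload                 # first 'start' event carries the root element
    return root, nsmap

root, nsmap = parse_with_nsmap(StringIO(TEST))   # TEST: the XML string from the question
print(nsmap)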
Q: Get records for the nearest date if record does not exist for a particular date I have a pandas dataframe of stock records, my goal is to pass in a particular 'day' e.g 8 and get the filtered data frame for the 8th of each month and year in the dataset. I have gone through some SO questions and managed to get one part of my requirement that was getting the records for a particular day, however if the data for say '8th' does not exist for the particular month and year, I need to get the records for the closest day where record exists for this particular month and year. As an example, if I pass in 8th and there is no record for 8th Jan' 2022, I need to see if records exists for 7th and 9th Jan'22, and so on..and get the record for the nearest date. If record is present in both 7th and 9th, I will get the record for 9th (higher date). However, it is possible if the record for 7th exists and 9th does not exist, then I will get the record for 7th (closest). Code I have written so far filtered_df = data.loc[(data['Date'].dt.day == 8)] If the dataset is required, please let me know. I tried to make it clear but if there is any doubt, please let me know. Any help in the correct direction is appreciated. A: Alternative 1 Resample to a daily resolution, selecting the nearest day to fill in missing values: df2 = df.resample('D').nearest() df2 = df2.loc[df2.index.day == 8] Alternative 2 A more general method (and a tiny bit faster) is to generate dates/times of your choice, then use reindex() and method 'nearest'. It is more general because you can use any series of timestamps you could come up with (not necessarily aligned with any frequency). dates = pd.date_range( start=df.first_valid_index().normalize(), end=df.last_valid_index(), freq='D') dates = dates[dates.day == 8] df2 = df.reindex(dates, method='nearest') Example Let's start with a reproducible example: import yfinance as yf df = yf.download(['AAPL', 'AMZN'], start='2022-01-01', end='2022-12-31', freq='D') >>> df.iloc[:10, :5] Adj Close Close High AAPL AMZN AAPL AMZN AAPL Date 2022-01-03 180.959747 170.404495 182.009995 170.404495 182.880005 2022-01-04 178.663086 167.522003 179.699997 167.522003 182.940002 2022-01-05 173.910645 164.356995 174.919998 164.356995 180.169998 2022-01-06 171.007523 163.253998 172.000000 163.253998 175.300003 2022-01-07 171.176529 162.554001 172.169998 162.554001 174.139999 2022-01-10 171.196426 161.485992 172.190002 161.485992 172.500000 2022-01-11 174.069748 165.362000 175.080002 165.362000 175.179993 2022-01-12 174.517136 165.207001 175.529999 165.207001 177.179993 2022-01-13 171.196426 161.214005 172.190002 161.214005 176.619995 2022-01-14 172.071335 162.138000 173.070007 162.138000 173.779999 Now: df2 = df.resample('D').nearest() df2 = df2.loc[df2.index.day == 8] >>> df2.iloc[:5, :5] Adj Close Close High AAPL AMZN AAPL AMZN AAPL 2022-01-08 171.176529 162.554001 172.169998 162.554001 174.139999 2022-02-08 174.042633 161.413498 174.830002 161.413498 175.350006 2022-03-08 156.730942 136.014496 157.440002 136.014496 162.880005 2022-04-08 169.323975 154.460495 170.089996 154.460495 171.779999 2022-05-08 151.597595 108.789001 152.059998 108.789001 155.830002 Warning Replacing a missing day with data from the future (which is what happens when the nearest day is after the missing one) is called peak-ahead and can cause peak-ahead bias in quant research that would use that data. It is usually considered dangerous. You'd be safer using method='ffill'.
Get records for the nearest date if record does not exist for a particular date
I have a pandas dataframe of stock records, my goal is to pass in a particular 'day' e.g 8 and get the filtered data frame for the 8th of each month and year in the dataset. I have gone through some SO questions and managed to get one part of my requirement that was getting the records for a particular day, however if the data for say '8th' does not exist for the particular month and year, I need to get the records for the closest day where record exists for this particular month and year. As an example, if I pass in 8th and there is no record for 8th Jan' 2022, I need to see if records exists for 7th and 9th Jan'22, and so on..and get the record for the nearest date. If record is present in both 7th and 9th, I will get the record for 9th (higher date). However, it is possible if the record for 7th exists and 9th does not exist, then I will get the record for 7th (closest). Code I have written so far filtered_df = data.loc[(data['Date'].dt.day == 8)] If the dataset is required, please let me know. I tried to make it clear but if there is any doubt, please let me know. Any help in the correct direction is appreciated.
[ "Alternative 1\nResample to a daily resolution, selecting the nearest day to fill in missing values:\ndf2 = df.resample('D').nearest()\ndf2 = df2.loc[df2.index.day == 8]\n\nAlternative 2\nA more general method (and a tiny bit faster) is to generate dates/times of your choice, then use reindex() and method 'nearest'. It is more general because you can use any series of timestamps you could come up with (not necessarily aligned with any frequency).\ndates = pd.date_range(\n start=df.first_valid_index().normalize(), end=df.last_valid_index(),\n freq='D')\ndates = dates[dates.day == 8]\ndf2 = df.reindex(dates, method='nearest')\n\nExample\nLet's start with a reproducible example:\nimport yfinance as yf\n\ndf = yf.download(['AAPL', 'AMZN'], start='2022-01-01', end='2022-12-31', freq='D')\n>>> df.iloc[:10, :5]\n Adj Close Close High\n AAPL AMZN AAPL AMZN AAPL\nDate \n2022-01-03 180.959747 170.404495 182.009995 170.404495 182.880005\n2022-01-04 178.663086 167.522003 179.699997 167.522003 182.940002\n2022-01-05 173.910645 164.356995 174.919998 164.356995 180.169998\n2022-01-06 171.007523 163.253998 172.000000 163.253998 175.300003\n2022-01-07 171.176529 162.554001 172.169998 162.554001 174.139999\n2022-01-10 171.196426 161.485992 172.190002 161.485992 172.500000\n2022-01-11 174.069748 165.362000 175.080002 165.362000 175.179993\n2022-01-12 174.517136 165.207001 175.529999 165.207001 177.179993\n2022-01-13 171.196426 161.214005 172.190002 161.214005 176.619995\n2022-01-14 172.071335 162.138000 173.070007 162.138000 173.779999\n\nNow:\ndf2 = df.resample('D').nearest()\ndf2 = df2.loc[df2.index.day == 8]\n\n>>> df2.iloc[:5, :5]\n Adj Close Close High\n AAPL AMZN AAPL AMZN AAPL\n2022-01-08 171.176529 162.554001 172.169998 162.554001 174.139999\n2022-02-08 174.042633 161.413498 174.830002 161.413498 175.350006\n2022-03-08 156.730942 136.014496 157.440002 136.014496 162.880005\n2022-04-08 169.323975 154.460495 170.089996 154.460495 171.779999\n2022-05-08 151.597595 108.789001 152.059998 108.789001 155.830002\n\nWarning\nReplacing a missing day with data from the future (which is what happens when the nearest day is after the missing one) is called peak-ahead and can cause peak-ahead bias in quant research that would use that data. It is usually considered dangerous. You'd be safer using method='ffill'.\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074657984_dataframe_pandas_python.txt
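The warning at the end of the answer suggests method='ffill' to avoid leaking future data; below is a small sketch of that variant. The function name and the price_df variable are illustrative, not from the answer.

import pandas as pd

def records_for_day(df, day):
    # df is expected to have a sorted DatetimeIndex
    dates = pd.date_range(df.index.min().normalize(), df.index.max(), freq='D')
    dates = dates[dates.day == day]
    # 'ffill' only looks backwards in time, so a missing 8th is filled from the 7th, 6th, ...
    return df.reindex(dates, method='ffill')

# monthly = records_for_day(price_df, 8)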
Q: ImportError: No module named wget Please help me to find reason on MacOS why when I including library import wget I'm getting error File "/Users/xx/python/import.py", line 4, in <module> import wget ImportError: No module named wget This library is installed xx$ pip3 install wget Requirement already satisfied: wget in /usr/local/lib/python3.6/site-packages (3.2) I just suppose that some path is not set, but I don't know how to prove this. Please help me find solution for this problem. A: Try pip install wget, maybe you’re using python 2 A: With pip3 you are installing module for python 3, It can b that you have both versions of python 2 and 3 and you your environment is pointing default to python 2 Check python version or install wget for python 2 python -V pip install wget A: this should not be the case, but check if site-packages is in the path for accessing modules >>> import sys >>> sys.path [..., '...\\python3.6\\lib\\site-packages', ...] ## if this is here I cannot help you if not, try repairing python you can do that by clicking setup file (one with which you installed in the first place), and among 3 options click repair A: If you process the python script by command: python import.py or python3 import.py it should work. But if you process the executable python script by command: ./import.py ENTER then incldue as the first line of the script import.py: #!/usr/bin/env python or #!/usr/bin/env python3 A: sudo apt-get install --reinstall python3-wget A: The following command worked for me in Jupyter Lab !pip install wget Hope this does help! A: In Jupyter Lab, although my Python was 3.9 but it was using 3.7 paths (I have multiple Pythons installed): import sys sys.path ['D:\\Projects', 'C:\\Program Files\\Python37\\python37.zip', 'C:\\Program Files\\Python37\\DLLs', 'C:\\Program Files\\Python37\\lib', 'C:\\Program Files\\Python37', '', 'C:\\Users\\John\\AppData\\Roaming\\Python\\Python37\\site-packages', 'C:\\Program Files\\Python37\\lib\\site-packages', 'C:\\Program Files\\Python37\\lib\\site-packages\\win32', 'C:\\Program Files\\Python37\\lib\\site-packages\\win32\\lib', 'C:\\Program Files\\Python37\\lib\\site-packages\\Pythonwin', 'C:\\Program Files\\Python37\\lib\\site-packages\\IPython\\extensions', 'C:\\Users\\John\\.ipython'] So I did !pip3.7 install --user wget, and then it worked. A: pip install wget if in colab use: !pip install wget A: I had the same problem recently and using python3 instead of py worked for me.
ImportError: No module named wget
Please help me find the reason why, on macOS, when I include the library with import wget I'm getting the error File "/Users/xx/python/import.py", line 4, in <module> import wget ImportError: No module named wget This library is installed xx$ pip3 install wget Requirement already satisfied: wget in /usr/local/lib/python3.6/site-packages (3.2) I suspect that some path is not set, but I don't know how to prove this. Please help me find a solution for this problem.
[ "Try pip install wget, maybe you’re using python 2\n", "With pip3 you are installing module for python 3,\nIt can b that you have both versions of python 2 and 3 and you your environment is pointing default to python 2 \nCheck python version or install wget for python 2\npython -V \npip install wget\n\n", "this should not be the case, but check if site-packages is in the path for accessing modules\n>>> import sys\n>>> sys.path\n[..., '...\\\\python3.6\\\\lib\\\\site-packages', ...] ## if this is here I cannot help you\n\nif not, try repairing python\nyou can do that by clicking setup file (one with which you installed in the first place),\nand among 3 options click repair\n", "If you process the python script by command:\npython import.py\n\nor \npython3 import.py\n\nit should work.\nBut if you process the executable python script by command:\n./import.py ENTER\n\nthen incldue as the first line of the script import.py:\n#!/usr/bin/env python\n\nor\n#!/usr/bin/env python3\n\n", "sudo apt-get install --reinstall python3-wget\n\n", "The following command worked for me in Jupyter Lab\n!pip install wget\n\nHope this does help!\n", "In Jupyter Lab, although my Python was 3.9 but it was using 3.7 paths (I have multiple Pythons installed):\nimport sys\n\nsys.path\n\n['D:\\\\Projects',\n 'C:\\\\Program Files\\\\Python37\\\\python37.zip',\n 'C:\\\\Program Files\\\\Python37\\\\DLLs',\n 'C:\\\\Program Files\\\\Python37\\\\lib',\n 'C:\\\\Program Files\\\\Python37',\n '',\n 'C:\\\\Users\\\\John\\\\AppData\\\\Roaming\\\\Python\\\\Python37\\\\site-packages',\n 'C:\\\\Program Files\\\\Python37\\\\lib\\\\site-packages',\n 'C:\\\\Program Files\\\\Python37\\\\lib\\\\site-packages\\\\win32',\n 'C:\\\\Program Files\\\\Python37\\\\lib\\\\site-packages\\\\win32\\\\lib',\n 'C:\\\\Program Files\\\\Python37\\\\lib\\\\site-packages\\\\Pythonwin',\n 'C:\\\\Program Files\\\\Python37\\\\lib\\\\site-packages\\\\IPython\\\\extensions',\n 'C:\\\\Users\\\\John\\\\.ipython']\n\nSo I did !pip3.7 install --user wget, and then it worked.\n", "pip install wget\n\nif in colab use:\n!pip install wget\n\n", "I had the same problem recently and using python3 instead of py worked for me.\n" ]
[ 14, 4, 1, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "macos", "python", "wget" ]
stackoverflow_0051069716_macos_python_wget.txt
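The usual cause of this error is that pip3 installed wget into a different interpreter than the one running the script. A quick, hedged way to check is to print which interpreter is executing and then install into exactly that one with python -m pip.

import sys

print(sys.executable)   # the Python binary actually running import.py
print(sys.version)

# Then, from a shell, install into that same interpreter, e.g.:
#   /usr/local/bin/python3 -m pip install wget
# or more generally:
#   python3 -m pip install wget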
Q: django.db.utils.IntegrityError: (1062, "Duplicate entry '8d4d1c76950748619f93ee2bfffc7de5' for key 'request_id'") I don't understand what kind of error is ? sometimes this code works and after 1-2 times submitting form then trying to submit form again with different details then i got this error, django.db.utils.IntegrityError: (1062, "Duplicate entry '8d4d1c76950748619f93ee2bfffc7de5' for key 'request_id'") Here this is my views.py code @api_view(['POST', 'GET']) def add_info_view(request): if request.method == 'POST': form = GitInfoForm(request.POST) if form.is_valid(): form.save() try: git_Id = form.cleaned_data['git_Id'] s = Gitinformation.objects.filter(git_Id=git_Id).values('request_id') print('Value of S :', s[0]['request_id']) s = s[0]['request_id'] approve_url = f"http://127.0.0.1:8000/Approve/?request_id={str(s)}" print("Url : ", approve_url) try: send_mail( 'KSA Test Activation', approve_url, '[email protected]', ['[email protected]'], fail_silently=False, ) request.session['approve_url'] = approve_url print('Approve Url sent : ', approve_url) except Exception as e: pass except Exception as e: pass form = GitInfoForm() form = GitInfoForm() return render(request, 'requestApp/addInfo.html', {'form': form}) How to getrid of this error, please help me. A: Based on your comment. request_id = models.UUIDField(primary_key=False, default=uuid.uuid4().hex, editable=False, unique=True) You assigned an instance of the UUID for the default value. In fact, you didn't set a function to generate a new UUID for each record. If you check the related migration file, you can see a line like this: ('request_id', models.UUIDField(default='97c8a76eefe8445081fcfec3af4f1df2', editable=False, unique=True)) you set an instance of the UUID as default but the unique property is set, because of this, the first time you save a record is ok but the next record, you face with error. Actually, you have to set a function for the default to run for each record. You should set the properties like the below: request_id = models.UUIDField(primary_key=False, default=uuid.uuid4, editable=False, unique=True)
django.db.utils.IntegrityError: (1062, "Duplicate entry '8d4d1c76950748619f93ee2bfffc7de5' for key 'request_id'")
I don't understand what kind of error is ? sometimes this code works and after 1-2 times submitting form then trying to submit form again with different details then i got this error, django.db.utils.IntegrityError: (1062, "Duplicate entry '8d4d1c76950748619f93ee2bfffc7de5' for key 'request_id'") Here this is my views.py code @api_view(['POST', 'GET']) def add_info_view(request): if request.method == 'POST': form = GitInfoForm(request.POST) if form.is_valid(): form.save() try: git_Id = form.cleaned_data['git_Id'] s = Gitinformation.objects.filter(git_Id=git_Id).values('request_id') print('Value of S :', s[0]['request_id']) s = s[0]['request_id'] approve_url = f"http://127.0.0.1:8000/Approve/?request_id={str(s)}" print("Url : ", approve_url) try: send_mail( 'KSA Test Activation', approve_url, '[email protected]', ['[email protected]'], fail_silently=False, ) request.session['approve_url'] = approve_url print('Approve Url sent : ', approve_url) except Exception as e: pass except Exception as e: pass form = GitInfoForm() form = GitInfoForm() return render(request, 'requestApp/addInfo.html', {'form': form}) How to getrid of this error, please help me.
[ "Based on your comment.\nrequest_id = models.UUIDField(primary_key=False, default=uuid.uuid4().hex, editable=False, unique=True)\n\nYou assigned an instance of the UUID for the default value. In fact, you didn't set a function to generate a new UUID for each record. If you check the related migration file, you can see a line like this:\n('request_id', models.UUIDField(default='97c8a76eefe8445081fcfec3af4f1df2', editable=False, unique=True))\n\nyou set an instance of the UUID as default but the unique property is set, because of this, the first time you save a record is ok but the next record, you face with error.\nActually, you have to set a function for the default to run for each record. You should set the properties like the below:\nrequest_id = models.UUIDField(primary_key=False, default=uuid.uuid4, editable=False, unique=True)\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_rest_framework", "django_views", "mysql", "python" ]
stackoverflow_0074652528_django_django_rest_framework_django_views_mysql_python.txt
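A tiny standalone illustration of the difference the answer describes: uuid.uuid4() is evaluated once when the model class is defined, while passing the callable uuid.uuid4 makes Django call it for every new row.

import uuid

fixed_default = uuid.uuid4().hex   # evaluated once; the same value is reused forever
make_default = uuid.uuid4          # a callable; produces a fresh value on every call

print(fixed_default, fixed_default)            # identical, so a unique column rejects the second row
print(make_default().hex, make_default().hex)  # two different values

# Corrected field, as in the answer:
# request_id = models.UUIDField(primary_key=False, default=uuid.uuid4, editable=False, unique=True)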
Q: Django3 Paginator with function based view class ProductList(ListView): model = Product paginate_by = 8 def company_page(request, slug): ... product_list = Product.objects.filter(company=company).order_by('-pk') paginator = Paginator(product_list, 4) page_number = request.GET.get('page') page_obj = paginator.get_page(page_number) return render(request, 'product/product_list.html', { ..., 'product_list': product_list, 'page_obj': page_obj }) views.py <nav aria-label="Pagination"> <ul class="pagination justify-content-center my-5"> {% if page_obj.has_previous %} <li class="page-item mx-auto lead"> <a class="page-link" href="?page={{page_obj.previous_page_number}}" tabindex="-1" aria-disabled="true"> Newer</a> </li> {% else %} <li class="page-item disabled"> <a class="page-link" href="#" tabindex="-1" aria-disabled="true"> Newer</a> </li> {% endif %} {% if page_obj.has_next %} <li class="page-item mx-auto lead"> <a class="page-link" href="?page={{page_obj.next_page_number}}"> Older</a> </li> {% else %} <li class="page-item disabled mx-auto lead"> <a class="page-link" href="#!"> Older</a> </li> {% endif %} </ul> </nav> product_list.html I added pagination on ProductList view with paginated_by and imported Paginator to make other pages using a function view but it's only paginated on ProductList view and doesn't work on company_page view. The Newer & Older buttons work but the page keeps showing every product_list objects. How can I make it work on all pages? A: Try this: views.py # other views ... def company_page(request, slug): product_list = Product.objects.filter(company=company).order_by('-pk') page = request.GET.get('page', 1) paginator = Paginator(product_list, 4) try: Products = paginator.page(page) except PageNotAnInteger: Products = paginator.page(1) except EmptyPage: Products = paginator.page(paginator.num_pages) context = {'Products': Products} return render(request, 'product/product_list.html', context) product_list.html <nav aria-label="Pagination"> {% if Products.has_other_pages %} <ul class="pagination justify-content-center my-5"> {% if Products.has_previous %} <li class="page-item mx-auto lead"> <a class="page-link" href="?page={{ Products.previous_page_number }}" tabindex="-1" aria-disabled="true"> Previous</a> </li> {% else %} <li class="page-item disabled"> <a class="page-link" href="#" tabindex="-1" aria-disabled="true"> Previous</a> </li> {% endif %} {% for i in Products.paginator.page_range %} {% if Products.number == i %} <li class="page-item mx-auto lead"> <a class="page-link" href="?page={{ i }}"> {{ i }}</a> </li> {% endif %} {% endfor %} {% if Products.has_next %} <li class="page-item mx-auto lead"> <a class="page-link" href="?page={{ Products.next_page_number }}"> Next</a> </li> {% else %} <li class="page-item disabled mx-auto lead"> <a class="page-link" href="#!"> Next</a> </li> {% endif %} </ul> {% endif %} </nav>
Django3 Paginator with function based view
class ProductList(ListView): model = Product paginate_by = 8 def company_page(request, slug): ... product_list = Product.objects.filter(company=company).order_by('-pk') paginator = Paginator(product_list, 4) page_number = request.GET.get('page') page_obj = paginator.get_page(page_number) return render(request, 'product/product_list.html', { ..., 'product_list': product_list, 'page_obj': page_obj }) views.py <nav aria-label="Pagination"> <ul class="pagination justify-content-center my-5"> {% if page_obj.has_previous %} <li class="page-item mx-auto lead"> <a class="page-link" href="?page={{page_obj.previous_page_number}}" tabindex="-1" aria-disabled="true"> Newer</a> </li> {% else %} <li class="page-item disabled"> <a class="page-link" href="#" tabindex="-1" aria-disabled="true"> Newer</a> </li> {% endif %} {% if page_obj.has_next %} <li class="page-item mx-auto lead"> <a class="page-link" href="?page={{page_obj.next_page_number}}"> Older</a> </li> {% else %} <li class="page-item disabled mx-auto lead"> <a class="page-link" href="#!"> Older</a> </li> {% endif %} </ul> </nav> product_list.html I added pagination on ProductList view with paginated_by and imported Paginator to make other pages using a function view but it's only paginated on ProductList view and doesn't work on company_page view. The Newer & Older buttons work but the page keeps showing every product_list objects. How can I make it work on all pages?
[ "Try this:\nviews.py\n# other views ...\n\ndef company_page(request, slug):\n product_list = Product.objects.filter(company=company).order_by('-pk')\n\n page = request.GET.get('page', 1)\n paginator = Paginator(product_list, 4)\n\n try:\n Products = paginator.page(page)\n except PageNotAnInteger:\n Products = paginator.page(1)\n except EmptyPage:\n Products = paginator.page(paginator.num_pages)\n\n context = {'Products': Products}\n return render(request, 'product/product_list.html', context)\n\nproduct_list.html\n<nav aria-label=\"Pagination\">\n{% if Products.has_other_pages %}\n <ul class=\"pagination justify-content-center my-5\">\n {% if Products.has_previous %}\n <li class=\"page-item mx-auto lead\">\n <a class=\"page-link\" href=\"?page={{ Products.previous_page_number }}\" tabindex=\"-1\" aria-disabled=\"true\">\n Previous</a>\n </li>\n {% else %}\n <li class=\"page-item disabled\">\n <a class=\"page-link\" href=\"#\" tabindex=\"-1\" aria-disabled=\"true\">\n Previous</a>\n </li>\n {% endif %}\n {% for i in Products.paginator.page_range %}\n {% if Products.number == i %}\n <li class=\"page-item mx-auto lead\">\n <a class=\"page-link\" href=\"?page={{ i }}\">\n {{ i }}</a>\n </li>\n {% endif %}\n {% endfor %}\n {% if Products.has_next %}\n\n <li class=\"page-item mx-auto lead\">\n <a class=\"page-link\" href=\"?page={{ Products.next_page_number }}\">\n Next</a>\n </li>\n {% else %}\n <li class=\"page-item disabled mx-auto lead\">\n <a class=\"page-link\" href=\"#!\">\n Next</a>\n </li>\n {% endif %}\n </ul>\n {% endif %}\n\n </nav>\n\n" ]
[ 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074646099_django_python.txt
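A shorter variant of the view in the answer, sketched as an alternative: Paginator.get_page() already falls back to the first or last page on bad input, so the try/except block is optional. The Product query and template path are taken from the question; how company is resolved from slug is elided there and stays elided here.

from django.core.paginator import Paginator
from django.shortcuts import render

def company_page(request, slug):
    # ... resolve `company` from `slug` as in the original view ...
    product_list = Product.objects.filter(company=company).order_by('-pk')
    paginator = Paginator(product_list, 4)
    page_obj = paginator.get_page(request.GET.get('page'))  # handles missing/invalid page numbers
    return render(request, 'product/product_list.html', {'page_obj': page_obj})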
Q: Button is not clickable by Selenuim (Python) I have a script that uses Selenium (Python). I tried to make the code click a button that it acknowledges is clickable, but throws an error stating it;s not clickable. Same thing happens again in a dropdown menu, but this time I'm not clicking, but selecting an option by value. from selenium import webdriver from webdriver_manager.chrome import ChromeDriverManager from selenium.webdriver.common.by import By from selenium.webdriver.support.select import * from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC # Getting to Chrome and website website = 'https://www.padron.gob.ar/publica/' driver = webdriver.Chrome(ChromeDriverManager().install()) driver.get(website) driver.maximize_window() #"BUENOS AIRES" in "Distrito Electoral" distritoElectoralOptions = driver.find_element(By.NAME, 'site') Select(distritoElectoralOptions).select_by_value('02 ') #Clicking "Consulta por Zona WebDriverWait(driver, 35).until(EC.element_to_be_clickable((By.ID, 'lired'))) consultaPorZona = driver.find_element(By.ID, 'lired') consultaPorZona.click() #"SEC_8" in "Sección General Electoral" WebDriverWait(driver, 35).until(EC.visibility_of((By.NAME, 'secg'))) seccionGeneralElectoral = driver.find_element(By.NAME, 'secg') Select(seccionGeneralElectoral).select_by_value('00008') I'm getting this error on line 21: selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable: element has zero size It works in a ipython notebook if each section is separated, but only if the option "Run all" is not used. Instead, the kernel has to be run on it's own. I'm using VS Code. Also, when it reaches the last line, when run in ipynb format, it throws this error: Message: Cannot locate option with value: 00008 Thank you in advance. A: When a web element is present in the HTML-DOM but it is not in the state that can be interacted. Other words, when the element is found but we can’t interact with it, it throws ElementNotInteractableException. The element not interactable exception may occur due to various reasons. Element is not visible Element is present off-screen (After scrolling down it will display) Element is present behind any other element Element is disabled If the element is not visible then wait until element is visible. For this we will use wait command in selenium wait = WebDriverWait(driver, 10) element = wait.until(EC.element_to_be_clickable((By.ID, 'someid'))) element.click() If the element is off-screen then we need to scroll down the browser and interact with the element. Use the execute_script() interface that helps to execute JavaScript methods through Selenium Webdriver. browser = webdriver.Firefox() browser.get("https://en.wikipedia.org") browser.execute_script("window.scrollTo(0,1000)") //scroll 1000 pixel vertical Reference this solution in to the regarding problem.
Button is not clickable by Selenium (Python)
I have a script that uses Selenium (Python). I tried to make the code click a button that it acknowledges is clickable, but throws an error stating it;s not clickable. Same thing happens again in a dropdown menu, but this time I'm not clicking, but selecting an option by value. from selenium import webdriver from webdriver_manager.chrome import ChromeDriverManager from selenium.webdriver.common.by import By from selenium.webdriver.support.select import * from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC # Getting to Chrome and website website = 'https://www.padron.gob.ar/publica/' driver = webdriver.Chrome(ChromeDriverManager().install()) driver.get(website) driver.maximize_window() #"BUENOS AIRES" in "Distrito Electoral" distritoElectoralOptions = driver.find_element(By.NAME, 'site') Select(distritoElectoralOptions).select_by_value('02 ') #Clicking "Consulta por Zona WebDriverWait(driver, 35).until(EC.element_to_be_clickable((By.ID, 'lired'))) consultaPorZona = driver.find_element(By.ID, 'lired') consultaPorZona.click() #"SEC_8" in "Sección General Electoral" WebDriverWait(driver, 35).until(EC.visibility_of((By.NAME, 'secg'))) seccionGeneralElectoral = driver.find_element(By.NAME, 'secg') Select(seccionGeneralElectoral).select_by_value('00008') I'm getting this error on line 21: selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable: element has zero size It works in a ipython notebook if each section is separated, but only if the option "Run all" is not used. Instead, the kernel has to be run on it's own. I'm using VS Code. Also, when it reaches the last line, when run in ipynb format, it throws this error: Message: Cannot locate option with value: 00008 Thank you in advance.
[ "When a web element is present in the HTML-DOM but it is not in the state that can be interacted. Other words, when the element is found but we can’t interact with it, it throws ElementNotInteractableException.\nThe element not interactable exception may occur due to various reasons.\n\nElement is not visible\nElement is present off-screen (After scrolling down it will display)\nElement is present behind any other element\nElement is disabled\n\nIf the element is not visible then wait until element is visible. For this we will use wait command in selenium\nwait = WebDriverWait(driver, 10)\nelement = wait.until(EC.element_to_be_clickable((By.ID, 'someid')))\nelement.click()\n\nIf the element is off-screen then we need to scroll down the browser and interact with the element.\nUse the execute_script() interface that helps to execute JavaScript methods through Selenium Webdriver.\nbrowser = webdriver.Firefox()\nbrowser.get(\"https://en.wikipedia.org\")\nbrowser.execute_script(\"window.scrollTo(0,1000)\") //scroll 1000 pixel vertical\n\nReference this solution in to the regarding problem.\n" ]
[ 0 ]
[]
[]
[ "python", "selenium", "web_scraping" ]
stackoverflow_0074657899_python_selenium_web_scraping.txt
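Combining the waits from the question with the scroll and JS-click ideas from the answer, a hedged sketch for the 'lired' link (driver is the Chrome driver created in the question; the JavaScript click is a fallback, not a guaranteed fix).

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 35)
element = wait.until(EC.presence_of_element_located((By.ID, 'lired')))
driver.execute_script("arguments[0].scrollIntoView(true);", element)  # bring it on screen
try:
    wait.until(EC.element_to_be_clickable((By.ID, 'lired'))).click()
except Exception:
    # Fallback: a JavaScript click ignores zero-size/overlap problems.
    driver.execute_script("arguments[0].click();", element)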
Q: best way to speed up multiprocessing code in python? I am trying to mess around with matrices in python, and wanted to use multiprocessing to processes each row separately for a math operation, I have posted a minimal reproducible sample below, but keep in mind that for my actual code I do in-fact need the entire matrix passed to the helper function. This sample takes literally forever to process a 10,000 by 10,000 matrix. Almost 2 hours with 9 processes. Looking in task manage it seems only 4-5 of the threads will run at any given time on my cpu, and the application never uses more than 25%. I've done my absolute best to avoid branches in my real code, though the sample provided is branchless. It still takes roughly 25 seconds to process a 1000 by 1000 matrix on my machine, which is ludacris to me as a mainly c++ developer. I wrote serial code in C that executes the entire 10,000 by 10,000 in constant time in less than a second. I think the main bottleneck is the multiprocessing code, but I am required to do this with multiprocessing. Any ideas for how I could go about improving this? Each row can be processed entirely separately but they need to be joined together back into a matrix for my actual code. import random from multiprocessing import Pool import time def addMatrixRow(matrixData): matrix = matrixData[0] rowNum = matrixData[1] del (matrixData) rowSum = 0 for colNum in range(len(matrix[rowNum])): rowSum += matrix[rowNum][colNum] return rowSum def genMatrix(row, col): matrix = list() for i in range(row): matrix.append(list()) for j in range(col): matrix[i].append(random.randint(0, 1)) return matrix def main(): matrix = genMatrix(1000, 1000) print("generated matrix") MAX_PROCESSES = 4 finalSum = 0 processPool = Pool(processes=MAX_PROCESSES) poolData = list() start = time.time() for i in range(100): for rowNum in range(len(matrix)): matrixData = [matrix, rowNum] poolData.append(matrixData) finalData = processPool.map(addMatrixRow, poolData) poolData = list() finalSum += sum(finalData) end = time.time() print(end-start) print(f'final sum {finalSum}') if __name__ == '__main__': main() A: Your matrix has 1000 rows of 1000 elements each and you are summing each row 100 times. By my calculation, that is 100,000 tasks you are submitting to the pool passing a one-million element matrix each time. Ouch! Now I know you say that the worker function addMatrixRow must have access to the complete matrix. Fine. But instead of passing it a 100,000 times, you can reduce that to 4 times by initializing each process in the pool with a global variable set to the matrix using the initializer and initargs arguments when you construct the pool. You are able to get away with this because the matrix is read-only. And instead of creating poolArgs as a large list you can instead create a generator function that when iterated returns the next argument to be submitted to the pool. But to take advantage of this you cannot use the map method, which will convert the generator to a list and not save you any memory. Instead use imap_unordered (rather than imap since you do not care now in what order your worker function is returning its results because of the commutative law of addition). But with such a large input, you should be using the chunksize argument with imap_unordered. So that the number of reads and writes to the pool's task queue is greatly reduced(albeit the size of the data being written is larger for each queue operation). 
If all of this is somewhat vague to you, I suggest reading the docs thoroughly for class multiprocessing.pool.Pool and its imap and imap_unordered methods. I have made a few other optimizations replacing for loops with list comprehensions and using the built-in sum function. import random from multiprocessing import Pool import time def init_pool_processes(m): global matrix matrix = m def addMatrixRow(rowNum): return sum(matrix[rowNum]) def genMatrix(row, col): return [[random.randint(0, 1) for _ in range(col)] for _ in range(row)] def compute_chunksize(pool_size, iterable_size): chunksize, remainder = divmod(iterable_size, 4 * pool_size) if remainder: chunksize += 1 return chunksize def main(): matrix = genMatrix(1000, 1000) print("generated matrix") MAX_PROCESSES = 4 processPool = Pool(processes=MAX_PROCESSES, initializer=init_pool_processes, initargs=(matrix,)) start = time.time() # Use a generator function: poolData = (rowNum for _ in range(100) for rowNum in range(len(matrix))) # Compute efficient chunksize chunksize = compute_chunksize(MAX_PROCESSES, len(matrix) * 100) finalSum = sum(processPool.imap_unordered(addMatrixRow, poolData, chunksize=chunksize)) end = time.time() print(end-start) print(f'final sum {finalSum}') processPool.close() processPool.join() if __name__ == '__main__': main() Prints: generated matrix 0.35799622535705566 final sum 49945400 Note the running time of .36 seconds. Assuming you have more CPU cores (than 4), use them all for an even greater reduction in time. A: you are serializing the entire matrix on each function call, you should only send the data that you are processing to the function, nothing more ... and python has a built-in sum function that has a very optimized C code. import random from multiprocessing import Pool import time def addMatrixRow(row_data): rowSum = sum(row_data) return rowSum def genMatrix(row, col): matrix = list() for i in range(row): matrix.append(list()) for j in range(col): matrix[i].append(random.randint(0, 1)) return matrix def main(): matrix = genMatrix(1000, 1000) print("generated matrix") MAX_PROCESSES = 4 finalSum = 0 processPool = Pool(processes=MAX_PROCESSES) poolData = list() start = time.time() for i in range(100): for rowNum in range(len(matrix)): matrixData = matrix[rowNum] poolData.append(matrixData) finalData = processPool.map(addMatrixRow, poolData) poolData = list() finalSum += sum(finalData) end = time.time() print(end-start) print(f'final sum {finalSum}') if __name__ == '__main__': main() generated matrix 3.5028157234191895 final sum 49963400 just not using process pool and running the code serially using list(map(sum,poolData)) generated matrix 1.2143816947937012 final sum 50020800 so yeh python can do it in a second.
best way to speed up multiprocessing code in python?
I am trying to mess around with matrices in python, and wanted to use multiprocessing to processes each row separately for a math operation, I have posted a minimal reproducible sample below, but keep in mind that for my actual code I do in-fact need the entire matrix passed to the helper function. This sample takes literally forever to process a 10,000 by 10,000 matrix. Almost 2 hours with 9 processes. Looking in task manage it seems only 4-5 of the threads will run at any given time on my cpu, and the application never uses more than 25%. I've done my absolute best to avoid branches in my real code, though the sample provided is branchless. It still takes roughly 25 seconds to process a 1000 by 1000 matrix on my machine, which is ludacris to me as a mainly c++ developer. I wrote serial code in C that executes the entire 10,000 by 10,000 in constant time in less than a second. I think the main bottleneck is the multiprocessing code, but I am required to do this with multiprocessing. Any ideas for how I could go about improving this? Each row can be processed entirely separately but they need to be joined together back into a matrix for my actual code. import random from multiprocessing import Pool import time def addMatrixRow(matrixData): matrix = matrixData[0] rowNum = matrixData[1] del (matrixData) rowSum = 0 for colNum in range(len(matrix[rowNum])): rowSum += matrix[rowNum][colNum] return rowSum def genMatrix(row, col): matrix = list() for i in range(row): matrix.append(list()) for j in range(col): matrix[i].append(random.randint(0, 1)) return matrix def main(): matrix = genMatrix(1000, 1000) print("generated matrix") MAX_PROCESSES = 4 finalSum = 0 processPool = Pool(processes=MAX_PROCESSES) poolData = list() start = time.time() for i in range(100): for rowNum in range(len(matrix)): matrixData = [matrix, rowNum] poolData.append(matrixData) finalData = processPool.map(addMatrixRow, poolData) poolData = list() finalSum += sum(finalData) end = time.time() print(end-start) print(f'final sum {finalSum}') if __name__ == '__main__': main()
[ "Your matrix has 1000 rows of 1000 elements each and you are summing each row 100 times. By my calculation, that is 100,000 tasks you are submitting to the pool passing a one-million element matrix each time. Ouch!\nNow I know you say that the worker function addMatrixRow must have access to the complete matrix. Fine. But instead of passing it a 100,000 times, you can reduce that to 4 times by initializing each process in the pool with a global variable set to the matrix using the initializer and initargs arguments when you construct the pool. You are able to get away with this because the matrix is read-only.\nAnd instead of creating poolArgs as a large list you can instead create a generator function that when iterated returns the next argument to be submitted to the pool. But to take advantage of this you cannot use the map method, which will convert the generator to a list and not save you any memory. Instead use imap_unordered (rather than imap since you do not care now in what order your worker function is returning its results because of the commutative law of addition). But with such a large input, you should be using the chunksize argument with imap_unordered. So that the number of reads and writes to the pool's task queue is greatly reduced(albeit the size of the data being written is larger for each queue operation).\nIf all of this is somewhat vague to you, I suggest reading the docs thoroughly for class multiprocessing.pool.Pool and its imap and imap_unordered methods.\nI have made a few other optimizations replacing for loops with list comprehensions and using the built-in sum function.\nimport random\nfrom multiprocessing import Pool\nimport time\n\n\ndef init_pool_processes(m):\n global matrix\n matrix = m \n\ndef addMatrixRow(rowNum):\n return sum(matrix[rowNum])\n\ndef genMatrix(row, col):\n return [[random.randint(0, 1) for _ in range(col)] for _ in range(row)]\n \ndef compute_chunksize(pool_size, iterable_size):\n chunksize, remainder = divmod(iterable_size, 4 * pool_size)\n if remainder:\n chunksize += 1\n return chunksize\n\ndef main():\n matrix = genMatrix(1000, 1000)\n print(\"generated matrix\")\n MAX_PROCESSES = 4\n\n processPool = Pool(processes=MAX_PROCESSES, initializer=init_pool_processes, initargs=(matrix,))\n start = time.time()\n # Use a generator function:\n poolData = (rowNum for _ in range(100) for rowNum in range(len(matrix)))\n # Compute efficient chunksize\n chunksize = compute_chunksize(MAX_PROCESSES, len(matrix) * 100)\n finalSum = sum(processPool.imap_unordered(addMatrixRow, poolData, chunksize=chunksize))\n end = time.time()\n print(end-start)\n print(f'final sum {finalSum}')\n processPool.close()\n processPool.join()\n\n\nif __name__ == '__main__':\n main()\n\nPrints:\ngenerated matrix\n0.35799622535705566\nfinal sum 49945400\n\nNote the running time of .36 seconds.\nAssuming you have more CPU cores (than 4), use them all for an even greater reduction in time.\n", "you are serializing the entire matrix on each function call, you should only send the data that you are processing to the function, nothing more ... 
and python has a built-in sum function that has a very optimized C code.\nimport random\nfrom multiprocessing import Pool\nimport time\n\n\ndef addMatrixRow(row_data):\n rowSum = sum(row_data)\n return rowSum\n\n\ndef genMatrix(row, col):\n matrix = list()\n for i in range(row):\n matrix.append(list())\n for j in range(col):\n matrix[i].append(random.randint(0, 1))\n return matrix\n\ndef main():\n matrix = genMatrix(1000, 1000)\n print(\"generated matrix\")\n MAX_PROCESSES = 4\n finalSum = 0\n\n processPool = Pool(processes=MAX_PROCESSES)\n poolData = list()\n\n start = time.time()\n for i in range(100):\n for rowNum in range(len(matrix)):\n matrixData = matrix[rowNum]\n poolData.append(matrixData)\n\n finalData = processPool.map(addMatrixRow, poolData)\n poolData = list()\n finalSum += sum(finalData)\n end = time.time()\n print(end-start)\n print(f'final sum {finalSum}')\n\n\nif __name__ == '__main__':\n main()\n\ngenerated matrix\n3.5028157234191895\nfinal sum 49963400\n\njust not using process pool and running the code serially using list(map(sum,poolData))\ngenerated matrix\n1.2143816947937012\nfinal sum 50020800\n\nso yeh python can do it in a second.\n" ]
[ 3, 1 ]
[]
[]
[ "matrix", "multiprocessing", "process_pool", "python" ]
stackoverflow_0074646298_matrix_multiprocessing_process_pool_python.txt
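As an aside (it deliberately ignores the question's "must use multiprocessing" constraint): the same row-sum workload vectorized with NumPy, which is how a single process gets close to the C timing mentioned in the question.

import numpy as np

rng = np.random.default_rng()
matrix = rng.integers(0, 2, size=(1000, 1000))

final_sum = 0
for _ in range(100):
    final_sum += int(matrix.sum())   # summing all rows is the same as summing the whole matrix
print(final_sum)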
Q: How to Finding USB port Address with dev/tty/usb.. format in raspberry pi 4 ver.b? I had a problem for searching USB Address Port in Raspberry Pi. I'm using RIGOL DSE1102E Digital Oscilloscope, to acquiring data to my Raspberry Pi 4 Ver. b. So, i'm connecting from Raspberry Pi 4 to my Oscilloscope USB Slave's port and i'm checking in my Raspberry terminal. So i'm typing pi@raspberrypi:~$ lsusb so, it returned Bus 002 Device 001 : ID 1d6b Linux Foundation 3.0 root hub Bus 001 Device 003: ID 1ab1:0588 Rigol Technologies DS1000 SERIES Bus 001 Device 002: ID 2109:3431 VIA Labs, Inc. Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub so i'm assumed my Raspberry is connected to my instrument because appearance of this line Bus 001 Device 003: ID 1ab1:0588 Rigol Technologies DS1000 SERIES so, based on this case how to know this Bus 001 Device 003: ID 1ab1:0588 Rigol Technologies DS1000 SERIES address in format dev/tty/usb... because i want to code it using pyos library from Python A: Instead of using /dev/ttyUSB0 I recommend using the symlinks provided by the kernel in /dev/serial/by-id. They contain a lot of info about the USB device, including the vendor ID and product ID, so you can be sure you are opening the right device. They also should be pretty stable, not depending on the USB port you use or the order the devices are plugged in. Run ls -l /dev/serial/by-id to explore the options.
How to find a USB port address in /dev/tty/usb... format on Raspberry Pi 4 ver. B?
I had a problem finding a USB port address on my Raspberry Pi. I'm using a RIGOL DSE1102E Digital Oscilloscope and want to acquire data on my Raspberry Pi 4 Ver. B. I connected the Raspberry Pi 4 to the oscilloscope's USB slave port and checked in the Raspberry Pi terminal. Typing pi@raspberrypi:~$ lsusb returned Bus 002 Device 001 : ID 1d6b Linux Foundation 3.0 root hub Bus 001 Device 003: ID 1ab1:0588 Rigol Technologies DS1000 SERIES Bus 001 Device 002: ID 2109:3431 VIA Labs, Inc. Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub I assume my Raspberry Pi is connected to my instrument because of the appearance of this line: Bus 001 Device 003: ID 1ab1:0588 Rigol Technologies DS1000 SERIES So, based on this, how do I find the address of this Bus 001 Device 003: ID 1ab1:0588 Rigol Technologies DS1000 SERIES device in dev/tty/usb... format? I want to use it in code with the pyos library from Python.
[ "Instead of using /dev/ttyUSB0 I recommend using the symlinks provided by the kernel in /dev/serial/by-id. They contain a lot of info about the USB device, including the vendor ID and product ID, so you can be sure you are opening the right device. They also should be pretty stable, not depending on the USB port you use or the order the devices are plugged in. Run ls -l /dev/serial/by-id to explore the options.\n" ]
[ 0 ]
[]
[]
[ "python", "raspberry_pi4", "usb" ]
stackoverflow_0074623554_python_raspberry_pi4_usb.txt
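A small sketch of using the answer's /dev/serial/by-id suggestion from Python: list the stable symlinks and resolve each one to its /dev/tty* node. Note, as a hedge, that a USBTMC instrument like this Rigol scope may not appear as a serial device at all and can instead show up as something like /dev/usbtmc0.

import glob
import os

for link in glob.glob('/dev/serial/by-id/*'):
    print(link, '->', os.path.realpath(link))   # e.g. resolves to /dev/ttyUSB0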
Q: How do I interchange two sets of elements in a string in python? So, how can I interchange two sets of adjacent elements In a string. Like lets take a string "abcd" I want to make it "cdab",another example would be "5089" I want to change this to "8950", The string is a large one and I want to apply the method throughout the string. Can you guys please suggest a way to do the same in python. I tried modifying an existing algorithm for interchanging adjacent characters but it didn't work. Thank You. A: Use the slice notation def swap(value): middle = len(value) // 2 return value[middle:] + value[:middle] print(swap("abcd")) # cdab print(swap("abcde")) # cdeab print(swap("1234567890")) # 6789012345 A: s = "abcdefghijklmn" ans = [] n = 2 flip = True for i in range(0, len(s), n): if flip: ind = i + n else: ind = i - n flip = not flip if len(s[ind:ind + n]) > 0: ans.append(s[ind:ind + n]) else: ans.append(s[i:]) "".join(ans) # 'cdabghefklijmn' OR s = "abcdefghijklmn" ans = list(s) n = 2 for i in range(0, len(s), 2 * n): ans[i + n : i + 2 * n], ans[i : i + n] = ans[i : i + n], ans[i + n : i + 2 * n] "".join(ans) # 'cdabghefklijmn'
How do I interchange two sets of elements in a string in python?
How can I interchange two sets of adjacent elements in a string? For example, take the string "abcd": I want to make it "cdab". Another example would be "5089", which I want to change to "8950". The string is a large one and I want to apply the method throughout the string. Can you please suggest a way to do this in Python? I tried modifying an existing algorithm for interchanging adjacent characters but it didn't work. Thank you.
[ "Use the slice notation\ndef swap(value):\n middle = len(value) // 2\n return value[middle:] + value[:middle]\n\nprint(swap(\"abcd\")) # cdab\nprint(swap(\"abcde\")) # cdeab\nprint(swap(\"1234567890\")) # 6789012345\n\n", "s = \"abcdefghijklmn\"\n\nans = []\nn = 2\nflip = True\nfor i in range(0, len(s), n):\n if flip:\n ind = i + n\n else:\n ind = i - n\n flip = not flip\n \n if len(s[ind:ind + n]) > 0:\n ans.append(s[ind:ind + n])\n else:\n ans.append(s[i:])\n\n\"\".join(ans)\n# 'cdabghefklijmn'\n\nOR\ns = \"abcdefghijklmn\"\n\nans = list(s)\nn = 2\n\nfor i in range(0, len(s), 2 * n):\n ans[i + n : i + 2 * n], ans[i : i + n] = ans[i : i + n], ans[i + n : i + 2 * n]\n\"\".join(ans)\n# 'cdabghefklijmn'\n\n" ]
[ 0, 0 ]
[]
[]
[ "algorithm", "python", "string" ]
stackoverflow_0074658493_algorithm_python_string.txt
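A compact variant of the second answer's idea, sketched for arbitrary chunk sizes: swap adjacent n-character chunks across the whole string, leaving a trailing odd chunk in place. The function name is illustrative, not from the answers.

def swap_adjacent_chunks(s, n=2):
    chunks = [s[i:i + n] for i in range(0, len(s), n)]
    for i in range(0, len(chunks) - 1, 2):
        chunks[i], chunks[i + 1] = chunks[i + 1], chunks[i]
    return ''.join(chunks)

print(swap_adjacent_chunks("abcd"))             # cdab
print(swap_adjacent_chunks("5089"))             # 8950
print(swap_adjacent_chunks("abcdefghijklmn"))   # cdabghefklijmn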
Q: FileNotFoundError: [Errno 2] No such file or directory: './iris.csv' I'm getting this error for my Python code using the IDLE Shell and I'm not sure how to resolve it. I've tried downloading and adding the iris.csv file into the same place as the following .py file but it just gives me another set of errors like shown. If someone could help me it would be greatly appreciated! ------------------------------------------------------------------------------------------------------------------------------- Traceback (most recent call last): File "C:\Users\kyle_\OneDrive\Documents\COIS 4400H\Lab 5.py", line 17, in <module> iris_df = pd.read_csv('./iris.csv') File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\util\_decorators.py", line 211, in wrapper return func(*args, **kwargs) File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\util\_decorators.py", line 317, in wrapper return func(*args, **kwargs) File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\io\parsers\readers.py", line 950, in read_csv return _read(filepath_or_buffer, kwds) File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\io\parsers\readers.py", line 605, in _read parser = TextFileReader(filepath_or_buffer, **kwds) File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\io\parsers\readers.py", line 1442, in __init__ self._engine = self._make_engine(f, self.engine) File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\io\parsers\readers.py", line 1729, in _make_engine self.handles = get_handle( File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\io\common.py", line 857, in get_handle handle = open( FileNotFoundError: [Errno 2] No such file or directory: './iris.csv' This is the code that I'm using: ------------------------------------------------------------------------------------------------------------------------------- import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import datetime as dt import sklearn from sklearn.preprocessing import StandardScaler from sklearn.cluster import KMeans from sklearn.metrics import silhouette_score from scipy.cluster.hierarchy import linkage from scipy.cluster.hierarchy import dendrogram from scipy.cluster.hierarchy import cut_tree iris_df = pd.read_csv('./iris.csv') iris_df.head() iris_df['iris'].drop_duplicates() iris_df = iris_df.drop('iris',axis=1) scaler = StandardScaler() iris_df_scaled = scaler.fit_transform(iris_df) iris_df_scaled.shape sse = [] range_n_clusters = [2, 3, 4, 5, 6, 7, 8 , 9 , 10 ] for num_clusters in range_n_clusters: kmeans = KMeans(n_clusters=num_clusters, max_iter=50) kmeans.fit(iris_df_scaled) sse.append(kmeans.inertia_) plt.plot(sse) # 1. 
Kmeans with k=3 kmeans = KMeans(n_clusters=3, max_iter=50) y = kmeans.fit_predict(iris_df_scaled) y iris_df['Label'] = kmeans.labels_ iris_df.head() plt.scatter(iris_df_scaled[y == 0, 0], iris_df_scaled[y == 0, 1], s = 100, c = 'purple', label = 'Iris-setosa') plt.scatter(iris_df_scaled[y == 1, 0], iris_df_scaled[y == 1, 1], s = 100, c = 'orange', label = 'Iris-versicolour') plt.scatter(iris_df_scaled[y == 2, 0], iris_df_scaled[y == 2, 1], s = 100, c = 'green', label = 'Iris-virginica') # 2. Hierarchical clustering plt.figure(figsize=(15, 5)) mergings = linkage(iris_df_scaled,method='complete',metric='euclidean') dendrogram(mergings) plt.show() cluster_hier = cut_tree(mergings,n_clusters=3).reshape(-1) iris_df['Label'] = cluster_hier iris_df.head() plt.scatter(iris_df_scaled[cluster_hier == 0, 0], iris_df_scaled[cluster_hier == 0, 1], s = 100, c = 'purple', label = 'Iris-setosa') plt.scatter(iris_df_scaled[cluster_hier == 1, 0], iris_df_scaled[cluster_hier == 1, 1], s = 100, c = 'orange', label = 'Iris-versicolour') plt.scatter(iris_df_scaled[cluster_hier == 2, 0], iris_df_scaled[cluster_hier == 2, 1], s = 100, c = 'green', label = 'Iris-virginica') A: Looks like you are looking in the file path that ends with \Lab 5 .py, which will just contain your python script. So you need to look one layer "above" your python script, which is the directory containing the script and the iris.csv-file. Try: iris_df = pd.read_csv('../iris.csv')
FileNotFoundError: [Errno 2] No such file or directory: './iris.csv'
I'm getting this error for my Python code using the IDLE Shell and I'm not sure how to resolve it. I've tried downloading and adding the iris.csv file into the same place as the following .py file but it just gives me another set of errors like shown. If someone could help me it would be greatly appreciated! ------------------------------------------------------------------------------------------------------------------------------- Traceback (most recent call last): File "C:\Users\kyle_\OneDrive\Documents\COIS 4400H\Lab 5.py", line 17, in <module> iris_df = pd.read_csv('./iris.csv') File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\util\_decorators.py", line 211, in wrapper return func(*args, **kwargs) File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\util\_decorators.py", line 317, in wrapper return func(*args, **kwargs) File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\io\parsers\readers.py", line 950, in read_csv return _read(filepath_or_buffer, kwds) File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\io\parsers\readers.py", line 605, in _read parser = TextFileReader(filepath_or_buffer, **kwds) File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\io\parsers\readers.py", line 1442, in __init__ self._engine = self._make_engine(f, self.engine) File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\io\parsers\readers.py", line 1729, in _make_engine self.handles = get_handle( File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\io\common.py", line 857, in get_handle handle = open( FileNotFoundError: [Errno 2] No such file or directory: './iris.csv' This is the code that I'm using: ------------------------------------------------------------------------------------------------------------------------------- import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import datetime as dt import sklearn from sklearn.preprocessing import StandardScaler from sklearn.cluster import KMeans from sklearn.metrics import silhouette_score from scipy.cluster.hierarchy import linkage from scipy.cluster.hierarchy import dendrogram from scipy.cluster.hierarchy import cut_tree iris_df = pd.read_csv('./iris.csv') iris_df.head() iris_df['iris'].drop_duplicates() iris_df = iris_df.drop('iris',axis=1) scaler = StandardScaler() iris_df_scaled = scaler.fit_transform(iris_df) iris_df_scaled.shape sse = [] range_n_clusters = [2, 3, 4, 5, 6, 7, 8 , 9 , 10 ] for num_clusters in range_n_clusters: kmeans = KMeans(n_clusters=num_clusters, max_iter=50) kmeans.fit(iris_df_scaled) sse.append(kmeans.inertia_) plt.plot(sse) # 1. 
Kmeans with k=3 kmeans = KMeans(n_clusters=3, max_iter=50) y = kmeans.fit_predict(iris_df_scaled) y iris_df['Label'] = kmeans.labels_ iris_df.head() plt.scatter(iris_df_scaled[y == 0, 0], iris_df_scaled[y == 0, 1], s = 100, c = 'purple', label = 'Iris-setosa') plt.scatter(iris_df_scaled[y == 1, 0], iris_df_scaled[y == 1, 1], s = 100, c = 'orange', label = 'Iris-versicolour') plt.scatter(iris_df_scaled[y == 2, 0], iris_df_scaled[y == 2, 1], s = 100, c = 'green', label = 'Iris-virginica') # 2. Hierarchical clustering plt.figure(figsize=(15, 5)) mergings = linkage(iris_df_scaled,method='complete',metric='euclidean') dendrogram(mergings) plt.show() cluster_hier = cut_tree(mergings,n_clusters=3).reshape(-1) iris_df['Label'] = cluster_hier iris_df.head() plt.scatter(iris_df_scaled[cluster_hier == 0, 0], iris_df_scaled[cluster_hier == 0, 1], s = 100, c = 'purple', label = 'Iris-setosa') plt.scatter(iris_df_scaled[cluster_hier == 1, 0], iris_df_scaled[cluster_hier == 1, 1], s = 100, c = 'orange', label = 'Iris-versicolour') plt.scatter(iris_df_scaled[cluster_hier == 2, 0], iris_df_scaled[cluster_hier == 2, 1], s = 100, c = 'green', label = 'Iris-virginica')
[ "Looks like you are looking in the file path that ends with \\Lab 5 .py, which will just contain your python script. So you need to look one layer \"above\" your python script, which is the directory containing the script and the iris.csv-file.\nTry: iris_df = pd.read_csv('../iris.csv')\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074658611_python.txt
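One more option beyond the fix above: relative paths such as './iris.csv' are resolved against the current working directory, and when a script is run from IDLE that directory is often not the folder holding the script. A short sketch, assuming iris.csv sits next to the .py file, that builds the path from the script's own location (it relies on __file__, so it works in a script but not when pasted into an interactive shell):

from pathlib import Path

import pandas as pd

# Folder that contains this script, regardless of where it was launched from.
script_dir = Path(__file__).resolve().parent
iris_df = pd.read_csv(script_dir / "iris.csv")
print(iris_df.head())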
Q: Validation python, Using GUI I am attempting to validate the text box field so that the user can only insert integers, although i have used a while loop to attempt and cannot figure it out I keep getting errors. Please help. from tkinter import * import tkinter as tk from tkinter.tix import * # setup the UI root = Tk() # Give the UI a title root.title("Distance converter Miles to Kilometers") # set window geometry root.geometry("480x130") # setup the buttons valRadio = tk.IntVar() myText=tk.StringVar() e1 =tk.IntVar() def calculate(*arg): while True: try: if valRadio.get() == 1: # get the miles ( Calculation ) res = round(float(e1.get()) / 1.6093,2) # set the result text myText.set( "Your input converts to " + str(res) + " Miles") break if valRadio.get() == 2: # get the kilometeres res = round(float(e1.get()) * 1.6093,2) # set the result text myText.set( "Your input converts to " + str(res) + " Kilometers") break if ValueError: myText.set ("Please check selections, only Integers are allowed") break else: # print error message res = round(float(e1.get()) / 1.6093,2) myText.set ("Please check selections, a field cannot be empty") break except ValueError: myText.set ("Please check selections, a field cannot be empty") break # Set the label for Instructions and how to use the calculator instructions = Label(root, text="""Hover me:""") instructions.grid(row=0, column=1) # set the label to determine the distance field conversion = tk.Label( text=" Value to be converted :" ) conversion.grid(row=1,column = 0,) # set the entry box to enable the user to input their distance tk.Entry(textvariable = e1).grid(row=1, column=1) #set the label to determine the result of the program and output the users results below it tk.Label(text = "Result:").grid(row=5,column = 0) result = tk.Label(text="(result)", textvariable=myText) result.grid(row=5,column=1) # the radio button control for Miles r1 = tk.Radiobutton(text="Miles", variable=valRadio, value=1).grid(row=3, column=0) # the radio button control for Kilometers r2 = tk.Radiobutton(text="Kilometers", variable=valRadio, value=2).grid(row=3, column=2) # enable a calculate button and decide what it will do as well as wher on the grid it belongs calculate_button = tk.Button(text="Calculate \n (Enter)", command=calculate) calculate_button.grid(row=6, column=2) # deploy the UI root.mainloop() I have attempted to use the While loop inside the code although I can only get it to where if the user inputs text and doesn't select a radio button the error will display but I would like to have it where the text box in general will not allow anything but integers and if it receives string print the error as it does if the radio buttons aren't selected. A: define validation type and validatecommand. validate = key makes with every key input it runs validatecommand. It only types if that function returns true which is 'validate' function in this case. vcmd = (root.register(validate), '%P') tk.Entry(textvariable = e1,validate="key", validatecommand=vcmd).grid(row=1, column=1) this is the validation function def validate(input): if not input: return True elif re.fullmatch(r'[0-9]*',input): return True myText.set("Please check selections, only Integers are allowed") return False it return true only when its full of numbers([0-9]* is an regular expression which defines all numbers) or empty. If it contains any letter it return False any it denied this way. Also do not forget to imports import re
Validation python, Using GUI
I am attempting to validate the text box field so that the user can only insert integers, although i have used a while loop to attempt and cannot figure it out I keep getting errors. Please help. from tkinter import * import tkinter as tk from tkinter.tix import * # setup the UI root = Tk() # Give the UI a title root.title("Distance converter Miles to Kilometers") # set window geometry root.geometry("480x130") # setup the buttons valRadio = tk.IntVar() myText=tk.StringVar() e1 =tk.IntVar() def calculate(*arg): while True: try: if valRadio.get() == 1: # get the miles ( Calculation ) res = round(float(e1.get()) / 1.6093,2) # set the result text myText.set( "Your input converts to " + str(res) + " Miles") break if valRadio.get() == 2: # get the kilometeres res = round(float(e1.get()) * 1.6093,2) # set the result text myText.set( "Your input converts to " + str(res) + " Kilometers") break if ValueError: myText.set ("Please check selections, only Integers are allowed") break else: # print error message res = round(float(e1.get()) / 1.6093,2) myText.set ("Please check selections, a field cannot be empty") break except ValueError: myText.set ("Please check selections, a field cannot be empty") break # Set the label for Instructions and how to use the calculator instructions = Label(root, text="""Hover me:""") instructions.grid(row=0, column=1) # set the label to determine the distance field conversion = tk.Label( text=" Value to be converted :" ) conversion.grid(row=1,column = 0,) # set the entry box to enable the user to input their distance tk.Entry(textvariable = e1).grid(row=1, column=1) #set the label to determine the result of the program and output the users results below it tk.Label(text = "Result:").grid(row=5,column = 0) result = tk.Label(text="(result)", textvariable=myText) result.grid(row=5,column=1) # the radio button control for Miles r1 = tk.Radiobutton(text="Miles", variable=valRadio, value=1).grid(row=3, column=0) # the radio button control for Kilometers r2 = tk.Radiobutton(text="Kilometers", variable=valRadio, value=2).grid(row=3, column=2) # enable a calculate button and decide what it will do as well as wher on the grid it belongs calculate_button = tk.Button(text="Calculate \n (Enter)", command=calculate) calculate_button.grid(row=6, column=2) # deploy the UI root.mainloop() I have attempted to use the While loop inside the code although I can only get it to where if the user inputs text and doesn't select a radio button the error will display but I would like to have it where the text box in general will not allow anything but integers and if it receives string print the error as it does if the radio buttons aren't selected.
[ "define validation type and validatecommand. validate = key makes with every key input it runs validatecommand. It only types if that function returns true which is 'validate' function in this case.\nvcmd = (root.register(validate), '%P')\ntk.Entry(textvariable = e1,validate=\"key\", validatecommand=vcmd).grid(row=1, column=1)\n\nthis is the validation function\ndef validate(input):\n if not input:\n return True\n elif re.fullmatch(r'[0-9]*',input):\n return True\n myText.set(\"Please check selections, only Integers are allowed\")\n return False\n\nit return true only when its full of numbers([0-9]* is an regular expression which defines all numbers) or empty. If it contains any letter it return False any it denied this way.\nAlso do not forget to imports\nimport re\n\n" ]
[ 0 ]
[]
[]
[ "interface", "python", "tkinter" ]
stackoverflow_0074650750_interface_python_tkinter.txt
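A stripped-down, runnable sketch of the validatecommand approach from the answer above, reduced to a single Entry that accepts digits only; the widget layout and function name are illustrative, not taken from the question's code.

import tkinter as tk

root = tk.Tk()

def only_digits(proposed):
    # Accept the edit if the resulting text would be empty or all digits.
    return proposed == "" or proposed.isdigit()

# %P passes the value the entry would have if the edit were allowed.
vcmd = (root.register(only_digits), "%P")
entry = tk.Entry(root, validate="key", validatecommand=vcmd)
entry.pack(padx=10, pady=10)

root.mainloop()

str.isdigit covers the integer-only requirement without needing the re module at all.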
Q: How make np.roll working faster for one dimension array? I generate a two zero arrays by np.zero then i use np.roll to make circshifting array. But when i calling np.roll in cycle it works very slow. Is there any way to speed up my code? Here is the code: preamble_length = 256 threshold_level = 100 sample_rate = 750e3 decimation_factor = 6 preamble_combination = [1,-1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, 1, -1, 1, 1, 1, -1, -1, -1, -1, 1, -1, -1, 1, -1, -1,-1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, -1, 1, -1, -1, -1, 1, 1, 1, 1, -1, 1, 1, -1, 1, 1, 1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, 1, -1, 1, 1, 1, -1, -1, -1, -1, 1, -1, -1, 1, -1, 1, -1, -1, 1, -1, -1, -1, 1, 1, 1, -1, 1, -1, -1, 1, -1, -1, -1, 1, -1, 1, 1, 1, -1, -1, -1, -1, 1, -1, -1, 1, -1, -1, -1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, 1, -1, 1, 1, 1, -1, -1, -1, -1, 1, -1, -1, 1, -1, -1, -1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, -1, 1, -1, -1, -1, 1, 1, 1, 1, -1, 1, 1, -1, 1, 1, 1, -1, 1, -1, -1, -1, 1, 1, 1, -1, 1, -1, -1, 1, -1, -1, -1, -1, 1, -1, -1, -1, 1, 1, 1, 1, -1, 1, 1, -1, 1, 1,1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, -1, 1, -1, -1, -1, 1, 1, 1, 1, -1, 1, 1, -1, 1, 1,1] sequence = np.zeros(preamble_length) buffer_filter = np.zeros(preamble_length) size_array = sample_rate / decimation_factor rxDataReal = np.real(downsample(rxData, decimation_factor)) #rxData is a array of complex numbers rxDataDownSampled = rxDataReal check = 0 find_max = 0 peak_max = 0 preamble_ready = 0 received_flag = False size_array = int(size_array) main_counter = 0 #In this section the np.roll working very slow for main_counter in range(size_array): if(preamble_ready == 0): if(rxDataDownSampled[main_counter] < 0): check_sign = 1 else: check_sign = -1 sequence = np.roll(sequence, -1)#this buffer_filter = np.roll(buffer_filter, -1)#and this sequence[preamble_length-1] = check_sign bufferSum = sequence * preamble_combination buffer_filter[preamble_length-1] = np.sum(bufferSum) find_max = np.max(buffer_filter) if(find_max >= threshold_level): peak_max = find_max sequence = np.zeros(preamble_ready) buffer_filter = np.zeros(preamble_length) print('Value of peak_max: ', peak_max) received_flag = True if(received_flag==True): break preamble_value = peak_max A: So your roll are doing: In [118]: x=np.arange(10) In [119]: np.roll(x,-1) Out[119]: array([1, 2, 3, 4, 5, 6, 7, 8, 9, 0]) You can look at the np.roll code; it's probably more general, it has to, in one way or other, copy all the values of x to a new array. This might be a bit faster, since it doesn't try to be as general: In [120]: y=np.zeros_like(x) ...: y[:-1] = x[1:]; y[-1] = x[0] In [121]: y Out[121]: array([1, 2, 3, 4, 5, 6, 7, 8, 9, 0]) times Nope, it isn't faster: In [122]: x=np.arange(100000) In [123]: timeit np.roll(x,-1) 82.1 µs ± 102 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [124]: %%timeit ...: y=np.zeros_like(x) ...: y[:-1] = x[1:]; y[-1] = x[0] ...: ...: 93.4 µs ± 4.84 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each) other timings: In [128]: timeit y=x[1:].copy() 52.4 µs ± 164 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [129]: timeit np.concatenate((x[1:],x[0:1])) 58.6 µs ± 289 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
How make np.roll working faster for one dimension array?
I generate a two zero arrays by np.zero then i use np.roll to make circshifting array. But when i calling np.roll in cycle it works very slow. Is there any way to speed up my code? Here is the code: preamble_length = 256 threshold_level = 100 sample_rate = 750e3 decimation_factor = 6 preamble_combination = [1,-1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, 1, -1, 1, 1, 1, -1, -1, -1, -1, 1, -1, -1, 1, -1, -1,-1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, -1, 1, -1, -1, -1, 1, 1, 1, 1, -1, 1, 1, -1, 1, 1, 1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, 1, -1, 1, 1, 1, -1, -1, -1, -1, 1, -1, -1, 1, -1, 1, -1, -1, 1, -1, -1, -1, 1, 1, 1, -1, 1, -1, -1, 1, -1, -1, -1, 1, -1, 1, 1, 1, -1, -1, -1, -1, 1, -1, -1, 1, -1, -1, -1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, 1, -1, 1, 1, 1, -1, -1, -1, -1, 1, -1, -1, 1, -1, -1, -1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, -1, 1, -1, -1, -1, 1, 1, 1, 1, -1, 1, 1, -1, 1, 1, 1, -1, 1, -1, -1, -1, 1, 1, 1, -1, 1, -1, -1, 1, -1, -1, -1, -1, 1, -1, -1, -1, 1, 1, 1, 1, -1, 1, 1, -1, 1, 1,1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, -1, 1, -1, -1, -1, 1, 1, 1, 1, -1, 1, 1, -1, 1, 1,1] sequence = np.zeros(preamble_length) buffer_filter = np.zeros(preamble_length) size_array = sample_rate / decimation_factor rxDataReal = np.real(downsample(rxData, decimation_factor)) #rxData is a array of complex numbers rxDataDownSampled = rxDataReal check = 0 find_max = 0 peak_max = 0 preamble_ready = 0 received_flag = False size_array = int(size_array) main_counter = 0 #In this section the np.roll working very slow for main_counter in range(size_array): if(preamble_ready == 0): if(rxDataDownSampled[main_counter] < 0): check_sign = 1 else: check_sign = -1 sequence = np.roll(sequence, -1)#this buffer_filter = np.roll(buffer_filter, -1)#and this sequence[preamble_length-1] = check_sign bufferSum = sequence * preamble_combination buffer_filter[preamble_length-1] = np.sum(bufferSum) find_max = np.max(buffer_filter) if(find_max >= threshold_level): peak_max = find_max sequence = np.zeros(preamble_ready) buffer_filter = np.zeros(preamble_length) print('Value of peak_max: ', peak_max) received_flag = True if(received_flag==True): break preamble_value = peak_max
[ "So your roll are doing:\nIn [118]: x=np.arange(10)\nIn [119]: np.roll(x,-1)\nOut[119]: array([1, 2, 3, 4, 5, 6, 7, 8, 9, 0])\n\nYou can look at the np.roll code; it's probably more general, it has to, in one way or other, copy all the values of x to a new array. This might be a bit faster, since it doesn't try to be as general:\nIn [120]: y=np.zeros_like(x)\n ...: y[:-1] = x[1:]; y[-1] = x[0]\nIn [121]: y\nOut[121]: array([1, 2, 3, 4, 5, 6, 7, 8, 9, 0])\n\ntimes\nNope, it isn't faster:\nIn [122]: x=np.arange(100000)\n\nIn [123]: timeit np.roll(x,-1)\n82.1 µs ± 102 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)\n\nIn [124]: %%timeit \n ...: y=np.zeros_like(x)\n ...: y[:-1] = x[1:]; y[-1] = x[0]\n ...: \n ...: \n93.4 µs ± 4.84 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)\n\nother timings:\nIn [128]: timeit y=x[1:].copy()\n52.4 µs ± 164 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)\n\nIn [129]: timeit np.concatenate((x[1:],x[0:1]))\n58.6 µs ± 289 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)\n\n" ]
[ 0 ]
[]
[]
[ "numpy", "python", "signal_processing" ]
stackoverflow_0074655749_numpy_python_signal_processing.txt
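The loop in the question is effectively sliding a 256-sample preamble over the sign of the received signal, so the np.roll calls can be removed entirely by computing that sliding dot product in one call to np.correlate. A sketch of the idea with made-up data; it is not a drop-in replacement for the original script, and the short preamble, random input and threshold are placeholders.

import numpy as np

preamble = np.array([1, -1, 1, 1, -1, 1, -1, -1])   # stand-in for the 256-element preamble_combination
rng = np.random.default_rng(0)
rx = rng.standard_normal(10_000)                    # stand-in for rxDataDownSampled

# Sign rule from the question: sample < 0 -> +1, otherwise -1.
signs = np.where(rx < 0, 1, -1)

# corr[k] is the dot product of preamble with signs[k : k + len(preamble)],
# i.e. the value the loop builds with roll/multiply/sum, for every position at once.
corr = np.correlate(signs, preamble, mode="valid")

threshold = 6
hits = np.nonzero(corr >= threshold)[0]
if hits.size:
    start = hits[0]
    print("first peak", corr[start], "for the window ending at sample", start + len(preamble) - 1)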
Q: Unable to install coursera-dl I accidentally deleted a file I think called coursera-dl.exe from C:\python310\lib\site-packages. I tried to uninstall it using: pip uninstall coursera-dl it showed this warning: WARNING: Ignoring invalid distribution -oursera-dl (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution -oursera-dl (c:\python310\lib\site-packages) but it was successfully uninstalled. I tried to reinstall it using: pip install coursera-dl but it gives this error: WARNING: Ignoring invalid distribution -oursera-dl (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution -ip (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution -oursera-dl (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution -ip (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages) Requirement already satisfied: coursera-dl in c:\python310\lib\site-packages (0.11.5) Requirement already satisfied: six>=1.5.0 in c:\python310\lib\site-packages (from coursera-dl) (1.16.0) Requirement already satisfied: keyring>=4.0 in c:\python310\lib\site-packages (from coursera-dl) (23.9.1) Requirement already satisfied: requests>=2.10.0 in c:\python310\lib\site-packages (from coursera-dl) (2.28.1) Requirement already satisfied: beautifulsoup4>=4.1.3 in c:\python310\lib\site-packages (from coursera-dl) (4.11.1) Requirement already satisfied: configargparse>=0.12.0 in c:\python310\lib\site-packages (from coursera-dl) (1.5.3) Requirement already satisfied: pyasn1>=0.1.7 in c:\python310\lib\site-packages (from coursera-dl) (0.4.8) Requirement already satisfied: attrs==18.1.0 in c:\python310\lib\site-packages (from coursera-dl) (18.1.0) Requirement already satisfied: urllib3>=1.23 in c:\python310\lib\site-packages (from coursera-dl) (1.26.12) Requirement already satisfied: soupsieve>1.2 in c:\python310\lib\site-packages (from beautifulsoup4>=4.1.3->coursera-dl) (2.3.2.post1) Requirement already satisfied: jaraco.classes in c:\python310\lib\site-packages (from keyring>=4.0->coursera-dl) (3.2.2) Requirement already satisfied: pywin32-ctypes!=0.1.0,!=0.1.1 in c:\python310\lib\site-packages (from keyring>=4.0->coursera-dl) (0.2.0) Requirement already satisfied: idna<4,>=2.5 in c:\python310\lib\site-packages (from requests>=2.10.0->coursera-dl) (3.3) Requirement already satisfied: charset-normalizer<3,>=2 in c:\python310\lib\site-packages (from requests>=2.10.0->coursera-dl) (2.1.1) Requirement already satisfied: certifi>=2017.4.17 in c:\python310\lib\site-packages (from requests>=2.10.0->coursera-dl) (2022.6.15.1) Requirement already satisfied: more-itertools in c:\python310\lib\site-packages (from jaraco.classes->keyring>=4.0->coursera-dl) (8.14.0) WARNING: Ignoring invalid distribution -oursera-dl (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution -ip (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution -oursera-dl (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution -ip (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages) Any help will be appreciated. Thanks in advance. A: Try: pip install --upgrade --force-reinstall coursera-dl or pip install --ignore-installed coursera-dl
Unable to install coursera-dl
I accidentally deleted a file I think called coursera-dl.exe from C:\python310\lib\site-packages. I tried to uninstall it using: pip uninstall coursera-dl it showed this warning: WARNING: Ignoring invalid distribution -oursera-dl (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution -oursera-dl (c:\python310\lib\site-packages) but it was successfully uninstalled. I tried to reinstall it using: pip install coursera-dl but it gives this error: WARNING: Ignoring invalid distribution -oursera-dl (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution -ip (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution -oursera-dl (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution -ip (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages) Requirement already satisfied: coursera-dl in c:\python310\lib\site-packages (0.11.5) Requirement already satisfied: six>=1.5.0 in c:\python310\lib\site-packages (from coursera-dl) (1.16.0) Requirement already satisfied: keyring>=4.0 in c:\python310\lib\site-packages (from coursera-dl) (23.9.1) Requirement already satisfied: requests>=2.10.0 in c:\python310\lib\site-packages (from coursera-dl) (2.28.1) Requirement already satisfied: beautifulsoup4>=4.1.3 in c:\python310\lib\site-packages (from coursera-dl) (4.11.1) Requirement already satisfied: configargparse>=0.12.0 in c:\python310\lib\site-packages (from coursera-dl) (1.5.3) Requirement already satisfied: pyasn1>=0.1.7 in c:\python310\lib\site-packages (from coursera-dl) (0.4.8) Requirement already satisfied: attrs==18.1.0 in c:\python310\lib\site-packages (from coursera-dl) (18.1.0) Requirement already satisfied: urllib3>=1.23 in c:\python310\lib\site-packages (from coursera-dl) (1.26.12) Requirement already satisfied: soupsieve>1.2 in c:\python310\lib\site-packages (from beautifulsoup4>=4.1.3->coursera-dl) (2.3.2.post1) Requirement already satisfied: jaraco.classes in c:\python310\lib\site-packages (from keyring>=4.0->coursera-dl) (3.2.2) Requirement already satisfied: pywin32-ctypes!=0.1.0,!=0.1.1 in c:\python310\lib\site-packages (from keyring>=4.0->coursera-dl) (0.2.0) Requirement already satisfied: idna<4,>=2.5 in c:\python310\lib\site-packages (from requests>=2.10.0->coursera-dl) (3.3) Requirement already satisfied: charset-normalizer<3,>=2 in c:\python310\lib\site-packages (from requests>=2.10.0->coursera-dl) (2.1.1) Requirement already satisfied: certifi>=2017.4.17 in c:\python310\lib\site-packages (from requests>=2.10.0->coursera-dl) (2022.6.15.1) Requirement already satisfied: more-itertools in c:\python310\lib\site-packages (from jaraco.classes->keyring>=4.0->coursera-dl) (8.14.0) WARNING: Ignoring invalid distribution -oursera-dl (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution -ip (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution -oursera-dl (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution -ip (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages) Any help will be appreciated. Thanks in advance.
[ "Try:\npip install --upgrade --force-reinstall coursera-dl\n\nor\npip install --ignore-installed coursera-dl\n\n" ]
[ 0 ]
[]
[]
[ "cmd", "coursera_api", "python" ]
stackoverflow_0074631073_cmd_coursera_api_python.txt
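The repeated 'Ignoring invalid distribution -oursera-dl' warnings usually point to leftover folders in site-packages whose names begin with '~' (pip renames a package that way during uninstall, and an interrupted removal leaves the fragment behind). That reading of the warning is an assumption on my part; the sketch below only lists such leftovers so they can be inspected and deleted by hand before reinstalling.

import sysconfig
from pathlib import Path

# Location of this interpreter's site-packages directory.
site_packages = Path(sysconfig.get_paths()["purelib"])

leftovers = sorted(p for p in site_packages.iterdir() if p.name.startswith("~"))
for p in leftovers:
    print("leftover install fragment:", p)
if not leftovers:
    print("no '~' folders found in", site_packages)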
Q: python re unterminated character set at position 0 CODE: import re inp=input() tup=tuple(map(str,inp.split(','))) i=0 while i<len(tup): x=tup[i] a=re.search("[0-9a-zA-Z\$#@",x) if a!="None": break else: i=i+1 if a!="None" and len(tup[i])>=6 and len(tup[i])<=12: print(tup[i]) else: print("invalid") INPUT: ABd1234@1,a F1#,2w3E*,2We3345 ERROR: unterminated character set at position 0 A: The error stems from the invalid regular expression - specifically, you've omitted the right-bracket. However, even if you fix that, based on the code shown in the question, this isn't going to work for a couple of reasons. The return value from re.search will always be unequal to 'None' The final if test in the code is outside the while loop which is almost certainly not what's wanted. Try this instead: import string VALIDCHARS = set(string.ascii_letters+string.digits+'$#@') for word in input().split(','): if 6 <= len(word) <= 12 and all(c in VALIDCHARS for c in word): print(f'{word} is valid') else: print(f'{word} is invalid') A: Your regular expression is missing a closing bracket. It should be: a=re.search("[0-9a-zA-Z\$#@]",x) Also, replace all instances of "None" the string with None the keyword. This is because .search returns None as can be seen here. https://docs.python.org/3/library/re.html#re.search
python re unterminated character set at position 0
CODE: import re inp=input() tup=tuple(map(str,inp.split(','))) i=0 while i<len(tup): x=tup[i] a=re.search("[0-9a-zA-Z\$#@",x) if a!="None": break else: i=i+1 if a!="None" and len(tup[i])>=6 and len(tup[i])<=12: print(tup[i]) else: print("invalid") INPUT: ABd1234@1,a F1#,2w3E*,2We3345 ERROR: unterminated character set at position 0
[ "The error stems from the invalid regular expression - specifically, you've omitted the right-bracket.\nHowever, even if you fix that, based on the code shown in the question, this isn't going to work for a couple of reasons.\n\nThe return value from re.search will always be unequal to 'None'\nThe final if test in the code is outside the while loop which is almost certainly not what's wanted.\n\nTry this instead:\nimport string\n\nVALIDCHARS = set(string.ascii_letters+string.digits+'$#@')\n\nfor word in input().split(','):\n if 6 <= len(word) <= 12 and all(c in VALIDCHARS for c in word):\n print(f'{word} is valid')\n else:\n print(f'{word} is invalid')\n\n", "Your regular expression is missing a closing bracket. It should be:\na=re.search(\"[0-9a-zA-Z\\$#@]\",x)\n\nAlso, replace all instances of \"None\" the string with None the keyword. This is because .search returns None as can be seen here.\nhttps://docs.python.org/3/library/re.html#re.search\n" ]
[ 0, 0 ]
[]
[]
[ "python", "python_re", "regex", "search", "tuples" ]
stackoverflow_0074658401_python_python_re_regex_search_tuples.txt
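Putting the two answers together, here is a compact sketch of what the question's code appears to be aiming for: keep the comma-separated items that are 6-12 characters long and made only of letters, digits, $, # and @. Reading those rules out of the question's code is an assumption, so adjust them as needed.

import re

# Same character class as the question, with the closing ] it was missing.
VALID = re.compile(r"[0-9a-zA-Z$#@]+")

def valid_items(text):
    items = [item.strip() for item in text.split(",")]
    return [item for item in items if 6 <= len(item) <= 12 and VALID.fullmatch(item)]

print(valid_items("ABd1234@1,a F1#,2w3E*,2We3345"))   # ['ABd1234@1', '2We3345']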
Q: How to easier to split csv data by substring using python Finally I want to split clearly like this photo *NOT replace, I want to SPLIT and not just using "," to split MUST according to substring to split it I have a csv like: date, time, ID1, ID2, ID3, "Action=xxx, ProdCode=XXXX, Cmd=xxx, Price=xxxxx, Qty=xxx, TradedQty=xxx, Validity=xxx, Status=xxx, AddBy=xxxxxx, TimeStamp=xxx, ClOrderId=xxx, ChannelId=xxx",x,x,ID4 date, time, ID1, ID2, ID3, "Action=xxx, RetCode=xxx, ProdCode=xxxx, Cmd=xxx, Price=xxxx, Qty=xxx, TradedQty=0, Validity=xxx, Status=xxx, ExtOrderNo=xxxxx, Ref=0, AddBy=xxxxx, Gateway=xxxxx, TimeStamp=xxx, ClOrderId=xxx",x,x,ID4 date, time, ID1, ID2, ID3, "Action=xx, RetCode=xx, ProdCode=xxx, Cmd=xx, Price=xxx, Qty=x, TradedQty=x, Status=xxx, ExtOrderNo=xxx, Ref=xxx, AddBy=xx, Gateway=xxx, TimeStamp=xxx",x,x,ID4 date,time,ID1,ID2,ID3,"Action=xxx, ProdCode=xxx, Cmd=xxx, Price=xxx, Qty=x, ExtOrderNo=xxx, TradeNo=xxx, Ref=@xxx, AddBy=xxx, Gateway=xxx",x,x,ID4 How can I easier to split to different column by the string before "="? And if there is no relevant words in the row, the row is empty Or add "word=," or simply add "," at that position Final Result LIKE: date, time, ID1, ID2, ID3, "Action=xxx, **RetCode=,** ProdCode=XXXX, Cmd=xxx, Price=xxxxx, Qty=xxx, TradedQty=xxx, Validity=xxx, Status=xxx, **ExtOrderNo=,** **TradeNo=,** **Ref=,** AddBy=xxxxxx, **Gateway=,** TimeStamp=xxx, ClOrderId=xxx, ChannelId=xxx",x,x,ID4 date, time, ID1, ID2, ID3, "Action=xxx, RetCode=xxx, ProdCode=xxxx, Cmd=xxx, Price=xxxx, Qty=xxx, TradedQty=0, Validity=xxx, Status=xxx, ExtOrderNo=xxxxx, **TradeNo=,** Ref=0, AddBy=xxxxx, Gateway=xxxxx, TimeStamp=xxx, ClOrderId=xxx **ChannelId=,**",x,x,ID4 date, time, ID1, ID2, ID3, "Action=xxx, RetCode=xx, ProdCode=xxx, Cmd=xx, Price=xxx, Qty=x, TradedQty=x, **Validity=,** Status=xxx, ExtOrderNo=xxx, **TradeNo=,** Ref=xxx, AddBy=xx, Gateway=xxx, TimeStamp=xxx **ClOrderId=,** **ChannelId=,**",x,x,ID4 date,time,ID1,ID2,ID3,"Action=xxx, **RetCode=,** ProdCode=xxx, Cmd=xxx, Price=xxx, Qty=x, **TradedQty=,** **Validity=,** **Status=,** ExtOrderNo=xxx, TradeNo=xxx, Ref=@xxx, AddBy=xxx, Gateway=xxx **TimeStamp=,** **ClOrderId=,** **ChannelId=,**",x,x,ID4 p.s. above just some example of the csv, maybe have other words=xxx, how can I easier to split it I want clearly in csv or excel show which data exists and which data does not A: Not sure I understand 100%, but let me try to help. The focus points are: # import the pandas library and alias as pd import pandas as pd # read a csv with the example data df = pd.read_csv("data.csv", sep=",", quoting=False, header = None) # replace any values that match the pattern "something=value" with "value" df.replace(to_replace=r"^(.*)=", value="", regex=True, inplace=True) # save to a new csv file: df.to_csv("new_data.csv", sep=",", header = None, index = False)
How to easier to split csv data by substring using python
Finally I want to split clearly like this photo *NOT replace, I want to SPLIT and not just using "," to split MUST according to substring to split it I have a csv like: date, time, ID1, ID2, ID3, "Action=xxx, ProdCode=XXXX, Cmd=xxx, Price=xxxxx, Qty=xxx, TradedQty=xxx, Validity=xxx, Status=xxx, AddBy=xxxxxx, TimeStamp=xxx, ClOrderId=xxx, ChannelId=xxx",x,x,ID4 date, time, ID1, ID2, ID3, "Action=xxx, RetCode=xxx, ProdCode=xxxx, Cmd=xxx, Price=xxxx, Qty=xxx, TradedQty=0, Validity=xxx, Status=xxx, ExtOrderNo=xxxxx, Ref=0, AddBy=xxxxx, Gateway=xxxxx, TimeStamp=xxx, ClOrderId=xxx",x,x,ID4 date, time, ID1, ID2, ID3, "Action=xx, RetCode=xx, ProdCode=xxx, Cmd=xx, Price=xxx, Qty=x, TradedQty=x, Status=xxx, ExtOrderNo=xxx, Ref=xxx, AddBy=xx, Gateway=xxx, TimeStamp=xxx",x,x,ID4 date,time,ID1,ID2,ID3,"Action=xxx, ProdCode=xxx, Cmd=xxx, Price=xxx, Qty=x, ExtOrderNo=xxx, TradeNo=xxx, Ref=@xxx, AddBy=xxx, Gateway=xxx",x,x,ID4 How can I easier to split to different column by the string before "="? And if there is no relevant words in the row, the row is empty Or add "word=," or simply add "," at that position Final Result LIKE: date, time, ID1, ID2, ID3, "Action=xxx, **RetCode=,** ProdCode=XXXX, Cmd=xxx, Price=xxxxx, Qty=xxx, TradedQty=xxx, Validity=xxx, Status=xxx, **ExtOrderNo=,** **TradeNo=,** **Ref=,** AddBy=xxxxxx, **Gateway=,** TimeStamp=xxx, ClOrderId=xxx, ChannelId=xxx",x,x,ID4 date, time, ID1, ID2, ID3, "Action=xxx, RetCode=xxx, ProdCode=xxxx, Cmd=xxx, Price=xxxx, Qty=xxx, TradedQty=0, Validity=xxx, Status=xxx, ExtOrderNo=xxxxx, **TradeNo=,** Ref=0, AddBy=xxxxx, Gateway=xxxxx, TimeStamp=xxx, ClOrderId=xxx **ChannelId=,**",x,x,ID4 date, time, ID1, ID2, ID3, "Action=xxx, RetCode=xx, ProdCode=xxx, Cmd=xx, Price=xxx, Qty=x, TradedQty=x, **Validity=,** Status=xxx, ExtOrderNo=xxx, **TradeNo=,** Ref=xxx, AddBy=xx, Gateway=xxx, TimeStamp=xxx **ClOrderId=,** **ChannelId=,**",x,x,ID4 date,time,ID1,ID2,ID3,"Action=xxx, **RetCode=,** ProdCode=xxx, Cmd=xxx, Price=xxx, Qty=x, **TradedQty=,** **Validity=,** **Status=,** ExtOrderNo=xxx, TradeNo=xxx, Ref=@xxx, AddBy=xxx, Gateway=xxx **TimeStamp=,** **ClOrderId=,** **ChannelId=,**",x,x,ID4 p.s. above just some example of the csv, maybe have other words=xxx, how can I easier to split it I want clearly in csv or excel show which data exists and which data does not
[ "Not sure I understand 100%, but let me try to help.\nThe focus points are:\n# import the pandas library and alias as pd\nimport pandas as pd\n\n# read a csv with the example data\ndf = pd.read_csv(\"data.csv\", sep=\",\", quoting=False, header = None)\n\n# replace any values that match the pattern \"something=value\" with \"value\"\ndf.replace(to_replace=r\"^(.*)=\", value=\"\", regex=True, inplace=True)\n# save to a new csv file:\ndf.to_csv(\"new_data.csv\", sep=\",\", header = None, index = False)\n\n" ]
[ 0 ]
[]
[]
[ "csv", "pandas", "python" ]
stackoverflow_0074658256_csv_pandas_python.txt
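For the goal described in the question - one real column per key, left empty when a row lacks that key - a different sketch from the answer above: parse each quoted 'key=value, key=value' field into a dict and let pandas align the keys into columns. The file names data.csv/new_data.csv follow the answer, and treating column 5 as the quoted field matches the sample rows shown, but both are assumptions about the real data.

import pandas as pd

# skipinitialspace lets pandas still treat , "Action=..." (space before the quote)
# as one quoted field, so the whole key=value text lands in a single column.
df = pd.read_csv("data.csv", header=None, skipinitialspace=True)

def to_dict(cell):
    # "Action=xxx, ProdCode=yyy, ..." -> {"Action": "xxx", "ProdCode": "yyy", ...}
    out = {}
    for chunk in str(cell).split(","):
        key, sep, value = chunk.partition("=")
        if sep:                        # skip chunks that have no '='
            out[key.strip()] = value.strip()
    return out

details = df[5].map(to_dict).apply(pd.Series)        # one column per key, NaN where absent
result = pd.concat([df.drop(columns=5), details], axis=1)
result.to_csv("new_data.csv", index=False)           # NaN is written as an empty cell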
Q: How to deal with the categorical variable of more than 33 000 cities? I work in Python. I have a problem with the categorical variable - "city". I'm building a predictive model on a large dataset-over 1 million rows. I have over 100 features. One of them is "city", consisting of 33 000 different cities. I use e.g. XGBoost where I need to convert categorical variables into numeric. Dummifying causes the number of features to increase strongly. XGBoost (and my 20 gb RAM) can't handle this. Is there any other way to deal with this variable than e.g. One Hot Encoding, dummies etc.? (When using One Hot Encoding e.g., I have performance problems, there are too many features in my model and I'm running out of memory.) Is there any way to deal with this? A: XGBoost has also since version 1.3.0 added experimental support for categorical encoding. Copying my answer from another question. Nov 23, 2020 XGBoost has since version 1.3.0 added experimental support for categorical features. From the docs: 1.8.7 Categorical Data Other than users performing encoding, XGBoost has experimental support for categorical data using gpu_hist and gpu_predictor. No special operation needs to be done on input test data since the information about categories is encoded into the model during training. https://buildmedia.readthedocs.org/media/pdf/xgboost/latest/xgboost.pdf In the DMatrix section the docs also say: enable_categorical (boolean, optional) – New in version 1.3.0. Experimental support of specializing for categorical features. Do not set to True unless you are interested in development. Currently it’s only available for gpu_hist tree method with 1 vs rest (one hot) categorical split. Also, JSON serialization format, gpu_predictor and pandas input are required. Other models option: If you don't need to use XGBoost, you can use a model like LightGBM or or CatBoost which support categorical encoding without one-hot-encoding out of the box. A: You could use some kind of embeddings that reflect better those cities (and compress the number of total features by direct OHE), maybe using some features to describe the continet where each city belongs, then some other features to describe the country/region, etc. Note that since you didn't provide any specific detail about this task, I've used only geographical data on my example, but you could use some other variables related to each city, like the mean temprature, the population, the area, etc, depending on the task you are trying to address here. Another approach could be replacing the city name with its coordinates (latitude and longitude). Again, this may be helpful depending on the task for your model. Hope this helps A: Beside the models, you could also decrease the number of the features (cities) by grouping them in geographical regions. Another option is grouping them by population size. Another option is grouping them by their frequency by using quantile bins. Target encoding might be another option for you. Feature engineering in many cases involves a lot of manual work, unfortunately you cannot always have everything sorted out automatically. A: There are already great responses here. Other technique I would use is cluster those cities into groups using K-means clustering with some of the features specific to cities in your dataset. By this way you could use the cluster number in place of the actual city. This could reduce the number of levels quite a bit.
How to deal with the categorical variable of more than 33 000 cities?
I work in Python. I have a problem with the categorical variable - "city". I'm building a predictive model on a large dataset-over 1 million rows. I have over 100 features. One of them is "city", consisting of 33 000 different cities. I use e.g. XGBoost where I need to convert categorical variables into numeric. Dummifying causes the number of features to increase strongly. XGBoost (and my 20 gb RAM) can't handle this. Is there any other way to deal with this variable than e.g. One Hot Encoding, dummies etc.? (When using One Hot Encoding e.g., I have performance problems, there are too many features in my model and I'm running out of memory.) Is there any way to deal with this?
[ "XGBoost has also since version 1.3.0 added experimental support for categorical encoding.\nCopying my answer from another question.\nNov 23, 2020\nXGBoost has since version 1.3.0 added experimental support for categorical features. From the docs:\n\n1.8.7 Categorical Data\nOther than users performing encoding, XGBoost has experimental support\nfor categorical data using gpu_hist and gpu_predictor. No special\noperation needs to be done on input test data since the information\nabout categories is encoded into the model during training.\n\nhttps://buildmedia.readthedocs.org/media/pdf/xgboost/latest/xgboost.pdf\nIn the DMatrix section the docs also say:\n\nenable_categorical (boolean, optional) – New in version 1.3.0.\nExperimental support of specializing for categorical features. Do not\nset to True unless you are interested in development. Currently it’s\nonly available for gpu_hist tree method with 1 vs rest (one hot)\ncategorical split. Also, JSON serialization format, gpu_predictor and\npandas input are required.\n\nOther models option:\nIf you don't need to use XGBoost, you can use a model like LightGBM or or CatBoost which support categorical encoding without one-hot-encoding out of the box.\n", "You could use some kind of embeddings that reflect better those cities (and compress the number of total features by direct OHE), maybe using some features to describe the continet where each city belongs, then some other features to describe the country/region, etc.\nNote that since you didn't provide any specific detail about this task, I've used only geographical data on my example, but you could use some other variables related to each city, like the mean temprature, the population, the area, etc, depending on the task you are trying to address here.\nAnother approach could be replacing the city name with its coordinates (latitude and longitude). Again, this may be helpful depending on the task for your model.\nHope this helps\n", "Beside the models, you could also decrease the number of the features (cities) by grouping them in geographical regions. Another option is grouping them by population size.\nAnother option is grouping them by their frequency by using quantile bins. Target encoding might be another option for you.\nFeature engineering in many cases involves a lot of manual work, unfortunately you cannot always have everything sorted out automatically.\n", "There are already great responses here.\nOther technique I would use is cluster those cities into groups using K-means clustering with some of the features specific to cities in your dataset.\nBy this way you could use the cluster number in place of the actual city. This could reduce the number of levels quite a bit.\n" ]
[ 2, 1, 1, 0 ]
[]
[]
[ "forecasting", "python", "xgboost" ]
stackoverflow_0061975690_forecasting_python_xgboost.txt
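To make the frequency and target-encoding suggestions above concrete, a small pandas-only sketch; the column names city and target and the smoothing constant are placeholders for whatever the real dataset uses.

import pandas as pd

df = pd.DataFrame({
    "city": ["Paris", "Paris", "Lyon", "Oslo", "Lyon", "Paris"],
    "target": [1, 0, 1, 0, 1, 1],
})

# Frequency encoding: each city becomes its share of the rows.
freq = df["city"].value_counts(normalize=True)
df["city_freq"] = df["city"].map(freq)

# Smoothed target encoding: blend the per-city mean with the global mean
# so that rare cities are pulled toward the overall average.
global_mean = df["target"].mean()
stats = df.groupby("city")["target"].agg(["mean", "count"])
smoothing = 10
encoding = (stats["count"] * stats["mean"] + smoothing * global_mean) / (stats["count"] + smoothing)
df["city_target_enc"] = df["city"].map(encoding)

print(df)

When this feeds a real model, the target encoding should be fitted on the training folds only, otherwise the target leaks into the feature.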
Q: spacy Can't find model 'en_core_web_sm' on windows 10 and Python 3.5.3 :: Anaconda custom (64-bit) what is difference between spacy.load('en_core_web_sm') and spacy.load('en')? This link explains different model sizes. But i am still not clear how spacy.load('en_core_web_sm') and spacy.load('en') differ spacy.load('en') runs fine for me. But the spacy.load('en_core_web_sm') throws error i have installed spacyas below. when i go to jupyter notebook and run command nlp = spacy.load('en_core_web_sm') I get the below error --------------------------------------------------------------------------- OSError Traceback (most recent call last) <ipython-input-4-b472bef03043> in <module>() 1 # Import spaCy and load the language library 2 import spacy ----> 3 nlp = spacy.load('en_core_web_sm') 4 5 # Create a Doc object C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder\lib\site-packages\spacy\__init__.py in load(name, **overrides) 13 if depr_path not in (True, False, None): 14 deprecation_warning(Warnings.W001.format(path=depr_path)) ---> 15 return util.load_model(name, **overrides) 16 17 C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder\lib\site-packages\spacy\util.py in load_model(name, **overrides) 117 elif hasattr(name, 'exists'): # Path or Path-like to model data 118 return load_model_from_path(name, **overrides) --> 119 raise IOError(Errors.E050.format(name=name)) 120 121 OSError: [E050] Can't find model 'en_core_web_sm'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory. how I installed Spacy --- (C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder) C:\Users\nikhizzz>conda install -c conda-forge spacy Fetching package metadata ............. Solving package specifications: . Package plan for installation in environment C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder: The following NEW packages will be INSTALLED: blas: 1.0-mkl cymem: 1.31.2-py35h6538335_0 conda-forge dill: 0.2.8.2-py35_0 conda-forge msgpack-numpy: 0.4.4.2-py_0 conda-forge murmurhash: 0.28.0-py35h6538335_1000 conda-forge plac: 0.9.6-py_1 conda-forge preshed: 1.0.0-py35h6538335_0 conda-forge pyreadline: 2.1-py35_1000 conda-forge regex: 2017.11.09-py35_0 conda-forge spacy: 2.0.12-py35h830ac7b_0 conda-forge termcolor: 1.1.0-py_2 conda-forge thinc: 6.10.3-py35h830ac7b_2 conda-forge tqdm: 4.29.1-py_0 conda-forge ujson: 1.35-py35hfa6e2cd_1001 conda-forge The following packages will be UPDATED: msgpack-python: 0.4.8-py35_0 --> 0.5.6-py35he980bc4_3 conda-forge The following packages will be DOWNGRADED: freetype: 2.7-vc14_2 conda-forge --> 2.5.5-vc14_2 Proceed ([y]/n)? y blas-1.0-mkl.t 100% |###############################| Time: 0:00:00 0.00 B/s cymem-1.31.2-p 100% |###############################| Time: 0:00:00 1.65 MB/s msgpack-python 100% |###############################| Time: 0:00:00 5.37 MB/s murmurhash-0.2 100% |###############################| Time: 0:00:00 1.49 MB/s plac-0.9.6-py_ 100% |###############################| Time: 0:00:00 0.00 B/s pyreadline-2.1 100% |###############################| Time: 0:00:00 4.62 MB/s regex-2017.11. 100% |###############################| Time: 0:00:00 3.31 MB/s termcolor-1.1. 
100% |###############################| Time: 0:00:00 187.81 kB/s tqdm-4.29.1-py 100% |###############################| Time: 0:00:00 2.51 MB/s ujson-1.35-py3 100% |###############################| Time: 0:00:00 1.66 MB/s dill-0.2.8.2-p 100% |###############################| Time: 0:00:00 4.34 MB/s msgpack-numpy- 100% |###############################| Time: 0:00:00 0.00 B/s preshed-1.0.0- 100% |###############################| Time: 0:00:00 0.00 B/s thinc-6.10.3-p 100% |###############################| Time: 0:00:00 5.49 MB/s spacy-2.0.12-p 100% |###############################| Time: 0:00:10 7.42 MB/s (C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder) C:\Users\nikhizzz>python -V Python 3.5.3 :: Anaconda custom (64-bit) (C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder) C:\Users\nikhizzz>python -m spacy download en Collecting en_core_web_sm==2.0.0 from https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.0.0/en_core_web_sm-2.0.0.tar.gz#egg=en_core_web_sm==2.0.0 Downloading https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.0.0/en_core_web_sm-2.0.0.tar.gz (37.4MB) 100% |################################| 37.4MB ... Installing collected packages: en-core-web-sm Running setup.py install for en-core-web-sm ... done Successfully installed en-core-web-sm-2.0.0 Linking successful C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder\lib\site-packages\en_core_web_sm --> C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder\lib\site-packages\spacy\data\en You can now load the model via spacy.load('en') (C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder) C:\Users\nikhizzz> A: Initially I downloaded two en packages using following statements in anaconda prompt. python -m spacy download en_core_web_lg python -m spacy download en_core_web_sm But, I kept on getting linkage error and finally running below command helped me to establish link and solved error. python -m spacy download en Also make sure you to restart your runtime if working with Jupyter. -PS : If you get linkage error try giving admin previlages. A: The answer to your misunderstanding is a Unix concept, softlinks which we could say that in Windows are similar to shortcuts. Let's explain this. When you spacy download en, spaCy tries to find the best small model that matches your spaCy distribution. The small model that I am talking about defaults to en_core_web_sm which can be found in different variations which correspond to the different spaCy versions (for example spacy, spacy-nightly have en_core_web_sm of different sizes). When spaCy finds the best model for you, it downloads it and then links the name en to the package it downloaded, e.g. en_core_web_sm. That basically means that whenever you refer to en you will be referring to en_core_web_sm. In other words, en after linking is not a "real" package, is just a name for en_core_web_sm. However, it doesn't work the other way. You can't refer directly to en_core_web_sm because your system doesn't know you have it installed. When you did spacy download en you basically did a pip install. So pip knows that you have a package named en installed for your python distribution, but knows nothing about the package en_core_web_sm. This package is just replacing package en when you import it, which means that package en is just a softlink to en_core_web_sm. 
Of course, you can directly download en_core_web_sm, using the command: python -m spacy download en_core_web_sm, or you can even link the name en to other models as well. For example, you could do python -m spacy download en_core_web_lg and then python -m spacy link en_core_web_lg en. That would make en a name for en_core_web_lg, which is a large spaCy model for the English language. Hope it is clear now :) A: The below worked for me : import en_core_web_sm nlp = en_core_web_sm.load() A: For those who are still facing problems even after installing it as administrator from Anaconda prompt, here's a quick fix: Got to the path where it is downloaded. For e.g. C:\Users\name\AppData\Local\Continuum\anaconda3\Lib\site-packages\en_core_web_sm\en_core_web_sm-2.2.0 Copy the path. Paste it in: nlp = spacy.load(r'C:\Users\name\AppData\Local\Continuum\anaconda3\Lib\site-packages\en_core_web_sm\en_core_web_sm-2.2.0') Works like a charm :) PS: Check for spacy version A: Using the Spacy language model in Colab requires only the following two steps: Download the model (change the name according to the size of the model) !python -m spacy download en_core_web_lg Restart the colab runtime! Perform shortcut key: Ctrl + M + . Test import spacy nlp = spacy.load("en_core_web_lg") successful!!! A: Try this method as this worked like a charm to me: In your Anaconda Prompt, run the command: !python -m spacy download en After running the above command, you should be able to execute the below in your jupyter notebook: spacy.load('en_core_web_sm') A: First of all, install spacy using the following command for jupyter notebook pip install -U spacy Then write the following code: import en_core_web_sm nlp = en_core_web_sm.load() A: I am running Jupyter Notebook on Windows. Finally, its a version issue, Need to execute below commands in conda cmd prompt( open as admin) pip install spacy==2.3.5 python -m spacy download en_core_web_sm python -m spacy download en from chatterbot import ChatBot import spacy import en_core_web_sm nlp = en_core_web_sm.load() ChatBot("hello") Output - A: Don't run !python -m spacy download en_core_web_lg from inside jupyter. Do this instead: import spacy.cli spacy.cli.download("en_core_web_lg") You may need to restart the kernel before running the above two commands for it to work. A: import spacy nlp = spacy.load('/opt/anaconda3/envs/NLPENV/lib/python3.7/site-packages/en_core_web_sm/en_core_web_sm-2.3.1') Try giving the absolute path of the package with the version as shown in the image. It works perfectly fine. A: a simple solution for this which I saw on spacy.io from spacy.lang.en import English nlp=English() https://course.spacy.io/en/chapter1 A: As for Windows based Anaconda, Open Anaconda Prompt Activate your environment. Ex: active myspacyenv pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz python -m spacy download en_core_web_sm Open Jupyter Notebook ex: active myspacyenv and then jupyter notebook on Anaconda Promt import spacy spacy.load('en_core_web_sm') and it will run peacefully! 
A: Steps to load up modules based on different versions of spacy download the best-matching version of a specific model for your spaCy installation python -m spacy download en_core_web_sm pip install .tar.gz archive from path or URL pip install /Users/you/en_core_web_sm-2.2.0.tar.gz or pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz Add to your requirements file or environment yaml file. Theres range of version that one spacy version is comptable with you can view more under https://github.com/explosion/spacy-models/releases if your not sure running below code nlp = spacy.load('en_core_web_sm') will give off a warning telling what version model will be compatible with your installed spacy verion enironment.yml example name: root channels: - defaults - conda-forge - anaconda dependencies: - python=3.8.3 - pip - spacy=2.3.2 - scikit-learn=0.23.2 - pip: - https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.3.1/en_core_web_sm-2.3.1.tar.gz#egg=en_core_web_sm A: Open Anaconda Navigator. Click on any IDE. Run the code: !pip install -U spacy download en_core_web_sm !pip install -U spacy download en_core_web_sm It will work. If you are open IDE directly close it and follow this procedure once. A: Loading the module using the different syntax worked for me. import en_core_web_sm nlp = en_core_web_sm.load() A: Anaconda Users If you're using a conda virtual environment, be sure that its the same version of Python as that in your base environment. To verify this, run python --version in each environment. If not the same, create a new virtual environment with that version of Python (Ex. conda create --name myenv python=x.x.x). Activate the virtual environment (conda activate myenv) conda install -c conda-forge spacy python -m spacy download en_core_web_sm I just ran into this issue, and the above worked for me. This addresses the issue of the download occurring in an area that is not accessible to your current virtual environment. You should then be able to run the following: import spacy nlp = spacy.load("en_core_web_sm") A: Open command prompt or terminal and execute the below code: pip3 install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz Execute the below chunk in your Jupiter notebook. import spacy nlp = spacy.load('en_core_web_sm') Hope the above code works for all:) A: I had also same issue as I couldnt load module using '''spacy.load()''' You can follow below steps to solve this on windows: download using !python -m spacy download en_core_web_sm import en_core_web_sm as import en_core_web_sm load using en_core_web_sm.load() to some variable Complete code will be: python -m spacy download en_core_web_sm import en_core_web_sm nlp = en_core_web_sm.load() A: This works with colab: !python -m spacy download en import en_core_web_sm nlp = en_core_web_sm.load() Or for the medium: import en_core_web_md nlp = en_core_web_md.load() A: Instead of any of the above, this solved my error. conda install -c conda-forge spacy-model-en_core_web_sm If you are an anaconda user, this is the solution. A: I'm running PyCharm on MacOS and while none of the above answers completely worked for me, they did provide enough clues and I was finally able to everything working. I am connecting to an ec2 instance and have configured PyCharm such that I can edit on my Mac and it automatically updates the files on my ec2 instance. 
Thus, the problem was on the ec2 side where it was not finding Spacy even though I installed it several different times and ways. If I ran my python script from the command line, everything worked fine. However, from within PyCharm, it was initially not finding Spacy and the models. I eventually fixed the "finding" spacy issue using the above recommendation of adding a "requirements.txt" file. But the models were still not recognized. My solution: download the models manually and place them in the file system on the ec2 instance and explicitly point to them when loaded. I downloaded the files from here: https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.0.0/en_core_web_sm-3.0.0.tar.gz https://github.com/explosion/spacy-models/releases/download/en_core_web_lg-3.0.0/en_core_web_lg-3.0.0.tar.gz After downloading, I dropped moved them to my ec2 instance, decompressed and untared them in my filesystem, e.g. /path_to_models/en_core_web_lg-3.0.0/ I then load a model using the explicit path and it worked from within PyCharm (note the path used goes all the way to en_core_web_lg-3.0.0; you will get an error if you do not use the folder with the config.cfg file): nlpObject = spacy.load('/path_to_models/en_core_web_lg-3.0.0/en_core_web_lg/en_core_web_lg-3.0.0') A: Check installed version of spacy pip show spacy You will get something like this: Name: spacy Version: 3.1.3 Summary: Industrial-strength Natural Language Processing (NLP) in Python Install the relevant version of the model using: !pip install -U https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.0.0/en_core_web_sm-3.0.0.tar.gz A: I tried all the above answers but could not succeed. Below worked for me : (Specific to WINDOWS os) Run anaconda command prompt with admin privilege(Important) Then run below commands: pip install -U --user spacy python -m spacy download en Try below command for verification: import spacy spacy.load('en') It might work for others versions as well: A: If you have already downloaded spacy and the language model (E.g., en_core_web_sm or en_core_web_md), then you can follow these steps: Open Anaconda prompt as admin Then type : python -m spacy link [package name or path] [shortcut] For E.g., python -m spacy link /Users/you/model en This will create a symlink to the your language model. Now you can load the model using spacy.load("en") in your notebooks or scripts A: This is what I did: Went to the virtual environment where I was working on Anaconda Prompt / Command Line Ran this: python -m spacy download en_core_web_sm And was done A: TRY THIS :- !python -m spacy download en_core_web_md A: Even I faced similar issue. How I resolved it start anaconda prompt in admin mode. installed both python -m spacy download en and python -m spacy download en_core_web_sm after above steps only I started jupyter notebook where I am accessing this package. Now I can access both import spacy nlp = spacy.load('en_core_web_sm') or nlp = spacy.load('en') Both are working for me. A: I faced a similar issue. I installed spacy and en_core_web_sm from a specific conda environment. However, I got two(02) differents issues as following: [Errno 2] No such file or directory: '....\en_core_web_sm\en_core_web_sm-2.3.1\vocab\lexemes.bin' or OSError: [E050] Can't find model 'en_core_web_sm'.... It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory. 
I did the following: Open Command Prompt as Administrator Go to c:> Activate my Conda environment (If you work in a specific conda environment): c:\>activate <conda environment name> (conda environment name)c:\>python -m spacy download en Return to Jupyter Notebook and you can load the language library: nlp = en_core_web_sm.load() For me, it works :) A: Download en_core_web_sm tar file Open terminal from anaconda or open anaconda evn. Run this: pip3 install /Users/yourpath/Downloads/en_core_web_sm-3.1.0.tar.gz; or pip install /Users/yourpath/Downloads/en_core_web_sm-3.1.0.tar.gz; Restart jupyter, it will work. A: Run this in os console: python -m spacy download en python -m spacy link en_core_web_sm en_core_web_sm Then run this in python console or on your python IDE: import spacy spacy.load('en_core_web_sm') A: This worked for me: conda install -c conda-forge spacy-model-en_core_web_sm A: Best is to follow the official spacy docs for installation (https://spacy.io/usage): First uninstall your current spacy version pip uninstall spacy Then install pacy correctly pip install -U pip setuptools wheel pip install -U spacy python -m spacy download en_core_web_sm
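A minimal sketch that ties the approaches above together — try to load en_core_web_sm and fall back to downloading it on first use. It assumes a spaCy version where spacy.cli.download is available (2.x/3.x) and a working network connection; the helper name is illustrative, and in notebook environments a kernel restart may still be needed after the download, as several answers note.

import spacy

def load_small_english_model(name="en_core_web_sm"):
    # Load the installed model package if present, otherwise download it once and retry.
    try:
        return spacy.load(name)
    except OSError:
        spacy.cli.download(name)  # same effect as: python -m spacy download en_core_web_sm
        return spacy.load(name)

nlp = load_small_english_model()
doc = nlp("This is a sentence.")
print([token.text for token in doc])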
spacy Can't find model 'en_core_web_sm' on windows 10 and Python 3.5.3 :: Anaconda custom (64-bit)
what is difference between spacy.load('en_core_web_sm') and spacy.load('en')? This link explains different model sizes. But i am still not clear how spacy.load('en_core_web_sm') and spacy.load('en') differ spacy.load('en') runs fine for me. But the spacy.load('en_core_web_sm') throws error i have installed spacyas below. when i go to jupyter notebook and run command nlp = spacy.load('en_core_web_sm') I get the below error --------------------------------------------------------------------------- OSError Traceback (most recent call last) <ipython-input-4-b472bef03043> in <module>() 1 # Import spaCy and load the language library 2 import spacy ----> 3 nlp = spacy.load('en_core_web_sm') 4 5 # Create a Doc object C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder\lib\site-packages\spacy\__init__.py in load(name, **overrides) 13 if depr_path not in (True, False, None): 14 deprecation_warning(Warnings.W001.format(path=depr_path)) ---> 15 return util.load_model(name, **overrides) 16 17 C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder\lib\site-packages\spacy\util.py in load_model(name, **overrides) 117 elif hasattr(name, 'exists'): # Path or Path-like to model data 118 return load_model_from_path(name, **overrides) --> 119 raise IOError(Errors.E050.format(name=name)) 120 121 OSError: [E050] Can't find model 'en_core_web_sm'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory. how I installed Spacy --- (C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder) C:\Users\nikhizzz>conda install -c conda-forge spacy Fetching package metadata ............. Solving package specifications: . Package plan for installation in environment C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder: The following NEW packages will be INSTALLED: blas: 1.0-mkl cymem: 1.31.2-py35h6538335_0 conda-forge dill: 0.2.8.2-py35_0 conda-forge msgpack-numpy: 0.4.4.2-py_0 conda-forge murmurhash: 0.28.0-py35h6538335_1000 conda-forge plac: 0.9.6-py_1 conda-forge preshed: 1.0.0-py35h6538335_0 conda-forge pyreadline: 2.1-py35_1000 conda-forge regex: 2017.11.09-py35_0 conda-forge spacy: 2.0.12-py35h830ac7b_0 conda-forge termcolor: 1.1.0-py_2 conda-forge thinc: 6.10.3-py35h830ac7b_2 conda-forge tqdm: 4.29.1-py_0 conda-forge ujson: 1.35-py35hfa6e2cd_1001 conda-forge The following packages will be UPDATED: msgpack-python: 0.4.8-py35_0 --> 0.5.6-py35he980bc4_3 conda-forge The following packages will be DOWNGRADED: freetype: 2.7-vc14_2 conda-forge --> 2.5.5-vc14_2 Proceed ([y]/n)? y blas-1.0-mkl.t 100% |###############################| Time: 0:00:00 0.00 B/s cymem-1.31.2-p 100% |###############################| Time: 0:00:00 1.65 MB/s msgpack-python 100% |###############################| Time: 0:00:00 5.37 MB/s murmurhash-0.2 100% |###############################| Time: 0:00:00 1.49 MB/s plac-0.9.6-py_ 100% |###############################| Time: 0:00:00 0.00 B/s pyreadline-2.1 100% |###############################| Time: 0:00:00 4.62 MB/s regex-2017.11. 100% |###############################| Time: 0:00:00 3.31 MB/s termcolor-1.1. 
100% |###############################| Time: 0:00:00 187.81 kB/s tqdm-4.29.1-py 100% |###############################| Time: 0:00:00 2.51 MB/s ujson-1.35-py3 100% |###############################| Time: 0:00:00 1.66 MB/s dill-0.2.8.2-p 100% |###############################| Time: 0:00:00 4.34 MB/s msgpack-numpy- 100% |###############################| Time: 0:00:00 0.00 B/s preshed-1.0.0- 100% |###############################| Time: 0:00:00 0.00 B/s thinc-6.10.3-p 100% |###############################| Time: 0:00:00 5.49 MB/s spacy-2.0.12-p 100% |###############################| Time: 0:00:10 7.42 MB/s (C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder) C:\Users\nikhizzz>python -V Python 3.5.3 :: Anaconda custom (64-bit) (C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder) C:\Users\nikhizzz>python -m spacy download en Collecting en_core_web_sm==2.0.0 from https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.0.0/en_core_web_sm-2.0.0.tar.gz#egg=en_core_web_sm==2.0.0 Downloading https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.0.0/en_core_web_sm-2.0.0.tar.gz (37.4MB) 100% |################################| 37.4MB ... Installing collected packages: en-core-web-sm Running setup.py install for en-core-web-sm ... done Successfully installed en-core-web-sm-2.0.0 Linking successful C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder\lib\site-packages\en_core_web_sm --> C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder\lib\site-packages\spacy\data\en You can now load the model via spacy.load('en') (C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder) C:\Users\nikhizzz>
[ "Initially I downloaded two en packages using following statements in anaconda prompt.\npython -m spacy download en_core_web_lg\npython -m spacy download en_core_web_sm\n\nBut, I kept on getting linkage error and finally running below command helped me to establish link and solved error.\npython -m spacy download en\n\nAlso make sure you to restart your runtime if working with Jupyter.\n-PS : If you get linkage error try giving admin previlages.\n", "The answer to your misunderstanding is a Unix concept, softlinks which we could say that in Windows are similar to shortcuts. Let's explain this.\nWhen you spacy download en, spaCy tries to find the best small model that matches your spaCy distribution. The small model that I am talking about defaults to en_core_web_sm which can be found in different variations which correspond to the different spaCy versions (for example spacy, spacy-nightly have en_core_web_sm of different sizes). \nWhen spaCy finds the best model for you, it downloads it and then links the name en to the package it downloaded, e.g. en_core_web_sm. That basically means that whenever you refer to en you will be referring to en_core_web_sm. In other words, en after linking is not a \"real\" package, is just a name for en_core_web_sm.\nHowever, it doesn't work the other way. You can't refer directly to en_core_web_sm because your system doesn't know you have it installed. When you did spacy download en you basically did a pip install. So pip knows that you have a package named en installed for your python distribution, but knows nothing about the package en_core_web_sm. This package is just replacing package en when you import it, which means that package en is just a softlink to en_core_web_sm.\nOf course, you can directly download en_core_web_sm, using the command: python -m spacy download en_core_web_sm, or you can even link the name en to other models as well. For example, you could do python -m spacy download en_core_web_lg and then python -m spacy link en_core_web_lg en. That would make \nen a name for en_core_web_lg, which is a large spaCy model for the English language.\nHope it is clear now :) \n", "The below worked for me :\nimport en_core_web_sm\n\nnlp = en_core_web_sm.load()\n\n", "For those who are still facing problems even after installing it as administrator from Anaconda prompt, here's a quick fix:\n\nGot to the path where it is downloaded. 
For e.g.\nC:\\Users\\name\\AppData\\Local\\Continuum\\anaconda3\\Lib\\site-packages\\en_core_web_sm\\en_core_web_sm-2.2.0\n\n\nCopy the path.\n\nPaste it in:\nnlp = spacy.load(r'C:\\Users\\name\\AppData\\Local\\Continuum\\anaconda3\\Lib\\site-packages\\en_core_web_sm\\en_core_web_sm-2.2.0')\n\n\nWorks like a charm :)\n\n\nPS: Check for spacy version\n", "Using the Spacy language model in Colab requires only the following two steps:\n\nDownload the model (change the name according to the size of the model)\n\n!python -m spacy download en_core_web_lg \n\n\nRestart the colab runtime!\nPerform shortcut key: Ctrl + M + .\n\nTest\nimport spacy\nnlp = spacy.load(\"en_core_web_lg\")\n\nsuccessful!!!\n", "Try this method as this worked like a charm to me:\nIn your Anaconda Prompt, run the command:\n!python -m spacy download en\n\nAfter running the above command, you should be able to execute the below in your jupyter notebook:\nspacy.load('en_core_web_sm')\n\n", "First of all, install spacy using the following command for jupyter notebook\npip install -U spacy\nThen write the following code:\nimport en_core_web_sm\nnlp = en_core_web_sm.load()\n\n", "I am running Jupyter Notebook on Windows.\nFinally, its a version issue, Need to execute below commands in conda cmd prompt( open as admin)\n\npip install spacy==2.3.5\n\npython -m spacy download en_core_web_sm\n\npython -m spacy download en\n\n\nfrom chatterbot import ChatBot\nimport spacy\nimport en_core_web_sm\nnlp = en_core_web_sm.load()\nChatBot(\"hello\")\n\nOutput -\n\n", "Don't run !python -m spacy download en_core_web_lg from inside jupyter.\nDo this instead:\nimport spacy.cli\nspacy.cli.download(\"en_core_web_lg\")\n\nYou may need to restart the kernel before running the above two commands for it to work.\n", "import spacy\n\nnlp = spacy.load('/opt/anaconda3/envs/NLPENV/lib/python3.7/site-packages/en_core_web_sm/en_core_web_sm-2.3.1')\n\nTry giving the absolute path of the package with the version as shown in the image.\nIt works perfectly fine.\n", "a simple solution for this which I saw on spacy.io\nfrom spacy.lang.en import English\nnlp=English()\n\nhttps://course.spacy.io/en/chapter1\n", "As for Windows based Anaconda,\n\nOpen Anaconda Prompt\n\nActivate your environment. Ex: active myspacyenv\n\npip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz\n\npython -m spacy download en_core_web_sm\n\nOpen Jupyter Notebook ex: active myspacyenv and then jupyter notebook on Anaconda Promt\n\n\n\nimport spacy spacy.load('en_core_web_sm')\n\nand it will run peacefully!\n", "Steps to load up modules based on different versions of spacy\ndownload the best-matching version of a specific model for your spaCy installation\npython -m spacy download en_core_web_sm\npip install .tar.gz archive from path or URL\npip install /Users/you/en_core_web_sm-2.2.0.tar.gz\n\nor\npip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz\n\nAdd to your requirements file or environment yaml file. 
Theres range of version that one spacy version is comptable with you can view more under https://github.com/explosion/spacy-models/releases\nif your not sure running below code\nnlp = spacy.load('en_core_web_sm') \n\nwill give off a warning telling what version model will be compatible with your installed spacy verion\nenironment.yml example\nname: root\nchannels:\n - defaults\n - conda-forge\n - anaconda\ndependencies:\n - python=3.8.3\n - pip\n - spacy=2.3.2\n - scikit-learn=0.23.2\n - pip:\n - https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.3.1/en_core_web_sm-2.3.1.tar.gz#egg=en_core_web_sm\n\n", "Open Anaconda Navigator. Click on any IDE. Run the code: \n!pip install -U spacy download en_core_web_sm\n!pip install -U spacy download en_core_web_sm\n\nIt will work. If you are open IDE directly close it and follow this procedure once.\n", "Loading the module using the different syntax worked for me.\nimport en_core_web_sm\nnlp = en_core_web_sm.load()\n\n", "Anaconda Users\n\nIf you're using a conda virtual environment, be sure that its the same version of Python as that in your base environment. To verify this, run python --version in each environment. If not the same, create a new virtual environment with that version of Python (Ex. conda create --name myenv python=x.x.x).\nActivate the virtual environment (conda activate myenv)\nconda install -c conda-forge spacy\npython -m spacy download en_core_web_sm\n\nI just ran into this issue, and the above worked for me. This addresses the issue of the download occurring in an area that is not accessible to your current virtual environment.\nYou should then be able to run the following:\nimport spacy\nnlp = spacy.load(\"en_core_web_sm\")\n\n", "Open command prompt or terminal and execute the below code:\npip3 install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz\n\nExecute the below chunk in your Jupiter notebook.\nimport spacy\nnlp = spacy.load('en_core_web_sm')\nHope the above code works for all:)\n", "I had also same issue as I couldnt load module using '''spacy.load()'''\nYou can follow below steps to solve this on windows:\n\ndownload using !python -m spacy download en_core_web_sm\nimport en_core_web_sm as import en_core_web_sm\nload using en_core_web_sm.load() to some variable\n\nComplete code will be:\npython -m spacy download en_core_web_sm\n\nimport en_core_web_sm\n\nnlp = en_core_web_sm.load()\n\n", "This works with colab:\n!python -m spacy download en\nimport en_core_web_sm\nnlp = en_core_web_sm.load()\n\nOr for the medium:\nimport en_core_web_md\nnlp = en_core_web_md.load()\n\n", "Instead of any of the above, this solved my error.\nconda install -c conda-forge spacy-model-en_core_web_sm\nIf you are an anaconda user, this is the solution.\n", "I'm running PyCharm on MacOS and while none of the above answers completely worked for me, they did provide enough clues and I was finally able to everything working. I am connecting to an ec2 instance and have configured PyCharm such that I can edit on my Mac and it automatically updates the files on my ec2 instance. Thus, the problem was on the ec2 side where it was not finding Spacy even though I installed it several different times and ways. If I ran my python script from the command line, everything worked fine. However, from within PyCharm, it was initially not finding Spacy and the models. 
I eventually fixed the \"finding\" spacy issue using the above recommendation of adding a \"requirements.txt\" file. But the models were still not recognized.\nMy solution: download the models manually and place them in the file system on the ec2 instance and explicitly point to them when loaded. I downloaded the files from here:\nhttps://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.0.0/en_core_web_sm-3.0.0.tar.gz\nhttps://github.com/explosion/spacy-models/releases/download/en_core_web_lg-3.0.0/en_core_web_lg-3.0.0.tar.gz\nAfter downloading, I dropped moved them to my ec2 instance, decompressed and untared them in my filesystem, e.g. /path_to_models/en_core_web_lg-3.0.0/\nI then load a model using the explicit path and it worked from within PyCharm (note the path used goes all the way to en_core_web_lg-3.0.0; you will get an error if you do not use the folder with the config.cfg file):\nnlpObject = spacy.load('/path_to_models/en_core_web_lg-3.0.0/en_core_web_lg/en_core_web_lg-3.0.0')\n\n", "Check installed version of spacy\npip show spacy\nYou will get something like this:\nName: spacy\nVersion: 3.1.3\nSummary: Industrial-strength Natural Language Processing (NLP) in Python\nInstall the relevant version of the model using:\n!pip install -U https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.0.0/en_core_web_sm-3.0.0.tar.gz\n", "I tried all the above answers but could not succeed. Below worked for me :\n(Specific to WINDOWS os)\n\nRun anaconda command prompt with admin privilege(Important)\nThen run below commands:\n\n pip install -U --user spacy \n python -m spacy download en\n\n\nTry below command for verification:\n\nimport spacy\nspacy.load('en')\n\n\nIt might work for others versions as well:\n\n\n", "If you have already downloaded spacy and the language model (E.g., en_core_web_sm or en_core_web_md), then you can follow these steps:\n\nOpen Anaconda prompt as admin\n\nThen type : python -m spacy link [package name or path] [shortcut]\nFor E.g., python -m spacy link /Users/you/model en\n\n\nThis will create a symlink to the your language model. Now you can load the model using spacy.load(\"en\") in your notebooks or scripts\n", "This is what I did:\n\nWent to the virtual environment where I was working on Anaconda Prompt / Command Line\n\nRan this: python -m spacy download en_core_web_sm\n\n\nAnd was done\n", "TRY THIS :-\n!python -m spacy download en_core_web_md\n", "Even I faced similar issue. How I resolved it\n\nstart anaconda prompt in admin mode.\ninstalled both\npython -m spacy download en\nand\npython -m spacy download en_core_web_sm\nafter above steps only I started jupyter notebook where I am accessing this package.\nNow I can access both\nimport spacy\nnlp = spacy.load('en_core_web_sm')\nor\nnlp = spacy.load('en')\nBoth are working for me.\n\n", "I faced a similar issue. I installed spacy and en_core_web_sm from a specific conda environment. However, I got two(02) differents issues as following:\n[Errno 2] No such file or directory: '....\\en_core_web_sm\\en_core_web_sm-2.3.1\\vocab\\lexemes.bin'\nor\nOSError: [E050] Can't find model 'en_core_web_sm'.... 
It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory.\nI did the following:\n\nOpen Command Prompt as Administrator\nGo to c:>\nActivate my Conda environment (If you work in a specific conda environment):\n\nc:\\>activate <conda environment name>\n\n\n(conda environment name)c:\\>python -m spacy download en\nReturn to Jupyter Notebook and you can load the language library:\n\nnlp = en_core_web_sm.load()\n\nFor me, it works :)\n", "Download en_core_web_sm tar file\nOpen terminal from anaconda or open anaconda evn.\nRun this:\npip3 install /Users/yourpath/Downloads/en_core_web_sm-3.1.0.tar.gz;\n\nor\npip install /Users/yourpath/Downloads/en_core_web_sm-3.1.0.tar.gz;\n\nRestart jupyter, it will work.\n", "Run this in os console:\npython -m spacy download en\npython -m spacy link en_core_web_sm en_core_web_sm\n\nThen run this in python console or on your python IDE:\nimport spacy\nspacy.load('en_core_web_sm')\n\n", "This worked for me:\nconda install -c conda-forge spacy-model-en_core_web_sm\n", "Best is to follow the official spacy docs for installation (https://spacy.io/usage):\nFirst uninstall your current spacy version\npip uninstall spacy\n\nThen install pacy correctly\npip install -U pip setuptools wheel\npip install -U spacy\npython -m spacy download en_core_web_sm\n\n" ]
[ 156, 83, 40, 17, 15, 11, 5, 5, 5, 4, 4, 3, 3, 2, 2, 2, 2, 2, 2, 2, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "nlp", "python", "python_3.x", "spacy" ]
stackoverflow_0054334304_nlp_python_python_3.x_spacy.txt
Q: How do I get Python to send as many concurrent HTTP requests as possible? I'm trying to send HTTPS requests as quickly as possible. I know this would have to be concurrent requests due to my goal being 150 to 500+ requests a second. I've searched everywhere, but get no Python 3.11+ answer or one that doesn't give me errors. I'm trying to avoid AIOHTTP as the rigmarole of setting it up was a pain, which didn't even work. The input should be an array or URLs and the output an array of the html string. A: It's quite unfortunate that you couldn't setup AIOHTTP properly because this is one of the most efficient way to do asynchronous requests in Python. Setup is not that hard: import asyncio import aiohttp from time import perf_counter def urls(n_reqs: int): for _ in range(n_reqs): yield "https://python.org" async def get(session: aiohttp.ClientSession, url: str): async with session.get(url) as response: _ = await response.text() async def main(n_reqs: int): async with aiohttp.ClientSession() as session: await asyncio.gather( *[get(session, url) for url in urls(n_reqs)] ) if __name__ == "__main__": n_reqs = 10_000 start = perf_counter() asyncio.run(main(n_reqs)) end = perf_counter() print(f"{n_reqs / (end - start)} req/s") You basically need to create a single ClientSession which you then reuse to send the get requests. The requests are made concurrently with to asyncio.gather(). You could also use the newer asyncio.TaskGroup: async def main(n_reqs: int): async with aiohttp.ClientSession() as session: async with asyncio.TaskGroup() as group: for url in urls(n_reqs): group.create_task(get(session, url)) This easily achieves 500+ requests per seconds on my 7+ years old bi-core computer. Contrary to what other answers suggested, this solution does not require to spawn thousands of threads, which are expensive. You may improve the speed even more my using a custom connector in order to allow more concurrent connections (default is 100) in a single session: async def main(n_reqs: int): let connector = aiohttp.TCPConnector(limit=0) async with aiohttp.ClientSession(connector=connector) as session: ... A: Hope this helps, this question asked What is the fastest way to send 10000 http requests I observed 15000 requests in 10s, using wireshark to trap on localhost and saved packets to CSV, only counted packets that had GET in them. 
FILE: a.py from treq import get from twisted.internet import reactor def done(response): if response.code == 200: get("http://localhost:3000").addCallback(done) get("http://localhost:3000").addCallback(done) reactor.callLater(10, reactor.stop) reactor.run() Run test like this: pip3 install treq python3 a.py # code from above Setup test website like this, mine was on port 3000 mkdir myapp cd myapp npm init npm install express node app.js FILE: app.js const express = require('express') const app = express() const port = 3000 app.get('/', (req, res) => { res.send('Hello World!') }) app.listen(port, () => { console.log(`Example app listening on port ${port}`) }) OUTPUT grep GET wireshark.csv | head "5","0.000418","::1","::1","HTTP","139","GET / HTTP/1.1 " "13","0.002334","::1","::1","HTTP","139","GET / HTTP/1.1 " "17","0.003236","::1","::1","HTTP","139","GET / HTTP/1.1 " "21","0.004018","::1","::1","HTTP","139","GET / HTTP/1.1 " "25","0.004803","::1","::1","HTTP","139","GET / HTTP/1.1 " grep GET wireshark.csv | tail "62145","9.994184","::1","::1","HTTP","139","GET / HTTP/1.1 " "62149","9.995102","::1","::1","HTTP","139","GET / HTTP/1.1 " "62153","9.995860","::1","::1","HTTP","139","GET / HTTP/1.1 " "62157","9.996616","::1","::1","HTTP","139","GET / HTTP/1.1 " "62161","9.997307","::1","::1","HTTP","139","GET / HTTP/1.1 " A: This works, getting around 250+ requests a second. This solution does work on Windows 10. You may have to pip install for concurrent and requests. import time import requests import concurrent.futures start = int(time.time()) # get time before the requests are sent urls = [] # input URLs/IPs array responses = [] # output content of each request as string in an array # create an list of 5000 sites to test with for y in range(5000):urls.append("https://example.com") def send(url):responses.append(requests.get(url).content) with concurrent.futures.ThreadPoolExecutor(max_workers=10000) as executor: futures = [] for url in urls:futures.append(executor.submit(send, url)) end = int(time.time()) # get time after stuff finishes print(str(round(len(urls)/(end - start),0))+"/sec") # get average requests per second Output: 286.0/sec Note: If your code requires something extremely time dependent, replace the middle part with this: with concurrent.futures.ThreadPoolExecutor(max_workers=10000) as executor: futures = [] for url in urls: futures.append(executor.submit(send, url)) for future in concurrent.futures.as_completed(futures): responses.append(future.result()) This is a modified version of what this site showed in an example. The secret sauce is the max_workers=10000. Otherwise, it would average about 80/sec. Although, when setting it to beyond 1000, there wasn't any boost in speed.
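A sketch spelling out the custom-connector idea from the first answer with an explicit concurrency bound, matching the question's shape of "list of URLs in, list of HTML strings out". The 200-connection limit and the example URL list are illustrative values, not taken from the question; it assumes aiohttp is installed and Python 3.7+.

import asyncio
import aiohttp

async def fetch_all(urls, max_concurrency=200):
    semaphore = asyncio.Semaphore(max_concurrency)        # cap in-flight requests
    connector = aiohttp.TCPConnector(limit=max_concurrency)

    async def fetch(session, url):
        async with semaphore:
            async with session.get(url) as response:
                return await response.text()

    async with aiohttp.ClientSession(connector=connector) as session:
        # gather() preserves order, so the i-th result is the HTML for urls[i]
        return await asyncio.gather(*(fetch(session, url) for url in urls))

urls = ["https://example.com"] * 1000
html_pages = asyncio.run(fetch_all(urls))
print(len(html_pages))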
How do I get Python to send as many concurrent HTTP requests as possible?
I'm trying to send HTTPS requests as quickly as possible. I know this would have to be concurrent requests due to my goal being 150 to 500+ requests a second. I've searched everywhere, but found no Python 3.11+ answer, or one that doesn't give me errors. I'm trying to avoid AIOHTTP, as the rigmarole of setting it up was a pain and it didn't even work. The input should be an array of URLs and the output an array of the HTML strings.
[ "It's quite unfortunate that you couldn't setup AIOHTTP properly because this is one of the most efficient way to do asynchronous requests in Python.\nSetup is not that hard:\nimport asyncio\nimport aiohttp\nfrom time import perf_counter\n\n\ndef urls(n_reqs: int):\n for _ in range(n_reqs):\n yield \"https://python.org\"\n\nasync def get(session: aiohttp.ClientSession, url: str):\n async with session.get(url) as response:\n _ = await response.text()\n \nasync def main(n_reqs: int):\n async with aiohttp.ClientSession() as session:\n await asyncio.gather(\n *[get(session, url) for url in urls(n_reqs)]\n )\n\n\nif __name__ == \"__main__\":\n n_reqs = 10_000\n \n start = perf_counter()\n asyncio.run(main(n_reqs))\n end = perf_counter()\n \n print(f\"{n_reqs / (end - start)} req/s\")\n\nYou basically need to create a single ClientSession which you then reuse to send the get requests. The requests are made concurrently with to asyncio.gather(). You could also use the newer asyncio.TaskGroup:\nasync def main(n_reqs: int):\n async with aiohttp.ClientSession() as session:\n async with asyncio.TaskGroup() as group:\n for url in urls(n_reqs):\n group.create_task(get(session, url))\n\nThis easily achieves 500+ requests per seconds on my 7+ years old bi-core computer. Contrary to what other answers suggested, this solution does not require to spawn thousands of threads, which are expensive.\nYou may improve the speed even more my using a custom connector in order to allow more concurrent connections (default is 100) in a single session:\nasync def main(n_reqs: int):\n let connector = aiohttp.TCPConnector(limit=0)\n async with aiohttp.ClientSession(connector=connector) as session:\n ...\n\n\n", "Hope this helps, this question asked What is the fastest way to send 10000 http requests\nI observed 15000 requests in 10s, using wireshark to trap on localhost and saved packets to CSV, only counted packets that had GET in them.\nFILE: a.py\nfrom treq import get\nfrom twisted.internet import reactor\n\ndef done(response):\n if response.code == 200:\n get(\"http://localhost:3000\").addCallback(done)\n\nget(\"http://localhost:3000\").addCallback(done)\n\nreactor.callLater(10, reactor.stop)\nreactor.run()\n\nRun test like this:\npip3 install treq\npython3 a.py # code from above\n\nSetup test website like this, mine was on port 3000\nmkdir myapp\ncd myapp\nnpm init\nnpm install express\nnode app.js\n\nFILE: app.js\nconst express = require('express')\nconst app = express()\nconst port = 3000\n\napp.get('/', (req, res) => {\n res.send('Hello World!')\n})\n\napp.listen(port, () => {\n console.log(`Example app listening on port ${port}`)\n})\n\nOUTPUT\ngrep GET wireshark.csv | head\n\"5\",\"0.000418\",\"::1\",\"::1\",\"HTTP\",\"139\",\"GET / HTTP/1.1 \"\n\"13\",\"0.002334\",\"::1\",\"::1\",\"HTTP\",\"139\",\"GET / HTTP/1.1 \"\n\"17\",\"0.003236\",\"::1\",\"::1\",\"HTTP\",\"139\",\"GET / HTTP/1.1 \"\n\"21\",\"0.004018\",\"::1\",\"::1\",\"HTTP\",\"139\",\"GET / HTTP/1.1 \"\n\"25\",\"0.004803\",\"::1\",\"::1\",\"HTTP\",\"139\",\"GET / HTTP/1.1 \"\n\ngrep GET wireshark.csv | tail\n\"62145\",\"9.994184\",\"::1\",\"::1\",\"HTTP\",\"139\",\"GET / HTTP/1.1 \"\n\"62149\",\"9.995102\",\"::1\",\"::1\",\"HTTP\",\"139\",\"GET / HTTP/1.1 \"\n\"62153\",\"9.995860\",\"::1\",\"::1\",\"HTTP\",\"139\",\"GET / HTTP/1.1 \"\n\"62157\",\"9.996616\",\"::1\",\"::1\",\"HTTP\",\"139\",\"GET / HTTP/1.1 \"\n\"62161\",\"9.997307\",\"::1\",\"::1\",\"HTTP\",\"139\",\"GET / HTTP/1.1 \"\n\n\n", "This works, getting around 250+ requests a 
second.\nThis solution does work on Windows 10. You may have to pip install for concurrent and requests.\nimport time\nimport requests\nimport concurrent.futures\n\nstart = int(time.time()) # get time before the requests are sent\n\nurls = [] # input URLs/IPs array\nresponses = [] # output content of each request as string in an array\n\n# create an list of 5000 sites to test with\nfor y in range(5000):urls.append(\"https://example.com\")\n\ndef send(url):responses.append(requests.get(url).content)\n\nwith concurrent.futures.ThreadPoolExecutor(max_workers=10000) as executor:\n futures = []\n for url in urls:futures.append(executor.submit(send, url))\n \nend = int(time.time()) # get time after stuff finishes\nprint(str(round(len(urls)/(end - start),0))+\"/sec\") # get average requests per second\n\nOutput:\n286.0/sec\nNote: If your code requires something extremely time dependent, replace the middle part with this:\nwith concurrent.futures.ThreadPoolExecutor(max_workers=10000) as executor:\n futures = []\n for url in urls:\n futures.append(executor.submit(send, url))\n for future in concurrent.futures.as_completed(futures):\n responses.append(future.result())\n\nThis is a modified version of what this site showed in an example.\nThe secret sauce is the max_workers=10000. Otherwise, it would average about 80/sec. Although, when setting it to beyond 1000, there wasn't any boost in speed.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "concurrency", "http", "https", "python", "python_3.x" ]
stackoverflow_0074567219_concurrency_http_https_python_python_3.x.txt
Q: What is the best way to pass an extra argument to a scipy.LowLevelCallable function? I have a python script that creates a set of ctype input arguments to pass to scipy.LowLevelCallable(see the notes section) and uses it to make a call to scipy.generic_filter that only executes a single iteration for testing purposes. I also define an extra argument and pass it to the user_data void pointer as following: from scipy import LowLevelCallable, ndimage import numpy as np import ctypes clib = ctypes.cdll.LoadLibrary('path_to_my_file/my_filter.so') clib.max_filter.restype = ctypes.c_int clib.max_filter.argtypes = ( ctypes.POINTER(ctypes.c_double), ctypes.c_long, ctypes.POINTER(ctypes.c_double), ctypes.c_void_p) my_user_data = ctypes.c_double(12345) ptr = ctypes.cast(ctypes.pointer(my_user_data), ctypes.c_void_p) max_filter_llc = LowLevelCallable(clib.max_filter,ptr) #this part only executes the LowLevelCallable function once and has no special meaning image = np.random.random((1, 1)) footprint = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=bool) mask = ndimage.generic_filter(image, max_filter_llc, footprint=footprint) path_to_my_file/my_filter.so corresponds to the scipy.LowLevelCallable function argument structure and simply prints the user_data variable: #include <math.h> #include <stdint.h> #include <stdio.h> int my_filter( double * buffer, intptr_t filter_size, double * return_value, void * user_data ) { double x; x = *(double *)(user_data); printf("my user_data input is: %ld", x); return 1; } This prints out my user_data input is: 0, even though I defined my_user_data as 12345 in my python script. How can I change my scripts so I can access the extra argument in my c program? A: @DavidRanieri's comment resolved my problem
What is the best way to pass an extra argument to a scipy.LowLevelCallable function?
I have a python script that creates a set of ctype input arguments to pass to scipy.LowLevelCallable(see the notes section) and uses it to make a call to scipy.generic_filter that only executes a single iteration for testing purposes. I also define an extra argument and pass it to the user_data void pointer as following: from scipy import LowLevelCallable, ndimage import numpy as np import ctypes clib = ctypes.cdll.LoadLibrary('path_to_my_file/my_filter.so') clib.max_filter.restype = ctypes.c_int clib.max_filter.argtypes = ( ctypes.POINTER(ctypes.c_double), ctypes.c_long, ctypes.POINTER(ctypes.c_double), ctypes.c_void_p) my_user_data = ctypes.c_double(12345) ptr = ctypes.cast(ctypes.pointer(my_user_data), ctypes.c_void_p) max_filter_llc = LowLevelCallable(clib.max_filter,ptr) #this part only executes the LowLevelCallable function once and has no special meaning image = np.random.random((1, 1)) footprint = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=bool) mask = ndimage.generic_filter(image, max_filter_llc, footprint=footprint) path_to_my_file/my_filter.so corresponds to the scipy.LowLevelCallable function argument structure and simply prints the user_data variable: #include <math.h> #include <stdint.h> #include <stdio.h> int my_filter( double * buffer, intptr_t filter_size, double * return_value, void * user_data ) { double x; x = *(double *)(user_data); printf("my user_data input is: %ld", x); return 1; } This prints out my user_data input is: 0, even though I defined my_user_data as 12345 in my python script. How can I change my scripts so I can access the extra argument in my c program?
[ "@DavidRanieri's comment resolved my problem\n" ]
[ 0 ]
[]
[]
[ "c", "ctypes", "python", "scipy", "scipy.ndimage" ]
stackoverflow_0074658716_c_ctypes_python_scipy_scipy.ndimage.txt
Q: Binary Search Using a Recursive Function taking an intro CS class on python and was met by this lab on my textbook. It calls for binary search using recursive functions. I have the rest of the program, I simply need to define the Binary Search function. Any help on this would be greatly appreciated. Here is the problem: Binary search can be implemented as a recursive algorithm. Each call makes a recursive call on one-half of the list the call received as an argument. Complete the recursive function binary_search() with the following specifications: Parameters: a list of integers a target integer lower and upper bounds within which the recursive call will search Return value: if found, the index within the list where the target is located -1 if target is not found The algorithm begins by choosing an index midway between the lower and upper bounds. If target == nums[index] return index If lower == upper, return lower if target == nums[lower] else -1 to indicate not found Otherwise call the function recursively with half the list as an argument: If nums[index] < target, search the list from index to upper If nums[index] > target, search the list from lower to index The list must be ordered, but duplicates are allowed. Once the search algorithm works correctly, add the following to binary_search(): Count the number of calls to binary_search(). Count the number of times when the target is compared to an element of the list. Note: lower == upper should not be counted. Hint: Use a global variable to count calls and comparisons. The input of the program consists of integers on one line followed by a target integer on the second. The template provides the main program and a helper function that reads a list from input. Ex: If the input is: 1 2 3 4 5 6 7 8 9 2 the output is: index: 1, recursions: 2, comparisons: 3 Here is my code: # TODO: Declare global variables here. recursions = 0 comparisons = 0 def binary_search(nums, target, lower, upper): global recursions global comparisons if target == nums[(lower+upper)/2]: if lower == upper: if target == nums[lower]: return lower else: target == -1 elif nums[(lower+upper)/2] < target: recursions =+1 comparisons =+1 binary_search(upper) elif nums[(lower+upper)/2] > target: recursions =+1 comparisons =+1 binary_search(lower) if __name__ == '__main__': # Input a list of nums from the first line of input nums = [int(n) for n in input().split()] # Input a target value target = int(input()) # Start off with default values: full range of list indices index = binary_search(nums, target, 0, len(nums) - 1) # Output the index where target was found in nums, and the # number of recursions and comparisons performed print(f'index: {index}, recursions: {recursions}, comparisons: {comparisons}') Error output: Traceback (most recent call last): File "main.py", line 34, in <module> index = binary_search(nums, target, 0, len(nums) - 1) File "main.py", line 8, in binary_search if target == nums[(lower+upper)/2]: TypeError: list indices must be integers or slices, not float A: Your error means that lower + upper is an odd number, which when you divide by 2 results in something like 3.5, 8.5, etc., which is an invalid index for a list. To solve this, use floored division (rounding down) with the double slash // operator if target == nums[(lower+upper)//2]: A: Once you've fixed the integer division you'll have a problem when you try to make the recursive call because you're not providing enough parameters. 
You may find this helpful: recursions = 0 comparisons = 0 def binary_search(lst, t): def _binary_search(lst, lo, hi, t): global recursions, comparisons recursions += 1 if hi >= lo: mid = (hi + lo) // 2 comparisons += 1 if lst[mid] == t: return mid comparisons += 1 if lst[mid] > t: return _binary_search(lst, lo, mid - 1, t) else: return _binary_search(lst, mid + 1, hi, t) else: return -1 return _binary_search(lst, 0, len(lst)-1, t) index = binary_search([1, 2, 3, 4, 5, 6, 7, 8, 9], 2) print(f'{index=} {recursions=} {comparisons=}') Output: index=1 recursions=2 comparisons=3
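For completeness, here is one way to fill in the template's four-argument signature from the assignment (binary_search(nums, target, lower, upper) with global counters). The counting convention shown is the one that reproduces the sample output above; treat it as a sketch rather than the only acceptable answer.

recursions = 0
comparisons = 0

def binary_search(nums, target, lower, upper):
    global recursions, comparisons
    recursions += 1
    if lower > upper:                      # empty range: target not present
        return -1
    index = (lower + upper) // 2           # midway between the bounds (integer division)
    comparisons += 1
    if target == nums[index]:
        return index
    if lower == upper:                     # single element left and it did not match
        return -1
    comparisons += 1
    if nums[index] < target:
        return binary_search(nums, target, index + 1, upper)
    return binary_search(nums, target, lower, index - 1)

nums = [1, 2, 3, 4, 5, 6, 7, 8, 9]
index = binary_search(nums, 2, 0, len(nums) - 1)
print(f'index: {index}, recursions: {recursions}, comparisons: {comparisons}')
# prints: index: 1, recursions: 2, comparisons: 3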
Binary Search Using a Recursive Function
taking an intro CS class on python and was met by this lab on my textbook. It calls for binary search using recursive functions. I have the rest of the program, I simply need to define the Binary Search function. Any help on this would be greatly appreciated. Here is the problem: Binary search can be implemented as a recursive algorithm. Each call makes a recursive call on one-half of the list the call received as an argument. Complete the recursive function binary_search() with the following specifications: Parameters: a list of integers a target integer lower and upper bounds within which the recursive call will search Return value: if found, the index within the list where the target is located -1 if target is not found The algorithm begins by choosing an index midway between the lower and upper bounds. If target == nums[index] return index If lower == upper, return lower if target == nums[lower] else -1 to indicate not found Otherwise call the function recursively with half the list as an argument: If nums[index] < target, search the list from index to upper If nums[index] > target, search the list from lower to index The list must be ordered, but duplicates are allowed. Once the search algorithm works correctly, add the following to binary_search(): Count the number of calls to binary_search(). Count the number of times when the target is compared to an element of the list. Note: lower == upper should not be counted. Hint: Use a global variable to count calls and comparisons. The input of the program consists of integers on one line followed by a target integer on the second. The template provides the main program and a helper function that reads a list from input. Ex: If the input is: 1 2 3 4 5 6 7 8 9 2 the output is: index: 1, recursions: 2, comparisons: 3 Here is my code: # TODO: Declare global variables here. recursions = 0 comparisons = 0 def binary_search(nums, target, lower, upper): global recursions global comparisons if target == nums[(lower+upper)/2]: if lower == upper: if target == nums[lower]: return lower else: target == -1 elif nums[(lower+upper)/2] < target: recursions =+1 comparisons =+1 binary_search(upper) elif nums[(lower+upper)/2] > target: recursions =+1 comparisons =+1 binary_search(lower) if __name__ == '__main__': # Input a list of nums from the first line of input nums = [int(n) for n in input().split()] # Input a target value target = int(input()) # Start off with default values: full range of list indices index = binary_search(nums, target, 0, len(nums) - 1) # Output the index where target was found in nums, and the # number of recursions and comparisons performed print(f'index: {index}, recursions: {recursions}, comparisons: {comparisons}') Error output: Traceback (most recent call last): File "main.py", line 34, in <module> index = binary_search(nums, target, 0, len(nums) - 1) File "main.py", line 8, in binary_search if target == nums[(lower+upper)/2]: TypeError: list indices must be integers or slices, not float
[ "Your error means that lower + upper is an odd number, which when you divide by 2 results in something like 3.5, 8.5, etc., which is an invalid index for a list.\nTo solve this, use floored division (rounding down) with the double slash // operator\nif target == nums[(lower+upper)//2]:\n\n", "Once you've fixed the integer division you'll have a problem when you try to make the recursive call because you're not providing enough parameters.\nYou may find this helpful:\nrecursions = 0\ncomparisons = 0\n\ndef binary_search(lst, t):\n def _binary_search(lst, lo, hi, t):\n global recursions, comparisons\n recursions += 1\n if hi >= lo:\n mid = (hi + lo) // 2\n comparisons += 1\n if lst[mid] == t:\n return mid\n\n comparisons += 1\n\n if lst[mid] > t:\n return _binary_search(lst, lo, mid - 1, t)\n else:\n return _binary_search(lst, mid + 1, hi, t)\n else:\n return -1\n return _binary_search(lst, 0, len(lst)-1, t)\n \nindex = binary_search([1, 2, 3, 4, 5, 6, 7, 8, 9], 2)\n\nprint(f'{index=} {recursions=} {comparisons=}')\n\nOutput:\nindex=1 recursions=2 comparisons=3\n\n" ]
[ 1, 0 ]
[]
[]
[ "binary_search", "python", "recursion" ]
stackoverflow_0074658593_binary_search_python_recursion.txt
Q: Unhandled Exception due to failure importing database file. How to fix? I have hosted a Flask website on pythonanywhere, but I keep getting the "Unhandled Exception" error when visiting the website. I checked the error logs, and the problem is with a database file, named finance.db. The exact text from the error logs are below: 2022-04-26 07:23:21,225: Error running WSGI application 2022-04-26 07:23:21,239: RuntimeError: does not exist: finance.db 2022-04-26 07:23:21,240: File "/var/www/routsiddharth_pythonanywhere_com_wsgi.py", line 16, in <module> 2022-04-26 07:23:21,240: from app import app as application # noqa 2022-04-26 07:23:21,240: 2022-04-26 07:23:21,240: File "/home/routsiddharth/mysite/app.py", line 39, in <module> 2022-04-26 07:23:21,240: 2022-04-26 07:23:21,240: File "/home/routsiddharth/.local/lib/python3.9/site-packages/cs50/sql.py", line 60, in __init__ 2022-04-26 07:23:21,240: raise RuntimeError("does not exist: {}".format(matches.group(1))) 2022-04-26 07:23:21,241: *************************************************** 2022-04-26 07:23:21,241: If you're seeing an import error and don't know why, 2022-04-26 07:23:21,241: we have a dedicated help page to help you debug: 2022-04-26 07:23:21,241: https://help.pythonanywhere.com/pages/DebuggingImportError/ 2022-04-26 07:23:21,241: *************************************************** Here is how I imported the file: from cs50 import SQL db = SQL("sqlite:///finance.db") The finance.db file is in the same directory as the app.py file. How do I fix this error? A: You need to reference the database with the correct path: https://help.pythonanywhere.com/pages/NoSuchFileOrDirectory/ A: You should give the absolute path with one extra '/' db = SQL("sqlite:////home/routsiddharth/mysite/finance.db")
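A small sketch of the path fix both answers point at, building the absolute path from the location of app.py so it no longer depends on the working directory of the WSGI process. It assumes finance.db really sits next to app.py, as stated in the question; "sqlite:///" plus an absolute path gives the four-slash form shown in the second answer.

import os
from cs50 import SQL

# Build an absolute path next to app.py so it works no matter where the process starts.
db_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "finance.db")
db = SQL("sqlite:///" + db_path)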
Unhandled Exception due to failure importing database file. How to fix?
I have hosted a Flask website on pythonanywhere, but I keep getting the "Unhandled Exception" error when visiting the website. I checked the error logs, and the problem is with a database file, named finance.db. The exact text from the error logs are below: 2022-04-26 07:23:21,225: Error running WSGI application 2022-04-26 07:23:21,239: RuntimeError: does not exist: finance.db 2022-04-26 07:23:21,240: File "/var/www/routsiddharth_pythonanywhere_com_wsgi.py", line 16, in <module> 2022-04-26 07:23:21,240: from app import app as application # noqa 2022-04-26 07:23:21,240: 2022-04-26 07:23:21,240: File "/home/routsiddharth/mysite/app.py", line 39, in <module> 2022-04-26 07:23:21,240: 2022-04-26 07:23:21,240: File "/home/routsiddharth/.local/lib/python3.9/site-packages/cs50/sql.py", line 60, in __init__ 2022-04-26 07:23:21,240: raise RuntimeError("does not exist: {}".format(matches.group(1))) 2022-04-26 07:23:21,241: *************************************************** 2022-04-26 07:23:21,241: If you're seeing an import error and don't know why, 2022-04-26 07:23:21,241: we have a dedicated help page to help you debug: 2022-04-26 07:23:21,241: https://help.pythonanywhere.com/pages/DebuggingImportError/ 2022-04-26 07:23:21,241: *************************************************** Here is how I imported the file: from cs50 import SQL db = SQL("sqlite:///finance.db") The finance.db file is in the same directory as the app.py file. How do I fix this error?
[ "You need to reference the database with the correct path: https://help.pythonanywhere.com/pages/NoSuchFileOrDirectory/\n", "You should give the absolute path with one extra '/'\ndb = SQL(\"sqlite:////home/routsiddharth/mysite/finance.db\")\n\n" ]
[ 2, 0 ]
[]
[]
[ "flask", "python", "pythonanywhere" ]
stackoverflow_0072011386_flask_python_pythonanywhere.txt
Q: How to open a PDF file by clicking on it in TreeView How do I open a file (ex. PDF) when I click on the row identify by its ID? I'm trying to make the treeview that uses a GUI to better access and open these PDFs, but I can't figure out how to actually open files using anything but a button. Can someone please tell me how to use these to find a filepath and open a pdf? Thanks the idea is basically is to open the pdf local file according to its ID in the treeview from tkinter import E, N, Frame, IntVar, LabelFrame, LEFT, RIGHT, BOTTOM, StringVar, Label, Button, END, Toplevel, Entry, Tk, font, Menu import tkinter as tk from tkinter import ttk, Spinbox class PRODUCTOS(): base_datos = "clientes_productos.db" resultado = 0.00 #valor x defecto self.resultado def __init__(self,root): self.wind = root #ventana completa self.wind.title('Facturacion principal') self.wind.geometry("850x525") #Las divisiones de la ventana, caja 1 arriba, caja 2 abajo caja1 = LabelFrame(self.wind, text="", font=("Calibri",14), padx=2, pady=2)#aleja lo q se encuentra dentro caja2 = LabelFrame(self.wind, text="Facturas", font=("Calibri",12), padx=1, pady=1) caja3 = LabelFrame(self.wind, text="", font=("Calibri",12), padx=2, pady=2) caja1.pack(fill="both", expand=True, padx=20, pady=10, ipady=10, ipadx=5)#pady = aleja a la caja 2, X aleja de la esquina derecha caja2.pack(fill="both", expand=True, padx=20, pady=10, ipady=100, ipadx=5)#ipady alarga el labelframe caja3.pack(fill="both", expand=True, padx=20, pady=10, ipady=30, ipadx=5) #los encabezados del cuadro blanco arriba #los encabezados del cuadro blanco arriba self.cuadro_blanco_facturas = ttk.Treeview(caja2, columns=("1","2","3","4","5","6","7"), show="headings", height=10)#Height largo del Scrollbar self.cuadro_blanco_facturas.pack(side=LEFT)#scrollbar self.cuadro_blanco_facturas.place(x=0, y=0)#scrollbar self.cuadro_blanco_facturas.heading("1", text="Nro_Fact.") self.cuadro_blanco_facturas.heading("2", text="ID-Cliente") self.cuadro_blanco_facturas.heading("3", text="Nombre del Cliente") #tamano de las columnas vertical self.cuadro_blanco_facturas.column("1", width=70)# width= anchura, minwidth = lo minimo de esa anchura self.cuadro_blanco_facturas.column("2", width=70) self.cuadro_blanco_facturas.column("3", width=258) #horizontal self.cuadro_blanco_facturas.column('#0', width=50, minwidth=100)#Yscrollbar1 #self.consulta_facturas() #llamada a la TABLA self.cuadro_blanco_facturas.bind("<Double 1>", self.on_double_click) #self.cuadro_blanco_facturas.bind('<Double-Button-1>', self.on_double_click) #scrollbar VERTICAL lado derecho cuadro blanco yscrollbar = ttk.Scrollbar(caja2, orient="vertical", command=self.cuadro_blanco_facturas.yview) yscrollbar.pack(side=RIGHT,fill="y") #scrollbar HORIZONTAL lado derecho cuadro blanco xscrollbar = ttk.Scrollbar(caja2, orient="horizontal", command=self.cuadro_blanco_facturas.xview) xscrollbar.pack(side=BOTTOM,fill="x") self.cuadro_blanco_facturas.configure(yscrollcommand=yscrollbar.set, xscrollcommand=xscrollbar.set) def on_double_click(self, event): iid = self.cuadro_blanco_facturas.focus() # get the iid of the selected item tags = self.cuadro_blanco_facturas.item(iid, 'tags') # get tags attached print(iid) if 'pdf' in tags: text = self.cuadro_blanco_facturas.item(iid, 'text') # get the text of selected item print(text) if __name__ == '__main__': root = Tk() product = PRODUCTOS(root) root.mainloop() A: thanks for helping me out its done! 
def treeview_click(self, event):
    iid = self.cuadro_blanco_facturas.focus()  # selected row id
    nombre = self.cuadro_blanco_facturas.item(iid)["values"][2]  # client name column
    espacio = " "
    os.startfile(f"facturas\\{iid}{espacio}{nombre}.pdf")  # requires "import os" at the top of the script
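Since os.startfile only exists on Windows, a cross-platform variant of the same idea may be useful. The "facturas" folder and the "<id> <name>.pdf" naming come from the snippet above; the helper name and the non-Windows branches are assumptions.

import os
import subprocess
import sys

def abrir_factura_pdf(iid, nombre, carpeta="facturas"):
    ruta = os.path.join(carpeta, f"{iid} {nombre}.pdf")
    if sys.platform.startswith("win"):
        os.startfile(ruta)                    # default PDF viewer on Windows
    elif sys.platform == "darwin":
        subprocess.Popen(["open", ruta])      # macOS
    else:
        subprocess.Popen(["xdg-open", ruta])  # Linux desktops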
How to open a PDF file by clicking on it in TreeView
How do I open a file (ex. PDF) when I click on the row identify by its ID? I'm trying to make the treeview that uses a GUI to better access and open these PDFs, but I can't figure out how to actually open files using anything but a button. Can someone please tell me how to use these to find a filepath and open a pdf? Thanks the idea is basically is to open the pdf local file according to its ID in the treeview from tkinter import E, N, Frame, IntVar, LabelFrame, LEFT, RIGHT, BOTTOM, StringVar, Label, Button, END, Toplevel, Entry, Tk, font, Menu import tkinter as tk from tkinter import ttk, Spinbox class PRODUCTOS(): base_datos = "clientes_productos.db" resultado = 0.00 #valor x defecto self.resultado def __init__(self,root): self.wind = root #ventana completa self.wind.title('Facturacion principal') self.wind.geometry("850x525") #Las divisiones de la ventana, caja 1 arriba, caja 2 abajo caja1 = LabelFrame(self.wind, text="", font=("Calibri",14), padx=2, pady=2)#aleja lo q se encuentra dentro caja2 = LabelFrame(self.wind, text="Facturas", font=("Calibri",12), padx=1, pady=1) caja3 = LabelFrame(self.wind, text="", font=("Calibri",12), padx=2, pady=2) caja1.pack(fill="both", expand=True, padx=20, pady=10, ipady=10, ipadx=5)#pady = aleja a la caja 2, X aleja de la esquina derecha caja2.pack(fill="both", expand=True, padx=20, pady=10, ipady=100, ipadx=5)#ipady alarga el labelframe caja3.pack(fill="both", expand=True, padx=20, pady=10, ipady=30, ipadx=5) #los encabezados del cuadro blanco arriba #los encabezados del cuadro blanco arriba self.cuadro_blanco_facturas = ttk.Treeview(caja2, columns=("1","2","3","4","5","6","7"), show="headings", height=10)#Height largo del Scrollbar self.cuadro_blanco_facturas.pack(side=LEFT)#scrollbar self.cuadro_blanco_facturas.place(x=0, y=0)#scrollbar self.cuadro_blanco_facturas.heading("1", text="Nro_Fact.") self.cuadro_blanco_facturas.heading("2", text="ID-Cliente") self.cuadro_blanco_facturas.heading("3", text="Nombre del Cliente") #tamano de las columnas vertical self.cuadro_blanco_facturas.column("1", width=70)# width= anchura, minwidth = lo minimo de esa anchura self.cuadro_blanco_facturas.column("2", width=70) self.cuadro_blanco_facturas.column("3", width=258) #horizontal self.cuadro_blanco_facturas.column('#0', width=50, minwidth=100)#Yscrollbar1 #self.consulta_facturas() #llamada a la TABLA self.cuadro_blanco_facturas.bind("<Double 1>", self.on_double_click) #self.cuadro_blanco_facturas.bind('<Double-Button-1>', self.on_double_click) #scrollbar VERTICAL lado derecho cuadro blanco yscrollbar = ttk.Scrollbar(caja2, orient="vertical", command=self.cuadro_blanco_facturas.yview) yscrollbar.pack(side=RIGHT,fill="y") #scrollbar HORIZONTAL lado derecho cuadro blanco xscrollbar = ttk.Scrollbar(caja2, orient="horizontal", command=self.cuadro_blanco_facturas.xview) xscrollbar.pack(side=BOTTOM,fill="x") self.cuadro_blanco_facturas.configure(yscrollcommand=yscrollbar.set, xscrollcommand=xscrollbar.set) def on_double_click(self, event): iid = self.cuadro_blanco_facturas.focus() # get the iid of the selected item tags = self.cuadro_blanco_facturas.item(iid, 'tags') # get tags attached print(iid) if 'pdf' in tags: text = self.cuadro_blanco_facturas.item(iid, 'text') # get the text of selected item print(text) if __name__ == '__main__': root = Tk() product = PRODUCTOS(root) root.mainloop()
[ "thanks for helping me out\nits done!\ndef treeview_click(self, event):\n iid = self.cuadro_blanco_facturas.focus() # id \n name = self.cuadro_blanco_facturas.item(iid)[\"values\"][2] #column name\n espacio = \" \"\n file = os.startfile(f\"facturas\\\\{iid}{espacio}{nombre}.pdf\")\n\n" ]
[ 0 ]
[]
[]
[ "python", "tkinter", "treeview" ]
stackoverflow_0074647514_python_tkinter_treeview.txt
Q: What could be the problem in To-do app using Streamlit in Python? to-dos.py import streamlit as st import get_todos todos = get_todos.getTodos() def add_todos(): todo1 = st.session_state["new_todo"] + "\n" todos.append(todo1) get_todos.writeTodos(todos) st.title("My TO-DO App") ... get_todos.py def getTodos(): with open("docs.txt", "r") as file: data = file.readlines() return data def writeTodos(adder): with open("docs.txt", "w") as file: file.writelines(adder) I built a TO-DO App in Python using streamlit While performing this task in terminal, it's continuously showing 'FileNotFoundError' meanwhile the file actually exist. What could be the problem ? Any syntax error? or Logical Error? Error Traceback: My project structure is shown below: A: The main purpose of virtual environments or venv is to manage settings and dependencies of a particular project regardless of other Python projects. virtualenv tool comes bundled with PyCharm, so the user doesn't need to install it. It is always found in the project directory named venv which should be a unique folder design to fulfil a specific purpose. Note: No external file(s) should be added to the venv folder. This clearly indicates that your structure is not appropriate. I will recommend you visit pycharm project structure to read more about configuration of virtual environments. You should restructure your project properly. It might feel like a pain on the neck but I bet it worth it. Attention: All the external files you added to venv should rather be in your samik folder which is your project main folder.
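Besides fixing the project structure, a defensive get_todos.py can resolve docs.txt relative to the module file, so the app works no matter which directory streamlit run is started from. This is a sketch; the create-if-missing behaviour is an addition, not something the question asks for.

# get_todos.py — resolve docs.txt next to this file instead of the current working directory
from pathlib import Path

DOCS = Path(__file__).resolve().parent / "docs.txt"

def getTodos():
    if not DOCS.exists():
        DOCS.touch()            # start with an empty to-do list on first run
    with DOCS.open("r") as file:
        return file.readlines()

def writeTodos(adder):
    with DOCS.open("w") as file:
        file.writelines(adder)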
What could be the problem in To-do app using Streamlit in Python?
to-dos.py
import streamlit as st
import get_todos

todos = get_todos.getTodos()


def add_todos():
    todo1 = st.session_state["new_todo"] + "\n"
    todos.append(todo1)
    get_todos.writeTodos(todos)


st.title("My TO-DO App")
...

get_todos.py
def getTodos():
    with open("docs.txt", "r") as file:
        data = file.readlines()
    return data


def writeTodos(adder):
    with open("docs.txt", "w") as file:
        file.writelines(adder)

I built a to-do app in Python using Streamlit. When I run it from the terminal, it keeps raising FileNotFoundError even though the file actually exists. What could be the problem? Is it a syntax error or a logical error?
Error Traceback:
My project structure is shown below:
[ "The main purpose of virtual environments or venv is to manage settings and dependencies of a particular project regardless of other Python projects. virtualenv tool comes bundled with PyCharm, so the user doesn't need to install it. It is always found in the project directory named venv which should be a unique folder design to fulfil a specific purpose.\nNote: No external file(s) should be added to the venv folder.\nThis clearly indicates that your structure is not appropriate. I will recommend you visit pycharm project structure to read more about configuration of virtual environments. You should restructure your project properly. It might feel like a pain on the neck but I bet it worth it.\nAttention:\nAll the external files you added to venv should rather be in your samik folder which is your project main folder.\n" ]
[ 1 ]
[]
[]
[ "contextmanager", "filenotfounderror", "pycharm", "python", "streamlit" ]
stackoverflow_0074652347_contextmanager_filenotfounderror_pycharm_python_streamlit.txt
Q: Automating Facebook using Selenium Webdriver driver = webdriver.Chrome('chromedriver') driver.get('https://www.facebook.com/') print("opened facebook") I am using this code to open Facebook and the page opens. driver.find_element(By.NAME, "email").send_keys("xxx") sleep(1) driver.find_element(By.NAME, "pass").send_keys("xxx") sleep(1) driver.find_element(By.NAME, "login").click() sleep(1) Then log in to my account. After successful login, my chrome window closes in a few seconds. Can someone tell me why? Full Code: import time import os import wget import shutil from time import sleep from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.chrome.options import Options from webdriver_manager.chrome import ChromeDriverManager try: usr="" pwd="" driver = webdriver.Chrome('chromedriver') driver.get('https://www.facebook.com/') print ("Opened facebook") driver.find_element(By.NAME, "email").send_keys(usr) print ("Email Id entered") sleep(1) driver.find_element(By.NAME, "pass").send_keys(pwd) print ("Password entered") driver.find_element(By.NAME,"login").click() sleep(100) except Exception as e: print("The error raised is: ", e) A: The program will exit after executing code. Add below statements to keep program running: time.sleep(300) #300 seconds i.e. 5 minutes # close the browser window driver.quit() A: This will fix your problem from selenium.webdriver.chrome.options import Options # Stop Selenium from closing browser automatically chrome_options = Options() chrome_options.add_experimental_option("detach", True) # Chrome driver to run chrome driver = webdriver.Chrome(options=chrome_options)
Automating Facebook using Selenium Webdriver
driver = webdriver.Chrome('chromedriver') driver.get('https://www.facebook.com/') print("opened facebook") I am using this code to open Facebook and the page opens. driver.find_element(By.NAME, "email").send_keys("xxx") sleep(1) driver.find_element(By.NAME, "pass").send_keys("xxx") sleep(1) driver.find_element(By.NAME, "login").click() sleep(1) Then log in to my account. After successful login, my chrome window closes in a few seconds. Can someone tell me why? Full Code: import time import os import wget import shutil from time import sleep from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.chrome.options import Options from webdriver_manager.chrome import ChromeDriverManager try: usr="" pwd="" driver = webdriver.Chrome('chromedriver') driver.get('https://www.facebook.com/') print ("Opened facebook") driver.find_element(By.NAME, "email").send_keys(usr) print ("Email Id entered") sleep(1) driver.find_element(By.NAME, "pass").send_keys(pwd) print ("Password entered") driver.find_element(By.NAME,"login").click() sleep(100) except Exception as e: print("The error raised is: ", e)
[ "The program will exit after executing code. Add below statements to keep program running:\ntime.sleep(300) #300 seconds i.e. 5 minutes\n\n# close the browser window\ndriver.quit()\n\n", "This will fix your problem\nfrom selenium.webdriver.chrome.options import Options\n\n# Stop Selenium from closing browser automatically\nchrome_options = Options()\nchrome_options.add_experimental_option(\"detach\", True)\n\n# Chrome driver to run chrome\ndriver = webdriver.Chrome(options=chrome_options)\n\n" ]
[ 1, 0 ]
[]
[]
[ "python", "selenium", "selenium_webdriver" ]
stackoverflow_0073245674_python_selenium_selenium_webdriver.txt
Q: Camelot - detecting hyperlinks within table I am using Camelot to extract tables from PDF files. While this works very well, it extracts the text only, it does not extract the hyperlinks that are embedded in the tables. Is there a way of using Camelot or a similar package to extract table text and hyperlinks embedded within tables? Thanks! A: most applications such as tablular text extractors simply scrape the visible surface as plain text and actually hyperlinks are often stored elsewhere in the pdf which is NOT a WTSIWYG word processor file. So, if you're lucky you can extract the co-ordinates (without their page allocation like this) C:\Users\lz02\Downloads>type "7 - 20 November 2022 (003).pdf" |findstr /i "(http" <</Subtype/Link/Rect[ 69.75 299.75 280.63 313.18] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(http://www.bbc.co.uk/complaints/complaint/) >>/StructParent 5>> <</Subtype/Link/Rect[ 219.37 120.85 402.47 133.06] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(http://www.bbc.co.uk/complaints/handle-complaint/) >>/StructParent 1>> <</Subtype/Link/Rect[ 146.23 108.64 329.33 120.85] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(http://www.bbc.co.uk/complaints/handle-complaint/) >>/StructParent 2>> <</Subtype/Link/Rect[ 412.48 108.64 525.55 120.85] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.ofcom.org.uk/tv-radio-and-on-demand/broadcast-codes/broadcast-code) >>/StructParent 3>> <</Subtype/Link/Rect[ 69.75 96.434 95.085 108.64] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.ofcom.org.uk/tv-radio-and-on-demand/broadcast-codes/broadcast-code) >>/StructParent 4>> <</Subtype/Link/Rect[ 69.75 683.75 317.08 697.18] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(http://www.bbc.co.uk/complaints/comp-reports/ecu/) >>/StructParent 7>> <</Subtype/Link/Rect[ 463.35 604.46 500.24 617.89] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.bbc.co.uk/contact/ecu/reporting-scotland-bbc-one-scotland-20-december-2021) >>/StructParent 8>> <</Subtype/Link/Rect[ 463.35 577.11 500.24 590.54] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.bbc.co.uk/contact/ecu/book-of-the-week-preventable-radio-4-19-april-2022) >>/StructParent 9>> <</Subtype/Link/Rect[ 463.35 522.4 521.41 535.83] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.bbc.co.uk/contact/ecu/the-one-show-bbc-one-6-october-2022) >>/StructParent 10>> <</Subtype/Link/Rect[ 463.35 495.04 518.04 508.47] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.bbc.co.uk/contact/ecu/news-6pm-bbc-one-22-september-2022) >>/StructParent 11>> <</Subtype/Link/Rect[ 463.35 469.04 518.04 482.47] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.bbc.co.uk/contact/ecu/news-1030am-bbc-news-channel-20-september-2022) >>/StructParent 12>> NOTE, the random order, to find which page they belong to you need to traceback to their /StructParent ## A: Yes, it's possible. Camelot, by default, only extracts the text from PDF files, but it also provides options to extract additional information, such as the position and size of text blocks, as well as the coordinates of the lines and curves that define the table cells. With this information, it is possible to identify the table cells that contain hyperlinks, and to extract the text and the hyperlink destination for each of these cells. 
Here is an example of how this can be done using Camelot: import camelot # Load the PDF file pdf = camelot.read_pdf("example.pdf") # Extract the tables, including their coordinates and text blocks tables = pdf.extract(flavor="lattice", tables=None, spreadsheets=None, str_columns_map=None, columns=None, suppress_stdout=False) # Iterate over the tables for table in tables: # Iterate over the rows in the table for row in table.data: # Iterate over the cells in the row for cell in row: # If the cell contains a hyperlink, extract the text and the hyperlink destination if cell.text.startswith("http"): text = cell.text hyperlink = cell.bbox[0] print(text, hyperlink)
Camelot - detecting hyperlinks within table
I am using Camelot to extract tables from PDF files. While this works very well, it extracts only the text; it does not extract the hyperlinks that are embedded in the tables. Is there a way of using Camelot or a similar package to extract both the table text and the hyperlinks embedded within the tables? Thanks!
[ "most applications such as tablular text extractors simply scrape the visible surface as plain text and actually hyperlinks are often stored elsewhere in the pdf which is NOT a WTSIWYG word processor file.\nSo, if you're lucky you can extract the co-ordinates (without their page allocation like this)\nC:\\Users\\lz02\\Downloads>type \"7 - 20 November 2022 (003).pdf\" |findstr /i \"(http\"\n<</Subtype/Link/Rect[ 69.75 299.75 280.63 313.18] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(http://www.bbc.co.uk/complaints/complaint/) >>/StructParent 5>>\n<</Subtype/Link/Rect[ 219.37 120.85 402.47 133.06] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(http://www.bbc.co.uk/complaints/handle-complaint/) >>/StructParent 1>>\n<</Subtype/Link/Rect[ 146.23 108.64 329.33 120.85] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(http://www.bbc.co.uk/complaints/handle-complaint/) >>/StructParent 2>>\n<</Subtype/Link/Rect[ 412.48 108.64 525.55 120.85] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.ofcom.org.uk/tv-radio-and-on-demand/broadcast-codes/broadcast-code) >>/StructParent 3>>\n<</Subtype/Link/Rect[ 69.75 96.434 95.085 108.64] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.ofcom.org.uk/tv-radio-and-on-demand/broadcast-codes/broadcast-code) >>/StructParent 4>>\n<</Subtype/Link/Rect[ 69.75 683.75 317.08 697.18] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(http://www.bbc.co.uk/complaints/comp-reports/ecu/) >>/StructParent 7>>\n<</Subtype/Link/Rect[ 463.35 604.46 500.24 617.89] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.bbc.co.uk/contact/ecu/reporting-scotland-bbc-one-scotland-20-december-2021) >>/StructParent 8>>\n<</Subtype/Link/Rect[ 463.35 577.11 500.24 590.54] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.bbc.co.uk/contact/ecu/book-of-the-week-preventable-radio-4-19-april-2022) >>/StructParent 9>>\n<</Subtype/Link/Rect[ 463.35 522.4 521.41 535.83] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.bbc.co.uk/contact/ecu/the-one-show-bbc-one-6-october-2022) >>/StructParent 10>>\n<</Subtype/Link/Rect[ 463.35 495.04 518.04 508.47] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.bbc.co.uk/contact/ecu/news-6pm-bbc-one-22-september-2022) >>/StructParent 11>>\n<</Subtype/Link/Rect[ 463.35 469.04 518.04 482.47] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.bbc.co.uk/contact/ecu/news-1030am-bbc-news-channel-20-september-2022) >>/StructParent 12>>\n\nNOTE, the random order, to find which page they belong to you need to traceback to their /StructParent ##\n", "Yes, it's possible. Camelot, by default, only extracts the text from PDF files, but it also provides options to extract additional information, such as the position and size of text blocks, as well as the coordinates of the lines and curves that define the table cells. 
With this information, it is possible to identify the table cells that contain hyperlinks, and to extract the text and the hyperlink destination for each of these cells.\nHere is an example of how this can be done using Camelot:\nimport camelot\n\n# Load the PDF file\npdf = camelot.read_pdf(\"example.pdf\")\n\n# Extract the tables, including their coordinates and text blocks\ntables = pdf.extract(flavor=\"lattice\", tables=None, spreadsheets=None,\n str_columns_map=None, columns=None, suppress_stdout=False)\n\n# Iterate over the tables\nfor table in tables:\n # Iterate over the rows in the table\n for row in table.data:\n # Iterate over the cells in the row\n for cell in row:\n # If the cell contains a hyperlink, extract the text and the hyperlink destination\n if cell.text.startswith(\"http\"):\n text = cell.text\n hyperlink = cell.bbox[0]\n print(text, hyperlink)\n\n" ]
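A hedged sketch of doing the same lookup the first answer does by hand, but programmatically: read each page's link annotations with pypdf and keep their URI plus rectangle, which can then be matched against the cell coordinates Camelot reports. The package choice (pypdf) and the file name are assumptions, not something the original answers use:

from pypdf import PdfReader

reader = PdfReader("example.pdf")
for page_number, page in enumerate(reader.pages, start=1):
    for annot_ref in page.get("/Annots") or []:
        annot = annot_ref.get_object()
        if annot.get("/Subtype") == "/Link" and "/A" in annot:
            action = annot["/A"].get_object()
            if "/URI" in action:
                # /Rect is [x0, y0, x1, y1] in PDF points, the same space the table coordinates use
                print(page_number, action["/URI"], annot.get("/Rect"))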
[ 0, 0 ]
[]
[]
[ "pdf", "python", "python_camelot" ]
stackoverflow_0074655135_pdf_python_python_camelot.txt
Q: Keras category predictions always same distribution New to Keras/Machine Learning. I figure I am making a dumb mistake but I don't know what. I have 3 labels. The training data for each sequence of timesteps is labeled as [1, 0, 0] or [0, 1, 0], or [0, 0, 1]. I always get a distribution that looks something like this. You can't tell in the photo, but the numbers aren't the same when you zoom in or look at the actual data results. https://imgur.com/a/o04cS97 The actual results is just color coding that spot based on the category above, so the values are all 1 but the labels are always one of the above. model = Sequential() model.add(LSTM(units=50, return_sequences=False, input_shape=(num_timesteps, num_features)) model.add(Dense(3, activation="softmax")) model.compile(optimizer='adam', loss="categorical_crossentropy", metrics=["accuracy"]) model.fit(x_train, y_train, epochs=100, validation_data=(x_test, y_test)) results = model.predict(x_train) I can change the number of sequences, timesteps, features, epochs, add other lstm layers. The distribution will change but always be like that. I'm expecting based on the data (and based on even just making things random), that the probabilities would be varied and not always discretely layered. I originally did this with just a regular Dense layer and then Dense(3) layer to categorize and I was getting results that went with that expectation. Switching to LSTM due to the type of data and no longer getting expected results but same data A: It sounds like your model is overfitting to your training data. This means that it is performing well on the data it was trained on, but not generalizing well to new data. One common cause of overfitting is using a model that is too complex for the amount of training data you have. In your case, using an LSTM with 50 units may be too complex for your data, especially if you don't have a lot of training examples. To combat overfitting, you can try using regularization techniques such as adding dropout layers to your model. You can also try using a simpler model with fewer parameters, or using more training data. Additionally, it's a good idea to monitor the performance of your model on a validation set during training, to ensure that it is not overfitting. You can do this by passing a validation set to the fit method of your model, and setting the validation_split argument to a value between 0 and 1. This will cause the model to evaluate its performance on the validation set after each epoch of training.
Keras category predictions always same distribution
New to Keras/Machine Learning. I figure I am making a dumb mistake but I don't know what. I have 3 labels. The training data for each sequence of timesteps is labeled as [1, 0, 0] or [0, 1, 0], or [0, 0, 1]. I always get a distribution that looks something like this. You can't tell in the photo, but the numbers aren't the same when you zoom in or look at the actual data results. https://imgur.com/a/o04cS97 The actual results is just color coding that spot based on the category above, so the values are all 1 but the labels are always one of the above. model = Sequential() model.add(LSTM(units=50, return_sequences=False, input_shape=(num_timesteps, num_features)) model.add(Dense(3, activation="softmax")) model.compile(optimizer='adam', loss="categorical_crossentropy", metrics=["accuracy"]) model.fit(x_train, y_train, epochs=100, validation_data=(x_test, y_test)) results = model.predict(x_train) I can change the number of sequences, timesteps, features, epochs, add other lstm layers. The distribution will change but always be like that. I'm expecting based on the data (and based on even just making things random), that the probabilities would be varied and not always discretely layered. I originally did this with just a regular Dense layer and then Dense(3) layer to categorize and I was getting results that went with that expectation. Switching to LSTM due to the type of data and no longer getting expected results but same data
[ "It sounds like your model is overfitting to your training data. This means that it is performing well on the data it was trained on, but not generalizing well to new data.\nOne common cause of overfitting is using a model that is too complex for the amount of training data you have. In your case, using an LSTM with 50 units may be too complex for your data, especially if you don't have a lot of training examples.\nTo combat overfitting, you can try using regularization techniques such as adding dropout layers to your model. You can also try using a simpler model with fewer parameters, or using more training data.\nAdditionally, it's a good idea to monitor the performance of your model on a validation set during training, to ensure that it is not overfitting. You can do this by passing a validation set to the fit method of your model, and setting the validation_split argument to a value between 0 and 1. This will cause the model to evaluate its performance on the validation set after each epoch of training.\n" ]
[ 0 ]
[]
[]
[ "categorical", "categories", "keras", "lstm", "python" ]
stackoverflow_0074658705_categorical_categories_keras_lstm_python.txt
Q: Airflow Task Succeeded But Not All Data Ingested I have an airflow task to extract data with this flow PostgreSQL -> Google Cloud Storage -> BigQuery The problem that I have is, it seems not all the data is ingested into BigQuery. on the PostgreSQL source, the table has 18M+ rows of data, but after ingested it only has 4M+ rows of data. When I check on production, the data return 18M+ rows with this query: SELECT COUNT(1) FROM my_table -- This return 18M+ rows But after the DAG finished running, when I check on BigQuery: SELECT COUNT(1) FROM data_lake.my_table -- This return 4M+ rows To take notes, not all the tables that I ingested returned like this. All of the smaller tables ingested just fine. But when it hits a certain amount of rows it behaves like this. My suspicion is when the data is extracted from PostgreSQL to Google Cloud Storage. So I'll provide my function here: def create_operator_write_append_init(self, worker=10): worker_var = dict() with TaskGroup(group_id=self.task_id_init) as tg1: for i in range(worker): worker_var[f'worker_{i}'] = PostgresToGCSOperator( task_id = f'worker_{i}', postgres_conn_id = self.conn_id, sql = 'extract_init.sql', bucket = self.bucket, filename = f'{self.filename_init}_{i}.{self.export_format}', export_format = self.export_format, # the export format is json gzip = True, params = { 'worker': i } ) return tg1 and here is the SQL file: SELECT id, name, created_at, updated_at, deleted_at FROM my_table WHERE 1=1 AND ABS(MOD(hashtext(id::TEXT), 10)) = {{params.worker}}; What I did is I chunk the data and split it into several workers, hence the TaskGroup. To provide more information. I use Composer: composer-2.0.32-airflow-2.3.4 Large instance Worker 8CPU Worker 32GB Memory Worker 2GB storage Worker between 1-16 What are the possibilities of these happening? A: PostgresToGCSOperator inherits from BaseSQLToGCSOperator(https://airflow.apache.org/docs/apache-airflow-providers-google/stable/_api/airflow/providers/google/cloud/transfers/sql_to_gcs/index.html) According to source code, approx_max_file_size_bytes=1900000000. So if you split your table into 10 parts (or workers lets say) the maximum size of this chunk should be maximum 1.9 gigabyte. In case this chunk is bigger, the previous chunk will be replaced with the new one as you did not specify to create "chunks of your chunk" by PostgresToGCSOperator. You can to it by adding placeholder {} in the filename and the Operator will handle it. def create_operator_write_append_init(self, worker=10): worker_var = dict() with TaskGroup(group_id=self.task_id_init) as tg1: for i in range(worker): worker_var[f'worker_{i}'] = PostgresToGCSOperator( task_id = f'worker_{i}', postgres_conn_id = self.conn_id, sql = 'extract_init.sql', bucket = self.bucket, filename = f'{self.filename_init}_{i}_part_{{}}.{self.export_format}', export_format = self.export_format, # the export format is json gzip = True, params = { 'worker': i } ) return tg1
Airflow Task Succeeded But Not All Data Ingested
I have an airflow task to extract data with this flow PostgreSQL -> Google Cloud Storage -> BigQuery The problem that I have is, it seems not all the data is ingested into BigQuery. on the PostgreSQL source, the table has 18M+ rows of data, but after ingested it only has 4M+ rows of data. When I check on production, the data return 18M+ rows with this query: SELECT COUNT(1) FROM my_table -- This return 18M+ rows But after the DAG finished running, when I check on BigQuery: SELECT COUNT(1) FROM data_lake.my_table -- This return 4M+ rows To take notes, not all the tables that I ingested returned like this. All of the smaller tables ingested just fine. But when it hits a certain amount of rows it behaves like this. My suspicion is when the data is extracted from PostgreSQL to Google Cloud Storage. So I'll provide my function here: def create_operator_write_append_init(self, worker=10): worker_var = dict() with TaskGroup(group_id=self.task_id_init) as tg1: for i in range(worker): worker_var[f'worker_{i}'] = PostgresToGCSOperator( task_id = f'worker_{i}', postgres_conn_id = self.conn_id, sql = 'extract_init.sql', bucket = self.bucket, filename = f'{self.filename_init}_{i}.{self.export_format}', export_format = self.export_format, # the export format is json gzip = True, params = { 'worker': i } ) return tg1 and here is the SQL file: SELECT id, name, created_at, updated_at, deleted_at FROM my_table WHERE 1=1 AND ABS(MOD(hashtext(id::TEXT), 10)) = {{params.worker}}; What I did is I chunk the data and split it into several workers, hence the TaskGroup. To provide more information. I use Composer: composer-2.0.32-airflow-2.3.4 Large instance Worker 8CPU Worker 32GB Memory Worker 2GB storage Worker between 1-16 What are the possibilities of these happening?
[ "PostgresToGCSOperator inherits from BaseSQLToGCSOperator(https://airflow.apache.org/docs/apache-airflow-providers-google/stable/_api/airflow/providers/google/cloud/transfers/sql_to_gcs/index.html)\nAccording to source code, approx_max_file_size_bytes=1900000000. So if you split your table into 10 parts (or workers lets say) the maximum size of this chunk should be maximum 1.9 gigabyte. In case this chunk is bigger, the previous chunk will be replaced with the new one as you did not specify to create \"chunks of your chunk\" by PostgresToGCSOperator.\nYou can to it by adding placeholder {} in the filename and the Operator will handle it.\ndef create_operator_write_append_init(self, worker=10):\n worker_var = dict()\n with TaskGroup(group_id=self.task_id_init) as tg1:\n for i in range(worker):\n worker_var[f'worker_{i}'] = PostgresToGCSOperator(\n task_id = f'worker_{i}',\n postgres_conn_id = self.conn_id,\n sql = 'extract_init.sql',\n bucket = self.bucket,\n filename = f'{self.filename_init}_{i}_part_{{}}.{self.export_format}', \n export_format = self.export_format, # the export format is json\n gzip = True,\n params = {\n 'worker': i\n }\n )\n return tg1\n\n" ]
[ 0 ]
[]
[]
[ "airflow", "airflow_2.x", "google_cloud_composer", "python" ]
stackoverflow_0074650653_airflow_airflow_2.x_google_cloud_composer_python.txt
Q: Python module installation failing My python wont install Time Module it was asking me to update my pip to newest, and I did. I receive this error: ERROR: Could not find a version that satisfies the requirement time (from versions: none) ERROR: No matching distribution found for time My python verstion is the latest. Python 3.11.0 Pip version : 22.3.1 It's all to date.. any ideias why? Tried installing via CMD and pycharm packages additions. Also updated python and pip. no sucess.
Python module installation failing
My Python won't install the time module. It was asking me to update my pip to the newest version, and I did. I receive this error: ERROR: Could not find a version that satisfies the requirement time (from versions: none) ERROR: No matching distribution found for time My Python version is the latest, Python 3.11.0, and the pip version is 22.3.1, so it's all up to date. Any ideas why? I tried installing via CMD and via PyCharm's package manager, and also updated Python and pip. No success.
[]
[]
[ "The time module is part of Python's standard library. It's installed along with the rest of Python, and you don't need to (nor can you!) install it with pip.\n\nI can import time in the Python Console\n\nYes, because it's already installed.\n\nbut not in my actual code\n\nI don't believe you. Show us the exact error message you get when you try.\n" ]
[ -1 ]
[ "module", "pip", "python", "time" ]
stackoverflow_0074658997_module_pip_python_time.txt
Q: Going from a TensorArray to a Tensor Given a TensorArray with a fixed size and entries with uniform shapes, I want to go to a Tensor containing the same values, simply by having the index dimension of the TensorArray as a regular axis. TensorArrays have a method called "gather" which purportedly should do just that. And, in fact, the following example works: array = tf.TensorArray(tf.int32, size=3) array.write(0, 10) array.write(1, 20) array.write(2, 30) gathered = array.gather([0, 1, 2]) "gathered" then yields the desired Tensor: tf.Tensor([10 20 30], shape=(3,), dtype=int32) Unfortunately, this stops working when wrapping it inside a tf.function, like so: @tf.function def func(): array = tf.TensorArray(tf.int32, size=3) array.write(0, 10) array.write(1, 20) array.write(2, 30) gathered = array.gather([0, 1, 2]) return gathered tensor = func() "tensor" then wrongly yields the following Tensor: tf.Tensor([0 0 0], shape=(3,), dtype=int32) Do you have an explanation for this, or can you suggest an alternative way to go from a TensorArray to a Tensor inside a tf.function? A: Per https://github.com/tensorflow/tensorflow/issues/30409#issuecomment-508962873 you have to: Replace arr.write(j, t) with arr = arr.write(j, t) The issue is that tf.function executes as a graph. In eager mode the array will be updated (as a convenience), but you're really meant to use the return value to chain operations: https://www.tensorflow.org/api_docs/python/tf/TensorArray#returns_6 A: instead of array.gather(), try using array.stack(), it'll return a tensor from the TensorArray
Going from a TensorArray to a Tensor
Given a TensorArray with a fixed size and entries with uniform shapes, I want to go to a Tensor containing the same values, simply by having the index dimension of the TensorArray as a regular axis. TensorArrays have a method called "gather" which purportedly should do just that. And, in fact, the following example works: array = tf.TensorArray(tf.int32, size=3) array.write(0, 10) array.write(1, 20) array.write(2, 30) gathered = array.gather([0, 1, 2]) "gathered" then yields the desired Tensor: tf.Tensor([10 20 30], shape=(3,), dtype=int32) Unfortunately, this stops working when wrapping it inside a tf.function, like so: @tf.function def func(): array = tf.TensorArray(tf.int32, size=3) array.write(0, 10) array.write(1, 20) array.write(2, 30) gathered = array.gather([0, 1, 2]) return gathered tensor = func() "tensor" then wrongly yields the following Tensor: tf.Tensor([0 0 0], shape=(3,), dtype=int32) Do you have an explanation for this, or can you suggest an alternative way to go from a TensorArray to a Tensor inside a tf.function?
[ "Per https://github.com/tensorflow/tensorflow/issues/30409#issuecomment-508962873 you have to:\n\nReplace arr.write(j, t) with arr = arr.write(j, t)\nThe issue is that tf.function executes as a graph. In eager mode the array will be updated (as a convenience), but you're really meant to use the return value to chain operations: https://www.tensorflow.org/api_docs/python/tf/TensorArray#returns_6\n\n", "instead of array.gather(), try using array.stack(), it'll return a tensor from the TensorArray\n" ]
[ 1, 0 ]
[]
[]
[ "python", "tensorflow", "tensorflow2.0" ]
stackoverflow_0065889381_python_tensorflow_tensorflow2.0.txt
Q: How to write dataframe to csv for max date rows only (filter for max date rows)? How do I get the df.to_csv to write only rows with the max asOfDate? So that each symbol below will only one row? import pandas as pd from yahooquery import Ticker symbols = ['AAPL','GOOG','MSFT'] #There are 75,000 symbols here. header = ["asOfDate","CashAndCashEquivalents","CashFinancial","CurrentAssets","TangibleBookValue","CurrentLiabilities","TotalLiabilitiesNetMinorityInterest"] for tick in symbols: faang = Ticker(tick) faang.balance_sheet(frequency='q') df = faang.balance_sheet(frequency='q') for column_name in header : if column_name not in df.columns: df.loc[:,column_name ] = None #Here, if any column is missing from your header column names for a given "tick", we add this column and set all the valus to None df.to_csv('output.csv', mode='a', index=True, header=False, columns=header) A: if asOfDate column has a date type or of it is a string with date in the format yyyy-mm-dd you can filter the dateframe for the rows you want to write df[df.asOfDate == df.asOfDate.max()].to_csv('output.csv', mode='a', index=True, header=False, columns=header)
How to write dataframe to csv for max date rows only (filter for max date rows)?
How do I get the df.to_csv to write only rows with the max asOfDate, so that each symbol below will have only one row? import pandas as pd from yahooquery import Ticker symbols = ['AAPL','GOOG','MSFT'] #There are 75,000 symbols here. header = ["asOfDate","CashAndCashEquivalents","CashFinancial","CurrentAssets","TangibleBookValue","CurrentLiabilities","TotalLiabilitiesNetMinorityInterest"] for tick in symbols: faang = Ticker(tick) faang.balance_sheet(frequency='q') df = faang.balance_sheet(frequency='q') for column_name in header : if column_name not in df.columns: df.loc[:,column_name ] = None #Here, if any column is missing from your header column names for a given "tick", we add this column and set all the values to None df.to_csv('output.csv', mode='a', index=True, header=False, columns=header)
[ "if asOfDate column has a date type or of it is a string with date in the format yyyy-mm-dd you can filter the dateframe for the rows you want to write\ndf[df.asOfDate == df.asOfDate.max()].to_csv('output.csv', mode='a', index=True, header=False, columns=header)\n\n" ]
[ 1 ]
[]
[]
[ "csv", "dataframe", "date", "pandas", "python" ]
stackoverflow_0074658643_csv_dataframe_date_pandas_python.txt
Q: Python: Plotting time delta I have a DataFrame with a column of the time and a column in which I have stored a time lag. The data looks like this: 2020-04-18 14:00:00 0 days 03:00:00 2020-04-19 02:00:00 1 days 13:00:00 2020-04-28 14:00:00 1 days 17:00:00 2020-04-29 20:00:00 2 days 09:00:00 2020-04-30 19:00:00 2 days 11:00:00 Time, Length: 282, dtype: datetime64[ns] Average time lag, Length: 116, dtype: object I want to plot the Time on the x-axis vs the time lag on the y-axis. However, I keep having errors with plotting the second column. Any tips on how to handle this data for the plot? A: In order to plot the time lag on the y-axis, you will need to convert the time lag from a timedelta object to a numerical value that can be used in the plot. One way to do this is to convert the time lag to seconds using the total_seconds method, and then plot the resulting values on the y-axis. Here is an example of how you can do this: import pandas as pd import matplotlib.pyplot as plt # Create a dataframe with the time and time lag data data = [ ['2020-04-18 14:00:00', '0 days 03:00:00'], ['2020-04-19 02:00:00', '1 days 13:00:00'], ['2020-04-28 14:00:00', '1 days 17:00:00'], ['2020-04-29 20:00:00', '2 days 09:00:00'], ['2020-04-30 19:00:00', '2 days 11:00:00'], ] df = pd.DataFrame(data, columns=['time', 'time_lag']) # Convert the time and time lag columns to datetime and timedelta objects df['time'] = pd.to_datetime(df['time']) df['time_lag'] = pd.to_timedelta(df['time_lag']) # Convert the time lag to seconds df['time_lag_seconds'] = df['time_lag'].dt.total_seconds() # Create a scatter plot with the time on the x-axis and the time lag in seconds on the y-axis plt.scatter(df['time'], df['time_lag_seconds']) plt.show()
Python: Plotting time delta
I have a DataFrame with a column of the time and a column in which I have stored a time lag. The data looks like this: 2020-04-18 14:00:00 0 days 03:00:00 2020-04-19 02:00:00 1 days 13:00:00 2020-04-28 14:00:00 1 days 17:00:00 2020-04-29 20:00:00 2 days 09:00:00 2020-04-30 19:00:00 2 days 11:00:00 Time, Length: 282, dtype: datetime64[ns] Average time lag, Length: 116, dtype: object I want to plot the Time on the x-axis vs the time lag on the y-axis. However, I keep having errors with plotting the second column. Any tips on how to handle this data for the plot?
[ "In order to plot the time lag on the y-axis, you will need to convert the time lag from a timedelta object to a numerical value that can be used in the plot. One way to do this is to convert the time lag to seconds using the total_seconds method, and then plot the resulting values on the y-axis.\nHere is an example of how you can do this:\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n# Create a dataframe with the time and time lag data\ndata = [ ['2020-04-18 14:00:00', '0 days 03:00:00'],\n ['2020-04-19 02:00:00', '1 days 13:00:00'],\n ['2020-04-28 14:00:00', '1 days 17:00:00'],\n ['2020-04-29 20:00:00', '2 days 09:00:00'],\n ['2020-04-30 19:00:00', '2 days 11:00:00'],\n]\ndf = pd.DataFrame(data, columns=['time', 'time_lag'])\n\n# Convert the time and time lag columns to datetime and timedelta objects\ndf['time'] = pd.to_datetime(df['time'])\ndf['time_lag'] = pd.to_timedelta(df['time_lag'])\n\n# Convert the time lag to seconds\ndf['time_lag_seconds'] = df['time_lag'].dt.total_seconds()\n\n# Create a scatter plot with the time on the x-axis and the time lag in seconds on the y-axis\nplt.scatter(df['time'], df['time_lag_seconds'])\nplt.show()\n\n" ]
[ 1 ]
[]
[]
[ "matplotlib", "python", "timedelta" ]
stackoverflow_0074659070_matplotlib_python_timedelta.txt
Q: groupby aggregate product in PyTorch I have the same problem as groupby aggregate mean in pytorch. However, I want to create the product of my tensors inside each group (or labels). Unfortunately, I couldn't find a native PyTorch function that could solve my problem, like a hypothetical scatter_prod_ for products (equivalent to scatter_add_ for sums), which was the function used in one of the answers. Recycling the example code from @elyase's question, consider the 2D tensor: samples = torch.Tensor([ [0.1, 0.1], #-> group / class 1 [0.2, 0.2], #-> group / class 2 [0.4, 0.4], #-> group / class 2 [0.0, 0.0] #-> group / class 0 ]) with labels where it is true that len(samples) == len(labels) labels = torch.LongTensor([1, 2, 2, 0]) So my expected output is: res == torch.Tensor([ [0.0, 0.0], [0.1, 0.1], [0.8, 0.8] # -> PRODUCT of [0.2, 0.2] and [0.4, 0.4] ]) Here the question is, again, following @elyase's question, how can this be done in pure PyTorch (i.e. no numpy so that I can autograd) and ideally without for loops? Crossposted in PyTorch forums. A: You can use the scatter_ function to calculate the product of the tensors in each group. samples = torch.Tensor([ [0.1, 0.1], #-> group / class 1 [0.2, 0.2], #-> group / class 2 [0.4, 0.4], #-> group / class 2 [0.0, 0.0] #-> group / class 0 ]) labels = torch.LongTensor([1,2,2,0]) label_size = 3 sample_dim = samples.size(1) index = labels.unsqueeze(1).repeat((1, sample_dim)) res = torch.ones(label_size, sample_dim, dtype=samples.dtype) res.scatter_(0, index, samples, reduce='multiply') res: tensor([[0.0000, 0.0000], [0.1000, 0.1000], [0.0800, 0.0800]])
groupby aggregate product in PyTorch
I have the same problem as groupby aggregate mean in pytorch. However, I want to create the product of my tensors inside each group (or labels). Unfortunately, I couldn't find a native PyTorch function that could solve my problem, like a hypothetical scatter_prod_ for products (equivalent to scatter_add_ for sums), which was the function used in one of the answers. Recycling the example code from @elyase's question, consider the 2D tensor: samples = torch.Tensor([ [0.1, 0.1], #-> group / class 1 [0.2, 0.2], #-> group / class 2 [0.4, 0.4], #-> group / class 2 [0.0, 0.0] #-> group / class 0 ]) with labels where it is true that len(samples) == len(labels) labels = torch.LongTensor([1, 2, 2, 0]) So my expected output is: res == torch.Tensor([ [0.0, 0.0], [0.1, 0.1], [0.8, 0.8] # -> PRODUCT of [0.2, 0.2] and [0.4, 0.4] ]) Here the question is, again, following @elyase's question, how can this be done in pure PyTorch (i.e. no numpy so that I can autograd) and ideally without for loops? Crossposted in PyTorch forums.
[ "You can use the scatter_ function to calculate the product of the tensors in each group.\nsamples = torch.Tensor([\n [0.1, 0.1], #-> group / class 1\n [0.2, 0.2], #-> group / class 2\n [0.4, 0.4], #-> group / class 2\n [0.0, 0.0] #-> group / class 0\n])\n\nlabels = torch.LongTensor([1,2,2,0])\n\nlabel_size = 3\nsample_dim = samples.size(1)\n\nindex = labels.unsqueeze(1).repeat((1, sample_dim))\n\nres = torch.ones(label_size, sample_dim, dtype=samples.dtype)\nres.scatter_(0, index, samples, reduce='multiply')\n\nres:\ntensor([[0.0000, 0.0000],\n [0.1000, 0.1000],\n [0.0800, 0.0800]])\n\n" ]
[ 0 ]
[]
[]
[ "python", "pytorch", "tensor" ]
stackoverflow_0074657919_python_pytorch_tensor.txt
Q: performing operation on matched columns of NumPy arrays I am new to python and programming in general and ran into a question: I have two NumPy arrays of the same shape: they are 2D arrays, of the dimensions 1000 x 2000. I wish to compare the values of each column in array A with the values in array B. The important part is that not every column of A should be compared to every column in B, but rather the same columns of A & B should be compared to one another, as in: A[:,0] should be compared to B[:,0], A[:,1] should be compared to B[:,1],… etc. This was easier to do when I had one dimensional arrays: I used zip(A, B), so I could run the following for-loop: A = np.array([2,5,6,3,7]) B = np.array([1,3,9,4,8]) res_list = [] For number1, number2 in zip(A, B): if number1 > number2: comment1 = “bigger” res_list.append(comment1) if number1 < number2: comment2 = “smaller” res_list.append(comment2) res_list In [702]: res_list Out[702]: ['bigger', 'bigger', 'smaller', 'smaller', 'smaller'] however, I am not sure how to best do this on the 2D array. As output, I am aiming for a list with 2000 sublists (the 2000 cols), to later count the numbers of instances of "bigger" and "smaller" for each of the columns. I am very thankful for any input. So far I have tried to use np.nditer in a double for loop, but it returned all the possible column combinations. I would specifically desire to only combine the "matching" columns. an approximation of the input (but I have: 1000 rows and 2000 cols) In [709]: A Out[709]: array([[2, 5, 6, 3, 7], [6, 2, 9, 2, 3], [2, 1, 4, 5, 7]]) In [710]: B Out[710]: array([[1, 3, 9, 4, 8], [4, 8, 2, 3, 1], [3, 7, 1, 8, 9]]) As desired output, I want to compare the values of the arrays A & B column-wise (only the "matching" columns, not all columns with all columns, as I tried to explain above), and store them in the a nested list (number of "sublists" should correspond to the number of columns): res_list = [["bigger", "bigger", "smaller"], ["bigger", "smaller", "smaller"], ["smaller", "bigger", "bigger], ["smaller", "smaller", "smaller"], ...] A: From the example input and output, I see that you want to do an element wise comparison, and store the values per columns. From your code you understand the 1D variant of this problem, so the question seems to be how to do it in 2D. Solution 1 In order to achieve this, we have to make the 2D problem, a 1D problem, so you can do what you already did. If for example the columns would become rows, then you can redo your zip strategy for every row. In otherwords, if we can turn: a = np.array( [[2, 5, 6, 3, 7], [6, 2, 9, 2, 3], [2, 1, 4, 5, 7]] ) into: array([[2, 6, 2], [5, 2, 1], [6, 9, 4], [3, 2, 5], [7, 3, 7]]) we can iterate over a and b, at the same time, and get our 1D version of the problem. Swapping the x and y axis of the matrix like this, is called transposing, and is very common, the operation for numpy is a.T, (docs ndarry.T). Now we use your code onces for the outer loop of iterating over all the rows (after transposing, all the rows actually hold the column values). After which we use the code on those values, because every row is a 1D numpy array. result = [] # Outer loop, to go over the columns of `a` and `b` at the same time. for row_a, row_b in zip(a.T, b.T): result_col = [] # Inner loop to compare a whole column element wise. for col_a, col_b in zip(row_a, row_b): result_col.append('bigger' if col_a > col_b else 'smaller') result.append(result_col) Note: I use a ternary operator to assign smaller and bigger. 
Solution 2 As indicated before you are only looking at 2 values that are in the same position for both arrays, this is called an elementwise comparison. Since we are only interested in the values that are at the exact same position, and we know the output shape of our result array (input 1000x2000, output will be 2000x1000), we can also iterate over all the elements using their index. Now some quick handy shortcuts, a.shape holds the dimensions of the array, therefore a.shape will be (1000, 2000). using [::-1] will reverse the order, similar to reverse() Combining a.shape[::-1] will hold (2000, 1000), our expected output shape. np.ndindex provides indexing, based on the number of dimensions provided. An *, performs tuple unpacking, so using it like np.ndindex(*a.shape), is equivalent to np.ndindex(1000, 2000). Therefore we can use their index (from np.ndindex) and turn the x and y around to write the result to the correct location in the output array: a = np.random.randint(0, 255, (1000, 2000)) b = np.random.randint(0, 255, (1000, 2000)) result = np.zeros(a.shape[::-1], dtype=object) for rows, columns in np.ndindex(*a.shape): result[columns, rows] = 'bigger' if a[rows, columns] > b[rows, columns] else 'smaller' print(result) This will lead to the same result. Similarly we could also first transpose the a and b array, drop the [::-1] in the result array, and swap the assignment result[columns, rows] back to result[rows, columns]. Edit Thinking about it a bit longer, you are only interested in doing a comparison between two array of the same shape (dimension). For this numpy already has a good solution, np.where(cond, <true>, <false>). So the entire problem can be reduced to: answer = np.where(a > b, 'bigger', 'smaller').T Note the .T to transpose the solution, such that the answer has the columns in the rows.
performing operation on matched columns of NumPy arrays
I am new to python and programming in general and ran into a question: I have two NumPy arrays of the same shape: they are 2D arrays, of the dimensions 1000 x 2000. I wish to compare the values of each column in array A with the values in array B. The important part is that not every column of A should be compared to every column in B, but rather the same columns of A & B should be compared to one another, as in: A[:,0] should be compared to B[:,0], A[:,1] should be compared to B[:,1],… etc. This was easier to do when I had one dimensional arrays: I used zip(A, B), so I could run the following for-loop: A = np.array([2,5,6,3,7]) B = np.array([1,3,9,4,8]) res_list = [] For number1, number2 in zip(A, B): if number1 > number2: comment1 = “bigger” res_list.append(comment1) if number1 < number2: comment2 = “smaller” res_list.append(comment2) res_list In [702]: res_list Out[702]: ['bigger', 'bigger', 'smaller', 'smaller', 'smaller'] however, I am not sure how to best do this on the 2D array. As output, I am aiming for a list with 2000 sublists (the 2000 cols), to later count the numbers of instances of "bigger" and "smaller" for each of the columns. I am very thankful for any input. So far I have tried to use np.nditer in a double for loop, but it returned all the possible column combinations. I would specifically desire to only combine the "matching" columns. an approximation of the input (but I have: 1000 rows and 2000 cols) In [709]: A Out[709]: array([[2, 5, 6, 3, 7], [6, 2, 9, 2, 3], [2, 1, 4, 5, 7]]) In [710]: B Out[710]: array([[1, 3, 9, 4, 8], [4, 8, 2, 3, 1], [3, 7, 1, 8, 9]]) As desired output, I want to compare the values of the arrays A & B column-wise (only the "matching" columns, not all columns with all columns, as I tried to explain above), and store them in the a nested list (number of "sublists" should correspond to the number of columns): res_list = [["bigger", "bigger", "smaller"], ["bigger", "smaller", "smaller"], ["smaller", "bigger", "bigger], ["smaller", "smaller", "smaller"], ...]
[ "From the example input and output, I see that you want to do an element wise comparison, and store the values per columns. From your code you understand the 1D variant of this problem, so the question seems to be how to do it in 2D.\nSolution 1\nIn order to achieve this, we have to make the 2D problem, a 1D problem, so you can do what you already did. If for example the columns would become rows, then you can redo your zip strategy for every row.\nIn otherwords, if we can turn:\na = np.array(\n [[2, 5, 6, 3, 7],\n [6, 2, 9, 2, 3],\n [2, 1, 4, 5, 7]]\n)\n\ninto:\narray([[2, 6, 2],\n [5, 2, 1],\n [6, 9, 4],\n [3, 2, 5],\n [7, 3, 7]])\n\nwe can iterate over a and b, at the same time, and get our 1D version of the problem. Swapping the x and y axis of the matrix like this, is called transposing, and is very common, the operation for numpy is a.T, (docs ndarry.T).\nNow we use your code onces for the outer loop of iterating over all the rows (after transposing, all the rows actually hold the column values). After which we use the code on those values, because every row is a 1D numpy array.\nresult = []\n\n# Outer loop, to go over the columns of `a` and `b` at the same time.\nfor row_a, row_b in zip(a.T, b.T):\n\n result_col = []\n # Inner loop to compare a whole column element wise.\n for col_a, col_b in zip(row_a, row_b):\n result_col.append('bigger' if col_a > col_b else 'smaller')\n result.append(result_col)\n\nNote: I use a ternary operator to assign smaller and bigger.\nSolution 2\nAs indicated before you are only looking at 2 values that are in the same position for both arrays, this is called an elementwise comparison. Since we are only interested in the values that are at the exact same position, and we know the output shape of our result array (input 1000x2000, output will be 2000x1000), we can also iterate over all the elements using their index.\nNow some quick handy shortcuts,\n\na.shape holds the dimensions of the array, therefore a.shape will be (1000, 2000).\nusing [::-1] will reverse the order, similar to reverse()\nCombining a.shape[::-1] will hold (2000, 1000), our expected output shape.\nnp.ndindex provides indexing, based on the number of dimensions provided.\nAn *, performs tuple unpacking, so using it like np.ndindex(*a.shape), is equivalent to np.ndindex(1000, 2000).\n\nTherefore we can use their index (from np.ndindex) and turn the x and y around to write the result to the correct location in the output array:\na = np.random.randint(0, 255, (1000, 2000))\nb = np.random.randint(0, 255, (1000, 2000))\nresult = np.zeros(a.shape[::-1], dtype=object)\n\nfor rows, columns in np.ndindex(*a.shape):\n result[columns, rows] = 'bigger' if a[rows, columns] > b[rows, columns] else 'smaller'\n\nprint(result)\n\nThis will lead to the same result. Similarly we could also first transpose the a and b array, drop the [::-1] in the result array, and swap the assignment result[columns, rows] back to result[rows, columns].\nEdit\n\nThinking about it a bit longer, you are only interested in doing a comparison between two array of the same shape (dimension). For this numpy already has a good solution, np.where(cond, <true>, <false>).\nSo the entire problem can be reduced to:\nanswer = np.where(a > b, 'bigger', 'smaller').T\n\nNote the .T to transpose the solution, such that the answer has the columns in the rows.\n" ]
[ 0 ]
[]
[]
[ "numpy", "numpy_ndarray", "python", "python_3.x" ]
stackoverflow_0074657977_numpy_numpy_ndarray_python_python_3.x.txt
Q: Grab specific strings within a for loop with variable nested length I have the following telegram export JSON dataset: import pandas as pd df = pd.read_json("data/result.json") >>>df.colums Index(['name', 'type', 'id', 'messages'], dtype='object') >>> type(df) <class 'pandas.core.frame.DataFrame'> # Sample output sample_df = pd.DataFrame({"messages": [ {"id": 11, "from": "user3984", "text": "Do you like soccer?"}, {"id": 312, "from": "user837", "text": ['Not sure', {'type': 'hashtag', 'text': '#confused'}]}, {"id": 4324, "from": "user3984", "text": ['O ', {'type': 'mention', 'text': '@user87324'}, ' really?']} ]}) Within df, there's a "messages" column, which has the following output: >>> df["messages"] 0 {'id': -999713937, 'type': 'service', 'date': ... 1 {'id': -999713936, 'type': 'service', 'date': ... 2 {'id': -999713935, 'type': 'message', 'date': ... 3 {'id': -999713934, 'type': 'message', 'date': ... 4 {'id': -999713933, 'type': 'message', 'date': ... ... 22377 {'id': 22102, 'type': 'message', 'date': '2022... 22378 {'id': 22103, 'type': 'message', 'date': '2022... 22379 {'id': 22104, 'type': 'message', 'date': '2022... 22380 {'id': 22105, 'type': 'message', 'date': '2022... 22381 {'id': 22106, 'type': 'message', 'date': '2022... Name: messages, Length: 22382, dtype: object Within messages, there's a particular key named "text", and that's the place I want to focus. Turns out when you explore the data, text column can have: A single text: >>> df["messages"][5]["text"] 'JAJAJAJAJAJAJA' >>> df["messages"][22262]["text"] 'No creo' But sometimes it's nested. Like the following: >>> df["messages"][22373]["text"] ['O ', {'type': 'mention', 'text': '@user87324'}, ' really?'] >>> df["messages"][22189]["text"] ['The average married couple has sex roughly once a week. ', {'type': 'mention', 'text': '@googlefactss'}, ' ', {'type': 'hashtag', 'text': '#funfact'}] >>> df["messages"][22345]["text"] [{'type': 'mention', 'text': '@user817430'}] In case for nested data, if I want to grab the main text, I can do the following: >>> df["messages"][22373]["text"][0] 'O ' >>> df["messages"][22189]["text"][0] 'The average married couple has sex roughly once a week. ' >>> From here, everything seems ok. However, the problem arrives when I do the for loop. If I try the following: for item in df["messages"]: tg_id = item.get("id", "None") tg_type = item.get("type", "None") tg_date = item.get("date", "None") tg_from = item.get("from", "None") tg_text = item.get("text", "None") print(tg_id, tg_from, tg_text) A sample output is: 21263 user3984 jajajajaja 21264 user837 ['Not sure', {'type': 'hashtag', 'text': '#confused'}] 21265 user3984 What time is it?✋ MY ASK: How to flatten the rows? I need the following (and store that in a data frame): 21263 user3984 jajajajaja 21264 user837 Not sure 21265 user837 type: hashtag 21266 user837 text: #confused 21267 user3984 What time is it?✋ I tried to detect "text" type like this: for item in df["messages"]: tg_id = item.get("id", "None") tg_type = item.get("type", "None") tg_date = item.get("date", "None") tg_from = item.get("from", "None") tg_text = item.get("text", "None") if type(tg_text) == list: tg_text = tg_text[0] print(tg_id, tg_from, tg_text) With this I only grab the first text, but I'm expecting to grab the other fields as well or to 'flatten' the data. 
I also tried: for item in df["messages"]: tg_id = item.get("id", "None") tg_type = item.get("type", "None") tg_date = item.get("date", "None") tg_from = item.get("from", "None") tg_text = item.get("text", "None") if type(tg_text) == list: tg_text = tg_text[0] tg_second = tg_text[1]["text"] print(tg_id, tg_from, tg_text, tg_second) But no luck because indices are variable, length from messages are variable too. In addition, even if the output weren't close of my desired solution, I also tried: for item in df["messages"]: tg_text = item.get("text", "None") if type(tg_text) == list: for i in tg_text: print(item, i) mydict = {} for k, v in df.items(): print(k, v) mydict[k] = v # Used df["text"].explode() # Used json_normalize but no luck Any thoughts? A: Assuming a dataframe like the following: df = pd.DataFrame({"messages": [ {"id": 21263, "from": "user3984", "text": "jajajajaja"}, {"id": 21264, "from": "user837", "text": ['Not sure', {'type': 'hashtag', 'text': '#confused'}]}, {"id": 21265, "from": "user3984", "text": ['O ', {'type': 'mention', 'text': '@user87324'}, ' really?']} ]}) First, expand the messages dictionaries into separate id, from and text columns. expanded = pd.concat([df.drop("messages", axis=1), pd.json_normalize(df["messages"])], axis=1) Then explode the dataframe to have a row for each entry in text: exploded = expanded.explode("text") Then expand the dictionaries that are in some of the entries, converting them to lists of text: def convert_dict(entry): if type(entry) is dict: return [f"{k}: {v}" for k, v in entry.items()] else: return entry exploded["text"] = exploded["text"].apply(convert_dict) Finally, explode again to separate the converted dicts to separate rows. final = exploded.explode("text") The resulting output should look like this id from text 0 21263 user3984 jajajajaja 1 21264 user837 Not sure 1 21264 user837 type: hashtag 1 21264 user837 text: #confused 2 21265 user3984 O 2 21265 user3984 type: mention 2 21265 user3984 text: @user87324 2 21265 user3984 really? A: Just to share some ideas to flatten your list, def flatlist(srclist): flatlist=[] if srclist: #check if srclist is not None for item in srclist: if(type(item) == str): #check if item is type of string flatlist.append(item) if(type(item) == dict): #check if item is type of dict for x in item: flatlist.append(x + ' ' + item[x]) #combine key and value return flatlist for item in df["messages"]: tg_text = item.get("text", "None") flat_list = flatlist(tg_text) # get the flattened list for tg in flat_list: # loop through the list and get the data you want tg_id = item.get("id", "None") tg_from = item.get("from", "None") print(tg_id, tg_from, tg)
Grab specific strings within a for loop with variable nested length
I have the following telegram export JSON dataset: import pandas as pd df = pd.read_json("data/result.json") >>>df.colums Index(['name', 'type', 'id', 'messages'], dtype='object') >>> type(df) <class 'pandas.core.frame.DataFrame'> # Sample output sample_df = pd.DataFrame({"messages": [ {"id": 11, "from": "user3984", "text": "Do you like soccer?"}, {"id": 312, "from": "user837", "text": ['Not sure', {'type': 'hashtag', 'text': '#confused'}]}, {"id": 4324, "from": "user3984", "text": ['O ', {'type': 'mention', 'text': '@user87324'}, ' really?']} ]}) Within df, there's a "messages" column, which has the following output: >>> df["messages"] 0 {'id': -999713937, 'type': 'service', 'date': ... 1 {'id': -999713936, 'type': 'service', 'date': ... 2 {'id': -999713935, 'type': 'message', 'date': ... 3 {'id': -999713934, 'type': 'message', 'date': ... 4 {'id': -999713933, 'type': 'message', 'date': ... ... 22377 {'id': 22102, 'type': 'message', 'date': '2022... 22378 {'id': 22103, 'type': 'message', 'date': '2022... 22379 {'id': 22104, 'type': 'message', 'date': '2022... 22380 {'id': 22105, 'type': 'message', 'date': '2022... 22381 {'id': 22106, 'type': 'message', 'date': '2022... Name: messages, Length: 22382, dtype: object Within messages, there's a particular key named "text", and that's the place I want to focus. Turns out when you explore the data, text column can have: A single text: >>> df["messages"][5]["text"] 'JAJAJAJAJAJAJA' >>> df["messages"][22262]["text"] 'No creo' But sometimes it's nested. Like the following: >>> df["messages"][22373]["text"] ['O ', {'type': 'mention', 'text': '@user87324'}, ' really?'] >>> df["messages"][22189]["text"] ['The average married couple has sex roughly once a week. ', {'type': 'mention', 'text': '@googlefactss'}, ' ', {'type': 'hashtag', 'text': '#funfact'}] >>> df["messages"][22345]["text"] [{'type': 'mention', 'text': '@user817430'}] In case for nested data, if I want to grab the main text, I can do the following: >>> df["messages"][22373]["text"][0] 'O ' >>> df["messages"][22189]["text"][0] 'The average married couple has sex roughly once a week. ' >>> From here, everything seems ok. However, the problem arrives when I do the for loop. If I try the following: for item in df["messages"]: tg_id = item.get("id", "None") tg_type = item.get("type", "None") tg_date = item.get("date", "None") tg_from = item.get("from", "None") tg_text = item.get("text", "None") print(tg_id, tg_from, tg_text) A sample output is: 21263 user3984 jajajajaja 21264 user837 ['Not sure', {'type': 'hashtag', 'text': '#confused'}] 21265 user3984 What time is it?✋ MY ASK: How to flatten the rows? I need the following (and store that in a data frame): 21263 user3984 jajajajaja 21264 user837 Not sure 21265 user837 type: hashtag 21266 user837 text: #confused 21267 user3984 What time is it?✋ I tried to detect "text" type like this: for item in df["messages"]: tg_id = item.get("id", "None") tg_type = item.get("type", "None") tg_date = item.get("date", "None") tg_from = item.get("from", "None") tg_text = item.get("text", "None") if type(tg_text) == list: tg_text = tg_text[0] print(tg_id, tg_from, tg_text) With this I only grab the first text, but I'm expecting to grab the other fields as well or to 'flatten' the data. 
I also tried: for item in df["messages"]: tg_id = item.get("id", "None") tg_type = item.get("type", "None") tg_date = item.get("date", "None") tg_from = item.get("from", "None") tg_text = item.get("text", "None") if type(tg_text) == list: tg_text = tg_text[0] tg_second = tg_text[1]["text"] print(tg_id, tg_from, tg_text, tg_second) But no luck because indices are variable, length from messages are variable too. In addition, even if the output weren't close of my desired solution, I also tried: for item in df["messages"]: tg_text = item.get("text", "None") if type(tg_text) == list: for i in tg_text: print(item, i) mydict = {} for k, v in df.items(): print(k, v) mydict[k] = v # Used df["text"].explode() # Used json_normalize but no luck Any thoughts?
[ "Assuming a dataframe like the following:\ndf = pd.DataFrame({\"messages\": [\n {\"id\": 21263, \"from\": \"user3984\", \"text\": \"jajajajaja\"},\n {\"id\": 21264, \"from\": \"user837\", \"text\": ['Not sure', {'type': 'hashtag', 'text': '#confused'}]}, \n {\"id\": 21265, \"from\": \"user3984\", \"text\": ['O ', {'type': 'mention', 'text': '@user87324'}, ' really?']}\n]})\n\nFirst, expand the messages dictionaries into separate id, from and text columns.\n expanded = pd.concat([df.drop(\"messages\", axis=1), pd.json_normalize(df[\"messages\"])], axis=1)\n\nThen explode the dataframe to have a row for each entry in text:\nexploded = expanded.explode(\"text\")\n\nThen expand the dictionaries that are in some of the entries, converting them to lists of text:\ndef convert_dict(entry):\n if type(entry) is dict:\n return [f\"{k}: {v}\" for k, v in entry.items()]\n else:\n return entry\n\nexploded[\"text\"] = exploded[\"text\"].apply(convert_dict)\n\nFinally, explode again to separate the converted dicts to separate rows.\nfinal = exploded.explode(\"text\")\n\nThe resulting output should look like this\n id from text\n0 21263 user3984 jajajajaja\n1 21264 user837 Not sure\n1 21264 user837 type: hashtag\n1 21264 user837 text: #confused\n2 21265 user3984 O \n2 21265 user3984 type: mention\n2 21265 user3984 text: @user87324\n2 21265 user3984 really?\n\n", "Just to share some ideas to flatten your list,\ndef flatlist(srclist):\n flatlist=[]\n if srclist: #check if srclist is not None\n for item in srclist:\n if(type(item) == str): #check if item is type of string\n flatlist.append(item)\n if(type(item) == dict): #check if item is type of dict\n for x in item:\n flatlist.append(x + ' ' + item[x]) #combine key and value\n return flatlist\n\nfor item in df[\"messages\"]:\n tg_text = item.get(\"text\", \"None\")\n flat_list = flatlist(tg_text) # get the flattened list\n for tg in flat_list: # loop through the list and get the data you want\n tg_id = item.get(\"id\", \"None\")\n tg_from = item.get(\"from\", \"None\")\n \n print(tg_id, tg_from, tg)\n\n" ]
[ 1, 0 ]
[]
[]
[ "for_loop", "json", "list", "nested", "python" ]
stackoverflow_0074650152_for_loop_json_list_nested_python.txt
Q: Try to find a sublist that doesnt occur in the range of ANY of the sublists in another list enhancerlist=[[5,8],[10,11]] TFlist=[[6,7],[24,56]] I have two lists of lists. I am trying to isolate the sublists in my 'TFlist' that don't fit in the range of ANY of the sublists of enhancerlist (by range: TFlist sublist range fits inside of enhancerlist sublist range). SO for example, TFlist[1] will not occur in the range of any sublists in enhancerlist (whereas TFlist [6,7] fits inside the range of [5,8]) , so I want this as output: TF_notinrange=[24,56] the problem with a nested for loop like this: while TFlist: TF=TFlist.pop() for j in enhancerlist: if ((TF[0]>= j[0]) and (TF[1]<= j[1])): continue else: TF_notinrange.append(TF) is that I get this as output: [[24, 56], [3, 4]] the if statement is checking one sublist in enhancerlist at a time and so will append TF even if, later on, there is a sublist it is in the range of. Could I somehow do a while loop with the condition? although it seems like I still have the issue of a nested loop appending things incorrectly ? A: Alternative is to use a list comprehension: TF_notinrange = [tf for tf in TFlist if not any(istart <= tf[0] <= tf[1] <= iend for istart, iend in enhancerlist)] print(TF_notinrange) >>> TF_notinrange Explanation Take ranges of TFlist which are not contained in any ranges of enhancerlist A: You can use chained comparisons along with the less-common for-else block where the else clause triggers only if the for loop was not broken out of prematurely to achieve this: non_overlapping = [] for tf_a, tf_b in TFlist: for enhancer_a, enhancer_b in enhancerlist: if enhancer_a <= tf_a < tf_b <= enhancer_b: break else: non_overlapping.append([tf_a, tf_b]) Note that this assumes that all pairs are already sorted and that no pair comprises a range of length zero (e.g., (2, 2)).
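For reference, a small self-contained sketch of the containment test both answers rely on, run on the sample lists from the question; the not_in_any_range helper name is mine:
enhancerlist = [[5, 8], [10, 11]]
TFlist = [[6, 7], [24, 56]]

def not_in_any_range(tf_intervals, enhancer_intervals):
    # Keep TF intervals that are not fully contained in any enhancer interval.
    return [
        [start, end]
        for start, end in tf_intervals
        if not any(e_start <= start and end <= e_end
                   for e_start, e_end in enhancer_intervals)
    ]

print(not_in_any_range(TFlist, enhancerlist))  # [[24, 56]]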
Try to find a sublist that doesnt occur in the range of ANY of the sublists in another list
enhancerlist=[[5,8],[10,11]] TFlist=[[6,7],[24,56]] I have two lists of lists. I am trying to isolate the sublists in my 'TFlist' that don't fit in the range of ANY of the sublists of enhancerlist (by range: TFlist sublist range fits inside of enhancerlist sublist range). SO for example, TFlist[1] will not occur in the range of any sublists in enhancerlist (whereas TFlist [6,7] fits inside the range of [5,8]) , so I want this as output: TF_notinrange=[24,56] the problem with a nested for loop like this: while TFlist: TF=TFlist.pop() for j in enhancerlist: if ((TF[0]>= j[0]) and (TF[1]<= j[1])): continue else: TF_notinrange.append(TF) is that I get this as output: [[24, 56], [3, 4]] the if statement is checking one sublist in enhancerlist at a time and so will append TF even if, later on, there is a sublist it is in the range of. Could I somehow do a while loop with the condition? although it seems like I still have the issue of a nested loop appending things incorrectly ?
[ "Alternative is to use a list comprehension:\nTF_notinrange = [tf for tf in TFlist \n if not any(istart <= tf[0] <= tf[1] <= iend \n for istart, iend in enhancerlist)]\nprint(TF_notinrange)\n>>> TF_notinrange\n\nExplanation\nTake ranges of TFlist which are not contained in any ranges of enhancerlist\n", "You can use chained comparisons along with the less-common for-else block where the else clause triggers only if the for loop was not broken out of prematurely to achieve this:\nnon_overlapping = []\n\nfor tf_a, tf_b in TFlist:\n for enhancer_a, enhancer_b in enhancerlist:\n if enhancer_a <= tf_a < tf_b <= enhancer_b:\n break\n else:\n non_overlapping.append([tf_a, tf_b])\n\nNote that this assumes that all pairs are already sorted and that no pair comprises a range of length zero (e.g., (2, 2)).\n" ]
[ 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074658926_python.txt
Q: How do I check if a line in a CSV file is not the header row, and then append each line of the file to a variable, excluding the header? I need to use an "if" statement to check if a line in a CSV file is not the header row. Then, I need to append each line of the CSV file to a variable called "mailing_list," excluding the header. How should I do this? This is the CSV file and what I have so far (may not be correct). uuid,username,email,subscribe_status 307919e9-d6f0-4ecf-9bef-c1320db8941a,afarrimond0,[email protected],opt-out 8743d75d-c62a-4bae-8990-3390fefbe5c7,tdelicate1,[email protected],opt-out 68a32cae-847a-47c5-a77c-0d14ccf11e70,edelahuntyk,[email protected],OPT-OUT a50bd76f-bc4d-4141-9b5d-3bfb9cb4c65d,tdelicate10,[email protected],active 26edd0b3-0040-4ba9-8c19-9b69d565df36,ogelder2,[email protected],unsubscribed 5c96189f-95fe-4638-9753-081a6e1a82e8,bnornable3,[email protected],opt-out 480fb04a-d7cd-47c5-8079-b580cb14b4d9,csheraton4,[email protected],active d08649ee-62ae-4d1a-b578-fdde309bb721,tstodart5,[email protected],active 5772c293-c2a9-41ff-a8d3-6c666fc19d9a,mbaudino6,[email protected],unsubscribed 9e8fb253-d80d-47b5-8e1d-9a89b5bcc41b,paspling7,[email protected],active 055dff79-7d09-4194-95f2-48dd586b8bd7,mknapton8,[email protected],active 5216dc65-05bb-4aba-a516-3c1317091471,ajelf9,[email protected],unsubscribed 41c30786-aa84-4d60-9879-0c53f8fad970,cgoodleyh,[email protected],active 3fd55224-dbff-4c89-baec-629a3442d8f7,smcgonnelli,[email protected],opt-out 2ac17a63-a64b-42fc-8780-02c5549f23a7,mmayoralj,[email protected],unsubscribed import csv base_url = '../dataset/' def read_mailing_list_file(): with open('mailing_list.csv', 'r') as csv_file: file_reader = csv.reader(csv_file) line_count = 0 mailing_list = open("mailing_list.csv").readlines() for row in file_reader: I am not sure what to try, but I am expecting to append each line of the CSV file to the mailing_list variable, excluding the header. A: Using Sniffer class from csv. From docs: has_header(sample) Analyze the sample text (presumed to be in CSV format) and return True if the first row appears to be a series of column headers. Inspecting each column, one of two key criteria will be considered to estimate if the sample contains a header: the second through n-th rows contain numeric values the second through n-th rows contain strings where at >least one value’s length differs from that of the putative header of that column. Twenty rows after the first row are sampled; if more than half of columns + rows meet the criteria, True is returned. Note This method is a rough heuristic and may produce both false positives and negatives. import csv with open('mailing_list.csv') as csv_file: hdr = csv.Sniffer().has_header(csv_file.read()) csv_file.seek(0) r = csv.reader(csv_file) mailing_list = [] if hdr: next(r) for row in r: mailing_list.append(row) mailing_list Out[11]: [['307919e9-d6f0-4ecf-9bef-c1320db8941a', 'afarrimond0', '[email protected]', 'opt-out'], ['8743d75d-c62a-4bae-8990-3390fefbe5c7', 'tdelicate1', '[email protected]', 'opt-out'], ['68a32cae-847a-47c5-a77c-0d14ccf11e70', 'edelahuntyk', '[email protected]', 'OPT-OUT'], ['a50bd76f-bc4d-4141-9b5d-3bfb9cb4c65d', 'tdelicate10', '[email protected]', 'active'], ... ['3fd55224-dbff-4c89-baec-629a3442d8f7', 'smcgonnelli', '[email protected]', 'opt-out'], ['2ac17a63-a64b-42fc-8780-02c5549f23a7', 'mmayoralj', '[email protected]', 'unsubscribed'], []]
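If the file is known to always start with a header row, a simpler sketch than the Sniffer approach above is to skip the first row with next(); the file name comes from the question, the rest is an assumption:
import csv

def read_mailing_list(path="mailing_list.csv"):
    mailing_list = []
    with open(path, newline="") as csv_file:
        reader = csv.reader(csv_file)
        next(reader, None)          # skip the header row
        for row in reader:
            if row:                 # ignore blank lines
                mailing_list.append(row)
    return mailing_list

print(read_mailing_list())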
How do I check if a line in a CSV file is not the header row, and then append each line of the file to a variable, excluding the header?
I need to use an "if" statement to check if a line in a CSV file is not the header row. Then, I need to append each line of the CSV file to a variable called "mailing_list," excluding the header. How should I do this? This is the CSV file and what I have so far (may not be correct). uuid,username,email,subscribe_status 307919e9-d6f0-4ecf-9bef-c1320db8941a,afarrimond0,[email protected],opt-out 8743d75d-c62a-4bae-8990-3390fefbe5c7,tdelicate1,[email protected],opt-out 68a32cae-847a-47c5-a77c-0d14ccf11e70,edelahuntyk,[email protected],OPT-OUT a50bd76f-bc4d-4141-9b5d-3bfb9cb4c65d,tdelicate10,[email protected],active 26edd0b3-0040-4ba9-8c19-9b69d565df36,ogelder2,[email protected],unsubscribed 5c96189f-95fe-4638-9753-081a6e1a82e8,bnornable3,[email protected],opt-out 480fb04a-d7cd-47c5-8079-b580cb14b4d9,csheraton4,[email protected],active d08649ee-62ae-4d1a-b578-fdde309bb721,tstodart5,[email protected],active 5772c293-c2a9-41ff-a8d3-6c666fc19d9a,mbaudino6,[email protected],unsubscribed 9e8fb253-d80d-47b5-8e1d-9a89b5bcc41b,paspling7,[email protected],active 055dff79-7d09-4194-95f2-48dd586b8bd7,mknapton8,[email protected],active 5216dc65-05bb-4aba-a516-3c1317091471,ajelf9,[email protected],unsubscribed 41c30786-aa84-4d60-9879-0c53f8fad970,cgoodleyh,[email protected],active 3fd55224-dbff-4c89-baec-629a3442d8f7,smcgonnelli,[email protected],opt-out 2ac17a63-a64b-42fc-8780-02c5549f23a7,mmayoralj,[email protected],unsubscribed import csv base_url = '../dataset/' def read_mailing_list_file(): with open('mailing_list.csv', 'r') as csv_file: file_reader = csv.reader(csv_file) line_count = 0 mailing_list = open("mailing_list.csv").readlines() for row in file_reader: I am not sure what to try, but I am expecting to append each line of the CSV file to the mailing_list variable, excluding the header.
[ "Using Sniffer class from csv.\nFrom docs:\n\nhas_header(sample)\n\n\nAnalyze the sample text (presumed to be in CSV format) and return True if the first row appears to be a series of column headers. Inspecting each column, one of two key criteria will be considered to estimate if the sample contains a header:\n\n\n the second through n-th rows contain numeric values\n\n\n\n the second through n-th rows contain strings where at >least one value’s length differs from that of the putative header of that column.\n\n\n\nTwenty rows after the first row are sampled; if more than half of columns + rows meet the criteria, True is returned.\n\n\nNote\n\n\nThis method is a rough heuristic and may produce both false positives and negatives.\n\nimport csv \n\nwith open('mailing_list.csv') as csv_file:\n hdr = csv.Sniffer().has_header(csv_file.read())\n csv_file.seek(0)\n r = csv.reader(csv_file)\n mailing_list = []\n if hdr:\n next(r)\n for row in r:\n mailing_list.append(row)\n\nmailing_list \nOut[11]: \n[['307919e9-d6f0-4ecf-9bef-c1320db8941a',\n 'afarrimond0',\n '[email protected]',\n 'opt-out'],\n ['8743d75d-c62a-4bae-8990-3390fefbe5c7',\n 'tdelicate1',\n '[email protected]',\n 'opt-out'],\n ['68a32cae-847a-47c5-a77c-0d14ccf11e70',\n 'edelahuntyk',\n '[email protected]',\n 'OPT-OUT'],\n ['a50bd76f-bc4d-4141-9b5d-3bfb9cb4c65d',\n 'tdelicate10',\n '[email protected]',\n 'active'],\n\n ...\n\n ['3fd55224-dbff-4c89-baec-629a3442d8f7',\n 'smcgonnelli',\n '[email protected]',\n 'opt-out'],\n ['2ac17a63-a64b-42fc-8780-02c5549f23a7',\n 'mmayoralj',\n '[email protected]',\n 'unsubscribed'],\n []]\n\n\n" ]
[ 1 ]
[]
[]
[ "append", "csv", "python", "readlines" ]
stackoverflow_0074659020_append_csv_python_readlines.txt
Q: Install packages on EMR via bootstrap actions not working in Jupyter notebook I have an EMR cluster using EMR-6.3.1. I am using the Python3 Kernel. I have a very simple bootstrap script in S3: #!/bin/bash sudo python3 -m pip install Cython==0.29.4 boto==2.49.0 boto3==1.18.50 numpy==1.19.5 pandas==1.3.2 pyarrow==5.0.0 These are the bootstrap logs + sudo python3 -m pip install Cython==0.29.4 boto==2.49.0 boto3==1.18.50 numpy==1.19.5 pandas==1.3.2 pyarrow==5.0.0 WARNING: Running pip install with root privileges is generally not a good idea. Try `python3 -m pip install --user` instead. WARNING: The scripts cygdb, cython and cythonize are installed in '/usr/local/bin' which is not on PATH. Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location. WARNING: The scripts f2py, f2py3 and f2py3.7 are installed in '/usr/local/bin' which is not on PATH. Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location. WARNING: The script plasma_store is installed in '/usr/local/bin' which is not on PATH. Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location. and Collecting Cython==0.29.4 Downloading Cython-0.29.4-cp37-cp37m-manylinux1_x86_64.whl (2.1 MB) Requirement already satisfied: boto==2.49.0 in /usr/local/lib/python3.7/site-packages (2.49.0) Collecting boto3==1.18.50 Downloading boto3-1.18.50-py3-none-any.whl (131 kB) Collecting numpy==1.19.5 Downloading numpy-1.19.5-cp37-cp37m-manylinux2010_x86_64.whl (14.8 MB) Collecting pandas==1.3.2 Downloading pandas-1.3.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.3 MB) Collecting pyarrow==5.0.0 Downloading pyarrow-5.0.0-cp37-cp37m-manylinux2014_x86_64.whl (23.6 MB) Collecting s3transfer<0.6.0,>=0.5.0 Downloading s3transfer-0.5.2-py3-none-any.whl (79 kB) Collecting botocore<1.22.0,>=1.21.50 Downloading botocore-1.21.65-py3-none-any.whl (8.0 MB) Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /usr/local/lib/python3.7/site-packages (from boto3==1.18.50) (0.10.0) Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.7/site-packages (from pandas==1.3.2) (2021.1) Collecting python-dateutil>=2.7.3 Downloading python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB) Collecting urllib3<1.27,>=1.25.4 Downloading urllib3-1.26.13-py2.py3-none-any.whl (140 kB) Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/site-packages (from python-dateutil>=2.7.3->pandas==1.3.2) (1.13.0) Installing collected packages: Cython, python-dateutil, urllib3, botocore, s3transfer, boto3, numpy, pandas, pyarrow Successfully installed Cython-0.29.4 boto3-1.18.50 botocore-1.21.65 numpy-1.19.5 pandas-1.3.2 pyarrow-5.0.0 python-dateutil-2.8.2 s3transfer-0.5.2 urllib3-1.26.13 From a notebook, importing pandas and seeing the wrong version - 1.2.3. Further, I see pyarrow fails to import. I've printed the import path of pandas, which python version is run, and sys.path. 
import os import pandas import sys print(sys.path) print(pandas.__version__) print(os.path.abspath(pandas.__file__)) print(os.popen('echo $PYTHONPATH').read()) print(os.popen('which python3').read()) # sys.path.append('/usr/local/lib64/python3.7/site-packages') # if I add this, pyarrow can import import pyarrow ['/', '/emr/notebook-env/lib/python37.zip', '/emr/notebook-env/lib/python3.7', '/emr/notebook-env/lib/python3.7/lib-dynload', '', '/emr/notebook-env/lib/python3.7/site-packages', '/emr/notebook-env/lib/python3.7/site-packages/awseditorssparkmonitoringwidget-1.0-py3.7.egg', '/emr/notebook-env/lib/python3.7/site-packages/IPython/extensions', '/home/emr-notebook/.ipython'] 1.2.3 /emr/notebook-env/lib/python3.7/site-packages/pandas/__init__.py /usr/bin/python3 --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-2-aea9862499ce> in <module> 9 10 # sys.path.append('/usr/local/lib64/python3.7/site-packages') # if I add this, pyarrow can import ---> 11 import pyarrow ModuleNotFoundError: No module named 'pyarrow' I found I can import pyarrow if I add /usr/local/lib64/python3.7/site-packages to sys.path. This seems like am improvement, but still the wrong version of pandas is imported. I've tried: SSH'ing into the master node and mucking with the configuration. sudo python3 -m pip install --user ... export PYTHONPATH=/usr/local/lib64/python3.7/site-packages && sudo python3 -m pip install ... sudo pip3 install --upgrade setuptools && sudo python3 -m pip install ... Using a pyspark kernel and running sc.install_pypi_package("pandas==1.3.2") Any help is appreciated. Thank you. A: It looks like you are running the pip install command with sudo privileges, which is generally not recommended. The warning message is suggesting that you try running the command with the --user flag instead, which will install the packages locally for the current user instead of system-wide with root privileges. You can try running the following command instead: python3 -m pip install --user Cython==0.29.4 boto==2.49.0 boto3==1.18.50 numpy==1.19.5 pandas==1.3.2 pyarrow==5.0.0 This should install the packages locally for the current user, without requiring sudo privileges. You may also need to add the ~/.local/bin directory to your PATH environment variable so that you can run the installed packages from the command line.
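The output in the question (kernel running from /emr/notebook-env while pip installed into /usr/local) suggests the notebook kernel and the bootstrap pip are two different Python environments. A hedged diagnostic sketch to run inside the notebook itself; whether a --user install is picked up depends on how the EMR notebook environment is provisioned, and the kernel must be restarted before re-importing:
import subprocess
import sys

# Which interpreter is the notebook kernel really using?
print(sys.executable)
print(sys.path)

# Install the pinned versions into that same interpreter's environment.
subprocess.check_call([
    sys.executable, "-m", "pip", "install", "--user",
    "pandas==1.3.2", "pyarrow==5.0.0",
])
# Restart the kernel afterwards, then verify with:
# import pandas, pyarrow; print(pandas.__version__)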
Install packages on EMR via bootstrap actions not working in Jupyter notebook
I have an EMR cluster using EMR-6.3.1. I am using the Python3 Kernel. I have a very simple bootstrap script in S3: #!/bin/bash sudo python3 -m pip install Cython==0.29.4 boto==2.49.0 boto3==1.18.50 numpy==1.19.5 pandas==1.3.2 pyarrow==5.0.0 These are the bootstrap logs + sudo python3 -m pip install Cython==0.29.4 boto==2.49.0 boto3==1.18.50 numpy==1.19.5 pandas==1.3.2 pyarrow==5.0.0 WARNING: Running pip install with root privileges is generally not a good idea. Try `python3 -m pip install --user` instead. WARNING: The scripts cygdb, cython and cythonize are installed in '/usr/local/bin' which is not on PATH. Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location. WARNING: The scripts f2py, f2py3 and f2py3.7 are installed in '/usr/local/bin' which is not on PATH. Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location. WARNING: The script plasma_store is installed in '/usr/local/bin' which is not on PATH. Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location. and Collecting Cython==0.29.4 Downloading Cython-0.29.4-cp37-cp37m-manylinux1_x86_64.whl (2.1 MB) Requirement already satisfied: boto==2.49.0 in /usr/local/lib/python3.7/site-packages (2.49.0) Collecting boto3==1.18.50 Downloading boto3-1.18.50-py3-none-any.whl (131 kB) Collecting numpy==1.19.5 Downloading numpy-1.19.5-cp37-cp37m-manylinux2010_x86_64.whl (14.8 MB) Collecting pandas==1.3.2 Downloading pandas-1.3.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.3 MB) Collecting pyarrow==5.0.0 Downloading pyarrow-5.0.0-cp37-cp37m-manylinux2014_x86_64.whl (23.6 MB) Collecting s3transfer<0.6.0,>=0.5.0 Downloading s3transfer-0.5.2-py3-none-any.whl (79 kB) Collecting botocore<1.22.0,>=1.21.50 Downloading botocore-1.21.65-py3-none-any.whl (8.0 MB) Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /usr/local/lib/python3.7/site-packages (from boto3==1.18.50) (0.10.0) Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.7/site-packages (from pandas==1.3.2) (2021.1) Collecting python-dateutil>=2.7.3 Downloading python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB) Collecting urllib3<1.27,>=1.25.4 Downloading urllib3-1.26.13-py2.py3-none-any.whl (140 kB) Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/site-packages (from python-dateutil>=2.7.3->pandas==1.3.2) (1.13.0) Installing collected packages: Cython, python-dateutil, urllib3, botocore, s3transfer, boto3, numpy, pandas, pyarrow Successfully installed Cython-0.29.4 boto3-1.18.50 botocore-1.21.65 numpy-1.19.5 pandas-1.3.2 pyarrow-5.0.0 python-dateutil-2.8.2 s3transfer-0.5.2 urllib3-1.26.13 From a notebook, importing pandas and seeing the wrong version - 1.2.3. Further, I see pyarrow fails to import. I've printed the import path of pandas, which python version is run, and sys.path. 
import os import pandas import sys print(sys.path) print(pandas.__version__) print(os.path.abspath(pandas.__file__)) print(os.popen('echo $PYTHONPATH').read()) print(os.popen('which python3').read()) # sys.path.append('/usr/local/lib64/python3.7/site-packages') # if I add this, pyarrow can import import pyarrow ['/', '/emr/notebook-env/lib/python37.zip', '/emr/notebook-env/lib/python3.7', '/emr/notebook-env/lib/python3.7/lib-dynload', '', '/emr/notebook-env/lib/python3.7/site-packages', '/emr/notebook-env/lib/python3.7/site-packages/awseditorssparkmonitoringwidget-1.0-py3.7.egg', '/emr/notebook-env/lib/python3.7/site-packages/IPython/extensions', '/home/emr-notebook/.ipython'] 1.2.3 /emr/notebook-env/lib/python3.7/site-packages/pandas/__init__.py /usr/bin/python3 --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-2-aea9862499ce> in <module> 9 10 # sys.path.append('/usr/local/lib64/python3.7/site-packages') # if I add this, pyarrow can import ---> 11 import pyarrow ModuleNotFoundError: No module named 'pyarrow' I found I can import pyarrow if I add /usr/local/lib64/python3.7/site-packages to sys.path. This seems like am improvement, but still the wrong version of pandas is imported. I've tried: SSH'ing into the master node and mucking with the configuration. sudo python3 -m pip install --user ... export PYTHONPATH=/usr/local/lib64/python3.7/site-packages && sudo python3 -m pip install ... sudo pip3 install --upgrade setuptools && sudo python3 -m pip install ... Using a pyspark kernel and running sc.install_pypi_package("pandas==1.3.2") Any help is appreciated. Thank you.
[ "It looks like you are running the pip install command with sudo privileges, which is generally not recommended. The warning message is suggesting that you try running the command with the --user flag instead, which will install the packages locally for the current user instead of system-wide with root privileges.\nYou can try running the following command instead:\npython3 -m pip install --user Cython==0.29.4 boto==2.49.0 boto3==1.18.50 numpy==1.19.5 pandas==1.3.2 pyarrow==5.0.0\n\nThis should install the packages locally for the current user, without requiring sudo privileges. You may also need to add the ~/.local/bin directory to your PATH environment variable so that you can run the installed packages from the command line.\n" ]
[ 0 ]
[]
[]
[ "amazon_emr", "python" ]
stackoverflow_0074659221_amazon_emr_python.txt
Q: What does IDLE stand for? I have been using IDLE to solve Python questions. It's a very simple question. A: IDLE stands for Integrated Development and Learning Environment. It is a built-in development environment for writing and running Python code.
What does IDLE stand for?
I have been using IDLE to solve Python questions. It's a very simple question.
[ "IDLE stands for Integrated Development and Learning Environment. It is a built-in development environment for writing and running Python code.\n" ]
[ 0 ]
[]
[]
[ "python", "python_idle" ]
stackoverflow_0074659121_python_python_idle.txt
Q: Avoid KeyError while selecting several data from several groups I am trying to add certain values from certain Brand from certain Month by using .groupby, but I keep getting the same Error: KeyError: ('Acura', '1', '2020') This Values Do exist in the file i am importing: ANIO ID_MES MARCA MODELO UNI_VEH 2020 1 Acura ILX 6 2020 1 Acura Mdx 19 2020 1 Acura Rdx 78 2020 1 Acura TLX 7 2020 1 Honda Accord- 195 2020 1 Honda BR-V 557 2020 1 Honda Civic 693 2020 1 Honda CR-V 2095 import pandas as pd import matplotlib.pyplot as plt df = pd.read_excel("HondaAcuraSales.xlsx") def sumMonthValues (year, brand): count = 1 sMonthSum = [] if anio == 2022: months = 10 else: months = 12 while count <= months: month = 1 monthS = str(mes) BmY = df.groupby(["BRAND","ID_MONTH","YEAR"]) honda = BmY.get_group((brand, monthS, year)) sales = honda["UNI_SOL"].sum() sMonthSum += [sales] month = month + 1 return sumasMes year = 2020 brand = ('Acura') chuck = sumMonthValues (year, brand) print (chuck) Is there something wrong regarding how am i grouping the data? A: If need filter DataFrame by year, brand and months you can avoid groupby and use DataFrame.loc with mask - if scalar compare by Series.eq, if multiple values use Series.isin: def sumMonthValues (year, brand): months = 10 if year == 2022 else 12 mask = (df['ID_MES'].isin(range(1, months+1)) & df['ANIO'].eq(year) & df['MARCA'].isin(list(brand))) return df.loc[mask, "UNI_VEH"].sum() year = 2020 #one element tuple - added , brand = ('Acura', ) chuck = sumMonthValues (year, brand) print (chuck) 110 A: So i got arround it: Storing the values given from the sum of sales given year and brand per month. import pandas as pd import matplotlib.pyplot as plt df = pd.read_excel("ventasHondaMexico2020-2019.xlsx") def sumMonthValues (year, brand): sMonthSum = [] months = 10 if year == 2022 else 12 nmes = 1 mes = [nmes] while nmes <= months: mask = (df['ID_MES'].isin(mes) & df['ANIO'].eq(year) & df['MARCA'].isin(list(brand))) nmes = nmes +1 mes = [nmes] sumMes = df.loc[mask, "UNI_VEH"].sum() sMonthSum += [sumMes] return sMonthSum year = 2020 #one element tuple - added , brand = ('Acura', ) conteo = 1 chuck = sumMonthValues (year, brand) print (chuck)
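The KeyError shown in the question most likely comes from looking up string keys ('1', '2020') in a group index built from integer columns, and from grouping on column names that do not match the sample data. A small sketch of get_group with matching names and key types; the three sample rows are trimmed from the question's data:
import pandas as pd

df = pd.DataFrame({
    "ANIO": [2020, 2020, 2020],
    "ID_MES": [1, 1, 1],
    "MARCA": ["Acura", "Acura", "Honda"],
    "UNI_VEH": [6, 19, 557],
})

groups = df.groupby(["MARCA", "ID_MES", "ANIO"])
# Keys must match the dtypes stored in the frame: ints stay ints.
acura_jan_2020 = groups.get_group(("Acura", 1, 2020))
print(acura_jan_2020["UNI_VEH"].sum())   # 25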
Avoid KeyError while selecting several data from several groups
I am trying to add certain values from certain Brand from certain Month by using .groupby, but I keep getting the same Error: KeyError: ('Acura', '1', '2020') This Values Do exist in the file i am importing: ANIO ID_MES MARCA MODELO UNI_VEH 2020 1 Acura ILX 6 2020 1 Acura Mdx 19 2020 1 Acura Rdx 78 2020 1 Acura TLX 7 2020 1 Honda Accord- 195 2020 1 Honda BR-V 557 2020 1 Honda Civic 693 2020 1 Honda CR-V 2095 import pandas as pd import matplotlib.pyplot as plt df = pd.read_excel("HondaAcuraSales.xlsx") def sumMonthValues (year, brand): count = 1 sMonthSum = [] if anio == 2022: months = 10 else: months = 12 while count <= months: month = 1 monthS = str(mes) BmY = df.groupby(["BRAND","ID_MONTH","YEAR"]) honda = BmY.get_group((brand, monthS, year)) sales = honda["UNI_SOL"].sum() sMonthSum += [sales] month = month + 1 return sumasMes year = 2020 brand = ('Acura') chuck = sumMonthValues (year, brand) print (chuck) Is there something wrong regarding how am i grouping the data?
[ "If need filter DataFrame by year, brand and months you can avoid groupby and use DataFrame.loc with mask - if scalar compare by Series.eq, if multiple values use Series.isin:\ndef sumMonthValues (year, brand):\n \n months = 10 if year == 2022 else 12\n \n mask = (df['ID_MES'].isin(range(1, months+1)) &\n df['ANIO'].eq(year) & \n df['MARCA'].isin(list(brand)))\n \n return df.loc[mask, \"UNI_VEH\"].sum()\n\nyear = 2020\n#one element tuple - added ,\nbrand = ('Acura', )\n\nchuck = sumMonthValues (year, brand)\nprint (chuck)\n110\n\n", "So i got arround it: Storing the values given from the sum of sales given year and brand per month.\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\ndf = pd.read_excel(\"ventasHondaMexico2020-2019.xlsx\")\n\ndef sumMonthValues (year, brand):\n \n sMonthSum = []\n months = 10 if year == 2022 else 12\n nmes = 1\n mes = [nmes]\n while nmes <= months:\n mask = (df['ID_MES'].isin(mes) &\n df['ANIO'].eq(year) & \n df['MARCA'].isin(list(brand)))\n nmes = nmes +1\n mes = [nmes]\n sumMes = df.loc[mask, \"UNI_VEH\"].sum()\n sMonthSum += [sumMes]\n return sMonthSum\n \n\nyear = 2020\n#one element tuple - added ,\nbrand = ('Acura', )\nconteo = 1 \nchuck = sumMonthValues (year, brand)\nprint (chuck)\n\n" ]
[ 0, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074651550_pandas_python.txt
Q: How to select the first representative element for each group of a DataFrameGroupBy object? I am having the following dataframe data = [ [1000, 1, 1], [1000, 1, 1], [1000, 1, 1], [1000, 1, 2], [1000, 1, 2], [1000, 1, 2], [2000, 0, 1], [2000, 0, 1], [2000, 1, 2], [2000, 0, 2], [2000, 1, 2]] df = pd.DataFrame(data, columns=['route_id', 'direction_id', 'trip_id']) Then, I group my df based on the columns route_id, direction_id by using: t_groups = df.groupby(['route_id','direction_id']) I would like to store the value of the trip_id column based on the first most popular trip_id of each unique route_id, direction_id combination. Ι have tried to apply a function value_counts() but I cannot get the first popular trip_id value. I would like my expected output to be like: route_id direction_id trip_id 0 1000 1 1 1 2000 0 1 2 2000 1 2 Any suggestions? A: To store the value of the trip_id column based on the first most popular trip_id of each unique route_id, direction_id combination, you can use the idxmax method on the groupby object to get the index of the first most popular trip_id, and then use this index to access the value of the trip_id column. Here is an example of how you can do this: import pandas as pd # Create the dataframe data = [[1000, 1, 1], [1000, 1, 1], [1000, 1, 1], [1000, 1, 2], [1000, 1, 2], [1000, 1, 2], [2000, 0, 1], [2000, 0, 1], [2000, 1, 2], [2000, 0, 2], [2000, 1, 2]] df = pd.DataFrame(data, columns=['route_id', 'direction_id', 'trip_id']) # Group the dataframe by route_id and direction_id t_groups = df.groupby(['route_id','direction_id']) # Get the index of the first most popular trip_id for each group idx = t_groups['trip_id'].apply(lambda x: x.value_counts().index[0]) # Access the value of the trip_id column at the index for each group trip_ids = t_groups['trip_id'].apply(lambda x: x.loc[idx]) # Print the values of the trip_id column for each group print(trip_ids) A: This is what you are looking for. df = df.groupby(['route_id', 'direction_id']).first().reset_index() The reset_index() just moves your indices into columns looking exactly like the output you want.
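A compact variant of the first answer above that returns one tidy frame per (route_id, direction_id) group; I use mode() rather than value_counts() so ties (such as route 1000 in this sample) resolve deterministically. The sample rows are the ones from the question:
import pandas as pd

data = [[1000, 1, 1], [1000, 1, 1], [1000, 1, 1], [1000, 1, 2], [1000, 1, 2], [1000, 1, 2],
        [2000, 0, 1], [2000, 0, 1], [2000, 1, 2], [2000, 0, 2], [2000, 1, 2]]
df = pd.DataFrame(data, columns=["route_id", "direction_id", "trip_id"])

# mode() lists the most frequent values in sorted order, so .iat[0] gives a
# deterministic winner even when two trip_ids are equally common.
most_popular = (
    df.groupby(["route_id", "direction_id"])["trip_id"]
      .agg(lambda s: s.mode().iat[0])
      .reset_index()
)
print(most_popular)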
How to select the first representative element for each group of a DataFrameGroupBy object?
I am having the following dataframe data = [ [1000, 1, 1], [1000, 1, 1], [1000, 1, 1], [1000, 1, 2], [1000, 1, 2], [1000, 1, 2], [2000, 0, 1], [2000, 0, 1], [2000, 1, 2], [2000, 0, 2], [2000, 1, 2]] df = pd.DataFrame(data, columns=['route_id', 'direction_id', 'trip_id']) Then, I group my df based on the columns route_id, direction_id by using: t_groups = df.groupby(['route_id','direction_id']) I would like to store the value of the trip_id column based on the first most popular trip_id of each unique route_id, direction_id combination. Ι have tried to apply a function value_counts() but I cannot get the first popular trip_id value. I would like my expected output to be like: route_id direction_id trip_id 0 1000 1 1 1 2000 0 1 2 2000 1 2 Any suggestions?
[ "To store the value of the trip_id column based on the first most popular trip_id of each unique route_id, direction_id combination, you can use the idxmax method on the groupby object to get the index of the first most popular trip_id, and then use this index to access the value of the trip_id column.\nHere is an example of how you can do this:\nimport pandas as pd\n\n# Create the dataframe\ndata = [[1000, 1, 1], [1000, 1, 1], [1000, 1, 1], [1000, 1, 2], [1000, 1, 2], [1000, 1, 2], [2000, 0, 1], [2000, 0, 1], [2000, 1, 2], [2000, 0, 2], [2000, 1, 2]]\ndf = pd.DataFrame(data, columns=['route_id', 'direction_id', 'trip_id'])\n\n# Group the dataframe by route_id and direction_id\nt_groups = df.groupby(['route_id','direction_id'])\n\n# Get the index of the first most popular trip_id for each group\nidx = t_groups['trip_id'].apply(lambda x: x.value_counts().index[0])\n\n# Access the value of the trip_id column at the index for each group\ntrip_ids = t_groups['trip_id'].apply(lambda x: x.loc[idx])\n\n# Print the values of the trip_id column for each group\nprint(trip_ids)\n\n", "This is what you are looking for.\ndf = df.groupby(['route_id', 'direction_id']).first().reset_index()\n\nThe reset_index() just moves your indices into columns looking exactly like the output you want.\n" ]
[ 0, 0 ]
[]
[]
[ "group_by", "pandas", "python" ]
stackoverflow_0074659038_group_by_pandas_python.txt
Q: How to get a continuously changing user input dependent output in the same place at command line I am designing a string based game where the real time positions of characters are represented as a string as follows: -----A-----o----- I am changing the position of the character "A" based upon user keyboard inputs eg: updated position: --------A--o----- I don't want to print the string line by line as It gets updated instead I want to modify it every time in the same place when being output in command line, as the constraint I am working with is : The entire game map should run on a single line on the command line - changing the state of the game should not spawn a new line every time. A: You could use '\r' in some way. Carriage return is a control character or mechanism used to reset a device's position to the beginning of a line of text. e.g. import time for i in range(100): time.sleep(1) print(i, end="\r"); Example including controls import time import keyboard # params size = 20 position = 0 # game loop while True: # draw print("", end="\r") for i in range(size): print("-" if (position != i) else "o", end="") # move if keyboard.is_pressed("left"): position = position - 1 if position < 0: position = 0 elif keyboard.is_pressed("right"): position = position + 1 if position >= size: position = size # delay time.sleep(0.1) A: when printing the output use the end attribute for print statements and set it as "\r" to get back the output position to starting of the line.
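A stripped-down sketch of the single-line redraw the first answer describes, without the keyboard dependency; the 'A' walker and 'o' goal mirror the strings in the question, while the map width and timing are assumptions:
import time

size = 18
goal = 11

for position in range(size):
    row = "".join(
        "A" if i == position else ("o" if i == goal else "-")
        for i in range(size)
    )
    print(row, end="\r", flush=True)   # carriage return: redraw on the same line
    time.sleep(0.2)
print()                                # move to a fresh line when the loop ends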
How to get a continuously changing user input dependent output in the same place at command line
I am designing a string based game where the real time positions of characters are represented as a string as follows: -----A-----o----- I am changing the position of the character "A" based upon user keyboard inputs eg: updated position: --------A--o----- I don't want to print the string line by line as It gets updated instead I want to modify it every time in the same place when being output in command line, as the constraint I am working with is : The entire game map should run on a single line on the command line - changing the state of the game should not spawn a new line every time.
[ "You could use '\\r' in some way. Carriage return is a control character or mechanism used to reset a device's position to the beginning of a line of text. e.g.\nimport time\n\nfor i in range(100):\n time.sleep(1)\n print(i, end=\"\\r\");\n\nExample including controls\nimport time\nimport keyboard\n\n# params\nsize = 20\nposition = 0\n\n# game loop\nwhile True:\n # draw\n print(\"\", end=\"\\r\")\n for i in range(size):\n print(\"-\" if (position != i) else \"o\", end=\"\")\n # move\n if keyboard.is_pressed(\"left\"): \n position = position - 1\n if position < 0:\n position = 0\n elif keyboard.is_pressed(\"right\"): \n position = position + 1\n if position >= size:\n position = size\n # delay \n time.sleep(0.1)\n \n\n", "when printing the output use the end attribute for print statements and set it as \"\\r\" to get back the output position to starting of the line.\n" ]
[ 0, 0 ]
[]
[]
[ "input", "output", "python", "string" ]
stackoverflow_0074655251_input_output_python_string.txt
Q: How to operate times in python I am very new to using Python and now I need to add times in minutes. The data that the computer gives me looks like 10:23:12, the first part being the hours, then the minutes and finally the seconds. What I want is a cumulative time in minutes, so that cell 1 is added to cell 2, cell 2 to cell 3, and so on. In Excel I have this data: Column A Column B 12/1/2022 3:51:52 12/1/2022 3:53:31 12/1/2022 3:55:11 and I want to sum each cell. I am using pandas to manipulate the data. I am expecting to obtain this: Column A Column B Column C Column D 12/1/2022 3:51:52 51.87 0 12/1/2022 3:53:31 53.52 1.65 12/1/2022 3:55:11 55.18 3.32 A: You don't have to directly operate with the time. Just transform the string and do the calculations you need, for example, like this: n = '3:51:52' # As an example. Substitute with your actual strings. hours, minutes, seconds = map(int, n.split(':')) # Effectively, the above line does something akin to this: # hours, minutes, seconds = n.split(':') # hours = int(hours) # minutes = int(minutes) # seconds = int(seconds) time_in_seconds = hours * 3600 + minutes * 60 + seconds time_in_minutes = hours * 60 + minutes + seconds / 60
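If the data is already in a pandas column, here is a sketch that reproduces the expected Column C and Column D from the question with timedeltas; I am inferring from the expected output that Column C is minutes plus seconds within the hour and Column D is elapsed minutes since the first row:
import pandas as pd

df = pd.DataFrame({"Column B": ["3:51:52", "3:53:31", "3:55:11"]})
t = pd.to_timedelta(df["Column B"])

# Minutes and seconds within the hour, e.g. 3:51:52 -> 51.87
df["Column C"] = (t.dt.components.minutes + t.dt.components.seconds / 60).round(2)
# Elapsed minutes since the first timestamp: 0.0, 1.65, 3.32
df["Column D"] = ((t - t.iloc[0]).dt.total_seconds() / 60).round(2)
print(df)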
How to operate times in python
I am very new to using Python and now I need to add times in minutes. The data that the computer gives me looks like 10:23:12, the first part being the hours, then the minutes and finally the seconds. What I want is a cumulative time in minutes, so that cell 1 is added to cell 2, cell 2 to cell 3, and so on. In Excel I have this data: Column A Column B 12/1/2022 3:51:52 12/1/2022 3:53:31 12/1/2022 3:55:11 and I want to sum each cell. I am using pandas to manipulate the data. I am expecting to obtain this: Column A Column B Column C Column D 12/1/2022 3:51:52 51.87 0 12/1/2022 3:53:31 53.52 1.65 12/1/2022 3:55:11 55.18 3.32
[ "You don't have to directly operate with the time. Just transform the string and do the calculations you need, for example, like this:\nn = '3:51:52' # As an example. Substitute with your actual strings.\nhours, minutes, seconds = map(int, n.split(':'))\n# Effectively, the above line does something akin to this: \n# hours, minutes, seconds = n.split(':')\n# hours = int(hours)\n# minutes = int(minutes)\n# seconds = int(seconds)\n\ntime_in_seconds = hours * 360 + minutes * 60 + seconds\ntime_in_minutes = hours * 60 + minutes + seconds / 60\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python", "time" ]
stackoverflow_0074659228_pandas_python_time.txt
Q: Geopandas: not able to change the crs of a geopandas object I am trying to set the crs of a geopandas object as described here. The example file can be downloaded from here import geopandas as gdp df = pd.read_pickle('myShp.pickle') I upload the screenshot to show the values of the coordinates then if I try to change the crs the values of the polygon don't change tmp = gpd.GeoDataFrame(df, geometry='geometry') tmp.crs = {'init' :'epsg:32618'} I show again the screenshot If I try: import geopandas as gdp df = pd.read_pickle('myShp.pickle') df = gpd.GeoDataFrame(df, geometry='geometry') dfNew=df.to_crs(epsg=32618) I get: ValueError: Cannot transform naive geometries. Please set a crs on the object first. A: Setting the crs like: gdf.crs = {'init' :'epsg:32618'} does not transform your data, it only sets the CRS (it basically says: "my data is represented in this CRS"). In most cases, the CRS is already set while reading the data with geopandas.read_file (if your file has CRS information). So you only need the above when your data has no CRS information yet. If you actually want to convert the coordinates to a different CRS, you can use the to_crs method: gdf_new = gdf.to_crs(epsg=32618) See https://geopandas.readthedocs.io/en/latest/projections.html A: super late, but it's: tmp.set_crs(...)
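A short sketch of the set-then-transform flow the first answer describes, on a toy GeoDataFrame; the lon/lat point and the EPSG:4326 assumption for the raw data are mine, only the EPSG:32618 target comes from the question (set_crs/to_crs need geopandas 0.7 or newer):
import geopandas as gpd
from shapely.geometry import Point

gdf = gpd.GeoDataFrame({"name": ["a"]}, geometry=[Point(-75.0, 40.0)])

# Declare the CRS the coordinates are already in (no coordinate values change here).
if gdf.crs is None:
    gdf = gdf.set_crs(epsg=4326)

# Reproject: this is the step that actually transforms the coordinate values.
gdf_utm = gdf.to_crs(epsg=32618)
print(gdf_utm.crs)
print(gdf_utm.geometry.iloc[0])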
Geopandas: not able to change the crs of a geopandas object
I am trying to set the crs of a geopandas object as described here. The example file can be downloaded from here import geopandas as gdp df = pd.read_pickle('myShp.pickle') I upload the screenshot to show the values of the coordinates then if I try to change the crs the values of the polygon don't change tmp = gpd.GeoDataFrame(df, geometry='geometry') tmp.crs = {'init' :'epsg:32618'} I show again the screenshot If I try: import geopandas as gdp df = pd.read_pickle('myShp.pickle') df = gpd.GeoDataFrame(df, geometry='geometry') dfNew=df.to_crs(epsg=32618) I get: ValueError: Cannot transform naive geometries. Please set a crs on the object first.
[ "Setting the crs like:\ngdf.crs = {'init' :'epsg:32618'}\n\ndoes not transform your data, it only sets the CRS (it basically says: \"my data is represented in this CRS\"). In most cases, the CRS is already set while reading the data with geopandas.read_file (if your file has CRS information). So you only need the above when your data has no CRS information yet.\nIf you actually want to convert the coordinates to a different CRS, you can use the to_crs method:\ngdf_new = gdf.to_crs(epsg=32618)\n\nSee https://geopandas.readthedocs.io/en/latest/projections.html\n", "super late, but it's:\ntmp.set_crs(...)\n\n" ]
[ 4, 0 ]
[]
[]
[ "geopandas", "python" ]
stackoverflow_0056274566_geopandas_python.txt
Q: Can a numpy array be printed, if it is tied to an instance of a class? I am trying to create a Map(n) object, which is an n*n 2D array with random [0,1]s, and print it out. How do I access the create_grid return value of my map1 instance, to be able to print it out? Please be gentle, I am self-learning python, and wish to expand my knowledge. I wish to create something like this: map1 = Map(5) map1.print_grid() -> [[0,1,1,1,0] [0,0,1,1,0] [0,0,0,1,0] [0,0,0,0,0] [0,1,0,1,0]] For now it looks like this: class Map: def __init__(self, n): self.n = n self.create_grid(n) def create_grid(self, n): arr = np.random.randint(0, 2, size=(n, n)) It is needed to be converted to string, for further calculations in my program res = arr.astype(str) return res THIS IS MY PROBLEM: def print_grid(self): for i in range(5): for j in range(5): print(res[i][j]) print() A: Maybe you could just make a string representation of your class that uses numpy.np.array2string() import numpy as np class Map: def __init__(self, n): self.n = n self.create_grid(n) def create_grid(self, n): self.arr = np.random.randint(0, 2, size=(n, n)) def __str__(self): return np.array2string(self.arr) print(Map(5)) This will print something like: [[1 1 1 0 1] [1 0 1 0 1] [1 0 0 1 1] [0 1 1 0 0] [1 0 0 0 1]] Of course, you can wrap this in another method if you like: ... def print_grid(self): print(self) ... Map(5).print_grid()
Can a numpy array be printed, if it is tied to an instance of a class?
I am trying to create a Map(n) object, which is an n*n 2D array with random [0,1]s, and print it out. How do I access the create_grid return value of my map1 instance, to be able to print it out? Please be gentle, I am self-learning python, and wish to expand my knowledge. I wish to create something like this: map1 = Map(5) map1.print_grid() -> [[0,1,1,1,0] [0,0,1,1,0] [0,0,0,1,0] [0,0,0,0,0] [0,1,0,1,0]] For now it looks like this: class Map: def __init__(self, n): self.n = n self.create_grid(n) def create_grid(self, n): arr = np.random.randint(0, 2, size=(n, n)) It is needed to be converted to string, for further calculations in my program res = arr.astype(str) return res THIS IS MY PROBLEM: def print_grid(self): for i in range(5): for j in range(5): print(res[i][j]) print()
[ "Maybe you could just make a string representation of your class that uses numpy.np.array2string()\nimport numpy as np\n\nclass Map:\n def __init__(self, n):\n self.n = n\n self.create_grid(n)\n\n def create_grid(self, n):\n self.arr = np.random.randint(0, 2, size=(n, n))\n \n def __str__(self):\n return np.array2string(self.arr)\n\nprint(Map(5))\n\nThis will print something like:\n[[1 1 1 0 1]\n [1 0 1 0 1]\n [1 0 0 1 1]\n [0 1 1 0 0]\n [1 0 0 0 1]]\n\nOf course, you can wrap this in another method if you like:\n ...\n def print_grid(self):\n print(self)\n ...\nMap(5).print_grid()\n\n" ]
[ 1 ]
[]
[]
[ "2d", "arrays", "list", "numpy", "python" ]
stackoverflow_0074659341_2d_arrays_list_numpy_python.txt
Q: Upgrade or migrate only single schema using Flask-Migrate(Alembic) I have a multi-schema DB structure.I am using Flask-Migrate, Flask-Script and alembic to manage migrations.Is there a way to upgrade and perform migrations for only one single schema? Thank you A: You have to filter the imported object to select only ones contained in the wanted schema with: def include_name(name, type_, parent_names): if type_ == "schema": return name == SCHEMA_WANTED return True and then: context.configure( connection=connection, target_metadata=get_metadata(), process_revision_directives=process_revision_directives, include_name=include_name, **current_app.extensions['migrate'].configure_args, ) More info: https://alembic.sqlalchemy.org/en/latest/api/runtime.html#alembic.runtime.environment.EnvironmentContext.configure.params.include_name Personally I don't like to change the alembic env.py so I give those parameters as the Flask-Migrate initialization : alembic_ctx_kwargs = { 'include_name': include_name, 'include_schemas': True, 'version_table_schema': SCHEMA_WANTED, } Migrate(app, db, **alembic_ctx_kwargs)
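For completeness, a hedged fragment showing the older include_object hook as an alternative filter; it is meant to live in the migrations/env.py that Flask-Migrate generates, where context, connection and target_metadata already exist, and SCHEMA_WANTED is a placeholder for your schema name:
SCHEMA_WANTED = "tenant_a"          # placeholder, not from the original question

def include_object(obj, name, type_, reflected, compare_to):
    # Only autogenerate operations for tables that live in the wanted schema.
    if type_ == "table":
        return obj.schema == SCHEMA_WANTED
    return True

context.configure(
    connection=connection,
    target_metadata=target_metadata,
    include_schemas=True,
    include_object=include_object,
    version_table_schema=SCHEMA_WANTED,
)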
Upgrade or migrate only single schema using Flask-Migrate(Alembic)
I have a multi-schema DB structure.I am using Flask-Migrate, Flask-Script and alembic to manage migrations.Is there a way to upgrade and perform migrations for only one single schema? Thank you
[ "You have to filter the imported object to select only ones contained in the wanted schema with:\ndef include_name(name, type_, parent_names):\n if type_ == \"schema\":\n return name == SCHEMA_WANTED\n return True\n\nand then:\ncontext.configure(\n connection=connection,\n target_metadata=get_metadata(),\n process_revision_directives=process_revision_directives,\n include_name=include_name,\n **current_app.extensions['migrate'].configure_args,\n)\n\nMore info: https://alembic.sqlalchemy.org/en/latest/api/runtime.html#alembic.runtime.environment.EnvironmentContext.configure.params.include_name\nPersonally I don't like to change the alembic env.py so I give those parameters as the Flask-Migrate initialization :\nalembic_ctx_kwargs = {\n 'include_name': include_name,\n 'include_schemas': True,\n 'version_table_schema': SCHEMA_WANTED,\n}\nMigrate(app, db, **alembic_ctx_kwargs)\n\n" ]
[ 0 ]
[]
[]
[ "alembic", "flask_migrate", "flask_script", "python", "sqlalchemy" ]
stackoverflow_0073899162_alembic_flask_migrate_flask_script_python_sqlalchemy.txt
Q: How to change depth of spectrogram to 3 I have files that I get after downsampling, and after I create spectrograms from them I get arrays of the shape shown in [screenshot]. After converting them into spectrograms with the help of the code in [screenshot], I get the shape in [screenshot], and after adding one extra dimension with the reshape function I get the shape in [screenshot]. But the thing is, I need to be able to pass this as input into a CNN, which expects a final dimension of 3. I tried using np.concatenate along axis=2, but that didn't help. A: Since a depth of 3 was needed, I simply concatenated the same image 3 times with the help of the np.concatenate function and it worked.
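A minimal numpy sketch of that "repeat the channel three times" fix; the 128 x 431 spectrogram shape here is made up, since the real shapes were only visible in the screenshots:
import numpy as np

spec = np.random.rand(128, 431)              # single-channel spectrogram (freq x time)
spec = spec[..., np.newaxis]                 # shape (128, 431, 1)

# Duplicate the channel so a CNN expecting 3 input channels accepts it.
spec_rgb = np.concatenate([spec, spec, spec], axis=-1)
# equivalently: spec_rgb = np.repeat(spec, 3, axis=-1)
print(spec_rgb.shape)                        # (128, 431, 3)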
How to change depth of spectrogram to 3
I have files that I get after downsampling, and after I create spectrograms from them I get arrays of the shape shown in [screenshot]. After converting them into spectrograms with the help of the code in [screenshot], I get the shape in [screenshot], and after adding one extra dimension with the reshape function I get the shape in [screenshot]. But the thing is, I need to be able to pass this as input into a CNN, which expects a final dimension of 3. I tried using np.concatenate along axis=2, but that didn't help.
[ "This time since it was needed to make a depth of 3. I simply concatenated same image 3 times with the help of np.concatenate function and it worked\n" ]
[ 0 ]
[]
[]
[ "audio_processing", "deep_learning", "python", "signal_processing" ]
stackoverflow_0074499607_audio_processing_deep_learning_python_signal_processing.txt
Q: How can I explode a nested dictionary into a dataframe? I have a nested dictionary as below. I'm trying to convert the below to a dataframe with the columns iid, Invnum, @type, execId, CId, AId, df, type. What’s the best way to go about it? data = {'A': {'B1': {'iid': 'B1', 'Invnum': {'B11': {'@type': '/test_data', 'execId': 42, 'CId': 42, 'AId': 'BAZ'}, 'B12': {'@type': '/test_data', 'CId': 8, 'AId': '123'}}}}, 'B2': {'iid': 'B2', 'Invnum': {'B21': {'@type': '/test_data', 'execId': 215, 'CId': 253,'df': [], 'type': 'F'}, 'B22': {'@type': '/test_data', 'execId': 10,'df': [], 'type': 'F'}}}} for key1 in data['A'].keys(): for key2 in data['A'][key1]['Invnum']: print(key1,key2) Expected output: A: As indicated in the comments, your input data is very obscure. This provides a lot of trouble for us, because we don't know what we can assume or not. For my solution I will assume at least the following, based on the example you provide: In the dictionary there is an entry containing the iid and Invnum as keys in the same level. The Invnum key is the only key, which has multiple values, or in otherwords is iterable (besides df), and on iteration it must hold the last dictionary. In otherwords, after the Invnum value (e.g. B11), you can only get the last dict with the other fields as keys (@type, execId, CId, AId, df, type), if they exists. If there is a df value, it will hold a list. # This is a place holder dictionary, so we can create entries that have the same pattern. entry = {'@type': '', 'execId': '', 'CId': '', 'AId': '', 'df': '', 'type': ''} # This will hold all the (properly) format entries for the df. items = [] def flatten(data): if isinstance(data, dict): match data: # We are searching for a level that contains both an `iid` and `Invnum` key. case {'iid': id, 'Invnum': remainder}: for each in remainder: entry_row = dict(**entry, iid=id, Invnum=each) entry_row.update(remainder[each]) items.append(entry_row) case _: for key, value in data.items(): flatten(value) # We flatten the data, such that the `items` variable will hold consistent entries flatten(data) # Transfer to pandas dataframe, and reorder the values for easy comparison. df = pd.DataFrame(items) df = df[['iid', 'Invnum', '@type', 'execId', 'CId', 'AId', 'df', 'type']] print(df.to_string(index=False)) Output: iid Invnum @type execId CId AId df type B1 B11 /test_data 42 42 BAZ B1 B12 /test_data 8 123 B2 B21 /test_data 215 253 [] F B2 B22 /test_data 10 [] F Note: All entries have been turned into strings, since I am using '' for empty values. I heavily rely on the above made assumptions, in case they are incorrect, the answer will not match your expectation. I am using Structural pattern matching, which is introduced in python 3.10.
How can I explode a nested dictionary into a dataframe?
I have a nested dictionary as below. I'm trying to convert the below to a dataframe with the columns iid, Invnum, @type, execId, CId, AId, df, type. What’s the best way to go about it? data = {'A': {'B1': {'iid': 'B1', 'Invnum': {'B11': {'@type': '/test_data', 'execId': 42, 'CId': 42, 'AId': 'BAZ'}, 'B12': {'@type': '/test_data', 'CId': 8, 'AId': '123'}}}}, 'B2': {'iid': 'B2', 'Invnum': {'B21': {'@type': '/test_data', 'execId': 215, 'CId': 253,'df': [], 'type': 'F'}, 'B22': {'@type': '/test_data', 'execId': 10,'df': [], 'type': 'F'}}}} for key1 in data['A'].keys(): for key2 in data['A'][key1]['Invnum']: print(key1,key2) Expected output:
[ "As indicated in the comments, your input data is very obscure. This provides a lot of trouble for us, because we don't know what we can assume or not. For my solution I will assume at least the following, based on the example you provide:\n\nIn the dictionary there is an entry containing the iid and Invnum as keys in the same level.\nThe Invnum key is the only key, which has multiple values, or in otherwords is iterable (besides df), and on iteration it must hold the last dictionary. In otherwords, after the Invnum value (e.g. B11), you can only get the last dict with the other fields as keys (@type, execId, CId, AId, df, type), if they exists.\nIf there is a df value, it will hold a list.\n\n# This is a place holder dictionary, so we can create entries that have the same pattern.\nentry = {'@type': '', 'execId': '', 'CId': '', 'AId': '', 'df': '', 'type': ''}\n\n# This will hold all the (properly) format entries for the df.\nitems = []\n\ndef flatten(data):\n if isinstance(data, dict):\n match data:\n # We are searching for a level that contains both an `iid` and `Invnum` key.\n case {'iid': id, 'Invnum': remainder}:\n for each in remainder:\n entry_row = dict(**entry, iid=id, Invnum=each)\n entry_row.update(remainder[each])\n items.append(entry_row)\n case _:\n for key, value in data.items():\n flatten(value)\n\n# We flatten the data, such that the `items` variable will hold consistent entries\nflatten(data)\n\n# Transfer to pandas dataframe, and reorder the values for easy comparison.\ndf = pd.DataFrame(items)\ndf = df[['iid', 'Invnum', '@type', 'execId', 'CId', 'AId', 'df', 'type']]\nprint(df.to_string(index=False))\n\nOutput:\niid Invnum @type execId CId AId df type\n B1 B11 /test_data 42 42 BAZ \n B1 B12 /test_data 8 123 \n B2 B21 /test_data 215 253 [] F\n B2 B22 /test_data 10 [] F\n\nNote:\n\nAll entries have been turned into strings, since I am using '' for empty values.\nI heavily rely on the above made assumptions, in case they are incorrect, the answer will not match your expectation.\nI am using Structural pattern matching, which is introduced in python 3.10.\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "dictionary", "pandas", "python", "python_3.x" ]
stackoverflow_0074647213_dataframe_dictionary_pandas_python_python_3.x.txt
Q: Combining series to create a dataframe I have two series of stock prices (containing date, ticker, open, high, low, close) and I'd like to know how to combine them to create a dataframe, just like the way Yahoo! Finance does. Is it possible? "join" and "merge" don't seem to work. A: Use pd.concat([sr1, sr2], axis=1) if neither join nor merge works.
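A tiny sketch of the pd.concat answer with two made-up price series aligned on a date index; the tickers and numbers are illustrative only. If each series instead holds one full record rather than aligned price columns, pd.concat([sr1, sr2], axis=0) stacks them into a long table instead:
import pandas as pd

idx = pd.to_datetime(["2022-12-01", "2022-12-02"])
aapl = pd.Series([150.1, 151.3], index=idx, name="AAPL")
msft = pd.Series([250.4, 252.0], index=idx, name="MSFT")

prices = pd.concat([aapl, msft], axis=1)   # one column per Series, aligned on the index
print(prices)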
Combining series to create a dataframe
I have two series of stock prices (containing date, ticker, open, high, low, close) and I'd like to know how to combine them to create a dataframe, just like the way Yahoo! Finance does. Is it possible? "join" and "merge" don't seem to work.
[ "Use pd.concat([sr1, sr2], axis=1) if neither one of join and merge work.\n" ]
[ 3 ]
[]
[]
[ "dataframe", "pandas", "python", "yahoo_api", "yahoo_finance" ]
stackoverflow_0074659398_dataframe_pandas_python_yahoo_api_yahoo_finance.txt
Q: How to print a specific part of an exception error I am trying to handle an exception from an API I am using and would like to send a message to the user with a specific part of the error That is being sent. How would I separate it? The result of printing the exception looks like this: NoneFull details: [{'code': 10010, 'detail': 'Originating number listed in do-not-originate registry D46', 'title': None}] I am trying to print only the 'detail' : part of the exception. A: This should work because it seems that e is a list[dict] in your case: try: pass # Your code... except api.error.PermissionError as e: print(e.args[0][0]['detail']) If you added manually the [] then maybe you will have to remove one of the [0].
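A defensive sketch of the same idea that does not assume the exact nesting depth; RuntimeError here merely stands in for the API's real exception class, and how the payload is attached to the exception depends on that library:
def first_detail(exc):
    # Best-effort extraction of the 'detail' field from an API error payload.
    for arg in getattr(exc, "args", ()):
        if isinstance(arg, list):
            for item in arg:
                if isinstance(item, dict) and "detail" in item:
                    return item["detail"]
    return str(exc)

try:
    raise RuntimeError([{"code": 10010,
                         "detail": "Originating number listed in do-not-originate registry D46",
                         "title": None}])
except RuntimeError as exc:
    print(first_detail(exc))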
How to print a specific part of an exception error
I am trying to handle an exception from an API I am using and would like to send a message to the user with a specific part of the error That is being sent. How would I separate it? The result of printing the exception looks like this: NoneFull details: [{'code': 10010, 'detail': 'Originating number listed in do-not-originate registry D46', 'title': None}] I am trying to print only the 'detail' : part of the exception.
[ "This should work because it seems that e is a list[dict] in your case:\ntry:\n pass # Your code...\nexcept api.error.PermissionError as e:\n print(e.args[0][0]['detail'])\n\nIf you added manually the [] then maybe you will have to remove one of the [0].\n" ]
[ 0 ]
[]
[]
[ "exception", "python" ]
stackoverflow_0074658914_exception_python.txt
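Since the API's own error class isn't available here, a hedged stand-in shows the same indexing on any exception whose first argument is a list of dicts:
class FakeApiError(Exception):  # stand-in for the real API error class
    pass

try:
    raise FakeApiError([{"code": 10010, "detail": "Originating number listed in do-not-originate registry", "title": None}])
except FakeApiError as e:
    details = e.args[0]           # the list the exception was raised with
    print(details[0]["detail"])   # prints only the 'detail' text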
Q: RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED using pytorch I am trying to run a simple pytorch sample code. It's works fine using CPU. But when using GPU, i get this error message: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 263, in forward return self._conv_forward(input, self.weight, self.bias) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 260, in _conv_forward self.padding, self.dilation, self.groups) RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED The code i am trying to run is the following: import torch from torch import nn m = nn.Conv1d(16, 33, 3, stride=2) m=m.to('cuda') input = torch.randn(20, 16, 50) input=input.to('cuda') output = m(input) I am running this code in a NVIDIA docker with CUDA version 10.2 and my GPU is a RTX 2070 A: There is some discussion regarding this here. I had the same issue but using cuda 11.1 resolved it for me. This is the exact pip command pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html A: In my case it actually had nothing do with the PyTorch/CUDA/cuDNN version. PyTorch initializes cuDNN lazily whenever a convolution is executed for the first time. However, in my case there was not enough GPU memory left to initialize cuDNN because PyTorch itself already held the entire memory in its internal cache. One can release the cache manually with "torch.cuda.empty_cache()" right before the first convolution that is executed. A cleaner solution is to force cuDNN initialization at the beginning by doing a mock convolution: def force_cudnn_initialization(): s = 32 dev = torch.device('cuda') torch.nn.functional.conv2d(torch.zeros(s, s, s, s, device=dev), torch.zeros(s, s, s, s, device=dev)) Calling the above function at the very beginning of the program solved the problem for me. A: I am also using Cuda 10.2. I had the exact same error when upgrading torch and torchvision to the latest version (torch-1.8.0 and torchvision-0.9.0). Which version are you using? I guess this is not the best solution but by downgrading to torch-1.7.1 and torchvision-0.8.2 it works just fine. A: In my cases this error occurred when trying to estimate loss. I used a mixed bce-dice loss. It turned out that my output was linear instead of sigmoid. I then used the sigmoid predictions as of bellow and worked fine. output = torch.nn.Sigmoid()(output) loss = criterion1(output, target) A: I had the same issue when I was training yolov7 with a chess dataset. By reducing batch size from 8 to 4, the issue was solved. A: In my case, I had an array indexing operation but the index was out of bounds. CUDA did not tell me that. I was using inference on a neural network. So I moved to CPU instead of the GPU. The logs were much more informative after that. For debugging if you see this error, switch to CPU first and you will know what to do.
RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED using pytorch
I am trying to run a simple pytorch sample code. It's works fine using CPU. But when using GPU, i get this error message: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 263, in forward return self._conv_forward(input, self.weight, self.bias) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 260, in _conv_forward self.padding, self.dilation, self.groups) RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED The code i am trying to run is the following: import torch from torch import nn m = nn.Conv1d(16, 33, 3, stride=2) m=m.to('cuda') input = torch.randn(20, 16, 50) input=input.to('cuda') output = m(input) I am running this code in a NVIDIA docker with CUDA version 10.2 and my GPU is a RTX 2070
[ "There is some discussion regarding this here. I had the same issue but using cuda 11.1 resolved it for me.\nThis is the exact pip command\npip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html\n\n", "In my case it actually had nothing do with the PyTorch/CUDA/cuDNN version. PyTorch initializes cuDNN lazily whenever a convolution is executed for the first time. However, in my case there was not enough GPU memory left to initialize cuDNN because PyTorch itself already held the entire memory in its internal cache. One can release the cache manually with \"torch.cuda.empty_cache()\" right before the first convolution that is executed. A cleaner solution is to force cuDNN initialization at the beginning by doing a mock convolution:\ndef force_cudnn_initialization():\n s = 32\n dev = torch.device('cuda')\n torch.nn.functional.conv2d(torch.zeros(s, s, s, s, device=dev), torch.zeros(s, s, s, s, device=dev))\n\nCalling the above function at the very beginning of the program solved the problem for me.\n", "I am also using Cuda 10.2. I had the exact same error when upgrading torch and torchvision to the latest version (torch-1.8.0 and torchvision-0.9.0). Which version are you using?\nI guess this is not the best solution but by downgrading to torch-1.7.1 and torchvision-0.8.2 it works just fine.\n", "In my cases this error occurred when trying to estimate loss.\nI used a mixed bce-dice loss.\nIt turned out that my output was linear instead of sigmoid.\nI then used the sigmoid predictions as of bellow and worked fine.\noutput = torch.nn.Sigmoid()(output)\nloss = criterion1(output, target)\n\n", "I had the same issue when I was training yolov7 with a chess dataset. By reducing batch size from 8 to 4, the issue was solved.\n", "In my case, I had an array indexing operation but the index was out of bounds. CUDA did not tell me that. I was using inference on a neural network. So I moved to CPU instead of the GPU. The logs were much more informative after that. For debugging if you see this error, switch to CPU first and you will know what to do.\n" ]
[ 19, 12, 7, 1, 1, 0 ]
[]
[]
[ "gpu", "python", "pytorch" ]
stackoverflow_0066588715_gpu_python_pytorch.txt
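A small sketch that combines two of the workarounds mentioned above, freeing PyTorch's cached memory and forcing cuDNN to initialise with a throwaway convolution; it assumes a CUDA-enabled PyTorch build and simply does nothing on CPU-only machines:
import torch

def warm_up_cudnn():
    if not torch.cuda.is_available():
        return
    torch.cuda.empty_cache()  # release cached blocks so cuDNN has room to initialise
    x = torch.zeros(1, 16, 50, device="cuda")
    w = torch.zeros(33, 16, 3, device="cuda")
    torch.nn.functional.conv1d(x, w)  # the first convolution triggers cuDNN initialisation

warm_up_cudnn()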
Q: Replace values from second row onwards in a pandas pipe method I am wondering how to replace values from second row onwards in a pipe method (connecting to the rest of steps). import pandas as pd import numpy as np df = pd.DataFrame( { "Date": ["2020-01-01", "2021-01-01", "2022-01-01"], "Pop": [90, 70, 60], } ) Date Pop 0 2020-01-01 90 1 2021-01-01 70 2 2022-01-01 60 Current solution df.iloc[1:] = np.nan Expected output Date Pop 0 2020-01-01 90 1 2021-01-01 NaN 2 2022-01-01 NaN A: You can also use assign like this: df.assign(Pop=df.loc[[0], 'Pop']) Output: Date Pop 0 2020-01-01 90.0 1 2021-01-01 NaN 2 2022-01-01 NaN Note: assign works with nice column headers, if your headers have spaces or special characters you will need to use a different method. A: To replace the values from the second row onwards in your DataFrame using the pandas method pipe, you can do the following: import pandas as pd import numpy as np df = pd.DataFrame( { "Date": ["2020-01-01", "2021-01-01", "2022-01-01"], "Pop": [90, 70, 60], } ) # Use the `pipe` method to apply a function to your DataFrame df = df.pipe(lambda x: x.iloc[1:]).replace(np.nan) print(df) This will replace the values from the second row onwards in your DataFrame with NaN. The resulting DataFrame will look like this: Date Pop 0 2021-01-01 NaN 1 2022-01-01 NaN Note that this will modify your original DataFrame in place, so if you want to keep the original DataFrame, you should make a copy of it before applying the pipe method. You can do this by using the copy method, like this: import pandas as pd import numpy as np df = pd.DataFrame( { "Date": ["2020-01-01", "2021-01-01", "2022-01-01"], "Pop": [90, 70, 60], } ) # Make a copy of the original DataFrame df_copy = df.copy() # Use the `pipe` method to apply a function to your DataFrame df_copy = df_copy.pipe(lambda x: x.iloc[1:]).replace(np.nan) print(df_copy) This will create a copy of your DataFrame and then modify the copy, so the original DataFrame will remain unchanged. The resulting DataFrame will look the same as in the previous example
Replace values from second row onwards in a pandas pipe method
I am wondering how to replace values from second row onwards in a pipe method (connecting to the rest of steps). import pandas as pd import numpy as np df = pd.DataFrame( { "Date": ["2020-01-01", "2021-01-01", "2022-01-01"], "Pop": [90, 70, 60], } ) Date Pop 0 2020-01-01 90 1 2021-01-01 70 2 2022-01-01 60 Current solution df.iloc[1:] = np.nan Expected output Date Pop 0 2020-01-01 90 1 2021-01-01 NaN 2 2022-01-01 NaN
[ "You can also use assign like this:\ndf.assign(Pop=df.loc[[0], 'Pop'])\n\nOutput:\n Date Pop\n0 2020-01-01 90.0\n1 2021-01-01 NaN\n2 2022-01-01 NaN\n\nNote: assign works with nice column headers, if your headers have spaces or special characters you will need to use a different method.\n", "To replace the values from the second row onwards in your DataFrame using the pandas method pipe, you can do the following:\nimport pandas as pd\nimport numpy as np\n\ndf = pd.DataFrame(\n {\n \"Date\": [\"2020-01-01\", \"2021-01-01\", \"2022-01-01\"],\n \"Pop\": [90, 70, 60],\n }\n)\n\n# Use the `pipe` method to apply a function to your DataFrame\ndf = df.pipe(lambda x: x.iloc[1:]).replace(np.nan)\n\nprint(df)\n\nThis will replace the values from the second row onwards in your DataFrame with NaN. The resulting DataFrame will look like this:\n Date Pop\n0 2021-01-01 NaN\n1 2022-01-01 NaN\n\nNote that this will modify your original DataFrame in place, so if you want to keep the original DataFrame, you should make a copy of it before applying the pipe method. You can do this by using the copy method, like this:\nimport pandas as pd\nimport numpy as np\n\ndf = pd.DataFrame(\n {\n \"Date\": [\"2020-01-01\", \"2021-01-01\", \"2022-01-01\"],\n \"Pop\": [90, 70, 60],\n }\n)\n\n# Make a copy of the original DataFrame\ndf_copy = df.copy()\n\n# Use the `pipe` method to apply a function to your DataFrame\ndf_copy = df_copy.pipe(lambda x: x.iloc[1:]).replace(np.nan)\n\nprint(df_copy)\n\nThis will create a copy of your DataFrame and then modify the copy, so the original DataFrame will remain unchanged. The resulting DataFrame will look the same as in the previous example\n" ]
[ 3, 2 ]
[]
[]
[ "numpy", "pandas", "python" ]
stackoverflow_0074659122_numpy_pandas_python.txt
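If the goal is to keep all rows and only blank out Pop from the second row onwards inside a pipe/assign chain, one sketch (not taken from the thread) is Series.where:
import pandas as pd

df = pd.DataFrame({"Date": ["2020-01-01", "2021-01-01", "2022-01-01"], "Pop": [90, 70, 60]})

# keep Pop only in the first row, NaN everywhere else, without dropping any rows
out = df.pipe(lambda d: d.assign(Pop=d["Pop"].where(d.index == d.index[0])))
print(out)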
Q: Print the nth step of a Generator in an easy way I want to know if there is a better and cleaner way of printing the 3rd step of a generator function. Currently I have written the following code def imparesgen(): n = 0 while n<200: n=n+2 yield n gen = imparesgen() y = 0 for x in gen: y+=1 if y == 3: print(x) This worked, but, is there maybe a simpler way of doing this? Without the use of a list. A: From Itertools recipes: def nth(iterable, n, default=None): "Returns the nth item or a default value" return next(islice(iterable, n, None), default) Applied to your example: import itertools def imparesgen(): n = 0 while n<200: n=n+2 yield n gen = imparesgen() print(next(itertools.islice(gen, 3, None))) A: You could iterate over the generator three times. Since zip stops at the shortest input, you'll only get three iterations provided gen yields at least three elements. gen = imparesgen() for _, item in zip(range(3), gen): pass # Now, item is the third element print(item) Alternatively, use the next() function to get the next element of the generator: gen = imparesgen() next(gen) next(gen) item = next(gen) print(item) Or, if you don't want to write out those next lines multiple times: gen = imparesgen() for _ in range(3): item = next(gen) print(item) A: Is this what you're looking for?: from itertools import islice def imparesgen(): n = 0 while n<200: n=n+2 yield n gen = imparesgen() third = list(islice(gen, 2, 3))[0] # -> 6
Print the nth step of a Generator in an easy way
I want to know if there is a better and cleaner way of printing the 3rd step of a generator function. Currently I have written the following code def imparesgen(): n = 0 while n<200: n=n+2 yield n gen = imparesgen() y = 0 for x in gen: y+=1 if y == 3: print(x) This worked, but, is there maybe a simpler way of doing this? Without the use of a list.
[ "From Itertools recipes:\ndef nth(iterable, n, default=None):\n \"Returns the nth item or a default value\"\n return next(islice(iterable, n, None), default)\n\nApplied to your example:\nimport itertools\n\ndef imparesgen():\n n = 0\n while n<200:\n n=n+2\n yield n\n\ngen = imparesgen()\n\nprint(next(itertools.islice(gen, 3, None)))\n\n", "You could iterate over the generator three times. Since zip stops at the shortest input, you'll only get three iterations provided gen yields at least three elements.\ngen = imparesgen()\nfor _, item in zip(range(3), gen):\n pass\n\n# Now, item is the third element\nprint(item)\n\nAlternatively, use the next() function to get the next element of the generator:\ngen = imparesgen()\nnext(gen)\nnext(gen)\nitem = next(gen)\nprint(item)\n\nOr, if you don't want to write out those next lines multiple times:\ngen = imparesgen()\nfor _ in range(3):\n item = next(gen)\n\nprint(item)\n\n", "Is this what you're looking for?:\nfrom itertools import islice\n\ndef imparesgen():\n n = 0\n while n<200: \n n=n+2\n yield n\n\ngen = imparesgen()\n\nthird = list(islice(gen, 2, 3))[0] # -> 6\n\n" ]
[ 2, 0, 0 ]
[ "Since a generator acts like an iterator, you can call next on it :\nthird = next(next(next(gen)))\n\nI don't think you can go much faster than that in pure-Python. But I think that without benchmarking the code, the speedup won't be noticed.\n" ]
[ -2 ]
[ "generator", "python" ]
stackoverflow_0074659302_generator_python.txt
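A tiny usage check of the islice recipe above, including the default value when the iterable runs out (n counts from 0):
from itertools import islice

def nth(iterable, n, default=None):
    return next(islice(iterable, n, None), default)

gen = (i * 2 for i in range(1, 101))    # 2, 4, 6, ...
print(nth(gen, 2))                      # third value -> 6
print(nth(iter([1, 2]), 5, "missing"))  # iterator too short -> 'missing'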
Q: How can I define the ManyToManyField name in django? I have this relationship Class Item(models.Model): pass Class Category(models.Model): items = models.ManyToManyField(Item) I can define the field name as items for category and access it via category.items but I want to define a field name for Item too as item.categories rather than the default item.category How can I achieve it? Update Tried items = models.ManyToManyField(Item, related_name = "categories") But I get TypeError: Direct assignment to the reverse side of a many-to-many set is prohibited. Use categories.set() instead. on Item.object.create(**data) A: The thing about many to many field is not an actual field. If you take a look at the generated schema you wouldnt find the field as a column in either of the table. What happens in the back is django creates a ItemCatagory table. class ItemCatagory(models.Model): item = modes.ForegnKeyField(Item, related_name="catagories", on_delete... ) catagory = models.ForegnKeyField(Item, related_name="items", on_delete... ) catagory.items will give u the ItemCatagory RelatedObject and if u do catagoty.items.all() it will give u QuerySet[ItemCatagory]. so the default model.save() method woont add those rship. so u would have to overide save method on either of the models. A: When you call Item.objects.create(), you need to omit the categories from the args. Then afterwards you can call set() to set the categories. item = Item.objects.create() item.categories.set(categories) If you only have one category to add, you can call add() instead: item = Item.objects.create() item.categories.add(category) Note: both add() and set() save the update to the database, so you don’t need to call item.save() afterwards. Source: Many-to-Many relations
How can I define the ManyToManyField name in django?
I have this relationship Class Item(models.Model): pass Class Category(models.Model): items = models.ManyToManyField(Item) I can define the field name as items for category and access it via category.items but I want to define a field name for Item too as item.categories rather than the default item.category How can I achieve it? Update Tried items = models.ManyToManyField(Item, related_name = "categories") But I get TypeError: Direct assignment to the reverse side of a many-to-many set is prohibited. Use categories.set() instead. on Item.object.create(**data)
[ "The thing about many to many field is not an actual field. If you take a look at the generated schema you wouldnt find the field as a column in either of the table. What happens in the back is django creates a ItemCatagory table.\nclass ItemCatagory(models.Model):\n item = modes.ForegnKeyField(Item, related_name=\"catagories\", on_delete... )\n catagory = models.ForegnKeyField(Item, related_name=\"items\", on_delete... )\n\ncatagory.items will give u the ItemCatagory RelatedObject and if u do catagoty.items.all() it will give u QuerySet[ItemCatagory]. so the default model.save() method woont add those rship. so u would have to overide save method on either of the models.\n", "When you call Item.objects.create(), you need to omit the categories from the args. Then afterwards you can call set() to set the categories.\nitem = Item.objects.create()\nitem.categories.set(categories)\n\nIf you only have one category to add, you can call add() instead:\nitem = Item.objects.create()\nitem.categories.add(category)\n\nNote: both add() and set() save the update to the database, so you don’t need to call item.save() afterwards.\nSource: Many-to-Many relations\n" ]
[ 1, 0 ]
[ "See, ManyToManyField can't make reverse relationship with related model as python is interpreted language, so it can't read model class of previous one. Instead, you can do one thing ...\n# models.py\n\nclass Item(models.Model):\n item_name = models.CharField(max_length=255, default=\"\")\n\n def __str__(self):\n return self.item_name\n\n\nclass Category(models.Model):\n category_name = models.CharField(max_length=255, default=\"\")\n items = models.ManyToManyField(Item)\n\n def __str__(self):\n return self.category_name\n\nAfter that, you can list down your requirements in views.py file.\n# views.py\n\ndef get_items_by_categories(request):\n \n # Here, you will receive a set of items ...\n \n get_categories = Category.objects.all()\n\n # Filter out items with respect to categories ...\n\n get_items_list = [{\"category\": each.category_name, \"items\": each.items} for each in get_categories]\n\n return render(request, \"categories.html\", {\"data\": get_items_list})\n\nIterate your items with categories in categories.html file.\n{% for each in data %}\n {% for content in each %}\n {{content.category}}\n {% for item in content.items.all %}\n {{item.item_name}}\n {% endfor %}\n {% endfor %}\n{% endfor %}\n\nI hope this solution will help you ..\nThank You !\n" ]
[ -1 ]
[ "django", "django_models", "many_to_many", "python" ]
stackoverflow_0074613493_django_django_models_many_to_many_python.txt
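A condensed sketch of the models plus the create-then-attach flow described above; the model and field names follow the question, and the usage lines are left as comments because they need a configured Django project and database:
from django.db import models

class Item(models.Model):
    name = models.CharField(max_length=100)

class Category(models.Model):
    name = models.CharField(max_length=100)
    # related_name makes the reverse accessor item.categories instead of item.category_set
    items = models.ManyToManyField(Item, related_name="categories")

# item = Item.objects.create(name="chair")
# item.categories.add(some_category)     # or item.categories.set([c1, c2])
# some_category.items.all()              # forward side still works as before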
Q: How to break while loop immediately after condition is met (and not run rest of code)? I have this dice game made in python for class, I am using functions for the scoring logic (which I believe is all working as desired). I have put these functions in a while loop so that when either player reaches 100 banked score the game ends. However, I cannot get this to work as intended. while int(player1Score) < 100 or int(player2Score) < 100: player1() player2() Here is one of the functions (the other is the same with score added to player 2's variable and some output changes): def player1(): global player1Score #global score to reference outside of function rTotal = 0 #running total - used when gamble print("\nPlayer 1 turn!") while 1==1: d = diceThrow() #dicethrow output(s) diceIndexer(d) print(f"Dice 1: {dice1} \nDice 2: {dice2}") if dice1 == '1' and dice2 == '1': #both die = 1 (banked & running = 0) player1Score = 0 rTotal = 0 print("DOUBLE ONE! Banked total reset, Player 2's turn.") break elif dice1 == '1' or dice2 == '1': #either die = 1 (running = 0) rTotal = 0 print("Rolled a single one. Player 2's turn!") break else: #normal dice roll - gamble or bank choice = input("Gamble (G) or Bank (B): ") choice = choice.upper() #incase lowercase letter given if choice == 'G': #if Gamble chosen - reroll dice & add to running rTotal += int(dice1) + int(dice2) elif choice == 'B': #used to save score. rTotal += int(dice1) + int(dice2) player1Score += rTotal print(f"\nPlayer 1 banked! Total: {player1Score}") break print("Turn over") I have tried changing the 'or' in the while loop to an 'and'. While that did stop faster, it did not stop exactly as the other player achieved a score higher than 10. A: bro. i read your code but i can't get some points as below. if dice1 == '1' and dice2 == '1': #both die = 1 (banked & running = 0) player1Score = 0 the code above makes player1Score as zero. and there's no adding score code for player2Score. For this reason, the while loop doesn't stop. plz, check it first. A: Boolean logic works like this. You want both scores to be less then 100 while the game runs which also means one of the scores should not be 100 or higher. both scores less then 100 int(player1Score) < 100 and int(player2Score) < 100 one of the socres is not 100 or higher !int(player1Score) >= 100 or !int(player1Score) >= 100 The reason why the game didn't stop when you changed the "or" to "and" is because player1Score and player2Score are being evaluated after both player1() and player2() are called. Even if player1 scored over 100, the game wouldn't stop and the turn would go over to player2, goes into the next loop evaluation and finally stop the loop. To fix this, change the while loop of both player1() and player() to this. while player1Score < 100: And also add evaluation after calculating scores. else: #normal dice roll - gamble or bank choice = input("Gamble (G) or Bank (B): ") choice = choice.upper() #incase lowercase letter given # rTotal += int(dice1) + int(dice2) # rewriting the same code is a bad practice. # In this case, you can just write the same code outside the if-else logic since if-else doesn't effect the code. rTotal += int(dice1) + int(dice2) if choice == 'G': #if Gamble chosen - reroll dice & add to running # added evaluation to stop the game if player1Score+rTotal >= 100: break if choice == 'B': #used to save score. player1Score += rTotal print(f"\nPlayer 1 banked! Total: {player2Score}") break Hope this helps. 
A: What I did was put an if statement inside each scoring function (outside of its inner while loop) that returns a value once the winning score is reached, like this: def exampleFunction1(): #scoring logic here if bank > finalScore: #Score to reach to win game print(f"Score is over {finalScore}!") return 0 Then, inside the outer while loop, assign each function's return value to a variable so it can be checked, and use that value to break out of the loop: while True: #Breaks as soon as either player gets above 100 score x = exampleFunction1() if x == 0: break x = exampleFunction2() if x == 0: break (Sidenote: I later rewrote the whole program around a single gameLogic() function applied to each player, but I kept the outer while loop shown above.)
How to break while loop immediately after condition is met (and not run rest of code)?
I have this dice game made in python for class, I am using functions for the scoring logic (which I believe is all working as desired). I have put these functions in a while loop so that when either player reaches 100 banked score the game ends. However, I cannot get this to work as intended. while int(player1Score) < 100 or int(player2Score) < 100: player1() player2() Here is one of the functions (the other is the same with score added to player 2's variable and some output changes): def player1(): global player1Score #global score to reference outside of function rTotal = 0 #running total - used when gamble print("\nPlayer 1 turn!") while 1==1: d = diceThrow() #dicethrow output(s) diceIndexer(d) print(f"Dice 1: {dice1} \nDice 2: {dice2}") if dice1 == '1' and dice2 == '1': #both die = 1 (banked & running = 0) player1Score = 0 rTotal = 0 print("DOUBLE ONE! Banked total reset, Player 2's turn.") break elif dice1 == '1' or dice2 == '1': #either die = 1 (running = 0) rTotal = 0 print("Rolled a single one. Player 2's turn!") break else: #normal dice roll - gamble or bank choice = input("Gamble (G) or Bank (B): ") choice = choice.upper() #incase lowercase letter given if choice == 'G': #if Gamble chosen - reroll dice & add to running rTotal += int(dice1) + int(dice2) elif choice == 'B': #used to save score. rTotal += int(dice1) + int(dice2) player1Score += rTotal print(f"\nPlayer 1 banked! Total: {player1Score}") break print("Turn over") I have tried changing the 'or' in the while loop to an 'and'. While that did stop faster, it did not stop exactly as the other player achieved a score higher than 10.
[ "bro.\ni read your code but i can't get some points as below.\n\n\nif dice1 == '1' and dice2 == '1': #both die = 1 (banked & running = 0)\n player1Score = 0\n\n\n\nthe code above makes player1Score as zero.\nand there's no adding score code for player2Score.\nFor this reason, the while loop doesn't stop.\nplz, check it first.\n", "Boolean logic works like this.\nYou want both scores to be less then 100 while the game runs which also means one of the scores should not be 100 or higher.\nboth scores less then 100\nint(player1Score) < 100 and int(player2Score) < 100\n\none of the socres is not 100 or higher\n!int(player1Score) >= 100 or !int(player1Score) >= 100\n\nThe reason why the game didn't stop when you changed the \"or\" to \"and\" is because player1Score and player2Score are being evaluated after both player1() and player2() are called. Even if player1 scored over 100, the game wouldn't stop and the turn would go over to player2, goes into the next loop evaluation and finally stop the loop.\nTo fix this, change the while loop of both player1() and player() to this.\nwhile player1Score < 100:\n\nAnd also add evaluation after calculating scores.\nelse: #normal dice roll - gamble or bank\n choice = input(\"Gamble (G) or Bank (B): \")\n choice = choice.upper() #incase lowercase letter given\n \n # rTotal += int(dice1) + int(dice2)\n # rewriting the same code is a bad practice. \n # In this case, you can just write the same code outside the if-else logic since if-else doesn't effect the code.\n\n rTotal += int(dice1) + int(dice2)\n\n if choice == 'G': #if Gamble chosen - reroll dice & add to running\n # added evaluation to stop the game\n if player1Score+rTotal >= 100:\n break\n if choice == 'B': #used to save score.\n player1Score += rTotal\n print(f\"\\nPlayer 1 banked! Total: {player2Score}\")\n break\n\nHope this helps.\n", "What I did is inside the function put an if statement (outside of the while) to return a value from the function using return like this:\ndef exampleFunction1():\n #scoring logic here\n if bank > finalScore: #Score to reach to win game\n print(f\"Score is over {finalScore}!\")\n return 0\n\nThen inside the while loop, attach the functions to a variable so that the return value could be identified. Use the return value to break the while loop:\nwhile True: #Breaks as soon as either player gets above 100 score\n x = exampleFunction1()\n if x == 0:\n break\n x = exampleFunction2()\n if x == 0:\n break\n\n(Sidenote: I rewrote my whole program to take one gameLogic() function and apply this to the different players, but I still used the while loop stated second)\n" ]
[ 0, 0, 0 ]
[]
[]
[ "boolean_logic", "logic", "python" ]
stackoverflow_0074637964_boolean_logic_logic_python.txt
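A compact sketch of the same "check after every turn" idea using return values instead of globals; the dice logic is reduced to a bare random roll, so this is illustrative rather than a drop-in replacement:
import random

def take_turn(score):
    d1, d2 = random.randint(1, 6), random.randint(1, 6)
    if d1 == 1 and d2 == 1:
        return 0              # double one: banked total resets
    if d1 == 1 or d2 == 1:
        return score          # single one: turn over, nothing banked
    return score + d1 + d2    # otherwise bank the roll

scores = {"Player 1": 0, "Player 2": 0}
winner = None
while winner is None:
    for player in scores:
        scores[player] = take_turn(scores[player])
        if scores[player] >= 100:   # checked immediately, before the other player moves
            winner = player
            break
print(f"{winner} wins with {scores[winner]}")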
Q: Seaborn showing wrong y-axis values The dataframe I created is as follows: import pandas as pd import numpy as np import seaborn as sns date = pd.date_range('2003-01-01', '2022-11-01', freq='MS').strftime('%Y-%m-%d').tolist() mom = [np.nan] + list(np.repeat([0.01], 238)) cpi = [100] + list(np.repeat([np.nan], 238)) df = pd.DataFrame(list(zip(date, mom, cpi)), columns=['date','mom','cpi']) df['date'] = pd.to_datetime(df['date']) for i in range(1,len(df),1): df['cpi'][i] = df['cpi'][(i-1)] * (1 + df['mom'][i]) df['yoy'] = df['cpi'].pct_change(periods=12) Y-axis values not displaying correctly as can be seen below. sns.lineplot( x = 'date', y = 'yoy', data = df ) I think the percentage changes I calculated for the yoy column are the cause of the issue. Because there are no issues if I manually fill in the yoy column. Thanks in advance. A: You can use matplotlib to set the axis scaling, as the difference is really subtle in your data: import matplotlib.pyplot as plt ax = plt.gca() ax.set_ylim([df.yoy.min(numeric_only=True), df.yoy.max(numeric_only=True)]) sns.lineplot( x = 'date', y = 'yoy', data = df, ax = ax ) With this the result should be more of a stepping function. You can use something like the max difference to the mean times 1.01 to set the limits a little better, but this is the idea. You can set the axis ticks using ax.set_yticks(ticks=<list of ticks>) (documentation).
Seaborn showing wrong y-axis values
The dataframe I created is as follows: import pandas as pd import numpy as np import seaborn as sns date = pd.date_range('2003-01-01', '2022-11-01', freq='MS').strftime('%Y-%m-%d').tolist() mom = [np.nan] + list(np.repeat([0.01], 238)) cpi = [100] + list(np.repeat([np.nan], 238)) df = pd.DataFrame(list(zip(date, mom, cpi)), columns=['date','mom','cpi']) df['date'] = pd.to_datetime(df['date']) for i in range(1,len(df),1): df['cpi'][i] = df['cpi'][(i-1)] * (1 + df['mom'][i]) df['yoy'] = df['cpi'].pct_change(periods=12) Y-axis values not displaying correctly as can be seen below. sns.lineplot( x = 'date', y = 'yoy', data = df ) I think the percentage changes I calculated for the yoy column are the cause of the issue. Because there are no issues if I manually fill in the yoy column. Thanks in advance.
[ "You can use matplotlib to set the axis scaling, as the difference is really subtle in your data:\nimport matplotlib.pyplot as plt\n\nax = plt.gca()\nax.set_ylim([df.yoy.min(numeric_only=True), df.yoy.max(numeric_only=True)])\n\nsns.lineplot(\n x = 'date',\n y = 'yoy',\n data = df,\n ax = ax\n)\n\nWith this the result should be more of a stepping function.\n\nYou can use something like the max difference to the mean times 1.01 to set the limits a little better, but this is the idea. You can set the axis ticks using ax.set_yticks(ticks=<list of ticks>) (documentation).\n" ]
[ 0 ]
[]
[]
[ "python", "seaborn" ]
stackoverflow_0074658711_python_seaborn.txt
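One more presentation tweak, independent of the thread, that can help when the year-on-year values barely move: format the y-axis as percentages so the small differences are legible (the plotted numbers here are placeholders):
import matplotlib.pyplot as plt
from matplotlib.ticker import PercentFormatter

fig, ax = plt.subplots()
ax.plot([2004, 2005, 2006], [0.1268, 0.1268, 0.1269])  # stand-in for the yoy series
ax.yaxis.set_major_formatter(PercentFormatter(xmax=1.0, decimals=2))  # 0.1268 -> 12.68%
plt.show()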
Q: Issue installing Scikit learn I know this question has been up many times before and I've tried to follow the steps as outlined, but my scikit still won't work. I have Python 3.11, on Windows 11, using Spyder. When trying to run the following code I get this error. import sklearn print(sklearn.__version__) File "C:\Program Files\Spyder\pkgs\spyder_kernels\py3compat.py", line 356, in compat_exec exec(code, globals, locals) File "c:\users\XXX\documents\capex\python\untitled0.py", line 8, in import sklearn ModuleNotFoundError: No module named 'sklearn' These are the things I've done/tried but still getting the issue. Run pip3.11 install scikit-learn in the terminal (resulting in Requirement already satisfied) Run python -m pip install -U pip (resulting in Requirement already satisfied) Adding the Python and Python/Script paths to Path in advanced settings, under both user and system sections. Removed Python from the settings with Microsoft store default (it wasn't there to start with) python -m pip show scikit-learn results in that it is installed, version 1.1.3 When running python -c "import sklearn; sklearn.show_versions()" in the terminal it seems to work, but not when running import sklearn in Spyder. Any help would be greatly appreciated! A: Uninstall your current spyder installation and reinstall it with Anaconda. Your terminal environment is fundamentally different from your overall environment. As such installing sci-kit learn on your Linux distro on terminal still doesn't give you access to the library elsewhere. Anaconda is the easiest way to install all the scientific computing packages you need.
Issue installing Scikit learn
I know this question has been up many times before and I've tried to follow the steps as outlined, but my scikit still won't work. I have Python 3.11, on Windows 11, using Spyder. When trying to run the following code I get this error. import sklearn print(sklearn.__version__) File "C:\Program Files\Spyder\pkgs\spyder_kernels\py3compat.py", line 356, in compat_exec exec(code, globals, locals) File "c:\users\XXX\documents\capex\python\untitled0.py", line 8, in import sklearn ModuleNotFoundError: No module named 'sklearn' These are the things I've done/tried but still getting the issue. Run pip3.11 install scikit-learn in the terminal (resulting in Requirement already satisfied) Run python -m pip install -U pip (resulting in Requirement already satisfied) Adding the Python and Python/Script paths to Path in advanced settings, under both user and system sections. Removed Python from the settings with Microsoft store default (it wasn't there to start with) python -m pip show scikit-learn results in that it is installed, version 1.1.3 When running python -c "import sklearn; sklearn.show_versions()" in the terminal it seems to work, but not when running import sklearn in Spyder. Any help would be greatly appreciated!
[ "Uninstall your current spyder installation and reinstall it with Anaconda. Your terminal environment is fundamentally different from your overall environment. As such installing sci-kit learn on your Linux distro on terminal still doesn't give you access to the library elsewhere. Anaconda is the easiest way to install all the scientific computing packages you need.\n" ]
[ 0 ]
[]
[]
[ "installation", "python", "python_3.x", "scikit_learn" ]
stackoverflow_0074658723_installation_python_python_3.x_scikit_learn.txt
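A quick diagnostic that usually settles this kind of mismatch: run the snippet below both in Spyder's console and in the terminal and compare the output; if the paths differ, the two are using different Python environments:
import sys

print(sys.executable)   # which Python binary is running
print(sys.version)
for p in sys.path:      # where this interpreter looks for installed packages
    print(p)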
Q: PYTHONPATH variable missing when using os.execlpe to restart script as root My end goal is to have a script that can be initially launched by a non-privileged user without using sudo, but will prompt for sudo password and self-elevate to root. I've been doing this with a bash wrapper script but would like something tidier that doesn't need an additional file. Some googling found this question on StackOverflow where the accepted answer suggesting using os.execlpe to re-launch the script while retaining the same environment. I tried it, but it immediately failed to import a non-built-in module on the second run. Investigating revealed that the PYTHONPATH variable is not carried over, while almost every other environment variable is (PERL5LIB is also missing, and a couple of others, but I'm not using them so they're not troubling me). I have a brief little test script that demonstrates the issue: #!/usr/bin/env python import os import sys print(len(os.environ['PYTHONPATH'])) euid = os.geteuid() if euid != 0: print("Script not started as root. Running with sudo.") args = ['sudo', sys,executable] + sys.argv + [os.environ] os.execlpe('sudo', *args) print("Success") Expected output would be: 6548 Script not started as root. Running with sudo. [sudo] password for esker: 6548 Success But instead I'm getting a KeyError: 6548 Script not started as root. Running with sudo. [sudo] password for esker: Traceback (most recent call last): File "/usr/home/esker/execlpe_test.py", line 5, in <module> print(len(os.environ['PYTHONPATH'])) File "/vol/apps/python/2.7.6/lib/python2.7/UserDict.py", line 23, in __getitem__ raise KeyError(key) KeyError: 'PYTHONPATH' What would be the cause of this missing variable, and how can I avoid it disappearing? Alternatively, is there a better way about doing this that won't result in running into the problem? A: I found this very weird too, and couldn't find any direct way to pass the environment into the replaced process. But I didn't do a full system debugging either. What I found to work as a workaround is this: pypath = os.environ.get('PYTHONPATH', "") args = ['sudo', f"PYTHONPATH={pypath}", sys.executable] + sys.argv os.execvpe('sudo', args, os.environ) I.e. explicitly pass PYTHONPATH= to the new process. Note that I prefer to use os.execvpe(), but it works the same with the other exec*(), given the correct call. See this answer for a good overview of the schema. However, PATH and the rest of the environment is still it's own environment, as an initial print(os.environ) shows. But PYTHONPATH will be passed on this way.
PYTHONPATH variable missing when using os.execlpe to restart script as root
My end goal is to have a script that can be initially launched by a non-privileged user without using sudo, but will prompt for sudo password and self-elevate to root. I've been doing this with a bash wrapper script but would like something tidier that doesn't need an additional file. Some googling found this question on StackOverflow where the accepted answer suggesting using os.execlpe to re-launch the script while retaining the same environment. I tried it, but it immediately failed to import a non-built-in module on the second run. Investigating revealed that the PYTHONPATH variable is not carried over, while almost every other environment variable is (PERL5LIB is also missing, and a couple of others, but I'm not using them so they're not troubling me). I have a brief little test script that demonstrates the issue: #!/usr/bin/env python import os import sys print(len(os.environ['PYTHONPATH'])) euid = os.geteuid() if euid != 0: print("Script not started as root. Running with sudo.") args = ['sudo', sys,executable] + sys.argv + [os.environ] os.execlpe('sudo', *args) print("Success") Expected output would be: 6548 Script not started as root. Running with sudo. [sudo] password for esker: 6548 Success But instead I'm getting a KeyError: 6548 Script not started as root. Running with sudo. [sudo] password for esker: Traceback (most recent call last): File "/usr/home/esker/execlpe_test.py", line 5, in <module> print(len(os.environ['PYTHONPATH'])) File "/vol/apps/python/2.7.6/lib/python2.7/UserDict.py", line 23, in __getitem__ raise KeyError(key) KeyError: 'PYTHONPATH' What would be the cause of this missing variable, and how can I avoid it disappearing? Alternatively, is there a better way about doing this that won't result in running into the problem?
[ "I found this very weird too, and couldn't find any direct way to pass the environment into the replaced process. But I didn't do a full system debugging either.\nWhat I found to work as a workaround is this:\npypath = os.environ.get('PYTHONPATH', \"\")\nargs = ['sudo', f\"PYTHONPATH={pypath}\", sys.executable] + sys.argv\nos.execvpe('sudo', args, os.environ)\n\nI.e. explicitly pass PYTHONPATH= to the new process. Note that I prefer to use os.execvpe(), but it works the same with the other exec*(), given the correct call. See this answer for a good overview of the schema.\nHowever, PATH and the rest of the environment is still it's own environment, as an initial print(os.environ) shows. But PYTHONPATH will be passed on this way.\n" ]
[ 0 ]
[ "You're passing the environment as arguments to your script instead of arguments to execlpe. Try this instead:\nargs = ['sudo', sys,executable] + sys.argv + [os.environ]\nos.execvpe('sudo', args, os.environ)\n\nIf you just want to inherit the environment you can even\nos.execvp('sudo', args)\n\n" ]
[ -1 ]
[ "python" ]
stackoverflow_0044559046_python.txt
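Folding the workaround above back into the original self-elevating pattern, a sketch that assumes a Unix-like system with sudo and env on the PATH, and that PYTHONPATH is the only variable needing special handling:
import os
import sys

if os.geteuid() != 0:
    pypath = os.environ.get("PYTHONPATH", "")
    # run /usr/bin/env under sudo so PYTHONPATH survives sudo's env_reset
    args = ["sudo", "env", f"PYTHONPATH={pypath}", sys.executable] + sys.argv
    os.execvpe("sudo", args, os.environ)

print("running as root, PYTHONPATH =", os.environ.get("PYTHONPATH", ""))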
Q: local variable 'product' referenced before assignment I am trying to create a django view which will let users to create a new product on the website. class CreateProductView(APIView): serializer_class = CreateProductSerializer def post(self, request, format = None): serializer = self.serializer_class(data=request.data) if serializer.is_valid(): name = serializer.data.name content = serializer.data.content category = serializer.data.category product = Product(name=name, content=content, category=category) product.save() return Response(ProductSerializer(product).data, status=status.HTTP_201_CREATED) But it is giving this error: UnboundLocalError at /api/create-product local variable 'product' referenced before assignment Request Method: POST Request URL: http://127.0.0.1:8000/api/create-product Django Version: 4.0.5 Exception Type: UnboundLocalError Exception Value: local variable 'product' referenced before assignment Exception Location: H:\Extension Drive (H)\My Software Applications\DeCluttered_Life\declutterd_life\api\views.py, line 42, in post Python Executable: C:\Python310\python.exe Python Version: 3.10.5 Python Path: ['H:\\Extension Drive (H)\\My Software ' 'Applications\\DeCluttered_Life\\declutterd_life', 'C:\\Python310\\python310.zip', 'C:\\Python310\\DLLs', 'C:\\Python310\\lib', 'C:\\Python310', 'C:\\Python310\\lib\\site-packages'] Server time: Fri, 02 Dec 2022 17:26:24 +0000 I tried to look other issues similar to this, but couldn't find the solution. A: You are trying to reference the product variable before it has been assigned when serializer.is_valid() is False. You should move the response line inside the if statement, so that it is only returned if serializer.is_valid() is True and handle the response for invalid serializer with an http error for example. A: Check if the serializer.is_valid() returns True. If it doesn't, your code attempts to return a Response object, which uses the product variable. However, if your serializer.is_valid() does returns False, the Response is still created with the product variable, which has not been created yet. It might be wise to write functionality that handles the situation when your serializer.is_valid() returns False. In that case, probably you don't need the product variable.
local variable 'product' referenced before assignment
I am trying to create a django view which will let users to create a new product on the website. class CreateProductView(APIView): serializer_class = CreateProductSerializer def post(self, request, format = None): serializer = self.serializer_class(data=request.data) if serializer.is_valid(): name = serializer.data.name content = serializer.data.content category = serializer.data.category product = Product(name=name, content=content, category=category) product.save() return Response(ProductSerializer(product).data, status=status.HTTP_201_CREATED) But it is giving this error: UnboundLocalError at /api/create-product local variable 'product' referenced before assignment Request Method: POST Request URL: http://127.0.0.1:8000/api/create-product Django Version: 4.0.5 Exception Type: UnboundLocalError Exception Value: local variable 'product' referenced before assignment Exception Location: H:\Extension Drive (H)\My Software Applications\DeCluttered_Life\declutterd_life\api\views.py, line 42, in post Python Executable: C:\Python310\python.exe Python Version: 3.10.5 Python Path: ['H:\\Extension Drive (H)\\My Software ' 'Applications\\DeCluttered_Life\\declutterd_life', 'C:\\Python310\\python310.zip', 'C:\\Python310\\DLLs', 'C:\\Python310\\lib', 'C:\\Python310', 'C:\\Python310\\lib\\site-packages'] Server time: Fri, 02 Dec 2022 17:26:24 +0000 I tried to look other issues similar to this, but couldn't find the solution.
[ "You are trying to reference the product variable before it has been assigned when serializer.is_valid() is False.\nYou should move the response line inside the if statement, so that it is only returned if serializer.is_valid() is True and handle the response for invalid serializer with an http error for example.\n", "Check if the serializer.is_valid() returns True. If it doesn't, your code attempts to return a Response object, which uses the product variable. However, if your serializer.is_valid() does returns False, the Response is still created with the product variable, which has not been created yet.\nIt might be wise to write functionality that handles the situation when your serializer.is_valid() returns False. In that case, probably you don't need the product variable.\n" ]
[ 0, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074659435_django_python.txt
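A sketch of the corrected view shape both answers point at: the success response moves inside the if, and the invalid case gets its own response. CreateProductSerializer, Product and ProductSerializer are assumed to be importable from the question's own app:
from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView

class CreateProductView(APIView):
    serializer_class = CreateProductSerializer  # from the question's serializers module

    def post(self, request, format=None):
        serializer = self.serializer_class(data=request.data)
        if serializer.is_valid():
            product = Product.objects.create(**serializer.validated_data)
            return Response(ProductSerializer(product).data,
                            status=status.HTTP_201_CREATED)
        # no 'product' is referenced here, so the UnboundLocalError cannot occur
        return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)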
Q: Get the amount of leading NaN and trailing non NaN values in pandas dataframe I have a dataframe where the rows contain NaN values. The df contains original columns namely Heading 1 Heading 2 and Heading 3 and extra columns called Unnamed: 1 Unnamed: 2 and Unnamed: 3 as shown: Heading 1 Heading 2 Heading 3 Unnamed: 1 Unnamed: 2 Unnamed: 3 NaN 34 24 45 NaN NaN NaN NaN 24 45 11 NaN NaN NaN NaN 45 45 33 4 NaN 24 NaN NaN NaN NaN NaN 4 NaN NaN NaN NaN 34 24 NaN NaN NaN 22 34 24 NaN NaN NaN NaN 34 NaN 45 NaN NaN I want to iterate through each row and find out the amount of leading NaN values in original columns (Heading 1 Heading 2 and Heading 3) and the amount of non NaN values in the extra columns (Unnamed: 1 Unnamed: 2 and Unnamed: 3). For each and every row this should be calculated and returned in a dictionary where the key is the index of the row and the value for that key is a list containing the amount of leading NaN values in original columns (Heading 1 Heading 2 and Heading 3) and the second element of the list would the amount of non NaN values in the extra columns (Unnamed: 1 Unnamed: 2 and Unnamed: 3). So the result for the above dataframe would be: {0 : [1, 1], 1 : [2, 2], 2 : [3, 3], 3 : [0, 0], 4 : [2, 0], 5 : [1, 0], 6 : [0, 0], 7 : [1, 1]} Notice how in row 3 and row 7 the original columns contain 1 and 2 NaN respectively but only the leading NaN's are counted and not the in between ones! Thank you! A: To iterate through each row in a DataFrame and count the number of NaN values in the original columns and the number of non-NaN values in the extra columns, you can do the following: import pandas as pd # Define the dataframe df = pd.DataFrame( { "Heading 1": [np.nan, np.nan, 5, 5, np.nan, np.nan], "Heading 2": [34, np.nan, np.nan, 7, np.nan, np.nan], "Unnamed: 1": [24, 44, np.nan, np.nan, 13, np.nan], "Unnamed: 2": [np.nan, np.nan, np.nan, np.nan, 77, 18] } ) # Define the original columns and the extra columns original_cols = ["Heading 1", "Heading 2"] extra_cols = ["Unnamed: 1", "Unnamed: 2"] # Create a dictionary to store the counts counts = {} # Iterate through each row in the DataFrame for index, row in df.iterrows(): # Count the number of NaN values in the original columns original_nan_count = sum(row[col].isna() for col in original_cols) # Count the number of non-NaN values in the extra columns extra_non_nan_count = sum(not row[col].isna() for col in extra_cols) # Add the counts to the dictionary counts[index] = [original_nan_count, extra_non_nan_count] # Print the dictionary of counts print(counts) This will iterate through each row in the DataFrame, count the number of NaN values in the original columns and the number of non-NaN values in the extra columns, and store the counts in a dictionary where the keys are the row indexes and the values are lists containing the counts. 
The resulting dictionary will look like this: {0: [1, 1], 1: [2, 1], 2: [1, 0], 3: [0, 0], 4: [2, 2], 5: [2, 1]} A: As an alternative: df['Count'] = df[['Heading 1', 'Heading 2']].apply(lambda x: sum(x.isnull()), axis=1) df['Count2'] = df[['Unnamed: 1', 'Unnamed: 2']].apply(lambda x: sum(x.notnull()), axis=1) df['total']=df[['Count','Count2']].values.tolist() output=dict(zip(df.index, df.total)) ''' {0: [1, 1], 1: [2, 1], 2: [1, 0], 3: [0, 0], 4: [2, 2], 5: [2, 1]} ''' or mask=list(map(list, zip(df[['Heading 1', 'Heading 2']].isnull().sum(axis=1), df[['Unnamed: 1', 'Unnamed: 2']].notnull().sum(axis=1)))) output=dict(zip(df.index,mask)) #{0: [1, 1], 1: [2, 1], 2: [1, 0], 3: [0, 0], 4: [2, 2], 5: [2, 1]} A: The .isna() (in Cyzanfar's answer) raises an exception for me: AttributeError: 'numpy.float64' object has no attribute 'isna' You could instead try the following: counts = {} for index, row in df.iterrows(): # Count the number of NaN values in the original columns num_nan_orig = np.sum(np.isnan(row[['Heading 1', 'Heading 2']])) # Count the number of non-NaN values in the extra columns num_non_nan_extra = np.sum(~np.isnan(row[['Unnamed: 1', 'Unnamed: 2']])) counts[index] = [num_nan_orig, num_non_nan_extra] print(counts) Outputs the following: # {0: [1, 1], 1: [2, 1], 2: [1, 0], 3: [0, 0], 4: [2, 2], 5: [2, 1]} The ~ operator (third last line in the code) is the bitwise negation operator in Python, which inverts the boolean value of its operand. In this case, it is inverts the boolean values produced by np.isnan() method, so that the non-NaN values can be counted.
Get the amount of leading NaN and trailing non NaN values in pandas dataframe
I have a dataframe where the rows contain NaN values. The df contains original columns namely Heading 1 Heading 2 and Heading 3 and extra columns called Unnamed: 1 Unnamed: 2 and Unnamed: 3 as shown: Heading 1 Heading 2 Heading 3 Unnamed: 1 Unnamed: 2 Unnamed: 3 NaN 34 24 45 NaN NaN NaN NaN 24 45 11 NaN NaN NaN NaN 45 45 33 4 NaN 24 NaN NaN NaN NaN NaN 4 NaN NaN NaN NaN 34 24 NaN NaN NaN 22 34 24 NaN NaN NaN NaN 34 NaN 45 NaN NaN I want to iterate through each row and find out the amount of leading NaN values in original columns (Heading 1 Heading 2 and Heading 3) and the amount of non NaN values in the extra columns (Unnamed: 1 Unnamed: 2 and Unnamed: 3). For each and every row this should be calculated and returned in a dictionary where the key is the index of the row and the value for that key is a list containing the amount of leading NaN values in original columns (Heading 1 Heading 2 and Heading 3) and the second element of the list would the amount of non NaN values in the extra columns (Unnamed: 1 Unnamed: 2 and Unnamed: 3). So the result for the above dataframe would be: {0 : [1, 1], 1 : [2, 2], 2 : [3, 3], 3 : [0, 0], 4 : [2, 0], 5 : [1, 0], 6 : [0, 0], 7 : [1, 1]} Notice how in row 3 and row 7 the original columns contain 1 and 2 NaN respectively but only the leading NaN's are counted and not the in between ones! Thank you!
[ "To iterate through each row in a DataFrame and count the number of NaN values in the original columns and the number of non-NaN values in the extra columns, you can do the following:\nimport pandas as pd\n\n# Define the dataframe\ndf = pd.DataFrame(\n {\n \"Heading 1\": [np.nan, np.nan, 5, 5, np.nan, np.nan],\n \"Heading 2\": [34, np.nan, np.nan, 7, np.nan, np.nan],\n \"Unnamed: 1\": [24, 44, np.nan, np.nan, 13, np.nan],\n \"Unnamed: 2\": [np.nan, np.nan, np.nan, np.nan, 77, 18]\n }\n)\n\n# Define the original columns and the extra columns\noriginal_cols = [\"Heading 1\", \"Heading 2\"]\nextra_cols = [\"Unnamed: 1\", \"Unnamed: 2\"]\n\n# Create a dictionary to store the counts\ncounts = {}\n\n# Iterate through each row in the DataFrame\nfor index, row in df.iterrows():\n # Count the number of NaN values in the original columns\n original_nan_count = sum(row[col].isna() for col in original_cols)\n \n # Count the number of non-NaN values in the extra columns\n extra_non_nan_count = sum(not row[col].isna() for col in extra_cols)\n \n # Add the counts to the dictionary\n counts[index] = [original_nan_count, extra_non_nan_count]\n\n# Print the dictionary of counts\nprint(counts)\n\nThis will iterate through each row in the DataFrame, count the number of NaN values in the original columns and the number of non-NaN values in the extra columns, and store the counts in a dictionary where the keys are the row indexes and the values are lists containing the counts. The resulting dictionary will look like this:\n{0: [1, 1],\n 1: [2, 1],\n 2: [1, 0],\n 3: [0, 0],\n 4: [2, 2],\n 5: [2, 1]}\n\n", "As an alternative:\ndf['Count'] = df[['Heading 1', 'Heading 2']].apply(lambda x: sum(x.isnull()), axis=1)\ndf['Count2'] = df[['Unnamed: 1', 'Unnamed: 2']].apply(lambda x: sum(x.notnull()), axis=1)\ndf['total']=df[['Count','Count2']].values.tolist()\n\noutput=dict(zip(df.index, df.total))\n'''\n{0: [1, 1], 1: [2, 1], 2: [1, 0], 3: [0, 0], 4: [2, 2], 5: [2, 1]}\n'''\n\nor\nmask=list(map(list, zip(df[['Heading 1', 'Heading 2']].isnull().sum(axis=1), df[['Unnamed: 1', 'Unnamed: 2']].notnull().sum(axis=1))))\noutput=dict(zip(df.index,mask))\n#{0: [1, 1], 1: [2, 1], 2: [1, 0], 3: [0, 0], 4: [2, 2], 5: [2, 1]}\n\n", "The .isna() (in Cyzanfar's answer) raises an exception for me:\nAttributeError: 'numpy.float64' object has no attribute 'isna'\n\nYou could instead try the following:\ncounts = {}\n\nfor index, row in df.iterrows():\n# Count the number of NaN values in the original columns\nnum_nan_orig = np.sum(np.isnan(row[['Heading 1', 'Heading 2']]))\n\n# Count the number of non-NaN values in the extra columns\nnum_non_nan_extra = np.sum(~np.isnan(row[['Unnamed: 1', 'Unnamed: 2']]))\n\ncounts[index] = [num_nan_orig, num_non_nan_extra]\nprint(counts) \n\nOutputs the following:\n# {0: [1, 1], 1: [2, 1], 2: [1, 0], 3: [0, 0], 4: [2, 2], 5: [2, 1]}\n\n\nThe ~ operator (third last line in the code) is the bitwise negation operator in Python, which inverts the boolean value of its operand. In this case, it is inverts the boolean values produced by np.isnan() method, so that the non-NaN values can be counted.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "data_cleaning", "data_preprocessing", "dataframe", "pandas", "python" ]
stackoverflow_0074659310_data_cleaning_data_preprocessing_dataframe_pandas_python.txt
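Both answers above count every NaN in the first column group rather than only the leading ones; a short sketch that matches the expected output in the question (data copied from the question's table):
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Heading 1":  [np.nan, np.nan, np.nan, 4,      np.nan, np.nan, 22,     np.nan],
    "Heading 2":  [34,     np.nan, np.nan, np.nan, np.nan, 34,     34,     34],
    "Heading 3":  [24,     24,     np.nan, 24,     4,      24,     24,     np.nan],
    "Unnamed: 1": [45,     45,     45,     np.nan, np.nan, np.nan, np.nan, 45],
    "Unnamed: 2": [np.nan, 11,     45,     np.nan, np.nan, np.nan, np.nan, np.nan],
    "Unnamed: 3": [np.nan, np.nan, 33,     np.nan, np.nan, np.nan, np.nan, np.nan],
})

orig = df[["Heading 1", "Heading 2", "Heading 3"]]
extra = df[["Unnamed: 1", "Unnamed: 2", "Unnamed: 3"]]

def count_leading_nan(row):
    n = 0
    for value in row:      # stop at the first non-NaN value
        if pd.isna(value):
            n += 1
        else:
            break
    return n

leading = orig.apply(count_leading_nan, axis=1)
extra_non_nan = extra.notna().sum(axis=1)
result = {i: [int(a), int(b)] for i, a, b in zip(df.index, leading, extra_non_nan)}
print(result)
# {0: [1, 1], 1: [2, 2], 2: [3, 3], 3: [0, 0], 4: [2, 0], 5: [1, 0], 6: [0, 0], 7: [1, 1]}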