Dataset schema (column name, type, value-length range where given):

  content             string, length 85 to 101k
  title               string, length 0 to 150
  question            string, length 15 to 48k
  answers             sequence
  answers_scores      sequence
  non_answers         sequence
  non_answers_scores  sequence
  tags                sequence
  name                string, length 35 to 137
Matching values in datasets with different structures
I need to add values from Dataset A to Dataset B through a shared key. However, the datasets are not organized in the same way. Dataset A has multiple entries with the same key (Name), and Dataset B holds the identification (don't ask why they used the ID as a separate dataset):

dataset_A = {'Name' : ["John Snow", "John Snow", "Patchface", "Patchface"],
             'Score' : [1, 2, 8, 10]}

dataset_B = {'Name' : ['John Snow', 'Patchface'],
             'ID:' : [1001, 1002]}

pd.DataFrame(dataset_A)
pd.DataFrame(dataset_B)

I need to add ['ID'] to each matching 'Name' in Dataset A in Pandas. Any help?
[ "Step 1 Dataframe Creation: The dataframes for the two datasets can be created using the following code:\nimport pandas as pd\n\n# elements of first dataset\nfirst_Set = {'Prod_1': ['Laptop', 'Mobile Phone',\n 'Desktop', 'LED'],\n 'Price_1': [25000, 8000, 20000, 35000]\n }\n\n# creation of Dataframe 1\ndf1 = pd.DataFrame(first_Set, columns=['Prod_1', 'Price_1'])\nprint(df1)\n\n# elements of second dataset\nsecond_Set = {'Prod_2': ['Laptop', 'Mobile Phone',\n 'Desktop', 'LED'],\n 'Price_2': [25000, 10000, 15000, 30000]\n }\n\n# creation of Dataframe 2\ndf2 = pd.DataFrame(second_Set, columns=['Prod_2', 'Price_2'])\nprint(df2)\n\nOutput:\nenter link description here\nenter link description here\nStep 2 Comparison of values: You need to import numpy for the successful execution of this step. Here is the general template to perform the comparison:\ndf1[‘new column for the comparison results’] = np.where(condition, ‘value if true’, ‘value if false’)\nExample: After execution of this code, the new column with the name Price_Matching will be formed under df1. Columns result will be displayed according to the following conditions:\nIf Price_1 is equal to Price_2, then assign the value of True\nOtherwise, assign the value of False.\nimport numpy as np\n\n# add the Price2 column from\n# df2 to df1\ndf1['Price_2'] = df2['Price_2']\n\n# create new column in df1 to\n# check if prices match\ndf1['Price_Matching'] = \nnp.where(df1['Price_1'] == df2['Price_2'],\n 'True', 'False')\ndf1\n\nOutput:\nenter link description here\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074631414_dataframe_pandas_python.txt
Finding the words immediately before and after a dynamically specified string
I have a string below:

CREATE VIEW [dbo].[TestView] AS SELECT T1.Col1,T1.Col2,T2.Col1,T2.Col2,T3.Col1,T3.Col2
FROM table_1 T1 LEFT JOIN table_2 T2 ON T1.Col1 = T2.Col2 INNER JOIN Table_3 T3
ON T1.Col2 = T3.Col2

Objective: I want to group the list of tables and their column names, so I want a dataframe like below:

TableName  Alias  ColumnName
table_1    T1     Col1
table_1    T1     Col2
table_2    T2     Col1
table_2    T2     Col1
table_3    T3     Col1
table_3    T3     Col1

As a first step, I need to use regex to get the strings immediately before and after the string "T" plus a number, i.e. T1, T2 and T3. I have tried:

for i in range(1,4):
    part_i = str('T')+str(i)+'.'
    str_i = s.partition(part_i)
    str_i_before = str_i[0]
    str_i_after = str_i[2].split('AS')[0]

But the output is not correct. I think I need to use regex here. I have tried the below pattern with re.findall() but no luck:

'(.+?) T^([\s\d]+)$(.+?)'

Any clue?
[ "Looking at the broader aim, I would approach it as follows:\n\nWith a regex, first identify the tables and their aliases, which occur after FROM or JOIN keywords. No need to assume aliases start with \"T\".\n\nCollect this information in a dictionary keyed by the table alias, and with a pair as corresponding data: a table name and an empty set for collecting column names.\n\nCreate a search pattern that finds one of the aliases followed by a point and a column name, and add the found columns in the above mentioned sets.\n\nConvert that dictionary into the flat list of (table, alias, column) tuples.\n\n\nHere is a function that does that:\ndef getcolumns(s):\n tables = {\n alias: [table, set()]\n for table, alias in re.findall(r\"\\b(?:FROM|JOIN)\\s*(\\w+)\\s*(\\w+)\", s)\n }\n \n for alias, column in re.findall(rf\"\\b({'|'.join(tables.keys())})\\.(\\w+)\", s):\n tables[alias][1].add(column)\n \n return [\n (table, alias, column)\n for alias, (table, columns) in tables.items()\n for column in columns\n ]\n\n\n# The example input:\ns = \"\"\"CREATE VIEW [dbo].[TestView] AS SELECT T1.Col1,T1.Col2,T2.Col1,T2.Col2,T3.Col1,T3.Col2\nFROM table_1 T1 LEFT JOIN table_2 T2 ON T1.Col1 = T2.Col2 INNER JOIN Table_3 T3 \nON T1.Col2 = T3.Col2\"\"\"\n\nlst = getcolumns(s)\n\nlst will be:\n[\n ('table_1', 'T1', 'Col1'), \n ('table_1', 'T1', 'Col2'),\n ('table_2', 'T2', 'Col1'),\n ('table_2', 'T2', 'Col2'),\n ('Table_3', 'T3', 'Col1'),\n ('Table_3', 'T3', 'Col2')\n]\n\n" ]
[ 1 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0074629375_python_regex.txt
Ctypes send an array of structures
I have to realize a project between Python and C. One of the instructions is to use ctypes, so I need to call my C function from Python. The latter needs me to send it two integer variables and a structure array. But I can't get the declaration to work. I don't know how to make the declaration.

C:

typedef struct // Defines a structure sent by the Python code, containing only the useful fields
{
    int UID;
    float prix;
    float poids;
    int quantite;
    int nb_commande;
} reference;

int sac_dos_brute(int nb_produit, reference *tab, float masse) // Receives the number of products and an array of structures in order to handle shipping
{
    tri(tab, nb_produit);
    remplir_camion(tab, nb_produit, masse);
    return tab;
}

Python:

from ctypes import *

dll = CDLL('C:/..../workplease.dll')

class reference(Structure):
    _fields_ = [
        ('UID', c_int),
        ('prix', c_float),
        ('poids', c_float),
        ('quantite', c_int),
        ('nb_commande', c_int),
    ]

'''dll.sac_dos_brute.argtypes = [c_int, c_int, POINTER(reference)]
dll.sac_dos_brute(nb_produits, nb_camion, tab)'''

nb_produit = 2
dll.sac_dos_brute.argtypes = [POINTER(reference)]

tab = [reference()]*2
tab[0].UID = 234
tab[0].prix = 12
tab[0].poids = 234
tab[0].quantite = 3
tab[0].nb_commande = 1

tab[1].UID = 237
tab[1].prix = 15
tab[1].poids = 256
tab[1].quantite = 6
tab[1].nb_commande = 2

dll.sac_dos_brute.argtype(c_int, c_int, c_, )

t = dll.sac_dos_brute(nb_produit, byref(tab))
[ "I suspect the problem is that you were creating a list[reference] instead of Array[reference] which translates to tab = (reference * 2)(). Let's see if this works:\nimport ctypes as c\n\ndef sac_dos_brute(nb_produit: int, reference: c.Array[reference], masse: float) -> None:\n dll = c.cdll.LoadLibrary('C:/..../workplease.dll')\n dll.sac_dos_brute(c.c_int(nb_produit), c.byref(reference), c.c_float(masse))\n\nnb_produit: int = 2\ntab = (reference * 2)()\ntab[0].UID = 234\ntab[0].nom = b'Iphone'\ntab[0].prix = 12.0\ntab[0].poids = 234.0\ntab[0].categorie = b'Telephone'\ntab[0].marque = b'Apple'\ntab[0].annee = 2020\ntab[0].quantite = 3\ntab[0].nb_commande = 1\ntab[0].avis = 5\ntab[1].UID = 237\ntab[1].nom = b'Ipad'\ntab[1].prix = 15.0\ntab[1].poids = 256.0\ntab[1].categorie = b'Tablette'\ntab[1].marque = b'Apple'\ntab[1].annee = 2022\ntab[1].quantite = 6\ntab[1].nb_commande = 2\ntab[1].avis = 4\nmasse: float = 0.0\n\nsac_dos_brute(nb_produit, tab, masse)\n\n", "Since C profile is\nint sac_dos_brute(int nb_produit, reference *tab, float masse) ;\n\nargtypes should be\ndll.sac_dos_brute.argtypes = [c_int, POINTER(reference), c_float]\n\nThat one, I am sure you've already tried, and is false in your code just because you are in the process of trying many things. But just to be sure...\nThen, the way you are creating the array is not correct.\n[reference()]*2\n\nis a call to reference(), which creates one structure, repeated in an array. You don't want a python list containing several structures (even less if it is several times the same one). You wan't an array to a reference, and to be more accurate, an array to 2 of them\nSo, it should be\ntab = (reference * 2)()\n\n(I see that you've already been told that while I was away)\nBut then, argument passing is way easier. You don't need byref here (byref is when you are passing a structure, for example, but want the C code to receive a pointer to it. You don't want that, since what you have, tab, is already an array. 
It will be passed as a pointer to function)\nSo, simply call\ndll.sac_dos_brute(nb_produit, tab, masse)\n\nSo altogether (without any change to your C code, assuming that it reflects the change you've made since in your python code)\nfrom ctypes import *\n\ndll = CDLL('C:/..../workplease.dll')\n\nclass reference (Structure):\n _fields_ =[\n ('UID',c_int),\n ('prix',c_float),\n ('poids',c_float),\n ('quantite',c_int),\n ('nb_commande',c_int),\n ]\n\ndll.sac_dos_brute.argtypes=[c_int, POINTER(reference), c_float]\n\n\nnb_produit=2\ntab = (reference*nb_produit)()\n\ntab[0].UID = 234\ntab[0].prix= 12\ntab[0].poids= 234\ntab[0].quantite= 3\ntab[0].nb_commande=1\n\ntab[1].UID = 237\ntab[1].prix= 15\ntab[1].poids= 256\ntab[1].quantite= 6\ntab[1].nb_commande=2\n\nmasse=3.14 # Just something I've chosen to fill the holes\n \nt = dll.sac_dos_brute(nb_produit, tab, masse)\n\nAnd of course, from there, you can't put back the fields you've removed, as long as you put them back in both (C and python) structure, and in the same order.\nFor what I saw, you were using them correctly (using b-string, as you should).\n", "Make sure your .argtypes and .restype exactly match the C function in type and order of parameters.\nConstructing a ctypes array is done by (type * size)(item, item, ...).\nBelow is C DLL code to print what was received for debugging:\ntest.c\n#include <stdio.h>\n\n#ifdef _WIN32\n# define API __declspec(dllexport)\n#else\n# define API\n#endif\n\ntypedef struct {\n int UID;\n float prix;\n float poids;\n int quantite;\n int nb_commande;\n} reference;\n\nAPI int sac_dos_brute(int nb_produit, reference *tab, float masse) {\n printf(\"masse = %f\\n\", masse);\n for(int i = 0; i < nb_produit; ++i)\n printf(\"tab[%d] = %d, %f ,%f, %d, %d\\n\",\n i, tab[i].UID, tab[i].prix, tab[i].poids, tab[i].quantite, tab[i].nb_commande);\n return 123;\n}\n\ntest.py\nimport ctypes as ct\n\nclass Reference(ct.Structure):\n\n _fields_ = (('UID', ct.c_int),\n ('prix', ct.c_float),\n ('poids', ct.c_float),\n ('quantite', ct.c_int),\n ('nb_commande', ct.c_int))\n\n # Tell class how to display itself\n def __repr__(self):\n return (f'Reference(UID={self.UID}, prix={self.prix}, poids={self.poids}, '\n f'quantite={self.quantite}, nb_commande={self.nb_commande})')\n\ndll = ct.CDLL('./test')\n# Make sure argument types and restype match the C function in type and order\ndll.sac_dos_brute.argtypes = ct.c_int, ct.POINTER(Reference), ct.c_float\ndll.sac_dos_brute.restype = ct.c_int\n\n# Efficient way to create an array of two Reference types.\ntab = (Reference * 2)(Reference(234, 12, 234, 3, 1),\n Reference(237, 15, 256, 6, 2))\n\n# Test __repr__\nfor t in tab:\n print(t)\n\n# Test the function\nprint(dll.sac_dos_brute(len(tab), tab, 1.25))\n\nOutput:\nReference(UID=234, prix=12.0, poids=234.0, quantite=3, nb_commande=1)\nReference(UID=237, prix=15.0, poids=256.0, quantite=6, nb_commande=2)\nmasse = 1.250000\ntab[0] = 234, 12.000000 ,234.000000, 3, 1\ntab[1] = 237, 15.000000 ,256.000000, 6, 2\n123\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "c", "ctypes", "python" ]
stackoverflow_0074628879_c_ctypes_python.txt
Q: Why is the "1" after sum necessary to avoid a syntax error Why does this work: def hamming_distance(dna_1,dna_2): hamming_distance = sum(1 for a, b in zip(dna_1, dna_2) if a != b) return hamming_distance As opposed to this: def hamming_distance(dna_1,dna_2): hamming_distance = sum(for a, b in zip(dna_1, dna_2) if a != b) return hamming_distance I get this error: Input In [90] hamming_distance = sum(for a, b in zip(dna_1, dna_2) if a != b) ^ SyntaxError: invalid syntax I expected the function to work without the 1 after the () A: You wrote a generator expression. Generator expressions must produce a value (some expression to the left of the first for). Without it, you're saying "please sum all the lack-of-values not-produced by this generator expression". Ask yourself: What does a genexpr that produces nothing even mean? What is sum summing when it's being passed a series of absolute nothing? You could write a shorter genexpr with the same effect with: hamming_distance = sum(a != b for a, b in zip(dna_1, dna_2)) since bools have integer values of 1 (for True) and 0 (for False), so it would still work, but it would be slower than sum(1 for a, b in zip(dna_1, dna_2) if a != b) (which produces fewer values for sum to work on and, at least on some versions of Python, allows sum to operate faster, since it has a fast path for summing small exact int types that bool breaks). A: The working expression can be unrolled into something like this: hamming_distance = 0 for a, b in zip(dna_1, dna_2): if a != b: hamming_distance += 1 Without a number after +=, what should Python add? It doesn't know, and neither do we. If this "unrolled" syntax or your code's relationship to it is new to you, probably start by reading up on list comprehensions, which generalize into generator expressions (which is what you have).
Why is the "1" after sum necessary to avoid a syntax error
Why does this work:

def hamming_distance(dna_1,dna_2):
    hamming_distance = sum(1 for a, b in zip(dna_1, dna_2) if a != b)
    return hamming_distance

As opposed to this:

def hamming_distance(dna_1,dna_2):
    hamming_distance = sum(for a, b in zip(dna_1, dna_2) if a != b)
    return hamming_distance

I get this error:

  Input In [90]
    hamming_distance = sum(for a, b in zip(dna_1, dna_2) if a != b)
                           ^
SyntaxError: invalid syntax

I expected the function to work without the 1 after the ()
[ "You wrote a generator expression. Generator expressions must produce a value (some expression to the left of the first for). Without it, you're saying \"please sum all the lack-of-values not-produced by this generator expression\".\nAsk yourself:\n\nWhat does a genexpr that produces nothing even mean?\nWhat is sum summing when it's being passed a series of absolute nothing?\n\nYou could write a shorter genexpr with the same effect with:\nhamming_distance = sum(a != b for a, b in zip(dna_1, dna_2))\n\nsince bools have integer values of 1 (for True) and 0 (for False), so it would still work, but it would be slower than sum(1 for a, b in zip(dna_1, dna_2) if a != b) (which produces fewer values for sum to work on and, at least on some versions of Python, allows sum to operate faster, since it has a fast path for summing small exact int types that bool breaks).\n", "The working expression can be unrolled into something like this:\nhamming_distance = 0\nfor a, b in zip(dna_1, dna_2):\n if a != b:\n hamming_distance += 1\n\nWithout a number after +=, what should Python add? It doesn't know, and neither do we.\nIf this \"unrolled\" syntax or your code's relationship to it is new to you, probably start by reading up on list comprehensions, which generalize into generator expressions (which is what you have).\n" ]
[ 0, 0 ]
[]
[]
[ "function", "python", "sum" ]
stackoverflow_0074631508_function_python_sum.txt
How to design an interface with an irregular grid layout
I'm trying to design an interface in Python with Kivy. I need to add widgets to my App in a precise scheme, let's say a grid with two rows and three columns. I would not add widgets in all six positions. I am not sure that the GridLayout is the most suitable, so I started modifying a more complex layout.

from kivy.app import App
from kivy.lang import Builder
from kivy.uix.floatlayout import FloatLayout

Builder.load_string("""
<Boxes>:
    AnchorLayout:
        anchor_x: 'center'
        anchor_y: 'top'
        BoxLayout:
            orientation: 'vertical'
            padding: 20
            BoxLayout:
                orientation: 'horizontal'
                Button:
                    text: "1"
                Button:
                    text: "2"
                Button:
                    text: "3"
            BoxLayout:
                orientation: 'horizontal'
                Button:
                    text: "4"
                Button:
                    text: "6"
""")

class Boxes(FloatLayout):
    pass

class TestApp(App):
    def build(self):
        return Boxes()

if __name__ == '__main__':
    TestApp().run()

This code generates this layout:

I would like to have the button "4" in the first column of the second row, and the button "6" in the third column of the second row, thus giving space to another button not currently added. The buttons "4" and "6" should be aligned with buttons "1" and "3" respectively. Any suggestion? Which is the most suitable layout for an irregular grid scheme? Is there a way to add widgets in a Kivy grid layout specifying their position in terms of row and column?
[ "You can just add a Widget to take up the space that you want blank:\n BoxLayout:\n orientation: 'horizontal'\n Button:\n text: \"4\"\n Widget:\n Button:\n text: \"6\" \"\"\")\n\n" ]
[ 0 ]
[]
[]
[ "kivy", "layout", "python", "user_interface" ]
stackoverflow_0074629295_kivy_layout_python_user_interface.txt
AWS textract: UnsupportedDocumentException for PNG and JPG images. Error occurs only in production and not locally
I'm getting the following error when I deploy a FastAPI app to AWS Lambda that uses the AWS Textract service. The strange thing is that it works perfectly fine in my local development environment, but throws this error when I deploy it.

Error:

botocore.errorfactory.UnsupportedDocumentException: An error occurred (UnsupportedDocumentException) when calling the AnalyzeDocument operation: Request has unsupported document format

Following is my code:

def extractImage(form_image: bytes = File()):
    response = textractclient.analyze_document(
        Document={
            "Bytes": form_image,
        },
        FeatureTypes=["FORMS"],
    )

The images that I've tried are png and jpg images and not pdf.
[ "Are you using the file pointer for any other purpose than this function call ?\nI just had the same error when trying to call client.analyze_document on the open file using the sample code provided by aws. They give the following code :\nenter image description here\nBut it turned out (in my case), that calling image = Image.open(img_file) was actually changing the bytes of the file pointer and made it not working anymore. It might be the same in your case if you are doing something similar.\n" ]
[ 2 ]
[]
[]
[ "amazon_textract", "amazon_web_services", "api", "aws_lambda", "python" ]
stackoverflow_0074530762_amazon_textract_amazon_web_services_api_aws_lambda_python.txt
Hangman written in Python doesn't recognise correct letters guessed
So I'm working on my first project, which is hangman written in Python. Everything works, like the visuals and the incorrect letters. However, it is unable to recognise a correct letter guessed.

import random
from hangman_visual import lives_visual_dict
import string

# random word
with open('random.txt', 'r') as f:
    All_Text = f.read()
    words = list(map(str, All_Text.split()))
WORD2GUESS = random.choice(words).upper()
letters = len(WORD2GUESS)
print("_" * letters)

word_letters = set(WORD2GUESS)
alphabet = set(string.ascii_uppercase)
lives = 7
used_letters = set()

# user input side
while len(word_letters) > 0 and lives > 0:
    # letters used
    # ' '.join(['a', 'b', 'cd']) --> 'a b cd'
    print('You have', lives, 'lives left and you have used these letters: ', ' '.join(used_letters))

    # what current word is (ie W - R D)
    word_list = [letter if letter in used_letters else '-' for letter in WORD2GUESS]
    print(lives_visual_dict[lives])
    print('Current word: ', ' '.join(word_list))

    user_letter = input('Guess a letter: ').upper()
    if user_letter in alphabet - used_letters:
        used_letters.add(user_letter)
        if user_letter in WORD2GUESS:
            used_letters.remove(user_letter)
            print('')

        else:
            lives = lives - 1  # takes away a life if wrong
            print('\nYour letter,', user_letter, 'is not in the word.')

    elif user_letter in used_letters:
        print('\nYou have already used that letter. Guess another letter.')

    else:
        print('\nThat is not a valid letter.')

if lives == 0:
    print(lives_visual_dict[lives])
    print('You died, sorry. The word was', WORD2GUESS)
else:
    print('YAY! You guessed the word', WORD2GUESS, '!!')

I've tried this, but it still won't recognise the correct guess.

    # what current word is (ie W - R D)
    word_list = [letter if letter in used_letters else '-' for letter in WORD2GUESS]
    print(lives_visual_dict[lives])
    print('Current word: ', ' '.join(word_list))
[ "Your error is this: When checking if the user_letter is in the WORD2GUESS, you falsely remove it from used_letters, but it has to stay in the set.\nAlso you are missing some way of escaping the while loop when fully guessing the word. That could be done at the same spot by checking if used_letters contain all the letters in WORD2GUESS.\nSomething like this:\nimport random\nfrom hangman_visual import lives_visual_dict\nimport string\n\n# random word \n\nwith open('random.txt', 'r') as f:\n All_Text = f.read()\n words = list(map(str, All_Text.split()))\nWORD2GUESS = random.choice(words).upper()\nletters = len(WORD2GUESS)\nprint(\"_\" * letters)\n\nword_letters = set(WORD2GUESS)\nalphabet = set(string.ascii_uppercase)\nlives = 7\nused_letters = set()\n\n# user input side \nwhile len(word_letters) > 0 and lives > 0:\n # letters used\n # ' '.join(['a', 'b', 'cd']) --> 'a b cd'\n print('You have', lives, 'lives left and you have used these letters: ', ' '.join(used_letters))\n\n # what current word is (ie W - R D)\n word_list = [letter if letter in used_letters else '-' for letter in WORD2GUESS]\n print(lives_visual_dict[lives])\n print('Current word: ', ' '.join(word_list))\n\n user_letter = input('Guess a letter: ').upper()\n if user_letter in alphabet - used_letters:\n used_letters.add(user_letter)\n if user_letter in WORD2GUESS:\n print('\\nThat is a correct letter.')\n if set(WORD2GUESS).issubset(used_letters):\n break\n\n else:\n lives = lives - 1 # takes away a life if wrong\n print('\\nYour letter,', user_letter, 'is not in the word.')\n\n elif user_letter in used_letters:\n print('\\nYou have already used that letter. Guess another letter.')\n\n else:\n print('\\nThat is not a valid letter.')\n\nif lives == 0:\n print(lives_visual_dict[lives])\n print('You died, sorry. The word was', WORD2GUESS)\nelse:\n print('YAY! You guessed the word', WORD2GUESS, '!!')\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074628556_python.txt
Struggling with python's import mechanism
I am an experienced Java enterprise developer but very new to a Python enterprise development shop. I am currently struggling to understand why some imports work while others don't.

Some background: our dev team recently upgraded Python from 3.6 to 3.10.5, and the following is our package structure:

src/
    bunch of files (dockerfile, Pipfile, requrirements.txt, shell scripts, etc)
    package/
        __init__.py
        moduleA.py
        subpackage1/
            __init__.py
            moduleX.py
            moduleY.py
        subpackage2/
            __init__.py
            moduleZ.py
    tests/
        __init__.py
        test1.py

Now, inside moduleA.py, I am trying to import subpackage2/moduleZ.py like so:

from .subpackage2 import moduleZ

But I get the error saying:

ImportError: attempted relative import with no known parent package

The funny thing is that if I move moduleA.py out of package/ and into src/ then it is able to find everything. I am not sure why this is the case. I run moduleA.py by executing python package/moduleA.py.

Now, I read that maybe there is a problem because you have to give a -m parameter if running a module as a script (or something along those lines). But if I do that, I get the following error:

ModuleNotFoundError: No module named 'package/moduleA.py'

I even tried running package1/moduleA and removing the .py, but that does not work either. I can understand why, as I technically never installed it?

All of this happened because apparently the tests broke, and to make them work they added relative imports. They changed the import from "from subpackage2 import moduleZ" to "from .subpackage2 import moduleZ" and the tests started working, but the app started failing.

Any understanding I can get would be much appreciated.
[ "The -m parameter is used with the import name, not the path. So you'd use python3 -m package.moduleA (with . instead of /, and no .py), not python3 -m package/moduleA.py.\nThat said, it only works if package.moduleA is locatable from one of the roots in sys.path. Shy of installing the package, the simplest way to make it work is to ensure your working directory is src (so package exists in the working directory):\n$ cd path/to/src\n$ python3 -m package.moduleA\n\nand, with your existing setup, if moduleA.py includes a from .subpackage2 import moduleZ, the import should work; Python knows package.moduleA is a module within package, so it can use a relative import to look for a sibling package to moduleA named subpackage2, and then inside it it can find moduleZ.\nObviously, this is brittle (it only works if you cd to the src root directory before running Python, or hack the path to src in PYTHONPATH, which is terrible hack if the code ever has to be run by anyone else); ideally you make this an installable package, install it (in global site-packages, user site-packages, or within a virtual environment created with the built-in venv module or the third-party virtualenv module), and then your working directory no longer matters (since the site-packages will be part of your sys.path automatically). For simple testing, as long as the working directory is correct (not sure what it was for you), and you use -m correctly (you were using it incorrectly), relative imports will work, but it's not the long term solution.\n", "So first of all - the root importing directory is the directory from which you're running the main script.\nThis directory by default is the root for all imports from all scripts.\nSo if you're executing script from directory src you can do such imports:\nfrom package.moduleA import *\nfrom package.subpackage1.moduleX import *\n\nBut now in files moduleA and moduleX you need to make imports based on root folder. If you want to import something from module moduleY inside moduleX you need to do:\n# this is inside moduleX\nfrom package.subpackage1.moduleY import *\n\nThis is because python is looking for modules in specific locations.\nFirst location is your root directory - directory from which you execute your main script.\nSecond location is directory with modules installed by PIP.\nYou can check all directories using following:\nimport sys\nfor p in sys.path:\n print(p)\n\nNow to solve your problem there are couple solutions.\nThe fast one but IMHO not the best one is to add all paths with submodules to sys.path - list variable with all directories where python is looking for modules.\nnew_path = \"/path/to/application/app/folder/src/package/subpackage1\"\nif new_path not in sys.path:\n sys.path.append(new_path)\n\nAnother solution is to use full path for imports in all package modules:\nfrom package.subpackage1.moduleX import *\n\nI think in your case it will be the correct solution.\nYou can also combine 2 solutions.\nFirst add folders with subpackages to sys.path and use subpackage folders as a root folders for imports. But it's good solution only if you have complex submodule structure. And it's not the best solution if in future you will need to deploy your package as a wheel or share between multiple projects.\n" ]
[ 2, 0 ]
[]
[]
[ "importerror", "python", "python_import" ]
stackoverflow_0074631218_importerror_python_python_import.txt
Is this correct for a rock-paper-scissors game? | Python
The question is: Make a two-player Rock-Paper-Scissors game. (Hint: Ask for player plays (using input), compare them, print out a message of congratulations to the winner, and ask if the players want to start a new game)

player1 = input("Player 1: ")
player2 = input("Player 2: ")

if player1 == "rock" and player2 == "paper":
    print("Player 2 is the winner!!")
elif player1 == "rock" and player2 == "scissor":
    print("Player 1 is the winner!!")
elif player1 == "paper" and player2 == "scissor":
    print("Player 2 is the winner!!")
elif player1 == player2:
    print("It's a tie!!")

After asking if the players want to start a new game, how do I restart the code?
[ "You could use a while loop, i.e. something like\nplay = 'Y'\n\nwhile play=='Y':\n # your original code\n\n play = input(\"Do you want to play again (Y/N): \")\n\nAlso I'd suggest you check that the responses from the user are valid (i.e. what if they type potatoe, in your code nothing will print if one or both players don't respond with rock, scissors or paper. You should also look at the lower() command and convert the answer to lower case for this reason.\n", " options = {\"rock\" : 0, \"paper\": 1, \"scissors\": 2} #correct inputs\n ratio = options[player1] - options[player2]\n\n if player1 not in options.keys() and player2 not in options.keys():\n return 0 #tie, both inputs wrong\n\n if player1 not in options.keys():\n return 2 #Player1 input wrong, Player 2 wins\n\n if player2 not in options.keys():\n return 1 #Player2 input wrong, Player 1 wins\n \n if ratio == 0:\n return 0 #tie\n if ratio == 1 or ratio == -2:\n return 1 #Player 1 wins\n return 2 #Player 2 wins\n\n" ]
[ 0, 0 ]
[]
[]
[ "input", "python", "reset" ]
stackoverflow_0071837673_input_python_reset.txt
How to replace a value in a list while reading input?
I have a string in an input that needs to be split into separate values and left in a list. I am using the following construct to enter values. How can the value -1 be replaced with the variable var?

import sys
readline = sys.stdin.readline
var = 10**5

current_line = list(map(int, readline().split()))

Example input:

-1 3 0 -1 4 5

Required value of current_line:

[var, 3, 0, var, 4, 5]
[ "Just check for -1s as you go and replace them with var:\ncurrent_line = [var if i == -1 else i for i in map(int, readline().split())]\n\nvar if i == -1 else i is Python's ternary conditional operator (more precisely \"conditional expression\"), so -1 values get replaced, and all others are kept unchanged.\n", "You can use a simple list comprehension for that:\nx = [-1, 3, 0, -1, 4, 5]\n\nvar = 10**5\n\n# loop through all elements and if the element is -1 replace it with var, else keep the old element\nx = [var if i == -1 else i for i in x]\n\n", "Replace -1 with var as you iterate over the list of values.\ncurrent_line = [var if x == -1 else x for x in map(int, readline().split())]\n\nI'd probably forgo the use of map altogether in this case. No use converting '-1' to -1 if you're going to immediately discard the int value anyway.\ncurrent_line = [var if x == '-1' else int(x) for x in readline().split()]\n\nFor a different emphasis, negate the comparison.\ncurrent_line = [int(x) if x != '-1' else var for x in readline().split()]\n\n" ]
[ 1, 1, 1 ]
[]
[]
[ "input", "python" ]
stackoverflow_0074631596_input_python.txt
How to store a massive number of byte strings while having easy access to any small portion of them
Here's my problem. I have a huge dataset of videos. To save space as much as possible, I encode each image frame to a byte string with PyTurboJPEG. In this way, each video is a list of byte strings. Then I use pickle to dump this list of byte strings to the disk. When I want to access a certain video, I load the .pkl file with pickle.load first and then decode the byte strings into numpy.ndarray. This strategy is efficient when I want to access the whole video. However, if I want to access only a part of the whole video, I still have to load the whole video first and then slice it to get a small partition (for example, in some video recognition tasks, the original videos of some dataset are too long and not suitable for training. For each video, we only use a part of it). This method is super io-burdensome and greatly slows the training process. Is there any solution that we can store a list of byte strings in the disk and only load the part we want to RAM? A possible approach is using NpyAppendArray, but it doesn't support object arrays (in our cases, byte string) and directly saving the full numpy.ndarray instead of byte strings on the disk takes too much space. A solution that we can store a list of byte strings in the disk and only load the part we want to RAM
[ "I found an elegant solution. Just read the original binary stream. For examples, we have three images in a video, the length of the bytes of the first image is 8192, the second is 8633, and the third is 8321. We save the bytes of all three images in one file named video.bin, and the following codes shows how we can access the 2nd and 3rd images without reading the 1st image:\nlens = [8192, 8633, 8321] # should be saved when you create the binary file\nfp = open(\"./video.bin\", \"rb+\")\nst = 1 # the start index of images we want\ned = 2 # the end index of images we want\ned_offsets = lens.cumsum()\nst_offsets = ed_offsets.copy()\nst_offsets[0] = 0\nst_offsets[1:] = ed_offsets[:-1] # calculate the start positions and end positions of each image\nfp.seek(st_offsets[st])\nhyp = fp.read(lens[st: ed].sum())\ns = st_offsets[st: ed]\ns -= s[0]\ne = s.copy()\ne[:-1] = s[1:]\ne[-1] = len(hyp)\nhyp = [hyp[s[i]: e[i]] for i in range(len(s))]\n\nhyp is the array only containing the 2nd and the 3rd images.\nThe core functions we use is fp.seek and fp.read. See the official documentation for more information.\n" ]
[ 0 ]
[]
[]
[ "large_files", "numpy", "python" ]
stackoverflow_0074628765_large_files_numpy_python.txt
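The offset arithmetic in the answer above assumes numpy semantics: a plain Python list has no cumsum(), a list slice has no sum(), and with st = 1 and ed = 2 the slice covers only the second image. A minimal runnable sketch of the same idea, assuming the per-image lengths are kept as a numpy array and the end index is exclusive:
import numpy as np

lens = np.array([8192, 8633, 8321])   # per-image byte lengths, saved next to video.bin
ed_offsets = lens.cumsum()            # end offset of each image inside the file
st_offsets = np.concatenate(([0], ed_offsets[:-1]))  # start offset of each image

st, ed = 1, 3                         # images 2 and 3 (0-based start, exclusive end)
with open("./video.bin", "rb") as fp:
    fp.seek(int(st_offsets[st]))
    blob = fp.read(int(lens[st:ed].sum()))

# split the single read back into individual JPEG byte strings
starts = st_offsets[st:ed] - st_offsets[st]
ends = np.concatenate((starts[1:], [len(blob)]))
images = [blob[int(s):int(e)] for s, e in zip(starts, ends)]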
Q: How to convert Class to Class I need to upload some items in DynamoDB. I am using boto3 resource for the same. The item I need to insert are a couple of JSON strings dumped together. I need to convert them to dict so I can insert them in DynamoDB. def update_record_Dynamo( company_name, first_name, last_name, email, status, phone): table_name = "cpe-uat-ddb-cem-newpro" table = boto3.resource("dynamodb").Table(table_name) response = table.get_item(Key={'company': company_name}) existing_user = response['Item']['users'] # user looks like this = ''' { "first_name": "bala", "last_name": "Reaburn", "email": "b1010gmail.com", "phone": "9053258111", "status": "administrator" } ''' flag = { "first_name": first_name, "last_name": last_name, "email": email, "phone": phone, "status": status } flag_dict = json.dumps([existing_user, flag]) # new_users = jam+flag # new_users = json.loads(flag_dict) flag_dict = json.dumps(flag_dict) flag_dict = json.loads(flag_dict) final = json.dumps(eval(str(flag_dict))) print(type(final)) update_record_response = table.update_item( Key={'company': company_name}, AttributeUpdates={ 'users': final, }, ) return update_record_response This is the error I get: botocore.exceptions.ParamValidationError: Parameter validation failed: Invalid type for parameter AttributeUpdates.users, value: [ { "last_name": "", "first_name": "", "email": "", "status": "" } , { "first_name": "bala", "last_name": "Reaburn", "email": "b1010gmail.com", "phone": "9053258688", "status": "administrator" } ], type: <class 'str'>, valid types: <class 'dict'> Invalid type for parameter AttributeUpdates.users, value: [{"last_name": "", "first_name": "", "email": "", "status": ""}, {"first_name": "bala", "last_name": "Reaburn", "email": "b1010gmail.com", "phone": "9053258688", "status": "administrator"}], type: <class 'str'>, valid types: <class 'dict'> A: It looks like you're uploading it as a list of dictionaries, instead of dictionaries themselves. flag_dict = json.dumps([existing_user, flag]) Creates the list, and you can see in the error that comes back that it still has the brackets around it. You'll want to recreate it as a dictionary of dictionaries if you want it in that style. Edit: To rewrite the line above try flag_dict = { 'existing_user': existing_user, 'flag': flag } Edit 2: flag = {"first_name":first_name, "last_name":last_name, "email":email,"phone":phone, "status":status} final = { 'existing_user': existing_user, 'flag': flag } update_record_response = table.update_item( Key={'company': company_name}, AttributeUpdates={ 'users': final, }, ) return update_record_response
How to convert <class 'str'> to <class 'dict'>
I need to upload some items in DynamoDB. I am using boto3 resource for the same. The item I need to insert are a couple of JSON strings dumped together. I need to convert them to dict so I can insert them in DynamoDB. def update_record_Dynamo( company_name, first_name, last_name, email, status, phone): table_name = "cpe-uat-ddb-cem-newpro" table = boto3.resource("dynamodb").Table(table_name) response = table.get_item(Key={'company': company_name}) existing_user = response['Item']['users'] # user looks like this = ''' { "first_name": "bala", "last_name": "Reaburn", "email": "b1010gmail.com", "phone": "9053258111", "status": "administrator" } ''' flag = { "first_name": first_name, "last_name": last_name, "email": email, "phone": phone, "status": status } flag_dict = json.dumps([existing_user, flag]) # new_users = jam+flag # new_users = json.loads(flag_dict) flag_dict = json.dumps(flag_dict) flag_dict = json.loads(flag_dict) final = json.dumps(eval(str(flag_dict))) print(type(final)) update_record_response = table.update_item( Key={'company': company_name}, AttributeUpdates={ 'users': final, }, ) return update_record_response This is the error I get: botocore.exceptions.ParamValidationError: Parameter validation failed: Invalid type for parameter AttributeUpdates.users, value: [ { "last_name": "", "first_name": "", "email": "", "status": "" } , { "first_name": "bala", "last_name": "Reaburn", "email": "b1010gmail.com", "phone": "9053258688", "status": "administrator" } ], type: <class 'str'>, valid types: <class 'dict'> Invalid type for parameter AttributeUpdates.users, value: [{"last_name": "", "first_name": "", "email": "", "status": ""}, {"first_name": "bala", "last_name": "Reaburn", "email": "b1010gmail.com", "phone": "9053258688", "status": "administrator"}], type: <class 'str'>, valid types: <class 'dict'>
[ "It looks like you're uploading it as a list of dictionaries, instead of dictionaries themselves.\nflag_dict = json.dumps([existing_user, flag])\nCreates the list, and you can see in the error that comes back that it still has the brackets around it. You'll want to recreate it as a dictionary of dictionaries if you want it in that style.\nEdit: To rewrite the line above try\nflag_dict = {\n 'existing_user': existing_user,\n 'flag': flag\n }\n\nEdit 2:\nflag = {\"first_name\":first_name, \"last_name\":last_name, \"email\":email,\"phone\":phone, \"status\":status}\n \nfinal = {\n 'existing_user': existing_user,\n 'flag': flag\n } \n \nupdate_record_response = table.update_item(\n Key={'company': company_name},\n AttributeUpdates={\n 'users': final,\n },\n )\n \n return update_record_response\n\n" ]
[ 0 ]
[]
[]
[ "amazon_dynamodb", "dictionary", "python" ]
stackoverflow_0074631481_amazon_dynamodb_dictionary_python.txt
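If the intent in the question above is to append the new user to an existing users list rather than overwrite it, boto3's UpdateExpression with list_append avoids the json.dumps/eval round-trips entirely. A sketch, assuming the users attribute is already stored as a DynamoDB list (both this and the legacy AttributeUpdates parameter expect native Python dicts and lists, not JSON strings):
import boto3

# company_name, first_name, last_name, email, phone, status come from the
# function arguments in the question.
table = boto3.resource("dynamodb").Table("cpe-uat-ddb-cem-newpro")
new_user = {"first_name": first_name, "last_name": last_name,
            "email": email, "phone": phone, "status": status}

update_record_response = table.update_item(
    Key={"company": company_name},
    UpdateExpression="SET #u = list_append(#u, :new_user)",
    ExpressionAttributeNames={"#u": "users"},
    ExpressionAttributeValues={":new_user": [new_user]},
)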
Q: Decoding several text files from Byte to UTF-8 I'm currently trying to loop over roughly 9000 .txt files in python to extract data and add them to a joined pandas data frame. The .txt data is stored in bytes, so in order to access it I was told to use a decoder. Because I'm interested in preserving special characters, I would like to use the UTF-8 decoder, but I'm getting the following error when trying to do so: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 3131: invalid start byte For some reason, the code works just fine when using a 'ISO-8859-1' decoder, but this obviously messes up all special characters. Does anyone know how to fix this? I'm pasting my code below! Also, the decoding works for the first ~1600 .txt files in my dataset, but for the rest it doesn't. decode_counter = 0 for index, i in enumerate(corpus[0]): decode_counter += 1 corpus.iloc[index][0] = i.decode('UTF-8') The corpus variable contains the name of the .txt file as an index, and the contents of the individual .txt files in a column named 0. Thank you very much! A: Maybe you could try every codec available in your environment and check which result fits best. Here is a way of doing that: import os, codecs, encodings from collections import OrderedDict from typing import Union from cprinter import TC from input_timeout import InputTimeout class CodecChecker: def __init__(self): self.encodingdict = self.get_codecs() self.results = OrderedDict() def get_codecs(self): dir = encodings.__path__[0] codec_names = OrderedDict() for filename in os.listdir(dir): if not filename.endswith(".py"): continue name = filename[:-3] try: codec_names[name] = OrderedDict({"object": codecs.lookup(name)}) except Exception as Fehler: pass return codec_names def try_open_file(self, path: str, readlines: int = 0): self.results = OrderedDict() results = OrderedDict() if readlines == 0: for key, item in self.encodingdict.items(): results[key] = {"strict_encoded": [], "strict_bad": True} try: with open(path, encoding=key) as f: data = f.read() results[key]["strict_encoded"].append(data) results[key]["strict_bad"] = False except Exception as fe: results[key]["strict_encoded"].append(str(fe)) continue else: for key, item in self.encodingdict.items(): results[key] = {"strict_encoded": [], "strict_bad": True} try: with open(path, encoding=key) as f: for ini, line in enumerate(f.readlines()): if ini == readlines: break results[key]["strict_encoded"].append(line[:-1]) results[key]["strict_bad"] = False except Exception as fe: results[key]["strict_encoded"].append(str(fe)) continue self.results = results.copy() return self def try_convert_bytes(self, variable: bytes): self.results = OrderedDict() results = OrderedDict() modes = ["strict", "ignore", "replace"] for key, item in self.encodingdict.items(): results[key] = { "strict_encoded": [], "strict_bad": True, "ignore_encoded": [], "ignore_bad": True, "replace_encoded": [], "replace_bad": True, } for mo in modes: try: results[key][f"{mo}_encoded"].append( item["object"].decode(variable, mo) ) results[key][f"{mo}_bad"] = False except Exception as Fe: results[key][f"{mo}_encoded"].append(str(Fe)) self.results = results.copy() return self def print_results( self, pause_after_interval: Union[int, float] = 0, items_per_interval: int = 0 ): counter = 0 for key, item in self.results.items(): if pause_after_interval != 0 and items_per_interval != 0: if items_per_interval == counter and counter > 0: i = InputTimeout( timeout=pause_after_interval, input_message=f"Press any key to 
continue or wait {pause_after_interval} seconds", timeout_message="", defaultvalue="", cancelbutton=None, show_special_characters_warning=None, ).finalvalue counter = 0 print( f'\n\n\n{"Codec".ljust(20)}: {str(TC(key).bg_cyan.fg_black)}'.ljust(100) ) if "strict_bad" in item and "strict_encoded" in item: print(f'{"Mode".ljust(20)}: {TC("strict").fg_yellow.bg_black}') if item["strict_bad"] is False: if isinstance(item["strict_encoded"][0], tuple): if item["strict_bad"] is False: try: print( f"""{'Length'.ljust(20)}: {TC(f'''{item['strict_encoded'][0][1]}''').fg_purple.bg_black}\n{'Converted'.ljust(20)}: {TC(f'''{item['strict_encoded'][0][0]}''').fg_green.bg_black}""" ) except Exception: print( f"""Problems during printing! Raw string: {item['strict_encoded'][0][0]!r}""" ) if item["strict_bad"] is True: try: print( f"""{'Length'.ljust(20)}: {TC(f'''{"None"}''').fg_red.bg_black}\n{'Converted'.ljust(20)}: {TC(f'''{item['strict_encoded'][0]}''').fg_red.bg_black}""" ) except Exception: print( f"""Problems during printing! Raw string: {item['strict_encoded'][0][0]!r}""" ) if isinstance(item["strict_encoded"][0], str): if item["strict_bad"] is False: itemlen = len("".join(item["strict_encoded"])) concatitem = "\n" + "\n".join( [ f"""Line: {str(y).ljust(14)} {str(f'''{x}''')}""" for y, x in enumerate(item["strict_encoded"]) ] ) try: print( f"""{'Length'.ljust(20)}: {TC(f'''{itemlen}''').fg_purple.bg_black}\n{'Converted'.ljust(20)}: {concatitem}""" ) except Exception: print( f"""Problems during printing! Raw string: {concatitem!r}""" ) if item["strict_bad"] is True: concatitem = TC( " ".join(item["strict_encoded"]) ).fg_red.bg_black try: print( f"""{'Length'.ljust(20)}: {TC(f'''{"None"}''').fg_red.bg_black}\n{'Converted'.ljust(20)}: {concatitem}""" ) except Exception: print( f"""Problems during printing! Raw string: {concatitem!r}""" ) print("") if "ignore_bad" in item and "ignore_encoded" in item: print(f'{"Mode".ljust(20)}: {TC("ignore").fg_yellow.bg_black}') if item["ignore_bad"] is False: if isinstance(item["ignore_encoded"][0], tuple): if item["ignore_bad"] is False: try: print( f"""{'Length'.ljust(20)}: {TC(f'''{item['ignore_encoded'][0][1]}''').bg_black.fg_lightgrey}\n{'Converted'.ljust(20)}: {TC(f'''{item['ignore_encoded'][0][0]}''').bg_black.fg_lightgrey}""" ) except Exception: print( f"""Problems during printing! Raw string: {item['ignore_encoded'][0][0]!r}""" ) print("") if "replace_bad" in item and "replace_encoded" in item: print(f'{"Mode".ljust(20)}: {TC("replace").fg_yellow.bg_black}') if item["replace_bad"] is False: if isinstance(item["replace_encoded"][0], tuple): if item["replace_bad"] is False: try: print( f"""{'Length'.ljust(20)}: {TC(f'''{item['replace_encoded'][0][1]}''').bg_black.fg_lightgrey}\n{'Converted'.ljust(20)}: {TC(f'''{item['replace_encoded'][0][0]}''').bg_black.fg_lightgrey}""" ) except Exception: print( f"""Problems during printing! Raw string: {item['replace_encoded'][0][0]!r}""" ) counter = counter + 1 return self if __name__ == "__main__": teststuff = b"""This is a test! Hi there! A little test! 
""" testfilename = "test_utf8.tmp" with open("test_utf8.tmp", mode="w", encoding="utf-8-sig") as f: f.write(teststuff.decode("utf-8-sig")) codechecker = CodecChecker() codechecker.try_open_file(testfilename, readlines=2).print_results( pause_after_interval=1, items_per_interval=10 ) codechecker.try_open_file(testfilename).print_results() codechecker.try_convert_bytes(teststuff.decode("cp850").encode()).print_results( pause_after_interval=1, items_per_interval=10 ) Or you simply run a script to replace all messed up characters. Since I am a German teacher, I have this problem frequently (encoding problems due to Umlaut). Here is a script to replace all characters (too big to post the script here): https://github.com/hansalemaos/LatinFixer/blob/main/__init__.py
Decoding several text files from bytes to UTF-8
I'm currently trying to loop over roughly 9000 .txt files in python to extract data and add them to a joined pandas data frame. The .txt data is stored in bytes, so in order to access it I was told to use a decoder. Because I'm interested in preserving special characters, I would like to use the UTF-8 decoder, but I'm getting the following error when trying to do so: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 3131: invalid start byte For some reason, the code works just fine when using a 'ISO-8859-1' decoder, but this obviously messes up all special characters. Does anyone know how to fix this? I'm pasting my code below! Also, the decoding works for the first ~1600 .txt files in my dataset, but for the rest it doesn't. decode_counter = 0 for index, i in enumerate(corpus[0]): decode_counter += 1 corpus.iloc[index][0] = i.decode('UTF-8') The corpus variable contains the name of the .txt file as an index, and the contents of the individual .txt files in a column named 0. Thank you very much!
[ "Maybe you could try every codec available in your environment and check which result fits best.\nHere is a way of doing that:\nimport os, codecs, encodings\nfrom collections import OrderedDict\nfrom typing import Union\nfrom cprinter import TC\nfrom input_timeout import InputTimeout\n\n\nclass CodecChecker:\n def __init__(self):\n self.encodingdict = self.get_codecs()\n self.results = OrderedDict()\n\n def get_codecs(self):\n dir = encodings.__path__[0]\n codec_names = OrderedDict()\n for filename in os.listdir(dir):\n if not filename.endswith(\".py\"):\n continue\n name = filename[:-3]\n try:\n codec_names[name] = OrderedDict({\"object\": codecs.lookup(name)})\n except Exception as Fehler:\n pass\n return codec_names\n\n def try_open_file(self, path: str, readlines: int = 0):\n self.results = OrderedDict()\n results = OrderedDict()\n if readlines == 0:\n for key, item in self.encodingdict.items():\n results[key] = {\"strict_encoded\": [], \"strict_bad\": True}\n\n try:\n with open(path, encoding=key) as f:\n data = f.read()\n results[key][\"strict_encoded\"].append(data)\n results[key][\"strict_bad\"] = False\n except Exception as fe:\n results[key][\"strict_encoded\"].append(str(fe))\n continue\n else:\n for key, item in self.encodingdict.items():\n results[key] = {\"strict_encoded\": [], \"strict_bad\": True}\n\n try:\n with open(path, encoding=key) as f:\n for ini, line in enumerate(f.readlines()):\n if ini == readlines:\n break\n results[key][\"strict_encoded\"].append(line[:-1])\n results[key][\"strict_bad\"] = False\n except Exception as fe:\n results[key][\"strict_encoded\"].append(str(fe))\n continue\n self.results = results.copy()\n return self\n\n def try_convert_bytes(self, variable: bytes):\n self.results = OrderedDict()\n\n results = OrderedDict()\n modes = [\"strict\", \"ignore\", \"replace\"]\n for key, item in self.encodingdict.items():\n results[key] = {\n \"strict_encoded\": [],\n \"strict_bad\": True,\n \"ignore_encoded\": [],\n \"ignore_bad\": True,\n \"replace_encoded\": [],\n \"replace_bad\": True,\n }\n for mo in modes:\n try:\n results[key][f\"{mo}_encoded\"].append(\n item[\"object\"].decode(variable, mo)\n )\n results[key][f\"{mo}_bad\"] = False\n except Exception as Fe:\n results[key][f\"{mo}_encoded\"].append(str(Fe))\n self.results = results.copy()\n\n return self\n\n def print_results(\n self, pause_after_interval: Union[int, float] = 0, items_per_interval: int = 0\n ):\n counter = 0\n for key, item in self.results.items():\n if pause_after_interval != 0 and items_per_interval != 0:\n if items_per_interval == counter and counter > 0:\n i = InputTimeout(\n timeout=pause_after_interval,\n input_message=f\"Press any key to continue or wait {pause_after_interval} seconds\",\n timeout_message=\"\",\n defaultvalue=\"\",\n cancelbutton=None,\n show_special_characters_warning=None,\n ).finalvalue\n counter = 0\n print(\n f'\\n\\n\\n{\"Codec\".ljust(20)}: {str(TC(key).bg_cyan.fg_black)}'.ljust(100)\n )\n if \"strict_bad\" in item and \"strict_encoded\" in item:\n print(f'{\"Mode\".ljust(20)}: {TC(\"strict\").fg_yellow.bg_black}')\n\n if item[\"strict_bad\"] is False:\n if isinstance(item[\"strict_encoded\"][0], tuple):\n if item[\"strict_bad\"] is False:\n\n try:\n print(\n f\"\"\"{'Length'.ljust(20)}: {TC(f'''{item['strict_encoded'][0][1]}''').fg_purple.bg_black}\\n{'Converted'.ljust(20)}: {TC(f'''{item['strict_encoded'][0][0]}''').fg_green.bg_black}\"\"\"\n )\n except Exception:\n print(\n f\"\"\"Problems during printing! 
Raw string: {item['strict_encoded'][0][0]!r}\"\"\"\n )\n\n if item[\"strict_bad\"] is True:\n try:\n print(\n f\"\"\"{'Length'.ljust(20)}: {TC(f'''{\"None\"}''').fg_red.bg_black}\\n{'Converted'.ljust(20)}: {TC(f'''{item['strict_encoded'][0]}''').fg_red.bg_black}\"\"\"\n )\n except Exception:\n print(\n f\"\"\"Problems during printing! Raw string: {item['strict_encoded'][0][0]!r}\"\"\"\n )\n if isinstance(item[\"strict_encoded\"][0], str):\n if item[\"strict_bad\"] is False:\n itemlen = len(\"\".join(item[\"strict_encoded\"]))\n concatitem = \"\\n\" + \"\\n\".join(\n [\n f\"\"\"Line: {str(y).ljust(14)} {str(f'''{x}''')}\"\"\"\n for y, x in enumerate(item[\"strict_encoded\"])\n ]\n )\n try:\n print(\n f\"\"\"{'Length'.ljust(20)}: {TC(f'''{itemlen}''').fg_purple.bg_black}\\n{'Converted'.ljust(20)}: {concatitem}\"\"\"\n )\n except Exception:\n print(\n f\"\"\"Problems during printing! Raw string: {concatitem!r}\"\"\"\n )\n\n if item[\"strict_bad\"] is True:\n concatitem = TC(\n \" \".join(item[\"strict_encoded\"])\n ).fg_red.bg_black\n try:\n print(\n f\"\"\"{'Length'.ljust(20)}: {TC(f'''{\"None\"}''').fg_red.bg_black}\\n{'Converted'.ljust(20)}: {concatitem}\"\"\"\n )\n except Exception:\n print(\n f\"\"\"Problems during printing! Raw string: {concatitem!r}\"\"\"\n )\n print(\"\")\n if \"ignore_bad\" in item and \"ignore_encoded\" in item:\n print(f'{\"Mode\".ljust(20)}: {TC(\"ignore\").fg_yellow.bg_black}')\n\n if item[\"ignore_bad\"] is False:\n if isinstance(item[\"ignore_encoded\"][0], tuple):\n\n if item[\"ignore_bad\"] is False:\n\n try:\n print(\n f\"\"\"{'Length'.ljust(20)}: {TC(f'''{item['ignore_encoded'][0][1]}''').bg_black.fg_lightgrey}\\n{'Converted'.ljust(20)}: {TC(f'''{item['ignore_encoded'][0][0]}''').bg_black.fg_lightgrey}\"\"\"\n )\n except Exception:\n print(\n f\"\"\"Problems during printing! Raw string: {item['ignore_encoded'][0][0]!r}\"\"\"\n )\n\n print(\"\")\n\n if \"replace_bad\" in item and \"replace_encoded\" in item:\n print(f'{\"Mode\".ljust(20)}: {TC(\"replace\").fg_yellow.bg_black}')\n\n if item[\"replace_bad\"] is False:\n if isinstance(item[\"replace_encoded\"][0], tuple):\n\n if item[\"replace_bad\"] is False:\n try:\n print(\n f\"\"\"{'Length'.ljust(20)}: {TC(f'''{item['replace_encoded'][0][1]}''').bg_black.fg_lightgrey}\\n{'Converted'.ljust(20)}: {TC(f'''{item['replace_encoded'][0][0]}''').bg_black.fg_lightgrey}\"\"\"\n )\n except Exception:\n print(\n f\"\"\"Problems during printing! Raw string: {item['replace_encoded'][0][0]!r}\"\"\"\n )\n\n counter = counter + 1\n\n return self\n\n\nif __name__ == \"__main__\":\n teststuff = b\"\"\"This is a test! \n Hi there!\n A little test! \"\"\"\n testfilename = \"test_utf8.tmp\"\n with open(\"test_utf8.tmp\", mode=\"w\", encoding=\"utf-8-sig\") as f:\n f.write(teststuff.decode(\"utf-8-sig\"))\n codechecker = CodecChecker()\n codechecker.try_open_file(testfilename, readlines=2).print_results(\n pause_after_interval=1, items_per_interval=10\n )\n codechecker.try_open_file(testfilename).print_results()\n codechecker.try_convert_bytes(teststuff.decode(\"cp850\").encode()).print_results(\n pause_after_interval=1, items_per_interval=10\n )\n\nOr you simply run a script to replace all messed up characters. Since I am a German teacher, I have this problem frequently (encoding problems due to Umlaut). Here is a script to replace all characters (too big to post the script here): https://github.com/hansalemaos/LatinFixer/blob/main/__init__.py\n" ]
[ 0 ]
[]
[]
[ "pandas", "python", "python_unicode", "utf_8" ]
stackoverflow_0074630082_pandas_python_python_unicode_utf_8.txt
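A much smaller alternative to probing every installed codec, for the question above, is to try the few encodings that are plausible for the corpus and fall back explicitly; a 0x80 byte that breaks UTF-8 is typical of Windows-1252 text mixed into the files, though that guess is an assumption about the data rather than something the question confirms. A sketch:
def decode_best_effort(raw: bytes) -> str:
    for enc in ("utf-8", "cp1252"):
        try:
            return raw.decode(enc)
        except UnicodeDecodeError:
            continue
    # latin-1 maps every byte to a character, so this never raises,
    # but special characters from other encodings may come out wrong.
    return raw.decode("iso-8859-1")

corpus[0] = [decode_best_effort(raw) for raw in corpus[0]]
A detection library such as charset-normalizer or chardet can also be used to guess the encoding of each file before decoding.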
Q: How to make a recursive function to generate the combination of numbers eg. for n=3, (1,1,1),(1,1,2) and so on? def generate(n): t=[] lol=[[] for i in range(n**n)] helper(n,t,lol) return(lol) def helper(n,t,lol): global j if len(t)==n: lol[j]=lol[j]+t j += 1 return for i in range(1,n+1): print(i) t.append(i) helper(n,t,lol) t.pop() j=0 print(generate(2)) print(generate(3)) Here, for n=2, i'm getting expected answer. But for, n=3, it is showing Index Error: in helper lol[j]=lol[j]+t IndexError: list index out of range A: Try this code: def generate(n): def helper(m, n, s): if m==0: print(s) else: for x in range(1,n+1): helper(m-1, n, s+[x]) assert n>=1 helper(n, n, []) Examples: >>> generate(1) [1] >>> generate(2) [1, 1] [1, 2] [2, 1] [2, 2] >>> generate(3) [1, 1, 1] [1, 1, 2] [1, 1, 3] [1, 2, 1] [1, 2, 2] [1, 2, 3] [1, 3, 1] [1, 3, 2] [1, 3, 3] [2, 1, 1] [2, 1, 2] [2, 1, 3] [2, 2, 1] [2, 2, 2] [2, 2, 3] [2, 3, 1] [2, 3, 2] [2, 3, 3] [3, 1, 1] [3, 1, 2] [3, 1, 3] [3, 2, 1] [3, 2, 2] [3, 2, 3] [3, 3, 1] [3, 3, 2] [3, 3, 3] >>>
How to make a recursive function to generate the combination of numbers eg. for n=3, (1,1,1),(1,1,2) and so on?
def generate(n): t=[] lol=[[] for i in range(n**n)] helper(n,t,lol) return(lol) def helper(n,t,lol): global j if len(t)==n: lol[j]=lol[j]+t j += 1 return for i in range(1,n+1): print(i) t.append(i) helper(n,t,lol) t.pop() j=0 print(generate(2)) print(generate(3)) Here, for n=2, i'm getting expected answer. But for, n=3, it is showing Index Error: in helper lol[j]=lol[j]+t IndexError: list index out of range
[ "Try this code:\ndef generate(n):\n def helper(m, n, s):\n if m==0:\n print(s)\n else:\n for x in range(1,n+1):\n helper(m-1, n, s+[x])\n assert n>=1\n helper(n, n, [])\n\nExamples:\n>>> generate(1)\n[1]\n>>> generate(2)\n[1, 1]\n[1, 2]\n[2, 1]\n[2, 2]\n>>> generate(3)\n[1, 1, 1]\n[1, 1, 2]\n[1, 1, 3]\n[1, 2, 1]\n[1, 2, 2]\n[1, 2, 3]\n[1, 3, 1]\n[1, 3, 2]\n[1, 3, 3]\n[2, 1, 1]\n[2, 1, 2]\n[2, 1, 3]\n[2, 2, 1]\n[2, 2, 2]\n[2, 2, 3]\n[2, 3, 1]\n[2, 3, 2]\n[2, 3, 3]\n[3, 1, 1]\n[3, 1, 2]\n[3, 1, 3]\n[3, 2, 1]\n[3, 2, 2]\n[3, 2, 3]\n[3, 3, 1]\n[3, 3, 2]\n[3, 3, 3]\n>>> \n\n" ]
[ 0 ]
[]
[]
[ "python", "recursion" ]
stackoverflow_0074583392_python_recursion.txt
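The recursive helper above prints each combination instead of returning them; when the caller needs the list itself, as the original generate tried to build, the same enumeration is available from the standard library:
from itertools import product

def generate(n):
    return [list(t) for t in product(range(1, n + 1), repeat=n)]

print(generate(2))   # [[1, 1], [1, 2], [2, 1], [2, 2]]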
Q: character "@" in the name of a variable in the django template I have a dictionary in my views.py mydata = {'@model': 'wolfen', '@genre': 'fantastic', 'price: '350'} which I pass to the django view in a context like this context['mydata'] = mydata and in my view i display like this {{mydata.@model}} the problem is that the django template uses the "@" character for other tags. How to replace the "@" character to not display the following error Could not parse the remainder: '@model' from 'mydata.@model' thank you the solution of Willem work fine. I have another nested dictionary that looks different mydata_2 ={'1' {'@model': 'wolfen', '@genre': 'fantastic', 'price': '300'} , {'3' {'@model': 'phase4', '@genre': 'fantastic', 'price': '450'} } the keys of the main dictionaries ("1" , "3") can change dynamically. otherwise a big thank you to Willem A: Rename the data, for example with: mydata = {'@model': 'wolfen', '@genre': 'fantastic', 'price': '350'} mydata = {k[1:] if k.startswith('@') else k: v for k, v in mydata.items()} context['mydata'] = mydata Then you can use {{ mydata.model }} in the template. Or for subdictionaries: mydata_2 = { '1': {'@model': 'wolfen', '@genre': 'fantastic', 'price': '300'}, '3': {'@model': 'phase4', '@genre': 'fantastic', 'price': '450'}, } mydata_2 = { k: {ki[1:] if ki.startswith('@') else ki: v for ki, v in subdict.items()} for k, subdict in mydata_2.items() }
character "@" in the name of a variable in the django template
I have a dictionary in my views.py mydata = {'@model': 'wolfen', '@genre': 'fantastic', 'price': '350'} which I pass to the Django template through the context like this context['mydata'] = mydata and in my template I display it like this {{mydata.@model}} The problem is that the Django template language does not allow "@" in variable lookups. How can I rename or work around the "@" keys so the following error is not raised: Could not parse the remainder: '@model' from 'mydata.@model' Thank you. The solution from Willem works fine. I have another nested dictionary that looks different: mydata_2 = {'1': {'@model': 'wolfen', '@genre': 'fantastic', 'price': '300'}, '3': {'@model': 'phase4', '@genre': 'fantastic', 'price': '450'}} The keys of the outer dictionary ("1", "3") can change dynamically. Otherwise a big thank you to Willem.
[ "Rename the data, for example with:\nmydata = {'@model': 'wolfen', '@genre': 'fantastic', 'price': '350'}\nmydata = {k[1:] if k.startswith('@') else k: v for k, v in mydata.items()}\ncontext['mydata'] = mydata\nThen you can use {{ mydata.model }} in the template.\nOr for subdictionaries:\nmydata_2 = {\n '1': {'@model': 'wolfen', '@genre': 'fantastic', 'price': '300'},\n '3': {'@model': 'phase4', '@genre': 'fantastic', 'price': '450'},\n}\nmydata_2 = {\n k: {ki[1:] if ki.startswith('@') else ki: v for ki, v in subdict.items()}\n for k, subdict in mydata_2.items()\n}\n" ]
[ 0 ]
[]
[]
[ "django", "django_templates", "python" ]
stackoverflow_0074631600_django_django_templates_python.txt
Q: Open an Alert dialog box while clicking on the camera icon button if the the camera is not open yet in kivymd I'm beginner in Kivy, I make a screen which has an image in which a camera live feed is fitted and 2 buttons the 1st start camera which open the webcam and the 2nd is the icon button to take a picture and store it locally but the problem is that if i clicked the icon button before clicking the start camera it give me this error. cv2.imwrite(image_name, self.image_frame) AttributeError: 'WebCamScreen' object has no attribute 'image_frame' So , how should i put a dialogbox or some other condition to tell the user to start camera first. Here is my WebCamScreen code class WebCamScreen(Screen): def do_start(self): self.capture = cv2.VideoCapture(0) Clock.schedule_interval(self.load_video, 1.0 / 24.0) def load_video(self, *args): ret, frame = self.capture.read() self.image_frame = frame # frame = frame[220:220+250, 400:400+250, :] buffer = cv2.flip(frame, 0).tostring() image_texture = Texture.create(size=(frame.shape[1], frame.shape[0]), colorfmt="bgr") image_texture.blit_buffer(buffer, colorfmt="bgr", bufferfmt="ubyte") self.ids.img.texture = image_texture def capture_image(self): image_name = "first_pic.jpg" cv2.imwrite(image_name, self.image_frame) and kv code is below <WebCamScreen> MDFloatLayout: MDRaisedButton: text: "Start Camera" size_hint_x: None size_hint_y: None md_bg_color: "orange" pos_hint: {"center_x": 0.5, "center_y": 0.95} on_release: root.do_start() Image: id: img size_hint_x: 0.85 size_hint_y: 0.5 pos_hint: {"center_x": 0.5, "center_y": 0.6} MDIconButton: icon: "camera" md_bg_color: "orange" pos_hint: {"center_x": .5, "center_y": .2} on_release: root.capture_image() A: You can put that line that causes the exception in a try block. And in the except block display a Popup. Or you can disable the capture_image Button, and enable it from within the load_video() method.
Open an alert dialog box when clicking the camera icon button if the camera is not open yet in KivyMD
I'm beginner in Kivy, I make a screen which has an image in which a camera live feed is fitted and 2 buttons the 1st start camera which open the webcam and the 2nd is the icon button to take a picture and store it locally but the problem is that if i clicked the icon button before clicking the start camera it give me this error. cv2.imwrite(image_name, self.image_frame) AttributeError: 'WebCamScreen' object has no attribute 'image_frame' So , how should i put a dialogbox or some other condition to tell the user to start camera first. Here is my WebCamScreen code class WebCamScreen(Screen): def do_start(self): self.capture = cv2.VideoCapture(0) Clock.schedule_interval(self.load_video, 1.0 / 24.0) def load_video(self, *args): ret, frame = self.capture.read() self.image_frame = frame # frame = frame[220:220+250, 400:400+250, :] buffer = cv2.flip(frame, 0).tostring() image_texture = Texture.create(size=(frame.shape[1], frame.shape[0]), colorfmt="bgr") image_texture.blit_buffer(buffer, colorfmt="bgr", bufferfmt="ubyte") self.ids.img.texture = image_texture def capture_image(self): image_name = "first_pic.jpg" cv2.imwrite(image_name, self.image_frame) and kv code is below <WebCamScreen> MDFloatLayout: MDRaisedButton: text: "Start Camera" size_hint_x: None size_hint_y: None md_bg_color: "orange" pos_hint: {"center_x": 0.5, "center_y": 0.95} on_release: root.do_start() Image: id: img size_hint_x: 0.85 size_hint_y: 0.5 pos_hint: {"center_x": 0.5, "center_y": 0.6} MDIconButton: icon: "camera" md_bg_color: "orange" pos_hint: {"center_x": .5, "center_y": .2} on_release: root.capture_image()
[ "You can put that line that causes the exception in a try block. And in the except block display a Popup.\nOr you can disable the capture_image Button, and enable it from within the load_video() method.\n" ]
[ 0 ]
[]
[]
[ "kivy", "kivymd", "python" ]
stackoverflow_0074622929_kivy_kivymd_python.txt
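For the question above, either suggestion can be made concrete with a small guard in capture_image; a sketch assuming KivyMD's MDDialog for the alert (the exact dialog API and styling depend on the KivyMD version in use):
from kivymd.uix.dialog import MDDialog

# inside WebCamScreen
def capture_image(self):
    if not hasattr(self, "image_frame"):
        # Camera was never started, so there is no frame to save yet.
        MDDialog(
            title="Camera not started",
            text="Please press 'Start Camera' before taking a picture.",
        ).open()
        return
    cv2.imwrite("first_pic.jpg", self.image_frame)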
Q: How do I add lines to a key and different lines as values? So I start put with a file that lists title, actor, title, actor, etc. 12 Years a Slave Topsy Chapman 12 Years a Slave Devin Maurice Evans 12 Years a Slave Brad Pitt 12 Years a Slave Jay Huguley 12 Years a Slave Devyn A. Tyler 12 Years a Slave Willo Jean-Baptiste American Hustle Christian Bale American Hustle Bradley Cooper American Hustle Amy Adams American Hustle Jeremy Renner American Hustle Jennifer Lawrence I need to make a dictionary that looks like what's below and lists all actors in the movie {'Movie Title': ['All actors'], 'Movie Title': ['All Actors]} So far I only have this d = {} with open(file), 'r') as f: for key in f: d[key.strip()] = next(f).split() print(d) A: Using a defaultdict is usually a better choice: from collections import defaultdict data = defaultdict(list) with open("filename.txt", 'r') as f: stripped = map(str.strip, f) for movie, actor in zip(stripped, stripped): data[movie].append(actor) print(data) A: So you need to switch between reading the title and reading the actor from the input data. You also need to store the title, so you can use it in the actor line. You can use the setting of the title for switching between reading the title and reading the actor. Some key checking and you have working logic. # pretty printer to make the output nice from pprint import pprint data = """ 12 Years a Slave Topsy Chapman 12 Years a Slave Devin Maurice Evans 12 Years a Slave Brad Pitt 12 Years a Slave Jay Huguley 12 Years a Slave Devyn A. Tyler 12 Years a Slave Willo Jean-Baptiste American Hustle Christian Bale American Hustle Bradley Cooper American Hustle Amy Adams American Hustle Jeremy Renner American Hustle Jennifer Lawrence""" result = {} title = None for line in data.splitlines(): # clean here once line = line.strip() if not title: # store the title title = line else: # check if title already exists if title in result: # if yes, append actor result[title].append(line) else: # if no, create it with new list for actors # and of course, add the current line as actor result[title] = [line] # reset title to None title = None pprint(result) output {'12 Years a Slave': ['Topsy Chapman', 'Devin Maurice Evans', 'Brad Pitt', 'Jay Huguley', 'Devyn A. Tyler', 'Willo Jean-Baptiste'], 'American Hustle': ['Christian Bale', 'Bradley Cooper', 'Amy Adams', 'Jeremy Renner', 'Jennifer Lawrence']} EDIT when reading from a file, you need to do it slightly different. from pprint import pprint result = {} title = None with open("somefile.txt") as infile: for line in infile.read().splitlines(): line = line.strip() if not title: title = line else: if title in result: result[title].append(line) else: result[title] = [line] title = None pprint(result)
How do I add lines to a key and different lines as values?
So I start put with a file that lists title, actor, title, actor, etc. 12 Years a Slave Topsy Chapman 12 Years a Slave Devin Maurice Evans 12 Years a Slave Brad Pitt 12 Years a Slave Jay Huguley 12 Years a Slave Devyn A. Tyler 12 Years a Slave Willo Jean-Baptiste American Hustle Christian Bale American Hustle Bradley Cooper American Hustle Amy Adams American Hustle Jeremy Renner American Hustle Jennifer Lawrence I need to make a dictionary that looks like what's below and lists all actors in the movie {'Movie Title': ['All actors'], 'Movie Title': ['All Actors]} So far I only have this d = {} with open(file), 'r') as f: for key in f: d[key.strip()] = next(f).split() print(d)
[ "Using a defaultdict is usually a better choice:\nfrom collections import defaultdict\ndata = defaultdict(list)\n\nwith open(\"filename.txt\", 'r') as f:\n stripped = map(str.strip, f)\n for movie, actor in zip(stripped, stripped):\n data[movie].append(actor)\n\nprint(data)\n\n", "So you need to switch between reading the title and reading the actor from the input data. You also need to store the title, so you can use it in the actor line.\nYou can use the setting of the title for switching between reading the title and reading the actor.\nSome key checking and you have working logic.\n# pretty printer to make the output nice\nfrom pprint import pprint\n\n\ndata = \"\"\" 12 Years a Slave\n Topsy Chapman\n 12 Years a Slave\n Devin Maurice Evans\n 12 Years a Slave\n Brad Pitt\n 12 Years a Slave\n Jay Huguley\n 12 Years a Slave\n Devyn A. Tyler\n 12 Years a Slave\n Willo Jean-Baptiste\n American Hustle\n Christian Bale\n American Hustle\n Bradley Cooper\n American Hustle\n Amy Adams\n American Hustle\n Jeremy Renner\n American Hustle\n Jennifer Lawrence\"\"\"\n\n\nresult = {}\ntitle = None\nfor line in data.splitlines():\n # clean here once\n line = line.strip()\n if not title:\n # store the title\n title = line\n else:\n # check if title already exists\n if title in result:\n # if yes, append actor\n result[title].append(line)\n else:\n # if no, create it with new list for actors\n # and of course, add the current line as actor\n result[title] = [line]\n # reset title to None\n title = None\n\npprint(result)\n\noutput\n{'12 Years a Slave': ['Topsy Chapman',\n 'Devin Maurice Evans',\n 'Brad Pitt',\n 'Jay Huguley',\n 'Devyn A. Tyler',\n 'Willo Jean-Baptiste'],\n 'American Hustle': ['Christian Bale',\n 'Bradley Cooper',\n 'Amy Adams',\n 'Jeremy Renner',\n 'Jennifer Lawrence']}\n\n\nEDIT\nwhen reading from a file, you need to do it slightly different.\nfrom pprint import pprint\n\nresult = {}\ntitle = None\nwith open(\"somefile.txt\") as infile:\n for line in infile.read().splitlines():\n line = line.strip()\n if not title:\n title = line\n else:\n if title in result:\n result[title].append(line)\n else:\n result[title] = [line]\n title = None\n\n\npprint(result)\n\n" ]
[ 1, 0 ]
[]
[]
[ "dictionary", "python" ]
stackoverflow_0074630771_dictionary_python.txt
Q: How to prevent end users from editing a hidden input value in a Django social website In a website having a "Comment" and "reply to a comment" system. After each comment in the template, There's a "Add a reply" form which have a hidden input to carry the comment pk on its value attribute. How to prevent the end user from editing that hidden input value ? And If this is not possible, What would be the correct approach ? A: You can't prevent somebody from editing the value attribute, since it's client-sided. The better approach would be to check on the server-side whether the user is permitted to comment or reply to the given post. For example, you can check if the user is a friend of the creator of the post. If it's not, you can block the request. Example: # models.py class Post(models.Model): creator = models.ForeignKey(get_user_model(), on_delete=models.CASCADE) body = models.TextField() class User(AbstractUser): friends = models.ManyToManyField(get_user_model()) # views.py class CommentCreate(View): def get(self, request, *args, **kwargs): # retrieve the post object here ... # check if the user is a friend of the creator if post.creator.friends.filter(id=request.user.id).first(): # returns None if none found # user is a friend of the creator # do your stuff here else: # user is NOT a friend of the creator raise PermissionDenied() A: Preventing the end user from editing: You can use the html input type of hidden <input type="hidden" value="{comment.pk}"> A: since it is being rendered by a form, you can set it to read-only by doing this <input type="hidden" readonly> You can read more on readonly attribute here Read-only attribute
How to prevent end users from editing a hidden input value in a Django social website
In a website having a "Comment" and "reply to a comment" system. After each comment in the template, There's a "Add a reply" form which have a hidden input to carry the comment pk on its value attribute. How to prevent the end user from editing that hidden input value ? And If this is not possible, What would be the correct approach ?
[ "You can't prevent somebody from editing the value attribute, since it's client-sided.\nThe better approach would be to check on the server-side whether the user is permitted to comment or reply to the given post. For example, you can check if the user is a friend of the creator of the post. If it's not, you can block the request.\nExample:\n# models.py\nclass Post(models.Model):\n creator = models.ForeignKey(get_user_model(), on_delete=models.CASCADE)\n body = models.TextField()\n\nclass User(AbstractUser):\n friends = models.ManyToManyField(get_user_model())\n\n# views.py\nclass CommentCreate(View):\n \n def get(self, request, *args, **kwargs):\n # retrieve the post object here\n ...\n\n # check if the user is a friend of the creator\n if post.creator.friends.filter(id=request.user.id).first(): # returns None if none found\n # user is a friend of the creator\n # do your stuff here\n else:\n # user is NOT a friend of the creator\n raise PermissionDenied()\n \n\n", "\nPreventing the end user from editing: You can use the html input type of hidden\n\n<input type=\"hidden\" value=\"{comment.pk}\">\n", "since it is being rendered by a form, you can set it to read-only by doing this\n<input type=\"hidden\" readonly>\n\nYou can read more on readonly attribute here Read-only attribute\n" ]
[ 0, 0, 0 ]
[]
[]
[ "django", "django_templates", "html", "javascript", "python" ]
stackoverflow_0074582094_django_django_templates_html_javascript_python.txt
Q: Having access to all modules in python package directly When creating a Python Package with different modules, I have to import it like this: from Package_A import Module_1 Object_1 = Module_1.Class_1() What I would like to have is the possibility to have all the classes available in the main package as follows: import Package_A as pa Object_1 = pa.Class_1() Can anyone help me out here? I have tried adding __all__ = ["Module_1", "Module_2"] and also from Package_A import Module_1, Module_2 to the main __init__.py file A: Packages Suppose you want to design a collection of modules (a “package”) for the uniform handling of sound files and sound data. There are many different sound file formats (usually recognized by their extension, for example: .wav, .aiff, .au), so you may need to create and maintain a growing collection of modules for the conversion between the various file formats. There are also many different operations you might want to perform on sound data (such as mixing, adding echo, applying an equalizer function, creating an artificial stereo effect), so in addition you will be writing a never-ending stream of modules to perform these operations. Here’s a possible structure for your package (expressed in terms of a hierarchical filesystem): sound/ Top-level package __init__.py Initialize the sound package formats/ Subpackage for file format conversions __init__.py wavread.py wavwrite.py aiffread.py aiffwrite.py auread.py auwrite.py ... effects/ Subpackage for sound effects __init__.py echo.py surround.py reverse.py ... filters/ Subpackage for filters __init__.py equalizer.py vocoder.py karaoke.py ... When importing the package, Python searches through the directories on sys.path looking for the package subdirectory. The init.py files are required to make Python treat directories containing the file as packages. This prevents directories with a common name, such as string, unintentionally hiding valid modules that occur later on the module search path. In the simplest case, init.py can just be an empty file, but it can also execute initialization code for the package or set the all variable, described later. Users of the package can import individual modules from the package, for example: import sound.effects.echo This loads the submodule sound.effects.echo. It must be referenced with its full name. sound.effects.echo.echofilter(input, output, delay=0.7, atten=4) An alternative way of importing the submodule is: from sound.effects import echo This also loads the submodule echo, and makes it available without its package prefix, so it can be used as follows: echo.echofilter(input, output, delay=0.7, atten=4) Yet another variation is to import the desired function or variable directly: from sound.effects.echo import echofilter Again, this loads the submodule echo, but this makes its function echofilter() directly available: echofilter(input, output, delay=0.7, atten=4) Importing * From a Package __all__ = ["echo", "surround", "reverse"] This would mean that from sound.effects import * would import the three named submodules of the sound.effects package. If all is not defined, the statement from sound.effects import * does not import all submodules from the package sound.effects into the current namespace; it only ensures that the package sound.effects has been imported (possibly running any initialization code in init.py) and then imports whatever names are defined in the package. This includes any names defined (and submodules explicitly loaded) by init.py. 
It also includes any submodules of the package that were explicitly loaded by previous import statements. Consider this code: import sound.effects.echo import sound.effects.surround from sound.effects import * In this example, the echo and surround modules are imported in the current namespace because they are defined in the sound.effects package when the from...import statement is executed. (This also works when all is defined.) Although certain modules are designed to export only names that follow certain patterns when you use import *, it is still considered bad practice in production code. Remember, there is nothing wrong with using from package import specific_submodule! In fact, this is the recommended notation unless the importing module needs to use submodules with the same name from different packages.
Having access to all modules in python package directly
When creating a Python Package with different modules, I have to import it like this: from Package_A import Module_1 Object_1 = Module_1.Class_1() What I would like to have is the possibility to have all the classes available in the main package as follows: import Package_A as pa Object_1 = pa.Class_1() Can anyone help me out here? I have tried adding __all__ = ["Module_1", "Module_2"] and also from Package_A import Module_1, Module_2 to the main __init__.py file
[ "Packages\nSuppose you want to design a collection of modules (a “package”) for the uniform handling of sound files and sound data. There are many different sound file formats (usually recognized by their extension, for example: .wav, .aiff, .au), so you may need to create and maintain a growing collection of modules for the conversion between the various file formats. There are also many different operations you might want to perform on sound data (such as mixing, adding echo, applying an equalizer function, creating an artificial stereo effect), so in addition you will be writing a never-ending stream of modules to perform these operations. Here’s a possible structure for your package (expressed in terms of a hierarchical filesystem):\nsound/ Top-level package\n __init__.py Initialize the sound package\n formats/ Subpackage for file format conversions\n __init__.py\n wavread.py\n wavwrite.py\n aiffread.py\n aiffwrite.py\n auread.py\n auwrite.py\n ...\n effects/ Subpackage for sound effects\n __init__.py\n echo.py\n surround.py\n reverse.py\n ...\n filters/ Subpackage for filters\n __init__.py\n equalizer.py\n vocoder.py\n karaoke.py\n ...\n\nWhen importing the package, Python searches through the directories on sys.path looking for the package subdirectory.\nThe init.py files are required to make Python treat directories containing the file as packages. This prevents directories with a common name, such as string, unintentionally hiding valid modules that occur later on the module search path. In the simplest case, init.py can just be an empty file, but it can also execute initialization code for the package or set the all variable, described later.\nUsers of the package can import individual modules from the package, for example:\nimport sound.effects.echo\n\nThis loads the submodule sound.effects.echo. It must be referenced with its full name.\nsound.effects.echo.echofilter(input, output, delay=0.7, atten=4)\n\nAn alternative way of importing the submodule is:\nfrom sound.effects import echo\n\nThis also loads the submodule echo, and makes it available without its package prefix, so it can be used as follows:\necho.echofilter(input, output, delay=0.7, atten=4)\n\nYet another variation is to import the desired function or variable directly:\nfrom sound.effects.echo import echofilter\n\nAgain, this loads the submodule echo, but this makes its function echofilter() directly available:\nechofilter(input, output, delay=0.7, atten=4)\n\nImporting * From a Package\n__all__ = [\"echo\", \"surround\", \"reverse\"]\n\nThis would mean that from sound.effects import * would import the three named submodules of the sound.effects package.\nIf all is not defined, the statement from sound.effects import * does not import all submodules from the package sound.effects into the current namespace; it only ensures that the package sound.effects has been imported (possibly running any initialization code in init.py) and then imports whatever names are defined in the package. This includes any names defined (and submodules explicitly loaded) by init.py. It also includes any submodules of the package that were explicitly loaded by previous import statements. Consider this code:\nimport sound.effects.echo\nimport sound.effects.surround\nfrom sound.effects import *\n\nIn this example, the echo and surround modules are imported in the current namespace because they are defined in the sound.effects package when the from...import statement is executed. 
(This also works when all is defined.)\nAlthough certain modules are designed to export only names that follow certain patterns when you use import *, it is still considered bad practice in production code.\nRemember, there is nothing wrong with using from package import specific_submodule! In fact, this is the recommended notation unless the importing module needs to use submodules with the same name from different packages.\n" ]
[ 0 ]
[]
[]
[ "python", "python_packaging" ]
stackoverflow_0074631491_python_python_packaging.txt
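Applied to the original question, the piece the excerpt above leaves implicit is that Package_A/__init__.py can re-export the classes themselves rather than just the submodules; importing a name in __init__.py is what makes it an attribute of the package. A sketch using the question's hypothetical module and class names:
# Package_A/__init__.py
from .Module_1 import Class_1
from .Module_2 import Class_2

__all__ = ["Class_1", "Class_2"]

# client code
import Package_A as pa

Object_1 = pa.Class_1()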
Q: Adding iterations to a list depending on input Good morning, I am wondering if someone can point me in the right direction. I am trying to have a loop go through a list and add them to a printable list. let's say the user inputs the number two: Then it should print go pull out two random class-names of the list. if the user inputs 4 it should pull out 4 random class-names from the list. this is so i can print out attributes from the classes afterwards depending on the class-names above. with utskrift1=(vars(plane)) i have tried normal loops but it seem to print out the list in it's entirety, and if i ie go print out x=2 then it prints the entire list two times. #The classes: import random class plane: def __init__(self, dyr, dyrefamilie, antallbein): self.dyr = 'Hund' self.dyrefamilie = 'Hundefamilien' self.antallbein = '4' def __str__(self): return f'undulat(={self.dyr},{self.dyrefamilie},{self.antallbein})' plane2 = plane(dyr='something', dyrefamilie="something", antallbein='4') class rocket: def __init__(self, dyr, dyrefamilie, antallbein): self.dyr = 'something' self.dyrefamilie = 'something2' self.antallbein = '4' def __str__(self): return f'katt(={self.dyr},{self.dyrefamilie},{self.antallbein})' rocket2 = rocket(dyr='something', dyrefamilie="something", antallbein='4') class boat: def __init__(self, dyr, dyrefamilie, antallbein): self.dyr = 'something' self.dyrefamilie = 'something2' self.antallbein = '5' def __str__(self): return f'undulat(={self.dyr},{self.dyrefamilie},{self.antallbein})' boat2 = boat(dyr='something', dyrefamilie="something", antallbein='2') Is it possible to randomise a selectetion and have the list.append(selected random name) instead of preselecting it like i have done below? x2=list = [] x1=list = [] # appending instances to list list.append(plane(1,2,3)) list.append(rocket(1,2,3)) list.append(boat(1,2,3)) random.shuffle(x1) #roterer litt rundt på listen for i, x1 in enumerate(x1): #kode fra canvas print('Dyret er en', x1.dyr,'med',x1.antallbein+'-bein'+'.','denne er er en del av', x1.dyrefamilie) A: Use the random std package, just put your instantiated objects in a list and you can choose a random one like such: from random values = [1,2,3,4,5,6] rand_val = random.choice(values) A: You can do a ranged for, which means that the for will execute x times (range(x)). By this way now you have a for executing X times, and you can index in the list, for example, the current iteration 'i' and print its object properties. You have to take care about the range(x) and the range of the list/array you are indexing, if the range(x) > range(list) then you will have out of bound errors. x2=list = [] x1=list = [] # appending instances to list list.append(plane(1,2,3)) list.append(rocket(1,2,3)) list.append(boat(1,2,3)) random.shuffle(x1) #roterer litt rundt på listen for i in range(2): print('Dyret er en', x1[i].dyr,'med',x1[i].antallbein+'-bein'+'.','denne er er en del av', x1[i].dyrefamilie) A: Thanks, got i working now :) used both the solutions explained to me and made this code: num = int(input("Skriv inn antallet du vil generere")) x1=list = [] # appending instances to list values = [plane,rocket,boat] for i in range(num): list.append(random.choice(values)(1,2,3))
Adding iterations to a list depending on input
Good morning, I am wondering if someone can point me in the right direction. I am trying to have a loop go through a list and add them to a printable list. let's say the user inputs the number two: Then it should print go pull out two random class-names of the list. if the user inputs 4 it should pull out 4 random class-names from the list. this is so i can print out attributes from the classes afterwards depending on the class-names above. with utskrift1=(vars(plane)) i have tried normal loops but it seem to print out the list in it's entirety, and if i ie go print out x=2 then it prints the entire list two times. #The classes: import random class plane: def __init__(self, dyr, dyrefamilie, antallbein): self.dyr = 'Hund' self.dyrefamilie = 'Hundefamilien' self.antallbein = '4' def __str__(self): return f'undulat(={self.dyr},{self.dyrefamilie},{self.antallbein})' plane2 = plane(dyr='something', dyrefamilie="something", antallbein='4') class rocket: def __init__(self, dyr, dyrefamilie, antallbein): self.dyr = 'something' self.dyrefamilie = 'something2' self.antallbein = '4' def __str__(self): return f'katt(={self.dyr},{self.dyrefamilie},{self.antallbein})' rocket2 = rocket(dyr='something', dyrefamilie="something", antallbein='4') class boat: def __init__(self, dyr, dyrefamilie, antallbein): self.dyr = 'something' self.dyrefamilie = 'something2' self.antallbein = '5' def __str__(self): return f'undulat(={self.dyr},{self.dyrefamilie},{self.antallbein})' boat2 = boat(dyr='something', dyrefamilie="something", antallbein='2') Is it possible to randomise a selectetion and have the list.append(selected random name) instead of preselecting it like i have done below? x2=list = [] x1=list = [] # appending instances to list list.append(plane(1,2,3)) list.append(rocket(1,2,3)) list.append(boat(1,2,3)) random.shuffle(x1) #roterer litt rundt på listen for i, x1 in enumerate(x1): #kode fra canvas print('Dyret er en', x1.dyr,'med',x1.antallbein+'-bein'+'.','denne er er en del av', x1.dyrefamilie)
[ "Use the random std package, just put your instantiated objects in a list and you can choose a random one like such:\nfrom random\nvalues = [1,2,3,4,5,6]\nrand_val = random.choice(values)\n\n", "You can do a ranged for, which means that the for will execute x times (range(x)). By this way now you have a for executing X times, and you can index in the list, for example, the current iteration 'i' and print its object properties.\nYou have to take care about the range(x) and the range of the list/array you are indexing, if the range(x) > range(list) then you will have out of bound errors.\nx2=list = []\nx1=list = []\n# appending instances to list\n\nlist.append(plane(1,2,3))\nlist.append(rocket(1,2,3))\nlist.append(boat(1,2,3))\n\nrandom.shuffle(x1) #roterer litt rundt på listen\n \nfor i in range(2):\n print('Dyret er en', x1[i].dyr,'med',x1[i].antallbein+'-bein'+'.','denne er er en del av', x1[i].dyrefamilie)\n\n", "Thanks, got i working now :)\nused both the solutions explained to me and made this code:\nnum = int(input(\"Skriv inn antallet du vil generere\"))\n x1=list = []\n # appending instances to list\n values = [plane,rocket,boat]\n for i in range(num):\n list.append(random.choice(values)(1,2,3))\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074624856_python_python_3.x.txt
Q: Convert tab delimited data into dictionary I'm trying to convert an array with a dictionary to a flattened dictionary and export it to a JSON file. I have an initial tab-delimited file, and have tried multiple ways but not coming to the final result. If there is more than one row present then save these as arrays in the dictionary Name file code file_location TESTLIB1 443 123 location1 TESTLIB2 444 124 location2 Current Output: {'library': 'TESTLIB2', 'file': '444', 'code': '124', 'file_location': 'location2'} Desired Output if num_lines > 1: {'library': ['TEST1', 'TEST2'], 'file': ['443', '444'], 'code': ['123', 123], 'file_location': ['location1', 'location2]} Code Snippet data_dict = {} with open('file.tmp') as input: reader = csv.DictReader(input, delimiter='\t') num_lines = sum(1 for line in open('write_object.tmp')) for row in reader: data_dict.update(row) if num_lines > 1: data_dict.update(row) with open('output.json', 'w') as output: output.write(json.dumps(data_dict)) print(data_dict) A: create list for each column and iterate to append row by row import csv import json # read file d = {} with open('write_object.tmp') as f: reader = csv.reader(f, delimiter='\t') headers = next(reader) for head in headers: d[head] = [] for row in reader: for i, head in enumerate(headers): d[head].append(row[i]) # save as json file with open('output.json', 'w') as f: json.dump(d, f) output: {'Name': ['TESTLIB1', 'TESTLIB2'], 'file': ['443', '444'], 'code': ['123', '124'], 'file_location': ['location1', 'location2']} A: from collections import defaultdict data_dict = defaultdict(list) with open('input-file') as inp: for row in csv.DictReader(inp, delimiter='\t'): for key, val in row.items(): data_dict[key].append(val) print(data_dict) # output {'Name': ['TESTLIB1', 'TESTLIB2'], 'file': ['443', '444'], 'code': ['123', '124'], 'file_location': ['location1', 'location2']}
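If pandas is acceptable, a shorter sketch of the same idea; the file name is taken from the question, and dtype=str simply keeps the numbers as strings to match the desired output:

import json
import pandas as pd

df = pd.read_csv('write_object.tmp', sep='\t', dtype=str)   # read the tab-delimited file
data_dict = df.to_dict(orient='list')                        # {column name: [value from each row]}

with open('output.json', 'w') as f:
    json.dump(data_dict, f)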
Convert tab delimited data into dictionary
I'm trying to convert an array with a dictionary to a flattened dictionary and export it to a JSON file. I have an initial tab-delimited file, and have tried multiple ways but not coming to the final result. If there is more than one row present then save these as arrays in the dictionary Name file code file_location TESTLIB1 443 123 location1 TESTLIB2 444 124 location2 Current Output: {'library': 'TESTLIB2', 'file': '444', 'code': '124', 'file_location': 'location2'} Desired Output if num_lines > 1: {'library': ['TEST1', 'TEST2'], 'file': ['443', '444'], 'code': ['123', 123], 'file_location': ['location1', 'location2]} Code Snippet data_dict = {} with open('file.tmp') as input: reader = csv.DictReader(input, delimiter='\t') num_lines = sum(1 for line in open('write_object.tmp')) for row in reader: data_dict.update(row) if num_lines > 1: data_dict.update(row) with open('output.json', 'w') as output: output.write(json.dumps(data_dict)) print(data_dict)
[ "create list for each column and iterate to append row by row\nimport csv\nimport json \n\n# read file\nd = {}\nwith open('write_object.tmp') as f:\n reader = csv.reader(f, delimiter='\\t')\n headers = next(reader)\n for head in headers:\n d[head] = []\n for row in reader:\n for i, head in enumerate(headers):\n d[head].append(row[i])\n\n# save as json file\nwith open('output.json', 'w') as f:\n json.dump(d, f)\n\noutput:\n{'Name': ['TESTLIB1', 'TESTLIB2'],\n 'file': ['443', '444'],\n 'code': ['123', '124'],\n 'file_location': ['location1', 'location2']}\n\n", "from collections import defaultdict\n\ndata_dict = defaultdict(list)\nwith open('input-file') as inp:\n for row in csv.DictReader(inp, delimiter='\\t'):\n for key, val in row.items():\n data_dict[key].append(val)\nprint(data_dict)\n\n# output\n{'Name': ['TESTLIB1', 'TESTLIB2'],\n 'file': ['443', '444'],\n 'code': ['123', '124'],\n 'file_location': ['location1', 'location2']}\n\n" ]
[ 1, 0 ]
[]
[]
[ "csv", "dictionary", "python" ]
stackoverflow_0074631513_csv_dictionary_python.txt
Q: In behave, how do you run a scenario only? I have a 'behave' feature that has a lot of tests on it. I only need to run a specific scenario for development needs. How do I do it? (preferably on the command line) A: If you want to run a single test for that feature, use the -n or --name flag which seems to want the text after Scenario: behave -n 'This is a scenario name' You can run a feature file by using -i or --include flags and then the name of the feature file. behave -i file_name.feature or: behave --include file_name You can also exclude with the --exclude flag: behave -e file_name For more information check the documentation for command line arguments. There's a lot of useful information hidden in their appendix section. NOTE: At the time I'm writing this it won't work with Python 3.6 and Behave 1.2.5, due to this issue. (UPDATE: 1.2.6 is out and fixes this, but if you're on python 3.4 that version won't be available from pip so you can workaround this with pip3 install git+https://github.com/behave/behave#1.2.6rc). It also seems like you should be able to pass in the text after Feature: for the -i flag but currently that doesn't work. Somebody remind me to updated if it works again. I also encourage people to check out the wip flag, which allows you to add @wip to a test, then -wip will not only run the test but also allow print/logging statements for debugging. A: To run only a single scenario you can use -n with the name of the scenario: $ behave -n 'clicking the button "foo" should bar the baz' I'm using single quotes above to keep the name of the scenario as one argument for -n. Otherwise, the shell will pass each word of the scenario name as a separate argument. A: Tags provide a couple of options ... 1) Tag the slow ones and then avoid by invoking with the inverse e.g. behave -t '~@slow_tag_name' 2) However for the most flexibility I'd personally recommend tagging each Scenario with a unique ID. e.g. I use a @YYYY_MM_DD_HHmm_Initials tag scheme since, this is unique enough and the traceability is useful/interesting. Then you can always simply invoke with the tag and get it to run the Scenario, .e.g behave @2015_01_03_0936_jh A: A very powerful trick in behave: behave --wip And tag your test-under-development with @wip for the time being. This would have been my #1 answer besides the other mentioned ways to select tests (--name, --tags, --include), but is quite much hidden yet in the answer by @Cynic. A: Also you might well be interested in this beautiful post describing how to run just a single test from a scenario outline in behave. e.g. #This is Gherkin Feature: Running a single test from a scenario outline @scenarioGroupName @scenarioGroupName<scenarioName> Scenario Outline: test running one of many scenarios, iteration <n> / <scenarioName> Given test is index <n> for <scenarioName> When we do something with <scenarioParameter> Then all the checks pass Examples: Test scenarios |n | scenarioName | scenarioParameter |1 | Able | first |2 | Baker | second |3 | Charlie | third #etc To run this with just data for scenario 2/baker, one might do # This is shell behave -k --tags=@scenarioGroupNameBaker Note how scenarioGroupNameBaker tag is built. For each example row, the outline/template tag scenarioGroupName<scenarioName> has <scenarionName> replaced with the value from the example row to give the tag name per scenario-row :D
In behave, how do you run a scenario only?
I have a 'behave' feature that has a lot of tests on it. I only need to run a specific scenario for development needs. How do I do it? (preferably on the command line)
[ "If you want to run a single test for that feature, use the -n or --name flag which seems to want the text after Scenario:\nbehave -n 'This is a scenario name'\n\n\nYou can run a feature file by using -i or --include flags and then the name of the feature file. \nbehave -i file_name.feature\n\nor:\nbehave --include file_name\n\n\nYou can also exclude with the --exclude flag:\nbehave -e file_name\n\n\nFor more information check the documentation for command line arguments. There's a lot of useful information hidden in their appendix section.\n\nNOTE: At the time I'm writing this it won't work with Python 3.6 and Behave 1.2.5, due to this issue. (UPDATE: 1.2.6 is out and fixes this, but if you're on python 3.4 that version won't be available from pip so you can workaround this with pip3 install git+https://github.com/behave/behave#1.2.6rc).\nIt also seems like you should be able to pass in the text after Feature: for the -i flag but currently that doesn't work. Somebody remind me to updated if it works again. I also encourage people to check out the wip flag, which allows you to add @wip to a test, then -wip will not only run the test but also allow print/logging statements for debugging.\n", "To run only a single scenario you can use -n with the name of the scenario:\n$ behave -n 'clicking the button \"foo\" should bar the baz'\n\nI'm using single quotes above to keep the name of the scenario as one argument for -n. Otherwise, the shell will pass each word of the scenario name as a separate argument.\n", "Tags provide a couple of options ...\n1) Tag the slow ones and then avoid by invoking with the inverse e.g. \nbehave -t '~@slow_tag_name' \n\n2) However for the most flexibility I'd personally recommend tagging each Scenario with a unique ID. e.g. I use a @YYYY_MM_DD_HHmm_Initials tag scheme since, this is unique enough and the traceability is useful/interesting. Then you can always simply invoke with the tag and get it to run the Scenario, .e.g\nbehave @2015_01_03_0936_jh \n\n", "A very powerful trick in behave:\nbehave --wip\n\nAnd tag your test-under-development with @wip for the time being. This would have been my #1 answer besides the other mentioned ways to select tests (--name, --tags, --include), but is quite much hidden yet in the answer by @Cynic.\n", "Also you might well be interested in this beautiful post describing how to run just a single test from a scenario outline in behave.\ne.g.\n#This is Gherkin\nFeature: Running a single test from a scenario outline\n\n@scenarioGroupName @scenarioGroupName<scenarioName>\nScenario Outline: test running one of many scenarios, iteration <n> / <scenarioName>\n Given test is index <n> for <scenarioName> \n When we do something with <scenarioParameter> \n Then all the checks pass\n\n\nExamples: Test scenarios\n |n | scenarioName | scenarioParameter\n |1 | Able | first\n |2 | Baker | second \n |3 | Charlie | third\n #etc\n\nTo run this with just data for scenario 2/baker, one might do\n# This is shell\nbehave -k --tags=@scenarioGroupNameBaker\n\nNote how scenarioGroupNameBaker tag is built.\nFor each example row, the outline/template tag scenarioGroupName<scenarioName> has <scenarionName> replaced with the value from the example row to give the tag name per scenario-row :D\n" ]
[ 38, 35, 4, 0, 0 ]
[]
[]
[ "bdd", "python", "python_behave" ]
stackoverflow_0027030233_bdd_python_python_behave.txt
Q: How to call a variadic function with ctypes from multiple threads? I have a shared library, libfoo.so, with a variadic function: int foo(int handle, ...); that uses handle to access to static variables within the library. Now, I want to use it with ctypes in a multithread program. import ctypes as ct # main lib = ct.cdll.LoadLibrary('libfoo.so') foo = lib.foo foo.restype = ct.c_int # thread 1 code def thread1(handle): foo.argtypes = [ct.c_int, ct.c_int] foo(handle, 2); # thread 2 code def thread2(handle): foo.argtypes = [ct.c_int, ct.c_double] foo(handle, 2.); The problem is that both threads modify the same foo.argtypes and this leads to conflicts. I cannot load the same library twice because I need to access to static data into the library. Moreover, the foo object, that is an instance of _FuncPtr, is not copyable. An obvious solution is to add a mutex to protect argtypes while foo is being called. Are there any other solutions to this problem? A: Instead of setting .argtypes for each function resulting in a race condition, create the correct ctypes type as you call each function: import ctypes as ct # main lib = ct.cdll.LoadLibrary('libfoo.so') foo = lib.foo foo.argtypes = ct.c_int, # define the known types. ctypes will allow more foo.restype = ct.c_int # thread 1 code def thread1(handle): foo(handle, ct.c_int(2)) # thread 2 code def thread2(handle): foo(handle, ct.c_float(2)) Here's a test. The C printf aren't serialized so may mix up two prints in a single line, but the numbers are correct: test.c #include <stdio.h> #include <stdarg.h> #ifdef _WIN32 # define API __declspec(dllexport) #else # define API #endif API int foo(int handle, ...) { va_list valist; va_start(valist, handle); switch(handle) { case 1: printf("%d\n", va_arg(valist, int)); break; case 2: printf("%f\n", va_arg(valist, float)); break; case 3: int x = va_arg(valist, int); float y = va_arg(valist, float); printf("%d %f\n", x, y); break; default: ; } va_end(valist); return 123; } test.py import ctypes as ct from threading import Thread dll = ct.CDLL('./test') dll.foo.argtypes = ct.c_int, dll.foo.restype = ct.c_int def thread1(): for _ in range(5): dll.foo(1, ct.c_int(1)); # thread 2 code def thread2(): for _ in range(5): dll.foo(2, ct.c_float(2.125)) def thread3(): for _ in range(5): dll.foo(3, ct.c_int(3), ct.c_float(3.375)) threads = [Thread(target=f) for f in (thread1, thread2, thread3)] for t in threads: t.start() for t in threads: t.join() Output: 1 1 1 2.125000 1 2.125000 3 3.375000 1 2.125000 3 3.375000 2.125000 3 3.375000 2.125000 3 3.375000 3 3.375000
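For completeness, a sketch of the mutex fallback mentioned in the question, in case per-call casting is not an option. Note that it serialises the foreign calls themselves, which the answer above avoids; the call_foo wrapper is a hypothetical helper, not part of ctypes:

import ctypes as ct
import threading

lib = ct.cdll.LoadLibrary('libfoo.so')
foo = lib.foo
foo.restype = ct.c_int
_foo_lock = threading.Lock()

def call_foo(handle, *args, argtypes):
    # protect the shared argtypes attribute and the call that relies on it
    with _foo_lock:
        foo.argtypes = [ct.c_int, *argtypes]
        return foo(handle, *args)

# thread 1: call_foo(handle, 2, argtypes=[ct.c_int])
# thread 2: call_foo(handle, 2.0, argtypes=[ct.c_double])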
How to call a variadic function with ctypes from multiple threads?
I have a shared library, libfoo.so, with a variadic function: int foo(int handle, ...); that uses handle to access to static variables within the library. Now, I want to use it with ctypes in a multithread program. import ctypes as ct # main lib = ct.cdll.LoadLibrary('libfoo.so') foo = lib.foo foo.restype = ct.c_int # thread 1 code def thread1(handle): foo.argtypes = [ct.c_int, ct.c_int] foo(handle, 2); # thread 2 code def thread2(handle): foo.argtypes = [ct.c_int, ct.c_double] foo(handle, 2.); The problem is that both threads modify the same foo.argtypes and this leads to conflicts. I cannot load the same library twice because I need to access to static data into the library. Moreover, the foo object, that is an instance of _FuncPtr, is not copyable. An obvious solution is to add a mutex to protect argtypes while foo is being called. Are there any other solutions to this problem?
[ "Instead of setting .argtypes for each function resulting in a race condition, create the correct ctypes type as you call each function:\nimport ctypes as ct\n\n# main\nlib = ct.cdll.LoadLibrary('libfoo.so')\nfoo = lib.foo\nfoo.argtypes = ct.c_int, # define the known types. ctypes will allow more\nfoo.restype = ct.c_int\n\n# thread 1 code\ndef thread1(handle):\n foo(handle, ct.c_int(2))\n\n# thread 2 code\ndef thread2(handle):\n foo(handle, ct.c_float(2))\n\nHere's a test. The C printf aren't serialized so may mix up two prints in a single line, but the numbers are correct:\ntest.c\n#include <stdio.h>\n#include <stdarg.h>\n\n#ifdef _WIN32\n# define API __declspec(dllexport)\n#else\n# define API\n#endif\n\nAPI int foo(int handle, ...) {\n va_list valist;\n va_start(valist, handle);\n switch(handle) {\n case 1:\n printf(\"%d\\n\", va_arg(valist, int));\n break;\n case 2:\n printf(\"%f\\n\", va_arg(valist, float));\n break;\n case 3:\n int x = va_arg(valist, int);\n float y = va_arg(valist, float);\n printf(\"%d %f\\n\", x, y);\n break;\n default:\n ;\n }\n va_end(valist);\n return 123;\n}\n\ntest.py\nimport ctypes as ct\nfrom threading import Thread\n\ndll = ct.CDLL('./test')\ndll.foo.argtypes = ct.c_int,\ndll.foo.restype = ct.c_int\n\ndef thread1():\n for _ in range(5):\n dll.foo(1, ct.c_int(1));\n\n# thread 2 code\ndef thread2():\n for _ in range(5):\n dll.foo(2, ct.c_float(2.125))\n\ndef thread3():\n for _ in range(5):\n dll.foo(3, ct.c_int(3), ct.c_float(3.375))\n\nthreads = [Thread(target=f) for f in (thread1, thread2, thread3)]\nfor t in threads:\n t.start()\nfor t in threads:\n t.join()\n\nOutput:\n1\n1\n1\n2.125000\n1\n2.125000\n3 3.375000\n1\n2.125000\n3 3.375000\n2.125000\n3 3.375000\n2.125000\n3 3.375000\n3 3.375000\n\n" ]
[ 1 ]
[]
[]
[ "ctypes", "multithreading", "python", "variadic_functions" ]
stackoverflow_0074630617_ctypes_multithreading_python_variadic_functions.txt
Q: socket.gaierror: [Errno -2] Name or service not known docker-compose + rabbitmq + pika (Python) I have a docker container with rabbitmq; it has the address 192.168.220.10 and the domain rabbitmq in my local etc/hosts. So, I'm trying to use pika (Python) from another container with a fastapi app, which has the address 192.168.220.5. Of course, all containers share a network: net: driver: bridge ipam: config: - subnet: 192.168.220.0/24 Below is the rabbitmq container code from docker-compose.yml rabbitmq: container_name: rabbitmq image: rabbitmq:3-management-alpine restart: always ports: - "5672:5672" - "15672:15672" environment: RABBITMQ_DEFAULT_USER: rabbitmq RABBITMQ_DEFAULT_PASS: 27474129 networks: net: ipv4_address: 192.168.220.10 So there's a problem with the code at the bottom credentials = pika.PlainCredentials('rabbitmq', '27474129') conn_params = pika.ConnectionParameters(host="http://rabbitmq", port=5672) connection = pika.BlockingConnection(conn_params) channel = connection.channel() I'm getting an error: socket.gaierror: [Errno -2] Name or service not known I've tried: Use ports: 15672 and 5672. Use host: "http://192.168.220.10". A: parameters = pika.URLParameters('amqp://rabbitmq:27474129@rabbitmq:5672/%2F') connection = pika.BlockingConnection(parameters)
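Equivalently, the original ConnectionParameters code works once the scheme is dropped from the host and the credentials are actually passed in; the gaierror comes from pika trying to resolve the literal string "http://rabbitmq" as a hostname (AMQP is not HTTP). A sketch using the values from the question:

import pika

credentials = pika.PlainCredentials('rabbitmq', '27474129')
conn_params = pika.ConnectionParameters(host='rabbitmq',          # bare service name, no http:// scheme
                                        port=5672,
                                        credentials=credentials)
connection = pika.BlockingConnection(conn_params)
channel = connection.channel()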
socket.gaierror: [Errno -2] Name or service not known docker-compose + rabbitmq + pika (Python)
I have docker container with rabbitmq, it has an address: 192.168.220.10, domain in my local etc/hosts: rabbitmq So, Im trying to use pika (python) from another container with fastapi app, it has an address: 192.168.220.5. Of course, all containers have a network: net: driver: bridge ipam: config: - subnet: 192.168.220.0/24 Below rabbitmq container code from docker-compose.yml rabbitmq: container_name: rabbitmq image: rabbitmq:3-management-alpine restart: always ports: - "5672:5672" - "15672:15672" environment: RABBITMQ_DEFAULT_USER: rabbitmq RABBITMQ_DEFAULT_PASS: 27474129 networks: net: ipv4_address: 192.168.220.10 So theres a problem at the bottom credentials = pika.PlainCredentials('rabbitmq', '27474129') conn_params = pika.ConnectionParameters(host="http://rabbitmq", port=5672) connection = pika.BlockingConnection(conn_params) channel = connection.channel() Im getting an error: socket.gaierror: [Errno -2] Name or service not known I`ve tried: Use ports: 15672 and 5672. Use host: "http://192.168.220.10
[ "parameters = pika.URLParameters('amqp://rabbitmq:27474129@rabbitmq:5672/%2F')\nconnection = pika.BlockingConnection(parameters)\n\n" ]
[ 0 ]
[]
[]
[ "docker_compose", "docker_networking", "pika", "python", "rabbitmq" ]
stackoverflow_0074631641_docker_compose_docker_networking_pika_python_rabbitmq.txt
Q: How do I solve a two-sum problem with multiple solutions? So essentially it is a simple two sum problem but there are multiple solutions. At the end I would like to return all pairs that sum up to the target within a given list and then tally the total number of pairs at the end and return that as well. Currently can only seem to return 1 pair of numbers. So far my solution has to been to try and implement a function that counts the amount of additions done, and while that number is less than the total length of the list the code would continue to iterate. This did not prove effective as it would still not take into account other solutions. Any help would be greatly appreciated A: I took your code and did a couple of tweaks to where summations were being tested and how the data was being stored. Following is your tweaked code. def suminlist(mylist,target): sumlist = [] count = 0 for i in range(len(mylist)): for x in range(i+1,len(mylist)): sum = mylist[i] + mylist[x] if sum == target: count += 1 worklist = [] worklist.append(mylist[i]) worklist.append(mylist[x]) sumlist.append(worklist) return count, sumlist list = [0, 5, 4, -6, 2, 7, 13, 3, 1] print(suminlist(list,4)) Things to point out. The sumlist variable is defined as a list with no initial values. When a summation of two values in the passed list equate to the test value, they are placed into a new interim list and then that list is appended to the sumlist list along with incrementing the count value. Once all list combinations are identified, the count value and sumlist are returned to the calling statement. Following was the test output at the terminal for your list. @Dev:~/Python_Programs/SumList$ python3 SumList.py (2, [[0, 4], [3, 1]]) To split the count value out from the list, you might consider splitting the returned data as noted in the following reference Returning Multiple Values. Give that a try to see if it meets the spirit of your project. A: You can use the itertools module for this job. my_list = [1, 2, 3, 4] target = 3 out = [x for x in itertools.combinations(my_list, r=2) if sum(x) == target] print(out) >>> [(0, 3), (1, 2)] If you feel like using a python standard library import is cheating, the the official documentation linked above showcases example code for a "low level" python implementation. A: Issue: The issue for returning one set of possible several sets remains in the first return line (return sumlist). Based on the code, the function will automatically ends the function as the first set of value that their sum is the same as the target value. Therefore, we need to adjust it. Adjustment: I add a list(finallist[]) at the begining of the function for collecting all the applicable sets that can sum up to the target value. Then, I add a list(list[]) right after the if statement (*since I create an empty list(list[]) right after the if statement, when any sum of two values fulfills the target value, the function will empty the list again to store the new set of two values and append to the finallist[] again). Hence, as long as a set of two numbers can sum up to the target value, we can append them to the list(list[]). Accordingly, I add two more lines of code to append two values into the list(list[]). At the end, I append this list(list[]) to finallist[]. Also, I move the return statement to the final line and adjust the spacing. After this adjustment, the function will not end right after discovering the first possible set of values. 
Instead, the function will iterate repeatedly until getting all sets of the values and storing in the finalist[]. Originally, the function puts the return statement (return -1) at the end of the function for the situation that none of the sets can sum up to the target value. However, after the previous adjustment, the original return statement (return -1) will not have the opportunity to function as everything will end in the previous return line (return finallist). Therefore, I change it to the else part in the if statement (*meaning: when none of the sum of two values adds up to the target value, we will return 'No two values in the list can add up to the target value.') Changes in Function: def suminlist(mylist,target): # count = 0 # delete # while count < len(mylist): # delete finallist=[] # add for i in range(len(mylist)): for x in range(i+1,len(mylist)): sum = mylist[i]+mylist[x] # count = count + 1 # delete if sum == target: # sumlist = mylist[i],mylist[x] # delete # return sumlist # delete list=[] # add list.append(mylist[i]) # add list.append(mylist[x]) # add finallist.append(list) # add else: # add return 'No two values in the list can add up to the target value.' # add return finallist # add # return -1 # delete Final Version: def suminlist(mylist,target): finallist=[] for i in range(len(mylist)): for x in range(i+1,len(mylist)): sum = mylist[i]+mylist[x] if sum == target: list=[] list.append(mylist[i]) list.append(mylist[x]) finallist.append(list) else: return 'No two values in the list can add up to the target value.' return finallist Test Code and Output: list = [0, 5, 4, -6, 2, 7, 13, 3, 1] print(suminlist(list,100)) # Output: No two values in the list can add up to the target value. print(suminlist(list,4)) # Output: [[0, 4], [3, 1]]
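A compact sketch that returns both the number of pairs and the pairs themselves, using the same list and target as the answers above:

from itertools import combinations

def suminlist(mylist, target):
    # every unordered pair of values whose sum equals the target
    pairs = [list(pair) for pair in combinations(mylist, 2) if sum(pair) == target]
    return len(pairs), pairs

count, pairs = suminlist([0, 5, 4, -6, 2, 7, 13, 3, 1], 4)
print(count, pairs)   # 2 [[0, 4], [3, 1]]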
How do I solve a two-sum problem with multiple solutions?
So essentially it is a simple two sum problem but there are multiple solutions. At the end I would like to return all pairs that sum up to the target within a given list and then tally the total number of pairs at the end and return that as well. Currently can only seem to return 1 pair of numbers. So far my solution has to been to try and implement a function that counts the amount of additions done, and while that number is less than the total length of the list the code would continue to iterate. This did not prove effective as it would still not take into account other solutions. Any help would be greatly appreciated
[ "I took your code and did a couple of tweaks to where summations were being tested and how the data was being stored. Following is your tweaked code.\ndef suminlist(mylist,target):\n sumlist = []\n count = 0\n for i in range(len(mylist)):\n for x in range(i+1,len(mylist)):\n sum = mylist[i] + mylist[x]\n if sum == target:\n count += 1\n worklist = []\n worklist.append(mylist[i])\n worklist.append(mylist[x])\n sumlist.append(worklist)\n return count, sumlist\n\nlist = [0, 5, 4, -6, 2, 7, 13, 3, 1] \nprint(suminlist(list,4))\n\nThings to point out.\n\nThe sumlist variable is defined as a list with no initial values.\nWhen a summation of two values in the passed list equate to the test value, they are placed into a new interim list and then that list is appended to the sumlist list along with incrementing the count value.\nOnce all list combinations are identified, the count value and sumlist are returned to the calling statement.\n\nFollowing was the test output at the terminal for your list.\n@Dev:~/Python_Programs/SumList$ python3 SumList.py \n(2, [[0, 4], [3, 1]])\n\nTo split the count value out from the list, you might consider splitting the returned data as noted in the following reference Returning Multiple Values.\nGive that a try to see if it meets the spirit of your project.\n", "You can use the itertools module for this job.\nmy_list = [1, 2, 3, 4]\ntarget = 3\n\nout = [x for x in itertools.combinations(my_list, r=2) if sum(x) == target]\n\nprint(out)\n>>> [(0, 3), (1, 2)]\n\nIf you feel like using a python standard library import is cheating, the the official documentation linked above showcases example code for a \"low level\" python implementation.\n", "Issue:\nThe issue for returning one set of possible several sets remains in the first return line (return sumlist). Based on the code, the function will automatically ends the function as the first set of value that their sum is the same as the target value. Therefore, we need to adjust it.\nAdjustment:\n\nI add a list(finallist[]) at the begining of the function for collecting all the applicable sets that can sum up to the target value. Then, I add a list(list[]) right after the if statement (*since I create an empty list(list[]) right after the if statement, when any sum of two values fulfills the target value, the function will empty the list again to store the new set of two values and append to the finallist[] again). Hence, as long as a set of two numbers can sum up to the target value, we can append them to the list(list[]). Accordingly, I add two more lines of code to append two values into the list(list[]). At the end, I append this list(list[]) to finallist[]. Also, I move the return statement to the final line and adjust the spacing. After this adjustment, the function will not end right after discovering the first possible set of values. Instead, the function will iterate repeatedly until getting all sets of the values and storing in the finalist[].\nOriginally, the function puts the return statement (return -1) at the end of the function for the situation that none of the sets can sum up to the target value. However, after the previous adjustment, the original return statement (return -1) will not have the opportunity to function as everything will end in the previous return line (return finallist). 
Therefore, I change it to the else part in the if statement (*meaning: when none of the sum of two values adds up to the target value, we will return 'No two values in the list can add up to the target value.')\n\nChanges in Function:\ndef suminlist(mylist,target):\n # count = 0 # delete\n # while count < len(mylist): # delete\n finallist=[] # add\n for i in range(len(mylist)):\n for x in range(i+1,len(mylist)):\n sum = mylist[i]+mylist[x]\n # count = count + 1 # delete\n if sum == target:\n # sumlist = mylist[i],mylist[x] # delete\n # return sumlist # delete\n list=[] # add\n list.append(mylist[i]) # add\n list.append(mylist[x]) # add\n finallist.append(list) # add\n else: # add\n return 'No two values in the list can add up to the target value.' # add\n return finallist # add\n # return -1 # delete\n\nFinal Version:\ndef suminlist(mylist,target):\n finallist=[]\n for i in range(len(mylist)):\n for x in range(i+1,len(mylist)):\n sum = mylist[i]+mylist[x]\n if sum == target:\n list=[]\n list.append(mylist[i])\n list.append(mylist[x])\n finallist.append(list)\n else:\n return 'No two values in the list can add up to the target value.'\n return finallist\n\nTest Code and Output:\nlist = [0, 5, 4, -6, 2, 7, 13, 3, 1] \nprint(suminlist(list,100))\n# Output: No two values in the list can add up to the target value.\nprint(suminlist(list,4))\n# Output: [[0, 4], [3, 1]]\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "list", "python", "sum" ]
stackoverflow_0074621971_list_python_sum.txt
Q: Catch sklearn joblib output to python logging When using sklearn, I want to see the output. Therefore, I use verbose when available. Generally, I want timestamps, process ids etc, so I use the python logging module when I can. Getting sklearn output to the logging module has been done before, e.g. https://stackoverflow.com/a/50803365 However, I want to run in parallell, and joblib also use sys.stdout and sys.stderr directly. Therefore, my attempt (see below) does not work. import logging import sys import contextlib class LogAdapter: def __init__(self,level,logger) -> None: if level == 'INFO': self.report = logger.info elif level == 'ERROR': self.report = logger.error def write(self,msg): stripped = msg.rstrip() if len(stripped) > 0: self.report(stripped) def flush(self): pass @contextlib.contextmanager def redirect_to_log(logger): originals = sys.stdout, sys.stderr sys.stdout = LogAdapter(level='INFO',logger=logger) sys.stderr = LogAdapter(level='ERROR',logger=logger) yield sys.stdout, sys.stderr = originals def test_case(): from sklearn.ensemble import RandomForestClassifier from sklearn.utils import parallel_backend logger = logging.getLogger(__name__) logging.basicConfig( level=logging.DEBUG, format="%(process)d | %(asctime)s | %(name)14s | %(levelname)7s | %(message)s", ) for backend_name in ['loky','threading']: logger.info(f"Testing backend {backend_name}") with parallel_backend(backend_name),redirect_to_log(logger): clf = RandomForestClassifier(2, verbose=4) X = [[0, 0], [1, 1]] Y = [0, 1] clf = clf.fit(X, Y) if __name__ == "__main__": test_case() I get output 19320 | 2022-11-30 17:49:16,938 | __main__ | INFO | Testing backend loky 19320 | 2022-11-30 17:49:16,951 | __main__ | ERROR | [Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers. building tree 1 of 2 building tree 2 of 2 19320 | 2022-11-30 17:49:18,923 | __main__ | ERROR | [Parallel(n_jobs=-1)]: Done 2 out of 2 | elapsed: 1.9s remaining: 0.0s 19320 | 2022-11-30 17:49:18,923 | __main__ | ERROR | [Parallel(n_jobs=-1)]: Done 2 out of 2 | elapsed: 1.9s finished 19320 | 2022-11-30 17:49:18,924 | __main__ | INFO | Testing backend threading 19320 | 2022-11-30 17:49:18,925 | __main__ | ERROR | [Parallel(n_jobs=-1)]: Using backend ThreadingBackend with 8 concurrent workers. 19320 | 2022-11-30 17:49:18,932 | __main__ | INFO | building tree 1 of 2 19320 | 2022-11-30 17:49:18,932 | __main__ | INFO | building tree 2 of 2 19320 | 2022-11-30 17:49:18,934 | __main__ | ERROR | [Parallel(n_jobs=-1)]: Done 2 out of 2 | elapsed: 0.0s remaining: 0.0s 19320 | 2022-11-30 17:49:18,934 | __main__ | ERROR | [Parallel(n_jobs=-1)]: Done 2 out of 2 | elapsed: 0.0s finished As you can see, it works nicely with the threading backend, but not with the loky backend. Loky is for multiprocessing, and the context manager of mine only catch stdout and stderr in the main process. How do I capture stdout of child processes and put them into standard python logging? If it was a plain python subprocess call, I could catch the IO by providing pipes as in https://codereview.stackexchange.com/questions/6567/redirecting-subprocesses-output-stdout-and-stderr-to-the-logging-module Others have tried and failed before me with loky, I realize. One option is to make sure a "setup logging" call is attached to each job pushed via joblib. That could work, but sklearn does not expose that level of detail, by what I know. See e.g. https://github.com/joblib/joblib/issues/1017 A: I did come up with a workaround using the dask backend instead. 
I define a worker plugin that essentially is my contextmanager import dask.distributed class LogPlugin(dask.distributed.WorkerPlugin): name = "LoggerRedirector" def setup(self, worker: dask.distributed.Worker): self.originals = sys.stdout, sys.stderr init_logging() sys.stdout = LogAdapter(level='INFO',logger=logging.getLogger(__name__)) sys.stderr = LogAdapter(level='ERROR',logger=logging.getLogger(__name__)) def teardown(self, worker: dask.distributed.Worker): sys.stdout, sys.stderr = self.originals and then register it in a dask backend client = dask.distributed.Client() client.register_worker_plugin(LogPlugin()) I can now get the desired output with multiprocessing. It is an acceptable solution, but somewhat annoying, as dask has larger overhead than loky, and imposes a new dependency for me. The new full code is: import logging import sys import contextlib class LogAdapter: def __init__(self,level,logger) -> None: if level == 'INFO': self.report = logger.info elif level == 'ERROR': self.report = logger.error def write(self,msg): stripped = msg.rstrip() if len(stripped) > 0: self.report(stripped) def flush(self): pass @contextlib.contextmanager def redirect_to_log(logger): originals = sys.stdout, sys.stderr sys.stdout = LogAdapter(level='INFO',logger=logger) sys.stderr = LogAdapter(level='ERROR',logger=logger) yield sys.stdout, sys.stderr = originals def init_logging(): logging.basicConfig( level=logging.DEBUG, format="%(process)d | %(asctime)s | %(name)14s | %(levelname)7s | %(message)s", ) import dask.distributed class LogPlugin(dask.distributed.WorkerPlugin): name = "LoggerRedirector" def setup(self, worker: dask.distributed.Worker): self.originals = sys.stdout, sys.stderr init_logging() sys.stdout = LogAdapter(level='INFO',logger=logging.getLogger(__name__)) sys.stderr = LogAdapter(level='ERROR',logger=logging.getLogger(__name__)) def teardown(self, worker: dask.distributed.Worker): sys.stdout, sys.stderr = self.originals def test_case(): from sklearn.ensemble import RandomForestClassifier from sklearn.utils import parallel_backend client = dask.distributed.Client() client.register_worker_plugin(LogPlugin()) logger = logging.getLogger(__name__) init_logging() for backend_name in ['loky','threading','dask']: logger.info(f"Testing backend {backend_name}") with parallel_backend(backend_name),redirect_to_log(logger): clf = RandomForestClassifier(2, verbose=4) X = [[0, 0], [1, 1]] Y = [0, 1] clf = clf.fit(X, Y) if __name__ == "__main__": test_case()
Catch sklearn joblib output to python logging
When using sklearn, I want to see the output. Therefore, I use verbose when available. Generally, I want timestamps, process ids etc, so I use the python logging module when I can. Getting sklearn output to the logging module has been done before, e.g. https://stackoverflow.com/a/50803365 However, I want to run in parallell, and joblib also use sys.stdout and sys.stderr directly. Therefore, my attempt (see below) does not work. import logging import sys import contextlib class LogAdapter: def __init__(self,level,logger) -> None: if level == 'INFO': self.report = logger.info elif level == 'ERROR': self.report = logger.error def write(self,msg): stripped = msg.rstrip() if len(stripped) > 0: self.report(stripped) def flush(self): pass @contextlib.contextmanager def redirect_to_log(logger): originals = sys.stdout, sys.stderr sys.stdout = LogAdapter(level='INFO',logger=logger) sys.stderr = LogAdapter(level='ERROR',logger=logger) yield sys.stdout, sys.stderr = originals def test_case(): from sklearn.ensemble import RandomForestClassifier from sklearn.utils import parallel_backend logger = logging.getLogger(__name__) logging.basicConfig( level=logging.DEBUG, format="%(process)d | %(asctime)s | %(name)14s | %(levelname)7s | %(message)s", ) for backend_name in ['loky','threading']: logger.info(f"Testing backend {backend_name}") with parallel_backend(backend_name),redirect_to_log(logger): clf = RandomForestClassifier(2, verbose=4) X = [[0, 0], [1, 1]] Y = [0, 1] clf = clf.fit(X, Y) if __name__ == "__main__": test_case() I get output 19320 | 2022-11-30 17:49:16,938 | __main__ | INFO | Testing backend loky 19320 | 2022-11-30 17:49:16,951 | __main__ | ERROR | [Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers. building tree 1 of 2 building tree 2 of 2 19320 | 2022-11-30 17:49:18,923 | __main__ | ERROR | [Parallel(n_jobs=-1)]: Done 2 out of 2 | elapsed: 1.9s remaining: 0.0s 19320 | 2022-11-30 17:49:18,923 | __main__ | ERROR | [Parallel(n_jobs=-1)]: Done 2 out of 2 | elapsed: 1.9s finished 19320 | 2022-11-30 17:49:18,924 | __main__ | INFO | Testing backend threading 19320 | 2022-11-30 17:49:18,925 | __main__ | ERROR | [Parallel(n_jobs=-1)]: Using backend ThreadingBackend with 8 concurrent workers. 19320 | 2022-11-30 17:49:18,932 | __main__ | INFO | building tree 1 of 2 19320 | 2022-11-30 17:49:18,932 | __main__ | INFO | building tree 2 of 2 19320 | 2022-11-30 17:49:18,934 | __main__ | ERROR | [Parallel(n_jobs=-1)]: Done 2 out of 2 | elapsed: 0.0s remaining: 0.0s 19320 | 2022-11-30 17:49:18,934 | __main__ | ERROR | [Parallel(n_jobs=-1)]: Done 2 out of 2 | elapsed: 0.0s finished As you can see, it works nicely with the threading backend, but not with the loky backend. Loky is for multiprocessing, and the context manager of mine only catch stdout and stderr in the main process. How do I capture stdout of child processes and put them into standard python logging? If it was a plain python subprocess call, I could catch the IO by providing pipes as in https://codereview.stackexchange.com/questions/6567/redirecting-subprocesses-output-stdout-and-stderr-to-the-logging-module Others have tried and failed before me with loky, I realize. One option is to make sure a "setup logging" call is attached to each job pushed via joblib. That could work, but sklearn does not expose that level of detail, by what I know. See e.g. https://github.com/joblib/joblib/issues/1017
[ "I did come up with a workaround using the dask backend instead. I define a worker plugin that essentially is my contextmanager\nimport dask.distributed\nclass LogPlugin(dask.distributed.WorkerPlugin):\n name = \"LoggerRedirector\"\n\n def setup(self, worker: dask.distributed.Worker):\n self.originals = sys.stdout, sys.stderr\n init_logging()\n sys.stdout = LogAdapter(level='INFO',logger=logging.getLogger(__name__))\n sys.stderr = LogAdapter(level='ERROR',logger=logging.getLogger(__name__))\n\n def teardown(self, worker: dask.distributed.Worker):\n sys.stdout, sys.stderr = self.originals\n\nand then register it in a dask backend\nclient = dask.distributed.Client()\nclient.register_worker_plugin(LogPlugin())\n\nI can now get the desired output with multiprocessing.\nIt is an acceptable solution, but somewhat annoying, as dask has larger overhead than loky, and imposes a new dependency for me.\nThe new full code is:\n\nimport logging\nimport sys\nimport contextlib\n\nclass LogAdapter:\n def __init__(self,level,logger) -> None:\n if level == 'INFO':\n self.report = logger.info\n elif level == 'ERROR':\n self.report = logger.error\n\n def write(self,msg):\n stripped = msg.rstrip()\n if len(stripped) > 0:\n self.report(stripped)\n\n def flush(self):\n pass\n\[email protected]\ndef redirect_to_log(logger):\n originals = sys.stdout, sys.stderr\n sys.stdout = LogAdapter(level='INFO',logger=logger)\n sys.stderr = LogAdapter(level='ERROR',logger=logger)\n yield\n sys.stdout, sys.stderr = originals\n\ndef init_logging():\n logging.basicConfig(\n level=logging.DEBUG,\n format=\"%(process)d | %(asctime)s | %(name)14s | %(levelname)7s | %(message)s\",\n )\n\nimport dask.distributed\nclass LogPlugin(dask.distributed.WorkerPlugin):\n name = \"LoggerRedirector\"\n\n def setup(self, worker: dask.distributed.Worker):\n self.originals = sys.stdout, sys.stderr\n init_logging()\n sys.stdout = LogAdapter(level='INFO',logger=logging.getLogger(__name__))\n sys.stderr = LogAdapter(level='ERROR',logger=logging.getLogger(__name__))\n\n def teardown(self, worker: dask.distributed.Worker):\n sys.stdout, sys.stderr = self.originals\n\ndef test_case():\n from sklearn.ensemble import RandomForestClassifier\n from sklearn.utils import parallel_backend\n client = dask.distributed.Client()\n client.register_worker_plugin(LogPlugin())\n logger = logging.getLogger(__name__)\n init_logging()\n for backend_name in ['loky','threading','dask']:\n logger.info(f\"Testing backend {backend_name}\")\n with parallel_backend(backend_name),redirect_to_log(logger):\n clf = RandomForestClassifier(2, verbose=4)\n X = [[0, 0], [1, 1]]\n Y = [0, 1]\n clf = clf.fit(X, Y)\n\nif __name__ == \"__main__\":\n test_case()\n\n" ]
[ 0 ]
[]
[]
[ "joblib", "python", "python_logging", "scikit_learn" ]
stackoverflow_0074631614_joblib_python_python_logging_scikit_learn.txt
Q: Why does GPU memory increase when recreating and reassigning a JAX numpy array to the same variable name? When I recreate and reassign a JAX np array to the same variable name, for some reason the GPU memory nearly doubles the first recreation and then stays stable for subsequent recreations/reassignments. Why does this happen and is this generally expected behavior for JAX arrays? Fully runnable minimal example: https://colab.research.google.com/drive/1piUvyVylRBKm1xb1WsocsSVXJzvn5bdI?usp=sharing. For posterity in case colab goes down: %env XLA_PYTHON_CLIENT_PREALLOCATE=false import jax from jax import numpy as jnp from jax import random # First creation of jnp array x = jnp.ones(shape=(int(1e8),), dtype=float) get_gpu_memory() # the memory usage from the first call is 618 MB # Second creation of jnp array, reassigning it to the same variable name x = jnp.ones(shape=(int(1e8),), dtype=float) get_gpu_memory() # the memory usage is now 1130 MB - almost double! # Third creation of jnp array, reassigning it to the same variable name x = jnp.ones(shape=(int(1e8),), dtype=float) get_gpu_memory() # the memory usage is stable at 1130 MB. Thank you! A: The reason for this behavior comes from the interaction of several things: Without pre-allocation, the GPU memory usage will grow as needed, but will not shrink when buffers are deleted. When you reassign a python variable, the old value still exists in memory until the Python garbage collector notices it is no longer referenced, and deletes it. This will take a small amount of time to occur in the background (you can call import gc; gc.collect() to force this to happen at any point). JAX sends instructions to the GPU asynchronously, meaning that once Python garbage-collects a GPU-backed value, the Python script may continue running for a short time before the corresponding buffer is actually removed from the device. All of this means there's some delay between unassigning the previous x value, and that memory being freed on the device, and if you're immediately allocating a new value, the device will likely expand its memory allocation to fit the new array before the old one is deleted. So why does the memory use stay constant on the third call? Well, by this time the first allocation has been removed, and so there is already space for the third allocation without growing the memory footprint. With these things in mind, you can keep the allocation constant by putting a delay between deleting the old value and creating the new value; i.e. replace this: x = jnp.ones(shape=(int(1e8),), dtype=float) with this: del x time.sleep(1) x = jnp.ones(shape=(int(1e8),), dtype=float) When I run it this way, I see constant memory usage at 618MiB.
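A sketch that combines the points from the answer; the one-second sleep is an arbitrary grace period, and block_until_ready only makes sure the first allocation has really happened before the old reference is dropped:

import gc
import time
from jax import numpy as jnp

x = jnp.ones(shape=(int(1e8),), dtype=float)
x.block_until_ready()   # wait until the buffer actually exists on the device

del x                   # drop the only Python reference to the old array
gc.collect()            # force the collector to release the buffer now
time.sleep(1)           # give the asynchronous runtime time to free it on the GPU

x = jnp.ones(shape=(int(1e8),), dtype=float)   # can reuse the freed space instead of growing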
Why does GPU memory increase when recreating and reassigning a JAX numpy array to the same variable name?
When I recreate and reassign a JAX np array to the same variable name, for some reason the GPU memory nearly doubles the first recreation and then stays stable for subsequent recreations/reassignments. Why does this happen and is this generally expected behavior for JAX arrays? Fully runnable minimal example: https://colab.research.google.com/drive/1piUvyVylRBKm1xb1WsocsSVXJzvn5bdI?usp=sharing. For posterity in case colab goes down: %env XLA_PYTHON_CLIENT_PREALLOCATE=false import jax from jax import numpy as jnp from jax import random # First creation of jnp array x = jnp.ones(shape=(int(1e8),), dtype=float) get_gpu_memory() # the memory usage from the first call is 618 MB # Second creation of jnp array, reassigning it to the same variable name x = jnp.ones(shape=(int(1e8),), dtype=float) get_gpu_memory() # the memory usage is now 1130 MB - almost double! # Third creation of jnp array, reassigning it to the same variable name x = jnp.ones(shape=(int(1e8),), dtype=float) get_gpu_memory() # the memory usage is stable at 1130 MB. Thank you!
[ "The reason for this behavior comes from the interaction of several things:\n\nWithout pre-allocation, the GPU memory usage will grow as needed, but will not shrink when buffers are deleted.\n\nWhen you reassign a python variable, the old value still exists in memory until the Python garbage collector notices it is no longer referenced, and deletes it. This will take a small amount of time to occur in the background (you can call import gc; gc.collect() to force this to happen at any point).\n\nJAX sends instructions to the GPU asynchronously, meaning that once Python garbage-collects a GPU-backed value, the Python script may continue running for a short time before the corresponding buffer is actually removed from the device.\n\n\nAll of this means there's some delay between unassigning the previous x value, and that memory being freed on the device, and if you're immediately allocating a new value, the device will likely expand its memory allocation to fit the new array before the old one is deleted.\nSo why does the memory use stay constant on the third call? Well, by this time the first allocation has been removed, and so there is already space for the third allocation without growing the memory footprint.\nWith these things in mind, you can keep the allocation constant by putting a delay between deleting the old value and creating the new value; i.e. replace this:\nx = jnp.ones(shape=(int(1e8),), dtype=float)\n\nwith this:\ndel x\ntime.sleep(1)\nx = jnp.ones(shape=(int(1e8),), dtype=float)\n\nWhen I run it this way, I see constant memory usage at 618MiB.\n" ]
[ 1 ]
[]
[]
[ "gpu", "jax", "memory", "nvidia", "python" ]
stackoverflow_0074628777_gpu_jax_memory_nvidia_python.txt
Q: How to generate a ripple shape pattern using python I am looking to create a repetitive pattern from a single shape (in the example below, the starting shape would be the smallest centre star) using Python. The pattern would look something like this: To give context, I am working on a project that uses a camera to detect a shape on a rectangle of sand. The idea is that the ripple pattern is drawn out around the object using a pen plotter-type mechanism in the sand to create a zen garden-type feature. Currently, I am running the Canny edge detection algorithm to create a png (in this example it would be the smallest star). I am able to convert this into an SVG using potrace, but am not sure how to create the ripple pattern (and at what stage, i.e. before converting to an SVG, or after). Any help would be appreciated! A: Assing you are using turtle (very beginner friendly) you can use this: import turtle, math turtle.title("Stars!") t = turtle.Turtle() t.speed(900) # make it go fast t.hideturtle() # hide turtle t.width(1.5) # make lines nice & thick def drawstar(size): t.up() # make turtle not draw while repositioning t.goto(0, size * math.sin(144)) # center star at 0, 0 t.setheading(216); # make star flat t.down() # make turtle draw for i in range(5): # draw 5 spikes t.forward(size) t.right(144) t.forward(size) t.right(288) drawstar(250) drawstar(200) drawstar(150) drawstar(100) input() # stop turtle from exiting which creates this: A: Here's how I did it: In the end, I ran a vertex detection algorithm to calculate the shape's vertices. Then, I sorted them in a clockwise order around the centroid coordinate. Using the svgwrite library, I recreated the shapes using lines. I 'drew' a circle with a set radius around each vertex and calculated the intersection between the circle and a straight line from the centroid through the vertex. This gave me two potential solutions (a +ve and a -ve). I chose the point furthest away from the centroid, iterated this method for each vertex and joined the points to create an outline of the shape.
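A rough sketch of the vertex-offset idea from the second answer: push every vertex of the detected outline a fixed distance further out along the ray from the centroid through that vertex, once per ripple. The star coordinates here are hypothetical placeholders for the vertices the camera pipeline would produce:

import math

def offset_polygon(vertices, distance):
    # centroid of the detected shape
    cx = sum(x for x, _ in vertices) / len(vertices)
    cy = sum(y for _, y in vertices) / len(vertices)
    ripple = []
    for x, y in vertices:
        dx, dy = x - cx, y - cy
        length = math.hypot(dx, dy)
        # move the vertex 'distance' further out along the centroid-to-vertex direction
        ripple.append((x + dx / length * distance, y + dy / length * distance))
    return ripple

star = [(0, 100), (22, 31), (95, 31), (36, -12), (59, -81),
        (0, -38), (-59, -81), (-36, -12), (-95, 31), (-22, 31)]   # placeholder outline
rings = [offset_polygon(star, d) for d in (20, 40, 60)]           # three ripples, 20 units apart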
How to generate a ripple shape pattern using python
I am looking to create a repetitive pattern from a single shape (in the example below, the starting shape would be the smallest centre star) using Python. The pattern would look something like this: To give context, I am working on a project that uses a camera to detect a shape on a rectangle of sand. The idea is that the ripple pattern is drawn out around the object using a pen plotter-type mechanism in the sand to create a zen garden-type feature. Currently, I am running the Canny edge detection algorithm to create a png (in this example it would be the smallest star). I am able to convert this into an SVG using potrace, but am not sure how to create the ripple pattern (and at what stage, i.e. before converting to an SVG, or after). Any help would be appreciated!
[ "Assing you are using turtle (very beginner friendly) you can use this:\nimport turtle, math\n\nturtle.title(\"Stars!\")\n\nt = turtle.Turtle()\nt.speed(900) # make it go fast\nt.hideturtle() # hide turtle\nt.width(1.5) # make lines nice & thick\n\ndef drawstar(size):\n t.up() # make turtle not draw while repositioning\n t.goto(0, size * math.sin(144)) # center star at 0, 0\n t.setheading(216); # make star flat\n t.down() # make turtle draw\n for i in range(5): # draw 5 spikes\n t.forward(size)\n t.right(144)\n t.forward(size)\n t.right(288)\n \ndrawstar(250)\ndrawstar(200)\ndrawstar(150)\ndrawstar(100)\n\ninput() # stop turtle from exiting\n\nwhich creates this:\n\n", "Here's how I did it:\n\nIn the end, I ran a vertex detection algorithm to calculate the shape's vertices.\nThen, I sorted them in a clockwise order around the centroid coordinate. Using the svgwrite library, I recreated the shapes using lines.\nI 'drew' a circle with a set radius around each vertex and calculated the intersection between the circle and a straight line from the centroid through the vertex.\nThis gave me two potential solutions (a +ve and a -ve). I chose the point furthest away from the centroid, iterated this method for each vertex and joined the points to create an outline of the shape.\n\n\n" ]
[ 0, 0 ]
[]
[]
[ "image", "python", "shapes" ]
stackoverflow_0074578308_image_python_shapes.txt
Q: Training and validation loss history for MLPRegressor I am using an MLPRegressor to solve a regression problem and would like to plot the loss function, i.e., by how much the loss decreases in each training epoch, for both training and validation. Following the solution here, I have been able to plot the validation loss with loss_curve: pd.DataFrame(mlp.loss_curve_).plot(xlabel="Epoch", ylabel="Loss", legend=False) This only lets me plot the validation loss for now: Is there a way I can also include training loss and a legend denoting both lines?
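Note that loss_curve_ is the training loss per epoch, not the validation loss; scikit-learn only records a per-epoch validation curve (validation_scores_, which holds R^2 scores rather than a loss) when early_stopping=True sets aside a validation split. A sketch along those lines, assuming X_train and y_train already exist:

import pandas as pd
from sklearn.neural_network import MLPRegressor

mlp = MLPRegressor(early_stopping=True, validation_fraction=0.1, max_iter=500)
mlp.fit(X_train, y_train)   # X_train and y_train are assumed to be defined elsewhere

history = pd.DataFrame({
    "training loss": mlp.loss_curve_,            # one value per epoch
    "validation R^2": mlp.validation_scores_,    # one value per epoch, higher is better
})
history.plot(xlabel="Epoch", secondary_y=["validation R^2"])

Plotting a DataFrame with two columns also produces the legend for both lines automatically.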
Training and validation loss history for MLPRegressor
I am using an MLPRegressor to solve a regression problem and would like to plot the loss function, i.e., by how much the loss decreases in each training epoch, for both training and validation. Following the solution here, I have been able to plot the validation loss with loss_curve: pd.DataFrame(mlp.loss_curve_).plot(xlabel="Epoch", ylabel="Loss", legend=False) This only lets me plot the validation loss for now: Is there a way I can also include training loss and a legend denoting both lines?
[]
[]
[ "Yes, there is a way.\nYou can just create two numpy arrays with the information of both train and validation loss and plot two lines in the same graph as in:\nPlotting multiple line graphs in matplotlib\n" ]
[ -1 ]
[ "python", "scikit_learn" ]
stackoverflow_0074631843_python_scikit_learn.txt
Q: How do I combine elements from two loops without issues? When I execute this code... from bs4 import BeautifulSoup with open("games.html", "r") as page: doc = BeautifulSoup(page, "html.parser") titles = doc.select("a.title") prices = doc.select("span.price-inner") for game_soup in doc.find_all("div", {"class": "game-options-wrapper"}): game_ids = (game_soup.button.get("data-game-id")) for title, price_official, price_lowest in zip(titles, prices[::2], prices[1::2]): print(title.text + ',' + str(price_official.text.replace('$', '').replace('~', '')) + ',' + str( price_lowest.text.replace('$', '').replace('~', ''))) The output is... 153356 80011 130187 119003 73502 156474 96592 154207 155123 152790 165013 110837 Call of Duty: Modern Warfare II (2022),69.99,77.05 Red Dead Redemption 2,14.85,13.79 God of War,28.12,22.03 ELDEN RING,50.36,48.10 Cyberpunk 2077,29.99,28.63 EA SPORTS FIFA 23,41.99,39.04 Warhammer 40,000: Darktide,39.99,45.86 Marvels Spider-Man Remastered,30.71,27.07 Persona 5 Royal,37.79,43.32 The Callisto Protocol,59.99,69.41 Need for Speed Unbound,69.99,42.29 Days Gone,15.00,9.01 But I'm trying to get the value next to the other ones on the same line Expected output: Call of Duty: Modern Warfare II (2022),69.99,77.05,153356 Red Dead Redemption 2,14.85,13.79,80011 ... Even when adding game_ids to print(), it spams the same game id for each line. How can I go about resolving this issue? HTML file: https://jsfiddle.net/m3hqy54x/ A: I feel like all 3 details (title, price_official, price_lowest) are probably all in a shared container. It would be better to loop through these containers and select the details as sets from each container to make sure the wight prices and titles are being paired up, but I can't tell you how to do that without seeing at least a snippet from (or all of) "games.html".... 
Anyway, assuming that '110837\nCall of Duty: Modern Warfare II (2022)' is from the first title here, you can rewrite your last loop as something like: for z in zip(titles, prices[::2], prices[1::2]): z, lw = list(z), '' for i in len(z): if i == 0: # title z[0] = ' '.join(w for w in z[0].text.split('\n', 1)[-1] if w) if '\n' in z[0].text: lw = z[0].text.split('\n', 1)[0] continue z[i] = z[i].text.replace('$', '').replace('~', '') print(','.join(z+[lw])) Added EDIT: After seeing the html, this is my suggested solution: for g in doc.select('div[data-container-game-id]'): gameId = g.get('data-container-game-id') title = g.select_one('a.title') if title: title = ' '.join(w for w in title.text.split() if w) price_official = g.select_one('.price-wrap > div:first-child span.price') price_lowest = g.select_one('.price-wrap > div:first-child+div span.price') if price_official: price_official = price_official.text.replace('$', '').replace('~', '') if price_lowest: price_lowest = price_lowest.text.replace('$', '').replace('~', '') print(', '.join([title, price_official, price_lowest, gameId])) prints Call of Duty: Modern Warfare II (2022), 69.99, 77.05, 153356 Red Dead Redemption 2, 14.85, 13.79, 80011 God of War, 28.12, 22.03, 130187 ELDEN RING, 50.36, 48.10, 119003 Cyberpunk 2077, 29.99, 28.63, 73502 EA SPORTS FIFA 23, 41.99, 39.04, 156474 Warhammer 40,000: Darktide, 39.99, 45.86, 96592 Marvel's Spider-Man Remastered, 30.71, 27.07, 154207 Persona 5 Royal, 37.79, 43.32, 155123 The Callisto Protocol, 59.99, 69.41, 152790 Need for Speed Unbound, 69.99, 42.29, 165013 Days Gone, 15.00, 9.01, 110837 Btw, this might look ok for just four values, but if you have a large amount of details that you want to extract, you might want to consider using a function like this.
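One hedged follow-up to the answer above: several titles contain commas (for example Warhammer 40,000: Darktide), so if the rows are meant for a CSV file it is safer to let the csv module do the quoting instead of joining the fields with ',' by hand. A sketch reusing the same selectors, with the None checks left out for brevity:

import csv

rows = []
for g in doc.select('div[data-container-game-id]'):
    game_id = g.get('data-container-game-id')
    title = ' '.join(g.select_one('a.title').text.split())
    official = g.select_one('.price-wrap > div:first-child span.price').text.strip('~$')
    lowest = g.select_one('.price-wrap > div:first-child+div span.price').text.strip('~$')
    rows.append([title, official, lowest, game_id])

with open('games.csv', 'w', newline='') as f:
    csv.writer(f).writerows(rows)   # quoting of titles that contain commas is handled for us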
How do I combine elements from two loops without issues?
When I execute this code... from bs4 import BeautifulSoup with open("games.html", "r") as page: doc = BeautifulSoup(page, "html.parser") titles = doc.select("a.title") prices = doc.select("span.price-inner") for game_soup in doc.find_all("div", {"class": "game-options-wrapper"}): game_ids = (game_soup.button.get("data-game-id")) for title, price_official, price_lowest in zip(titles, prices[::2], prices[1::2]): print(title.text + ',' + str(price_official.text.replace('$', '').replace('~', '')) + ',' + str( price_lowest.text.replace('$', '').replace('~', ''))) The output is... 153356 80011 130187 119003 73502 156474 96592 154207 155123 152790 165013 110837 Call of Duty: Modern Warfare II (2022),69.99,77.05 Red Dead Redemption 2,14.85,13.79 God of War,28.12,22.03 ELDEN RING,50.36,48.10 Cyberpunk 2077,29.99,28.63 EA SPORTS FIFA 23,41.99,39.04 Warhammer 40,000: Darktide,39.99,45.86 Marvels Spider-Man Remastered,30.71,27.07 Persona 5 Royal,37.79,43.32 The Callisto Protocol,59.99,69.41 Need for Speed Unbound,69.99,42.29 Days Gone,15.00,9.01 But I'm trying to get the value next to the other ones on the same line Expected output: Call of Duty: Modern Warfare II (2022),69.99,77.05,153356 Red Dead Redemption 2,14.85,13.79,80011 ... Even when adding game_ids to print(), it spams the same game id for each line. How can I go about resolving this issue? HTML file: https://jsfiddle.net/m3hqy54x/
[ "I feel like all 3 details (title, price_official, price_lowest) are probably all in a shared container. It would be better to loop through these containers and select the details as sets from each container to make sure the wight prices and titles are being paired up, but I can't tell you how to do that without seeing at least a snippet from (or all of) \"games.html\"....\n\nAnyway, assuming that '110837\\nCall of Duty: Modern Warfare II (2022)' is from the first title here, you can rewrite your last loop as something like:\nfor z in zip(titles, prices[::2], prices[1::2]):\n z, lw = list(z), ''\n for i in len(z):\n if i == 0: # title\n z[0] = ' '.join(w for w in z[0].text.split('\\n', 1)[-1] if w)\n if '\\n' in z[0].text: lw = z[0].text.split('\\n', 1)[0]\n continue\n z[i] = z[i].text.replace('$', '').replace('~', '')\n print(','.join(z+[lw]))\n\n\nAdded EDIT: After seeing the html, this is my suggested solution:\nfor g in doc.select('div[data-container-game-id]'):\n gameId = g.get('data-container-game-id')\n title = g.select_one('a.title')\n if title: title = ' '.join(w for w in title.text.split() if w)\n\n price_official = g.select_one('.price-wrap > div:first-child span.price')\n price_lowest = g.select_one('.price-wrap > div:first-child+div span.price')\n if price_official: \n price_official = price_official.text.replace('$', '').replace('~', '')\n if price_lowest: \n price_lowest = price_lowest.text.replace('$', '').replace('~', '')\n \n print(', '.join([title, price_official, price_lowest, gameId]))\n\nprints\nCall of Duty: Modern Warfare II (2022), 69.99, 77.05, 153356\nRed Dead Redemption 2, 14.85, 13.79, 80011\nGod of War, 28.12, 22.03, 130187\nELDEN RING, 50.36, 48.10, 119003\nCyberpunk 2077, 29.99, 28.63, 73502\nEA SPORTS FIFA 23, 41.99, 39.04, 156474\nWarhammer 40,000: Darktide, 39.99, 45.86, 96592\nMarvel's Spider-Man Remastered, 30.71, 27.07, 154207\nPersona 5 Royal, 37.79, 43.32, 155123\nThe Callisto Protocol, 59.99, 69.41, 152790\nNeed for Speed Unbound, 69.99, 42.29, 165013\nDays Gone, 15.00, 9.01, 110837\n\nBtw, this might look ok for just four values, but if you have a large amount of details that you want to extract, you might want to consider using a function like this.\n" ]
[ 1 ]
[]
[]
[ "beautifulsoup", "loops", "python", "web_scraping" ]
stackoverflow_0074631667_beautifulsoup_loops_python_web_scraping.txt
Q: How to Tokenize block of text as one token in python? Recently I am working on a genome data set which consists of many blocks of genomes. On previous works on natural language processing, I have used sent_tokenize and word_tokenize from nltk to tokenize the sentences and words. But when I use these functions on genome data set, it is not able to tokenize the genomes correctly. The text below shows some part of the genome data set. >NR_004049 1 tattattatacacaatcccggggcgttctatatagttatgtataatgtat atttatattatttatgcctctaactggaacgtaccttgagcatatatgct gtgacccgaaagatggtgaactatacttgatcaggttgaagtcaggggaa accctgatggaagaccgaaacagttctgacgtgcaaatcgattgtcagaa ttgagtataggggcgaaagaccaatcgaaccatctagtagctggttcctt ccgaagtttccctcaggatagctggtgcattttaatattatataaaataa tcttatctggtaaagcgaatgattagaggccttagggtcgaaacgatctt aacctattctcaaactttaaatgggtaagaaccttaactttcttgatatg aagttcaaggttatgatataatgtgcccagtgggccacttttggtaagca gaactggcgctgtgggatgaaccaaacgtaatgttacggtgcccaaataa caact >NR_004048 1 aatgttttatataaattgcagtatgtgtcacccaaaatagcaaaccccat aaccaaccagattattatgatacataatgcttatatgaaactaagacatt tcgcaacatttattttaggtatataaatacatttattgaaggaattgata tatgccagtaaaatggtgtatttttaatttctttcaataaaaacataatt gacattatataaaaatgaattataaaactctaagcggtggatcactcggc tcatgggtcgatgaagaacgcagcaaactgtgcgtcatcgtgtgaactgc aggacacatgaacatcgacattttgaacgcatatcgcagtccatgctgtt atgtactttaattaattttatagtgctgcttggactacatatggttgagg gttgtaagactatgctaattaagttgcttataaatttttataagcatatg gtatattattggataaatataataatttttattcataatattaaaaaata aatgaaaaacattatctcacatttgaatgt >NR_004047 1 atattcaggttcatcgggcttaacctctaagcagtttcacgtactgttta actctctattcagagttcttttcaactttccctcacggtacttgtttact atcggtctcatggttatatttagtgtttagatggagtttaccacccactt agtgctgcactatcaagcaacactgactctttggaaacatcatctagtaa tcattaacgttatacgggcctggcaccctctatgggtaaatggcctcatt taagaaggacttaaatcgctaatttctcatactagaatattgacgctcca tacactgcatctcacatttgccatatagacaaagtgacttagtgctgaac tgtcttctttacggtcgccgctactaagaaaatccttggtagttactttt cctcccctaattaatatgcttaaattcagggggtagtcccatatgagttg >NR_004052 1 When the tokenizer of ntlk is applied on this dataset, each line of text (for example tattattatacacaatcccggggcgttctatatagttatgtataatgtat ) becomes one token which is not correct. and a block of sequences should be considered as one token. For example in this case contents between >NR_004049 1 and >NR_004048 1 should be consider as one token: >NR_004049 1 tattattatacacaatcccggggcgttctatatagttatgtataatgtat atttatattatttatgcctctaactggaacgtaccttgagcatatatgct gtgacccgaaagatggtgaactatacttgatcaggttgaagtcaggggaa accctgatggaagaccgaaacagttctgacgtgcaaatcgattgtcagaa ttgagtataggggcgaaagaccaatcgaaccatctagtagctggttcctt ccgaagtttccctcaggatagctggtgcattttaatattatataaaataa tcttatctggtaaagcgaatgattagaggccttagggtcgaaacgatctt aacctattctcaaactttaaatgggtaagaaccttaactttcttgatatg aagttcaaggttatgatataatgtgcccagtgggccacttttggtaagca gaactggcgctgtgggatgaaccaaacgtaatgttacggtgcccaaataa caact >NR_004048 1 So each block starting with special words such as >NR_004049 1 until the next special character should be considered as one token. The problem here is tokenizing this kind of data set and i dont have any idea how can i correctly tokenize them. I really appreciate answers which helps me to solve this. Update: One way to solve this problem is to append al lines within each block, and then using the nltk tokenizer. for example This means that to append all lines between >NR_004049 1 and >NR_004048 1 to make one string from several lines, so the nltk tokenizer will consider it as one token. Can any one help me how can i append lines within each block? 
A: You just need to concatenate the lines between two ids apparently. There should be no need for nltk or any tokenizer, just a bit of programming ;) patterns = {} with open('data', "r") as f: id = None current = "" for line0 in f: line= line0.rstrip() if line[0] == '>' : # new pattern if len(current)>0: # print("adding "+id+" "+current) patterns[id] = current current = "" # to find the next id: tokens = line.split(" ") id = tokens[0][1:] else: # continuing pattern current = current + line if len(current)>0: patterns[id] = current # print("adding "+id+" "+current) # do whatever with the patterns: for id, pattern in patterns.items(): print(f"{id}\t{pattern}")
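If the file is really FASTA-formatted (which the > headers suggest), the same grouping can also be done with Biopython instead of hand-rolled parsing. A small sketch, assuming biopython is installed and the file parses as FASTA:

from Bio import SeqIO  # pip install biopython

patterns = {}
for record in SeqIO.parse("data", "fasta"):
    patterns[record.id] = str(record.seq)  # one concatenated string per block

for rec_id, seq in patterns.items():
    print(f"{rec_id}\t{seq[:60]}...")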
How to Tokenize block of text as one token in python?
Recently I am working on a genome data set which consists of many blocks of genomes. On previous works on natural language processing, I have used sent_tokenize and word_tokenize from nltk to tokenize the sentences and words. But when I use these functions on genome data set, it is not able to tokenize the genomes correctly. The text below shows some part of the genome data set. >NR_004049 1 tattattatacacaatcccggggcgttctatatagttatgtataatgtat atttatattatttatgcctctaactggaacgtaccttgagcatatatgct gtgacccgaaagatggtgaactatacttgatcaggttgaagtcaggggaa accctgatggaagaccgaaacagttctgacgtgcaaatcgattgtcagaa ttgagtataggggcgaaagaccaatcgaaccatctagtagctggttcctt ccgaagtttccctcaggatagctggtgcattttaatattatataaaataa tcttatctggtaaagcgaatgattagaggccttagggtcgaaacgatctt aacctattctcaaactttaaatgggtaagaaccttaactttcttgatatg aagttcaaggttatgatataatgtgcccagtgggccacttttggtaagca gaactggcgctgtgggatgaaccaaacgtaatgttacggtgcccaaataa caact >NR_004048 1 aatgttttatataaattgcagtatgtgtcacccaaaatagcaaaccccat aaccaaccagattattatgatacataatgcttatatgaaactaagacatt tcgcaacatttattttaggtatataaatacatttattgaaggaattgata tatgccagtaaaatggtgtatttttaatttctttcaataaaaacataatt gacattatataaaaatgaattataaaactctaagcggtggatcactcggc tcatgggtcgatgaagaacgcagcaaactgtgcgtcatcgtgtgaactgc aggacacatgaacatcgacattttgaacgcatatcgcagtccatgctgtt atgtactttaattaattttatagtgctgcttggactacatatggttgagg gttgtaagactatgctaattaagttgcttataaatttttataagcatatg gtatattattggataaatataataatttttattcataatattaaaaaata aatgaaaaacattatctcacatttgaatgt >NR_004047 1 atattcaggttcatcgggcttaacctctaagcagtttcacgtactgttta actctctattcagagttcttttcaactttccctcacggtacttgtttact atcggtctcatggttatatttagtgtttagatggagtttaccacccactt agtgctgcactatcaagcaacactgactctttggaaacatcatctagtaa tcattaacgttatacgggcctggcaccctctatgggtaaatggcctcatt taagaaggacttaaatcgctaatttctcatactagaatattgacgctcca tacactgcatctcacatttgccatatagacaaagtgacttagtgctgaac tgtcttctttacggtcgccgctactaagaaaatccttggtagttactttt cctcccctaattaatatgcttaaattcagggggtagtcccatatgagttg >NR_004052 1 When the tokenizer of ntlk is applied on this dataset, each line of text (for example tattattatacacaatcccggggcgttctatatagttatgtataatgtat ) becomes one token which is not correct. and a block of sequences should be considered as one token. For example in this case contents between >NR_004049 1 and >NR_004048 1 should be consider as one token: >NR_004049 1 tattattatacacaatcccggggcgttctatatagttatgtataatgtat atttatattatttatgcctctaactggaacgtaccttgagcatatatgct gtgacccgaaagatggtgaactatacttgatcaggttgaagtcaggggaa accctgatggaagaccgaaacagttctgacgtgcaaatcgattgtcagaa ttgagtataggggcgaaagaccaatcgaaccatctagtagctggttcctt ccgaagtttccctcaggatagctggtgcattttaatattatataaaataa tcttatctggtaaagcgaatgattagaggccttagggtcgaaacgatctt aacctattctcaaactttaaatgggtaagaaccttaactttcttgatatg aagttcaaggttatgatataatgtgcccagtgggccacttttggtaagca gaactggcgctgtgggatgaaccaaacgtaatgttacggtgcccaaataa caact >NR_004048 1 So each block starting with special words such as >NR_004049 1 until the next special character should be considered as one token. The problem here is tokenizing this kind of data set and i dont have any idea how can i correctly tokenize them. I really appreciate answers which helps me to solve this. Update: One way to solve this problem is to append al lines within each block, and then using the nltk tokenizer. for example This means that to append all lines between >NR_004049 1 and >NR_004048 1 to make one string from several lines, so the nltk tokenizer will consider it as one token. Can any one help me how can i append lines within each block?
[ "You just need to concatenate the lines between two ids apparently. There should be no need for nltk or any tokenizer, just a bit of programming ;)\n\npatterns = {}\nwith open('data', \"r\") as f:\n id = None\n current = \"\"\n for line0 in f:\n line= line0.rstrip()\n if line[0] == '>' : # new pattern\n if len(current)>0:\n# print(\"adding \"+id+\" \"+current)\n patterns[id] = current\n current = \"\"\n # to find the next id:\n tokens = line.split(\" \")\n id = tokens[0][1:]\n else: # continuing pattern\n current = current + line\n if len(current)>0:\n patterns[id] = current\n# print(\"adding \"+id+\" \"+current)\n\n\n# do whatever with the patterns:\nfor id, pattern in patterns.items():\n print(f\"{id}\\t{pattern}\")\n\n" ]
[ 3 ]
[]
[]
[ "nlp", "nltk", "python", "tokenize" ]
stackoverflow_0074623917_nlp_nltk_python_tokenize.txt
Q: Write a function that accepts two strings and returns the indices of all occurrences of second string in the first string as a list Write a function that accepts two strings and returns the indices of all the occurrences of the second string in the first string as a list. If the second string is not present in the first string then it should return -1 def indices(a,b): c=[] if b!="": for i in b: x=b.find(i) a=c.append(x) print(a) else: print(-1) a="karan" b="kumar" indices(a,b) A: Method #1 : Using list comprehension + startswith() This task can be performed using the two functionalities. The startswith function primarily performs the task of getting the starting indices of substring and list comprehension is used to iterate through the whole target string. # Python3 code to demonstrate working of # All occurrences of substring in string # Using list comprehension + startswith() # initializing string test_str = "GeeksforGeeks is best for Geeks" # initializing substring test_sub = "Geeks" # printing original string print("The original string is : " + test_str) # printing substring print("The substring to find : " + test_sub) # using list comprehension + startswith() # All occurrences of substring in string res = [i for i in range(len(test_str)) if test_str.startswith(test_sub, i)] # printing result print("The start indices of the substrings are : " + str(res)) Output : The original string is : GeeksforGeeks is best for Geeks The substring to find : Geeks The start indices of the substrings are : [0, 8, 26] Method #2 : Using re.finditer() The finditer function of the regex library can help us perform the task of finding the occurrences of the substring in the target string and the start function can return the resultant index of each of them. # Python3 code to demonstrate working of # All occurrences of substring in string # Using re.finditer() import re # initializing string test_str = "GeeksforGeeks is best for Geeks" # initializing substring test_sub = "Geeks" # printing original string print("The original string is : " + test_str) # printing substring print("The substring to find : " + test_sub) # using re.finditer() # All occurrences of substring in string res = [i.start() for i in re.finditer(test_sub, test_str)] # printing result print("The start indices of the substrings are : " + str(res)) Output : The original string is : GeeksforGeeks is best for Geeks The substring to find : Geeks The start indices of the substrings are : [0, 8, 26] Method #3 : Using find() and replace() methods # Python3 code to demonstrate working of # All occurrences of substring in string # initializing string test_str = "GeeksforGeeks is best for Geeks" # initializing substring test_sub = "Geeks" # printing original string print("The original string is : " + test_str) # printing substring print("The substring to find : " + test_sub) res=[] while(test_str.find(test_sub)!=-1): res.append(test_str.find(test_sub)) test_str=test_str.replace(test_sub,"*" *len(test_sub),1) # printing result print("The start indices of the substrings are : " + str(res)) Output: The original string is : GeeksforGeeks is best for Geeks The substring to find : Geeks The start indices of the substrings are : [0, 8, 26]
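Folding one of these approaches back into the exact function signature the question asks for (a list of start indices, or -1 when there is no match) could look like this sketch:

def indices(a, b):
    # all start positions of b inside a, or -1 if b never occurs (or is empty)
    if not b:
        return -1
    found = [i for i in range(len(a) - len(b) + 1) if a.startswith(b, i)]
    return found if found else -1

print(indices("karan", "kumar"))                            # -1
print(indices("GeeksforGeeks is best for Geeks", "Geeks"))  # [0, 8, 26]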
Write a function that accepts two strings and returns the indices of all occurrences of second string in the first string as a list
Write a function that accepts two strings and returns the indices of all the occurrences of the second string in the first string as a list. If the second string is not present in the first string then it should return -1 def indices(a,b): c=[] if b!="": for i in b: x=b.find(i) a=c.append(x) print(a) else: print(-1) a="karan" b="kumar" indices(a,b)
[ "Method #1 : Using list comprehension + startswith() This task can be performed using the two functionalities. The startswith function primarily performs the task of getting the starting indices of substring and list comprehension is used to iterate through the whole target string.\n# Python3 code to demonstrate working of\n# All occurrences of substring in string\n# Using list comprehension + startswith()\n\n# initializing string\ntest_str = \"GeeksforGeeks is best for Geeks\"\n\n# initializing substring\ntest_sub = \"Geeks\"\n\n# printing original string\nprint(\"The original string is : \" + test_str)\n\n# printing substring\nprint(\"The substring to find : \" + test_sub)\n\n# using list comprehension + startswith()\n# All occurrences of substring in string\nres = [i for i in range(len(test_str)) if \ntest_str.startswith(test_sub, i)]\n\n# printing result\nprint(\"The start indices of the substrings are : \" + str(res))\n\nOutput :\nThe original string is : GeeksforGeeks is \nbest for Geeks\nThe substring to find : Geeks\nThe start indices of the substrings are : [0, 8, 26]\n\nMethod #2 : Using re.finditer() The finditer function of the regex library can help us perform the task of finding the occurrences of the substring in the target string and the start function can return the resultant index of each of them.\n# Python3 code to demonstrate working of\n# All occurrences of substring in string\n# Using re.finditer()\nimport re\n\n# initializing string\ntest_str = \"GeeksforGeeks is best for Geeks\"\n\n# initializing substring\ntest_sub = \"Geeks\"\n\n# printing original string\nprint(\"The original string is : \" + test_str)\n\n# printing substring\nprint(\"The substring to find : \" + test_sub)\n\n# using re.finditer()\n# All occurrences of substring in string\nres = [i.start() for i in \nre.finditer(test_sub, test_str)]\n\n# printing result\nprint(\"The start indices of the substrings are : \" + str(res))\n\nOutput :\nThe original string is : GeeksforGeeks is \nbest for Geeks\nThe substring to find : Geeks\nThe start indices of the substrings are : [0, 8, 26]\n\nMethod #3 : Using find() and replace() methods\n# Python3 code to demonstrate working of\n# All occurrences of substring in string\n\n# initializing string\ntest_str = \"GeeksforGeeks is best for Geeks\"\n\n# initializing substring\ntest_sub = \"Geeks\"\n\n# printing original string\nprint(\"The original string is : \" + test_str)\n\n# printing substring\nprint(\"The substring to find : \" + test_sub)\nres=[]\nwhile(test_str.find(test_sub)!=-1):\nres.append(test_str.find(test_sub))\n \ntest_str=test_str.replace(test_sub,\"*\"\n*len(test_sub),1)\n\n# printing result\nprint(\"The start indices of the substrings are : \" + str(res))\n\nOutput:\nThe original string is : GeeksforGeeks is \nbest for Geeks\nThe substring to find : Geeks\nThe start indices of the substrings are : [0, 8, 26]\n\n" ]
[ 0 ]
[]
[]
[ "indices", "python", "string" ]
stackoverflow_0074631907_indices_python_string.txt
Q: Inserting data using PyMongo based on a defined data model I have a dataset consisting of 250 rows that looks like the following: In MongoDB Compass, I inserted the first row as follows: db.employees.insertOne([{"employee_id": 412153, "first_name": "Carrol", "last_name": "Dhin", "email": "[email protected]", "managing": [{"manager_id": 412153, "employee_id": 174543}], "department": [{"department_name": "Accounting", "department_budget": 500000}], "laptop": [{"serial_number": "CSS49745", "manufacturer": "Lenovo", "model": "X1 Gen 10", "date_assigned": {$date: 01-15-2022}, "installed_software": ["MS Office", "Adobe Acrobat", "Slack"]}]}) If I wanted to insert all 250 rows into the database using PyMongo in Python, how would I ensure that every row is entered following the format that I used when I inserted it manually in the Mongo shell? A: from pymongo import MongoClient import pandas as pd client = MongoClient('localhost', 27017) db = client.MD collection = db.gammaCorp df = pd.read_csv(' ') #insert CSV name here data = {} for i in df.index: data['employee_id'] = df['employee_id'][i] data['first_name'] = df['first_name'][i] data['last_name'] = df['last_name'][i] data['email'] = df['email'][i] data['managing'] = [{'manager_id': df['employee_id'][i]}, {'employee_id': df['managing'][i]}] data['department'] = [{'department_name': df['department'][i]}, {'department_budget': df['department_budget'][i]}] data['laptop'] = [{'serial_number': df['serial_number'][i]}, {'manufacturer': df['manufacturer'][i]}, {'model': df['model'][i]}, {'date_assigned': df['date_assigned'][i]}, {'installed_software': df['installed_software'][i]}] collection.insert_one(data)
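To keep each inserted document in exactly the same nested shape as the manual insertOne call (and to avoid reusing a single data dict across iterations), a sketch along these lines may be closer to what is needed; the CSV column names and file name here are assumptions and would need to match the real file:

from pymongo import MongoClient
import pandas as pd

client = MongoClient("localhost", 27017)
collection = client["MD"]["employees"]

df = pd.read_csv("employees.csv")  # hypothetical file name

docs = []
for _, row in df.iterrows():
    docs.append({
        "employee_id": int(row["employee_id"]),
        "first_name": row["first_name"],
        "last_name": row["last_name"],
        "email": row["email"],
        "managing": [{"manager_id": int(row["employee_id"]), "employee_id": int(row["managing"])}],
        "department": [{"department_name": row["department"], "department_budget": int(row["department_budget"])}],
        "laptop": [{
            "serial_number": row["serial_number"],
            "manufacturer": row["manufacturer"],
            "model": row["model"],
            "date_assigned": pd.to_datetime(row["date_assigned"]).to_pydatetime(),
            "installed_software": str(row["installed_software"]).split(";"),
        }],
    })

collection.insert_many(docs)  # one round trip instead of 250 insert_one calls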
Inserting data using PyMongo based on a defined data model
I have a dataset consisting of 250 rows that looks like the following: In MongoDB Compass, I inserted the first row as follows: db.employees.insertOne([{"employee_id": 412153, "first_name": "Carrol", "last_name": "Dhin", "email": "[email protected]", "managing": [{"manager_id": 412153, "employee_id": 174543}], "department": [{"department_name": "Accounting", "department_budget": 500000}], "laptop": [{"serial_number": "CSS49745", "manufacturer": "Lenovo", "model": "X1 Gen 10", "date_assigned": {$date: 01-15-2022}, "installed_software": ["MS Office", "Adobe Acrobat", "Slack"]}]}) If I wanted to insert all 250 rows into the database using PyMongo in Python, how would I ensure that every row is entered following the format that I used when I inserted it manually in the Mongo shell?
[ "from pymongo import MongoClient\nimport pandas as pd\n\nclient = MongoClient(‘localhost’, 27017)\ndb = client.MD\ncollection = db.gammaCorp\n\ndf = pd.read_csv(‘ ’) #insert CSV name here\n\ndata = {}\n\nfor i in df.index:\n data['employee_id'] = df['employee_id'][i]\n data['first_name'] = df['first_name'][i]\n data['last_name'] = df['last_name'][i]\n data['email'] = df['email'][i]\n data['managing'] = [{'manager_id': df['employee_id'][i]}, {'employee_id': df['managing'][i]}]\n data['department'] = [{'department_name': df['department'][i]}, {'department_budget': df['department_budget'][i]}]\n data['laptop'] = [{'serial_number': df['serial_number'][i]}, {'manufacturer': df['manufacturer'][i]}, {'model': df['model'][i]}, {'date_assigned': df['date_assigned'][i]}, {'installed_software': df['installed_software'][i]}]\n \n collection.insert_one(data)\n\n" ]
[ 0 ]
[]
[]
[ "mongodb", "pymongo", "python" ]
stackoverflow_0074606136_mongodb_pymongo_python.txt
Q: Difficulties to make the .wait() function work with .stop() on a discord button And thank you in advance to whoever wants to help me. So I created a slash command to create a reminder with 2 buttons to decide what to do next : The call to the command works, the embed too, the buttons too. But after clicking on the buttons I would like them to stop the "await view.wait()" with the "interaction:discord.Interaction.stop()" to complete the for loop and move to the next line of my reader. However I don't think the "interaction:discord.Interaction.stop()" line works and I don't know why : @bot.tree.command(name='reminder') async def reminder(interaction:discord.Interaction): buttonyes=Button(label="YES", style=discord.ButtonStyle.green,custom_id="1") buttonno=Button(label="NO", style=discord.ButtonStyle.red,custom_id="2") view=View() with open("C:/Users/market.csv","r",newline='') as incsv, open(f"C:/Users/market2.csv","w+",newline='') as outcsv: reader = csv.reader(incsv) writer = csv.writer(outcsv) for row in reader: if row[1]=="no" : async def buttonyes_callback(interaction:discord.Interaction): row[1]='yes' writer.writerow(row) await interaction.response.edit_message(content=f"Offer is canceled",embed=None,view=None) interaction:discord.Interaction.stop() async def buttonno_callback(interaction:discord.Interaction): await interaction.response.edit_message(content=f"Offer is still running",embed=None,view=None) interaction:discord.Interaction.stop() buttonyes.callback=buttonyes_callback buttonno.callback=buttonno_callback embed = discord.Embed(title='REMINDER', description=f'Is your {row[2]} over ?', color=65280, timestamp = datetime.datetime.now()) embed.add_field(name='Model', value= row[3], inline=True) embed.add_field(name='Size', value= row[4], inline=True) embed.add_field(name='Price', value= row[5]+f"€", inline=True) view.add_item(buttonyes) view.add_item(buttonno) writer.writerow(row) await interaction.response.send_message(embed=embed,view=view) await view.wait() else: writer.writerow(row) I don't have any error, but the code and the loop are stopped by the "await view.wait()" Thanks in advance A: However I don't think the "interaction:discord.Interaction.stop()" line works and I don't know why This line doesn't make a lot of sense syntactically but I'll gloss over that for now. Interaction.stop() literally doesn't exist, so that line isn't gonna do a whole lot. Not sure what you expect to happen. Docs for Interaction: https://discordpy.readthedocs.io/en/stable/interactions/api.html?highlight=interaction#discord.Interaction You can't just use random functions and hope they exist. Check the docs first. You probably want to call stop() on the View instance instead. You should be getting an error for this though, so you may not have configured logging properly (or this code is never reached). Docs for logging: https://discordpy.readthedocs.io/en/stable/logging.html?highlight=logging buttonyes.callback=buttonyes_callback buttonno.callback=buttonno_callback This is not the way to go for buttons. Don't override attributes in order to attach callbacks. Just subclass View, add Buttons (either with the decorator or using a subclass of Button) and create an instance of it.
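A rough sketch of the subclassed-View approach described above (discord.py 2.x is assumed, and the labels/messages are just carried over from the question):

import discord
from discord.ui import View, button

class ReminderView(View):
    def __init__(self):
        super().__init__(timeout=None)
        self.answer = None  # filled in by whichever button gets pressed

    @button(label="YES", style=discord.ButtonStyle.green)
    async def yes(self, interaction: discord.Interaction, btn: discord.ui.Button):
        self.answer = True
        await interaction.response.edit_message(content="Offer is canceled", embed=None, view=None)
        self.stop()  # this is what releases `await view.wait()` in the command

    @button(label="NO", style=discord.ButtonStyle.red)
    async def no(self, interaction: discord.Interaction, btn: discord.ui.Button):
        self.answer = False
        await interaction.response.edit_message(content="Offer is still running", embed=None, view=None)
        self.stop()

# inside the command, one fresh view per reminder:
# view = ReminderView()
# await interaction.response.send_message(embed=embed, view=view)  # use interaction.followup.send for any later messages
# await view.wait()
# if view.answer: ...  # e.g. mark the row as 'yes' before writing it out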
Difficulties to make the .wait() function work with .stop() on a discord button
And thank you in advance to whoever wants to help me. So I created a slash command to create a reminder with 2 buttons to decide what to do next : The call to the command works, the embed too, the buttons too. But after clicking on the buttons I would like them to stop the "await view.wait()" with the "interaction:discord.Interaction.stop()" to complete the for loop and move to the next line of my reader. However I don't think the "interaction:discord.Interaction.stop()" line works and I don't know why : @bot.tree.command(name='reminder') async def reminder(interaction:discord.Interaction): buttonyes=Button(label="YES", style=discord.ButtonStyle.green,custom_id="1") buttonno=Button(label="NO", style=discord.ButtonStyle.red,custom_id="2") view=View() with open("C:/Users/market.csv","r",newline='') as incsv, open(f"C:/Users/market2.csv","w+",newline='') as outcsv: reader = csv.reader(incsv) writer = csv.writer(outcsv) for row in reader: if row[1]=="no" : async def buttonyes_callback(interaction:discord.Interaction): row[1]='yes' writer.writerow(row) await interaction.response.edit_message(content=f"Offer is canceled",embed=None,view=None) interaction:discord.Interaction.stop() async def buttonno_callback(interaction:discord.Interaction): await interaction.response.edit_message(content=f"Offer is still running",embed=None,view=None) interaction:discord.Interaction.stop() buttonyes.callback=buttonyes_callback buttonno.callback=buttonno_callback embed = discord.Embed(title='REMINDER', description=f'Is your {row[2]} over ?', color=65280, timestamp = datetime.datetime.now()) embed.add_field(name='Model', value= row[3], inline=True) embed.add_field(name='Size', value= row[4], inline=True) embed.add_field(name='Price', value= row[5]+f"€", inline=True) view.add_item(buttonyes) view.add_item(buttonno) writer.writerow(row) await interaction.response.send_message(embed=embed,view=view) await view.wait() else: writer.writerow(row) I don't have any error, but the code and the loop are stopped by the "await view.wait()" Thanks in advance
[ "\nHowever I don't think the \"interaction:discord.Interaction.stop()\" line works and I don't know why\n\nThis line doesn't make a lot of sense syntactically but I'll gloss over that for now. Interaction.stop() literally doesn't exist, so that line isn't gonna do a whole lot. Not sure what you expect to happen.\nDocs for Interaction: https://discordpy.readthedocs.io/en/stable/interactions/api.html?highlight=interaction#discord.Interaction\nYou can't just use random functions and hope they exist. Check the docs first. You probably want to call stop() on the View instance instead.\nYou should be getting an error for this though, so you may not have configured logging properly (or this code is never reached).\nDocs for logging: https://discordpy.readthedocs.io/en/stable/logging.html?highlight=logging\n\nbuttonyes.callback=buttonyes_callback\nbuttonno.callback=buttonno_callback\n\nThis is not the way to go for buttons. Don't override attributes in order to attach callbacks. Just subclass View, add Buttons (either with the decorator or using a subclass of Button) and create an instance of it.\n" ]
[ 0 ]
[]
[]
[ "discord", "discord.py", "python", "wait" ]
stackoverflow_0074629185_discord_discord.py_python_wait.txt
Q: How to plot scikit learn classification report? Is it possible to plot with matplotlib scikit-learn classification report?. Let's assume I print the classification report like this: print '\n*Classification Report:\n', classification_report(y_test, predictions) confusion_matrix_graph = confusion_matrix(y_test, predictions) and I get: Clasification Report: precision recall f1-score support 1 0.62 1.00 0.76 66 2 0.93 0.93 0.93 40 3 0.59 0.97 0.73 67 4 0.47 0.92 0.62 272 5 1.00 0.16 0.28 413 avg / total 0.77 0.57 0.49 858 How can I "plot" the avobe chart?. A: Expanding on Bin's answer: import matplotlib.pyplot as plt import numpy as np def show_values(pc, fmt="%.2f", **kw): ''' Heatmap with text in each cell with matplotlib's pyplot Source: https://stackoverflow.com/a/25074150/395857 By HYRY ''' from itertools import izip pc.update_scalarmappable() ax = pc.get_axes() #ax = pc.axes# FOR LATEST MATPLOTLIB #Use zip BELOW IN PYTHON 3 for p, color, value in izip(pc.get_paths(), pc.get_facecolors(), pc.get_array()): x, y = p.vertices[:-2, :].mean(0) if np.all(color[:3] > 0.5): color = (0.0, 0.0, 0.0) else: color = (1.0, 1.0, 1.0) ax.text(x, y, fmt % value, ha="center", va="center", color=color, **kw) def cm2inch(*tupl): ''' Specify figure size in centimeter in matplotlib Source: https://stackoverflow.com/a/22787457/395857 By gns-ank ''' inch = 2.54 if type(tupl[0]) == tuple: return tuple(i/inch for i in tupl[0]) else: return tuple(i/inch for i in tupl) def heatmap(AUC, title, xlabel, ylabel, xticklabels, yticklabels, figure_width=40, figure_height=20, correct_orientation=False, cmap='RdBu'): ''' Inspired by: - https://stackoverflow.com/a/16124677/395857 - https://stackoverflow.com/a/25074150/395857 ''' # Plot it out fig, ax = plt.subplots() #c = ax.pcolor(AUC, edgecolors='k', linestyle= 'dashed', linewidths=0.2, cmap='RdBu', vmin=0.0, vmax=1.0) c = ax.pcolor(AUC, edgecolors='k', linestyle= 'dashed', linewidths=0.2, cmap=cmap) # put the major ticks at the middle of each cell ax.set_yticks(np.arange(AUC.shape[0]) + 0.5, minor=False) ax.set_xticks(np.arange(AUC.shape[1]) + 0.5, minor=False) # set tick labels #ax.set_xticklabels(np.arange(1,AUC.shape[1]+1), minor=False) ax.set_xticklabels(xticklabels, minor=False) ax.set_yticklabels(yticklabels, minor=False) # set title and x/y labels plt.title(title) plt.xlabel(xlabel) plt.ylabel(ylabel) # Remove last blank column plt.xlim( (0, AUC.shape[1]) ) # Turn off all the ticks ax = plt.gca() for t in ax.xaxis.get_major_ticks(): t.tick1On = False t.tick2On = False for t in ax.yaxis.get_major_ticks(): t.tick1On = False t.tick2On = False # Add color bar plt.colorbar(c) # Add text in each cell show_values(c) # Proper orientation (origin at the top left instead of bottom left) if correct_orientation: ax.invert_yaxis() ax.xaxis.tick_top() # resize fig = plt.gcf() #fig.set_size_inches(cm2inch(40, 20)) #fig.set_size_inches(cm2inch(40*4, 20*4)) fig.set_size_inches(cm2inch(figure_width, figure_height)) def plot_classification_report(classification_report, title='Classification report ', cmap='RdBu'): ''' Plot scikit-learn classification report. 
Extension based on https://stackoverflow.com/a/31689645/395857 ''' lines = classification_report.split('\n') classes = [] plotMat = [] support = [] class_names = [] for line in lines[2 : (len(lines) - 2)]: t = line.strip().split() if len(t) < 2: continue classes.append(t[0]) v = [float(x) for x in t[1: len(t) - 1]] support.append(int(t[-1])) class_names.append(t[0]) print(v) plotMat.append(v) print('plotMat: {0}'.format(plotMat)) print('support: {0}'.format(support)) xlabel = 'Metrics' ylabel = 'Classes' xticklabels = ['Precision', 'Recall', 'F1-score'] yticklabels = ['{0} ({1})'.format(class_names[idx], sup) for idx, sup in enumerate(support)] figure_width = 25 figure_height = len(class_names) + 7 correct_orientation = False heatmap(np.array(plotMat), title, xlabel, ylabel, xticklabels, yticklabels, figure_width, figure_height, correct_orientation, cmap=cmap) def main(): sampleClassificationReport = """ precision recall f1-score support Acacia 0.62 1.00 0.76 66 Blossom 0.93 0.93 0.93 40 Camellia 0.59 0.97 0.73 67 Daisy 0.47 0.92 0.62 272 Echium 1.00 0.16 0.28 413 avg / total 0.77 0.57 0.49 858""" plot_classification_report(sampleClassificationReport) plt.savefig('test_plot_classif_report.png', dpi=200, format='png', bbox_inches='tight') plt.close() if __name__ == "__main__": main() #cProfile.run('main()') # if you want to do some profiling outputs: Example with more classes (~40): A: No string processing + sns.heatmap The following solution uses the output_dict=True option in classification_report to get a dictionary and then a heat map is drawn using seaborn to the dataframe created from the dictionary. import numpy as np import seaborn as sns from sklearn.metrics import classification_report import pandas as pd Generating data. Classes: A,B,C,D,E,F,G,H,I true = np.random.randint(0, 10, size=100) pred = np.random.randint(0, 10, size=100) labels = np.arange(10) target_names = list("ABCDEFGHI") Call classification_report with output_dict=True clf_report = classification_report(true, pred, labels=labels, target_names=target_names, output_dict=True) Create a dataframe from the dictionary and plot a heatmap of it. # .iloc[:-1, :] to exclude support sns.heatmap(pd.DataFrame(clf_report).iloc[:-1, :].T, annot=True) A: I just wrote a function plot_classification_report() for this purpose. Hope it helps. This function takes out put of classification_report function as an argument and plot the scores. Here is the function. def plot_classification_report(cr, title='Classification report ', with_avg_total=False, cmap=plt.cm.Blues): lines = cr.split('\n') classes = [] plotMat = [] for line in lines[2 : (len(lines) - 3)]: #print(line) t = line.split() # print(t) classes.append(t[0]) v = [float(x) for x in t[1: len(t) - 1]] print(v) plotMat.append(v) if with_avg_total: aveTotal = lines[len(lines) - 1].split() classes.append('avg/total') vAveTotal = [float(x) for x in t[1:len(aveTotal) - 1]] plotMat.append(vAveTotal) plt.imshow(plotMat, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() x_tick_marks = np.arange(3) y_tick_marks = np.arange(len(classes)) plt.xticks(x_tick_marks, ['precision', 'recall', 'f1-score'], rotation=45) plt.yticks(y_tick_marks, classes) plt.tight_layout() plt.ylabel('Classes') plt.xlabel('Measures') For the example classification_report provided by you. Here are the code and output. 
sampleClassificationReport = """ precision recall f1-score support 1 0.62 1.00 0.76 66 2 0.93 0.93 0.93 40 3 0.59 0.97 0.73 67 4 0.47 0.92 0.62 272 5 1.00 0.16 0.28 413 avg / total 0.77 0.57 0.49 858""" plot_classification_report(sampleClassificationReport) Here is how to use it with sklearn classification_report output: from sklearn.metrics import classification_report classificationReport = classification_report(y_true, y_pred, target_names=target_names) plot_classification_report(classificationReport) With this function, you can also add the "avg / total" result to the plot. To use it just add an argument with_avg_total like this: plot_classification_report(classificationReport, with_avg_total=True) A: My solution is to use the python package, Yellowbrick. Yellowbrick in a nutshell combines scikit-learn with matplotlib to produce visualizations for your models. In a few lines you can do what was suggested above. http://www.scikit-yb.org/en/latest/api/classifier/classification_report.html from sklearn.naive_bayes import GaussianNB from yellowbrick.classifier import ClassificationReport # Instantiate the classification model and visualizer bayes = GaussianNB() visualizer = ClassificationReport(bayes, classes=classes, support=True) visualizer.fit(X_train, y_train) # Fit the visualizer and the model visualizer.score(X_test, y_test) # Evaluate the model on the test data visualizer.show() # Draw/show the data A: As for those asking how to make this work with the latest version of the classification_report(y_test, y_pred), you have to change the -2 to -4 in plot_classification_report() method in the accepted answer code of this thread. I could not add this as a comment on the answer because my account doesn't have enough reputation. You need to change for line in lines[2 : (len(lines) - 2)]: to for line in lines[2 : (len(lines) - 4)]: or copy this edited version: import matplotlib.pyplot as plt import numpy as np def show_values(pc, fmt="%.2f", **kw): ''' Heatmap with text in each cell with matplotlib's pyplot Source: https://stackoverflow.com/a/25074150/395857 By HYRY ''' pc.update_scalarmappable() ax = pc.axes #ax = pc.axes# FOR LATEST MATPLOTLIB #Use zip BELOW IN PYTHON 3 for p, color, value in zip(pc.get_paths(), pc.get_facecolors(), pc.get_array()): x, y = p.vertices[:-2, :].mean(0) if np.all(color[:3] > 0.5): color = (0.0, 0.0, 0.0) else: color = (1.0, 1.0, 1.0) ax.text(x, y, fmt % value, ha="center", va="center", color=color, **kw) def cm2inch(*tupl): ''' Specify figure size in centimeter in matplotlib Source: https://stackoverflow.com/a/22787457/395857 By gns-ank ''' inch = 2.54 if type(tupl[0]) == tuple: return tuple(i/inch for i in tupl[0]) else: return tuple(i/inch for i in tupl) def heatmap(AUC, title, xlabel, ylabel, xticklabels, yticklabels, figure_width=40, figure_height=20, correct_orientation=False, cmap='RdBu'): ''' Inspired by: - https://stackoverflow.com/a/16124677/395857 - https://stackoverflow.com/a/25074150/395857 ''' # Plot it out fig, ax = plt.subplots() #c = ax.pcolor(AUC, edgecolors='k', linestyle= 'dashed', linewidths=0.2, cmap='RdBu', vmin=0.0, vmax=1.0) c = ax.pcolor(AUC, edgecolors='k', linestyle= 'dashed', linewidths=0.2, cmap=cmap) # put the major ticks at the middle of each cell ax.set_yticks(np.arange(AUC.shape[0]) + 0.5, minor=False) ax.set_xticks(np.arange(AUC.shape[1]) + 0.5, minor=False) # set tick labels #ax.set_xticklabels(np.arange(1,AUC.shape[1]+1), minor=False) ax.set_xticklabels(xticklabels, minor=False) ax.set_yticklabels(yticklabels, minor=False) 
# set title and x/y labels plt.title(title) plt.xlabel(xlabel) plt.ylabel(ylabel) # Remove last blank column plt.xlim( (0, AUC.shape[1]) ) # Turn off all the ticks ax = plt.gca() for t in ax.xaxis.get_major_ticks(): t.tick1On = False t.tick2On = False for t in ax.yaxis.get_major_ticks(): t.tick1On = False t.tick2On = False # Add color bar plt.colorbar(c) # Add text in each cell show_values(c) # Proper orientation (origin at the top left instead of bottom left) if correct_orientation: ax.invert_yaxis() ax.xaxis.tick_top() # resize fig = plt.gcf() #fig.set_size_inches(cm2inch(40, 20)) #fig.set_size_inches(cm2inch(40*4, 20*4)) fig.set_size_inches(cm2inch(figure_width, figure_height)) def plot_classification_report(classification_report, title='Classification report ', cmap='RdBu'): ''' Plot scikit-learn classification report. Extension based on https://stackoverflow.com/a/31689645/395857 ''' lines = classification_report.split('\n') classes = [] plotMat = [] support = [] class_names = [] for line in lines[2 : (len(lines) - 4)]: t = line.strip().split() if len(t) < 2: continue classes.append(t[0]) v = [float(x) for x in t[1: len(t) - 1]] support.append(int(t[-1])) class_names.append(t[0]) print(v) plotMat.append(v) print('plotMat: {0}'.format(plotMat)) print('support: {0}'.format(support)) xlabel = 'Metrics' ylabel = 'Classes' xticklabels = ['Precision', 'Recall', 'F1-score'] yticklabels = ['{0} ({1})'.format(class_names[idx], sup) for idx, sup in enumerate(support)] figure_width = 25 figure_height = len(class_names) + 7 correct_orientation = False heatmap(np.array(plotMat), title, xlabel, ylabel, xticklabels, yticklabels, figure_width, figure_height, correct_orientation, cmap=cmap) def main(): # OLD # sampleClassificationReport = """ precision recall f1-score support # # Acacia 0.62 1.00 0.76 66 # Blossom 0.93 0.93 0.93 40 # Camellia 0.59 0.97 0.73 67 # Daisy 0.47 0.92 0.62 272 # Echium 1.00 0.16 0.28 413 # # avg / total 0.77 0.57 0.49 858""" # NEW sampleClassificationReport = """ precision recall f1-score support 1 1.00 0.33 0.50 9 2 0.50 1.00 0.67 9 3 0.86 0.67 0.75 9 4 0.90 1.00 0.95 9 5 0.67 0.89 0.76 9 6 1.00 1.00 1.00 9 7 1.00 1.00 1.00 9 8 0.90 1.00 0.95 9 9 0.86 0.67 0.75 9 10 1.00 0.78 0.88 9 11 1.00 0.89 0.94 9 12 0.90 1.00 0.95 9 13 1.00 0.56 0.71 9 14 1.00 1.00 1.00 9 15 0.60 0.67 0.63 9 16 1.00 0.56 0.71 9 17 0.75 0.67 0.71 9 18 0.80 0.89 0.84 9 19 1.00 1.00 1.00 9 20 1.00 0.78 0.88 9 21 1.00 1.00 1.00 9 22 1.00 1.00 1.00 9 23 0.27 0.44 0.33 9 24 0.60 1.00 0.75 9 25 0.56 1.00 0.72 9 26 0.18 0.22 0.20 9 27 0.82 1.00 0.90 9 28 0.00 0.00 0.00 9 29 0.82 1.00 0.90 9 30 0.62 0.89 0.73 9 31 1.00 0.44 0.62 9 32 1.00 0.78 0.88 9 33 0.86 0.67 0.75 9 34 0.64 1.00 0.78 9 35 1.00 0.33 0.50 9 36 1.00 0.89 0.94 9 37 0.50 0.44 0.47 9 38 0.69 1.00 0.82 9 39 1.00 0.78 0.88 9 40 0.67 0.44 0.53 9 accuracy 0.77 360 macro avg 0.80 0.77 0.76 360 weighted avg 0.80 0.77 0.76 360 """ plot_classification_report(sampleClassificationReport) plt.savefig('test_plot_classif_report.png', dpi=200, format='png', bbox_inches='tight') plt.close() if __name__ == "__main__": main() #cProfile.run('main()') # if you want to do some profiling A: Here you can get the plot same as Franck Dernoncourt's, but with much shorter code (can fit into a single function). 
import matplotlib.pyplot as plt import numpy as np import itertools def plot_classification_report(classificationReport, title='Classification report', cmap='RdBu'): classificationReport = classificationReport.replace('\n\n', '\n') classificationReport = classificationReport.replace(' / ', '/') lines = classificationReport.split('\n') classes, plotMat, support, class_names = [], [], [], [] for line in lines[1:]: # if you don't want avg/total result, then change [1:] into [1:-1] t = line.strip().split() if len(t) < 2: continue classes.append(t[0]) v = [float(x) for x in t[1: len(t) - 1]] support.append(int(t[-1])) class_names.append(t[0]) plotMat.append(v) plotMat = np.array(plotMat) xticklabels = ['Precision', 'Recall', 'F1-score'] yticklabels = ['{0} ({1})'.format(class_names[idx], sup) for idx, sup in enumerate(support)] plt.imshow(plotMat, interpolation='nearest', cmap=cmap, aspect='auto') plt.title(title) plt.colorbar() plt.xticks(np.arange(3), xticklabels, rotation=45) plt.yticks(np.arange(len(classes)), yticklabels) upper_thresh = plotMat.min() + (plotMat.max() - plotMat.min()) / 10 * 8 lower_thresh = plotMat.min() + (plotMat.max() - plotMat.min()) / 10 * 2 for i, j in itertools.product(range(plotMat.shape[0]), range(plotMat.shape[1])): plt.text(j, i, format(plotMat[i, j], '.2f'), horizontalalignment="center", color="white" if (plotMat[i, j] > upper_thresh or plotMat[i, j] < lower_thresh) else "black") plt.ylabel('Metrics') plt.xlabel('Classes') plt.tight_layout() def main(): sampleClassificationReport = """ precision recall f1-score support Acacia 0.62 1.00 0.76 66 Blossom 0.93 0.93 0.93 40 Camellia 0.59 0.97 0.73 67 Daisy 0.47 0.92 0.62 272 Echium 1.00 0.16 0.28 413 avg / total 0.77 0.57 0.49 858""" plot_classification_report(sampleClassificationReport) plt.show() plt.close() if __name__ == '__main__': main() A: This is my simple solution, using seaborn heatmap import seaborn as sns import numpy as np from sklearn.metrics import precision_recall_fscore_support import matplotlib.pyplot as plt y = np.random.randint(low=0, high=10, size=100) y_p = np.random.randint(low=0, high=10, size=100) def plot_classification_report(y_tru, y_prd, figsize=(10, 10), ax=None): plt.figure(figsize=figsize) xticks = ['precision', 'recall', 'f1-score', 'support'] yticks = list(np.unique(y_tru)) yticks += ['avg'] rep = np.array(precision_recall_fscore_support(y_tru, y_prd)).T avg = np.mean(rep, axis=0) avg[-1] = np.sum(rep[:, -1]) rep = np.insert(rep, rep.shape[0], avg, axis=0) sns.heatmap(rep, annot=True, cbar=False, xticklabels=xticks, yticklabels=yticks, ax=ax) plot_classification_report(y, y_p) This is how the plot will look like A: This works for me, pieced it together from the top answer above, also, i cannot comment but THANKS all for this thread, it helped a LOT! 
def plot_classification_report(cr, title='Classification report ', with_avg_total=False, cmap=plt.cm.Blues): lines = cr.split('\n') classes = [] plotMat = [] for line in lines[2 : (len(lines) - 6)]: t = line.split() classes.append(t[0]) v = [float(x) for x in t[1: len(t) - 1]] plotMat.append(v) if with_avg_total: aveTotal = lines[len(lines) - 1].split() classes.append('avg/total') vAveTotal = [float(x) for x in t[1:len(aveTotal) - 1]] plotMat.append(vAveTotal) plt.figure(figsize=(12,48)) #plt.imshow(plotMat, interpolation='nearest', cmap=cmap) THIS also works but the scale is not good and neither are the colors for many classes(200) #plt.colorbar() plt.title(title) x_tick_marks = np.arange(3) y_tick_marks = np.arange(len(classes)) plt.xticks(x_tick_marks, ['precision', 'recall', 'f1-score'], rotation=45) plt.yticks(y_tick_marks, classes) plt.tight_layout() plt.ylabel('Classes') plt.xlabel('Measures') import seaborn as sns sns.heatmap(plotMat, annot=True) After this, make sure class labels don't contain any space due to the splits reportstr = classification_report(true_classes, y_pred,target_names=class_labels_no_spaces) plot_classification_report(reportstr)
**kwargs : attributes of classification_report class of sklearn Returns ------- fig : Matplotlib.pyplot.Figure Figure from matplotlib ax : Matplotlib.pyplot.Axe Axe object from matplotlib """ fig, ax = plt.subplots(figsize=figsize, dpi=dpi) clf_report = classification_report(y_test, y_pred, output_dict=True, **kwargs) keys_to_plot = [key for key in clf_report.keys() if key not in ('accuracy', 'macro avg', 'weighted avg')] df = pd.DataFrame(clf_report, columns=keys_to_plot).T #the following line ensures that dataframe are sorted from the majority classes to the minority classes df.sort_values(by=['support'], inplace=True) #first, let's plot the heatmap by masking the 'support' column rows, cols = df.shape mask = np.zeros(df.shape) mask[:,cols-1] = True ax = sns.heatmap(df, mask=mask, annot=True, cmap="YlGn", fmt='.3g', vmin=0.0, vmax=1.0, linewidths=2, linecolor='white' ) #then, let's add the support column by normalizing the colors in this column mask = np.zeros(df.shape) mask[:,:cols-1] = True ax = sns.heatmap(df, mask=mask, annot=True, cmap="YlGn", cbar=False, linewidths=2, linecolor='white', fmt='.0f', vmin=df['support'].min(), vmax=df['support'].sum(), norm=mpl.colors.Normalize(vmin=df['support'].min(), vmax=df['support'].sum()) ) plt.title(title) plt.xticks(rotation = 45) plt.yticks(rotation = 360) if (save_fig_path != None): path = pathlib.Path(save_fig_path) path.parent.mkdir(parents=True, exist_ok=True) fig.savefig(save_fig_path) return fig, ax Syntax - Binary Classification fig, ax = plot_classification_report(y_test, y_pred, title='Random Forest Classification Report', figsize=(8, 6), dpi=70, target_names=["barren","mineralized"], save_fig_path = "dir1/dir2/classificationreport_plot.png") Syntax - Multiclass Classification fig, ax = plot_classification_report(y_test, y_pred, title='Random Forest Classification Report - Multiclass', figsize=(8, 6), dpi=70, target_names=["class1", "class2", "class3", "class4"], save_fig_path = "multi_dir1/multi_dir2/classificationreport_plot.png") A: If you just want to plot the classification report as a bar chart in a Jupyter notebook, you can do the following. # Assuming that classification_report, y_test and predictions are in scope... import pandas as pd # Build a DataFrame from the classification_report output_dict. report_data = [] for label, metrics in classification_report(y_test, predictions, output_dict=True).items(): metrics['label'] = label report_data.append(metrics) report_df = pd.DataFrame( report_data, columns=['label', 'precision', 'recall', 'f1-score', 'support'] ) # Plot as a bar chart. report_df.plot(y=['precision', 'recall', 'f1-score'], x='label', kind='bar') One issue with this visualisation is that imbalanced classes are not obvious, but are important in interpreting the results. One way to represent this is to add a version of the label that includes the number of samples (i.e. the support): # Add a column to the DataFrame. report_df['labelsupport'] = [f'{label} (n={support})' for label, support in zip(report_df.label, report_df.support)] # Plot the chart the same way, but use `labelsupport` as the x-axis. report_df.plot(y=['precision', 'recall', 'f1-score'], x='labelsupport', kind='bar') A: It was really useful for my Franck Dernoncourt and Bin's answer, but I had two problems. First, when I tried to use it with classes like "No hit" or a name with space inside, the plot failed. And the other problem was to use this functions with MatPlotlib 3.* and scikitLearn-0.22.* versions. 
So I did some little changes: import matplotlib.pyplot as plt import numpy as np def show_values(pc, fmt="%.2f", **kw): ''' Heatmap with text in each cell with matplotlib's pyplot Source: https://stackoverflow.com/a/25074150/395857 By HYRY ''' pc.update_scalarmappable() ax = pc.axes #ax = pc.axes# FOR LATEST MATPLOTLIB #Use zip BELOW IN PYTHON 3 for p, color, value in zip(pc.get_paths(), pc.get_facecolors(), pc.get_array()): x, y = p.vertices[:-2, :].mean(0) if np.all(color[:3] > 0.5): color = (0.0, 0.0, 0.0) else: color = (1.0, 1.0, 1.0) ax.text(x, y, fmt % value, ha="center", va="center", color=color, **kw) def cm2inch(*tupl): ''' Specify figure size in centimeter in matplotlib Source: https://stackoverflow.com/a/22787457/395857 By gns-ank ''' inch = 2.54 if type(tupl[0]) == tuple: return tuple(i/inch for i in tupl[0]) else: return tuple(i/inch for i in tupl) def heatmap(AUC, title, xlabel, ylabel, xticklabels, yticklabels, figure_width=40, figure_height=20, correct_orientation=False, cmap='RdBu'): ''' Inspired by: - https://stackoverflow.com/a/16124677/395857 - https://stackoverflow.com/a/25074150/395857 ''' # Plot it out fig, ax = plt.subplots() #c = ax.pcolor(AUC, edgecolors='k', linestyle= 'dashed', linewidths=0.2, cmap='RdBu', vmin=0.0, vmax=1.0) c = ax.pcolor(AUC, edgecolors='k', linestyle= 'dashed', linewidths=0.2, cmap=cmap, vmin=0.0, vmax=1.0) # put the major ticks at the middle of each cell ax.set_yticks(np.arange(AUC.shape[0]) + 0.5, minor=False) ax.set_xticks(np.arange(AUC.shape[1]) + 0.5, minor=False) # set tick labels #ax.set_xticklabels(np.arange(1,AUC.shape[1]+1), minor=False) ax.set_xticklabels(xticklabels, minor=False) ax.set_yticklabels(yticklabels, minor=False) # set title and x/y labels plt.title(title, y=1.25) plt.xlabel(xlabel) plt.ylabel(ylabel) # Remove last blank column plt.xlim( (0, AUC.shape[1]) ) # Turn off all the ticks ax = plt.gca() for t in ax.xaxis.get_major_ticks(): t.tick1line.set_visible(False) t.tick2line.set_visible(False) for t in ax.yaxis.get_major_ticks(): t.tick1line.set_visible(False) t.tick2line.set_visible(False) # Add color bar plt.colorbar(c) # Add text in each cell show_values(c) # Proper orientation (origin at the top left instead of bottom left) if correct_orientation: ax.invert_yaxis() ax.xaxis.tick_top() # resize fig = plt.gcf() #fig.set_size_inches(cm2inch(40, 20)) #fig.set_size_inches(cm2inch(40*4, 20*4)) fig.set_size_inches(cm2inch(figure_width, figure_height)) def plot_classification_report(classification_report, number_of_classes=2, title='Classification report ', cmap='RdYlGn'): ''' Plot scikit-learn classification report. 
Extension based on https://stackoverflow.com/a/31689645/395857 ''' lines = classification_report.split('\n') #drop initial lines lines = lines[2:] classes = [] plotMat = [] support = [] class_names = [] for line in lines[: number_of_classes]: t = list(filter(None, line.strip().split(' '))) if len(t) < 4: continue classes.append(t[0]) v = [float(x) for x in t[1: len(t) - 1]] support.append(int(t[-1])) class_names.append(t[0]) plotMat.append(v) xlabel = 'Metrics' ylabel = 'Classes' xticklabels = ['Precision', 'Recall', 'F1-score'] yticklabels = ['{0} ({1})'.format(class_names[idx], sup) for idx, sup in enumerate(support)] figure_width = 10 figure_height = len(class_names) + 3 correct_orientation = True heatmap(np.array(plotMat), title, xlabel, ylabel, xticklabels, yticklabels, figure_width, figure_height, correct_orientation, cmap=cmap) plt.show() A: You can use sklearn-evaluation to plot sklearn's classification report (tested it with version 0.8.2). from sklearn import datasets from sklearn.ensemble import RandomForestClassifier from sklearn.linear_model import LogisticRegression from sklearn.model_selection import train_test_split from sklearn_evaluation import plot X, y = datasets.make_classification(200, 10, n_informative=5, class_sep=0.65) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3) y_pred_rf = RandomForestClassifier().fit(X_train, y_train).predict(X_test) y_pred_lr = LogisticRegression().fit(X_train, y_train).predict(X_test) target_names = ["Not spam", "Spam"] cr_rf = plot.ClassificationReport.from_raw_data( y_test, y_pred_rf, target_names=target_names ) cr_lr = plot.ClassificationReport.from_raw_data( y_test, y_pred_lr, target_names=target_names ) # display one of the classification reports cr_rf # compare both reports cr_rf + cr_lr # how better the random forest is? cr_rf - cr_lr
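To tie the fragments above together, here is one minimal end-to-end sketch of the output_dict + seaborn route; the dataset and model are placeholders so the example runs standalone:

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
predictions = LogisticRegression(max_iter=1000).fit(X_train, y_train).predict(X_test)

report = classification_report(y_test, predictions, output_dict=True)
df = pd.DataFrame(report).iloc[:-1, :].T   # drop the support row; classes become rows

sns.heatmap(df, annot=True, vmin=0.0, vmax=1.0, cmap='RdYlGn')
plt.title('Classification report')
plt.tight_layout()
plt.savefig('classification_report.png', dpi=200)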
How to plot scikit learn classification report?
Is it possible to plot with matplotlib scikit-learn classification report?. Let's assume I print the classification report like this: print '\n*Classification Report:\n', classification_report(y_test, predictions) confusion_matrix_graph = confusion_matrix(y_test, predictions) and I get: Clasification Report: precision recall f1-score support 1 0.62 1.00 0.76 66 2 0.93 0.93 0.93 40 3 0.59 0.97 0.73 67 4 0.47 0.92 0.62 272 5 1.00 0.16 0.28 413 avg / total 0.77 0.57 0.49 858 How can I "plot" the avobe chart?.
[ "Expanding on Bin's answer:\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef show_values(pc, fmt=\"%.2f\", **kw):\n '''\n Heatmap with text in each cell with matplotlib's pyplot\n Source: https://stackoverflow.com/a/25074150/395857 \n By HYRY\n '''\n from itertools import izip\n pc.update_scalarmappable()\n ax = pc.get_axes()\n #ax = pc.axes# FOR LATEST MATPLOTLIB\n #Use zip BELOW IN PYTHON 3\n for p, color, value in izip(pc.get_paths(), pc.get_facecolors(), pc.get_array()):\n x, y = p.vertices[:-2, :].mean(0)\n if np.all(color[:3] > 0.5):\n color = (0.0, 0.0, 0.0)\n else:\n color = (1.0, 1.0, 1.0)\n ax.text(x, y, fmt % value, ha=\"center\", va=\"center\", color=color, **kw)\n\n\ndef cm2inch(*tupl):\n '''\n Specify figure size in centimeter in matplotlib\n Source: https://stackoverflow.com/a/22787457/395857\n By gns-ank\n '''\n inch = 2.54\n if type(tupl[0]) == tuple:\n return tuple(i/inch for i in tupl[0])\n else:\n return tuple(i/inch for i in tupl)\n\n\ndef heatmap(AUC, title, xlabel, ylabel, xticklabels, yticklabels, figure_width=40, figure_height=20, correct_orientation=False, cmap='RdBu'):\n '''\n Inspired by:\n - https://stackoverflow.com/a/16124677/395857 \n - https://stackoverflow.com/a/25074150/395857\n '''\n\n # Plot it out\n fig, ax = plt.subplots() \n #c = ax.pcolor(AUC, edgecolors='k', linestyle= 'dashed', linewidths=0.2, cmap='RdBu', vmin=0.0, vmax=1.0)\n c = ax.pcolor(AUC, edgecolors='k', linestyle= 'dashed', linewidths=0.2, cmap=cmap)\n\n # put the major ticks at the middle of each cell\n ax.set_yticks(np.arange(AUC.shape[0]) + 0.5, minor=False)\n ax.set_xticks(np.arange(AUC.shape[1]) + 0.5, minor=False)\n\n # set tick labels\n #ax.set_xticklabels(np.arange(1,AUC.shape[1]+1), minor=False)\n ax.set_xticklabels(xticklabels, minor=False)\n ax.set_yticklabels(yticklabels, minor=False)\n\n # set title and x/y labels\n plt.title(title)\n plt.xlabel(xlabel)\n plt.ylabel(ylabel) \n\n # Remove last blank column\n plt.xlim( (0, AUC.shape[1]) )\n\n # Turn off all the ticks\n ax = plt.gca() \n for t in ax.xaxis.get_major_ticks():\n t.tick1On = False\n t.tick2On = False\n for t in ax.yaxis.get_major_ticks():\n t.tick1On = False\n t.tick2On = False\n\n # Add color bar\n plt.colorbar(c)\n\n # Add text in each cell \n show_values(c)\n\n # Proper orientation (origin at the top left instead of bottom left)\n if correct_orientation:\n ax.invert_yaxis()\n ax.xaxis.tick_top() \n\n # resize \n fig = plt.gcf()\n #fig.set_size_inches(cm2inch(40, 20))\n #fig.set_size_inches(cm2inch(40*4, 20*4))\n fig.set_size_inches(cm2inch(figure_width, figure_height))\n\n\n\ndef plot_classification_report(classification_report, title='Classification report ', cmap='RdBu'):\n '''\n Plot scikit-learn classification report.\n Extension based on https://stackoverflow.com/a/31689645/395857 \n '''\n lines = classification_report.split('\\n')\n\n classes = []\n plotMat = []\n support = []\n class_names = []\n for line in lines[2 : (len(lines) - 2)]:\n t = line.strip().split()\n if len(t) < 2: continue\n classes.append(t[0])\n v = [float(x) for x in t[1: len(t) - 1]]\n support.append(int(t[-1]))\n class_names.append(t[0])\n print(v)\n plotMat.append(v)\n\n print('plotMat: {0}'.format(plotMat))\n print('support: {0}'.format(support))\n\n xlabel = 'Metrics'\n ylabel = 'Classes'\n xticklabels = ['Precision', 'Recall', 'F1-score']\n yticklabels = ['{0} ({1})'.format(class_names[idx], sup) for idx, sup in enumerate(support)]\n figure_width = 25\n figure_height = len(class_names) + 7\n correct_orientation = 
False\n heatmap(np.array(plotMat), title, xlabel, ylabel, xticklabels, yticklabels, figure_width, figure_height, correct_orientation, cmap=cmap)\n\n\ndef main():\n sampleClassificationReport = \"\"\" precision recall f1-score support\n\n Acacia 0.62 1.00 0.76 66\n Blossom 0.93 0.93 0.93 40\n Camellia 0.59 0.97 0.73 67\n Daisy 0.47 0.92 0.62 272\n Echium 1.00 0.16 0.28 413\n\n avg / total 0.77 0.57 0.49 858\"\"\"\n\n\n plot_classification_report(sampleClassificationReport)\n plt.savefig('test_plot_classif_report.png', dpi=200, format='png', bbox_inches='tight')\n plt.close()\n\nif __name__ == \"__main__\":\n main()\n #cProfile.run('main()') # if you want to do some profiling\n\noutputs:\n\nExample with more classes (~40):\n\n", "No string processing + sns.heatmap\nThe following solution uses the output_dict=True option in classification_report to get a dictionary and then a heat map is drawn using seaborn to the dataframe created from the dictionary.\n\nimport numpy as np\nimport seaborn as sns\nfrom sklearn.metrics import classification_report\nimport pandas as pd\n\nGenerating data. Classes: A,B,C,D,E,F,G,H,I\ntrue = np.random.randint(0, 10, size=100)\npred = np.random.randint(0, 10, size=100)\nlabels = np.arange(10)\ntarget_names = list(\"ABCDEFGHI\")\n\nCall classification_report with output_dict=True\nclf_report = classification_report(true,\n pred,\n labels=labels,\n target_names=target_names,\n output_dict=True)\n\nCreate a dataframe from the dictionary and plot a heatmap of it. \n# .iloc[:-1, :] to exclude support\nsns.heatmap(pd.DataFrame(clf_report).iloc[:-1, :].T, annot=True)\n\n\n", "I just wrote a function plot_classification_report() for this purpose. Hope it helps.\nThis function takes out put of classification_report function as an argument and plot the scores. Here is the function. \ndef plot_classification_report(cr, title='Classification report ', with_avg_total=False, cmap=plt.cm.Blues):\n\n lines = cr.split('\\n')\n\n classes = []\n plotMat = []\n for line in lines[2 : (len(lines) - 3)]:\n #print(line)\n t = line.split()\n # print(t)\n classes.append(t[0])\n v = [float(x) for x in t[1: len(t) - 1]]\n print(v)\n plotMat.append(v)\n\n if with_avg_total:\n aveTotal = lines[len(lines) - 1].split()\n classes.append('avg/total')\n vAveTotal = [float(x) for x in t[1:len(aveTotal) - 1]]\n plotMat.append(vAveTotal)\n\n\n plt.imshow(plotMat, interpolation='nearest', cmap=cmap)\n plt.title(title)\n plt.colorbar()\n x_tick_marks = np.arange(3)\n y_tick_marks = np.arange(len(classes))\n plt.xticks(x_tick_marks, ['precision', 'recall', 'f1-score'], rotation=45)\n plt.yticks(y_tick_marks, classes)\n plt.tight_layout()\n plt.ylabel('Classes')\n plt.xlabel('Measures')\n\nFor the example classification_report provided by you. Here are the code and output.\nsampleClassificationReport = \"\"\" precision recall f1-score support\n\n 1 0.62 1.00 0.76 66\n 2 0.93 0.93 0.93 40\n 3 0.59 0.97 0.73 67\n 4 0.47 0.92 0.62 272\n 5 1.00 0.16 0.28 413\n\navg / total 0.77 0.57 0.49 858\"\"\"\n\n\nplot_classification_report(sampleClassificationReport)\n\n\nHere is how to use it with sklearn classification_report output:\nfrom sklearn.metrics import classification_report\nclassificationReport = classification_report(y_true, y_pred, target_names=target_names)\n\nplot_classification_report(classificationReport)\n\nWith this function, you can also add the \"avg / total\" result to the plot. 
To use it just add an argument with_avg_total like this:\nplot_classification_report(classificationReport, with_avg_total=True)\n\n", "\nMy solution is to use the python package, Yellowbrick. Yellowbrick in a nutshell combines scikit-learn with matplotlib to produce visualizations for your models. In a few lines you can do what was suggested above.\nhttp://www.scikit-yb.org/en/latest/api/classifier/classification_report.html\nfrom sklearn.naive_bayes import GaussianNB\nfrom yellowbrick.classifier import ClassificationReport\n\n# Instantiate the classification model and visualizer\nbayes = GaussianNB()\nvisualizer = ClassificationReport(bayes, classes=classes, support=True)\n\nvisualizer.fit(X_train, y_train) # Fit the visualizer and the model\nvisualizer.score(X_test, y_test) # Evaluate the model on the test data\nvisualizer.show() # Draw/show the data\n\n", "As for those asking how to make this work with the latest version of the classification_report(y_test, y_pred), you have to change the -2 to -4 in plot_classification_report() method in the accepted answer code of this thread.\nI could not add this as a comment on the answer because my account doesn't have enough reputation.\nYou need to change\n for line in lines[2 : (len(lines) - 2)]:\nto\n for line in lines[2 : (len(lines) - 4)]:\nor copy this edited version:\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef show_values(pc, fmt=\"%.2f\", **kw):\n '''\n Heatmap with text in each cell with matplotlib's pyplot\n Source: https://stackoverflow.com/a/25074150/395857 \n By HYRY\n '''\n pc.update_scalarmappable()\n ax = pc.axes\n #ax = pc.axes# FOR LATEST MATPLOTLIB\n #Use zip BELOW IN PYTHON 3\n for p, color, value in zip(pc.get_paths(), pc.get_facecolors(), pc.get_array()):\n x, y = p.vertices[:-2, :].mean(0)\n if np.all(color[:3] > 0.5):\n color = (0.0, 0.0, 0.0)\n else:\n color = (1.0, 1.0, 1.0)\n ax.text(x, y, fmt % value, ha=\"center\", va=\"center\", color=color, **kw)\n\n\ndef cm2inch(*tupl):\n '''\n Specify figure size in centimeter in matplotlib\n Source: https://stackoverflow.com/a/22787457/395857\n By gns-ank\n '''\n inch = 2.54\n if type(tupl[0]) == tuple:\n return tuple(i/inch for i in tupl[0])\n else:\n return tuple(i/inch for i in tupl)\n\n\ndef heatmap(AUC, title, xlabel, ylabel, xticklabels, yticklabels, figure_width=40, figure_height=20, correct_orientation=False, cmap='RdBu'):\n '''\n Inspired by:\n - https://stackoverflow.com/a/16124677/395857 \n - https://stackoverflow.com/a/25074150/395857\n '''\n\n # Plot it out\n fig, ax = plt.subplots() \n #c = ax.pcolor(AUC, edgecolors='k', linestyle= 'dashed', linewidths=0.2, cmap='RdBu', vmin=0.0, vmax=1.0)\n c = ax.pcolor(AUC, edgecolors='k', linestyle= 'dashed', linewidths=0.2, cmap=cmap)\n\n # put the major ticks at the middle of each cell\n ax.set_yticks(np.arange(AUC.shape[0]) + 0.5, minor=False)\n ax.set_xticks(np.arange(AUC.shape[1]) + 0.5, minor=False)\n\n # set tick labels\n #ax.set_xticklabels(np.arange(1,AUC.shape[1]+1), minor=False)\n ax.set_xticklabels(xticklabels, minor=False)\n ax.set_yticklabels(yticklabels, minor=False)\n\n # set title and x/y labels\n plt.title(title)\n plt.xlabel(xlabel)\n plt.ylabel(ylabel) \n\n # Remove last blank column\n plt.xlim( (0, AUC.shape[1]) )\n\n # Turn off all the ticks\n ax = plt.gca() \n for t in ax.xaxis.get_major_ticks():\n t.tick1On = False\n t.tick2On = False\n for t in ax.yaxis.get_major_ticks():\n t.tick1On = False\n t.tick2On = False\n\n # Add color bar\n plt.colorbar(c)\n\n # Add text in each cell \n 
show_values(c)\n\n # Proper orientation (origin at the top left instead of bottom left)\n if correct_orientation:\n ax.invert_yaxis()\n ax.xaxis.tick_top() \n\n # resize \n fig = plt.gcf()\n #fig.set_size_inches(cm2inch(40, 20))\n #fig.set_size_inches(cm2inch(40*4, 20*4))\n fig.set_size_inches(cm2inch(figure_width, figure_height))\n\n\n\ndef plot_classification_report(classification_report, title='Classification report ', cmap='RdBu'):\n '''\n Plot scikit-learn classification report.\n Extension based on https://stackoverflow.com/a/31689645/395857 \n '''\n lines = classification_report.split('\\n')\n\n classes = []\n plotMat = []\n support = []\n class_names = []\n\n for line in lines[2 : (len(lines) - 4)]:\n t = line.strip().split()\n if len(t) < 2: continue\n classes.append(t[0])\n v = [float(x) for x in t[1: len(t) - 1]]\n support.append(int(t[-1]))\n class_names.append(t[0])\n print(v)\n plotMat.append(v)\n\n print('plotMat: {0}'.format(plotMat))\n print('support: {0}'.format(support))\n\n xlabel = 'Metrics'\n ylabel = 'Classes'\n xticklabels = ['Precision', 'Recall', 'F1-score']\n yticklabels = ['{0} ({1})'.format(class_names[idx], sup) for idx, sup in enumerate(support)]\n figure_width = 25\n figure_height = len(class_names) + 7\n correct_orientation = False\n heatmap(np.array(plotMat), title, xlabel, ylabel, xticklabels, yticklabels, figure_width, figure_height, correct_orientation, cmap=cmap)\n\n\ndef main():\n # OLD \n # sampleClassificationReport = \"\"\" precision recall f1-score support\n # \n # Acacia 0.62 1.00 0.76 66\n # Blossom 0.93 0.93 0.93 40\n # Camellia 0.59 0.97 0.73 67\n # Daisy 0.47 0.92 0.62 272\n # Echium 1.00 0.16 0.28 413\n # \n # avg / total 0.77 0.57 0.49 858\"\"\"\n\n # NEW\n sampleClassificationReport = \"\"\" precision recall f1-score support\n\n 1 1.00 0.33 0.50 9\n 2 0.50 1.00 0.67 9\n 3 0.86 0.67 0.75 9\n 4 0.90 1.00 0.95 9\n 5 0.67 0.89 0.76 9\n 6 1.00 1.00 1.00 9\n 7 1.00 1.00 1.00 9\n 8 0.90 1.00 0.95 9\n 9 0.86 0.67 0.75 9\n 10 1.00 0.78 0.88 9\n 11 1.00 0.89 0.94 9\n 12 0.90 1.00 0.95 9\n 13 1.00 0.56 0.71 9\n 14 1.00 1.00 1.00 9\n 15 0.60 0.67 0.63 9\n 16 1.00 0.56 0.71 9\n 17 0.75 0.67 0.71 9\n 18 0.80 0.89 0.84 9\n 19 1.00 1.00 1.00 9\n 20 1.00 0.78 0.88 9\n 21 1.00 1.00 1.00 9\n 22 1.00 1.00 1.00 9\n 23 0.27 0.44 0.33 9\n 24 0.60 1.00 0.75 9\n 25 0.56 1.00 0.72 9\n 26 0.18 0.22 0.20 9\n 27 0.82 1.00 0.90 9\n 28 0.00 0.00 0.00 9\n 29 0.82 1.00 0.90 9\n 30 0.62 0.89 0.73 9\n 31 1.00 0.44 0.62 9\n 32 1.00 0.78 0.88 9\n 33 0.86 0.67 0.75 9\n 34 0.64 1.00 0.78 9\n 35 1.00 0.33 0.50 9\n 36 1.00 0.89 0.94 9\n 37 0.50 0.44 0.47 9\n 38 0.69 1.00 0.82 9\n 39 1.00 0.78 0.88 9\n 40 0.67 0.44 0.53 9\n\n accuracy 0.77 360\n macro avg 0.80 0.77 0.76 360\nweighted avg 0.80 0.77 0.76 360\n \"\"\"\n plot_classification_report(sampleClassificationReport)\n plt.savefig('test_plot_classif_report.png', dpi=200, format='png', bbox_inches='tight')\n plt.close()\n\nif __name__ == \"__main__\":\n main()\n #cProfile.run('main()') # if you want to do some profiling\n\n", "Here you can get the plot same as Franck Dernoncourt's, but with much shorter code (can fit into a single function).\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport itertools\n\n\ndef plot_classification_report(classificationReport,\n title='Classification report',\n cmap='RdBu'):\n\n classificationReport = classificationReport.replace('\\n\\n', '\\n')\n classificationReport = classificationReport.replace(' / ', '/')\n lines = classificationReport.split('\\n')\n\n classes, plotMat, support, 
class_names = [], [], [], []\n for line in lines[1:]: # if you don't want avg/total result, then change [1:] into [1:-1]\n t = line.strip().split()\n if len(t) < 2:\n continue\n classes.append(t[0])\n v = [float(x) for x in t[1: len(t) - 1]]\n support.append(int(t[-1]))\n class_names.append(t[0])\n plotMat.append(v)\n\n plotMat = np.array(plotMat)\n xticklabels = ['Precision', 'Recall', 'F1-score']\n yticklabels = ['{0} ({1})'.format(class_names[idx], sup)\n for idx, sup in enumerate(support)]\n\n plt.imshow(plotMat, interpolation='nearest', cmap=cmap, aspect='auto')\n plt.title(title)\n plt.colorbar()\n plt.xticks(np.arange(3), xticklabels, rotation=45)\n plt.yticks(np.arange(len(classes)), yticklabels)\n\n upper_thresh = plotMat.min() + (plotMat.max() - plotMat.min()) / 10 * 8\n lower_thresh = plotMat.min() + (plotMat.max() - plotMat.min()) / 10 * 2\n for i, j in itertools.product(range(plotMat.shape[0]), range(plotMat.shape[1])):\n plt.text(j, i, format(plotMat[i, j], '.2f'),\n horizontalalignment=\"center\",\n color=\"white\" if (plotMat[i, j] > upper_thresh or plotMat[i, j] < lower_thresh) else \"black\")\n\n plt.ylabel('Metrics')\n plt.xlabel('Classes')\n plt.tight_layout()\n\n\ndef main():\n\n sampleClassificationReport = \"\"\" precision recall f1-score support\n\n Acacia 0.62 1.00 0.76 66\n Blossom 0.93 0.93 0.93 40\n Camellia 0.59 0.97 0.73 67\n Daisy 0.47 0.92 0.62 272\n Echium 1.00 0.16 0.28 413\n\n avg / total 0.77 0.57 0.49 858\"\"\"\n\n plot_classification_report(sampleClassificationReport)\n plt.show()\n plt.close()\n\n\nif __name__ == '__main__':\n main()\n\n\n", "This is my simple solution, using seaborn heatmap\nimport seaborn as sns\nimport numpy as np\nfrom sklearn.metrics import precision_recall_fscore_support\nimport matplotlib.pyplot as plt\n\ny = np.random.randint(low=0, high=10, size=100)\ny_p = np.random.randint(low=0, high=10, size=100)\n\ndef plot_classification_report(y_tru, y_prd, figsize=(10, 10), ax=None):\n\n plt.figure(figsize=figsize)\n\n xticks = ['precision', 'recall', 'f1-score', 'support']\n yticks = list(np.unique(y_tru))\n yticks += ['avg']\n\n rep = np.array(precision_recall_fscore_support(y_tru, y_prd)).T\n avg = np.mean(rep, axis=0)\n avg[-1] = np.sum(rep[:, -1])\n rep = np.insert(rep, rep.shape[0], avg, axis=0)\n\n sns.heatmap(rep,\n annot=True, \n cbar=False, \n xticklabels=xticks, \n yticklabels=yticks,\n ax=ax)\n\nplot_classification_report(y, y_p)\n\nThis is how the plot will look like\n", "This works for me, pieced it together from the top answer above, also, i cannot comment but THANKS all for this thread, it helped a LOT!\ndef plot_classification_report(cr, title='Classification report ', with_avg_total=False, cmap=plt.cm.Blues):\n lines = cr.split('\\n')\n classes = []\n plotMat = []\n for line in lines[2 : (len(lines) - 6)]: rt\n t = line.split()\n classes.append(t[0])\n v = [float(x) for x in t[1: len(t) - 1]]\n plotMat.append(v)\n\n if with_avg_total:\n aveTotal = lines[len(lines) - 1].split()\n classes.append('avg/total')\n vAveTotal = [float(x) for x in t[1:len(aveTotal) - 1]]\n plotMat.append(vAveTotal)\n\n plt.figure(figsize=(12,48))\n #plt.imshow(plotMat, interpolation='nearest', cmap=cmap) THIS also works but the scale is not good neither the colors for many classes(200)\n #plt.colorbar()\n\n plt.title(title)\n x_tick_marks = np.arange(3)\n y_tick_marks = np.arange(len(classes))\n plt.xticks(x_tick_marks, ['precision', 'recall', 'f1-score'], rotation=45)\n plt.yticks(y_tick_marks, classes)\n plt.tight_layout()\n 
plt.ylabel('Classes')\n plt.xlabel('Measures')\n import seaborn as sns\n sns.heatmap(plotMat, annot=True) \n\nAfter this, make sure class labels don't contain any space due the splits\nreportstr = classification_report(true_classes, y_pred,target_names=class_labels_no_spaces)\n\nplot_classification_report(reportstr)\n\n", "I tried to imitate the output of yellowbrick's ClassificationReport as much as possible using classification_report, seaborn and matplotlib packages\nfrom sklearn.metrics import classification_report\nimport pandas as pd\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pathlib\n\ndef plot_classification_report(y_test, y_pred, title='Classification Report', figsize=(8, 6), dpi=70, save_fig_path=None, **kwargs):\n \"\"\"\n Plot the classification report of sklearn\n \n Parameters\n ----------\n y_test : pandas.Series of shape (n_samples,)\n Targets.\n y_pred : pandas.Series of shape (n_samples,)\n Predictions.\n title : str, default = 'Classification Report'\n Plot title.\n fig_size : tuple, default = (8, 6)\n Size (inches) of the plot.\n dpi : int, default = 70\n Image DPI.\n save_fig_path : str, defaut=None\n Full path where to save the plot. Will generate the folders if they don't exist already.\n **kwargs : attributes of classification_report class of sklearn\n \n Returns\n -------\n fig : Matplotlib.pyplot.Figure\n Figure from matplotlib\n ax : Matplotlib.pyplot.Axe\n Axe object from matplotlib\n \"\"\" \n fig, ax = plt.subplots(figsize=figsize, dpi=dpi)\n \n clf_report = classification_report(y_test, y_pred, output_dict=True, **kwargs)\n keys_to_plot = [key for key in clf_report.keys() if key not in ('accuracy', 'macro avg', 'weighted avg')]\n df = pd.DataFrame(clf_report, columns=keys_to_plot).T\n #the following line ensures that dataframe are sorted from the majority classes to the minority classes\n df.sort_values(by=['support'], inplace=True) \n \n #first, let's plot the heatmap by masking the 'support' column\n rows, cols = df.shape\n mask = np.zeros(df.shape)\n mask[:,cols-1] = True\n \n ax = sns.heatmap(df, mask=mask, annot=True, cmap=\"YlGn\", fmt='.3g',\n vmin=0.0,\n vmax=1.0,\n linewidths=2, linecolor='white'\n )\n \n #then, let's add the support column by normalizing the colors in this column\n mask = np.zeros(df.shape)\n mask[:,:cols-1] = True \n \n ax = sns.heatmap(df, mask=mask, annot=True, cmap=\"YlGn\", cbar=False,\n linewidths=2, linecolor='white', fmt='.0f',\n vmin=df['support'].min(),\n vmax=df['support'].sum(), \n norm=mpl.colors.Normalize(vmin=df['support'].min(),\n vmax=df['support'].sum())\n ) \n \n plt.title(title)\n plt.xticks(rotation = 45)\n plt.yticks(rotation = 360)\n \n if (save_fig_path != None):\n path = pathlib.Path(save_fig_path)\n path.parent.mkdir(parents=True, exist_ok=True)\n fig.savefig(save_fig_path)\n \n return fig, ax\n\nSyntax - Binary Classification\nfig, ax = plot_classification_report(y_test, y_pred, \n title='Random Forest Classification Report',\n figsize=(8, 6), dpi=70,\n target_names=[\"barren\",\"mineralized\"], \n save_fig_path = \"dir1/dir2/classificationreport_plot.png\")\n\n\nSyntax - Multiclass Classification\nfig, ax = plot_classification_report(y_test, y_pred, \n title='Random Forest Classification Report - Multiclass',\n figsize=(8, 6), dpi=70,\n target_names=[\"class1\", \"class2\", \"class3\", \"class4\"],\n save_fig_path = \"multi_dir1/multi_dir2/classificationreport_plot.png\")\n\n\n", "If you just want to plot the classification report as a bar chart in a Jupyter 
notebook, you can do the following.\n# Assuming that classification_report, y_test and predictions are in scope...\nimport pandas as pd\n\n# Build a DataFrame from the classification_report output_dict.\nreport_data = []\nfor label, metrics in classification_report(y_test, predictions, output_dict=True).items():\n metrics['label'] = label\n report_data.append(metrics)\n\nreport_df = pd.DataFrame(\n report_data, \n columns=['label', 'precision', 'recall', 'f1-score', 'support']\n)\n\n# Plot as a bar chart.\nreport_df.plot(y=['precision', 'recall', 'f1-score'], x='label', kind='bar')\n\nOne issue with this visualisation is that imbalanced classes are not obvious, but are important in interpreting the results. One way to represent this is to add a version of the label that includes the number of samples (i.e. the support):\n# Add a column to the DataFrame.\nreport_df['labelsupport'] = [f'{label} (n={support})' \n for label, support in zip(report_df.label, report_df.support)]\n\n# Plot the chart the same way, but use `labelsupport` as the x-axis.\nreport_df.plot(y=['precision', 'recall', 'f1-score'], x='labelsupport', kind='bar')\n\n", "It was really useful for my Franck Dernoncourt and Bin's answer, but I had two problems.\nFirst, when I tried to use it with classes like \"No hit\" or a name with space inside, the plot failed.\nAnd the other problem was to use this functions with MatPlotlib 3.* and scikitLearn-0.22.* versions. So I did some little changes:\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef show_values(pc, fmt=\"%.2f\", **kw):\n '''\n Heatmap with text in each cell with matplotlib's pyplot\n Source: https://stackoverflow.com/a/25074150/395857 \n By HYRY\n '''\n pc.update_scalarmappable()\n ax = pc.axes\n #ax = pc.axes# FOR LATEST MATPLOTLIB\n #Use zip BELOW IN PYTHON 3\n for p, color, value in zip(pc.get_paths(), pc.get_facecolors(), pc.get_array()):\n x, y = p.vertices[:-2, :].mean(0)\n if np.all(color[:3] > 0.5):\n color = (0.0, 0.0, 0.0)\n else:\n color = (1.0, 1.0, 1.0)\n ax.text(x, y, fmt % value, ha=\"center\", va=\"center\", color=color, **kw)\n\n\ndef cm2inch(*tupl):\n '''\n Specify figure size in centimeter in matplotlib\n Source: https://stackoverflow.com/a/22787457/395857\n By gns-ank\n '''\n inch = 2.54\n if type(tupl[0]) == tuple:\n return tuple(i/inch for i in tupl[0])\n else:\n return tuple(i/inch for i in tupl)\n\n\ndef heatmap(AUC, title, xlabel, ylabel, xticklabels, yticklabels, figure_width=40, figure_height=20, correct_orientation=False, cmap='RdBu'):\n '''\n Inspired by:\n - https://stackoverflow.com/a/16124677/395857 \n - https://stackoverflow.com/a/25074150/395857\n '''\n\n # Plot it out\n fig, ax = plt.subplots() \n #c = ax.pcolor(AUC, edgecolors='k', linestyle= 'dashed', linewidths=0.2, cmap='RdBu', vmin=0.0, vmax=1.0)\n c = ax.pcolor(AUC, edgecolors='k', linestyle= 'dashed', linewidths=0.2, cmap=cmap, vmin=0.0, vmax=1.0)\n\n # put the major ticks at the middle of each cell\n ax.set_yticks(np.arange(AUC.shape[0]) + 0.5, minor=False)\n ax.set_xticks(np.arange(AUC.shape[1]) + 0.5, minor=False)\n\n # set tick labels\n #ax.set_xticklabels(np.arange(1,AUC.shape[1]+1), minor=False)\n ax.set_xticklabels(xticklabels, minor=False)\n ax.set_yticklabels(yticklabels, minor=False)\n\n # set title and x/y labels\n plt.title(title, y=1.25)\n plt.xlabel(xlabel)\n plt.ylabel(ylabel) \n\n # Remove last blank column\n plt.xlim( (0, AUC.shape[1]) )\n\n # Turn off all the ticks\n ax = plt.gca() \n for t in ax.xaxis.get_major_ticks():\n 
t.tick1line.set_visible(False)\n t.tick2line.set_visible(False)\n for t in ax.yaxis.get_major_ticks():\n t.tick1line.set_visible(False)\n t.tick2line.set_visible(False)\n\n # Add color bar\n plt.colorbar(c)\n\n # Add text in each cell \n show_values(c)\n\n # Proper orientation (origin at the top left instead of bottom left)\n if correct_orientation:\n ax.invert_yaxis()\n ax.xaxis.tick_top() \n\n # resize \n fig = plt.gcf()\n #fig.set_size_inches(cm2inch(40, 20))\n #fig.set_size_inches(cm2inch(40*4, 20*4))\n fig.set_size_inches(cm2inch(figure_width, figure_height))\n\n\n\ndef plot_classification_report(classification_report, number_of_classes=2, title='Classification report ', cmap='RdYlGn'):\n '''\n Plot scikit-learn classification report.\n Extension based on https://stackoverflow.com/a/31689645/395857 \n '''\n lines = classification_report.split('\\n')\n \n #drop initial lines\n lines = lines[2:]\n\n classes = []\n plotMat = []\n support = []\n class_names = []\n for line in lines[: number_of_classes]:\n t = list(filter(None, line.strip().split(' ')))\n if len(t) < 4: continue\n classes.append(t[0])\n v = [float(x) for x in t[1: len(t) - 1]]\n support.append(int(t[-1]))\n class_names.append(t[0])\n plotMat.append(v)\n\n\n xlabel = 'Metrics'\n ylabel = 'Classes'\n xticklabels = ['Precision', 'Recall', 'F1-score']\n yticklabels = ['{0} ({1})'.format(class_names[idx], sup) for idx, sup in enumerate(support)]\n figure_width = 10\n figure_height = len(class_names) + 3\n correct_orientation = True\n heatmap(np.array(plotMat), title, xlabel, ylabel, xticklabels, yticklabels, figure_width, figure_height, correct_orientation, cmap=cmap)\n plt.show()\n\n\n\n\n", "You can use sklearn-evaluation to plot sklearn's classification report (tested it with version 0.8.2).\nfrom sklearn import datasets\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import train_test_split\n\nfrom sklearn_evaluation import plot\n\nX, y = datasets.make_classification(200, 10, n_informative=5, class_sep=0.65)\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)\n\ny_pred_rf = RandomForestClassifier().fit(X_train, y_train).predict(X_test)\ny_pred_lr = LogisticRegression().fit(X_train, y_train).predict(X_test)\n\ntarget_names = [\"Not spam\", \"Spam\"]\n\ncr_rf = plot.ClassificationReport.from_raw_data(\n y_test, y_pred_rf, target_names=target_names\n)\ncr_lr = plot.ClassificationReport.from_raw_data(\n y_test, y_pred_lr, target_names=target_names\n)\n\n# display one of the classification reports\ncr_rf\n\n\n# compare both reports\ncr_rf + cr_lr\n\n\n# how better the random forest is?\ncr_rf - cr_lr\n\n\n" ]
[ 42, 28, 15, 14, 7, 6, 3, 2, 2, 1, 1, 0 ]
[ "You can do:\nimport matplotlib.pyplot as plt\n\ncm = [[0.50, 1.00, 0.67],\n [0.00, 0.00, 0.00],\n [1.00, 0.67, 0.80]]\nlabels = ['class 0', 'class 1', 'class 2']\nfig, ax = plt.subplots()\nh = ax.matshow(cm)\nfig.colorbar(h)\nax.set_xticklabels([''] + labels)\nax.set_yticklabels([''] + labels)\nax.set_xlabel('Predicted')\nax.set_ylabel('Ground truth')\n\n\n" ]
[ -1 ]
[ "matplotlib", "numpy", "python", "scikit_learn" ]
stackoverflow_0028200786_matplotlib_numpy_python_scikit_learn.txt
Q: How to sent push message from multiple apps in python I have 2 different apps with 2 different credentials for firebase. So firebase_admin is 2 times initialised. import of firebase from firebase_admin import credentials, messaging Initialise 2 firebase_admins and assigne the json credentials. #initialize firebase 1 json_file_first = location of the json one credentials_first = credentials.Certificate(json_file_first) first_app = firebase_admin.initialize_app(credentials_first,name="first") #initialize firebase 2 json_file_second = location of the json second credentials_second = credentials.Certificate(json_file_second) second_app = firebase_admin.initialize_app(credentials_second, name="second") Creating and sending the message: def sendPushNotificationTest(title, msg, registration_token): message = messaging.MulticastMessage( //here its using the default instance of firebase_admin notification=messaging.Notification( title=title, body=msg), tokens=registration_token ) respons = messaging.send_multicast(message) When creating/sending the push message it can be either with firebase_admin "first" or with firebase_admin "second". Where do i assign the right initialised firebase_admin. A: I did found the solution, its in the response below the message itself. You can add the firebase_admin instance. def sendPushNotificationTest(title, msg, registration_token): message = messaging.MulticastMessage( ///i think i have to define the sender before messaging notification=messaging.Notification( title=title, body=msg), tokens=registration_token ) respons = messaging.send_multicast(message, app="first") //assign the right instance
How to send push messages from multiple apps in Python
I have 2 different apps with 2 different credentials for Firebase, so firebase_admin is initialised twice. Import of firebase: from firebase_admin import credentials, messaging Initialise the 2 firebase_admins and assign the JSON credentials: #initialize firebase 1 json_file_first = location of the json one credentials_first = credentials.Certificate(json_file_first) first_app = firebase_admin.initialize_app(credentials_first,name="first") #initialize firebase 2 json_file_second = location of the json second credentials_second = credentials.Certificate(json_file_second) second_app = firebase_admin.initialize_app(credentials_second, name="second") Creating and sending the message: def sendPushNotificationTest(title, msg, registration_token): message = messaging.MulticastMessage( //here its using the default instance of firebase_admin notification=messaging.Notification( title=title, body=msg), tokens=registration_token ) respons = messaging.send_multicast(message) When creating/sending the push message it can be with either firebase_admin "first" or firebase_admin "second". Where do I assign the right initialised firebase_admin?
[ "I did found the solution, its in the response below the message itself. You can add the firebase_admin instance.\ndef sendPushNotificationTest(title, msg, registration_token):\n message = messaging.MulticastMessage( ///i think i have to define the sender before messaging\n notification=messaging.Notification(\n title=title,\n body=msg),\n tokens=registration_token\n\n )\n respons = messaging.send_multicast(message, app=\"first\") //assign the right instance\n\n" ]
[ 0 ]
[]
[]
[ "firebase", "python" ]
stackoverflow_0074631191_firebase_python.txt
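A minimal sketch of the multi-app pattern discussed above. The service-account paths and device tokens are placeholders, and note that messaging.send_multicast is expected to take the App instance itself (the object returned by initialize_app, or looked up with firebase_admin.get_app("first")) rather than the plain name string used in the answer.

import firebase_admin
from firebase_admin import credentials, messaging

# Two independently credentialed apps; the JSON paths are placeholders.
first_app = firebase_admin.initialize_app(
    credentials.Certificate("first-service-account.json"), name="first")
second_app = firebase_admin.initialize_app(
    credentials.Certificate("second-service-account.json"), name="second")

def send_push_notification(title, msg, registration_tokens, app):
    # Build the multicast message once; the app argument decides which
    # Firebase project the notification is sent from.
    message = messaging.MulticastMessage(
        notification=messaging.Notification(title=title, body=msg),
        tokens=registration_tokens,
    )
    return messaging.send_multicast(message, app=app)

# Placeholder tokens for illustration only.
send_push_notification("Hello", "From the first project", ["token-1"], first_app)
send_push_notification("Hello", "From the second project", ["token-2"], second_app)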
Q: How to click on a button in Telethon? How can I click on buttons in a "keyboardButton" keyboard? I can click on inline buttons in messages, but I don't know how to click buttons above the keyboard; .click() doesn't work, because it's not a message. A: Please correct me if I'm wrong. Use event.click(0,0) if the button is at the top left, and event.click(0,1) if it is the second button from the top left. So the indexing is (row, column). Hope it's easy to understand.
How to click on a button in Telethon?
How can I click on buttons in a "keyboardButton" keyboard? I can click on inline buttons in messages, but I don't know how to click buttons above the keyboard; .click() doesn't work, because it's not a message.
[ "Please, correct me if i wrong.\nUse event.click(0,0)\nIf the button is at the top left\nUse event.click(0,1)\nIf the button is at number 2 from the top left\nSo, (row,column)\nHope it's easy to understand.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074630777_python.txt
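To make the row/column idea above concrete, here is a hedged sketch of pressing a reply-keyboard (KeyboardButton) button with Telethon. The api_id, api_hash, session name and chat name are placeholders; the key point is that Message.click(row, column) also works for normal keyboards, where it effectively sends the button's text back to the chat.

import asyncio
from telethon import TelegramClient

api_id, api_hash = 12345, "your-api-hash"   # placeholders

async def press_top_left_button():
    async with TelegramClient("session", api_id, api_hash) as client:
        # Fetch the latest message from the bot that carries the keyboard.
        message = (await client.get_messages("some_bot", limit=1))[0]
        if message.buttons:
            # (row, column): (0, 0) is the top-left button, (0, 1) is the
            # one to its right. For a non-inline keyboard this sends the
            # button's text as a regular message.
            await message.click(0, 0)

asyncio.run(press_top_left_button())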
Q: How can I get elements from nested lists? The problem that I have to solve is; write a recursive python function recSumList() that sums all the integer and float elements in the list. you can use the type() function to find the type of the element. List : [1, "abcd", 2.2, [3.6, 4], [5, "8", 6]] I just could write; def recSumlist(): list1 = [1, "abcd", 2.2, [3.6, 4], [5, "8", 6]] I don't know how to get elements from nested lists and write this in recursive function. A: If current item is a list, get sum of all the elements. Else if it is an int or float return the corresponding value. If its str or something else return 0 to ignore the values def recSumlist(l): if isinstance(l, list): return sum(recSumlist(i) for i in l) return l if isinstance(l, (float, int)) else 0 print(recSumlist([1, "abcd", 2.2, [3.6, 4], [5, "8", 6]])) Output: 21.8 As suggested by @chris, you can also further shorten it to this if you want - def recSumlist(l): return sum(x if isinstance(x, int) or isinstance(x, float) else recSumlist(x) if isinstance(x, list) else 0 for x in l)
How can I get elements from nested lists?
The problem that I have to solve is: write a recursive Python function recSumList() that sums all the integer and float elements in the list. You can use the type() function to find the type of the element. List: [1, "abcd", 2.2, [3.6, 4], [5, "8", 6]] So far I could only write: def recSumlist(): list1 = [1, "abcd", 2.2, [3.6, 4], [5, "8", 6]] I don't know how to get elements from nested lists or how to write this as a recursive function.
[ "\nIf current item is a list, get sum of all the elements.\nElse if it is an int or float return the corresponding value.\nIf its str or something else return 0 to ignore the values\n\ndef recSumlist(l):\n if isinstance(l, list):\n return sum(recSumlist(i) for i in l)\n return l if isinstance(l, (float, int)) else 0\n\nprint(recSumlist([1, \"abcd\", 2.2, [3.6, 4], [5, \"8\", 6]]))\n\nOutput:\n21.8\n\n\nAs suggested by @chris, you can also further shorten it to this if you want -\ndef recSumlist(l):\n return sum(x if isinstance(x, int) or isinstance(x, float) else recSumlist(x) if isinstance(x, list) else 0 for x in l)\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074631977_python.txt
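As a complement to the answer above, here is a sketch that follows the assignment's wording more literally: it uses type() instead of isinstance() and takes the list as a parameter. The function name matches the one in the question; everything else is illustrative.

def recSumList(items):
    total = 0
    for element in items:
        if type(element) is list:            # recurse into nested lists
            total += recSumList(element)
        elif type(element) in (int, float):  # add numbers, skip strings
            total += element
    return total

print(recSumList([1, "abcd", 2.2, [3.6, 4], [5, "8", 6]]))  # 21.8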
Q: How can I do a histogram with 1D gaussian mixture with sklearn? I would like to do an histogram with mixture 1D gaussian as the picture. Thanks Meng for the picture. My histogram is this: I have a file with a lot of data (4,000,000 of numbers) in a column: 1.727182 1.645300 1.619943 1.709263 1.614427 1.522313 And I'm using the follow script with modifications than Meng and Justice Lord have done : from matplotlib import rc from sklearn import mixture import matplotlib.pyplot as plt import numpy as np import matplotlib import matplotlib.ticker as tkr import scipy.stats as stats x = open("prueba.dat").read().splitlines() f = np.ravel(x).astype(np.float) f=f.reshape(-1,1) g = mixture.GaussianMixture(n_components=3,covariance_type='full') g.fit(f) weights = g.weights_ means = g.means_ covars = g.covariances_ plt.hist(f, bins=100, histtype='bar', density=True, ec='red', alpha=0.5) plt.plot(f,weights[0]*stats.norm.pdf(f,means[0],np.sqrt(covars[0])), c='red') plt.rcParams['agg.path.chunksize'] = 10000 plt.grid() plt.show() And when I run the script, I have the follow plot: So, I don't have idea how put the start and end of all gaussians that must be there. I'm new in python and I'm confuse with the way to use the modules. Please, Can you help me and guide me how can I do this plot? Thanks a lot A: Although this is a reasonably old thread, I would like to provide my take on it. I believe my answer can be more comprehensible to some. Moreover, I include a test to check whether or not the desired number of components makes statistical sense via the BIC criterion. # import libraries (some are for cosmetics) import matplotlib.pyplot as plt import numpy as np from scipy import stats from matplotlib.ticker import (MultipleLocator, FormatStrFormatter, AutoMinorLocator) import astropy from scipy.stats import norm from sklearn.mixture import GaussianMixture as GMM import matplotlib as mpl mpl.rcParams['axes.linewidth'] = 1.5 mpl.rcParams.update({'font.size': 15, 'font.family': 'STIXGeneral', 'mathtext.fontset': 'stix'}) # create the data as in @Meng's answer x = np.concatenate((np.random.normal(5, 5, 1000), np.random.normal(10, 2, 1000))) x = x.reshape(-1, 1) # first of all, let's confirm the optimal number of components bics = [] min_bic = 0 counter=1 for i in range (10): # test the AIC/BIC metric between 1 and 10 components gmm = GMM(n_components = counter, max_iter=1000, random_state=0, covariance_type = 'full') labels = gmm.fit(x).predict(x) bic = gmm.bic(x) bics.append(bic) if bic < min_bic or min_bic == 0: min_bic = bic opt_bic = counter counter = counter + 1 # plot the evolution of BIC/AIC with the number of components fig = plt.figure(figsize=(10, 4)) ax = fig.add_subplot(1,2,1) # Plot 1 plt.plot(np.arange(1,11), bics, 'o-', lw=3, c='black', label='BIC') plt.legend(frameon=False, fontsize=15) plt.xlabel('Number of components', fontsize=20) plt.ylabel('Information criterion', fontsize=20) plt.xticks(np.arange(0,11, 2)) plt.title('Opt. 
components = '+str(opt_bic), fontsize=20) # Since the optimal value is n=2 according to both BIC and AIC, let's write down: n_optimal = opt_bic # create GMM model object gmm = GMM(n_components = n_optimal, max_iter=1000, random_state=10, covariance_type = 'full') # find useful parameters mean = gmm.fit(x).means_ covs = gmm.fit(x).covariances_ weights = gmm.fit(x).weights_ # create necessary things to plot x_axis = np.arange(-20, 30, 0.1) y_axis0 = norm.pdf(x_axis, float(mean[0][0]), np.sqrt(float(covs[0][0][0])))*weights[0] # 1st gaussian y_axis1 = norm.pdf(x_axis, float(mean[1][0]), np.sqrt(float(covs[1][0][0])))*weights[1] # 2nd gaussian ax = fig.add_subplot(1,2,2) # Plot 2 plt.hist(x, density=True, color='black', bins=np.arange(-100, 100, 1)) plt.plot(x_axis, y_axis0, lw=3, c='C0') plt.plot(x_axis, y_axis1, lw=3, c='C1') plt.plot(x_axis, y_axis0+y_axis1, lw=3, c='C2', ls='dashed') plt.xlim(-10, 20) #plt.ylim(0.0, 2.0) plt.xlabel(r"X", fontsize=20) plt.ylabel(r"Density", fontsize=20) plt.subplots_adjust(wspace=0.3) plt.show() plt.close('all') A: It's all about reshape. First, you need to reshape f. For pdf, reshape before using stats.norm.pdf. Similarly, sort and reshape before plotting. from matplotlib import rc from sklearn import mixture import matplotlib.pyplot as plt import numpy as np import matplotlib import matplotlib.ticker as tkr import scipy.stats as stats # x = open("prueba.dat").read().splitlines() # create the data x = np.concatenate((np.random.normal(5, 5, 1000),np.random.normal(10, 2, 1000))) f = np.ravel(x).astype(np.float) f=f.reshape(-1,1) g = mixture.GaussianMixture(n_components=3,covariance_type='full') g.fit(f) weights = g.weights_ means = g.means_ covars = g.covariances_ plt.hist(f, bins=100, histtype='bar', density=True, ec='red', alpha=0.5) f_axis = f.copy().ravel() f_axis.sort() plt.plot(f_axis,weights[0]*stats.norm.pdf(f_axis,means[0],np.sqrt(covars[0])).ravel(), c='red') plt.rcParams['agg.path.chunksize'] = 10000 plt.grid() plt.show() A: limba2, thanks very much for the answer. I couldn't run it as it is. There seems there is a prb with numpy library version. I fixed it by upgrading the following: pip install -U threadpoolctl
How can I do a histogram with 1D gaussian mixture with sklearn?
I would like to do a histogram with a 1D Gaussian mixture, as in the picture. Thanks Meng for the picture. My histogram is this: I have a file with a lot of data (4,000,000 numbers) in a column: 1.727182 1.645300 1.619943 1.709263 1.614427 1.522313 And I'm using the following script, with the modifications that Meng and Justice Lord have done: from matplotlib import rc from sklearn import mixture import matplotlib.pyplot as plt import numpy as np import matplotlib import matplotlib.ticker as tkr import scipy.stats as stats x = open("prueba.dat").read().splitlines() f = np.ravel(x).astype(np.float) f=f.reshape(-1,1) g = mixture.GaussianMixture(n_components=3,covariance_type='full') g.fit(f) weights = g.weights_ means = g.means_ covars = g.covariances_ plt.hist(f, bins=100, histtype='bar', density=True, ec='red', alpha=0.5) plt.plot(f,weights[0]*stats.norm.pdf(f,means[0],np.sqrt(covars[0])), c='red') plt.rcParams['agg.path.chunksize'] = 10000 plt.grid() plt.show() And when I run the script, I get the following plot: So, I have no idea how to place the start and end of all the Gaussians that must be there. I'm new to Python and I'm confused about how to use the modules. Please, can you help me and guide me on how I can do this plot? Thanks a lot
[ "Although this is a reasonably old thread, I would like to provide my take on it. I believe my answer can be more comprehensible to some. Moreover, I include a test to check whether or not the desired number of components makes statistical sense via the BIC criterion.\n# import libraries (some are for cosmetics)\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy import stats\nfrom matplotlib.ticker import (MultipleLocator, FormatStrFormatter, AutoMinorLocator)\nimport astropy\nfrom scipy.stats import norm\nfrom sklearn.mixture import GaussianMixture as GMM\nimport matplotlib as mpl\nmpl.rcParams['axes.linewidth'] = 1.5\nmpl.rcParams.update({'font.size': 15, 'font.family': 'STIXGeneral', 'mathtext.fontset': 'stix'})\n\n\n# create the data as in @Meng's answer\nx = np.concatenate((np.random.normal(5, 5, 1000), np.random.normal(10, 2, 1000)))\nx = x.reshape(-1, 1)\n\n# first of all, let's confirm the optimal number of components\nbics = []\nmin_bic = 0\ncounter=1\nfor i in range (10): # test the AIC/BIC metric between 1 and 10 components\n gmm = GMM(n_components = counter, max_iter=1000, random_state=0, covariance_type = 'full')\n labels = gmm.fit(x).predict(x)\n bic = gmm.bic(x)\n bics.append(bic)\n if bic < min_bic or min_bic == 0:\n min_bic = bic\n opt_bic = counter\n counter = counter + 1\n\n\n# plot the evolution of BIC/AIC with the number of components\nfig = plt.figure(figsize=(10, 4))\nax = fig.add_subplot(1,2,1)\n# Plot 1\nplt.plot(np.arange(1,11), bics, 'o-', lw=3, c='black', label='BIC')\nplt.legend(frameon=False, fontsize=15)\nplt.xlabel('Number of components', fontsize=20)\nplt.ylabel('Information criterion', fontsize=20)\nplt.xticks(np.arange(0,11, 2))\nplt.title('Opt. components = '+str(opt_bic), fontsize=20)\n\n\n# Since the optimal value is n=2 according to both BIC and AIC, let's write down:\nn_optimal = opt_bic\n\n# create GMM model object\ngmm = GMM(n_components = n_optimal, max_iter=1000, random_state=10, covariance_type = 'full')\n\n# find useful parameters\nmean = gmm.fit(x).means_ \ncovs = gmm.fit(x).covariances_\nweights = gmm.fit(x).weights_\n\n# create necessary things to plot\nx_axis = np.arange(-20, 30, 0.1)\ny_axis0 = norm.pdf(x_axis, float(mean[0][0]), np.sqrt(float(covs[0][0][0])))*weights[0] # 1st gaussian\ny_axis1 = norm.pdf(x_axis, float(mean[1][0]), np.sqrt(float(covs[1][0][0])))*weights[1] # 2nd gaussian\n\nax = fig.add_subplot(1,2,2)\n# Plot 2\nplt.hist(x, density=True, color='black', bins=np.arange(-100, 100, 1))\nplt.plot(x_axis, y_axis0, lw=3, c='C0')\nplt.plot(x_axis, y_axis1, lw=3, c='C1')\nplt.plot(x_axis, y_axis0+y_axis1, lw=3, c='C2', ls='dashed')\nplt.xlim(-10, 20)\n#plt.ylim(0.0, 2.0)\nplt.xlabel(r\"X\", fontsize=20)\nplt.ylabel(r\"Density\", fontsize=20)\n\nplt.subplots_adjust(wspace=0.3)\nplt.show()\nplt.close('all')\n\n\n", "It's all about reshape.\nFirst, you need to reshape f.\nFor pdf, reshape before using stats.norm.pdf. 
Similarly, sort and reshape before plotting.\nfrom matplotlib import rc\nfrom sklearn import mixture\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport matplotlib\nimport matplotlib.ticker as tkr\nimport scipy.stats as stats\n\n# x = open(\"prueba.dat\").read().splitlines()\n\n# create the data\nx = np.concatenate((np.random.normal(5, 5, 1000),np.random.normal(10, 2, 1000)))\n\nf = np.ravel(x).astype(np.float)\nf=f.reshape(-1,1)\ng = mixture.GaussianMixture(n_components=3,covariance_type='full')\ng.fit(f)\nweights = g.weights_\nmeans = g.means_\ncovars = g.covariances_\n\nplt.hist(f, bins=100, histtype='bar', density=True, ec='red', alpha=0.5)\n\nf_axis = f.copy().ravel()\nf_axis.sort()\nplt.plot(f_axis,weights[0]*stats.norm.pdf(f_axis,means[0],np.sqrt(covars[0])).ravel(), c='red')\n\nplt.rcParams['agg.path.chunksize'] = 10000\n\nplt.grid()\nplt.show()\n\n\n", "limba2, thanks very much for the answer. I couldn't run it as it is. There seems there is a prb with numpy library version. I fixed it by upgrading the following:\npip install -U threadpoolctl\n" ]
[ 4, 3, 0 ]
[]
[]
[ "gmm", "histogram", "matplotlib", "python", "scikit_learn" ]
stackoverflow_0055187037_gmm_histogram_matplotlib_python_scikit_learn.txt
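One detail the snippets above leave implicit: the question fits three components but only the first one is drawn. The hedged sketch below (using synthetic data, since the original prueba.dat is not available) evaluates every fitted component on a smooth grid and overlays all of them on the histogram.

import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
from sklearn import mixture

# Synthetic stand-in for the data file.
f = np.concatenate((np.random.normal(5, 5, 1000),
                    np.random.normal(10, 2, 1000))).reshape(-1, 1)

g = mixture.GaussianMixture(n_components=3, covariance_type='full').fit(f)

grid = np.linspace(f.min(), f.max(), 500)
plt.hist(f, bins=100, density=True, alpha=0.5)
for w, m, c in zip(g.weights_, g.means_.ravel(), g.covariances_.ravel()):
    # Each component is its weight times a normal pdf on the grid.
    plt.plot(grid, w * stats.norm.pdf(grid, m, np.sqrt(c)), lw=2)
plt.show()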
Q: Django check user.is_authenticated before UserCreationForm to prevent a user to signup twice In Django, I am trying to prevent an already existing user to register (sign up) again. In my case, the user can sign up with a form. My approach is to check in views.py if the user already exists by checking is_authenticated upfront. If the user does not exist, then the form entries will be processed and the user will be created. The problem: if the user already exists, I would expect the condition request.user.is_authenticated to be True and the browser to be redirected to home. Instead, the evaluation goes on to process the form throwing (of course) the following error: Exception Value: duplicate key value violates unique constraint "auth_user_username_key" DETAIL: Key (username)=(john.doe) already exists. This is a sample of my views.py: def register_user(request): if request.method == "POST": if request.user.is_authenticated: messages.error(request, ('User already exists.')) return redirect('home') form = UserCreationForm(request.POST) if form.is_valid(): form.save() ... # do more stuff What am I missing? A: Have you tried request.user.is_anonymous? A: If the user is already logged in it will raise is_authenticated as True and False it there's no user logged in. I guess you're trying to register/sign up with no active session, then the first inner if request.user.is_authenticated is evaluated False and not used, so it goes to the second inner if and then the database error is raised because you tried to use the same username.
Django check user.is_authenticated before UserCreationForm to prevent a user to signup twice
In Django, I am trying to prevent an already existing user to register (sign up) again. In my case, the user can sign up with a form. My approach is to check in views.py if the user already exists by checking is_authenticated upfront. If the user does not exist, then the form entries will be processed and the user will be created. The problem: if the user already exists, I would expect the condition request.user.is_authenticated to be True and the browser to be redirected to home. Instead, the evaluation goes on to process the form throwing (of course) the following error: Exception Value: duplicate key value violates unique constraint "auth_user_username_key" DETAIL: Key (username)=(john.doe) already exists. This is a sample of my views.py: def register_user(request): if request.method == "POST": if request.user.is_authenticated: messages.error(request, ('User already exists.')) return redirect('home') form = UserCreationForm(request.POST) if form.is_valid(): form.save() ... # do more stuff What am I missing?
[ "Have you tried request.user.is_anonymous?\n", "If the user is already logged in it will raise is_authenticated as True and False it there's no user logged in.\nI guess you're trying to register/sign up with no active session, then the first inner if request.user.is_authenticated is evaluated False and not used, so it goes to the second inner if and then the database error is raised because you tried to use the same username.\n" ]
[ 0, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074632086_django_python.txt
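Building on the answers above: request.user.is_authenticated only says whether the current visitor is logged in; it says nothing about whether the submitted username already exists. A hedged sketch of the view is shown below. It redirects logged-in visitors before touching the form and otherwise lets UserCreationForm's own validation reject duplicate usernames instead of hitting the database constraint. The 'home' URL name and template name are placeholders.

from django.contrib import messages
from django.contrib.auth.forms import UserCreationForm
from django.shortcuts import redirect, render

def register_user(request):
    if request.user.is_authenticated:
        messages.error(request, "You are already logged in.")
        return redirect("home")

    if request.method == "POST":
        form = UserCreationForm(request.POST)
        if form.is_valid():      # a duplicate username fails validation here
            form.save()
            return redirect("home")
    else:
        form = UserCreationForm()

    return render(request, "register.html", {"form": form})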
Q: Confluent Kafka poll, when does a message get committed I have a Python application that has autocommit=True and it is using poll() to get messages with a interval of 1 second. I was reading on the documentation and it mentions that polling reads message in a background thread and queues them so that the main thread can take them afterwards. I was a bit confused there on what happens if I have multiple messages queued and my consumer crashes. Would those messages queued from the background thread have been committed already and hence get lost? A: As mentioned in the docs, every auto.commit.interval.ms, any polled offsets will get committed. If you are concerned about missing data, you should always disable auto-commits, in any Kafka client, and handle commits on your own after you know you've actually processed those records.
Confluent Kafka poll, when does a message get committed
I have a Python application that has autocommit=True and it is using poll() to get messages with a interval of 1 second. I was reading on the documentation and it mentions that polling reads message in a background thread and queues them so that the main thread can take them afterwards. I was a bit confused there on what happens if I have multiple messages queued and my consumer crashes. Would those messages queued from the background thread have been committed already and hence get lost?
[ "As mentioned in the docs, every auto.commit.interval.ms, any polled offsets will get committed.\nIf you are concerned about missing data, you should always disable auto-commits, in any Kafka client, and handle commits on your own after you know you've actually processed those records.\n" ]
[ 1 ]
[]
[]
[ "apache_kafka", "confluent_kafka_python", "librdkafka", "python" ]
stackoverflow_0074629190_apache_kafka_confluent_kafka_python_librdkafka_python.txt
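A hedged sketch of the manual-commit pattern recommended above, using confluent_kafka. The broker address, topic, group id and process() function are placeholders; the point is that with enable.auto.commit set to False, an offset is only committed after the record has actually been processed.

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "my-consumer-group",
    "enable.auto.commit": False,       # nothing is committed automatically
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["my-topic"])

def process(payload):
    pass  # placeholder for the real processing logic

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        process(msg.value())
        # Commit only after processing succeeded, synchronously for safety.
        consumer.commit(message=msg, asynchronous=False)
finally:
    consumer.close()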
Q: How can I round a string made of numbers efficiently using Python? Using Python 3... I've written code that rounds the values for ArcGIS symbology labels. The label is given as a string like "0.3324 - 0.6631". My reproducible code is... label = "0.3324 - 0.6631" label_list = [] label_split = label.split(" - ") for num in label_split: num = round(float(num), 2) # rounded to 2 decimals num = str(num) label_list.append(num) label = label_list[0]+" - "+label_list[1] This code works but does anyone have any recommendations/better approaches for rounding numbers inside of strings? A: This solution doesn't try to operate on a sequence but on the 2 values. A bit more readable to me. x, _, y = label.partition(" - ") label = f"{float(x):.2f} - {float(y):.2f}" A: You could use a regular expression to search for a number with more than two decimal places, and round it. The regex \b\d+\.\d+\b will find all numbers with decimal points that are surrounded by word boundaries in your string. It is surrounded in word boundaries to prevent it from picking up numbers attached to words, such as ABC0.1234. Explanation of regex (Try online): \b\d+\.\d+\b \b \b : Word boundary \d+ \d+ : One or more digits \. : Decimal point The re.sub function allows you to specify a function that will take a match object as the input, and return the required replacement. Let's define such a function that will parse the number to a float, and then format it to two decimal places using the f-string syntax (You can format it any way you like, I like this way) def round_to_2(match): num = float(match.group(0)) return f"{num:.2f}" To use this function, we simply specify the function as the repl argument for re.sub label = "0.3324 - 0.6631 ABC0.1234 0.12 1.234 1.23 123.4567 1.2" label_rep = re.sub(r"\b\d+\.\d+\b", round_to_2, label) This gives label_rep as: '0.33 - 0.66 ABC0.1234 0.12 1.23 1.23 123.46 1.20' The advantage of this is that you didn't need to hardcode any separator or split character. All numbers in your string are found and formatted. Note that this will add extra zeros to the number if required. A: a slightly different approach: label = "0.3324 - 0.6631" '{:.2f}-{:.2f}'.format(*map(float,label.split('-'))) >>> '0.33-0.66'
How can I round a string made of numbers efficiently using Python?
Using Python 3... I've written code that rounds the values for ArcGIS symbology labels. The label is given as a string like "0.3324 - 0.6631". My reproducible code is... label = "0.3324 - 0.6631" label_list = [] label_split = label.split(" - ") for num in label_split: num = round(float(num), 2) # rounded to 2 decimals num = str(num) label_list.append(num) label = label_list[0]+" - "+label_list[1] This code works but does anyone have any recommendations/better approaches for rounding numbers inside of strings?
[ "This solution doesn't try to operate on a sequence but on the 2 values.\nA bit more readable to me.\nx, _, y = label.partition(\" - \")\nlabel = f\"{float(x):.2f} - {float(y):.2f}\"\n\n", "You could use a regular expression to search for a number with more than two decimal places, and round it.\nThe regex \\b\\d+\\.\\d+\\b will find all numbers with decimal points that are surrounded by word boundaries in your string. It is surrounded in word boundaries to prevent it from picking up numbers attached to words, such as ABC0.1234.\nExplanation of regex (Try online):\n\\b\\d+\\.\\d+\\b\n\n\\b \\b : Word boundary\n \\d+ \\d+ : One or more digits\n \\. : Decimal point\n\nThe re.sub function allows you to specify a function that will take a match object as the input, and return the required replacement. Let's define such a function that will parse the number to a float, and then format it to two decimal places using the f-string syntax (You can format it any way you like, I like this way)\ndef round_to_2(match):\n num = float(match.group(0))\n return f\"{num:.2f}\"\n\nTo use this function, we simply specify the function as the repl argument for re.sub\nlabel = \"0.3324 - 0.6631 ABC0.1234 0.12 1.234 1.23 123.4567 1.2\"\nlabel_rep = re.sub(r\"\\b\\d+\\.\\d+\\b\", round_to_2, label)\n\nThis gives label_rep as:\n'0.33 - 0.66 ABC0.1234 0.12 1.23 1.23 123.46 1.20'\n\nThe advantage of this is that you didn't need to hardcode any separator or split character. All numbers in your string are found and formatted. Note that this will add extra zeros to the number if required.\n", "a slightly different approach:\nlabel = \"0.3324 - 0.6631\"\n'{:.2f}-{:.2f}'.format(*map(float,label.split('-')))\n\n>>>\n'0.33-0.66'\n\n" ]
[ 2, 1, 1 ]
[]
[]
[ "floating_point", "python", "python_3.x", "rounding", "string" ]
stackoverflow_0074630596_floating_point_python_python_3.x_rounding_string.txt
Q: Creating a nested dictionaries via for-loop I'm having trouble creating a dictionary with multiple keys and values inside an other dictionary by using a for-loop. I have a program that reads another text file, and then inputs it's information to the dictionaries. The file looks something like this: GPU;GeForce GTX 1070 Ti;430 CPU;AMD Ryzen 7 2700X;233 GPU;GeForce GTX 2060;400 CPU;Intel Core i7-11700;360 RAM;HyperX 16GB;180 PSU;Corsair RM850X;210 What I'm trying to achieve is that I'm trying to create a dictionary for each component type {GPU, CPU, RAM, PSU, etc.} and to those I'm trying to input another dictionary, which consist from multiple keys and values which are {name1 : price1, name2 : price2, etc.} After running the program, the complete dictionary should look like this: "GPU": {"GeForce GTX 1070 Ti": 430, "GeForce GTX 2060 2": 233}, "CPU": {"AMD Ryzen 7 2700X": 233, "Intel Core i7-11700 : 360}, "RAM": {"HyperX 16GB": 180}, "PSU": {"Corsair RM850X": 210} But instead, it looks like this: "GPU": {"GeForce GTX 2060 2": 233}, "CPU": {"Intel Core i7-11700 : 360}, "RAM": {"HyperX 16GB": 180}, "PSU": {"Corsair RM850X": 210} Here is the problem: I can't create the dictionary properly, because new inner keys and values override each other. How can I make this loop not to do so, but instead just add the new values in the inner dict after each other? Here's my code: def main(): filename = input("Enter the component file name: ") file = open(filename, mode="r") # Defining the outer dict. This dict's keys are the component types and # it's values are inner dictionaries. outer_dict = {} for row in file: row = row.strip() parts = row.split(";") # Defining variables for each part per line. type = parts[0] name = parts[1] price = int(parts[2]) # Defining the inner dict. This dict's keys are the component's name # and it's price. There can be multiple names and prices in this dict. inner_dict = {} # Adding each name and price to the inner dictionaries. for i in range(1, len(parts)): inner_dict[name] = price # Adding the created inner dict into the outer dictionary. outer_dict[type] = inner_dict file.close() if __name__ == "__main__": main() Thank you all for your help in advance. It really is needed! A: You can simply achieve the expected behavior using collections.defaultdict and a simple loop. NB. I am emulating a file with a split text here f = '''GPU;GeForce GTX 1070 Ti;430 CPU;AMD Ryzen 7 2700X;233 GPU;GeForce GTX 2060;400 CPU;Intel Core i7-11700;360 RAM;HyperX 16GB;180 PSU;Corsair RM850X;210''' from collections import defaultdict out = defaultdict(dict) for line in f.split('\n'): typ,name,price = line.split(';') out[typ][name] = price dict(out) output: >>> dict(out) {'GPU': {'GeForce GTX 1070 Ti': '430', 'GeForce GTX 2060': '400'}, 'CPU': {'AMD Ryzen 7 2700X': '233', 'Intel Core i7-11700': '360'}, 'RAM': {'HyperX 16GB': '180'}, 'PSU': {'Corsair RM850X': '210'}} with a file: with open('file.txt') as f: for line in f: # rest of the loop from above A: It's this part. You're replacing the value of your dictionary key. outer_dict[type] = inner_dict Instead change it to add a new sub-key like this type = parts[0] name = parts[1] price = int(parts[2]) outer_dict[type][name] = price # add `name` to the `type` dict as a new key A: For outer_dict you can use defaultdict from collections, making {} the default value. That way you can simply do outer_dict[type][name] = price for each line A: Thank you all for your help. 
I forgot to mention, that I'm not allowed to use builtins, but your advice was still informational and maybe useful for someone else struggling with a samillair problem. I got the code to work in a wanted way by using the update-method, advised by: TheFlyingObject Here's the fixed inner for-loop, which works like it's supposed to: inner_dict = {} # Adding each name and price to the inner dictionaries. for i in range(1, len(parts)): inner_dict[name] = price # Adding the created inner dict into the outer dictionary if it's # not already there. if type not in outer_dict: outer_dict[type] = inner_dict # If there is already an outer dict existing, updating the inner dict. else: outer_dict[type].update(inner_dict) A: You can solve this by making a shallow copy of your inner dictionary before assigning it to the outer one: copy_dict = inner_dict.copy() outer_dict[type] = copy_dict This should fix the overwriting of the inner dictionary for all key values.
Creating a nested dictionaries via for-loop
I'm having trouble creating a dictionary with multiple keys and values inside an other dictionary by using a for-loop. I have a program that reads another text file, and then inputs it's information to the dictionaries. The file looks something like this: GPU;GeForce GTX 1070 Ti;430 CPU;AMD Ryzen 7 2700X;233 GPU;GeForce GTX 2060;400 CPU;Intel Core i7-11700;360 RAM;HyperX 16GB;180 PSU;Corsair RM850X;210 What I'm trying to achieve is that I'm trying to create a dictionary for each component type {GPU, CPU, RAM, PSU, etc.} and to those I'm trying to input another dictionary, which consist from multiple keys and values which are {name1 : price1, name2 : price2, etc.} After running the program, the complete dictionary should look like this: "GPU": {"GeForce GTX 1070 Ti": 430, "GeForce GTX 2060 2": 233}, "CPU": {"AMD Ryzen 7 2700X": 233, "Intel Core i7-11700 : 360}, "RAM": {"HyperX 16GB": 180}, "PSU": {"Corsair RM850X": 210} But instead, it looks like this: "GPU": {"GeForce GTX 2060 2": 233}, "CPU": {"Intel Core i7-11700 : 360}, "RAM": {"HyperX 16GB": 180}, "PSU": {"Corsair RM850X": 210} Here is the problem: I can't create the dictionary properly, because new inner keys and values override each other. How can I make this loop not to do so, but instead just add the new values in the inner dict after each other? Here's my code: def main(): filename = input("Enter the component file name: ") file = open(filename, mode="r") # Defining the outer dict. This dict's keys are the component types and # it's values are inner dictionaries. outer_dict = {} for row in file: row = row.strip() parts = row.split(";") # Defining variables for each part per line. type = parts[0] name = parts[1] price = int(parts[2]) # Defining the inner dict. This dict's keys are the component's name # and it's price. There can be multiple names and prices in this dict. inner_dict = {} # Adding each name and price to the inner dictionaries. for i in range(1, len(parts)): inner_dict[name] = price # Adding the created inner dict into the outer dictionary. outer_dict[type] = inner_dict file.close() if __name__ == "__main__": main() Thank you all for your help in advance. It really is needed!
[ "You can simply achieve the expected behavior using collections.defaultdict and a simple loop.\nNB. I am emulating a file with a split text here\nf = '''GPU;GeForce GTX 1070 Ti;430\nCPU;AMD Ryzen 7 2700X;233\nGPU;GeForce GTX 2060;400\nCPU;Intel Core i7-11700;360\nRAM;HyperX 16GB;180\nPSU;Corsair RM850X;210'''\n\nfrom collections import defaultdict\n\nout = defaultdict(dict)\n\nfor line in f.split('\\n'):\n typ,name,price = line.split(';')\n out[typ][name] = price\n\ndict(out)\n\noutput:\n>>> dict(out)\n{'GPU': {'GeForce GTX 1070 Ti': '430', 'GeForce GTX 2060': '400'},\n 'CPU': {'AMD Ryzen 7 2700X': '233', 'Intel Core i7-11700': '360'},\n 'RAM': {'HyperX 16GB': '180'},\n 'PSU': {'Corsair RM850X': '210'}}\n\nwith a file:\nwith open('file.txt') as f:\n for line in f:\n # rest of the loop from above\n\n", "It's this part. You're replacing the value of your dictionary key.\nouter_dict[type] = inner_dict\n\nInstead change it to add a new sub-key like this\ntype = parts[0]\nname = parts[1]\nprice = int(parts[2])\nouter_dict[type][name] = price # add `name` to the `type` dict as a new key\n\n", "For outer_dict you can use defaultdict from collections, making {} the default value. That way you can simply do outer_dict[type][name] = price for each line\n", "Thank you all for your help. I forgot to mention, that I'm not allowed to use builtins, but your advice was still informational and maybe useful for someone else struggling with a samillair problem.\nI got the code to work in a wanted way by using the update-method, advised by: TheFlyingObject\nHere's the fixed inner for-loop, which works like it's supposed to:\n inner_dict = {}\n\n # Adding each name and price to the inner dictionaries.\n for i in range(1, len(parts)):\n inner_dict[name] = price\n\n # Adding the created inner dict into the outer dictionary if it's\n # not already there.\n if type not in outer_dict:\n outer_dict[type] = inner_dict\n\n # If there is already an outer dict existing, updating the inner dict.\n else:\n outer_dict[type].update(inner_dict)\n\n", "You can solve this by making a shallow copy of your inner dictionary before assigning it to the outer one:\ncopy_dict = inner_dict.copy()\nouter_dict[type] = copy_dict\n\nThis should fix the overwriting of the inner dictionary for all key values.\n" ]
[ 4, 0, 0, 0, 0 ]
[]
[]
[ "data_structures", "dictionary", "filereader", "nested", "python" ]
stackoverflow_0070112370_data_structures_dictionary_filereader_nested_python.txt
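A minimal runnable sketch of the same grouping idea for the entry above, using only the plain dict method setdefault (no imports); whether setdefault counts as an allowed construct under the course constraints is an assumption, and "components.txt" is a hypothetical stand-in for the input file:

outer_dict = {}
with open("components.txt") as file:
    for row in file:
        ctype, name, price = row.strip().split(";")
        # setdefault returns the existing inner dict for this component type,
        # or inserts and returns a new empty dict on its first occurrence
        outer_dict.setdefault(ctype, {})[name] = int(price)

print(outer_dict)
# e.g. {'GPU': {'GeForce GTX 1070 Ti': 430, 'GeForce GTX 2060': 400}, ...}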
Q: ERROR: No results. Previous SQL was not a query cnxn = pyodbc.connect(driver="{ODBC Driver 17 for SQL Server}", server="xxx", database="yy", user="abc", password="abc") cursor = cnxn.cursor() b = alter table temp1 add column3 varchar(10) cursor.execute(b) cursor.fetchall() from the above code I'm trying to alter the table and add the column as i contain 2 tables 1 is existing table and the other is new table the column from the new table as to be added to the exixting table so i have done the code but i got the error of ERROR: No results. Previous SQL was not a query. so please help me out to clear this error. A: change the line of code to as below b = 'alter table temp1 add column3 varchar(10);' this assigns the entire SQL command as string to the variable b, which then you can use in the call to execute() function of the cursor. Also, the ALTER TABLE SQL statement will not return any results set, so you need not call fetchXX() method after executing it. A: For those that are wondering autocommit mentioned above is a pyodbc connect parameter. conn = pyodbc.connect(sql_server_cnxn_string, autocommit=True)
ERROR: No results. Previous SQL was not a query
cnxn = pyodbc.connect(driver="{ODBC Driver 17 for SQL Server}", server="xxx", database="yy", user="abc", password="abc") cursor = cnxn.cursor() b = alter table temp1 add column3 varchar(10) cursor.execute(b) cursor.fetchall() from the above code I'm trying to alter the table and add the column as i contain 2 tables 1 is existing table and the other is new table the column from the new table as to be added to the exixting table so i have done the code but i got the error of ERROR: No results. Previous SQL was not a query. so please help me out to clear this error.
[ "change the line of code to as below\nb = 'alter table temp1 add column3 varchar(10);'\nthis assigns the entire SQL command as string to the variable b, which then you can use in the call to execute() function of the cursor.\nAlso, the ALTER TABLE SQL statement will not return any results set, so you need not call fetchXX() method after executing it.\n", "For those that are wondering autocommit mentioned above is a pyodbc connect parameter.\nconn = pyodbc.connect(sql_server_cnxn_string, autocommit=True)\n" ]
[ 0, 0 ]
[]
[]
[ "python", "sql", "sql_server" ]
stackoverflow_0070363825_python_sql_sql_server.txt
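For the pyodbc entry above, a hedged sketch of executing the ALTER TABLE statement: the connection details are placeholders, autocommit=True lets the DDL take effect without an explicit commit, and fetchall() is never called because DDL returns no result set:

import pyodbc

cnxn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=xxx;DATABASE=yy;UID=abc;PWD=abc",
    autocommit=True,
)
cursor = cnxn.cursor()
# pass the statement as a string; do not fetch results afterwards
cursor.execute("ALTER TABLE temp1 ADD column3 VARCHAR(10)")
cursor.close()
cnxn.close()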
Q: python runs older version after installing updated version on Mac I am currently running python 3.6 on my Mac, and installed the latest version of Python (3.11) by downloading and installing through the official python releases. Running python3.11 opens the interpreter in 3.11, and python3.11 --version returns Python 3.11.0, but python -V in terminal returns Python 3.6.1 :: Continuum Analytics, Inc.. I tried to install again via homebrew using brew install [email protected] but got the same results. More frustrating, when I try to open a virtual environment using python3 -m venv env I get Error: Command '['/Users/User/env/bin/python3', '-Im', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 1. I altered .bash_profile with # Setting PATH for Python 3.11 # The original version is saved in .bash_profile.pysave PATH="/Library/Frameworks/Python.framework/Versions/3.11/bin:${PATH}" export PATH . "$HOME/.cargo/env" And created a .zprofile based on this post with export PYTHONPATH=$HOME/Users/User and a .zshrc based on this post, but --version still throws python3.6. I'm running Big Sur OS. Pip and homebrew are up to date and upgraded. Acknowledging that I'm totally foolish, what do I need to do to get python >3.7 running in terminal? A: By default MacOS ships with Python-2.-. But, I guess most of us have long back started to work with Python-3 and it is very irritating to run python3 every time instead of python in terminal. Here is how to do this. Open the terminal (bash or zsh) whatever shell you are using. Install python-3 using Homebrew (https://brew.sh). brew install python Look where it is installed. ls -l /usr/local/bin/python* The output is something like this: lrwxr-xr-x 1 irfan admin 34 Nov 11 16:32 /usr/local/bin/python3 -> ../Cellar/python/3.11/bin/python3 lrwxr-xr-x 1 irfan admin 41 Nov 11 16:32 /usr/local/bin/python3-config -> ../Cellar/python/3.11/bin/python3-config lrwxr-xr-x 1 irfan admin 36 Nov 11 16:32 /usr/local/bin/python3.11 -> ../Cellar/python/3.11/bin/python3.11 lrwxr-xr-x 1 irfan admin 43 Nov 11 16:32 /usr/local/bin/python3.11-config -> ../Cellar/python/3.11/bin/python3.11-config lrwxr-xr-x 1 irfan admin 37 Nov 11 16:32 /usr/local/bin/python3.11m -> ../Cellar/python/3.11/bin/python3.11m lrwxr-xr-x 1 irfan admin 44 Nov 11 16:32 /usr/local/bin/python3.11m-config -> ../Cellar/python/3.11/bin/python3.11m-config Change the default python symlink to the version you want to use from above. Note that, we only need to choose the one that end with python3.*. Please avoid using the ones that end with config or python3.*m or python3.*m-config. Below command shows how it should be done: ln -s -f /usr/local/bin/python3.11 /usr/local/bin/python Close the current terminal session or keep it that way and instead open a new terminal window (not tab). Run this: python --version You will get this: Python 3.11 A: What you want to do is overwrite a python symlink. After installing python via homebrew, you can see that python3.11 is just symlink. cd /usr/local/bin; ll | grep python3.11 The result is: lrwxr-xr-x 1 user admin 43 Nov 7 15:43 python3.11@ -> ../Cellar/[email protected]/3.11.0/bin/python3.11 So let's just overwrite it. ln -s -f $(which python3.11) $(which python) ln -s -f $(which python3.11) $(which python3) ln -s -f $(which pip3.11) $(which pip) ln -s -f $(which pip3.11) $(which pip3) After these commands, pip, pip3, python3, python will invoke the version 3.11. This command makes soft symlink. 
ln -s This command with the -f option overwrites an existing soft symlink. A soft symlink is similar to a shortcut. In the man page, the which command is described as: which - shows the full path of (shell) commands.
python runs older version after installing updated version on Mac
I am currently running python 3.6 on my Mac, and installed the latest version of Python (3.11) by downloading and installing through the official python releases. Running python3.11 opens the interpreter in 3.11, and python3.11 --version returns Python 3.11.0, but python -V in terminal returns Python 3.6.1 :: Continuum Analytics, Inc.. I tried to install again via homebrew using brew install [email protected] but got the same results. More frustrating, when I try to open a virtual environment using python3 -m venv env I get Error: Command '['/Users/User/env/bin/python3', '-Im', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 1. I altered .bash_profile with # Setting PATH for Python 3.11 # The original version is saved in .bash_profile.pysave PATH="/Library/Frameworks/Python.framework/Versions/3.11/bin:${PATH}" export PATH . "$HOME/.cargo/env" And created a .zprofile based on this post with export PYTHONPATH=$HOME/Users/User and a .zshrc based on this post, but --version still throws python3.6. I'm running Big Sur OS. Pip and homebrew are up to date and upgraded. Acknowledging that I'm totally foolish, what do I need to do to get python >3.7 running in terminal?
[ "By default MacOS ships with Python-2.-. But, I guess most of us have long back started to work with Python-3 and it is very irritating to run python3 every time instead of python in terminal. Here is how to do this.\nOpen the terminal (bash or zsh) whatever shell you are using.\nInstall python-3 using Homebrew (https://brew.sh).\nbrew install python\n\nLook where it is installed.\nls -l /usr/local/bin/python*\n\nThe output is something like this:\nlrwxr-xr-x 1 irfan admin 34 Nov 11 16:32 /usr/local/bin/python3 -> ../Cellar/python/3.11/bin/python3\nlrwxr-xr-x 1 irfan admin 41 Nov 11 16:32 /usr/local/bin/python3-config -> ../Cellar/python/3.11/bin/python3-config\nlrwxr-xr-x 1 irfan admin 36 Nov 11 16:32 /usr/local/bin/python3.11 -> ../Cellar/python/3.11/bin/python3.11\nlrwxr-xr-x 1 irfan admin 43 Nov 11 16:32 /usr/local/bin/python3.11-config -> ../Cellar/python/3.11/bin/python3.11-config\nlrwxr-xr-x 1 irfan admin 37 Nov 11 16:32 /usr/local/bin/python3.11m -> ../Cellar/python/3.11/bin/python3.11m\nlrwxr-xr-x 1 irfan admin 44 Nov 11 16:32 /usr/local/bin/python3.11m-config -> ../Cellar/python/3.11/bin/python3.11m-config\n\nChange the default python symlink to the version you want to use from above.\nNote that, we only need to choose the one that end with python3.*. Please avoid using the ones that end with config or python3.*m or python3.*m-config.\nBelow command shows how it should be done:\nln -s -f /usr/local/bin/python3.11 /usr/local/bin/python\n\nClose the current terminal session or keep it that way and instead open a new terminal window (not tab). Run this:\npython --version\n\nYou will get this:\nPython 3.11\n\n", "What you want to do is overwrite a python symlink.\nAfter installing python via homebrew, you can see that python3.11 is just symlink.\ncd /usr/local/bin; ll | grep python3.11\n\nThe result is:\nlrwxr-xr-x 1 user admin 43 Nov 7 15:43 python3.11@ -> ../Cellar/[email protected]/3.11.0/bin/python3.11\n\nSo let's just overwrite it.\nln -s -f $(which python3.11) $(which python)\nln -s -f $(which python3.11) $(which python3)\nln -s -f $(which pip3.11) $(which pip)\nln -s -f $(which pip3.11) $(which pip3)\n\nAfter these commands, pip, pip3, python3, python will invoke the version 3.11.\nThis command makes soft symlink.\nln -s\n\nThis command with -f option overwrite an existing soft symlink.\nA soft symlink is similar to a shortcut.\nIn man page,which command is described as\n\nwhich - shows the full path of (shell) commands.\n\n" ]
[ 1, 1 ]
[]
[]
[ "homebrew", "pip", "python", "python_venv", "terminal" ]
stackoverflow_0074631751_homebrew_pip_python_python_venv_terminal.txt
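The entry above is mostly shell work, but a quick way to confirm the relinking took effect is to run a two-line check with whatever the bare python command now resolves to (purely a verification aid, not part of the fix):

import sys
print(sys.executable)  # full path of the interpreter actually being run
print(sys.version)     # should report 3.11.x after the symlink change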
Q: Plots not rendering properly on an implicit plot (python) I am trying to plot two curves and then mark their intersection with a vertical line from the x-axis to the point of intersection. Sometimes the line will generate other times it will not unless I show the plot and then append it as well. To see this try option 1 with b = 24, c = 159 then try option 4 where a = 15, c = -19. Below is the code, any help would be great, thanks! import math import numpy as np import matplotlib.pyplot as plt import sympy as sym import sys from sympy import * from sympy.plotting import plot, plot_implicit, plot_parametric from sympy import sympify def find_intersection(F,G): result = sym.solve([F,G],(x,y)) return result def check_roots(roots): i=0 complexsol = 0 negativesol = 0 while i < 3: rx, ry = roots[i] if sympify(rx).is_real == True: if rx > 0: return rx, ry break else: print('Root ', i+1 , ' was discarded for being negative') negativesol = negativesol + 1 else: print('Root ', i+1, ' was discarded for being complex') complexsol = complexsol + 1 i+=1 if negativesol + complexsol == 3: sys.exit("There are no solutions by Khayyam's method") def plot_solution(F, G, xroot, yroot): newxroot = float(xroot) newyroot = float(yroot) x, y = symbols('x y') plot1 = plot_implicit(F, (x,0,2*newxroot), (y,0,2*newyroot), line_color='b',show=False) plot2 = plot_implicit(G, (x,0,2*newxroot), (y,0,2*newyroot), line_color='r',show=False) plot1.append(plot2[0]) plot3 = plot_implicit(Eq(x, newxroot),(x,0,newxroot), (y,0,newyroot), line_color='black', show=False) plot1.append(plot3[0]) plot1.show() print('Approximate x1 solution is: ', round(newxroot,1)) print('Exact x1 solution is: ', newxroot) print('Solving cubics using two conic sections in the sytle of Omar Khayyam') print('Cubic forms:') print('0: Cancel program') print('1: x^3 + bx = c') print('2: x^3 + ax^2 + bx + c = 0') print('3: x^3 + c = bx') print('4: x^3 + c = ax^2') print('5: x^3 + ax^2 = c') print('6: x^3 = bx + c') print('7: x^3 = ax^2 + c') userchoice = input('Please choose the form of your cubic or press 0 to cancel: ') if userchoice == '0': sys.exit('You have chosen to cancel the program') if userchoice == '1': # good choice is 15, -14 b = int(input('Enter your b value: ')) c = int(input('Enter your c value: ')) x, y = sym.symbols('x y') eq1 = sym.Eq(x**2/math.sqrt(b),y) eq2 = sym.Eq(b*x**2 + b*y**2, c*x) exact_sols = find_intersection(eq1,eq2) xsol, ysol = check_roots(exact_sols) plot_solution(eq1, eq2, xsol, ysol) if userchoice == '2': # good choice is -6,11,-6 a = int(input('Enter your a value: ')) b = int(input('Enter your b value: ')) c = int(input('Enter your c value: ')) x, y = sym.symbols('x y') eq1 = sym.Eq(x**2,y) eq2 = sym.Eq((x + a)*(y + b), a*b-c) exact_sols = find_intersection(eq1,eq2) xsol, ysol = check_roots(exact_sols) plot_solution(eq1, eq2, xsol, ysol) if userchoice == '3': # good choice is 15, -14 b = int(input('Enter your b value: ')) c = int(input('Enter your c value: ')) x, y = sym.symbols('x y') eq1 = sym.Eq(x**2,y) eq2 = sym.Eq(x*y + c, b*x) exact_sols = find_intersection(eq1,eq2) xsol, ysol = check_roots(exact_sols) plot_solution(eq1, eq2, xsol, ysol) if userchoice == '4': # good choice is 15,-14 a = int(input('Enter your a value: ')) c = int(input('Enter your c value: ')) x, y = sym.symbols('x y') eq1 = sym.Eq(x**2,y) eq2 = sym.Eq(x*y + c, a*y) exact_sols = find_intersection(eq1,eq2) xsol, ysol = check_roots(exact_sols) plot_solution(eq1, eq2, xsol, ysol) if userchoice == '5': # good choice is 15, -14 a = int(input('Enter 
your a value: ')) c = int(input('Enter your c value: ')) x, y = sym.symbols('x y') eq1 = sym.Eq(x**2,y) eq2 = sym.Eq(x*y + a*y, c) exact_sols = find_intersection(eq1,eq2) xsol, ysol = check_roots(exact_sols) plot_solution(eq1, eq2, xsol, ysol) if userchoice == '6': # good choice is 15, -14 b = int(input('Enter your b value: ')) c = int(input('Enter your c value: ')) x, y = sym.symbols('x y') eq1 = sym.Eq(x**2,y) eq2 = sym.Eq(x*y, b*x + c) exact_sols = find_intersection(eq1,eq2) xsol, ysol = check_roots(exact_sols) plot_solution(eq1, eq2, xsol, ysol) if userchoice == '7': # good choice is 15, 19 a = int(input('Enter your a value: ')) c = int(input('Enter your c value: ')) x, y = sym.symbols('x y') eq1 = sym.Eq(x**2,y) eq2 = sym.Eq(x*y, a*y + c) exact_sols = find_intersection(eq1,eq2) xsol, ysol = check_roots(exact_sols) plot_solution(eq1, eq2, xsol, ysol) Sometimes it worked if i redered plot3 (the intersection line) before plot2 (one of the curves) but this was not always a successful solution. PS if it also finds that there are no positive roots it exits the programme and should display a message (line 32), which it does but it also displays an error message saying 'An exception has occurred, use %tb to see the full traceback.' if anyone knows how to fix this that would also be helpful! A: Replace the plot3 = plot_implic.... command with the following: plot3 = plot_implicit(Eq(x, newxroot),(x,0,2*newxroot), (y,0,2*newyroot), line_color='black', show=False, adaptive=False) Note that I used adaptive=False which should create a constant width line, and I also plotted the vertical line over the same range as the two previous expressions. Also, I suggest using an improved plotting module. With this module, you can simply write: from spb import plot_implicit def plot_solution(F, G, xroot, yroot): newxroot = float(xroot) newyroot = float(yroot) x, y = symbols('x y') plot_implicit(F, G, Eq(x, newxroot), (x,0,2*newxroot), (y,0,2*newyroot)) print('Approximate x1 solution is: ', round(newxroot,1)) print('Exact x1 solution is: ', newxroot) Note the constant-width lines and labels.
Plots not rendering properly on an implicit plot (python)
I am trying to plot two curves and then mark their intersection with a vertical line from the x-axis to the point of intersection. Sometimes the line will generate other times it will not unless I show the plot and then append it as well. To see this try option 1 with b = 24, c = 159 then try option 4 where a = 15, c = -19. Below is the code, any help would be great, thanks! import math import numpy as np import matplotlib.pyplot as plt import sympy as sym import sys from sympy import * from sympy.plotting import plot, plot_implicit, plot_parametric from sympy import sympify def find_intersection(F,G): result = sym.solve([F,G],(x,y)) return result def check_roots(roots): i=0 complexsol = 0 negativesol = 0 while i < 3: rx, ry = roots[i] if sympify(rx).is_real == True: if rx > 0: return rx, ry break else: print('Root ', i+1 , ' was discarded for being negative') negativesol = negativesol + 1 else: print('Root ', i+1, ' was discarded for being complex') complexsol = complexsol + 1 i+=1 if negativesol + complexsol == 3: sys.exit("There are no solutions by Khayyam's method") def plot_solution(F, G, xroot, yroot): newxroot = float(xroot) newyroot = float(yroot) x, y = symbols('x y') plot1 = plot_implicit(F, (x,0,2*newxroot), (y,0,2*newyroot), line_color='b',show=False) plot2 = plot_implicit(G, (x,0,2*newxroot), (y,0,2*newyroot), line_color='r',show=False) plot1.append(plot2[0]) plot3 = plot_implicit(Eq(x, newxroot),(x,0,newxroot), (y,0,newyroot), line_color='black', show=False) plot1.append(plot3[0]) plot1.show() print('Approximate x1 solution is: ', round(newxroot,1)) print('Exact x1 solution is: ', newxroot) print('Solving cubics using two conic sections in the sytle of Omar Khayyam') print('Cubic forms:') print('0: Cancel program') print('1: x^3 + bx = c') print('2: x^3 + ax^2 + bx + c = 0') print('3: x^3 + c = bx') print('4: x^3 + c = ax^2') print('5: x^3 + ax^2 = c') print('6: x^3 = bx + c') print('7: x^3 = ax^2 + c') userchoice = input('Please choose the form of your cubic or press 0 to cancel: ') if userchoice == '0': sys.exit('You have chosen to cancel the program') if userchoice == '1': # good choice is 15, -14 b = int(input('Enter your b value: ')) c = int(input('Enter your c value: ')) x, y = sym.symbols('x y') eq1 = sym.Eq(x**2/math.sqrt(b),y) eq2 = sym.Eq(b*x**2 + b*y**2, c*x) exact_sols = find_intersection(eq1,eq2) xsol, ysol = check_roots(exact_sols) plot_solution(eq1, eq2, xsol, ysol) if userchoice == '2': # good choice is -6,11,-6 a = int(input('Enter your a value: ')) b = int(input('Enter your b value: ')) c = int(input('Enter your c value: ')) x, y = sym.symbols('x y') eq1 = sym.Eq(x**2,y) eq2 = sym.Eq((x + a)*(y + b), a*b-c) exact_sols = find_intersection(eq1,eq2) xsol, ysol = check_roots(exact_sols) plot_solution(eq1, eq2, xsol, ysol) if userchoice == '3': # good choice is 15, -14 b = int(input('Enter your b value: ')) c = int(input('Enter your c value: ')) x, y = sym.symbols('x y') eq1 = sym.Eq(x**2,y) eq2 = sym.Eq(x*y + c, b*x) exact_sols = find_intersection(eq1,eq2) xsol, ysol = check_roots(exact_sols) plot_solution(eq1, eq2, xsol, ysol) if userchoice == '4': # good choice is 15,-14 a = int(input('Enter your a value: ')) c = int(input('Enter your c value: ')) x, y = sym.symbols('x y') eq1 = sym.Eq(x**2,y) eq2 = sym.Eq(x*y + c, a*y) exact_sols = find_intersection(eq1,eq2) xsol, ysol = check_roots(exact_sols) plot_solution(eq1, eq2, xsol, ysol) if userchoice == '5': # good choice is 15, -14 a = int(input('Enter your a value: ')) c = int(input('Enter your c value: ')) x, y = 
sym.symbols('x y') eq1 = sym.Eq(x**2,y) eq2 = sym.Eq(x*y + a*y, c) exact_sols = find_intersection(eq1,eq2) xsol, ysol = check_roots(exact_sols) plot_solution(eq1, eq2, xsol, ysol) if userchoice == '6': # good choice is 15, -14 b = int(input('Enter your b value: ')) c = int(input('Enter your c value: ')) x, y = sym.symbols('x y') eq1 = sym.Eq(x**2,y) eq2 = sym.Eq(x*y, b*x + c) exact_sols = find_intersection(eq1,eq2) xsol, ysol = check_roots(exact_sols) plot_solution(eq1, eq2, xsol, ysol) if userchoice == '7': # good choice is 15, 19 a = int(input('Enter your a value: ')) c = int(input('Enter your c value: ')) x, y = sym.symbols('x y') eq1 = sym.Eq(x**2,y) eq2 = sym.Eq(x*y, a*y + c) exact_sols = find_intersection(eq1,eq2) xsol, ysol = check_roots(exact_sols) plot_solution(eq1, eq2, xsol, ysol) Sometimes it worked if i redered plot3 (the intersection line) before plot2 (one of the curves) but this was not always a successful solution. PS if it also finds that there are no positive roots it exits the programme and should display a message (line 32), which it does but it also displays an error message saying 'An exception has occurred, use %tb to see the full traceback.' if anyone knows how to fix this that would also be helpful!
[ "Replace the plot3 = plot_implic.... command with the following:\nplot3 = plot_implicit(Eq(x, newxroot),(x,0,2*newxroot), (y,0,2*newyroot), line_color='black', show=False, adaptive=False)\n\nNote that I used adaptive=False which should create a constant width line, and I also plotted the vertical line over the same range as the two previous expressions.\n\nAlso, I suggest using an improved plotting module. With this module, you can simply write:\nfrom spb import plot_implicit\n\ndef plot_solution(F, G, xroot, yroot):\n newxroot = float(xroot)\n newyroot = float(yroot)\n x, y = symbols('x y')\n plot_implicit(F, G, Eq(x, newxroot), (x,0,2*newxroot), (y,0,2*newyroot))\n print('Approximate x1 solution is: ', round(newxroot,1))\n print('Exact x1 solution is: ', newxroot)\n\n\nNote the constant-width lines and labels.\n" ]
[ 0 ]
[]
[]
[ "math", "mathematical_expressions", "plot", "python", "sympy" ]
stackoverflow_0074630508_math_mathematical_expressions_plot_python_sympy.txt
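For the sympy entry above, a minimal sketch of the suggested fix: plot the vertical marker over the same ranges as the curves and with adaptive=False so it renders as a solid, constant-width line. The root values below are made-up stand-ins for whatever the solver returned, and the second equation is just an illustrative conic:

from sympy import symbols, Eq
from sympy.plotting import plot_implicit

x, y = symbols("x y")
newxroot, newyroot = 2.0, 4.0  # hypothetical root found earlier

p1 = plot_implicit(Eq(x**2, y), (x, 0, 2*newxroot), (y, 0, 2*newyroot),
                   line_color="b", show=False)
p2 = plot_implicit(Eq(x*y + 15, 19*x), (x, 0, 2*newxroot), (y, 0, 2*newyroot),
                   line_color="r", show=False)
p3 = plot_implicit(Eq(x, newxroot), (x, 0, 2*newxroot), (y, 0, 2*newyroot),
                   line_color="black", show=False, adaptive=False)
p1.append(p2[0])  # merge the second curve into the first plot
p1.append(p3[0])  # merge the constant-width marker line
p1.show()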
Q: pytesseract not keeping leading zeroes when using image_to_data() I'm using pytesseract to process the following image: When I use the image_to_string() function config = "--oem 3 -l eng --psm 7" pytesseract.image_to_string(potential_image, config = config) I get the correct "03" output. However, when I use the image_to_data() function predict = pytesseract.image_to_data(potential_image, config = config, output_type="data.frame") print(predict) predict = predict[predict["conf"] != -1] try: detected = " ".join([str(int(a)) if isinstance(a, float) else str(a) for a in predict["text"].tolist()]) confidence = predict["conf"].iloc[0] print("Converted detected:", detected) print("with confidence:", confidence) except: pass I get: level page_num block_num par_num line_num word_num left top width height conf text 4 5 1 1 1 1 1 4 4 25 16 95.180374 3.0 Converted detected: 3 with confidence: 95.180374 Where the leading 0 is not preserved, and the result is a float that I later have to convert to an int / string. Is there a way to preserve the text output so that it is the same as image_to_string()? A: Rather than using data.frame as the output type, use a regular Python dictionary: pytesseract.image_to_data(image, config = config, output_type = pytesseract.Output.DICT)
pytesseract not keeping leading zeroes when using image_to_data()
I'm using pytesseract to process the following image: When I use the image_to_string() function config = "--oem 3 -l eng --psm 7" pytesseract.image_to_string(potential_image, config = config) I get the correct "03" output. However, when I use the image_to_data() function predict = pytesseract.image_to_data(potential_image, config = config, output_type="data.frame") print(predict) predict = predict[predict["conf"] != -1] try: detected = " ".join([str(int(a)) if isinstance(a, float) else str(a) for a in predict["text"].tolist()]) confidence = predict["conf"].iloc[0] print("Converted detected:", detected) print("with confidence:", confidence) except: pass I get: level page_num block_num par_num line_num word_num left top width height conf text 4 5 1 1 1 1 1 4 4 25 16 95.180374 3.0 Converted detected: 3 with confidence: 95.180374 Where the leading 0 is not preserved, and the result is a float that I later have to convert to an int / string. Is there a way to preserve the text output so that it is the same as image_to_string()?
[ "Rather than using data.frame as the output type, use a regular Python dictionary:\npytesseract.image_to_data(image, config = config, output_type = pytesseract.Output.DICT)\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "python", "python_tesseract" ]
stackoverflow_0074291461_dataframe_python_python_tesseract.txt
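A short sketch of the dictionary-based fix for the pytesseract entry above; potential_image is assumed to be the already-loaded image from the question, and confidence values are compared as floats because different pytesseract versions return them as strings or numbers:

import pytesseract
from pytesseract import Output

config = "--oem 3 -l eng --psm 7"
data = pytesseract.image_to_data(potential_image, config=config,
                                 output_type=Output.DICT)

# text entries stay plain strings here, so a leading zero like "03" survives
words = [txt for txt, conf in zip(data["text"], data["conf"])
         if float(conf) != -1 and txt.strip()]
print(" ".join(words))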
Q: Python for each element in a list add the value of previous index and next index For each element in a list I want to add the value before and after the element and append the result to an empty list. The problem is that at index 0 there is no index before and at the end there is no index next. At index 0 I want to add the value of index 0 with value of index 1, and in the last index I want to add the value of the last index with the same index value. As following: vec = [1,2,3,4,5] newVec = [] for i in range(len(vec)): newValue = vec[i] + vec[i+1] + vec[i-1] # if i + 1 or i - 1 does now exist pass newVec.append(newValue) Expected output: newVec = [1+2, 2+1+3, 3+2+4,4+3+5,5+4] # newVec = [3, 6, 9, 12, 9] A: You can make the conditions inside the for loop for i in range(len(vec)): if i == 0 : newValue = vec[i] + vec[i+1] elif i == len(vec)-1: newValue = vec[i] + vec[i-1] else: newValue = vec[i] + vec[i+1] + vec[i-1] newVec.append(newValue) print(newVec) output: [3, 6, 9, 12, 9] A: You have possible exceptions here, I think this code will do the trick and manage the exceptions. vec = [1, 2, 3, 4, 5] new_vec = [] for index, number in enumerate(vec): new_value = number if index != 0: new_value += vec[index - 1] try: new_value += vec[index + 1] except IndexError: pass new_vec.append(new_value) Your output will look like this: [3, 6, 9, 12, 9] Good luck ! A: You can just add 0 to either side of vec so that it's adding nothing to create an accurate result. Then just use a for i in range(1, ...) loop, starting at value 1 to add value before and after i. This is what i got for my code: vec = [1,2,3,4,5] newVec = [] vec.insert(0, 0) vec.insert(len(vec) + 1, 0) for i in range(1, len(vec) - 1): newVec.append(vec[i-1] + vec[i] + vec[i+1]) print(newVec) Which creates the output of: [3, 6, 9, 12, 9] Hope this helps.
Python for each element in a list add the value of previous index and next index
For each element in a list I want to add the value before and after the element and append the result to an empty list. The problem is that at index 0 there is no index before and at the end there is no index next. At index 0 I want to add the value of index 0 with value of index 1, and in the last index I want to add the value of the last index with the same index value. As following: vec = [1,2,3,4,5] newVec = [] for i in range(len(vec)): newValue = vec[i] + vec[i+1] + vec[i-1] # if i + 1 or i - 1 does now exist pass newVec.append(newValue) Expected output: newVec = [1+2, 2+1+3, 3+2+4,4+3+5,5+4] # newVec = [3, 6, 9, 12, 9]
[ "You can make the conditions inside the for loop\nfor i in range(len(vec)):\n if i == 0 :\n newValue = vec[i] + vec[i+1]\n elif i == len(vec)-1:\n newValue = vec[i] + vec[i-1]\n else:\n newValue = vec[i] + vec[i+1] + vec[i-1]\n newVec.append(newValue)\n\nprint(newVec)\n\noutput:\n[3, 6, 9, 12, 9]\n\n", "You have possible exceptions here, I think this code will do the trick and manage the exceptions.\nvec = [1, 2, 3, 4, 5]\nnew_vec = []\nfor index, number in enumerate(vec):\n new_value = number\n if index != 0:\n new_value += vec[index - 1]\n try:\n new_value += vec[index + 1]\n except IndexError:\n pass\n new_vec.append(new_value)\n\nYour output will look like this:\n[3, 6, 9, 12, 9]\n\nGood luck !\n", "You can just add 0 to either side of vec so that it's adding nothing to create an accurate result. Then just use a for i in range(1, ...) loop, starting at value 1 to add value before and after i. This is what i got for my code:\nvec = [1,2,3,4,5]\nnewVec = []\nvec.insert(0, 0)\nvec.insert(len(vec) + 1, 0)\nfor i in range(1, len(vec) - 1):\n newVec.append(vec[i-1] + vec[i] + vec[i+1])\nprint(newVec)\n\nWhich creates the output of:\n\n[3, 6, 9, 12, 9]\n\nHope this helps.\n" ]
[ 2, 2, 1 ]
[]
[]
[ "append", "for_loop", "list", "python", "range" ]
stackoverflow_0074632371_append_for_loop_list_python_range.txt
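An equivalent variant of the accepted padding idea for the entry above, kept as a sketch; it pads a copy rather than mutating vec in place, so missing neighbours simply contribute 0:

vec = [1, 2, 3, 4, 5]
padded = [0] + vec + [0]
newVec = [padded[i - 1] + padded[i] + padded[i + 1]
          for i in range(1, len(padded) - 1)]
print(newVec)  # [3, 6, 9, 12, 9]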
Q: Python dataprep lat_long_clean low performance on my dataset I have latitude and longitude data in a dataframe with the following format: Longitude Latitude 055.25.30E 21.19.15S 075.26.27W 40.39.08N 085.02.00W 29.44.00N I run the below code based on clean_lat_long: from dataprep.clean import clean_lat_long dfa['lat_long'] = dfa['Latitude'] + ' ' + dfa['Longitude'] clean_lat_long(dfa, "lat_long", split=True) The performance is very low with only 0,09% of my data cleaned: Latitude and Longitude Cleaning Report: 13 values cleaned (0.09%) 15169 values unable to be parsed (99.91%), set to NaN Result contains 13 (0.09%) values in the correct format and 15169 null values (99.91%) How can I improve these results? A: I obtained much better results by removing the first point (.) between degrees and minutes with the following instruction: dfa['lat_long'] = dfa['Latitude'].str.replace('.', ' ',1, regex=True) + ' ' + dfa['Longitude'].str.replace('.', ' ',1, regex=True) Which transformed the dataset into: Longitude Latitude 055 25.30E 21 19.15S 075 26.27W 40 39.08N 085 02.00W 29 44.00N Results become, yes, much better, which demonstrates that the tool clean_lat_long is not magic and data should be prepared upstream to make it work: Latitude and Longitude Cleaning Report: 15159 values cleaned (99.85%) 23 values unable to be parsed (0.15%), set to NaN Result contains 15159 (99.85%) values in the correct format and 23 null values (0.15%)
Python dataprep lat_long_clean low performance on my dataset
I have latitude and longitude data in a dataframe with the following format: Longitude Latitude 055.25.30E 21.19.15S 075.26.27W 40.39.08N 085.02.00W 29.44.00N I run the below code based on clean_lat_long: from dataprep.clean import clean_lat_long dfa['lat_long'] = dfa['Latitude'] + ' ' + dfa['Longitude'] clean_lat_long(dfa, "lat_long", split=True) The performance is very low with only 0,09% of my data cleaned: Latitude and Longitude Cleaning Report: 13 values cleaned (0.09%) 15169 values unable to be parsed (99.91%), set to NaN Result contains 13 (0.09%) values in the correct format and 15169 null values (99.91%) How can I improve these results?
[ "I obtained much better results by removing the first point (.) between degrees and minutes with the following instruction:\ndfa['lat_long'] = dfa['Latitude'].str.replace('.', ' ',1, regex=True) + ' ' + dfa['Longitude'].str.replace('.', ' ',1, regex=True) \n\nWhich transformed the dataset into:\nLongitude Latitude\n055 25.30E 21 19.15S\n075 26.27W 40 39.08N\n085 02.00W 29 44.00N\n\nResults become, yes, much better, which demonstrates that the tool clean_lat_long is not magic and data should be prepared upstream to make it work:\nLatitude and Longitude Cleaning Report:\n 15159 values cleaned (99.85%)\n 23 values unable to be parsed (0.15%), set to NaN\nResult contains 15159 (99.85%) values in the correct format and 23 null values (0.15%)\n\n" ]
[ 0 ]
[]
[]
[ "geocoding", "python" ]
stackoverflow_0074630909_geocoding_python.txt
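A small reproduction of the preprocessing idea from the entry above on a toy frame. Note regex=False so the dot is matched literally (with regex=True an unescaped "." is a regex wildcard and would replace the first character instead); whether clean_lat_long then parses the resulting degree/minute strings is taken from the cleaning report quoted above rather than re-verified here:

import pandas as pd
from dataprep.clean import clean_lat_long

dfa = pd.DataFrame({
    "Longitude": ["055.25.30E", "075.26.27W", "085.02.00W"],
    "Latitude": ["21.19.15S", "40.39.08N", "29.44.00N"],
})
# replace only the first "." (degrees/minutes separator) with a space
dfa["lat_long"] = (dfa["Latitude"].str.replace(".", " ", n=1, regex=False)
                   + " "
                   + dfa["Longitude"].str.replace(".", " ", n=1, regex=False))
print(clean_lat_long(dfa, "lat_long", split=True))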
Q: Unable to save email attachment from outlook to local drive i have written the below code to save email attachment from outlook with specific subject but its throwing an error. Below is the code : import win32com.client import os outlook = win32com.client.Dispatch('outlook.application').GetNamespace("MAPI") inbox = outlook.Folders('xyz.com').Folders("Inbox") messages = inbox.Items for msg in messages: if 'Subject' in msg.Subject: print(msg.Subject) if not os.path.exists('C:\xyz\reports'): for atch in msg.Attachments: atch.SaveAsFile(os.getcwd() + 'C:\xyz\reports' + atch.Filename) Error msg **: (-2147352567, 'Exception occured.',(4096, 'Microsoft Outlook', 'Cannot save the attachment. File name or directory name is not valid.' None, 0, -2147024773), None) A: Change the line atch.SaveAsFile(os.getcwd() + 'H:\Atul\Save' + atch.Filename) to atch.SaveAsFile('H:\Atul\Save\' + atch.Filename)
Unable to save email attachment from outlook to local drive
i have written the below code to save email attachment from outlook with specific subject but its throwing an error. Below is the code : import win32com.client import os outlook = win32com.client.Dispatch('outlook.application').GetNamespace("MAPI") inbox = outlook.Folders('xyz.com').Folders("Inbox") messages = inbox.Items for msg in messages: if 'Subject' in msg.Subject: print(msg.Subject) if not os.path.exists('C:\xyz\reports'): for atch in msg.Attachments: atch.SaveAsFile(os.getcwd() + 'C:\xyz\reports' + atch.Filename) Error msg **: (-2147352567, 'Exception occured.',(4096, 'Microsoft Outlook', 'Cannot save the attachment. File name or directory name is not valid.' None, 0, -2147024773), None)
[ "Change the line\natch.SaveAsFile(os.getcwd() + 'H:\\Atul\\Save' + atch.Filename)\n\nto\natch.SaveAsFile('H:\\Atul\\Save\\' + atch.Filename)\n\n" ]
[ 0 ]
[]
[]
[ "outlook", "pandas", "python", "pywin32" ]
stackoverflow_0074632167_outlook_pandas_python_pywin32.txt
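For the Outlook entry above, a hedged sketch of building the save path safely: a raw string keeps backslashes from being read as escape sequences (e.g. "\r"), os.path.join supplies the separator that plain string concatenation misses, and msg is the message object from the existing loop; the folder itself is a hypothetical example:

import os

save_dir = r"C:\xyz\reports"
os.makedirs(save_dir, exist_ok=True)  # make sure the target folder exists

for atch in msg.Attachments:
    # SaveAsFile needs a complete, valid path including the file name
    atch.SaveAsFile(os.path.join(save_dir, atch.FileName))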
Q: Syntax error: positional argument follows key word argument I'm fairly new to coding and I'm having trouble figuring out this error. The error: > File "main.py", line 7 bot = commands. Bot(commandS_prefix= "!", intents = discord. Intents,all()) SyntaxError: positional argument follow s keyword argument My code: import discord import os from discord import app_commands from discord.ext import commands from keep_alive import keep_alive bot = commands.Bot(commandS_prefix="!", intents = discord.Intents,all()) client = discord.Client(intents=discord.Intents.default()) @bot.event async def on_ready(): print("Bot is up and ready!") try: synced = await bot. tree.sync() print(f"Synced {len(synced)} command(s)") except Exception as e: print(e) @bot.tree.command(name="hello") async def hello(interaction: discord.Interaction): await interaction. response.send_message(f"Hey {interaction.user.mention}! This is a slash command!") @bot.tree.command(name="say") @app_commands.describe(thing_to_say = "What should i say?") async def say(interaction: discord.Interaction, thing_to_say: str): await interaction.response.send_message(f"{interaction.user.name} said: 'thing_to_say'") @client.event async def on_ready(): print('We have logged in as {0.user}'.format(client)) @client.event async def on_message(message): if message.author == client.user: return if message.content.startswith('$hello'): await message.channel.send('Hello!') keep_alive() client.run(os.getenv('TOKEN')) I've looked through most things but can't seem to figure anything out. A: bot = commands. Bot(commandS_prefix= "!", intents = discord. Intents,all()) Because you have a comma between Intents and all(), all() is being interpreted as a separate argument. Change the comma to a period.
Syntax error: positional argument follows key word argument
I'm fairly new to coding and I'm having trouble figuring out this error. The error: > File "main.py", line 7 bot = commands. Bot(commandS_prefix= "!", intents = discord. Intents,all()) SyntaxError: positional argument follow s keyword argument My code: import discord import os from discord import app_commands from discord.ext import commands from keep_alive import keep_alive bot = commands.Bot(commandS_prefix="!", intents = discord.Intents,all()) client = discord.Client(intents=discord.Intents.default()) @bot.event async def on_ready(): print("Bot is up and ready!") try: synced = await bot. tree.sync() print(f"Synced {len(synced)} command(s)") except Exception as e: print(e) @bot.tree.command(name="hello") async def hello(interaction: discord.Interaction): await interaction. response.send_message(f"Hey {interaction.user.mention}! This is a slash command!") @bot.tree.command(name="say") @app_commands.describe(thing_to_say = "What should i say?") async def say(interaction: discord.Interaction, thing_to_say: str): await interaction.response.send_message(f"{interaction.user.name} said: 'thing_to_say'") @client.event async def on_ready(): print('We have logged in as {0.user}'.format(client)) @client.event async def on_message(message): if message.author == client.user: return if message.content.startswith('$hello'): await message.channel.send('Hello!') keep_alive() client.run(os.getenv('TOKEN')) I've looked through most things but can't seem to figure anything out.
[ "bot = commands. Bot(commandS_prefix= \"!\", intents = discord. Intents,all())\n\nBecause you have a comma between Intents and all(), all() is being interpreted as a separate argument. Change the comma to a period.\n" ]
[ 0 ]
[]
[]
[ "discord.py", "python" ]
stackoverflow_0074632276_discord.py_python.txt
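For the discord.py entry above, the corrected constructor line in isolation: the dot replaces the stray comma so Intents.all() is attribute access rather than a second positional argument (note also that the keyword is spelled command_prefix in discord.py, which the original commandS_prefix would not match):

bot = commands.Bot(command_prefix="!", intents=discord.Intents.all())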
Q: How to scrape Next button on Linkedin with Selenium using Python? I am trying to scrape LinkedIn website using Selenium. I can't parse Next button. It resists as much as it can. I've spent a half of a day to adress this, but all in vain. I tried absolutely various options, with text and so on. Only work with start ID but scrape other button. selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//button[@aria-label='Далее']"} This is quite common for this site: //*[starts-with(@id,'e')] My code: from selenium import webdriver from selenium.webdriver import Keys from selenium.webdriver.chrome.service import Service from selenium.webdriver.common.by import By from time import sleep chrome_driver_path = Service("E:\programming\chromedriver_win32\chromedriver.exe") driver = webdriver.Chrome(service=chrome_driver_path) url = "https://www.linkedin.com/feed/" driver.get(url) SEARCH_QUERY = "python developer" LOGIN = "EMAIL" PASSWORD = "PASSWORD" sleep(10) sign_in_link = driver.find_element(By.XPATH, '/html/body/div[1]/main/p[1]/a') sign_in_link.click() login_input = driver.find_element(By.XPATH, '//*[@id="username"]') login_input.send_keys(LOGIN) sleep(1) password_input = driver.find_element(By.XPATH, '//*[@id="password"]') password_input.send_keys(PASSWORD) sleep(1) enter_button = driver.find_element(By.XPATH, '//*[@id="organic-div"]/form/div[3]/button') enter_button.click() sleep(25) lens_button = driver.find_element(By.XPATH, '//*[@id="global-nav-search"]/div/button') lens_button.click() sleep(5) search_input = driver.find_element(By.XPATH, '//*[@id="global-nav-typeahead"]/input') search_input.send_keys(SEARCH_QUERY) search_input.send_keys(Keys.ENTER) sleep(5) people_button = driver.find_element(By.XPATH, '//*[@id="search-reusables__filters-bar"]/ul/li[1]/button') people_button.click() sleep(5) page_button = driver.find_element(By.XPATH, "//button[@aria-label='Далее']") page_button.click() sleep(60) Chrome inspection of button Next Button A: OK, there are several issues here: The main problem why your code not worked is because the "next" pagination is initially even not created on the page until you scrolling the page, so I added the mechanism, to scroll the page until that button can be clicked. it's not good to create locators based on local language texts. You should use WebDriverWait expected_conditions explicit waits, not hardcoded pauses. I used mixed locators types to show that sometimes it's better to use By.ID and sometimes By.XPATH etc. 
the following code works: import time from selenium import webdriver from selenium.webdriver import Keys from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC options = Options() options.add_argument("start-maximized") webdriver_service = Service('C:\webdrivers\chromedriver.exe') driver = webdriver.Chrome(options=options, service=webdriver_service) wait = WebDriverWait(driver, 10) url = "https://www.linkedin.com/feed/" driver.get(url) wait.until(EC.element_to_be_clickable((By.XPATH, "//a[contains(@href,'login')]"))).click() wait.until(EC.element_to_be_clickable((By.ID, "username"))).send_keys(my_email) wait.until(EC.element_to_be_clickable((By.ID, "password"))).send_keys(my_password) wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button[type='submit']"))).click() search_input = wait.until(EC.element_to_be_clickable((By.XPATH, "//input[contains(@class,'search-global')]"))) search_input.click() search_input.send_keys("python developer" + Keys.ENTER) wait.until(EC.element_to_be_clickable((By.XPATH, '//*[@id="search-reusables__filters-bar"]/ul/li[1]/button'))).click() wait = WebDriverWait(driver, 4) while True: try: next_btn = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "button.artdeco-pagination__button.artdeco-pagination__button--next"))) next_btn.location_once_scrolled_into_view time.sleep(0.2) next_btn.click() break except: driver.execute_script("window.scrollBy(0, arguments[0]);", 600)
How to scrape Next button on Linkedin with Selenium using Python?
I am trying to scrape LinkedIn website using Selenium. I can't parse Next button. It resists as much as it can. I've spent a half of a day to adress this, but all in vain. I tried absolutely various options, with text and so on. Only work with start ID but scrape other button. selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//button[@aria-label='Далее']"} This is quite common for this site: //*[starts-with(@id,'e')] My code: from selenium import webdriver from selenium.webdriver import Keys from selenium.webdriver.chrome.service import Service from selenium.webdriver.common.by import By from time import sleep chrome_driver_path = Service("E:\programming\chromedriver_win32\chromedriver.exe") driver = webdriver.Chrome(service=chrome_driver_path) url = "https://www.linkedin.com/feed/" driver.get(url) SEARCH_QUERY = "python developer" LOGIN = "EMAIL" PASSWORD = "PASSWORD" sleep(10) sign_in_link = driver.find_element(By.XPATH, '/html/body/div[1]/main/p[1]/a') sign_in_link.click() login_input = driver.find_element(By.XPATH, '//*[@id="username"]') login_input.send_keys(LOGIN) sleep(1) password_input = driver.find_element(By.XPATH, '//*[@id="password"]') password_input.send_keys(PASSWORD) sleep(1) enter_button = driver.find_element(By.XPATH, '//*[@id="organic-div"]/form/div[3]/button') enter_button.click() sleep(25) lens_button = driver.find_element(By.XPATH, '//*[@id="global-nav-search"]/div/button') lens_button.click() sleep(5) search_input = driver.find_element(By.XPATH, '//*[@id="global-nav-typeahead"]/input') search_input.send_keys(SEARCH_QUERY) search_input.send_keys(Keys.ENTER) sleep(5) people_button = driver.find_element(By.XPATH, '//*[@id="search-reusables__filters-bar"]/ul/li[1]/button') people_button.click() sleep(5) page_button = driver.find_element(By.XPATH, "//button[@aria-label='Далее']") page_button.click() sleep(60) Chrome inspection of button Next Button
[ "OK, there are several issues here:\n\nThe main problem why your code not worked is because the \"next\" pagination is initially even not created on the page until you scrolling the page, so I added the mechanism, to scroll the page until that button can be clicked.\nit's not good to create locators based on local language texts.\nYou should use WebDriverWait expected_conditions explicit waits, not hardcoded pauses.\n\nI used mixed locators types to show that sometimes it's better to use By.ID and sometimes By.XPATH etc.\nthe following code works:\nimport time\n\nfrom selenium import webdriver\nfrom selenium.webdriver import Keys\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(options=options, service=webdriver_service)\nwait = WebDriverWait(driver, 10)\n\nurl = \"https://www.linkedin.com/feed/\"\ndriver.get(url)\n\nwait.until(EC.element_to_be_clickable((By.XPATH, \"//a[contains(@href,'login')]\"))).click()\nwait.until(EC.element_to_be_clickable((By.ID, \"username\"))).send_keys(my_email)\nwait.until(EC.element_to_be_clickable((By.ID, \"password\"))).send_keys(my_password)\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, \"button[type='submit']\"))).click()\nsearch_input = wait.until(EC.element_to_be_clickable((By.XPATH, \"//input[contains(@class,'search-global')]\")))\nsearch_input.click()\nsearch_input.send_keys(\"python developer\" + Keys.ENTER)\nwait.until(EC.element_to_be_clickable((By.XPATH, '//*[@id=\"search-reusables__filters-bar\"]/ul/li[1]/button'))).click()\nwait = WebDriverWait(driver, 4)\nwhile True:\n try:\n next_btn = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, \"button.artdeco-pagination__button.artdeco-pagination__button--next\")))\n next_btn.location_once_scrolled_into_view\n time.sleep(0.2)\n next_btn.click()\n break\n except:\n driver.execute_script(\"window.scrollBy(0, arguments[0]);\", 600)\n\n" ]
[ 0 ]
[]
[]
[ "python", "scroll", "selenium", "webdriverwait", "xpath" ]
stackoverflow_0074631101_python_scroll_selenium_webdriverwait_xpath.txt
Q: Can't install tensorflow-text~=2.11.0 I got a warning something like warnings.warn( No local packages or working download links found for tensorflow-text~=2.11.0 error: Could not find suitable distribution for Requirement.parse('tensorflow-text~=2.11.0') and if I run pip install 'tensorflow-text~=2.11.0' I got : ERROR: Could not find a version that satisfies the requirement tensorflow-text~=2.11.0 (from versions: 2.8.1, 2.8.2, 2.9.0rc0, 2.9.0rc1, 2.9.0, 2.10.0b2, 2.10.0rc0, 2.10.0) ERROR: No matching distribution found for tensorflow-text~=2.11.0 tensorflow-text 2.11.0 available on pypi and if I run pip install tensorflow-text it installs tensorflow-text 2.10.0 and downgrade the whole tensorflow to 2.10.0 Version Info: OS: Windows 10 Environment: Conda (miniconda3) Python: 3.10.8 Tensorflow: 2.11 I've tried pip and conda-forge A: As per their note, they have dropped building for Windows with v2.11.0. So, you'll need to build from source or seek a third-party build. A: I think you should run pip install tensorflow-text==2.11.0 without any quotes or swung dash A: Install directtly. The latest version of tensorflow-text is 2.11.0 _pypi pip install tensorflow-text
Can't install tensorflow-text~=2.11.0
I got a warning something like warnings.warn( No local packages or working download links found for tensorflow-text~=2.11.0 error: Could not find suitable distribution for Requirement.parse('tensorflow-text~=2.11.0') and if I run pip install 'tensorflow-text~=2.11.0' I got : ERROR: Could not find a version that satisfies the requirement tensorflow-text~=2.11.0 (from versions: 2.8.1, 2.8.2, 2.9.0rc0, 2.9.0rc1, 2.9.0, 2.10.0b2, 2.10.0rc0, 2.10.0) ERROR: No matching distribution found for tensorflow-text~=2.11.0 tensorflow-text 2.11.0 available on pypi and if I run pip install tensorflow-text it installs tensorflow-text 2.10.0 and downgrade the whole tensorflow to 2.10.0 Version Info: OS: Windows 10 Environment: Conda (miniconda3) Python: 3.10.8 Tensorflow: 2.11 I've tried pip and conda-forge
[ "As per their note, they have dropped building for Windows with v2.11.0. So, you'll need to build from source or seek a third-party build.\n", "I think you should run\npip install tensorflow-text==2.11.0\n\nwithout any quotes or swung dash\n", "Install directtly. The latest version of tensorflow-text is 2.11.0 _pypi\npip install tensorflow-text\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "conda", "pip", "python", "tensorflow", "windows" ]
stackoverflow_0074628389_conda_pip_python_tensorflow_windows.txt
Q: How can I add a new column to a dataframe with a lookup to the same column? (1 rows above) I've created this dataframe - Range = np.arange(1,101,1) A={ 0:-1, 1:0, 2:4 } Table = pd.DataFrame({"Row": Range}) Table["Intervals"]=np.where(Table["Row"]==1,0,(Table["Row"]%3).map(A)) Table Row Intervals 0 1 0 1 2 4 2 3 -1 3 4 0 4 5 4 ... ... ... 95 96 -1 96 97 0 97 98 4 98 99 -1 99 100 0 I'd like to add a new column that the first cell will contain the number -25 and the second number will be -25+4, the third number will be -25+4+(-1)...and so on. I've tried to use shift but no luck - Table["X"]=np.where(Table["Row"]==1,-25,Table["X"].shift(1)) Will appreciate any help! A: We'll start by adding the value -25 to the 0th row in a new column, NewColumn Table.loc[0, "NewColumn"] = -25 Then we fill the nulls with the Intervals column and convert back to int (they were floats) Table["NewColumn"] = Table["NewColumn"].fillna(Table["Intervals"]).astype(int) And last cumulative sum the NewColumn Table["NewColumn"] = Table["NewColumn"].cumsum() All together Range = np.arange(1,101,1) A={ 0:-1, 1:0, 2:4 } Table = pd.DataFrame({"Row": Range}) Table["Intervals"]=np.where(Table["Row"]==1,0,(Table["Row"]%3).map(A)) Table.loc[0, "NewColumn"] = -25 Table["NewColumn"] = Table["NewColumn"].fillna(Table["Intervals"]).astype(int) Table["NewColumn"] = Table["NewColumn"].cumsum() print(Table) Row Intervals NewColumn 0 1 0 -25 1 2 4 -21 2 3 -1 -22 3 4 0 -22 4 5 4 -18 .. ... ... ... 95 96 -1 71 96 97 0 71 97 98 4 75 98 99 -1 74 99 100 0 74 A: You're looking for cumulative sum. >>> Table['n'] = np.concatenate([[-25], Table.Intervals[1:]]) >>> Table['cum'] = Table.n.cumsum() >>> Table Row Intervals n cum 0 1 0 -25 -25 1 2 4 4 -21 2 3 -1 -1 -22 3 4 0 0 -22 4 5 4 4 -18 .. ... ... .. ... 95 96 -1 -1 71 96 97 0 0 71 97 98 4 4 75 98 99 -1 -1 74 99 100 0 0 74 [100 rows x 4 columns]
How can I add a new column to a dataframe with a lookup to the same column? (1 rows above)
I've created this dataframe - Range = np.arange(1,101,1) A={ 0:-1, 1:0, 2:4 } Table = pd.DataFrame({"Row": Range}) Table["Intervals"]=np.where(Table["Row"]==1,0,(Table["Row"]%3).map(A)) Table Row Intervals 0 1 0 1 2 4 2 3 -1 3 4 0 4 5 4 ... ... ... 95 96 -1 96 97 0 97 98 4 98 99 -1 99 100 0 I'd like to add a new column that the first cell will contain the number -25 and the second number will be -25+4, the third number will be -25+4+(-1)...and so on. I've tried to use shift but no luck - Table["X"]=np.where(Table["Row"]==1,-25,Table["X"].shift(1)) Will appreciate any help!
[ "We'll start by adding the value -25 to the 0th row in a new column, NewColumn\nTable.loc[0, \"NewColumn\"] = -25\n\nThen we fill the nulls with the Intervals column and convert back to int (they were floats)\nTable[\"NewColumn\"] = Table[\"NewColumn\"].fillna(Table[\"Intervals\"]).astype(int)\n\nAnd last cumulative sum the NewColumn\nTable[\"NewColumn\"] = Table[\"NewColumn\"].cumsum()\n\nAll together\nRange = np.arange(1,101,1)\n\nA={\n0:-1,\n1:0,\n2:4\n}\n\nTable = pd.DataFrame({\"Row\": Range})\nTable[\"Intervals\"]=np.where(Table[\"Row\"]==1,0,(Table[\"Row\"]%3).map(A))\n\nTable.loc[0, \"NewColumn\"] = -25\n\nTable[\"NewColumn\"] = Table[\"NewColumn\"].fillna(Table[\"Intervals\"]).astype(int)\n\nTable[\"NewColumn\"] = Table[\"NewColumn\"].cumsum()\n\nprint(Table)\n\n Row Intervals NewColumn\n0 1 0 -25\n1 2 4 -21\n2 3 -1 -22\n3 4 0 -22\n4 5 4 -18\n.. ... ... ...\n95 96 -1 71\n96 97 0 71\n97 98 4 75\n98 99 -1 74\n99 100 0 74\n\n", "You're looking for cumulative sum.\n>>> Table['n'] = np.concatenate([[-25], Table.Intervals[1:]])\n>>> Table['cum'] = Table.n.cumsum()\n>>> Table\n Row Intervals n cum\n0 1 0 -25 -25\n1 2 4 4 -21\n2 3 -1 -1 -22\n3 4 0 0 -22\n4 5 4 4 -18\n.. ... ... .. ...\n95 96 -1 -1 71\n96 97 0 0 71\n97 98 4 4 75\n98 99 -1 -1 74\n99 100 0 0 74\n\n[100 rows x 4 columns]\n\n" ]
[ 0, 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074632426_dataframe_pandas_python.txt
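A compact variant of the same cumulative-sum idea for the entry above, shown as a sketch: seed the first row with -25, keep the remaining Intervals values, and take one cumsum:

import numpy as np
import pandas as pd

Range = np.arange(1, 101, 1)
A = {0: -1, 1: 0, 2: 4}
Table = pd.DataFrame({"Row": Range})
Table["Intervals"] = np.where(Table["Row"] == 1, 0, (Table["Row"] % 3).map(A))

seed = Table["Intervals"].copy()
seed.iloc[0] = -25          # replace the first interval with the starting value
Table["X"] = seed.cumsum()  # -25, -21, -22, -22, -18, ...
print(Table.head())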
Q: Sort values and re order after Context I have a table in this format: File IDR IDC Type I/P Value 1 1 ID1 Primary P 5 1 1 ID2 Secondary P 6 1 1 ID3 Primary I 7 2 2 ID4 Primary I 8 2 2 ID5 Secondary P 10 Each ['File'] have has its own IDR. Each ['IDR'] has an IDC with a type (Primary/Secondary) and a value. The problem I need to sort the ['Values'] descending, but giving priority to Primary and Secondary after. The 2 most important columns are File and IDR. First I tried sorting the values: df3 = df2.groupby(['filename', 'ID_R']).apply(lambda x: x.sort_values(by=['Type', 'Value'], ascending=[True, False]) Now I need to mix between P/I. I tried to mix the previous code with: .assign(seq=df.groupby(['Type', 'I/P']).cumcount()).sort_values(['Type', 'seq', 'I/P']).drop(columns='seq') But this way the first groupby(['filename', 'ID_R']) is ignored. Desired Output: File IDR IDC Type I/P Value 1 1 ID3 Primary I 7 1 1 ID1 Primary P 5 1 1 ID2 Secondary P 6 2 2 ID4 Primary I 8 2 2 ID5 Secondary P 10 A: Could you sort all the columns at once? print(df.sort_values([ "File", "IDR", "Type", "IDC", "Value" ], ascending=[True, True, True, False, True,])) File IDR IDC Type I/P Value 2 1 1 ID3 Primary I 7 0 1 1 ID1 Primary P 5 1 1 1 ID2 Secondary P 6 3 2 2 ID4 Primary I 8 4 2 2 ID5 Secondary P 10
Sort values and re order after
Context I have a table in this format: File IDR IDC Type I/P Value 1 1 ID1 Primary P 5 1 1 ID2 Secondary P 6 1 1 ID3 Primary I 7 2 2 ID4 Primary I 8 2 2 ID5 Secondary P 10 Each ['File'] have has its own IDR. Each ['IDR'] has an IDC with a type (Primary/Secondary) and a value. The problem I need to sort the ['Values'] descending, but giving priority to Primary and Secondary after. The 2 most important columns are File and IDR. First I tried sorting the values: df3 = df2.groupby(['filename', 'ID_R']).apply(lambda x: x.sort_values(by=['Type', 'Value'], ascending=[True, False]) Now I need to mix between P/I. I tried to mix the previous code with: .assign(seq=df.groupby(['Type', 'I/P']).cumcount()).sort_values(['Type', 'seq', 'I/P']).drop(columns='seq') But this way the first groupby(['filename', 'ID_R']) is ignored. Desired Output: File IDR IDC Type I/P Value 1 1 ID3 Primary I 7 1 1 ID1 Primary P 5 1 1 ID2 Secondary P 6 2 2 ID4 Primary I 8 2 2 ID5 Secondary P 10
[ "Could you sort all the columns at once?\nprint(df.sort_values([\n \"File\",\n \"IDR\",\n \"Type\",\n \"IDC\",\n \"Value\"\n], ascending=[True, True, True, False, True,]))\n\n File IDR IDC Type I/P Value\n2 1 1 ID3 Primary I 7\n0 1 1 ID1 Primary P 5\n1 1 1 ID2 Secondary P 6\n3 2 2 ID4 Primary I 8\n4 2 2 ID5 Secondary P 10\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "group_by", "pandas", "python", "sorting" ]
stackoverflow_0074632281_dataframe_group_by_pandas_python_sorting.txt
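One more option for the sorting question above, offered as a sketch rather than a drop-in answer: if the requirement is only "Value descending inside each File/IDR group, Primary rows before Secondary", a shorter key list may already be enough. The column names below follow the sample table (File, IDR, Type, Value), not the filename/ID_R names used in the asker's own code:
import pandas as pd

df = pd.DataFrame({
    "File": [1, 1, 1, 2, 2],
    "IDR": [1, 1, 1, 2, 2],
    "IDC": ["ID1", "ID2", "ID3", "ID4", "ID5"],
    "Type": ["Primary", "Secondary", "Primary", "Primary", "Secondary"],
    "I/P": ["P", "P", "I", "I", "P"],
    "Value": [5, 6, 7, 8, 10],
})

# Keep File/IDR groups together, put Primary before Secondary (plain
# alphabetical order already does that), then sort Value descending
# within each Type.
out = df.sort_values(["File", "IDR", "Type", "Value"],
                     ascending=[True, True, True, False])
print(out)

On this sample it reproduces the desired output without needing the IDC column as a tie-breaker.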
Q: Removing a key from a dictionary using the value Can I somehow delete a value from a dictionary using its key? The function del_contact is supposed to delete a contact using only the name of the contact, but unfortunately I have the dictionary and the value, but not the key. How can this be solved? my_contacts = { 1: { "Name": "Tom Jones", "Number": "911", "Birthday": "22.10.1995", "Address": "212 street" }, 2: { "Name": "Bob Marley", "Number": "0800838383", "Birthday": "22.10.1991", "Address": "31 street" } } def add_contact(): user_input = int(input("please enter how many contacts you wanna add: ")) index = len(my_contacts) + 1 for _ in range(user_input): details = {} name = input("Enter the name: ") number = input("Enter the number: ") birthday = input("Enter the birthday") address = input("Enter the address") details["Name"] = name details["Number"] = number details["Birthday"] = birthday details["Address"] = address my_contacts[index] = details index += 1 print(my_contacts) def del_contact(): user_input = input("Please enter the name of the contact you want to delete: ") my_contacts.pop(user_input) add_contact() print(my_contacts) The problem is that my key of the dictionary is 1 or Name, and I want to be able to remove the contact using only the value of Name. A: Basically what you can do is iterate of the dict and save only the keys without that user's name. This code will do the trick: my_contacts = {1: {"Name": "Tom Jones", "Number": "911", "Birthday": "22.10.1995", "Address": "212 street"}, 2: {"Name": "Bob Marley", "Number": "0800838383", "Birthday": "22.10.1991", "Address": "31 street"} } user_input = "Tom Jones" my_contacts = {key: value for key, value in my_contacts.items() if value["Name"] != user_input} A: There's two steps here: Finding the key/value pair(s) that match Deleting the keys you found If you're only deleting a single item (even with multiple matches), always, these can be combined: def del_contact(): user_input = input("Please enter the name of the contact you want to delete: ") for k, v in my_contacts.items(): # Need both key and value, we test key, delete by value if v["Name"] == user_input: del my_contacts[k] break # Deleted one item, stop now (we'd RuntimeError if iteration continued) else: # Optionally raise exception or print error indicating name wasn't found If you might delete multiple entries, and must operate in place, you'd split the steps (because it's illegal to continue iterating a dict while mutating the set of keys): def del_contact(): user_input = input("Please enter the name of the contact you want to delete: ") to_delete = [k for k, v in my_contacts.items() if v["Name"] == user_input] if not to_delete: # Optionally raise exception or print error indicating name wasn't found for k in to_delete: del my_contacts[k] Or if you're okay with replacing the original dict, rather than mutating it in place, you can one-line the multi-deletion case as: def del_contact(): global my_contacts # Assignment requires explicitly declared use of global user_input = input("Please enter the name of the contact you want to delete: ") my_contacts = {k: v for k, v in my_contacts.items() if v["Name"] != user_input} # Flip test, build new dict # Detecting no matches is more annoying here (it's basically comparing # length before and after), so I'm omitting it A: Being that the index are used as keys for the dictionary, the range function can be used as follows: def del_contact(): user_input = input("Please enter the name of the contact you want to delete: ") for i 
in range(1,len(my_contacts)+1): if my_contacts[i]['Name'] == user_input: del my_contacts[i] break If there can be multiple contacts with the same name that need to be deleted, remove the break statement. But as mentioned in the comments, there is no need to use an index as a key for the dictionaries. A potential bug you'll encounter with that, is if the first entry is deleted whose name is Tom Jones the dict will have a length of 1 with only one key - 2. Then when you try to add more contacts, when you check the length of the dictionary index = len(my_contacts) + 1, since length is 1, index will be 2. Hence my_contacts[index] = details will update the contact with a key of 2 or "Name": "Bob Marley" instead of adding a new contact. A: I think thats what you need, hope that helps: Code: my_contacts = {1: {"Name": "Tom Jones", "Number": "911", "Birthday": "22.10.1995", "Address": "212 street"}, 2: {"Name": "Bob Marley", "Number": "0800838383", "Birthday": "22.10.1991", "Address": "31 street"} } def add_contact(): user_input = int(input("please enter how many contacts you wanna add: ")) index = len(my_contacts) + 1 for _ in range(user_input): details = {} name = input("Enter the name: ") number = input("Enter the number: ") birthday = input("Enter the birthday") address = input("Enter the address") details["Name"] = name details["Number"] = number details["Birthday"] = birthday details["Address"] = address my_contacts[index] = details index += 1 print(my_contacts) add_contact() print(my_contacts) def del_contact(): user_input = input("Please enter the name of the contact you want to delete: ") values = [key for key in my_contacts.keys() if user_input in my_contacts[key].values()] for value in values: my_contacts.pop(value) del_contact() print(my_contacts) Input: {1: {'Name': 'Tom Jones', 'Number': '911', 'Birthday': '22.10.1995', 'Address': '212 street'}, 2: {'Name': 'Bob Marley', 'Number': '0800838383', 'Birthday': '22.10.1991', 'Address': '31 street'}, 3: {'Name': 'myname', 'Number': '1233455', 'Birthday': '12-12-22', 'Address': 'blabla street'}} Please enter the name of the contact you want to delete: Bob Marley Output: {1: {'Name': 'Tom Jones', 'Number': '911', 'Birthday': '22.10.1995', 'Address': '212 street'}, 3: {'Name': 'myname', 'Number': '1233455', 'Birthday': '12-12-22', 'Address': 'blabla street'}}
Removing a key from a dictionary using the value
Can I somehow delete a value from a dictionary using its key? The function del_contact is supposed to delete a contact using only the name of the contact, but unfortunately I have the dictionary and the value, but not the key. How can this be solved? my_contacts = { 1: { "Name": "Tom Jones", "Number": "911", "Birthday": "22.10.1995", "Address": "212 street" }, 2: { "Name": "Bob Marley", "Number": "0800838383", "Birthday": "22.10.1991", "Address": "31 street" } } def add_contact(): user_input = int(input("please enter how many contacts you wanna add: ")) index = len(my_contacts) + 1 for _ in range(user_input): details = {} name = input("Enter the name: ") number = input("Enter the number: ") birthday = input("Enter the birthday") address = input("Enter the address") details["Name"] = name details["Number"] = number details["Birthday"] = birthday details["Address"] = address my_contacts[index] = details index += 1 print(my_contacts) def del_contact(): user_input = input("Please enter the name of the contact you want to delete: ") my_contacts.pop(user_input) add_contact() print(my_contacts) The problem is that my key of the dictionary is 1 or Name, and I want to be able to remove the contact using only the value of Name.
[ "Basically what you can do is iterate of the dict and save only the keys without that user's name.\nThis code will do the trick:\nmy_contacts = {1: {\"Name\": \"Tom Jones\",\n \"Number\": \"911\",\n \"Birthday\": \"22.10.1995\",\n \"Address\": \"212 street\"},\n 2: {\"Name\": \"Bob Marley\",\n \"Number\": \"0800838383\",\n \"Birthday\": \"22.10.1991\",\n \"Address\": \"31 street\"}\n }\nuser_input = \"Tom Jones\"\n\nmy_contacts = {key: value for key, value in my_contacts.items() if value[\"Name\"] != user_input}\n\n", "There's two steps here:\n\nFinding the key/value pair(s) that match\nDeleting the keys you found\n\nIf you're only deleting a single item (even with multiple matches), always, these can be combined:\ndef del_contact():\n user_input = input(\"Please enter the name of the contact you want to delete: \")\n for k, v in my_contacts.items(): # Need both key and value, we test key, delete by value\n if v[\"Name\"] == user_input:\n del my_contacts[k]\n break # Deleted one item, stop now (we'd RuntimeError if iteration continued)\n else:\n # Optionally raise exception or print error indicating name wasn't found\n\nIf you might delete multiple entries, and must operate in place, you'd split the steps (because it's illegal to continue iterating a dict while mutating the set of keys):\ndef del_contact():\n user_input = input(\"Please enter the name of the contact you want to delete: \")\n to_delete = [k for k, v in my_contacts.items() if v[\"Name\"] == user_input]\n if not to_delete:\n # Optionally raise exception or print error indicating name wasn't found\n for k in to_delete:\n del my_contacts[k]\n\nOr if you're okay with replacing the original dict, rather than mutating it in place, you can one-line the multi-deletion case as:\ndef del_contact():\n global my_contacts # Assignment requires explicitly declared use of global\n user_input = input(\"Please enter the name of the contact you want to delete: \")\n my_contacts = {k: v for k, v in my_contacts.items() if v[\"Name\"] != user_input} # Flip test, build new dict\n # Detecting no matches is more annoying here (it's basically comparing\n # length before and after), so I'm omitting it\n\n", "Being that the index are used as keys for the dictionary, the range function can be used as follows:\ndef del_contact():\n user_input = input(\"Please enter the name of the contact you want to delete: \")\n for i in range(1,len(my_contacts)+1):\n if my_contacts[i]['Name'] == user_input:\n del my_contacts[i]\n break\n\nIf there can be multiple contacts with the same name that need to be deleted, remove the break statement.\nBut as mentioned in the comments, there is no need to use an index as a key for the dictionaries. A potential bug you'll encounter with that, is if the first entry is deleted whose name is Tom Jones the dict will have a length of 1 with only one key - 2. Then when you try to add more contacts, when you check the length of the dictionary index = len(my_contacts) + 1, since length is 1, index will be 2. 
Hence my_contacts[index] = details will update the contact with a key of 2 or \"Name\": \"Bob Marley\" instead of adding a new contact.\n", "I think thats what you need, hope that helps:\nCode:\nmy_contacts = {1: {\"Name\": \"Tom Jones\",\n \"Number\": \"911\",\n \"Birthday\": \"22.10.1995\",\n \"Address\": \"212 street\"},\n 2: {\"Name\": \"Bob Marley\",\n \"Number\": \"0800838383\",\n \"Birthday\": \"22.10.1991\",\n \"Address\": \"31 street\"}\n }\n\ndef add_contact():\n user_input = int(input(\"please enter how many contacts you wanna add: \"))\n index = len(my_contacts) + 1\n for _ in range(user_input):\n details = {}\n name = input(\"Enter the name: \")\n number = input(\"Enter the number: \")\n birthday = input(\"Enter the birthday\")\n address = input(\"Enter the address\")\n details[\"Name\"] = name\n details[\"Number\"] = number\n details[\"Birthday\"] = birthday\n details[\"Address\"] = address\n my_contacts[index] = details\n index += 1\n print(my_contacts)\n\nadd_contact()\n\nprint(my_contacts)\ndef del_contact():\n user_input = input(\"Please enter the name of the contact you want to delete: \")\n values = [key for key in my_contacts.keys() if user_input in my_contacts[key].values()]\n for value in values:\n my_contacts.pop(value)\ndel_contact()\nprint(my_contacts)\n\nInput:\n{1: {'Name': 'Tom Jones', 'Number': '911', 'Birthday': '22.10.1995', 'Address': '212 street'}, 2: {'Name': 'Bob Marley', 'Number': '0800838383', 'Birthday': '22.10.1991', 'Address': '31 street'}, 3: {'Name': 'myname', 'Number': '1233455', 'Birthday': '12-12-22', 'Address': 'blabla street'}}\n\nPlease enter the name of the contact you want to delete: Bob Marley\n\nOutput:\n{1: {'Name': 'Tom Jones', 'Number': '911', 'Birthday': '22.10.1995', 'Address': '212 street'}, 3: {'Name': 'myname', 'Number': '1233455', 'Birthday': '12-12-22', 'Address': 'blabla street'}}\n\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "contacts", "dictionary", "python" ]
stackoverflow_0074632313_contacts_dictionary_python.txt
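A follow-up sketch for the dictionary question above: as one of the answers already hints, the whole lookup problem disappears if the contacts dict is keyed by the name itself instead of a running integer. The data below is illustrative, not the asker's exact program:
my_contacts = {
    "Tom Jones": {"Number": "911", "Birthday": "22.10.1995", "Address": "212 street"},
    "Bob Marley": {"Number": "0800838383", "Birthday": "22.10.1991", "Address": "31 street"},
}

def del_contact(name):
    # With names as keys, deletion is a single dict operation.
    if my_contacts.pop(name, None) is None:
        print(f"No contact named {name!r}")

del_contact("Tom Jones")
print(my_contacts)  # only Bob Marley is left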
Q: How to separately send email to the form I made web contact form, my email sending to subscribe email box , I want to send email to the form only. Please help me driver.get('https://shop.rtrpilates.com/') driver.find_element_by_partial_link_text('Contact'),click try: username_box = driver.find_element_by_xpath('//input[@type="email"]') username_box.send_keys("[email protected]") I don't understand how can I create a block between this, Help please Advance thanks A: Seems you main problem here is that you trying to use deprecated methods find_element_by_*. None of these is supported by Selenium 4. Also code you shared is missing delays to wait for elements to become clickable etc. The following short code works: from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC options = Options() options.add_argument("start-maximized") webdriver_service = Service('C:\webdrivers\chromedriver.exe') driver = webdriver.Chrome(options=options, service=webdriver_service) wait = WebDriverWait(driver, 10) url = "https://shop.rtrpilates.com/" driver.get(url) wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, ".header__inline-menu a[href*='contact']"))).click() wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, ".contact__fields input[type='email']"))).send_keys("[email protected]") The result is
How to separately send email to the form
I made a web contact form, but my email is being sent to the subscribe email box; I want to send the email to the contact form only. Please help me driver.get('https://shop.rtrpilates.com/') driver.find_element_by_partial_link_text('Contact'),click try: username_box = driver.find_element_by_xpath('//input[@type="email"]') username_box.send_keys("[email protected]") I don't understand how I can create a block between these steps. Help please, thanks in advance
[ "Seems you main problem here is that you trying to use deprecated methods find_element_by_*. None of these is supported by Selenium 4.\nAlso code you shared is missing delays to wait for elements to become clickable etc.\nThe following short code works:\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(options=options, service=webdriver_service)\nwait = WebDriverWait(driver, 10)\n\nurl = \"https://shop.rtrpilates.com/\"\ndriver.get(url)\n\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, \".header__inline-menu a[href*='contact']\"))).click()\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, \".contact__fields input[type='email']\"))).send_keys(\"[email protected]\")\n\nThe result is\n\n" ]
[ 0 ]
[]
[]
[ "css_selectors", "python", "selenium", "selenium4", "webdriverwait" ]
stackoverflow_0074629118_css_selectors_python_selenium_selenium4_webdriverwait.txt
Q: I get invalid_grant error for mastodon/mastodon.py. How do I do Oauth2 instead? This code is taken almost verbatim from mastodon.py's README.md it always returns Traceback (most recent call last): File "C:\Users\matth\GitHub\feed_thing\again.py", line 32, in mastodon.log_in( File "C:\Users\matth.virtualenvs\feed_thing-LXMa84iN\lib\site-packages\mastodon\Mastodon.py", line 568, in log_in raise MastodonIllegalArgumentError('Invalid user name, password, or redirect_uris: %s' % e) mastodon.Mastodon.MastodonIllegalArgumentError: Invalid user name, password, or redirect_uris: ('Mastodon API returned error', 400, 'Bad Request', 'invalid_grant') UPDATE: I get this failure from mastodon.social, but not mstdn.social and not ohai.social. It turns out to be because of 2 factor auth and the sample code below is not a OAUTH2 dance. How would I replace the code below with an OAUTH2 dance? import os from mastodon import Mastodon USERNAME_AS_EMAIL = os.environ["USERNAME_AS_EMAIL"] PASSWORD = os.environ["PASSWORD"] APP_NAME = "APP_NAME" Mastodon.create_app( APP_NAME, api_base_url = 'https://mastodon.social', to_file = 'pytooter_clientcred.secret' ) mastodon = Mastodon( client_id = 'pytooter_clientcred.secret', api_base_url = 'https://mastodon.social' ) mastodon.log_in( USERNAME_AS_EMAIL, PASSWORD, to_file = 'pytooter_usercred.secret' ) mastodon = Mastodon( client_id = 'pytooter_clientcred.secret', access_token = 'pytooter_usercred.secret', api_base_url = 'https://mastodon.social' ) A: I found part of the solution. If 2 factor auth is enabled on your account, mastodon.py (as of today) can't handle login. If you disable 2 factor auth, then you can login as expected. Another hint for people who find this, sometimes you need to delete the *.secret files for mastodon.py to work (i.e. if you've changed server names but didn't change the .secret files)
I get invalid_grant error for mastodon/mastodon.py. How do I do Oauth2 instead?
This code is taken almost verbatim from mastodon.py's README.md it always returns Traceback (most recent call last): File "C:\Users\matth\GitHub\feed_thing\again.py", line 32, in mastodon.log_in( File "C:\Users\matth.virtualenvs\feed_thing-LXMa84iN\lib\site-packages\mastodon\Mastodon.py", line 568, in log_in raise MastodonIllegalArgumentError('Invalid user name, password, or redirect_uris: %s' % e) mastodon.Mastodon.MastodonIllegalArgumentError: Invalid user name, password, or redirect_uris: ('Mastodon API returned error', 400, 'Bad Request', 'invalid_grant') UPDATE: I get this failure from mastodon.social, but not mstdn.social and not ohai.social. It turns out to be because of 2 factor auth and the sample code below is not a OAUTH2 dance. How would I replace the code below with an OAUTH2 dance? import os from mastodon import Mastodon USERNAME_AS_EMAIL = os.environ["USERNAME_AS_EMAIL"] PASSWORD = os.environ["PASSWORD"] APP_NAME = "APP_NAME" Mastodon.create_app( APP_NAME, api_base_url = 'https://mastodon.social', to_file = 'pytooter_clientcred.secret' ) mastodon = Mastodon( client_id = 'pytooter_clientcred.secret', api_base_url = 'https://mastodon.social' ) mastodon.log_in( USERNAME_AS_EMAIL, PASSWORD, to_file = 'pytooter_usercred.secret' ) mastodon = Mastodon( client_id = 'pytooter_clientcred.secret', access_token = 'pytooter_usercred.secret', api_base_url = 'https://mastodon.social' )
[ "I found part of the solution. If 2 factor auth is enabled on your account, mastodon.py (as of today) can't handle login. If you disable 2 factor auth, then you can login as expected.\nAnother hint for people who find this, sometimes you need to delete the *.secret files for mastodon.py to work (i.e. if you've changed server names but didn't change the .secret files)\n" ]
[ 1 ]
[]
[]
[ "mastodon", "mastodon_py", "python" ]
stackoverflow_0074631471_mastodon_mastodon_py_python.txt
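For the OAuth2 part of the question above (which the accepted answer sidesteps by disabling 2FA), Mastodon.py ships helpers for the authorization-code flow: auth_request_url() builds the URL the user visits, and log_in(code=...) exchanges the pasted code for a token. The sketch below is only an outline; the secret-file names are illustrative and exact signatures can differ between Mastodon.py versions, so check the docs for the version installed:
from mastodon import Mastodon

API_BASE = "https://mastodon.social"

# One-time app registration, as in the question.
Mastodon.create_app("APP_NAME", api_base_url=API_BASE,
                    to_file="pytooter_clientcred.secret")

mastodon = Mastodon(client_id="pytooter_clientcred.secret",
                    api_base_url=API_BASE)

# Step 1: the user opens this URL in a browser; with the default
# out-of-band redirect URI the server displays a code to copy back.
print("Authorize here:", mastodon.auth_request_url())
code = input("Paste the authorization code: ").strip()

# Step 2: exchange the code for an access token. No password is sent,
# so accounts with 2FA enabled work as well.
mastodon.log_in(code=code, to_file="pytooter_usercred.secret")

# Afterwards, use the saved user credential exactly as in the question.
mastodon = Mastodon(access_token="pytooter_usercred.secret",
                    api_base_url=API_BASE)
print(mastodon.account_verify_credentials()["username"])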
Q: Traceback (most recent call last): _tkinter.TclError: cannot use geometry manager grid inside . which already has slaves managed by pack in python I have a problem with my code in Python I want to transfer the data after entering it into the excel file But I get an error with the title: Traceback (most recent call last): File "C:\Users\yasen\Desktop\برمجة\graphics\main11.py", line 60, in firstname_text = Label(text = "الاسم الاول * ",).grid(row=0) File "C:\Users\yasen\AppData\Local\Programs\Python\Python310\lib\tkinter_init_.py", line 2522, in grid_configure self.tk.call( _tkinter.TclError: cannot use geometry manager grid inside . which already has slaves managed by pack my code from tkinter import * import pandas as pd def send_info(): path = 'Registers.form.xlsx' df1 = pd.read_excel(path) SeriesA = df1['firstname'] SeriesB = df1['secondname'] SeriesC = df1['thirdname'] SeriesD = df1['age'] SeriesE = df1['phonenumber'] SeriesF = df1['Departmentname'] SeriesG = df1['familymembers'] SeriesH = df1['bookingdate'] A = pd.Series(firstname.get) B = pd.Series(secondname.get) C = pd.Series(thirdname.get) D = pd.Series(age.get) E = pd.Series(phonenumber.get) F = pd.Series(Departmentname.get) G = pd.Series(familymembers.get) H = pd.Series(bookingdate.get) SeriesA = SeriesA.append(A) SeriesB = SeriesA.append(B) SeriesC = SeriesA.append(C) SeriesD = SeriesA.append(D) SeriesE = SeriesA.append(E) SeriesF = SeriesA.append(F) SeriesG = SeriesA.append(G) SeriesH = SeriesA.append(H) df2 = pd.DataFrame({"firstname": SeriesA, "secondnamer": SeriesB, "thirdname": SeriesC, "age": SeriesD, "phonenumber": SeriesE, "Departmentname": SeriesF, "familymembers": SeriesG, "bookingdate": SeriesH}) df2.to_excel(path, index=False) firstname_entry.delete(0, END) secondname_entry.delete(0, END) thirdname_entry.delete(0, END) age_entry.delete(0, END) phonenumber_entry.delete(0, END) Departmentname_entry.delete(0, END) familymembers_entry.delete(0, END) bookingdate_entry.delete(0, END) screen = Tk() screen.geometry("1080x1080") screen.title("مديرية شؤون البطاقة الوطنية") welcome_text = Label(text="أهلا وسهلا بكم في الحجز الالكتروني للبطاقة الموحدة ", fg="white", bg="green",height=2 ,width=1080) welcome_text.pack() firstname_text = Label(text = "الاسم الاول * ",).grid(row=0) secondname_text = Label(text = "الاسم الثاني * ",).grid(row=1) thirdname_text = Label(text = "الاسم الثالث * ",).grid(row=2) age_text = Label(text = "العمر * ",).grid(row=3) phonenumber_text = Label(text = "رقم الهاتف * ",).grid(row=4) Departmentname_text = Label(text = "اسم دائرة الاحوال المدنية * ",).grid(row=5) familymembers_text = Label(text = "عدد افراد الاسرة * ",).grid(row=6) bookingdate_text = Label(text = "موعد تاريخ الحجز المطلوب * ",).grid(row=7) firstname_text.place(x = 540, y = 70) secondname_text.place(x = 540, y = 130) thirdname_text.place(x = 540, y = 190) age_text.place(x = 540, y = 250) phonenumber_text.place(x = 540, y = 310) Departmentname_text.place(x = 540, y = 370) familymembers_text.place(x = 540, y = 430) bookingdate_text.place(x = 540, y = 490) firstname = StringVar() secondname = StringVar() thirdname = StringVar() age = StringVar() phonenumber = StringVar() Departmentname = StringVar() familymembers = StringVar() bookingdate = StringVar() firstname_entry = Entry(textvariable = firstname, width = "40") secondname_entry = Entry(textvariable = secondname, width = "40") thirdname_entry = Entry(textvariable = thirdname, width = "40") age_entry = Entry(textvariable = age, width = "40") phonenumber_entry = Entry(textvariable = 
phonenumber, width = "40") Departmentname_entry = Entry(textvariable = Departmentname, width = "40") familymembers_entry = Entry(textvariable = familymembers, width = "40") bookingdate_entry = Entry(textvariable = bookingdate, width = "40") firstname_entry.place(x = 450, y = 100) secondname_entry.place(x = 450, y = 160) thirdname_entry.place(x = 450, y = 220) age_entry.place(x = 450, y = 280) phonenumber_entry.place(x = 450, y = 340) Departmentname_entry.place(x = 450, y = 400) familymembers_entry.place(x = 450, y = 460) bookingdate_entry.place(x = 450, y = 520) firstname_entry.grid(row=0,column=1) secondname_entry.grid(row=1,column=1) thirdname_entry.grid(row=2,column=1) age_entry.grid(row=3,column=1) phonenumber_entry.grid(row=4,column=1) Departmentname_entry.grid(row=5,column=1) familymembers_entry.grid(row=6,column=1) bookingdate_entry.grid(row=7,column=1) register = Button(screen,text = "Register", width = "60", height = "2", command = send_info, bg = "grey").grid(row=3,column=1, pady=4) register.place(x = 360, y = 600) screen.mainloop () A: As I just said on FB - you cant miy .place, .pack. and .grid from tkinter import * import pandas as pd def send_info(): path = 'Registers.form.xlsx' df1 = pd.read_excel(path) SeriesA = df1['firstname'] SeriesB = df1['secondname'] SeriesC = df1['thirdname'] SeriesD = df1['age'] SeriesE = df1['phonenumber'] SeriesF = df1['Departmentname'] SeriesG = df1['familymembers'] SeriesH = df1['bookingdate'] A = pd.Series(firstname.get) B = pd.Series(secondname.get) C = pd.Series(thirdname.get) D = pd.Series(age.get) E = pd.Series(phonenumber.get) F = pd.Series(Departmentname.get) G = pd.Series(familymembers.get) H = pd.Series(bookingdate.get) SeriesA = SeriesA.append(A) SeriesB = SeriesA.append(B) SeriesC = SeriesA.append(C) SeriesD = SeriesA.append(D) SeriesE = SeriesA.append(E) SeriesF = SeriesA.append(F) SeriesG = SeriesA.append(G) SeriesH = SeriesA.append(H) df2 = pd.DataFrame( {"firstname": SeriesA, "secondnamer": SeriesB, "thirdname": SeriesC, "age": SeriesD, "phonenumber": SeriesE, "Departmentname": SeriesF, "familymembers": SeriesG, "bookingdate": SeriesH}) df2.to_excel(path, index=False) firstname_entry.delete(0, END) secondname_entry.delete(0, END) thirdname_entry.delete(0, END) age_entry.delete(0, END) phonenumber_entry.delete(0, END) Departmentname_entry.delete(0, END) familymembers_entry.delete(0, END) bookingdate_entry.delete(0, END) screen = Tk() screen.geometry("1080x1080") screen.title("مديرية شؤون البطاقة الوطنية") welcome_text = Label(text="أهلا وسهلا بكم في الحجز الالكتروني للبطاقة الموحدة ", fg="white", bg="green", height=2, width=1080) firstname_text = Label(text="الاسم الاول * ", ) secondname_text = Label(text="الاسم الثاني * ", ) thirdname_text = Label(text="الاسم الثالث * ", ) age_text = Label(text="العمر * ", ) phonenumber_text = Label(text="رقم الهاتف * ", ) Departmentname_text = Label(text="اسم دائرة الاحوال المدنية * ", ) familymembers_text = Label(text="عدد افراد الاسرة * ", ) bookingdate_text = Label(text="موعد تاريخ الحجز المطلوب * ", ) firstname = StringVar() firstname.set("Test") secondname = StringVar() thirdname = StringVar() age = StringVar() phonenumber = StringVar() Departmentname = StringVar() familymembers = StringVar() bookingdate = StringVar() firstname_entry = Entry(textvariable=firstname, width=40) secondname_entry = Entry(textvariable=secondname, width=40) thirdname_entry = Entry(textvariable=thirdname, width=40) age_entry = Entry(textvariable=age, width=40) phonenumber_entry = Entry(textvariable=phonenumber, 
width=40) Departmentname_entry = Entry(textvariable=Departmentname, width=40) familymembers_entry = Entry(textvariable=familymembers, width=40) bookingdate_entry = Entry(textvariable=bookingdate, width=40) firstname_entry.grid(row=0, column=1) secondname_entry.grid(row=1, column=1) thirdname_entry.grid(row=2, column=1) age_entry.grid(row=3, column=1) phonenumber_entry.grid(row=4, column=1) Departmentname_entry.grid(row=5, column=1) familymembers_entry.grid(row=6, column=1) bookingdate_entry.grid(row=7, column=1) '''register = Button(screen, text="Register", width="60", height="2", command=send_info, bg="grey").grid(row=3, column=1, pady=4) register.place(x=360, y=600)''' screen.mainloop() A: This is what you want, like this. You see Label on green background and gray button. I remove all widgets grid() because of that saying cannot use geometry manager grid inside . which already has slaves managed by pack Code: from tkinter import * import pandas as pd def send_info(): path = 'Registers.form.xlsx' df1 = pd.read_excel(path) SeriesA = df1['firstname'] SeriesB = df1['secondname'] SeriesC = df1['thirdname'] SeriesD = df1['age'] SeriesE = df1['phonenumber'] SeriesF = df1['Departmentname'] SeriesG = df1['familymembers'] SeriesH = df1['bookingdate'] A = pd.Series(firstname.get) B = pd.Series(secondname.get) C = pd.Series(thirdname.get) D = pd.Series(age.get) E = pd.Series(phonenumber.get) F = pd.Series(Departmentname.get) G = pd.Series(familymembers.get) H = pd.Series(bookingdate.get) SeriesA = SeriesA.append(A) SeriesB = SeriesA.append(B) SeriesC = SeriesA.append(C) SeriesD = SeriesA.append(D) SeriesE = SeriesA.append(E) SeriesF = SeriesA.append(F) SeriesG = SeriesA.append(G) SeriesH = SeriesA.append(H) df2 = pd.DataFrame({"firstname": SeriesA, "secondnamer": SeriesB, "thirdname": SeriesC, "age": SeriesD, "phonenumber": SeriesE, "Departmentname": SeriesF, "familymembers": SeriesG, "bookingdate": SeriesH}) df2.to_excel(path, index=False) firstname_entry.delete(0, END) secondname_entry.delete(0, END) thirdname_entry.delete(0, END) age_entry.delete(0, END) phonenumber_entry.delete(0, END) Departmentname_entry.delete(0, END) familymembers_entry.delete(0, END) bookingdate_entry.delete(0, END) screen = Tk() screen.geometry("1080x1080") screen.title("مديرية شؤون البطاقة الوطنية") welcome_text = Label(text="أهلا وسهلا بكم في الحجز الالكتروني للبطاقة الموحدة ", fg="white", bg="green",height=2 ,width=1080) welcome_text.pack() firstname_text = Label(text="الاسم الاول * ") secondname_text = Label(text="الاسم الثاني * ") thirdname_text = Label(text="الاسم الثالث * ") age_text = Label(text="العمر * ") phonenumber_text = Label(text="رقم الهاتف * ") Departmentname_text = Label(text="اسم دائرة الاحوال المدنية * ") familymembers_text = Label(text="عدد افراد الاسرة * ") bookingdate_text = Label(text="موعد تاريخ الحجز المطلوب * ") firstname_text.place(x=540, y=70) secondname_text.place(x=540, y=130) thirdname_text.place(x=540, y=190) age_text.place(x=540, y=250) phonenumber_text.place(x=540, y=310) Departmentname_text.place(x=540, y=370) familymembers_text.place(x=540, y=430) bookingdate_text.place(x=540, y=490) firstname = StringVar() secondname = StringVar() thirdname = StringVar() age = StringVar() phonenumber = StringVar() Departmentname = StringVar() familymembers = StringVar() bookingdate = StringVar() firstname_entry = Entry(textvariable=firstname, width="40") secondname_entry = Entry(textvariable=secondname, width="40") thirdname_entry = Entry(textvariable=thirdname, width="40") age_entry = 
Entry(textvariable=age, width="40") phonenumber_entry = Entry(textvariable=phonenumber, width="40") Departmentname_entry = Entry(textvariable=Departmentname, width="40") familymembers_entry = Entry(textvariable=familymembers, width="40") bookingdate_entry = Entry(textvariable=bookingdate, width="40") firstname_entry.place(x=450, y=100) secondname_entry.place(x=450, y=160) thirdname_entry.place(x=450, y=220) age_entry.place(x=450, y=280) phonenumber_entry.place(x=450, y=340) Departmentname_entry.place(x=450, y=400) familymembers_entry.place(x=450, y=460) bookingdate_entry.place(x= 450, y=520) register = Button(screen,text="Register", width="60", height="2", command=send_info, bg="grey") register.place(x=360, y=600) screen.mainloop() Result: A: This is because Tkinter doesn't allow to use more than one placement of a widget in the whole script. So, if you want to run your code, then you'll have to remove either the grid() function everywhere and replace it with the pack() function or the remove the pack() function and replace it with the grid() function.
Traceback (most recent call last): _tkinter.TclError: cannot use geometry manager grid inside . which already has slaves managed by pack in python
I have a problem with my code in Python I want to transfer the data after entering it into the excel file But I get an error with the title: Traceback (most recent call last): File "C:\Users\yasen\Desktop\برمجة\graphics\main11.py", line 60, in firstname_text = Label(text = "الاسم الاول * ",).grid(row=0) File "C:\Users\yasen\AppData\Local\Programs\Python\Python310\lib\tkinter_init_.py", line 2522, in grid_configure self.tk.call( _tkinter.TclError: cannot use geometry manager grid inside . which already has slaves managed by pack my code from tkinter import * import pandas as pd def send_info(): path = 'Registers.form.xlsx' df1 = pd.read_excel(path) SeriesA = df1['firstname'] SeriesB = df1['secondname'] SeriesC = df1['thirdname'] SeriesD = df1['age'] SeriesE = df1['phonenumber'] SeriesF = df1['Departmentname'] SeriesG = df1['familymembers'] SeriesH = df1['bookingdate'] A = pd.Series(firstname.get) B = pd.Series(secondname.get) C = pd.Series(thirdname.get) D = pd.Series(age.get) E = pd.Series(phonenumber.get) F = pd.Series(Departmentname.get) G = pd.Series(familymembers.get) H = pd.Series(bookingdate.get) SeriesA = SeriesA.append(A) SeriesB = SeriesA.append(B) SeriesC = SeriesA.append(C) SeriesD = SeriesA.append(D) SeriesE = SeriesA.append(E) SeriesF = SeriesA.append(F) SeriesG = SeriesA.append(G) SeriesH = SeriesA.append(H) df2 = pd.DataFrame({"firstname": SeriesA, "secondnamer": SeriesB, "thirdname": SeriesC, "age": SeriesD, "phonenumber": SeriesE, "Departmentname": SeriesF, "familymembers": SeriesG, "bookingdate": SeriesH}) df2.to_excel(path, index=False) firstname_entry.delete(0, END) secondname_entry.delete(0, END) thirdname_entry.delete(0, END) age_entry.delete(0, END) phonenumber_entry.delete(0, END) Departmentname_entry.delete(0, END) familymembers_entry.delete(0, END) bookingdate_entry.delete(0, END) screen = Tk() screen.geometry("1080x1080") screen.title("مديرية شؤون البطاقة الوطنية") welcome_text = Label(text="أهلا وسهلا بكم في الحجز الالكتروني للبطاقة الموحدة ", fg="white", bg="green",height=2 ,width=1080) welcome_text.pack() firstname_text = Label(text = "الاسم الاول * ",).grid(row=0) secondname_text = Label(text = "الاسم الثاني * ",).grid(row=1) thirdname_text = Label(text = "الاسم الثالث * ",).grid(row=2) age_text = Label(text = "العمر * ",).grid(row=3) phonenumber_text = Label(text = "رقم الهاتف * ",).grid(row=4) Departmentname_text = Label(text = "اسم دائرة الاحوال المدنية * ",).grid(row=5) familymembers_text = Label(text = "عدد افراد الاسرة * ",).grid(row=6) bookingdate_text = Label(text = "موعد تاريخ الحجز المطلوب * ",).grid(row=7) firstname_text.place(x = 540, y = 70) secondname_text.place(x = 540, y = 130) thirdname_text.place(x = 540, y = 190) age_text.place(x = 540, y = 250) phonenumber_text.place(x = 540, y = 310) Departmentname_text.place(x = 540, y = 370) familymembers_text.place(x = 540, y = 430) bookingdate_text.place(x = 540, y = 490) firstname = StringVar() secondname = StringVar() thirdname = StringVar() age = StringVar() phonenumber = StringVar() Departmentname = StringVar() familymembers = StringVar() bookingdate = StringVar() firstname_entry = Entry(textvariable = firstname, width = "40") secondname_entry = Entry(textvariable = secondname, width = "40") thirdname_entry = Entry(textvariable = thirdname, width = "40") age_entry = Entry(textvariable = age, width = "40") phonenumber_entry = Entry(textvariable = phonenumber, width = "40") Departmentname_entry = Entry(textvariable = Departmentname, width = "40") familymembers_entry = Entry(textvariable = 
familymembers, width = "40") bookingdate_entry = Entry(textvariable = bookingdate, width = "40") firstname_entry.place(x = 450, y = 100) secondname_entry.place(x = 450, y = 160) thirdname_entry.place(x = 450, y = 220) age_entry.place(x = 450, y = 280) phonenumber_entry.place(x = 450, y = 340) Departmentname_entry.place(x = 450, y = 400) familymembers_entry.place(x = 450, y = 460) bookingdate_entry.place(x = 450, y = 520) firstname_entry.grid(row=0,column=1) secondname_entry.grid(row=1,column=1) thirdname_entry.grid(row=2,column=1) age_entry.grid(row=3,column=1) phonenumber_entry.grid(row=4,column=1) Departmentname_entry.grid(row=5,column=1) familymembers_entry.grid(row=6,column=1) bookingdate_entry.grid(row=7,column=1) register = Button(screen,text = "Register", width = "60", height = "2", command = send_info, bg = "grey").grid(row=3,column=1, pady=4) register.place(x = 360, y = 600) screen.mainloop ()
[ "As I just said on FB - you cant miy .place, .pack. and .grid\nfrom tkinter import *\nimport pandas as pd\n \n \n def send_info():\n path = 'Registers.form.xlsx'\n df1 = pd.read_excel(path)\n SeriesA = df1['firstname']\n SeriesB = df1['secondname']\n SeriesC = df1['thirdname']\n SeriesD = df1['age']\n SeriesE = df1['phonenumber']\n SeriesF = df1['Departmentname']\n SeriesG = df1['familymembers']\n SeriesH = df1['bookingdate']\n A = pd.Series(firstname.get)\n B = pd.Series(secondname.get)\n C = pd.Series(thirdname.get)\n D = pd.Series(age.get)\n E = pd.Series(phonenumber.get)\n F = pd.Series(Departmentname.get)\n G = pd.Series(familymembers.get)\n H = pd.Series(bookingdate.get)\n SeriesA = SeriesA.append(A)\n SeriesB = SeriesA.append(B)\n SeriesC = SeriesA.append(C)\n SeriesD = SeriesA.append(D)\n SeriesE = SeriesA.append(E)\n SeriesF = SeriesA.append(F)\n SeriesG = SeriesA.append(G)\n SeriesH = SeriesA.append(H)\n df2 = pd.DataFrame(\n {\"firstname\": SeriesA, \"secondnamer\": SeriesB, \"thirdname\": SeriesC, \"age\": SeriesD, \"phonenumber\": SeriesE,\n \"Departmentname\": SeriesF, \"familymembers\": SeriesG, \"bookingdate\": SeriesH})\n df2.to_excel(path, index=False)\n \n firstname_entry.delete(0, END)\n secondname_entry.delete(0, END)\n thirdname_entry.delete(0, END)\n age_entry.delete(0, END)\n phonenumber_entry.delete(0, END)\n Departmentname_entry.delete(0, END)\n familymembers_entry.delete(0, END)\n bookingdate_entry.delete(0, END)\n \n \n screen = Tk()\n screen.geometry(\"1080x1080\")\n screen.title(\"مديرية شؤون البطاقة الوطنية\")\n welcome_text = Label(text=\"أهلا وسهلا بكم في الحجز الالكتروني للبطاقة الموحدة \", fg=\"white\", bg=\"green\", height=2,\n width=1080)\n \n firstname_text = Label(text=\"الاسم الاول * \", )\n secondname_text = Label(text=\"الاسم الثاني * \", )\n thirdname_text = Label(text=\"الاسم الثالث * \", )\n age_text = Label(text=\"العمر * \", )\n phonenumber_text = Label(text=\"رقم الهاتف * \", )\n Departmentname_text = Label(text=\"اسم دائرة الاحوال المدنية * \", )\n familymembers_text = Label(text=\"عدد افراد الاسرة * \", )\n bookingdate_text = Label(text=\"موعد تاريخ الحجز المطلوب * \", )\n \n \n firstname = StringVar()\n firstname.set(\"Test\")\n secondname = StringVar()\n thirdname = StringVar()\n age = StringVar()\n phonenumber = StringVar()\n Departmentname = StringVar()\n familymembers = StringVar()\n bookingdate = StringVar()\n \n firstname_entry = Entry(textvariable=firstname, width=40)\n secondname_entry = Entry(textvariable=secondname, width=40)\n thirdname_entry = Entry(textvariable=thirdname, width=40)\n age_entry = Entry(textvariable=age, width=40)\n phonenumber_entry = Entry(textvariable=phonenumber, width=40)\n Departmentname_entry = Entry(textvariable=Departmentname, width=40)\n familymembers_entry = Entry(textvariable=familymembers, width=40)\n bookingdate_entry = Entry(textvariable=bookingdate, width=40)\n \n firstname_entry.grid(row=0, column=1)\n secondname_entry.grid(row=1, column=1)\n thirdname_entry.grid(row=2, column=1)\n age_entry.grid(row=3, column=1)\n phonenumber_entry.grid(row=4, column=1)\n Departmentname_entry.grid(row=5, column=1)\n familymembers_entry.grid(row=6, column=1)\n bookingdate_entry.grid(row=7, column=1)\n \n '''register = Button(screen, text=\"Register\", width=\"60\", height=\"2\", command=send_info, bg=\"grey\").grid(row=3, column=1,\n pady=4)\n register.place(x=360, y=600)'''\n \n screen.mainloop()\n\n", "This is what you want, like this. You see Label on green background and gray button. 
I remove all widgets grid() because of that saying cannot use geometry manager grid inside . which already has slaves managed by pack\nCode:\nfrom tkinter import *\nimport pandas as pd\n\ndef send_info():\n path = 'Registers.form.xlsx'\n df1 = pd.read_excel(path)\n SeriesA = df1['firstname']\n SeriesB = df1['secondname']\n SeriesC = df1['thirdname']\n SeriesD = df1['age']\n SeriesE = df1['phonenumber']\n SeriesF = df1['Departmentname']\n SeriesG = df1['familymembers']\n SeriesH = df1['bookingdate']\n A = pd.Series(firstname.get)\n B = pd.Series(secondname.get)\n C = pd.Series(thirdname.get)\n D = pd.Series(age.get)\n E = pd.Series(phonenumber.get)\n F = pd.Series(Departmentname.get)\n G = pd.Series(familymembers.get)\n H = pd.Series(bookingdate.get)\n SeriesA = SeriesA.append(A)\n SeriesB = SeriesA.append(B)\n SeriesC = SeriesA.append(C)\n SeriesD = SeriesA.append(D)\n SeriesE = SeriesA.append(E)\n SeriesF = SeriesA.append(F)\n SeriesG = SeriesA.append(G)\n SeriesH = SeriesA.append(H)\n df2 = pd.DataFrame({\"firstname\": SeriesA, \"secondnamer\": SeriesB, \"thirdname\": SeriesC, \"age\": SeriesD, \"phonenumber\": SeriesE, \"Departmentname\": SeriesF, \"familymembers\": SeriesG, \"bookingdate\": SeriesH})\n df2.to_excel(path, index=False)\n\n firstname_entry.delete(0, END)\n secondname_entry.delete(0, END)\n thirdname_entry.delete(0, END)\n age_entry.delete(0, END)\n phonenumber_entry.delete(0, END)\n Departmentname_entry.delete(0, END)\n familymembers_entry.delete(0, END)\n bookingdate_entry.delete(0, END)\n \nscreen = Tk()\nscreen.geometry(\"1080x1080\")\nscreen.title(\"مديرية شؤون البطاقة الوطنية\")\nwelcome_text = Label(text=\"أهلا وسهلا بكم في الحجز الالكتروني للبطاقة الموحدة \", fg=\"white\", bg=\"green\",height=2 ,width=1080)\nwelcome_text.pack()\n \nfirstname_text = Label(text=\"الاسم الاول * \")\nsecondname_text = Label(text=\"الاسم الثاني * \")\nthirdname_text = Label(text=\"الاسم الثالث * \")\nage_text = Label(text=\"العمر * \")\nphonenumber_text = Label(text=\"رقم الهاتف * \")\nDepartmentname_text = Label(text=\"اسم دائرة الاحوال المدنية * \")\nfamilymembers_text = Label(text=\"عدد افراد الاسرة * \")\nbookingdate_text = Label(text=\"موعد تاريخ الحجز المطلوب * \")\n\nfirstname_text.place(x=540, y=70)\nsecondname_text.place(x=540, y=130)\nthirdname_text.place(x=540, y=190)\nage_text.place(x=540, y=250)\nphonenumber_text.place(x=540, y=310)\nDepartmentname_text.place(x=540, y=370)\nfamilymembers_text.place(x=540, y=430)\nbookingdate_text.place(x=540, y=490)\n\nfirstname = StringVar()\nsecondname = StringVar()\nthirdname = StringVar()\nage = StringVar()\nphonenumber = StringVar()\nDepartmentname = StringVar()\nfamilymembers = StringVar()\nbookingdate = StringVar()\n\nfirstname_entry = Entry(textvariable=firstname, width=\"40\")\nsecondname_entry = Entry(textvariable=secondname, width=\"40\")\nthirdname_entry = Entry(textvariable=thirdname, width=\"40\")\nage_entry = Entry(textvariable=age, width=\"40\")\nphonenumber_entry = Entry(textvariable=phonenumber, width=\"40\")\nDepartmentname_entry = Entry(textvariable=Departmentname, width=\"40\")\nfamilymembers_entry = Entry(textvariable=familymembers, width=\"40\")\nbookingdate_entry = Entry(textvariable=bookingdate, width=\"40\")\n\nfirstname_entry.place(x=450, y=100)\nsecondname_entry.place(x=450, y=160)\nthirdname_entry.place(x=450, y=220)\nage_entry.place(x=450, y=280)\nphonenumber_entry.place(x=450, y=340)\nDepartmentname_entry.place(x=450, y=400)\nfamilymembers_entry.place(x=450, y=460)\nbookingdate_entry.place(x= 450, 
y=520)\n\nregister = Button(screen,text=\"Register\", width=\"60\", height=\"2\", command=send_info, bg=\"grey\")\nregister.place(x=360, y=600)\n\nscreen.mainloop()\n\nResult:\n\n", "This is because Tkinter doesn't allow to use more than one placement of a widget in the whole script. So, if you want to run your code, then you'll have to remove either the grid() function everywhere and replace it with the pack() function or the remove the pack() function and replace it with the grid() function.\n" ]
[ 0, 0, -2 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0071792488_python_tkinter.txt
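A small addendum to the tkinter question above, since the traceback is really about mixing geometry managers: pack, grid and place can coexist in one window as long as each container only uses one of them. The usual pattern is to pack a Frame and then grid the form widgets inside that Frame. This is a minimal illustrative sketch, not the asker's full form:
from tkinter import Tk, Frame, Label, Entry, Button

root = Tk()

# The banner is the only child of the root window managed by pack.
Label(root, text="Welcome", bg="green", fg="white").pack(fill="x")

# Every form widget lives inside this frame and uses grid there,
# which does not conflict with pack on the root window.
form = Frame(root)
form.pack(padx=20, pady=20)

Label(form, text="Name").grid(row=0, column=0, sticky="e")
Entry(form, width=40).grid(row=0, column=1)
Button(form, text="Register").grid(row=1, column=1, pady=8)

root.mainloop()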
Q: Python: daily average function I am trying to make a function that returns a list/array with the daily averages for a variable from either one of 3 csv files Each csv file is similar to this: date, time, variable1, variable2, variable3 2021-01-01,01:00:00,1.43738,25.838,22.453 2021-01-01,02:00:00,2.08652,21.028,19.099 2021-01-01,03:00:00,1.39101,23.18,20.925 2021-01-01,04:00:00,0.76506,22.053,19.974 The date contains the entire year of 2021 with increments of 1 hour def daily_average(data, station, variable): The function has 3 parameters: data station: One of the 3 csv files variable: Either variable 1 or 2 or 3 Libraries such as datetime, calendar and numpy can be used Pandas can also be used A: Well. First of all - try to do it yourself before asking a question. It will help you to learn things. But now to your question. csv_lines_test = [ "date, time, variable1, variable2, variable3\n", "2021-01-01,01:00:00,1.43738,25.838,22.453\n", "2021-01-01,02:00:00,2.08652,21.028,19.099\n", "2021-01-01,03:00:00,1.39101,23.18,20.925\n", "2021-01-01,04:00:00,0.76506,22.053,19.974\n", ] import datetime as dt def daily_average(csv_lines, date, variable_num): # variable_num should be 1-3 avg_arr = [] # Read csv file line by line. for i, line in enumerate(csv_lines): if i == 0: # Skip headers continue line = line.rstrip() values = line.split(',') date_csv = dt.datetime.strptime(values[0], "%Y-%m-%d").date() val_arr = [float(val) for val in values[2:]] if date == date_csv: avg_arr.append(val_arr[variable_num-1]) return sum(avg_arr) / len(avg_arr) avg = daily_average(csv_lines_test, dt.date(2021, 1, 1), 1) print(avg) If you want to read data directly from csv file: with open("csv_file_path.csv", 'r') as f: data = [line for line in f] avg = daily_average(data, dt.date(2021, 1, 1), 1) print(avg)
Python: daily average function
I am trying to make a function that returns a list/array with the daily averages for a variable from any one of 3 CSV files. Each CSV file is similar to this: date, time, variable1, variable2, variable3 2021-01-01,01:00:00,1.43738,25.838,22.453 2021-01-01,02:00:00,2.08652,21.028,19.099 2021-01-01,03:00:00,1.39101,23.18,20.925 2021-01-01,04:00:00,0.76506,22.053,19.974 The date contains the entire year of 2021 with increments of 1 hour. def daily_average(data, station, variable): The function has 3 parameters: data; station: one of the 3 CSV files; variable: either variable 1, 2 or 3. Libraries such as datetime, calendar and numpy can be used; pandas can also be used.
[ "Well.\nFirst of all - try to do it yourself before asking a question.\nIt will help you to learn things.\nBut now to your question.\ncsv_lines_test = [\n\"date, time, variable1, variable2, variable3\\n\",\n\"2021-01-01,01:00:00,1.43738,25.838,22.453\\n\",\n\"2021-01-01,02:00:00,2.08652,21.028,19.099\\n\",\n\"2021-01-01,03:00:00,1.39101,23.18,20.925\\n\",\n\"2021-01-01,04:00:00,0.76506,22.053,19.974\\n\",\n]\n\nimport datetime as dt\n\ndef daily_average(csv_lines, date, variable_num):\n # variable_num should be 1-3\n avg_arr = []\n # Read csv file line by line.\n for i, line in enumerate(csv_lines):\n if i == 0:\n # Skip headers\n continue\n line = line.rstrip()\n values = line.split(',')\n date_csv = dt.datetime.strptime(values[0], \"%Y-%m-%d\").date()\n val_arr = [float(val) for val in values[2:]]\n if date == date_csv:\n avg_arr.append(val_arr[variable_num-1])\n return sum(avg_arr) / len(avg_arr)\n\n\navg = daily_average(csv_lines_test, dt.date(2021, 1, 1), 1)\nprint(avg)\n\nIf you want to read data directly from csv file:\nwith open(\"csv_file_path.csv\", 'r') as f:\n data = [line for line in f]\n avg = daily_average(data, dt.date(2021, 1, 1), 1)\n print(avg)\n\n" ]
[ 0 ]
[]
[]
[ "average", "function", "numpy", "pandas", "python" ]
stackoverflow_0074632400_average_function_numpy_pandas_python.txt
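Since the daily-average question above explicitly allows pandas, a groupby on the date column is probably the shortest route and may be worth noting next to the pure-Python answer. A sketch, assuming the CSV really has the header shown in the question (note the spaces after the commas, handled by skipinitialspace):
import pandas as pd

def daily_average(csv_path, variable):
    # variable is a column name such as "variable1".
    df = pd.read_csv(csv_path, skipinitialspace=True, parse_dates=["date"])
    # Mean per calendar day; .tolist() would turn it into a plain list.
    return df.groupby(df["date"].dt.date)[variable].mean()

# Example call (the path is a placeholder for one of the 3 station files):
# print(daily_average("station1_2021.csv", "variable2").head())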
Q: Read CSV file from Azure Blob Storage with out knowing the csv file name in python In azure Blob storage i have CSV files. I need to read those CSV files into dataframe. Csv file name vary every time. So i need to read csv from from azure blobstorage container folder. Folder name is constant but csv file name vary. A: Here is how you can read csv files to dataframes from azure.storage.blob import BlockBlobService import pandas as pd from io import StringIO STORAGEACCOUNTNAME= "<YOUR_STORAGE_ACCOUNTNAME>" STORAGEACCOUNTKEY= "<YOUR_STORAGE_ACCOUNT_KEY>" CONTAINERNAME= "<YOUR_CONTAINER_NAME>" BLOBNAME= "<BLOB_NAME>" blob_service=BlockBlobService(account_name=STORAGEACCOUNTNAME,account_key=STORAGEACCOUNTKEY) blobstring = blob_service.get_blob_to_text(CONTAINERNAME,BLOBNAME).content df = pd.read_csv(StringIO(blobstring)) print(df) RESULTS: REFERENCES: Explore data in Azure Blob storage with the pandas Python package A: To get this resolved. You may want to consider either having the CSV files with generic names so as to call them generically. But since you mentioned the CSV file name changes. I'd suggest saving only that CSV in the container then call it using the code below: file_loc = "wasbs://<continer name>@<storage account name>.blob.core.windows.net/*.csv df = pd.read_csv(file_loc)
Read CSV file from Azure Blob Storage without knowing the CSV file name in Python
In Azure Blob Storage I have CSV files. I need to read those CSV files into a dataframe. The CSV file name varies every time, so I need to read the CSV from the Azure Blob Storage container folder. The folder name is constant but the CSV file name varies.
[ "Here is how you can read csv files to dataframes\nfrom azure.storage.blob import BlockBlobService\nimport pandas as pd\nfrom io import StringIO\n\nSTORAGEACCOUNTNAME= \"<YOUR_STORAGE_ACCOUNTNAME>\"\nSTORAGEACCOUNTKEY= \"<YOUR_STORAGE_ACCOUNT_KEY>\"\nCONTAINERNAME= \"<YOUR_CONTAINER_NAME>\"\nBLOBNAME= \"<BLOB_NAME>\"\n\nblob_service=BlockBlobService(account_name=STORAGEACCOUNTNAME,account_key=STORAGEACCOUNTKEY)\n\nblobstring = blob_service.get_blob_to_text(CONTAINERNAME,BLOBNAME).content\ndf = pd.read_csv(StringIO(blobstring))\nprint(df)\n\n\nRESULTS:\n\nREFERENCES:\nExplore data in Azure Blob storage with the pandas Python package\n", "To get this resolved. You may want to consider either having the CSV files with generic names so as to call them generically. But since you mentioned the CSV file name changes. I'd suggest saving only that CSV in the container then call it using the code below:\nfile_loc = \"wasbs://<continer name>@<storage account name>.blob.core.windows.net/*.csv\ndf = pd.read_csv(file_loc)\n\n" ]
[ 0, 0 ]
[]
[]
[ "azure_blob_storage", "pandas", "python" ]
stackoverflow_0071935502_azure_blob_storage_pandas_python.txt
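On the Azure question above: the first answer uses the older BlockBlobService API and still needs a blob name, so it may help to show how the current azure-storage-blob (v12) SDK can list whatever .csv happens to sit under the fixed folder prefix. The connection string, container and folder names below are placeholders:
from io import BytesIO

import pandas as pd
from azure.storage.blob import BlobServiceClient

CONN_STR = "<your storage connection string>"
CONTAINER = "<container name>"
FOLDER = "myfolder/"  # the constant folder prefix

service = BlobServiceClient.from_connection_string(CONN_STR)
container = service.get_container_client(CONTAINER)

# Find the CSV blob(s) under the folder, whatever they are named today.
csv_names = [b.name for b in container.list_blobs(name_starts_with=FOLDER)
             if b.name.lower().endswith(".csv")]

frames = [pd.read_csv(BytesIO(container.download_blob(name).readall()))
          for name in csv_names]
df = pd.concat(frames, ignore_index=True) if frames else pd.DataFrame()
print(df.head())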
Q: How to separate string between semicolon and count length In a pandas data frame is there anyway to split a column by '; ' and count the string length, like in this example: Col1 Col2 123; 345 3; 3 54; 8903 2; 4 the result should be in an XLSX file. for index, row in df1.iterrows(): valid0 = row['Part_Number'] valid1 = valid0.split('; ') valid1 = [len(i) for i in valid1[:]] valid2 = str(valid1) valid3 = ''.join(valid2) df1['Part_Number_valid'] = df1['Part_Number'].replace({valid0:valid3}) A: Can you try this: df['col3']=df['Col1'].apply(lambda x: '; '.join([str(len(i)) for i in x.split('; ')])) ''' Col1 col3 0 123; 345 3; 3 1 54; 8903 2; 4 '''
How to separate string between semicolon and count length
In a pandas data frame, is there any way to split a column by '; ' and count the string length of each piece, like in this example: Col1 Col2 123; 345 3; 3 54; 8903 2; 4 The result should be in an XLSX file. for index, row in df1.iterrows(): valid0 = row['Part_Number'] valid1 = valid0.split('; ') valid1 = [len(i) for i in valid1[:]] valid2 = str(valid1) valid3 = ''.join(valid2) df1['Part_Number_valid'] = df1['Part_Number'].replace({valid0:valid3})
[ "Can you try this:\ndf['col3']=df['Col1'].apply(lambda x: '; '.join([str(len(i)) for i in x.split('; ')]))\n\n'''\n Col1 col3\n0 123; 345 3; 3\n1 54; 8903 2; 4\n\n\n'''\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074632209_pandas_python.txt
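The accepted one-liner above covers the splitting; the "result should be in an XLSX file" part of the question is not shown, so here is a short sketch of the remaining step (column names mirror the Col1/Col2 example, and writing .xlsx requires the openpyxl package):
import pandas as pd

df = pd.DataFrame({"Col1": ["123; 345", "54; 8903"]})

# Length of each ';'-separated piece, re-joined with '; '.
df["Col2"] = df["Col1"].apply(
    lambda x: "; ".join(str(len(part)) for part in x.split("; ")))

# Write the result to an Excel file.
df.to_excel("lengths.xlsx", index=False)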
Q: Regular expression to capture n words after pattern that do not contain that pattern I'm trying to write a regular expression that captures n words after a pattern, which was answered in this question, except I want the search to keep going for another n words if it encounters that pattern again. For example, if my main search pattern is 'x', and I want to capture a word that contains 'x' and n=3 words after it that don't contain 'x', the following string should result in three matches: Lorem ipsum dolxor sit amet, consectetur adipiscing elit. Morxbi fringilla, dui axt tincidunt consectetur, libero arcu cursus arcxu, ut commodo lexctus magna vitxae venenatis neque. Matches ('x's in bold for ease of viewing) dolxor sit amet, consectetur Morxbi fringilla, dui axt tincidunt consectetur, libero arcxu, ut commodo lexctus magna vitxae venenatis neque. Matching n=3 words after is straightforward: [^ ]*x[^ ]*(?: [^ ]*){0,3} How to keep going if another 'x' is encountered, I'm not sure. I've tried this -- [^ ]*x[^ ]*(?: (?![^ ]*x[^ ]*)[^ ]*){0,3} -- but it terminates the search instead of continuing on check the next n words, which, given the example above, gives six results instead of the expected three: dolxor sit amet, consectetur Morxbi fringilla, dui axt tincidunt consectetur, libero arcxu, ut commodo lexctus magna vitxae venenatis neque. P.S. I'm working with python. EDIT: For context, I'm trying to get sufficient information about the surroundings of each appearance of the given pattern. (For simplicity's sake, I'm only including the words after the pattern, but it's easy to generalize to words in the back.) And the problem with the first regex is that it might result in a word that lacks information on its surroundings if it gets picked up as part of the surroundings of another match. For example, the first match given the text above would be 'Morxbi fringilla, dui axt', which gives us information about what comes after 'Morxbi' but not 'axt'. The second regex doesn't help because now matches with another match in its surroundings will lose that information, e.g., we won't know the third word that comes after 'Morxbi'. A: It's not clear that regex is the most natural way to solve your use case. Consider this hybrid approach. import re pattern = re.compile(r"x") # or whatever def get_at_least_n(text: str, n=3) -> Optional[range]: words = text.split() matches = list(map(pattern.search, words)) if not any(matches): return None last = sorted(_get_ranges(matches, n))[-1] _, i, j = last assert j >= i + n return range(i, j) def _get_ranges(matches, n): for i in range(len(matches)): if matches[i]: j = i + n k = j + 1 while k < len(matches) and k - j < n: if matches[k]: j = k yield j - i, i, j The regular expression engine loops over characters and can handle CFGs, but is not Turing complete. For one thing, evaluation is guaranteed to always Halt. (Cheating a bit, wrapping it in a loop such as sed offers, would enable Turing and even Conway's Life: https://bitly.com/regexgol) Here, the longest match you're looking for could be supplied in various orders, such as A B x D E x G H x J K L M N O x Q R x T or A B x D E F G H I J K L M N O x Q R x T so the correct answer for (1.) is "x D E x G H x J K" and for (2.) is "x Q R x T". Computing longest match using just the regex engine is not straightforward, and it would yield illegible code, unmaintainable. Given that you have cPython's Turing machine available to you, hey, may as well use it, right?
Regular expression to capture n words after pattern that do not contain that pattern
I'm trying to write a regular expression that captures n words after a pattern, which was answered in this question, except I want the search to keep going for another n words if it encounters that pattern again. For example, if my main search pattern is 'x', and I want to capture a word that contains 'x' and n=3 words after it that don't contain 'x', the following string should result in three matches: Lorem ipsum dolxor sit amet, consectetur adipiscing elit. Morxbi fringilla, dui axt tincidunt consectetur, libero arcu cursus arcxu, ut commodo lexctus magna vitxae venenatis neque. Matches ('x's in bold for ease of viewing) dolxor sit amet, consectetur Morxbi fringilla, dui axt tincidunt consectetur, libero arcxu, ut commodo lexctus magna vitxae venenatis neque. Matching n=3 words after is straightforward: [^ ]*x[^ ]*(?: [^ ]*){0,3} How to keep going if another 'x' is encountered, I'm not sure. I've tried this -- [^ ]*x[^ ]*(?: (?![^ ]*x[^ ]*)[^ ]*){0,3} -- but it terminates the search instead of continuing on check the next n words, which, given the example above, gives six results instead of the expected three: dolxor sit amet, consectetur Morxbi fringilla, dui axt tincidunt consectetur, libero arcxu, ut commodo lexctus magna vitxae venenatis neque. P.S. I'm working with python. EDIT: For context, I'm trying to get sufficient information about the surroundings of each appearance of the given pattern. (For simplicity's sake, I'm only including the words after the pattern, but it's easy to generalize to words in the back.) And the problem with the first regex is that it might result in a word that lacks information on its surroundings if it gets picked up as part of the surroundings of another match. For example, the first match given the text above would be 'Morxbi fringilla, dui axt', which gives us information about what comes after 'Morxbi' but not 'axt'. The second regex doesn't help because now matches with another match in its surroundings will lose that information, e.g., we won't know the third word that comes after 'Morxbi'.
[ "It's not clear that regex is the most natural way to solve your use case.\nConsider this hybrid approach.\nimport re\n\npattern = re.compile(r\"x\") # or whatever\n\ndef get_at_least_n(text: str, n=3) -> Optional[range]:\n words = text.split()\n matches = list(map(pattern.search, words))\n if not any(matches):\n return None\n last = sorted(_get_ranges(matches, n))[-1]\n _, i, j = last\n assert j >= i + n\n return range(i, j)\n\ndef _get_ranges(matches, n):\n for i in range(len(matches)):\n if matches[i]:\n j = i + n\n k = j + 1\n while k < len(matches) and k - j < n:\n if matches[k]:\n j = k\n yield j - i, i, j\n\nThe regular expression engine loops over characters\nand can handle CFGs, but is not Turing complete.\nFor one thing, evaluation is guaranteed to always Halt.\n(Cheating a bit, wrapping it in a loop such as sed offers,\nwould enable Turing and even Conway's Life: https://bitly.com/regexgol)\nHere, the longest match you're looking for could be\nsupplied in various orders, such as\nA B x D E x G H x J K L M N O x Q R x T\n\nor\nA B x D E F G H I J K L M N O x Q R x T\n\nso the correct answer for (1.) is \"x D E x G H x J K\"\nand for (2.) is \"x Q R x T\".\nComputing longest match using just the regex engine\nis not straightforward,\nand it would yield illegible code, unmaintainable.\nGiven that you have cPython's Turing machine available\nto you, hey, may as well use it, right?\n" ]
[ 1 ]
[]
[]
[ "python", "regex", "regex_lookarounds" ]
stackoverflow_0074631188_python_regex_regex_lookarounds.txt
Q: Pandas add column of count of another column across all the datafram I have a dataframe: df = C1 C2 E 1 2 3 4 9 1 3 1 1 8 2 8 8 1 2 I want to add another columns that will have the count of the value that is in the columns 'E' in all the dataframe (in the column E) So here the output will be: df = C1. C2. E. cou 1. 2. 3. 1 4. 9. 1. 2 3. 1. 1 2 8. 2. 8. 1 8. 1. 2. 1 #2 appears only one it the column E How can it be done efficiently ? A: Here's one way. Find the matches and add them up. import pandas as pd data = [ [1,2,3],[4,9,1],[3,1,1],[8,2,8] ] df = pd.DataFrame( data, columns=['C1','C2','E']) print(df) def count(val): return (df['C1']==val).sum() + (df['C2']==val).sum() df['cou'] = df.E.apply(count) print(df) Output: C1 C2 E 0 1 2 3 1 4 9 1 2 3 1 1 3 8 2 8 C1 C2 E cou 0 1 2 3 1 1 4 9 1 2 2 3 1 1 2 3 8 2 8 1
Pandas add column of count of another column across all the dataframe
I have a dataframe: df = C1 C2 E 1 2 3 4 9 1 3 1 1 8 2 8 8 1 2 I want to add another column that will have the count of the value that is in the column 'E' in all the dataframe (in the column E) So here the output will be: df = C1. C2. E. cou 1. 2. 3. 1 4. 9. 1. 2 3. 1. 1 2 8. 2. 8. 1 8. 1. 2. 1 #2 appears only once in the column E How can it be done efficiently?
[ "Here's one way. Find the matches and add them up.\nimport pandas as pd\n\ndata = [\n [1,2,3],[4,9,1],[3,1,1],[8,2,8]\n]\n\ndf = pd.DataFrame( data, columns=['C1','C2','E'])\nprint(df)\n\ndef count(val):\n return (df['C1']==val).sum() + (df['C2']==val).sum()\n\ndf['cou'] = df.E.apply(count)\nprint(df)\n\nOutput:\n C1 C2 E\n0 1 2 3\n1 4 9 1\n2 3 1 1\n3 8 2 8\n C1 C2 E cou\n0 1 2 3 1\n1 4 9 1 2\n2 3 1 1 2\n3 8 2 8 1\n\n" ]
[ 0 ]
[]
[]
[ "data_munging", "data_science", "dataframe", "pandas", "python" ]
stackoverflow_0074632631_data_munging_data_science_dataframe_pandas_python.txt
Q: Remove/ filter rows in JSON based on condition with python I have a table in a JSON whereby I need an entire row to be deleted/ filtered based on the condition if "Disposition (Non Open Market)" in "transactionType" then delete/ filter entry in all columns. Below is what my JSON file looks like: { "lastDate":{ "0":"11\/22\/2022", "1":"10\/28\/2022", "2":"10\/17\/2022", "3":"10\/15\/2022", "4":"10\/15\/2022", "5":"10\/15\/2022", "6":"10\/15\/2022", "7":"10\/03\/2022", "8":"10\/03\/2022", "9":"10\/03\/2022", "10":"10\/01\/2022", "11":"10\/01\/2022", "12":"10\/01\/2022", "13":"10\/01\/2022", "14":"10\/01\/2022", "15":"10\/01\/2022", "16":"10\/01\/2022", "17":"10\/01\/2022", "18":"08\/17\/2022", "19":"08\/08\/2022", "20":"08\/05\/2022", "21":"08\/05\/2022", "22":"08\/03\/2022", "23":"05\/06\/2022", "24":"05\/04\/2022" }, "transactionType":{ "0":"Sell", "1":"Automatic Sell", "2":"Automatic Sell", "3":"Disposition (Non Open Market)", "4":"Option Execute", "5":"Disposition (Non Open Market)", "6":"Option Execute", "7":"Automatic Sell", "8":"Sell", "9":"Automatic Sell", "10":"Disposition (Non Open Market)", "11":"Option Execute", "12":"Disposition (Non Open Market)", "13":"Option Execute", "14":"Disposition (Non Open Market)", "15":"Option Execute", "16":"Disposition (Non Open Market)", "17":"Option Execute", "18":"Automatic Sell", "19":"Automatic Sell", "20":"Disposition (Non Open Market)", "21":"Option Execute", "22":"Automatic Sell", "23":"Disposition (Non Open Market)", "24":"Automatic Sell" }, "sharesTraded":{ "0":"20,200", "1":"176,299", "2":"8,053", "3":"6,399", "4":"13,136", "5":"8,559", "6":"16,612", "7":"167,889", "8":"13,250", "9":"176,299", "10":"177,870", "11":"365,600", "12":"189,301", "13":"365,600", "14":"184,461", "15":"365,600", "16":"189,301", "17":"365,600", "18":"96,735", "19":"15,366", "20":"16,530", "21":"31,896", "22":"25,000", "23":"1,276", "24":"25,000" } } My current code is the attempt to delete/ filter out an entry if the value is "Disposition (Non Open Market)": import json data = json.load(open("AAPL22_institutional_table_MRKTVAL.json")) modified = lambda feature: 'Disposition (Non Open Market)' not in feature['transactionType'] data2 = filter(modified, data) open("AAPL22_institutional_table_MRKTVAL.json", "w").write( json.dumps(data2, indent=4)) The preferred output JSON (showing the entry being deleted on all 3 columns): { "lastDate":{ "0":"11\/22\/2022", "1":"10\/28\/2022", "2":"10\/17\/2022", "4":"10\/15\/2022", "6":"10\/15\/2022", "7":"10\/03\/2022", "8":"10\/03\/2022", "9":"10\/03\/2022", "11":"10\/01\/2022", "13":"10\/01\/2022", "15":"10\/01\/2022", "17":"10\/01\/2022", "18":"08\/17\/2022", "19":"08\/08\/2022", "21":"08\/05\/2022", "22":"08\/03\/2022", "24":"05\/04\/2022" }, "transactionType":{ "0":"Sell", "1":"Automatic Sell", "2":"Automatic Sell", "4":"Option Execute", "6":"Option Execute", "7":"Automatic Sell", "8":"Sell", "9":"Automatic Sell", "11":"Option Execute", "13":"Option Execute", "15":"Option Execute", "17":"Option Execute", "18":"Automatic Sell", "19":"Automatic Sell", "21":"Option Execute", "22":"Automatic Sell", "24":"Automatic Sell" }, "sharesTraded":{ "0":"20,200", "1":"176,299", "2":"8,053", "4":"13,136", "6":"16,612", "7":"167,889", "8":"13,250", "9":"176,299", "11":"365,600", "13":"365,600", "15":"365,600", "17":"365,600", "18":"96,735", "19":"15,366", "21":"31,896", "22":"25,000", "24":"25,000" } } A: I was able to remove according to value by appending the keys that have the string value to a list and then simply removing it import 
json data = json.load(open("AAPL22_institutional_table_MRKTVAL.json")) delete_keys = [] for value in data['transactionType']: if data['transactionType'][value] == 'Disposition (Non Open Market)': delete_keys.append(value) print(delete_keys) for key in delete_keys: del data['transactionType'][key] del data['lastDate'][key] del data['sharesTraded'][key] print(data) open("AAPL22_institutional_table_MRKTVAL.json", "w").write( json.dumps(data, indent=4)) A: data = { "lastDate":{ "0":"11\/22\/2022", "1":"10\/28\/2022", "2":"10\/17\/2022", "3":"10\/15\/2022", "4":"10\/15\/2022", "5":"10\/15\/2022", "6":"10\/15\/2022", "7":"10\/03\/2022", "8":"10\/03\/2022", }, "transactionType":{ "0":"Sell", "1":"Automatic Sell", "2":"Automatic Sell", "3":"Disposition (Non Open Market)", "4":"Option Execute", "5":"Disposition (Non Open Market)", "6":"Option Execute", "7":"Automatic Sell", "8":"Sell", }, "sharesTraded":{ "0":"20,200", "1":"176,299", "2":"8,053", "3":"6,399", "4":"13,136", "5":"8,559", "6":"16,612", "7":"167,889", "8":"13,250", } } for k,v in data["transactionType"].copy().items(): if v == "Disposition (Non Open Market)": for key in data: # Remove the key from all other nested dictionaries del data[key][k] print(data)
Remove/ filter rows in JSON based on condition with python
I have a table in a JSON whereby I need an entire row to be deleted/ filtered based on the condition if "Disposition (Non Open Market)" in "transactionType" then delete/ filter entry in all columns. Below is what my JSON file looks like: { "lastDate":{ "0":"11\/22\/2022", "1":"10\/28\/2022", "2":"10\/17\/2022", "3":"10\/15\/2022", "4":"10\/15\/2022", "5":"10\/15\/2022", "6":"10\/15\/2022", "7":"10\/03\/2022", "8":"10\/03\/2022", "9":"10\/03\/2022", "10":"10\/01\/2022", "11":"10\/01\/2022", "12":"10\/01\/2022", "13":"10\/01\/2022", "14":"10\/01\/2022", "15":"10\/01\/2022", "16":"10\/01\/2022", "17":"10\/01\/2022", "18":"08\/17\/2022", "19":"08\/08\/2022", "20":"08\/05\/2022", "21":"08\/05\/2022", "22":"08\/03\/2022", "23":"05\/06\/2022", "24":"05\/04\/2022" }, "transactionType":{ "0":"Sell", "1":"Automatic Sell", "2":"Automatic Sell", "3":"Disposition (Non Open Market)", "4":"Option Execute", "5":"Disposition (Non Open Market)", "6":"Option Execute", "7":"Automatic Sell", "8":"Sell", "9":"Automatic Sell", "10":"Disposition (Non Open Market)", "11":"Option Execute", "12":"Disposition (Non Open Market)", "13":"Option Execute", "14":"Disposition (Non Open Market)", "15":"Option Execute", "16":"Disposition (Non Open Market)", "17":"Option Execute", "18":"Automatic Sell", "19":"Automatic Sell", "20":"Disposition (Non Open Market)", "21":"Option Execute", "22":"Automatic Sell", "23":"Disposition (Non Open Market)", "24":"Automatic Sell" }, "sharesTraded":{ "0":"20,200", "1":"176,299", "2":"8,053", "3":"6,399", "4":"13,136", "5":"8,559", "6":"16,612", "7":"167,889", "8":"13,250", "9":"176,299", "10":"177,870", "11":"365,600", "12":"189,301", "13":"365,600", "14":"184,461", "15":"365,600", "16":"189,301", "17":"365,600", "18":"96,735", "19":"15,366", "20":"16,530", "21":"31,896", "22":"25,000", "23":"1,276", "24":"25,000" } } My current code is the attempt to delete/ filter out an entry if the value is "Disposition (Non Open Market)": import json data = json.load(open("AAPL22_institutional_table_MRKTVAL.json")) modified = lambda feature: 'Disposition (Non Open Market)' not in feature['transactionType'] data2 = filter(modified, data) open("AAPL22_institutional_table_MRKTVAL.json", "w").write( json.dumps(data2, indent=4)) The preferred output JSON (showing the entry being deleted on all 3 columns): { "lastDate":{ "0":"11\/22\/2022", "1":"10\/28\/2022", "2":"10\/17\/2022", "4":"10\/15\/2022", "6":"10\/15\/2022", "7":"10\/03\/2022", "8":"10\/03\/2022", "9":"10\/03\/2022", "11":"10\/01\/2022", "13":"10\/01\/2022", "15":"10\/01\/2022", "17":"10\/01\/2022", "18":"08\/17\/2022", "19":"08\/08\/2022", "21":"08\/05\/2022", "22":"08\/03\/2022", "24":"05\/04\/2022" }, "transactionType":{ "0":"Sell", "1":"Automatic Sell", "2":"Automatic Sell", "4":"Option Execute", "6":"Option Execute", "7":"Automatic Sell", "8":"Sell", "9":"Automatic Sell", "11":"Option Execute", "13":"Option Execute", "15":"Option Execute", "17":"Option Execute", "18":"Automatic Sell", "19":"Automatic Sell", "21":"Option Execute", "22":"Automatic Sell", "24":"Automatic Sell" }, "sharesTraded":{ "0":"20,200", "1":"176,299", "2":"8,053", "4":"13,136", "6":"16,612", "7":"167,889", "8":"13,250", "9":"176,299", "11":"365,600", "13":"365,600", "15":"365,600", "17":"365,600", "18":"96,735", "19":"15,366", "21":"31,896", "22":"25,000", "24":"25,000" } }
[ "I was able to remove according to value by appending the keys that have the string value to a list and then simply removing it\nimport json\n\ndata = json.load(open(\"AAPL22_institutional_table_MRKTVAL.json\"))\n\ndelete_keys = []\n\nfor value in data['transactionType']:\n if data['transactionType'][value] == 'Disposition (Non Open Market)':\n delete_keys.append(value)\n\nprint(delete_keys)\n\nfor key in delete_keys:\n del data['transactionType'][key]\n del data['lastDate'][key]\n del data['sharesTraded'][key]\n\nprint(data)\n\nopen(\"AAPL22_institutional_table_MRKTVAL.json\", \"w\").write(\n json.dumps(data, indent=4))\n\n", "data = {\n \"lastDate\":{\n \"0\":\"11\\/22\\/2022\",\n \"1\":\"10\\/28\\/2022\",\n \"2\":\"10\\/17\\/2022\",\n \"3\":\"10\\/15\\/2022\",\n \"4\":\"10\\/15\\/2022\",\n \"5\":\"10\\/15\\/2022\",\n \"6\":\"10\\/15\\/2022\",\n \"7\":\"10\\/03\\/2022\",\n \"8\":\"10\\/03\\/2022\",\n },\n \"transactionType\":{\n \"0\":\"Sell\",\n \"1\":\"Automatic Sell\",\n \"2\":\"Automatic Sell\",\n \"3\":\"Disposition (Non Open Market)\",\n \"4\":\"Option Execute\",\n \"5\":\"Disposition (Non Open Market)\",\n \"6\":\"Option Execute\",\n \"7\":\"Automatic Sell\",\n \"8\":\"Sell\",\n },\n \"sharesTraded\":{\n \"0\":\"20,200\",\n \"1\":\"176,299\",\n \"2\":\"8,053\",\n \"3\":\"6,399\",\n \"4\":\"13,136\",\n \"5\":\"8,559\",\n \"6\":\"16,612\",\n \"7\":\"167,889\",\n \"8\":\"13,250\",\n }\n}\n\nfor k,v in data[\"transactionType\"].copy().items():\n if v == \"Disposition (Non Open Market)\":\n for key in data: # Remove the key from all other nested dictionaries \n del data[key][k]\n\nprint(data)\n\n" ]
[ 2, 0 ]
[]
[]
[ "json", "python" ]
stackoverflow_0074632543_json_python.txt
Q: Ultra Basic Question on PySpark with Kubernetes After fighting with the lack of documentation and wildly misleading information out there on PySpark with Kubernetes I think I have boiled this down to one question. How do I get the driver pod that gets spun up to read my python file (not a dependency, the actual file itself)? Here's the command I'm using: kubectl run --namespace apache-spark apache-spark-client --rm --tty -i --restart='Never' \ --image docker.io/bitnami/spark:3.1.2-debian-10-r44 \ -- spark-submit --master spark://10.120.112.210:30077 \ test.py Here's what I get back: python3: can't open file '/opt/bitnami/spark/test.py': [Errno 2] No such file or directory OK, so how do I get this python file onto the driver pod? This vital piece of information seems to be completely missing from hundreds of articles on the subject. I have mounted volumes that the workers can see and tried that as the path. Still doesn't work. So I'm assuming it has to be on the driver pod. But how? Every example just throws in the .py file without any mention of how it gets there. A: You are not mounting any volume to the pod, so even if the file is present in the NFS mount, it won't be visible from within the pod. You must mount it. In the following command, you are creating a pod but not attaching any volume to it. kubectl run --namespace apache-spark apache-spark-client --rm --tty -i --restart='Never' \ --image docker.io/bitnami/spark:3.1.2-debian-10-r44 \ -- spark-submit --master spark://10.120.112.210:30077 \ test.py If you wish to use NFS volume, you need to use the right PVC or hostPath to the NFS mount. TLDR, Mount the volume. Alternatively: You can refer to this example if you wish to use configMap and volumes to make a local file available inside the pod. In this example, I have created info.log file locally on the server where I run kubectl commands. // Create a test file in my workstation echo "This file is written in my workstation, not inside the pod" > info.log // create a config-map of the file: kubectl create cm test-cm --from-file info.log configmap/test-cm created // mount the configmap as volume, notice the volumes and volumeMounts section: apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: test-pod name: test-pod spec: nodeName: k8s-master containers: - command: - sleep - infinity image: ubuntu name: test-pod resources: {} volumeMounts: - name: my-vol mountPath: /tmp dnsPolicy: ClusterFirst restartPolicy: Always volumes: - name: my-vol configMap: name: test-cm status: {} // Test now, using the volume, I can access the info.log file from within the pod. kuebctl exec -it test-pod -- bash root@test-pod:/# cd /tmp/ root@test-pod:/tmp# ls info.log root@test-pod:/tmp# cat info.log This file is written in my workstation, not inside the pod root@test-pod:/tmp#
Ultra Basic Question on PySpark with Kubernetes
After fighting with the lack of documentation and wildly misleading information out there on PySpark with Kubernetes I think I have boiled this down to one question. How do I get the driver pod that gets spun up to read my python file (not a dependency, the actual file itself)? Here's the command I'm using: kubectl run --namespace apache-spark apache-spark-client --rm --tty -i --restart='Never' \ --image docker.io/bitnami/spark:3.1.2-debian-10-r44 \ -- spark-submit --master spark://10.120.112.210:30077 \ test.py Here's what I get back: python3: can't open file '/opt/bitnami/spark/test.py': [Errno 2] No such file or directory OK, so how do I get this python file onto the driver pod? This vital piece of information seems to be completely missing from hundreds of articles on the subject. I have mounted volumes that the workers can see and tried that as the path. Still doesn't work. So I'm assuming it has to be on the driver pod. But how? Every example just throws in the .py file without any mention of how it gets there.
[ "You are not mounting any volume to the pod, so even if the file is present in the NFS mount, it won't be visible from within the pod. You must mount it. In the following command, you are creating a pod but not attaching any volume to it.\nkubectl run --namespace apache-spark apache-spark-client --rm --tty -i --restart='Never' \\\n--image docker.io/bitnami/spark:3.1.2-debian-10-r44 \\\n-- spark-submit --master spark://10.120.112.210:30077 \\\ntest.py\n\nIf you wish to use NFS volume, you need to use the right PVC or hostPath to the NFS mount. TLDR, Mount the volume.\nAlternatively:\nYou can refer to this example if you wish to use configMap and volumes to make a local file available inside the pod. In this example, I have created info.log file locally on the server where I run kubectl commands.\n// Create a test file in my workstation\necho \"This file is written in my workstation, not inside the pod\" > info.log\n\n// create a config-map of the file:\nkubectl create cm test-cm --from-file info.log\nconfigmap/test-cm created\n\n// mount the configmap as volume, notice the volumes and volumeMounts section:\napiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: null\n labels:\n run: test-pod\n name: test-pod\nspec:\n nodeName: k8s-master\n containers:\n - command:\n - sleep\n - infinity\n image: ubuntu\n name: test-pod\n resources: {}\n volumeMounts:\n - name: my-vol\n mountPath: /tmp\n dnsPolicy: ClusterFirst\n restartPolicy: Always\n volumes:\n - name: my-vol\n configMap:\n name: test-cm\n\nstatus: {}\n\n// Test now, using the volume, I can access the info.log file from within the pod.\nkuebctl exec -it test-pod -- bash\nroot@test-pod:/# cd /tmp/\nroot@test-pod:/tmp# ls\ninfo.log\nroot@test-pod:/tmp# cat info.log\nThis file is written in my workstation, not inside the pod\nroot@test-pod:/tmp#\n\n" ]
[ 1 ]
[]
[]
[ "apache_spark", "kubernetes", "pyspark", "python" ]
stackoverflow_0074631612_apache_spark_kubernetes_pyspark_python.txt
Q: How to dynamically find the nearest specific parent of a selected element? I want to parse many html pages and remove a div that contains the text "Message", using beautifulsoup html.parser and python. The div has no name or id, so pointing to it is not possible. I am able to do this for 1 html page. In the code below, you will see 6 .parent . This is because there are 5 tags (p,i,b,span,a) between div tag and the text "Message", and 6th tag is div, in this html page. The code below works fine for 1 html page. soup = BeautifulSoup(html_page,"html.parser") scores = soup.find_all(text=re.compile('Message')) divs = [score.parent.parent.parent.parent.parent.parent for score in scores] divs.decompose() The problem is - The number of tags between div and "Message" is not always 6. In some html page its 3, and in some 7. So, is there a way to find the number of tags (n) between the text "Message" and nearest div to the left dynamically, and add n+1 number of .parent to score (in the code above) using python, beautifulsoup? A: As described in your question, that there is no other <div> between, you could use .find_parent(): soup.find(text=re.compile('Message')).find_parent('div').decompose() Be aware, that if you use find_all() you have to iterate your ResultSet while unsing .find_parent(): for r in soup.find_all(text=re.compile('Message')): r.find_parent('div').decompose() As in your example divs.decompose() - You also should iterate the list. Example from bs4 import BeautifulSoup import re html=''' <div> <span> <i> <x>Message</x> </i> </span> </div> ''' soup = BeautifulSoup(html) soup.find(text=re.compile('Message')).find_parent('div')
How to dynamically find the nearest specific parent of a selected element?
I want to parse many html pages and remove a div that contains the text "Message", using beautifulsoup html.parser and python. The div has no name or id, so pointing to it is not possible. I am able to do this for 1 html page. In the code below, you will see 6 .parent . This is because there are 5 tags (p,i,b,span,a) between div tag and the text "Message", and 6th tag is div, in this html page. The code below works fine for 1 html page. soup = BeautifulSoup(html_page,"html.parser") scores = soup.find_all(text=re.compile('Message')) divs = [score.parent.parent.parent.parent.parent.parent for score in scores] divs.decompose() The problem is - The number of tags between div and "Message" is not always 6. In some html page its 3, and in some 7. So, is there a way to find the number of tags (n) between the text "Message" and nearest div to the left dynamically, and add n+1 number of .parent to score (in the code above) using python, beautifulsoup?
[ "As described in your question, that there is no other <div> between, you could use .find_parent():\nsoup.find(text=re.compile('Message')).find_parent('div').decompose()\n\nBe aware, that if you use find_all() you have to iterate your ResultSet while unsing .find_parent():\nfor r in soup.find_all(text=re.compile('Message')):\n r.find_parent('div').decompose()\n\n\nAs in your example divs.decompose() - You also should iterate the list.\nExample\nfrom bs4 import BeautifulSoup\nimport re\nhtml='''\n<div>\n <span>\n <i>\n <x>Message</x>\n </i>\n </span>\n</div>\n'''\nsoup = BeautifulSoup(html)\n\nsoup.find(text=re.compile('Message')).find_parent('div')\n\n" ]
[ 3 ]
[]
[]
[ "beautifulsoup", "html", "html_parsing", "python" ]
stackoverflow_0074632532_beautifulsoup_html_html_parsing_python.txt
Q: How to test a single GET request in parallel for specified count? I want to parallely send a GET request for the specified count say 100 times. How to achieve this using JMeter or Python ? I tried bzm parallel executor but that doesn't workout. A: import requests import threading totalRequests = 0 numberOfThreads = 10 threads = [0] * numberOfThreads def worker(thread): r = requests.get("url") threads[thread] = 0 # free thread while totalRequests < 100: for thread in range(numberOfThreads): if threads[thread] == 0: threads[thread] = 1 # occupy thread t = threading.Thread(target=worker, args=(thread,)) t.start() totalRequests += 1 A: In JMeter: Add Thread Group to your Test Plan and configure it like: Add HTTP Request sampler as a child of the Thread Group and specify protocol, host, port, path and parameters: if you're not certain regarding properly configuring the HTTP Request sampler - you can just record the request using your browser and JMeter's HTTP(S) Test Script Recorder or JMeter Chrome Extension For Python the correct would be using Locust framework as I believe you're interested in metrics like response times, latencies and so on. The official website is down at the moment so in the meantime you can check https://readthedocs.org/projects/locust/
How to test a single GET request in parallel for a specified count?
I want to send a GET request in parallel for a specified count, say 100 times. How can I achieve this using JMeter or Python? I tried the bzm parallel executor but that didn't work out.
[ "import requests\nimport threading\n\ntotalRequests = 0\nnumberOfThreads = 10\nthreads = [0] * numberOfThreads\n\n\ndef worker(thread):\n r = requests.get(\"url\")\n threads[thread] = 0 # free thread\n\n\nwhile totalRequests < 100:\n for thread in range(numberOfThreads):\n if threads[thread] == 0:\n threads[thread] = 1 # occupy thread\n t = threading.Thread(target=worker, args=(thread,))\n t.start()\n totalRequests += 1\n\n", "In JMeter:\n\nAdd Thread Group to your Test Plan and configure it like:\n\n\nAdd HTTP Request sampler as a child of the Thread Group and specify protocol, host, port, path and parameters:\n\nif you're not certain regarding properly configuring the HTTP Request sampler - you can just record the request using your browser and JMeter's HTTP(S) Test Script Recorder or JMeter Chrome Extension\n\n\nFor Python the correct would be using Locust framework as I believe you're interested in metrics like response times, latencies and so on. The official website is down at the moment\n\nso in the meantime you can check https://readthedocs.org/projects/locust/\n" ]
[ 1, 0 ]
[]
[]
[ "jmeter", "jmeter_plugins", "playwright", "python" ]
stackoverflow_0074631728_jmeter_jmeter_plugins_playwright_python.txt
Q: Sphinx - ran`make html` and I am missing content for a few modules Repo link: https://github.com/Eric-Cortez/aepsych-fork Problem: I am running into an issue when generating sphinx documentation when I run make html most of the modules are generated except for 3 which are aepsych.database, aepsych.plotting, and aepsych.server. file structure: (root) aepsych-fork/ |__ aepsych/ <--- docs in this dir |__ sphinx/ | |__ build/ | |__ Makefile | |__ make.bat | |__ source/ | |___conf.py | |___index.rst | **other files** conf.py [path]: import os import sys # from pkg_resources import get_distribution current_dir = os.path.dirname(__file__) target_dir = os.path.abspath(os.path.join(current_dir, "../..")) sys.path.insert(0, target_dir) What I tried: I have been able to generate this documentation previously by running make html. example: https://aepsych.org/api/database.html But, I tried to rebuild the sphinx docs todays and ran into a an error WARNING: autodoc: failed to import module 'acquisition.bvn' from module 'aepsych'; the following exception was raised: No module named 'gpytorch' gpytorch is a dependency that is being used within the the aepsych module, but I do not want to generate documentation for the gpytorch module. I did some research and added autodoc_mock_imports = ["botorch", 'gpytorch', "torch"] to the aepsych-fork/sphinx/conf.py which resolved the missing module error and generated most of the docs accept for the three missing modules. I am having trouble finding the reason that the docs for those files are not being generated as when I run make html I don't get any errors in the console. Below is the console output: Running Sphinx v5.0.2 making output directory... done building [mo]: targets for 0 po files that are out of date building [html]: targets for 16 source files that are out of date updating environment: [new config] 16 added, 0 changed, 0 removed I did notice the reading source does not reach 100%. I gets stuck at reading sources... [ 25%] benchmark does anyone have any suggestion on how to resolve this issue? I did notice that the __init__.py file is empty for the database/ module. import sys from ..config import Config from .db import Database __all__ = [ "Database" ] Config.register_module(sys.modules[__name__]) I added the code above to the databases/__init__.py but the docs still did not generate. Does anyone have any suggestions on how to resolve this issue? A: I recreated a conda environment with all of the dependencies and was still getting an error. WARNING: autodoc: failed to import module 'acquisition.bvn' from module 'aepsych'; the following exception was raised: No module named 'botorch.sampling.normal' So I added autodoc_mock_imports = ["botorch"] to sphinx/conf.py and that resolved the issue. I may have been missing some dependencies.
Sphinx - ran `make html` and I am missing content for a few modules
Repo link: https://github.com/Eric-Cortez/aepsych-fork Problem: I am running into an issue when generating sphinx documentation when I run make html most of the modules are generated except for 3 which are aepsych.database, aepsych.plotting, and aepsych.server. file structure: (root) aepsych-fork/ |__ aepsych/ <--- docs in this dir |__ sphinx/ | |__ build/ | |__ Makefile | |__ make.bat | |__ source/ | |___conf.py | |___index.rst | **other files** conf.py [path]: import os import sys # from pkg_resources import get_distribution current_dir = os.path.dirname(__file__) target_dir = os.path.abspath(os.path.join(current_dir, "../..")) sys.path.insert(0, target_dir) What I tried: I have been able to generate this documentation previously by running make html. example: https://aepsych.org/api/database.html But, I tried to rebuild the sphinx docs todays and ran into a an error WARNING: autodoc: failed to import module 'acquisition.bvn' from module 'aepsych'; the following exception was raised: No module named 'gpytorch' gpytorch is a dependency that is being used within the the aepsych module, but I do not want to generate documentation for the gpytorch module. I did some research and added autodoc_mock_imports = ["botorch", 'gpytorch', "torch"] to the aepsych-fork/sphinx/conf.py which resolved the missing module error and generated most of the docs accept for the three missing modules. I am having trouble finding the reason that the docs for those files are not being generated as when I run make html I don't get any errors in the console. Below is the console output: Running Sphinx v5.0.2 making output directory... done building [mo]: targets for 0 po files that are out of date building [html]: targets for 16 source files that are out of date updating environment: [new config] 16 added, 0 changed, 0 removed I did notice the reading source does not reach 100%. I gets stuck at reading sources... [ 25%] benchmark does anyone have any suggestion on how to resolve this issue? I did notice that the __init__.py file is empty for the database/ module. import sys from ..config import Config from .db import Database __all__ = [ "Database" ] Config.register_module(sys.modules[__name__]) I added the code above to the databases/__init__.py but the docs still did not generate. Does anyone have any suggestions on how to resolve this issue?
[ "I recreated a conda environment with all of the dependencies and was still getting an error. WARNING: autodoc: failed to import module 'acquisition.bvn' from module 'aepsych'; the following exception was raised: No module named 'botorch.sampling.normal' So I added autodoc_mock_imports = [\"botorch\"] to sphinx/conf.py and that resolved the issue. I may have been missing some dependencies.\n" ]
[ 0 ]
[]
[]
[ "documentation", "init", "python", "python_sphinx" ]
stackoverflow_0074608512_documentation_init_python_python_sphinx.txt
Q: Convert bytes in a pandas dataframe column into hexadecimals There is a problem when pandas reads read_sql bytes column. If you look at the sql request through DBeaver, then the bytes column shows differently, but if you look at it through read_sql, it seems that pandas translates the value into a hex. For example, pandas shows column value - b'\x80\xbc\x10`K\xa8\x95\xd8\x11\xe5K\xf9\xe7\xd7\x8cq' I need - 0x80BC10604BA895D811E54BF9E7D78C71 If I use, in sql, CONVERT(varchar(max),guid,1) pandas give correct values. But I need to convert column in python not in sql. A: It looks like the column (which I call 'col' below) contains bytes. There's the .hex() method that you can map to each item in the column to convert them into hexadecimal strings. df['col'] = df['col'].map(lambda e: e.hex()) This produces 80bc10604ba895d811e54bf9e7d78c71 It seems the specific output you want is to have the "0x" prefix and have upper-case letters for the digits above 9, which you can do by calling .upper() on the hex string and prepend '0x'. df['col'] = df['col'].map(lambda e: '0x' + e.hex().upper()) This produces 0x80BC10604BA895D811E54BF9E7D78C71
Convert bytes in a pandas dataframe column into hexadecimals
There is a problem when pandas reads a bytes column with read_sql. If you look at the SQL request through DBeaver, the bytes column is shown differently, but if you look at it through read_sql, it seems that pandas shows the value as raw bytes. For example, pandas shows the column value - b'\x80\xbc\x10`K\xa8\x95\xd8\x11\xe5K\xf9\xe7\xd7\x8cq' I need - 0x80BC10604BA895D811E54BF9E7D78C71 If I use, in SQL, CONVERT(varchar(max),guid,1) pandas gives correct values. But I need to convert the column in Python, not in SQL.
[ "It looks like the column (which I call 'col' below) contains bytes. There's the .hex() method that you can map to each item in the column to convert them into hexadecimal strings.\ndf['col'] = df['col'].map(lambda e: e.hex())\n\nThis produces\n80bc10604ba895d811e54bf9e7d78c71\n\nIt seems the specific output you want is to have the \"0x\" prefix and have upper-case letters for the digits above 9, which you can do by calling .upper() on the hex string and prepend '0x'.\ndf['col'] = df['col'].map(lambda e: '0x' + e.hex().upper())\n\nThis produces\n0x80BC10604BA895D811E54BF9E7D78C71\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python", "sql_server" ]
stackoverflow_0074628652_pandas_python_sql_server.txt
Q: trimming and removing adapter from sequences in biopython I got this question that I am unable to solve: A user has an input DNA.txt file which consists of 5 sequences. Each sequences starts with the same 14 base pair fragment - a sequencing adapter that should have been removed. Write a program that will (a) trim this adapter and write the cleaned sequences to a new file and (b) print the length of each sequence I am actually new in biopython. i thought of using strip() from Seq modules, but i don't think it will work. A: think I got it, not using biopython though: import io file =io.StringIO('AAAAAAAAAAAAAATTTTT\nAAAAAAAAAAAAAATTTTT\nAAAAAAAAAAAAAATTTT\nAAAAAAAAAAAAAATTTTT\nAAAAAAAAAAAAAATTTTT\n') print('read file :') for i in file.readlines(): print(i.strip(),'\n') file.seek(0) newfile =io.StringIO() for i in file.readlines(): i = i.lstrip('AAAAAAAAAAAAAA') #print(i) newfile.write(i) newfile.seek(0) print('read newfile :') for i in newfile.readlines(): print(i.strip() + ' lenght : ', len(i),'\n') output: read file : AAAAAAAAAAAAAATTTTT AAAAAAAAAAAAAATTTTT AAAAAAAAAAAAAATTTT AAAAAAAAAAAAAATTTTT AAAAAAAAAAAAAATTTTT read newfile : TTTTT lenght : 6 TTTTT lenght : 6 TTTT lenght : 5 TTTTT lenght : 6 TTTTT lenght : 6 If you go for biopython route you should be able to use lstrip the same way on Seq object as per the manual: https://biopython.org/docs/1.75/api/Bio.Seq.html#Bio.Seq.Seq.lstrip : lstrip(self, chars=None) Return a new Seq object with leading (left) end stripped. This behaves like the python string method of the same name. Optional argument chars defines which characters to remove. If omitted or None (default) then as for the python string method, this defaults to removing any white space. e.g. print(my_seq.lstrip(“-“)) See also the strip and rstrip methods. USING Biopython: from Bio import SeqIO import io file =io.StringIO('>1\nAAAAAAAAAAAAAATTTTT\n>2\nAAAAAAAAAAAAAATTTTT\n>3\nAAAAAAAAAAAAAATTTT\n>4\nAAAAAAAAAAAAAATTTTT\n>5\nAAAAAAAAAAAAAATTTTT\n') print('read file :') for i in SeqIO.parse(file, 'fasta'): print(i.seq,'\n') file.seek(0) newfile =io.StringIO() sequences = [] for i in SeqIO.parse(file, 'fasta'): i.seq = i.seq.lstrip('AAAAAAAAAAAAAA') sequences.append(i) # print('...', sequences) SeqIO.write(sequences, newfile, 'fasta') newfile.seek(0) for i in SeqIO.parse(newfile, 'fasta'): # print(i,'\n') print(i.seq, 'lenght :', len(i.seq),'\n') newfile.seek(0) output: read file : AAAAAAAAAAAAAATTTTT AAAAAAAAAAAAAATTTTT AAAAAAAAAAAAAATTTT AAAAAAAAAAAAAATTTTT AAAAAAAAAAAAAATTTTT TTTTT lenght : 5 TTTTT lenght : 5 TTTT lenght : 4 TTTTT lenght : 5 TTTTT lenght : 5 uncommenting # print(i,'\n') in the last loop, that parse the newfile will output: ............ ID: 4 Name: 4 Description: 4 Number of features: 0 Seq('TTTTT') ...... that is the __str__ value of SeqRecord object, while its __repr__ obtained with print(repr(i)) would be: SeqRecord(seq=Seq('TTTTT'), id='4', name='4', description='4', dbxrefs=[]) everything is explained: https://biopython.org/docs/1.78/api/Bio.SeqRecord.html be sure to check the version of biopython you are using: import Bio print('version :', Bio.__version__)
trimming and removing adapter from sequences in biopython
I got this question that I am unable to solve: A user has an input DNA.txt file which consists of 5 sequences. Each sequence starts with the same 14 base pair fragment - a sequencing adapter that should have been removed. Write a program that will (a) trim this adapter and write the cleaned sequences to a new file and (b) print the length of each sequence. I am actually new to Biopython. I thought of using strip() from the Seq module, but I don't think it will work.
[ "think I got it, not using biopython though:\nimport io\n\nfile =io.StringIO('AAAAAAAAAAAAAATTTTT\\nAAAAAAAAAAAAAATTTTT\\nAAAAAAAAAAAAAATTTT\\nAAAAAAAAAAAAAATTTTT\\nAAAAAAAAAAAAAATTTTT\\n')\n\nprint('read file :') \nfor i in file.readlines():\n print(i.strip(),'\\n')\nfile.seek(0)\n\nnewfile =io.StringIO()\nfor i in file.readlines():\n i = i.lstrip('AAAAAAAAAAAAAA')\n #print(i)\n newfile.write(i)\nnewfile.seek(0)\n\nprint('read newfile :') \nfor i in newfile.readlines():\n print(i.strip() + ' lenght : ', len(i),'\\n')\n\noutput:\nread file :\nAAAAAAAAAAAAAATTTTT \n\nAAAAAAAAAAAAAATTTTT \n\nAAAAAAAAAAAAAATTTT \n\nAAAAAAAAAAAAAATTTTT \n\nAAAAAAAAAAAAAATTTTT \n\nread newfile :\nTTTTT lenght : 6 \n\nTTTTT lenght : 6 \n\nTTTT lenght : 5 \n\nTTTTT lenght : 6 \n\nTTTTT lenght : 6 \n\nIf you go for biopython route you should be able to use lstrip the same way on Seq object as per the manual: https://biopython.org/docs/1.75/api/Bio.Seq.html#Bio.Seq.Seq.lstrip :\n\nlstrip(self, chars=None)\n\n\nReturn a new Seq object with leading (left) end stripped.\n\n\nThis behaves like the python string method of the same name.\n\n\nOptional argument chars defines which characters to remove. If omitted or None (default) then as for the python string method, this defaults to removing any white space.\n\n\ne.g. print(my_seq.lstrip(“-“))\n\n\nSee also the strip and rstrip methods.\n\nUSING Biopython:\nfrom Bio import SeqIO\n\nimport io\n\nfile =io.StringIO('>1\\nAAAAAAAAAAAAAATTTTT\\n>2\\nAAAAAAAAAAAAAATTTTT\\n>3\\nAAAAAAAAAAAAAATTTT\\n>4\\nAAAAAAAAAAAAAATTTTT\\n>5\\nAAAAAAAAAAAAAATTTTT\\n')\n\n\n\n\nprint('read file :') \n\n\nfor i in SeqIO.parse(file, 'fasta'):\n print(i.seq,'\\n')\nfile.seek(0)\n\nnewfile =io.StringIO()\n\nsequences = []\n\nfor i in SeqIO.parse(file, 'fasta'):\n i.seq = i.seq.lstrip('AAAAAAAAAAAAAA')\n \n sequences.append(i)\n \n \n# print('...', sequences)\n\nSeqIO.write(sequences, newfile, 'fasta')\n\nnewfile.seek(0)\n\nfor i in SeqIO.parse(newfile, 'fasta'):\n # print(i,'\\n')\n print(i.seq, 'lenght :', len(i.seq),'\\n')\nnewfile.seek(0)\n\noutput:\nread file :\nAAAAAAAAAAAAAATTTTT \n\nAAAAAAAAAAAAAATTTTT \n\nAAAAAAAAAAAAAATTTT \n\nAAAAAAAAAAAAAATTTTT \n\nAAAAAAAAAAAAAATTTTT \n\nTTTTT lenght : 5 \n\nTTTTT lenght : 5 \n\nTTTT lenght : 4 \n\nTTTTT lenght : 5 \n\nTTTTT lenght : 5 \n\nuncommenting # print(i,'\\n') in the last loop, that parse\nthe newfile will output:\n............\n\nID: 4\nName: 4\nDescription: 4\nNumber of features: 0\nSeq('TTTTT') \n\n......\n\n\nthat is the __str__ value of SeqRecord object, while its\n__repr__ obtained with print(repr(i)) would be:\nSeqRecord(seq=Seq('TTTTT'), id='4', name='4', description='4', dbxrefs=[])\neverything is explained: https://biopython.org/docs/1.78/api/Bio.SeqRecord.html\nbe sure to check the version of biopython you are using:\nimport Bio\nprint('version :', Bio.__version__)\n\n" ]
[ 0 ]
[]
[]
[ "biopython", "python" ]
stackoverflow_0074628843_biopython_python.txt
Q: pandas datetime Series difference not working as expected The following is a minimal working example import pandas as pd df = pd.DataFrame({"datetime": [ "2021-09-01 00:00:01", "2021-09-01 00:00:02", "2021-09-01 00:00:03", "2021-09-01 00:00:04", "2021-09-01 00:00:05", "2021-09-01 00:00:06", "2021-09-01 00:00:07", "2021-09-01 00:00:08", "2021-09-01 00:00:09", "2021-09-01 00:00:10", ]} ) df["datetime"] = pd.to_datetime(df["datetime"]) delta = df["datetime"][1::] - df["datetime"][0:-1] print(delta) I got 0 NaT 1 0 days 2 0 days 3 0 days 4 0 days 5 0 days 6 0 days 7 0 days 8 0 days 9 NaT Name: datetime, dtype: timedelta64[ns] but I though this would be an array with entries of Timedelta('0 days 00:00:01'), and in fact if I do df["datetime"][1] - df["datetime"][0] that's what I get. On the other hand df["datetime"][1:2] - df["datetime"][0:1] produces 0 NaT 1 NaT Name: datetime, dtype: timedelta64[ns] what am I missing here? A: In your example the index is taken into account. So it will take the same times an subtract them, which then ends up 0 days. NaT because index 0 and 9 are not present in both Series. df["datetime"][1::] - df["datetime"][0:-1].values
pandas datetime Series difference not working as expected
The following is a minimal working example import pandas as pd df = pd.DataFrame({"datetime": [ "2021-09-01 00:00:01", "2021-09-01 00:00:02", "2021-09-01 00:00:03", "2021-09-01 00:00:04", "2021-09-01 00:00:05", "2021-09-01 00:00:06", "2021-09-01 00:00:07", "2021-09-01 00:00:08", "2021-09-01 00:00:09", "2021-09-01 00:00:10", ]} ) df["datetime"] = pd.to_datetime(df["datetime"]) delta = df["datetime"][1::] - df["datetime"][0:-1] print(delta) I got 0 NaT 1 0 days 2 0 days 3 0 days 4 0 days 5 0 days 6 0 days 7 0 days 8 0 days 9 NaT Name: datetime, dtype: timedelta64[ns] but I though this would be an array with entries of Timedelta('0 days 00:00:01'), and in fact if I do df["datetime"][1] - df["datetime"][0] that's what I get. On the other hand df["datetime"][1:2] - df["datetime"][0:1] produces 0 NaT 1 NaT Name: datetime, dtype: timedelta64[ns] what am I missing here?
[ "In your example the index is taken into account. So it will take the same times an subtract them, which then ends up 0 days.\nNaT because index 0 and 9 are not present in both Series.\ndf[\"datetime\"][1::] - df[\"datetime\"][0:-1].values\n\n" ]
[ 0 ]
[]
[]
[ "datetime", "pandas", "python" ]
stackoverflow_0074632594_datetime_pandas_python.txt
Q: check if element is in anywhere the list I have two lists: expected = ["apple", "banana", "pear"] actual = ["banana_yellow", "apple", "pear_green"] I'm trying to assert that expected = actual. Even though the color is added at the end to some elements, it should still return true. Things I tried: for i in expected: assert i in actual I was hoping something like this would work but it's trying to match the first element apple to banana and returns false rather than checking the entire list and returns true if there is apple anywhere in the list. I was hoping someone can help? A: Try this: def isequal(actual: list, expected: list) -> bool: actual.sort() expected.sort() if len(actual) != len(expected): return False for i, val in enumerate(expected): if not actual[i].startswith(val): return False return True print(isequal(actual, expected)) A: I suggest a custom assertion method.. This is a bit more than you need, but also a bit more flexible. Order of the entries is irrelevant How the color is added is irrelevant (e.g. by dash, underscore, ...) It still has the flaw that, if you have the color orange added to carot and are looking for orange, it would also assert sucessfully. For such cases you need to tailor the method to your actual needs. (e.g. replace the substring in string by string.startwith(substring) etc. However, this should give you a starting point: def assert_matching_list_contents(expected: list, actual: list) -> None: if len(expected) != len(actual): raise AssertionError('Length of the lists does not match!') for expected_value in expected: if not any(expected_value in entry for entry in actual): raise AssertionError(f'Expected entry "{expected_value}" not found') expectation = ["apple", "banana", "pear"] current = ["banana_yellow", "apple", "pear_green"] assert_matching_list_contents(expectation, current) A: expected = ["apple", "banana", "pear"] actual = ["banana_yellow", "apple", "pear_green", 'orange'] for act in actual: if not act.startswith(tuple(expected)): print(act) >>> orange If you want it to work in the opposite way, expected = ["apple", "banana", "pear", 'grapes'] actual = ["banana_yellow", "apple", "pear_green", 'orange'] expected_ = set(expected) for act in actual: for exp in expected: if act.startswith(exp): expected_.discard(exp) break assert not(expected_), f"{expected_} are not found in actual and " + f"{set(expected)-expected_} are found in actual" >>> AssertionError: {'grapes'} are not found in actual and {'apple', 'pear', 'banana'} are found in actual Another way, expected = ["apple", "banana", "pear", 'grapes'] actual = ["banana_yellow", "apple", "pear_green", 'orange'] for exp in expected: assert [exp for act in actual if act.startswith(exp)], f'{exp} not found' >>> AssertionError: grapes not found A: You can use .startswith, along with sorted lists to find it. expected = ["apple", "banana", "pear"] actual = ["banana_yellow", "apple", "pear_green"] expected.sort() # just looking for prefixes actual.sort() count = 0 for i in actual: if i.startswith(expected[count]): # prefix matches! count+=1 # move to next prefix if count != len(expected): # since it didn't find all expected print("Nope!") assert False It works because it just skips extras in actual, but count gets stuck if it can't find the prefix in actual, leading count to never reach the end of expected.
check if element is anywhere in the list
I have two lists: expected = ["apple", "banana", "pear"] actual = ["banana_yellow", "apple", "pear_green"] I'm trying to assert that expected matches actual. Even though a color is added to the end of some elements, it should still return true. Things I tried: for i in expected: assert i in actual I was hoping something like this would work, but it tries to match the first element apple to banana and returns false, rather than checking the entire list and returning true if there is apple anywhere in the list. I was hoping someone could help?
[ "Try this:\ndef isequal(actual: list, expected: list) -> bool:\n actual.sort()\n expected.sort()\n if len(actual) != len(expected):\n return False\n for i, val in enumerate(expected):\n if not actual[i].startswith(val):\n return False\n return True\n\nprint(isequal(actual, expected))\n\n", "I suggest a custom assertion method..\nThis is a bit more than you need, but also a bit more flexible.\n\nOrder of the entries is irrelevant\nHow the color is added is irrelevant (e.g. by dash, underscore, ...)\n\nIt still has the flaw that, if you have the color orange added to carot and are looking for orange, it would also assert sucessfully. For such cases you need to tailor the method to your actual needs. (e.g. replace the substring in string by string.startwith(substring) etc. However, this should give you a starting point:\ndef assert_matching_list_contents(expected: list, actual: list) -> None:\n if len(expected) != len(actual):\n raise AssertionError('Length of the lists does not match!')\n\n for expected_value in expected:\n if not any(expected_value in entry for entry in actual):\n raise AssertionError(f'Expected entry \"{expected_value}\" not found')\n\nexpectation = [\"apple\", \"banana\", \"pear\"]\ncurrent = [\"banana_yellow\", \"apple\", \"pear_green\"]\nassert_matching_list_contents(expectation, current)\n\n", "expected = [\"apple\", \"banana\", \"pear\"]\nactual = [\"banana_yellow\", \"apple\", \"pear_green\", 'orange']\n\nfor act in actual:\n if not act.startswith(tuple(expected)):\n print(act)\n>>>\norange\n\nIf you want it to work in the opposite way,\nexpected = [\"apple\", \"banana\", \"pear\", 'grapes']\nactual = [\"banana_yellow\", \"apple\", \"pear_green\", 'orange']\nexpected_ = set(expected)\nfor act in actual:\n for exp in expected:\n if act.startswith(exp):\n expected_.discard(exp)\n break\nassert not(expected_), f\"{expected_} are not found in actual and \" + f\"{set(expected)-expected_} are found in actual\"\n>>>\nAssertionError: {'grapes'} are not found in actual and {'apple', 'pear', 'banana'} are found in actual\n\nAnother way,\nexpected = [\"apple\", \"banana\", \"pear\", 'grapes']\nactual = [\"banana_yellow\", \"apple\", \"pear_green\", 'orange']\nfor exp in expected:\n assert [exp for act in actual if act.startswith(exp)], f'{exp} not found'\n>>>\nAssertionError: grapes not found\n\n", "You can use .startswith, along with sorted lists to find it.\nexpected = [\"apple\", \"banana\", \"pear\"]\nactual = [\"banana_yellow\", \"apple\", \"pear_green\"]\nexpected.sort() # just looking for prefixes\nactual.sort()\ncount = 0\nfor i in actual:\n if i.startswith(expected[count]): # prefix matches!\n count+=1 # move to next prefix\nif count != len(expected): # since it didn't find all expected\n print(\"Nope!\")\n assert False\n\nIt works because it just skips extras in actual, but count gets stuck if it can't find the prefix in actual, leading count to never reach the end of expected.\n" ]
[ 0, 0, 0, 0 ]
[]
[]
[ "compare", "list", "python" ]
stackoverflow_0074632667_compare_list_python.txt
Q: How to handle input value error when using under sampling methods from imblearn? Thank you for your help in advance. I am trying to use the RandomUnderSampler() and fit_sample() methods from imblearn to balance a botnet dataset with two missing values. The dataset contains a label column for binary classification that uses 0 and 1 as values. I am using Azure ML designer where I created a Python Script Execute Module and handled the missing data using the mean(). There are no infinity values and the largest decimal value is 5,747.13 and the smallest value is 0. **Dataset sample with few entries: ** Code Snippet: def azureml_main(dataframe1 = None, dataframe2 = None): # Handle Nan values dataframe1.fillna(dataframe1.mean(), inplace=False) # Execution logic goes here rus = RandomUnderSampler(random_state=0) X = dataframe1.drop(dataframe1[['label']], axis=1) y = np.squeeze(dataframe1[['label']]) X_rus, y_rus = rus.fit_sample(X, y) # **line 32 with the ValueError** **Error: ** ---------- Start of error message from Python interpreter ---------- Got exception when invoking script at line 32 in function azureml_main: 'ValueError: Input contains NaN, infinity or a value too large for dtype('float64').'. ---------- End of error message from Python interpreter ---------- I used fillna to address the 2 missing values. I am not sure how to handle the large decimal values without affecting the current values. A: Thank you Ghada. Posting your solution into answer section to help other community members. Used the to_numeric() function to convert the string to numeric after removing the spaces in the string. columns = ['flgs', 'proto', 'saddr', 'daddr', 'state', 'category', 'subcategory'] for x in columns: dataframe1[x] = pd.to_numeric(dataframe1[x].str.replace(' ', ''), downcast='float', errors ='coerce').fillna(0)
How to handle input value error when using under sampling methods from imblearn?
Thank you for your help in advance. I am trying to use the RandomUnderSampler() and fit_sample() methods from imblearn to balance a botnet dataset with two missing values. The dataset contains a label column for binary classification that uses 0 and 1 as values. I am using Azure ML designer where I created a Python Script Execute Module and handled the missing data using the mean(). There are no infinity values and the largest decimal value is 5,747.13 and the smallest value is 0. **Dataset sample with few entries: ** Code Snippet: def azureml_main(dataframe1 = None, dataframe2 = None): # Handle Nan values dataframe1.fillna(dataframe1.mean(), inplace=False) # Execution logic goes here rus = RandomUnderSampler(random_state=0) X = dataframe1.drop(dataframe1[['label']], axis=1) y = np.squeeze(dataframe1[['label']]) X_rus, y_rus = rus.fit_sample(X, y) # **line 32 with the ValueError** **Error: ** ---------- Start of error message from Python interpreter ---------- Got exception when invoking script at line 32 in function azureml_main: 'ValueError: Input contains NaN, infinity or a value too large for dtype('float64').'. ---------- End of error message from Python interpreter ---------- I used fillna to address the 2 missing values. I am not sure how to handle the large decimal values without affecting the current values.
[ "Thank you Ghada. Posting your solution into answer section to help other community members.\nUsed the to_numeric() function to convert the string to numeric after removing the spaces in the string.\n\ncolumns = ['flgs', 'proto', 'saddr', 'daddr', 'state', 'category', 'subcategory']\nfor x in columns: dataframe1[x] = pd.to_numeric(dataframe1[x].str.replace(' ', ''), downcast='float', errors ='coerce').fillna(0)\n\n" ]
[ 0 ]
[]
[]
[ "azure", "imbalanced_data", "machine_learning", "python" ]
stackoverflow_0073664778_azure_imbalanced_data_machine_learning_python.txt
Q: Reorganize pandas 'timed' dataframe into single row to allow for concat I have dataframes (stored in excel files) of data for a single participant each of which look like df1 = pd.DataFrame([['15:05', '15:06', '15:07', '15:08'], [7.333879016553067, 8.066897471204006, 7.070168678977272, 6.501888904228463], [64.16712081101915, 65.08486717007806, 67.22483766233766, 64.40328265521458], [114.21879259980525, 116.49792952572476, 113.26931818181818, 108.35424424108551]]).T df1.columns = ['Start', 'CO', 'Dia', 'Sys'] Start CO Dia Sys 0 15:05 7.33388 64.1671 114.219 1 15:06 8.0669 65.0849 116.498 2 15:07 7.07017 67.2248 113.269 3 15:08 6.50189 64.4033 108.354 and I need to unstack it into 1 row so that I can then read all the different participants into a single dataframe. I have tried using the answer to this question, and the answer to this question to get something like this (a multiindexed dataframe) Time 1 Time 2 CO Dia Sys CO Dia Sys 0 7.33388 64.1671 114.219 8.0669 65.0849 116.498 But what I'm ending up with is ('15:05', 'CO') ('15:05', 'Dia') ('15:05', 'Sys') ('15:06', 'CO') ('15:06', 'Dia') ('15:06', 'Sys') 0 7.33388 64.1671 114.219 nan nan nan 1 nan nan nan 8.0669 65.0849 116.498 So as you can see, each minute is still a new row but now they are arranged in an even less useful way. Can anyone offer advice? A: Assuming that each row is Time 0, Time 1, etc. We can use the index for our top level in the MultiIndex # convert index to string and add "Time " df1.index = "Time " + df1.index.astype(str) Then groupby the index, take the max (or some other aggregate that keeps the original values) of all columns besides "Start" (0th element), stack, convert back to a frame, and transpose out = df1.groupby(df1.index)[df1.columns[1:]].max().stack().to_frame().T
Reorganize pandas 'timed' dataframe into single row to allow for concat
I have dataframes (stored in excel files) of data for a single participant each of which look like df1 = pd.DataFrame([['15:05', '15:06', '15:07', '15:08'], [7.333879016553067, 8.066897471204006, 7.070168678977272, 6.501888904228463], [64.16712081101915, 65.08486717007806, 67.22483766233766, 64.40328265521458], [114.21879259980525, 116.49792952572476, 113.26931818181818, 108.35424424108551]]).T df1.columns = ['Start', 'CO', 'Dia', 'Sys'] Start CO Dia Sys 0 15:05 7.33388 64.1671 114.219 1 15:06 8.0669 65.0849 116.498 2 15:07 7.07017 67.2248 113.269 3 15:08 6.50189 64.4033 108.354 and I need to unstack it into 1 row so that I can then read all the different participants into a single dataframe. I have tried using the answer to this question, and the answer to this question to get something like this (a multiindexed dataframe) Time 1 Time 2 CO Dia Sys CO Dia Sys 0 7.33388 64.1671 114.219 8.0669 65.0849 116.498 But what I'm ending up with is ('15:05', 'CO') ('15:05', 'Dia') ('15:05', 'Sys') ('15:06', 'CO') ('15:06', 'Dia') ('15:06', 'Sys') 0 7.33388 64.1671 114.219 nan nan nan 1 nan nan nan 8.0669 65.0849 116.498 So as you can see, each minute is still a new row but now they are arranged in an even less useful way. Can anyone offer advice?
[ "Assuming that each row is Time 0, Time 1, etc. We can use the index for our top level in the MultiIndex\n# convert index to string and add \"Time \"\ndf1.index = \"Time \" + df1.index.astype(str)\n\nThen groupby the index, take the max (or some other aggregate that keeps the original values) of all columns besides \"Start\" (0th element), stack, convert back to a frame, and transpose\nout = df1.groupby(df1.index)[df1.columns[1:]].max().stack().to_frame().T\n\n\n" ]
[ 1 ]
[]
[]
[ "data_munging", "dataframe", "pandas", "python" ]
stackoverflow_0074632618_data_munging_dataframe_pandas_python.txt
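A runnable sketch of the full flow described in the answer above, applied to the df1 from the question; the participant key used in the final concat is a made-up placeholder, and max is only a convenient aggregate because each (time, measurement) group holds a single value.

import pandas as pd

# one participant, shaped like the frame in the question
df1 = pd.DataFrame([['15:05', '15:06', '15:07', '15:08'],
                    [7.334, 8.067, 7.070, 6.502],
                    [64.167, 65.085, 67.225, 64.403],
                    [114.219, 116.498, 113.269, 108.354]]).T
df1.columns = ['Start', 'CO', 'Dia', 'Sys']
df1[['CO', 'Dia', 'Sys']] = df1[['CO', 'Dia', 'Sys']].astype(float)  # .T leaves object dtype behind

# label the rows Time 0, Time 1, ... so every participant ends up with the same column labels
df1.index = "Time " + df1.index.astype(str)

# collapse to a single row whose columns are a (Time, measurement) MultiIndex
row = df1.groupby(df1.index)[['CO', 'Dia', 'Sys']].max().stack().to_frame().T

# repeating this per file and concatenating then gives one row per participant
combined = pd.concat({'participant_01': row})
print(combined)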
Q: How can I get all objects of a model and it's fields? I have a Product Model and I want to be able to count all it's objects in a method so I can render the total number in a template, same for each category but I can only do that with the number_of_likes method. class Category(models.Model): name = models.CharField(max_length=45) ... class Product(models.Model): author = models.ForeignKey(User, default=None, on_delete=models.CASCADE) title = models.CharField(max_length=120, unique=True) category = models.ForeignKey(Category, default=None, on_delete=models.PROTECT) product_type = models.CharField(max_length=30, choices=TYPE, default='Physical') likes = models.ManyToManyField(User, related_name='like') ... def number_of_likes(self): return self.likes.count() def number_of_products(self): return self.Products.objects.all.count() def number_of_products_for_category(self): return Product.objects.filter(category_id=self.category_id).count() def __str__(self): return str(self.title) + ' from ' + str(self.author) <div> <h3>All products ({{ number_of_products }})</h3> {% for category in categories %} <p>{{ category.name }} (q)</p> {{ number_of_products_for_category }} {% endfor %} </div> The number_of_products and number_of_products_for_category are the methods that aren't working. A: You can work with: def number_of_products_for_category(self): return Product.objects.filter(category_id=self.category_id).count() But if you use this for all categories, you can .annotate(..) [Django-doc] the queryset: from django.db.models import Count categories = Category.objects.annotate(num_products=Count('product')) and render with: {% for category in categories %} <p>{{ category.name }} ({{ category.num_products }})</p> {% endfor %} Note: It is normally better to make use of the settings.AUTH_USER_MODEL [Django-doc] to refer to the user model, than to use the User model [Django-doc] directly. For more information you can see the referencing the User model section of the documentation.
How can I get all objects of a model and its fields?
I have a Product Model and I want to be able to count all it's objects in a method so I can render the total number in a template, same for each category but I can only do that with the number_of_likes method. class Category(models.Model): name = models.CharField(max_length=45) ... class Product(models.Model): author = models.ForeignKey(User, default=None, on_delete=models.CASCADE) title = models.CharField(max_length=120, unique=True) category = models.ForeignKey(Category, default=None, on_delete=models.PROTECT) product_type = models.CharField(max_length=30, choices=TYPE, default='Physical') likes = models.ManyToManyField(User, related_name='like') ... def number_of_likes(self): return self.likes.count() def number_of_products(self): return self.Products.objects.all.count() def number_of_products_for_category(self): return Product.objects.filter(category_id=self.category_id).count() def __str__(self): return str(self.title) + ' from ' + str(self.author) <div> <h3>All products ({{ number_of_products }})</h3> {% for category in categories %} <p>{{ category.name }} (q)</p> {{ number_of_products_for_category }} {% endfor %} </div> The number_of_products and number_of_products_for_category are the methods that aren't working.
[ "You can work with:\ndef number_of_products_for_category(self):\n return Product.objects.filter(category_id=self.category_id).count()\nBut if you use this for all categories, you can .annotate(..) [Django-doc] the queryset:\nfrom django.db.models import Count\n\ncategories = Category.objects.annotate(num_products=Count('product'))\nand render with:\n{% for category in categories %}\n <p>{{ category.name }} ({{ category.num_products }})</p>\n{% endfor %}\n\n\nNote: It is normally better to make use of the settings.AUTH_USER_MODEL [Django-doc] to refer to the user model, than to use the User model [Django-doc] directly. For more information you can see the referencing the User model section of the documentation.\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0074632939_django_django_models_python.txt
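For completeness, a hedged sketch of how the annotated queryset from the answer could reach the template; the view name, template path and context keys are illustrative assumptions rather than anything given in the question.

# views.py (illustrative)
from django.db.models import Count
from django.shortcuts import render

from .models import Category, Product  # the models defined in the question

def product_overview(request):
    context = {
        'number_of_products': Product.objects.count(),  # total across all categories
        'categories': Category.objects.annotate(num_products=Count('product')),
    }
    return render(request, 'shop/product_overview.html', context)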
Q: convert matrix to image How would I go about going converting a list of lists of ints into a matrix plot in Python? The example data set is: [[3, 5, 3, 5, 2, 3, 2, 4, 3, 0, 5, 0, 3, 2], [5, 2, 2, 0, 0, 3, 2, 1, 0, 5, 3, 5, 0, 0], [2, 5, 3, 1, 1, 3, 3, 0, 0, 5, 4, 4, 3, 3], [4, 1, 4, 2, 1, 4, 5, 1, 2, 2, 0, 1, 2, 3], [5, 1, 1, 1, 5, 2, 5, 0, 4, 0, 2, 4, 4, 5], [5, 1, 0, 4, 5, 5, 4, 1, 3, 3, 1, 1, 0, 1], [3, 2, 2, 4, 3, 1, 5, 5, 0, 4, 3, 2, 4, 1], [4, 0, 1, 3, 2, 1, 2, 1, 0, 1, 5, 4, 2, 0], [2, 0, 4, 0, 4, 5, 1, 2, 1, 0, 3, 4, 3, 1], [2, 3, 4, 5, 4, 5, 0, 3, 3, 0, 2, 4, 4, 5], [5, 2, 4, 3, 3, 0, 5, 4, 0, 3, 4, 3, 2, 1], [3, 0, 4, 4, 4, 1, 4, 1, 3, 5, 1, 2, 1, 1], [3, 4, 2, 5, 2, 5, 1, 3, 5, 1, 4, 3, 4, 1], [0, 1, 1, 2, 3, 1, 2, 0, 1, 2, 4, 4, 2, 1]] To give you an idea of what I'm looking for, the function MatrixPlot in Mathematica gives me this image for this data set: Thanks! A: You may try from pylab import * A = rand(5,5) figure(1) imshow(A, interpolation='nearest') grid(True) source A: Perhaps matshow() from matplotlib is what you need. A: You can also use pyplot from matplotlib, follows the code: from matplotlib import pyplot as plt plt.imshow( [[3, 5, 3, 5, 2, 3, 2, 4, 3, 0, 5, 0, 3, 2], [5, 2, 2, 0, 0, 3, 2, 1, 0, 5, 3, 5, 0, 0], [2, 5, 3, 1, 1, 3, 3, 0, 0, 5, 4, 4, 3, 3], [4, 1, 4, 2, 1, 4, 5, 1, 2, 2, 0, 1, 2, 3], [5, 1, 1, 1, 5, 2, 5, 0, 4, 0, 2, 4, 4, 5], [5, 1, 0, 4, 5, 5, 4, 1, 3, 3, 1, 1, 0, 1], [3, 2, 2, 4, 3, 1, 5, 5, 0, 4, 3, 2, 4, 1], [4, 0, 1, 3, 2, 1, 2, 1, 0, 1, 5, 4, 2, 0], [2, 0, 4, 0, 4, 5, 1, 2, 1, 0, 3, 4, 3, 1], [2, 3, 4, 5, 4, 5, 0, 3, 3, 0, 2, 4, 4, 5], [5, 2, 4, 3, 3, 0, 5, 4, 0, 3, 4, 3, 2, 1], [3, 0, 4, 4, 4, 1, 4, 1, 3, 5, 1, 2, 1, 1], [3, 4, 2, 5, 2, 5, 1, 3, 5, 1, 4, 3, 4, 1], [0, 1, 1, 2, 3, 1, 2, 0, 1, 2, 4, 4, 2, 1]], interpolation='nearest') plt.show() The output would be:
convert matrix to image
How would I go about going converting a list of lists of ints into a matrix plot in Python? The example data set is: [[3, 5, 3, 5, 2, 3, 2, 4, 3, 0, 5, 0, 3, 2], [5, 2, 2, 0, 0, 3, 2, 1, 0, 5, 3, 5, 0, 0], [2, 5, 3, 1, 1, 3, 3, 0, 0, 5, 4, 4, 3, 3], [4, 1, 4, 2, 1, 4, 5, 1, 2, 2, 0, 1, 2, 3], [5, 1, 1, 1, 5, 2, 5, 0, 4, 0, 2, 4, 4, 5], [5, 1, 0, 4, 5, 5, 4, 1, 3, 3, 1, 1, 0, 1], [3, 2, 2, 4, 3, 1, 5, 5, 0, 4, 3, 2, 4, 1], [4, 0, 1, 3, 2, 1, 2, 1, 0, 1, 5, 4, 2, 0], [2, 0, 4, 0, 4, 5, 1, 2, 1, 0, 3, 4, 3, 1], [2, 3, 4, 5, 4, 5, 0, 3, 3, 0, 2, 4, 4, 5], [5, 2, 4, 3, 3, 0, 5, 4, 0, 3, 4, 3, 2, 1], [3, 0, 4, 4, 4, 1, 4, 1, 3, 5, 1, 2, 1, 1], [3, 4, 2, 5, 2, 5, 1, 3, 5, 1, 4, 3, 4, 1], [0, 1, 1, 2, 3, 1, 2, 0, 1, 2, 4, 4, 2, 1]] To give you an idea of what I'm looking for, the function MatrixPlot in Mathematica gives me this image for this data set: Thanks!
[ "You may try \nfrom pylab import *\nA = rand(5,5)\nfigure(1)\nimshow(A, interpolation='nearest')\ngrid(True)\n\n\nsource\n", "Perhaps matshow() from matplotlib is what you need.\n", "You can also use pyplot from matplotlib, follows the code:\nfrom matplotlib import pyplot as plt\n\n\nplt.imshow(\n[[3, 5, 3, 5, 2, 3, 2, 4, 3, 0, 5, 0, 3, 2],\n [5, 2, 2, 0, 0, 3, 2, 1, 0, 5, 3, 5, 0, 0],\n [2, 5, 3, 1, 1, 3, 3, 0, 0, 5, 4, 4, 3, 3],\n [4, 1, 4, 2, 1, 4, 5, 1, 2, 2, 0, 1, 2, 3],\n [5, 1, 1, 1, 5, 2, 5, 0, 4, 0, 2, 4, 4, 5],\n [5, 1, 0, 4, 5, 5, 4, 1, 3, 3, 1, 1, 0, 1],\n [3, 2, 2, 4, 3, 1, 5, 5, 0, 4, 3, 2, 4, 1],\n [4, 0, 1, 3, 2, 1, 2, 1, 0, 1, 5, 4, 2, 0],\n [2, 0, 4, 0, 4, 5, 1, 2, 1, 0, 3, 4, 3, 1],\n [2, 3, 4, 5, 4, 5, 0, 3, 3, 0, 2, 4, 4, 5],\n [5, 2, 4, 3, 3, 0, 5, 4, 0, 3, 4, 3, 2, 1],\n [3, 0, 4, 4, 4, 1, 4, 1, 3, 5, 1, 2, 1, 1],\n [3, 4, 2, 5, 2, 5, 1, 3, 5, 1, 4, 3, 4, 1],\n [0, 1, 1, 2, 3, 1, 2, 0, 1, 2, 4, 4, 2, 1]], interpolation='nearest')\n\nplt.show()\n\nThe output would be:\n\n" ]
[ 16, 9, 0 ]
[]
[]
[ "image", "matrix", "python" ]
stackoverflow_0004841611_image_matrix_python.txt
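The second answer names matshow() without showing it, so here is a minimal sketch; the 3x3 array is only a trimmed stand-in for the 14x14 grid in the question.

import numpy as np
import matplotlib.pyplot as plt

data = np.array([[3, 5, 3],
                 [5, 2, 2],
                 [2, 5, 3]])  # trimmed stand-in for the full matrix

fig, ax = plt.subplots()
im = ax.matshow(data)  # imshow preset for matrices: origin at the top-left, square cells
fig.colorbar(im)       # optional legend mapping colours back to the integer values
plt.show()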
Q: In dagster, how do I load_asset_value from a job executed in process with mem_io_manager? For this question, consider I have a repository with one asset: @asset def my_int(): return 1 @repository def my_repo(): return [my_int] I want to execute it in process (with mem_io_manager), but I would like to retrieve the value returned by my_int from memory later. I can do that with fs_io_manager, for example, using my_repo.load_asset_value('my_int'), after it ran. But the same method with mem_io_manager raises dagster._core.errors.DagsterInvariantViolationError: Attempting to access step_key, but it was not provided when constructing the OutputContext. Ideally, I would execute it in process and tell the executor to return me one (or more) of the assets, something like: my_assets = my_repo.get_job('__ASSET_JOB').execute_in_process(return_assets=[my_int, ...]) A: mem_io_manager doesn't store objects to file storage like fs_io_manager. You could in your my_int asset, save the value to a file or some other cloud storage and retrieve it later or Add the value as metadata if it is a simple integer or string and retrieve that later. For the second case, using metadata, you can do: @asset def my_int(context): return Output(my_int_value, metadata={'my_int_value': my_int_value}) and to retrieve it later you could in another asset: @asset def retrieve_my_int(context): asset_key = 'my_int' latest_materialization_event = ( self.init_context.instance.get_latest_materialization_events( [asset_key] ).get(asset_key) ) if latest_materialization_event: materialization = ( latest_materialization_event.dagster_event.event_specific_data.materialization ) metadata = { entry.label: entry.entry_data for entry in materialization.metadata_entries } retrieved_int = metadata['my_int_value'].value if 'my_int_value' in metadata.keys() else None ....... the metadata approach has limitations, as you can only store certain kinds of data. If you want to store any kind of data, you'd have to execute the jobs differently so that the results can be materialized to a file system or an io_manager of choice. You'd have to instead of execute_in_process, use materialize. @asset def my_int(context): .... @asset def asset_other(context): .... if __name__ == '__main__': asset_results = materialize( load_assets_from_current_module() ) This will materialize the assets and you could specify which io_manager to use in the resource parameter. To retrieve an asset value, you can do my_int_value = asset_results.output_for_node('my_int') A: Your question is a bit unclear - do you want to materialize the asset in-process, then at a later time / in a different process access the result? Or do you just want to execute in process and get the result back? In the former case, @Kay is correct that the result will disappear after the process completes as when using the mem_io_manager the memory the result is stored in is tied to the lifecycle of the process. In the latter case, you should be able to do something like from dagster import materialize asset_result = materialize([my_int]) A: Both answers given so far would require the materialization (to disk), which is not what I wanted in the first place, I wanted to retrieve the value from memory. But @kay pointed me in the right direction. output_for_node works on execute_in_process result, so that the following code achieves what I desired (retrieving my_int results from memory, after the jobs execution). 
from dagster import asset, repository @asset def my_int(): return 1 @repository def my_repo(): return [my_int] my_assets = my_repo.get_job('__ASSET_JOB').execute_in_process() my_assets.output_for_node("my_int")
In dagster, how do I load_asset_value from a job executed in process with mem_io_manager?
For this question, consider I have a repository with one asset: @asset def my_int(): return 1 @repository def my_repo(): return [my_int] I want to execute it in process (with mem_io_manager), but I would like to retrieve the value returned by my_int from memory later. I can do that with fs_io_manager, for example, using my_repo.load_asset_value('my_int'), after it ran. But the same method with mem_io_manager raises dagster._core.errors.DagsterInvariantViolationError: Attempting to access step_key, but it was not provided when constructing the OutputContext. Ideally, I would execute it in process and tell the executor to return me one (or more) of the assets, something like: my_assets = my_repo.get_job('__ASSET_JOB').execute_in_process(return_assets=[my_int, ...])
[ "mem_io_manager doesn't store objects to file storage like fs_io_manager. You could in your my_int asset,\n\nsave the value to a file or some other cloud storage and retrieve it later or\nAdd the value as metadata if it is a simple integer or string and retrieve that later.\n\nFor the second case, using metadata, you can do:\n@asset\ndef my_int(context):\n return Output(my_int_value, metadata={'my_int_value': my_int_value})\n\nand to retrieve it later you could in another asset:\n@asset\ndef retrieve_my_int(context):\n asset_key = 'my_int'\n latest_materialization_event = (\n self.init_context.instance.get_latest_materialization_events(\n [asset_key]\n ).get(asset_key)\n )\n if latest_materialization_event:\n materialization = (\n latest_materialization_event.dagster_event.event_specific_data.materialization\n )\n metadata = {\n entry.label: entry.entry_data\n for entry in materialization.metadata_entries\n }\n retrieved_int = metadata['my_int_value'].value if 'my_int_value' in metadata.keys() else None\n .......\n\nthe metadata approach has limitations, as you can only store certain kinds of data. If you want to store any kind of data, you'd have to execute the jobs differently so that the results can be materialized to a file system or an io_manager of choice.\nYou'd have to instead of execute_in_process, use materialize.\n@asset\ndef my_int(context):\n ....\n\n\n@asset\ndef asset_other(context):\n ....\n\n\nif __name__ == '__main__':\n asset_results = materialize(\n load_assets_from_current_module()\n )\n\nThis will materialize the assets and you could specify which io_manager to use in the resource parameter. To retrieve an asset value, you can do\nmy_int_value = asset_results.output_for_node('my_int')\n\n", "Your question is a bit unclear - do you want to materialize the asset in-process, then at a later time / in a different process access the result? Or do you just want to execute in process and get the result back?\nIn the former case, @Kay is correct that the result will disappear after the process completes as when using the mem_io_manager the memory the result is stored in is tied to the lifecycle of the process.\nIn the latter case, you should be able to do something like\nfrom dagster import materialize\nasset_result = materialize([my_int])\n\n", "Both answers given so far would require the materialization (to disk), which is not what I wanted in the first place, I wanted to retrieve the value from memory.\nBut @kay pointed me in the right direction. output_for_node works on execute_in_process result, so that the following code achieves what I desired (retrieving my_int results from memory, after the jobs execution).\nfrom dagster import asset, repository\n\n@asset\ndef my_int():\n return 1\n\n@repository\ndef my_repo():\n return [my_int]\n\nmy_assets = my_repo.get_job('__ASSET_JOB').execute_in_process()\nmy_assets.output_for_node(\"my_int\")\n\n" ]
[ 2, 0, 0 ]
[]
[]
[ "dagster", "python" ]
stackoverflow_0074615651_dagster_python.txt
Q: How to subtract/black-out regions within an image in Python OpenCV I am working on a project that involves automating a video game using computer vision. My next task involves separating the game's UI elements from the actual game's field of view. For instance: We would take a screenshot of the entire client window like so: Then we would locate the various UI elements on screen (this is accomplished with template matching, see: Template matching with transparent image templates using OpenCV Python). Then, we would need to subtract the matched templates from the client screenshot to end up with something like this: This allows me to perform computer vision functions on the game's field of view without the risk of the UI elements interfering. Let's say we have the following code: # Let's assume we have already taken a screenshot of the client window client_window = cv2.imread('client_window.png') # Here are the three UI areas I'd like to remove/blackout from the above image ui_chatbox = {'left': 0, 'top': 746, 'width': 520, 'height': 167} ui_minimap = {'left': 878, 'top': 31, 'width': 212, 'height': 207} ui_inventory = {'left': 849, 'top': 576, 'width': 241, 'height': 337} How can we blackout the pixels of the UI element bounding boxes from the client_window matrix? A: I ended up solving this by using slicing to set all pixels in a range to black. I thought of this because I knew that cropping an image was a similar process, and both involved selecting a range of pixels: import cv2 import numpy as np # Let's assume we have already taken a screenshot of the client window client_window = cv2.imread('client_window.png') cv2.imshow('client_window', client_window) cv2.waitKey(0) # Here are the three UI areas I'd like to remove/blackout from the above image ui_chatbox = {'left': 0, 'top': 746, 'width': 520, 'height': 167} ui_minimap = {'left': 878, 'top': 31, 'width': 212, 'height': 207} ui_inventory = {'left': 849, 'top': 576, 'width': 241, 'height': 337} client_window[ui_chatbox['top']:ui_chatbox['top'] + ui_chatbox['height'], ui_chatbox['left']:ui_chatbox['left'] + ui_chatbox['width']] = np.array([0,0,0]) client_window[ui_minimap['top']:ui_minimap['top'] + ui_minimap['height'], ui_minimap['left']:ui_minimap['left'] + ui_minimap['width']] = np.array([0,0,0]) client_window[ui_inventory['top']:ui_inventory['top'] + ui_inventory['height'], ui_inventory['left']:ui_inventory['left'] + ui_inventory['width']] = np.array([0,0,0]) cv2.imshow('result', client_window) cv2.waitKey(0)
How to subtract/black-out regions within an image in Python OpenCV
I am working on a project that involves automating a video game using computer vision. My next task involves separating the game's UI elements from the actual game's field of view. For instance: We would take a screenshot of the entire client window like so: Then we would locate the various UI elements on screen (this is accomplished with template matching, see: Template matching with transparent image templates using OpenCV Python). Then, we would need to subtract the matched templates from the client screenshot to end up with something like this: This allows me to perform computer vision functions on the game's field of view without the risk of the UI elements interfering. Let's say we have the following code: # Let's assume we have already taken a screenshot of the client window client_window = cv2.imread('client_window.png') # Here are the three UI areas I'd like to remove/blackout from the above image ui_chatbox = {'left': 0, 'top': 746, 'width': 520, 'height': 167} ui_minimap = {'left': 878, 'top': 31, 'width': 212, 'height': 207} ui_inventory = {'left': 849, 'top': 576, 'width': 241, 'height': 337} How can we blackout the pixels of the UI element bounding boxes from the client_window matrix?
[ "I ended up solving this by using slicing to set all pixels in a range to black. I thought of this because I knew that cropping an image was a similar process, and both involved selecting a range of pixels:\nimport cv2\nimport numpy as np\n\n# Let's assume we have already taken a screenshot of the client window\nclient_window = cv2.imread('client_window.png')\ncv2.imshow('client_window', client_window)\ncv2.waitKey(0)\n\n# Here are the three UI areas I'd like to remove/blackout from the above image\nui_chatbox = {'left': 0, 'top': 746, 'width': 520, 'height': 167}\nui_minimap = {'left': 878, 'top': 31, 'width': 212, 'height': 207}\nui_inventory = {'left': 849, 'top': 576, 'width': 241, 'height': 337}\n\nclient_window[ui_chatbox['top']:ui_chatbox['top'] + ui_chatbox['height'],\n ui_chatbox['left']:ui_chatbox['left'] + ui_chatbox['width']] = np.array([0,0,0])\nclient_window[ui_minimap['top']:ui_minimap['top'] + ui_minimap['height'],\n ui_minimap['left']:ui_minimap['left'] + ui_minimap['width']] = np.array([0,0,0])\nclient_window[ui_inventory['top']:ui_inventory['top'] + ui_inventory['height'],\n ui_inventory['left']:ui_inventory['left'] + ui_inventory['width']] = np.array([0,0,0])\ncv2.imshow('result', client_window)\ncv2.waitKey(0)\n\n" ]
[ 1 ]
[]
[]
[ "image_processing", "numpy", "python" ]
stackoverflow_0074632739_image_processing_numpy_python.txt
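The accepted answer repeats the same slicing three times; wrapping it in a small helper keeps the intent obvious. The region dictionaries are copied from the question, while the zero-filled frame is just a stand-in so the sketch runs without the real screenshot.

import numpy as np

def blackout(image, region):
    # zero out the pixels inside a {'left', 'top', 'width', 'height'} box, in place
    top, left = region['top'], region['left']
    image[top:top + region['height'], left:left + region['width']] = 0
    return image

ui_regions = {
    'chatbox':   {'left': 0,   'top': 746, 'width': 520, 'height': 167},
    'minimap':   {'left': 878, 'top': 31,  'width': 212, 'height': 207},
    'inventory': {'left': 849, 'top': 576, 'width': 241, 'height': 337},
}

client_window = np.zeros((960, 1200, 3), dtype=np.uint8)  # placeholder for the screenshot
for region in ui_regions.values():
    blackout(client_window, region)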
Q: Python Add an id column which resets based on another column value I have a data frame like the one below, and I want to add an id column that restarts based on the node value. node1,0.858 node1,0.897 node1,0.954 node2,3.784 node2,7.640 node2,11.592 For example, I want the output below 0, node1, 0.858 1, node1, 0.897 2, node1, 0.954 0, node2, 3.784 1, node2, 7.640 2, node2, 11.592 I have tried to use an index based on the node values but this would not rest the column's value after seeing a new node. I can use a loop but that is an anti-pattern in pandas. A: You can group by the column you wish to base the partition on and then use cumcount() or cumsum(). Then use set_index() to reassign the index to the new field. You can skip that line however if you just need the partition index as a column. import pandas as pd data = {'Name':['node1','node1','node1','node2','node2','node3'], 'Value':[1000,20000,40000,30000,589,682]} df = pd.DataFrame(data) df['New_Index'] = df.groupby('Name').cumcount() df.set_index('New_Index', inplace = True) display(df)
Python Add an id column which resets based on another column value
I have a data frame like the one below, and I want to add an id column that restarts based on the node value. node1,0.858 node1,0.897 node1,0.954 node2,3.784 node2,7.640 node2,11.592 For example, I want the output below 0, node1, 0.858 1, node1, 0.897 2, node1, 0.954 0, node2, 3.784 1, node2, 7.640 2, node2, 11.592 I have tried to use an index based on the node values but this would not reset the column's value after seeing a new node. I can use a loop but that is an anti-pattern in pandas.
[ "You can group by the column you wish to base the partition on and then use cumcount() or cumsum(). Then use set_index() to reassign the index to the new field. You can skip that line however if you just need the partition index as a column.\nimport pandas as pd\n\ndata = {'Name':['node1','node1','node1','node2','node2','node3'],\n 'Value':[1000,20000,40000,30000,589,682]}\n\ndf = pd.DataFrame(data)\n\ndf['New_Index'] = df.groupby('Name').cumcount()\ndf.set_index('New_Index', inplace = True)\n\ndisplay(df)\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074632959_dataframe_pandas_python.txt
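Applied to the data from the question rather than the answer's own example, the same cumcount() idea looks like this; the column names 'node' and 'value' are assumptions, since the pasted input has no header row.

import pandas as pd

df = pd.DataFrame({'node':  ['node1', 'node1', 'node1', 'node2', 'node2', 'node2'],
                   'value': [0.858, 0.897, 0.954, 3.784, 7.640, 11.592]})

# the id restarts at 0 every time a new node value begins
df.insert(0, 'id', df.groupby('node').cumcount())
print(df)
#    id   node   value
# 0   0  node1   0.858
# 1   1  node1   0.897
# 2   2  node1   0.954
# 3   0  node2   3.784
# 4   1  node2   7.640
# 5   2  node2  11.592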
Q: Postgres cursor execute for search bar not working properly (psycopg2) Trying to make a simple search bar for my website and found out about the "LIKE" feature of psycopg2, but I'm getting the error "Incomplete placeholder" and not sure how to fix it. [enter image description here](https://i.stack.imgur.com/lI2JC.png) Tried a bunch of stuff, too much to list out. I'm expecting it to return all rows that have a value similar to "Keyword" in column text. A: string = f"select * from tweet_fields WHERE text like '%{data.Keyword}%'" self.twittercursor.execute(string) Figured it out finally
Postgres cursor execute for search bar not working properly (psycopg2)
Trying to make a simple search bar for my website and found out about the "LIKE" feature of psycopg2, but I'm getting the error "Incomplete placeholder" and not sure how to fix it. [enter image description here](https://i.stack.imgur.com/lI2JC.png) Tried a bunch of stuff, too much to list out. I'm expecting it to return all rows that have a value similar to "Keyword" in column text.
[ "string = f\"select * from tweet_fields WHERE text like '%{data.Keyword}%'\"\nself.twittercursor.execute(string)\n\nFigured it out finally\n" ]
[ 0 ]
[]
[]
[ "orm", "postgresql", "psycopg2", "python", "sql" ]
stackoverflow_0074632237_orm_postgresql_psycopg2_python_sql.txt
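The self-answer above formats the keyword straight into the SQL string, which works but is open to quoting problems and SQL injection. A hedged alternative sketch with a psycopg2 placeholder follows; the connection string is invented and only the table and column names come from the answer.

import psycopg2

conn = psycopg2.connect("dbname=twitter user=postgres")  # hypothetical connection details
cur = conn.cursor()                                       # plays the role of self.twittercursor

keyword = "example"                                       # whatever the search bar submitted
cur.execute(
    "SELECT * FROM tweet_fields WHERE text ILIKE %s",     # ILIKE = case-insensitive LIKE
    (f"%{keyword}%",),                                    # psycopg2 quotes the value safely
)
rows = cur.fetchall()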
Q: python dict to html unordered list Currently im trying to transform this python dict to a html unordered list: {'dataStreamId': 'raw:com.google.nutrition:NutritionSource', 'dataStreamName': 'NutritionSource', 'type': 'raw', 'dataType': {'name': 'com.google.nutrition', 'field': [{'name': 'nutrients', 'format': 'map'}, {'name': 'meal_type', 'format': 'integer', 'optional': True}, {'name': 'food_item', 'format': 'string', 'optional': True}]}, 'application': {'version': '1', 'detailsUrl': 'http://example.com', 'name': 'My Example App'}, 'dataQualityStandard': []} with this function: def dict_to_html_ul(dd, level=4): import simplejson text = '<ul>' for k, v in dd.items(): text += '<li><b>%s</b>: %s</li>' % (k, dict_to_html_ul(v, level+1) if isinstance(v, (dict)) else (simplejson.dumps(v) if isinstance(v, list) else v)) text += '</ul>' return text but I am getting this result: <ul> <li><b>dataStreamId</b>: raw:com.google.nutrition:NutritionSource</li> <li><b>dataStreamName</b>: NutritionSource</li> <li><b>type</b>: raw</li> <li><b>dataType</b>: <ul> <li><b>name</b>: com.google.nutrition</li> <li><b>field</b>: [{"name": "nutrients", "format": "map"}, {"name": "meal_type", "format": "integer", "optional": true}, {"name": "food_item", "format": "string", "optional": true}]</li> </ul> </li> <li><b>application</b>: <ul> <li><b>version</b>: 1</li> <li><b>detailsUrl</b>: http://example.com</li> <li><b>name</b>: My Example App</li> </ul> </li> <li><b>dataQualityStandard</b>: []</li> </ul> And I am having issues trying to fix the result, basically I wanted to transform the rest of the result in the same way as the function was going. I tried to transform the text with some string replacing after the function: text = text.replace('[','').replace(']', '') text= text.replace('{', '<br>' + '&nbsp;' * level).replace('}', '') text = text.replace(',', '<br>' + '&nbsp;' * (level-1)) It came out with some extraspacing and I could not replace some parts like this: "word": "word" So I tried to make a "re.sub()" but did not have success. 
Edit: Expected output: <ul> <li><b>dataStreamId</b>: raw:com.google.nutrition:NutritionSource</li> <li><b>dataStreamName</b>: NutritionSource</li> <li><b>type</b>: raw</li> <li><b>dataType</b>: <ul> <li><b>name</b>: com.google.nutrition</li> <li><b>field</b>: <ul> <li><b>name</b>: nutrients</li> <li><b>format</b>: map</li> <br> <li><b>name</b>: meal_type</li> <li><b>format</b>: integer</li> <li><b>optional</b>: true</li> <br> <li><b>name</b>: food_item</li> <li><b>format</b>: string</li> <li><b>optional</b>: true</li> </ul> </li> </ul> </li> <li><b>application</b>: <ul> <li><b>version</b>: 1</li> <li><b>detailsUrl</b>: http://example.com</li> <li><b>name</b>: My Example App</li> </ul> </li> <li><b>dataQualityStandard</b>: []</li> </ul> Edit2: Thanks for the answer @AndrejKesely I actually got the proper html in the first specific dict, but I have another dict that didnt actually work with this function: – a = {'minStartTimeNs': '1573159699023000000', 'maxEndTimeNs': '1573159699023999000', 'dataSourceId': 'raw:com.google.nutrition:NutritionSource', 'point': [{'startTimeNanos': '1573159699023000000', 'endTimeNanos': '1573159699023999000', 'dataTypeName': 'com.google.nutrition', 'value': [{'mapVal': [{'key': 'fat.total', 'value': {'fpVal': 0.4}}, {'key': 'sodium', 'value': {'fpVal': 1}}, {'key': 'fat.saturated', 'value': { 'fpVal': 0.1}}, {'key': 'protein', 'value': {'fpVal': 1.3}}, {'key': 'carbs.total', 'value': {'fpVal': 27}}, {'key': 'cholesterol', 'value': {'fpVal': 0}}, {'key': 'calories', 'value': {'fpVal': 105}}, {'key': 'sugar', 'value': {'fpVal': 14}}, {'key': 'dietary_fiber', 'value': {'fpVal': 3.1}}, {'key': 'potassium', 'value': {'fpVal': 422}}]}, {'intVal': 4, 'mapVal': []}, {'stringVal': 'apple', 'mapVal': []}]}]} I was expecting a function that could work for both but the output with get_html() in the other dict outputs: <ul> <li><b>minStartTimeNs</b>: 1573159699023000000</li> <li><b>maxEndTimeNs</b>: 1573159699023999000</li> <li><b>dataSourceId</b>: raw:com.google.nutrition:NutritionSource</li> <li><b>point</b>: <ul> <li><b>startTimeNanos</b>: 1573159699023000000</li> <li><b>endTimeNanos</b>: 1573159699023999000</li> <li><b>dataTypeName</b>: com.google.nutrition</li> <li><b>value</b>: [{'mapVal': [{'key': 'fat.total', 'value': {'fpVal': 0.4}}, {'key': 'sodium', 'value': {'fpVal': 1}}, {'key': 'fat.saturated', 'value': {'fpVal': 0.1}}, {'key': 'protein', 'value': {'fpVal': 1.3}}, {'key': 'carbs.total', 'value': {'fpVal': 27}}, {'key': 'cholesterol', 'value': {'fpVal': 0}}, {'key': 'calories', 'value': {'fpVal': 105}}, {'key': 'sugar', 'value': {'fpVal': 14}}, {'key': 'dietary_fiber', 'value': {'fpVal': 3.1}}, {'key': 'potassium', 'value': {'fpVal': 422}}]}, {'intVal': 4, 'mapVal': []}, {'stringVal': 'apple', 'mapVal': []}]</li> </ul> </li> </ul> And I was expecting: <ul> <li><b>minStartTimeNs</b>: 1573159699023000000</li> <li><b>maxEndTimeNs</b>: 1573159699023999000</li> <li><b>dataSourceId</b>: raw:com.google.nutrition:NutritionSource</li> <li><b>point</b>: <ul> <li><b>startTimeNanos</b>: 1573159699023000000</li> <li><b>endTimeNanos</b>: 1573159699023999000</li> <li><b>dataTypeName</b>: com.google.nutrition</li> <li><b>value</b>: <ul> <li><b>mapVal</b>: <ul> <li><b>key</b>: fat.total</li> <li><b>value</b>: <ul> <li><b>fpVal</b>: 0.4</li> </ul> </li> <br> <li><b>key</b>: sodium</li> <li><b>value</b>: <ul> <li><b>fpVal</b>: 1</li> </ul> </li> <br> <li><b>key</b>:</li> 'fat.saturated', <li><b>value</b>: <ul> <li><b>fpVal</b>: 0.4</li> </ul> </li> <br> <li><b>key</b>:</li> 
'protein', <li><b>value</b>: <ul> <li><b>fpVal</b>: 5.4</li> </ul> </li> <br> <li><b>key</b>:</li> 'carbs.total', <li><b>value</b>: <ul> <li><b>fpVal</b>: 6.4</li> </ul> </li> <br> <li><b>key</b>:</li> 'cholesterol', <li><b>value</b>: <ul> <li><b>fpVal</b>: 4.5</li> </ul> </li> <br> <li><b>key</b>:</li> 'calories', <li><b>value</b>: <ul> <li><b>fpVal</b>: 3.4</li> </ul> </li> <br> <li><b>key</b>:</li> 'sugar', <li><b>value</b>: <ul> <li><b>fpVal</b>: 5.5</li> </ul> </li> <br> <li><b>key</b>:</li>'dietary_fiber', <li><b>value</b>: <ul> <li><b>fpVal</b>: 1</li> </ul> </li> <br> <li><b>key</b>:</li> 'potassium', <li><b>value</b>: <ul> <li><b>fpVal</b>: 2</li> </ul> </li> <br> </ul> <li><b>intVal</b>: 4</li> <li><b>mapVal</b>: []</li> <br> <li><b>stringVal</b>: apple</li> <li><b>mapVal</b>: []</li> </li> </ul> </li> </ul> </li> </ul> A: Try: dct = { "dataStreamId": "raw:com.google.nutrition:NutritionSource", "dataStreamName": "NutritionSource", "type": "raw", "dataType": { "name": "com.google.nutrition", "field": [ {"name": "nutrients", "format": "map"}, {"name": "meal_type", "format": "integer", "optional": True}, {"name": "food_item", "format": "string", "optional": True}, ], }, "application": { "version": "1", "detailsUrl": "http://example.com", "name": "My Example App", }, "dataQualityStandard": [], } def get_html(o): s = "" if isinstance(o, dict): s += "<ul>\n" for k, v in o.items(): s += f"<li><b>{k}</b>: " + get_html(v) + "</li>\n" s += "</ul>" elif isinstance(o, list): s += "<ul>\n" if not o: return str(o) else: out = [] for v in o: ss = "" for kk, vv in v.items(): ss += f"<li><b>{kk}</b>: {vv}</li>\n" out.append(ss) s += "<br>\n".join(out) s += "</ul>" else: return str(o) return s print(get_html(dct)) Prints ("beautified" result): <ul> <li><b>dataStreamId</b>: raw:com.google.nutrition:NutritionSource</li> <li><b>dataStreamName</b>: NutritionSource</li> <li><b>type</b>: raw</li> <li><b>dataType</b>: <ul> <li><b>name</b>: com.google.nutrition</li> <li><b>field</b>: <ul> <li><b>name</b>: nutrients</li> <li><b>format</b>: map</li> <br> <li><b>name</b>: meal_type</li> <li><b>format</b>: integer</li> <li><b>optional</b>: True</li> <br> <li><b>name</b>: food_item</li> <li><b>format</b>: string</li> <li><b>optional</b>: True</li> </ul> </li> </ul> </li> <li><b>application</b>: <ul> <li><b>version</b>: 1</li> <li><b>detailsUrl</b>: http://example.com</li> <li><b>name</b>: My Example App</li> </ul> </li> <li><b>dataQualityStandard</b>: []</li> </ul> A: Got this function to work thanks to @AndrejKesely for basically crushing it def get_html(o): s = "" if isinstance(o, dict): s += "<ul>\n" for k, v in o.items(): s += f"<li><b>{k}</b>: " + get_html(v) + "</li>\n" s += "</ul>" elif isinstance(o, list): s += "<ul>\n" if not o: return str(o) else: out = [] for v in o: ss = "" for kk, vv in v.items(): ss += f"<li><b>{kk}</b>:" + get_html(vv) + " </li>\n" out.append(ss) s += "<br>\n".join(out) s += "</ul>" else: return str(o) return s The only thing I changed in his function so it could work in any list/dict was make this: for v in o: ss = "" for kk, vv in v.items(): ss += f"<li><b>{kk}</b>: {vv}</li>\n" out.append(ss) become iterative like this: for v in o: ss = "" for kk, vv in v.items(): ss += f"<li><b>{kk}</b>:" + get_html(vv) + " </li>\n" out.append(ss)
python dict to html unordered list
Currently im trying to transform this python dict to a html unordered list: {'dataStreamId': 'raw:com.google.nutrition:NutritionSource', 'dataStreamName': 'NutritionSource', 'type': 'raw', 'dataType': {'name': 'com.google.nutrition', 'field': [{'name': 'nutrients', 'format': 'map'}, {'name': 'meal_type', 'format': 'integer', 'optional': True}, {'name': 'food_item', 'format': 'string', 'optional': True}]}, 'application': {'version': '1', 'detailsUrl': 'http://example.com', 'name': 'My Example App'}, 'dataQualityStandard': []} with this function: def dict_to_html_ul(dd, level=4): import simplejson text = '<ul>' for k, v in dd.items(): text += '<li><b>%s</b>: %s</li>' % (k, dict_to_html_ul(v, level+1) if isinstance(v, (dict)) else (simplejson.dumps(v) if isinstance(v, list) else v)) text += '</ul>' return text but I am getting this result: <ul> <li><b>dataStreamId</b>: raw:com.google.nutrition:NutritionSource</li> <li><b>dataStreamName</b>: NutritionSource</li> <li><b>type</b>: raw</li> <li><b>dataType</b>: <ul> <li><b>name</b>: com.google.nutrition</li> <li><b>field</b>: [{"name": "nutrients", "format": "map"}, {"name": "meal_type", "format": "integer", "optional": true}, {"name": "food_item", "format": "string", "optional": true}]</li> </ul> </li> <li><b>application</b>: <ul> <li><b>version</b>: 1</li> <li><b>detailsUrl</b>: http://example.com</li> <li><b>name</b>: My Example App</li> </ul> </li> <li><b>dataQualityStandard</b>: []</li> </ul> And I am having issues trying to fix the result, basically I wanted to transform the rest of the result in the same way as the function was going. I tried to transform the text with some string replacing after the function: text = text.replace('[','').replace(']', '') text= text.replace('{', '<br>' + '&nbsp;' * level).replace('}', '') text = text.replace(',', '<br>' + '&nbsp;' * (level-1)) It came out with some extraspacing and I could not replace some parts like this: "word": "word" So I tried to make a "re.sub()" but did not have success. 
Edit: Expected output: <ul> <li><b>dataStreamId</b>: raw:com.google.nutrition:NutritionSource</li> <li><b>dataStreamName</b>: NutritionSource</li> <li><b>type</b>: raw</li> <li><b>dataType</b>: <ul> <li><b>name</b>: com.google.nutrition</li> <li><b>field</b>: <ul> <li><b>name</b>: nutrients</li> <li><b>format</b>: map</li> <br> <li><b>name</b>: meal_type</li> <li><b>format</b>: integer</li> <li><b>optional</b>: true</li> <br> <li><b>name</b>: food_item</li> <li><b>format</b>: string</li> <li><b>optional</b>: true</li> </ul> </li> </ul> </li> <li><b>application</b>: <ul> <li><b>version</b>: 1</li> <li><b>detailsUrl</b>: http://example.com</li> <li><b>name</b>: My Example App</li> </ul> </li> <li><b>dataQualityStandard</b>: []</li> </ul> Edit2: Thanks for the answer @AndrejKesely I actually got the proper html in the first specific dict, but I have another dict that didnt actually work with this function: – a = {'minStartTimeNs': '1573159699023000000', 'maxEndTimeNs': '1573159699023999000', 'dataSourceId': 'raw:com.google.nutrition:NutritionSource', 'point': [{'startTimeNanos': '1573159699023000000', 'endTimeNanos': '1573159699023999000', 'dataTypeName': 'com.google.nutrition', 'value': [{'mapVal': [{'key': 'fat.total', 'value': {'fpVal': 0.4}}, {'key': 'sodium', 'value': {'fpVal': 1}}, {'key': 'fat.saturated', 'value': { 'fpVal': 0.1}}, {'key': 'protein', 'value': {'fpVal': 1.3}}, {'key': 'carbs.total', 'value': {'fpVal': 27}}, {'key': 'cholesterol', 'value': {'fpVal': 0}}, {'key': 'calories', 'value': {'fpVal': 105}}, {'key': 'sugar', 'value': {'fpVal': 14}}, {'key': 'dietary_fiber', 'value': {'fpVal': 3.1}}, {'key': 'potassium', 'value': {'fpVal': 422}}]}, {'intVal': 4, 'mapVal': []}, {'stringVal': 'apple', 'mapVal': []}]}]} I was expecting a function that could work for both but the output with get_html() in the other dict outputs: <ul> <li><b>minStartTimeNs</b>: 1573159699023000000</li> <li><b>maxEndTimeNs</b>: 1573159699023999000</li> <li><b>dataSourceId</b>: raw:com.google.nutrition:NutritionSource</li> <li><b>point</b>: <ul> <li><b>startTimeNanos</b>: 1573159699023000000</li> <li><b>endTimeNanos</b>: 1573159699023999000</li> <li><b>dataTypeName</b>: com.google.nutrition</li> <li><b>value</b>: [{'mapVal': [{'key': 'fat.total', 'value': {'fpVal': 0.4}}, {'key': 'sodium', 'value': {'fpVal': 1}}, {'key': 'fat.saturated', 'value': {'fpVal': 0.1}}, {'key': 'protein', 'value': {'fpVal': 1.3}}, {'key': 'carbs.total', 'value': {'fpVal': 27}}, {'key': 'cholesterol', 'value': {'fpVal': 0}}, {'key': 'calories', 'value': {'fpVal': 105}}, {'key': 'sugar', 'value': {'fpVal': 14}}, {'key': 'dietary_fiber', 'value': {'fpVal': 3.1}}, {'key': 'potassium', 'value': {'fpVal': 422}}]}, {'intVal': 4, 'mapVal': []}, {'stringVal': 'apple', 'mapVal': []}]</li> </ul> </li> </ul> And I was expecting: <ul> <li><b>minStartTimeNs</b>: 1573159699023000000</li> <li><b>maxEndTimeNs</b>: 1573159699023999000</li> <li><b>dataSourceId</b>: raw:com.google.nutrition:NutritionSource</li> <li><b>point</b>: <ul> <li><b>startTimeNanos</b>: 1573159699023000000</li> <li><b>endTimeNanos</b>: 1573159699023999000</li> <li><b>dataTypeName</b>: com.google.nutrition</li> <li><b>value</b>: <ul> <li><b>mapVal</b>: <ul> <li><b>key</b>: fat.total</li> <li><b>value</b>: <ul> <li><b>fpVal</b>: 0.4</li> </ul> </li> <br> <li><b>key</b>: sodium</li> <li><b>value</b>: <ul> <li><b>fpVal</b>: 1</li> </ul> </li> <br> <li><b>key</b>:</li> 'fat.saturated', <li><b>value</b>: <ul> <li><b>fpVal</b>: 0.4</li> </ul> </li> <br> <li><b>key</b>:</li> 
'protein', <li><b>value</b>: <ul> <li><b>fpVal</b>: 5.4</li> </ul> </li> <br> <li><b>key</b>:</li> 'carbs.total', <li><b>value</b>: <ul> <li><b>fpVal</b>: 6.4</li> </ul> </li> <br> <li><b>key</b>:</li> 'cholesterol', <li><b>value</b>: <ul> <li><b>fpVal</b>: 4.5</li> </ul> </li> <br> <li><b>key</b>:</li> 'calories', <li><b>value</b>: <ul> <li><b>fpVal</b>: 3.4</li> </ul> </li> <br> <li><b>key</b>:</li> 'sugar', <li><b>value</b>: <ul> <li><b>fpVal</b>: 5.5</li> </ul> </li> <br> <li><b>key</b>:</li>'dietary_fiber', <li><b>value</b>: <ul> <li><b>fpVal</b>: 1</li> </ul> </li> <br> <li><b>key</b>:</li> 'potassium', <li><b>value</b>: <ul> <li><b>fpVal</b>: 2</li> </ul> </li> <br> </ul> <li><b>intVal</b>: 4</li> <li><b>mapVal</b>: []</li> <br> <li><b>stringVal</b>: apple</li> <li><b>mapVal</b>: []</li> </li> </ul> </li> </ul> </li> </ul>
[ "Try:\ndct = {\n \"dataStreamId\": \"raw:com.google.nutrition:NutritionSource\",\n \"dataStreamName\": \"NutritionSource\",\n \"type\": \"raw\",\n \"dataType\": {\n \"name\": \"com.google.nutrition\",\n \"field\": [\n {\"name\": \"nutrients\", \"format\": \"map\"},\n {\"name\": \"meal_type\", \"format\": \"integer\", \"optional\": True},\n {\"name\": \"food_item\", \"format\": \"string\", \"optional\": True},\n ],\n },\n \"application\": {\n \"version\": \"1\",\n \"detailsUrl\": \"http://example.com\",\n \"name\": \"My Example App\",\n },\n \"dataQualityStandard\": [],\n}\n\n\ndef get_html(o):\n s = \"\"\n\n if isinstance(o, dict):\n s += \"<ul>\\n\"\n\n for k, v in o.items():\n s += f\"<li><b>{k}</b>: \" + get_html(v) + \"</li>\\n\"\n\n s += \"</ul>\"\n elif isinstance(o, list):\n s += \"<ul>\\n\"\n\n if not o:\n return str(o)\n else:\n out = []\n for v in o:\n ss = \"\"\n for kk, vv in v.items():\n ss += f\"<li><b>{kk}</b>: {vv}</li>\\n\"\n out.append(ss)\n\n s += \"<br>\\n\".join(out)\n s += \"</ul>\"\n else:\n return str(o)\n\n return s\n\n\nprint(get_html(dct))\n\nPrints (\"beautified\" result):\n<ul>\n <li><b>dataStreamId</b>: raw:com.google.nutrition:NutritionSource</li>\n <li><b>dataStreamName</b>: NutritionSource</li>\n <li><b>type</b>: raw</li>\n <li><b>dataType</b>:\n <ul>\n <li><b>name</b>: com.google.nutrition</li>\n <li><b>field</b>:\n <ul>\n <li><b>name</b>: nutrients</li>\n <li><b>format</b>: map</li>\n <br>\n <li><b>name</b>: meal_type</li>\n <li><b>format</b>: integer</li>\n <li><b>optional</b>: True</li>\n <br>\n <li><b>name</b>: food_item</li>\n <li><b>format</b>: string</li>\n <li><b>optional</b>: True</li>\n </ul>\n </li>\n </ul>\n </li>\n <li><b>application</b>:\n <ul>\n <li><b>version</b>: 1</li>\n <li><b>detailsUrl</b>: http://example.com</li>\n <li><b>name</b>: My Example App</li>\n </ul>\n </li>\n <li><b>dataQualityStandard</b>: []</li>\n</ul>\n\n", "Got this function to work thanks to @AndrejKesely for basically crushing it\ndef get_html(o):\n s = \"\"\n if isinstance(o, dict):\n s += \"<ul>\\n\"\n for k, v in o.items():\n s += f\"<li><b>{k}</b>: \" + get_html(v) + \"</li>\\n\"\n s += \"</ul>\"\n elif isinstance(o, list):\n s += \"<ul>\\n\"\n if not o:\n return str(o)\n else:\n out = []\n for v in o:\n ss = \"\"\n for kk, vv in v.items():\n ss += f\"<li><b>{kk}</b>:\" + get_html(vv) + \" </li>\\n\"\n out.append(ss)\n s += \"<br>\\n\".join(out)\n s += \"</ul>\"\n else:\n return str(o)\n return s\n\nThe only thing I changed in his function so it could work in any list/dict was make this:\n for v in o:\n ss = \"\"\n for kk, vv in v.items():\n ss += f\"<li><b>{kk}</b>: {vv}</li>\\n\"\n out.append(ss)\n\nbecome iterative like this:\n for v in o:\n ss = \"\"\n for kk, vv in v.items():\n ss += f\"<li><b>{kk}</b>:\" + get_html(vv) + \" </li>\\n\"\n out.append(ss)\n\n" ]
[ 1, 1 ]
[]
[]
[ "dictionary", "html_lists", "python", "python_3.x" ]
stackoverflow_0074628578_dictionary_html_lists_python_python_3.x.txt
Q: How to read multiline json-like file with multiple JSON fragments separated by just a new line? I have a json file with multiple json objects (each object can be a multiple line json) Example: {"date": "2022-11-29", "runs": [{"23597": 821260}, {"23617": 821699}]} {"date": "2022-11-30", "runs": [{"23597": 821269}, {"23617": 8213534}]} Note that indeed this is not valid JSON as whole file (and hence regular "read JSON in Python" code fails, expected), but each individual "fragment" is complete and valid JSON. It sounds like file was produced by some logging tool that simply appends the next block as text to the file. As expected, regular way of reading that I have tried with the below snippet fails: with open('run_log.json','r') as file: d = json.load(file) print(d) Produces expected error about invalid JSON: JSONDecodeError: Extra data: line 3 column 1 (char 89) How can I solve this, possibly using the json module? Ideally, I want to read the json file and get the runs list for only a particular date (Ex : 2022-11-30), but just being able to read all entries would be enough. A: NDJSON, not JSON. It's a valid file format and often confused for JSON. Python of course has a library for this. import ndjson with open('run_log.json','r') as file: d = ndjson.load(file) for elem in d: print(type(elem), elem) output <class 'dict'> {'date': '2022-11-29', 'runs': [{'23597': 821260}, {'23617': 821699}]} <class 'dict'> {'date': '2022-11-30', 'runs': [{'23597': 821269}, {'23617': 8213534}]} A: Each line is valid JSON (See JSON Lines format) and it makes a nice format as a logger since a file can append new JSON lines without read/modify/write of the whole file as JSON would require. You can use json.loads() to parse it a line at a time. Given run_log.json: {"date": "2022-11-29", "runs": [{"23597": 821260}, {"23617": 821699}]} {"date": "2022-11-30", "runs": [{"23597": 821269}, {"23617": 8213534}]} Use: import json with open('run_log.json', encoding='utf8') as file: for line in file: data = json.loads(line) print(data) Output: {'date': '2022-11-29', 'runs': [{'23597': 821260}, {'23617': 821699}]} {'date': '2022-11-30', 'runs': [{'23597': 821269}, {'23617': 8213534}]}
How to read multiline json-like file with multiple JSON fragments separated by just a new line?
I have a json file with multiple json objects (each object can be a multiple line json) Example: {"date": "2022-11-29", "runs": [{"23597": 821260}, {"23617": 821699}]} {"date": "2022-11-30", "runs": [{"23597": 821269}, {"23617": 8213534}]} Note that indeed this is not valid JSON as whole file (and hence regular "read JSON in Python" code fails, expected), but each individual "fragment" is complete and valid JSON. It sounds like file was produced by some logging tool that simply appends the next block as text to the file. As expected, regular way of reading that I have tried with the below snippet fails: with open('run_log.json','r') as file: d = json.load(file) print(d) Produces expected error about invalid JSON: JSONDecodeError: Extra data: line 3 column 1 (char 89) How can I solve this, possibly using the json module? Ideally, I want to read the json file and get the runs list for only a particular date (Ex : 2022-11-30), but just being able to read all entries would be enough.
[ "NDJSON, not JSON.\nIt's a valid file format and often confused for JSON.\nPython of course has a library for this.\nimport ndjson\n\nwith open('run_log.json','r') as file:\n d = ndjson.load(file)\n for elem in d:\n print(type(elem), elem)\n\noutput\n<class 'dict'> {'date': '2022-11-29', 'runs': [{'23597': 821260}, {'23617': 821699}]}\n<class 'dict'> {'date': '2022-11-30', 'runs': [{'23597': 821269}, {'23617': 8213534}]}\n\n", "Each line is valid JSON (See JSON Lines format) and it makes a nice format as a logger since a file can append new JSON lines without read/modify/write of the whole file as JSON would require.\nYou can use json.loads() to parse it a line at a time.\nGiven run_log.json:\n{\"date\": \"2022-11-29\", \"runs\": [{\"23597\": 821260}, {\"23617\": 821699}]}\n{\"date\": \"2022-11-30\", \"runs\": [{\"23597\": 821269}, {\"23617\": 8213534}]}\n\nUse:\nimport json\n\nwith open('run_log.json', encoding='utf8') as file:\n for line in file:\n data = json.loads(line)\n print(data)\n\nOutput:\n{'date': '2022-11-29', 'runs': [{'23597': 821260}, {'23617': 821699}]}\n{'date': '2022-11-30', 'runs': [{'23597': 821269}, {'23617': 8213534}]}\n\n" ]
[ 0, 0 ]
[]
[]
[ "json", "python" ]
stackoverflow_0074632449_json_python.txt
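Building on the line-by-line reading shown in the answers, the sketch below also does the filtering the asker mentioned, collecting the runs for one particular date; the file name and target date are taken from the question.

import json

target_date = "2022-11-30"
runs_for_date = []

with open("run_log.json", encoding="utf8") as file:
    for line in file:
        line = line.strip()
        if not line:              # tolerate blank lines between fragments
            continue
        record = json.loads(line)
        if record.get("date") == target_date:
            runs_for_date.extend(record["runs"])

print(runs_for_date)              # [{'23597': 821269}, {'23617': 8213534}]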
Q: What does sum(x%2==0) mean?? (python) import numpy as np x = np.array([1, -1, 2, 5, 7]) print(sum(x%2==0)) This is the code, and I can't understand what does ' sum(x%2==0) ' mean. Does it mean to sum even number? I'm studying for school test and My professor said output of the above code is 1. But I can't understand what does ' sum(x%2==0)' mean.. A: Set1: import numpy as np x = np.array([1, -1, 2, 5, 7,4]) print(x) y = sum(x) print(y) print(x%2==0) print(sum(x%2==0)) Output: [ 1 -1 2 5 7] 14 [False False True False False] 1 Set2: import numpy as np x = np.array([1, -1, 2, 5, 7, 4]) print(x) y = sum(x) print(y) print(x%2==0) print(sum(x%2==0)) [ 1 -1 2 5 7 4] 18 [False False True False False True] 2 Set3: import numpy as np x = np.array([1, -1, 2, 5, 7, 4,6]) print(x) y = sum(x) print(y) print(x%2==0) print(sum(x%2==0)) output: [ 1 -1 2 5 7 4 6] 24 [False False True False False True True] 3 Set4: import numpy as np x = np.array([1, -1, 2, 5, 7, 4,6,11]) print(x) y = sum(x) print(y) print(x%2==0) print(sum(x%2==0)) Output: [ 1 -1 2 5 7 4 6 11] 35 [False False True False False True True False] 3 Its return count of even number. A: import numpy as np x = np.array([1, -1, 2, 5, 7]) # step 1: create an intermediate array which contains the modulo 2 of each element (if the element is even it will be True, otherwise False) y = x % 2 == 0 # [False, False, True, False, False] # step 2: sum the intermediate array up. In this case the False values count as 0 and the True values as 1. There is one True value so the sum is 1 z = sum(y) # 1 A: x % 2 == 0 will change your array to [False, False, True, False, False] Because every element will be converted to a boolean, which represents, if the number is even or odd Then the sum gets evaluated, where False = 0 and True = 1 0 + 0 + 1 + 0 + 0 = 1 A: For your purposes, here's an explanation. For Stack Overflow's purposes, I'm recommending to close this question as it's more coding help than a novel coding question. The operations in this expresssion are as follows: # operation 1 intermediate_result_1 = x%2 # operation 2 intermediate_result_2 = (intermediate_result_1 == 0) # operation 3 sum(intermediate_result_2) Operation 1: the modulo operator essentially returns the remainder when the first term is divided by the second term. Most basic mathematical operations (e.g. +,-,*,/,%,==,!=, etc) are implemented element-wise in numpy, which means that the operation is performed independently on each element in the array. Thus, the output from operation 1: intermediate_result_1 = np.Array([1,1,0,1,1]) Operation 2: same for the equality operator ==. Each element of the array is compared to the right-hand value, and the resulting array has True (or 1) where the equality expression holds, and False (or 0) otherwise. intermediate_result_2 = np.Array([0,0,1,0,0]) Operation 3: Lastly, the default sum() operator for a numpy array sums all values in the array. Note that numpy provides its own sum function which allows for summing along individual dimensions. Quite evidently the sum of this array's elements is 1. A: numpy makes it easy for you to operate on the array object as many answers already suggest that x%2==0 returns [False, False, True, False, False] but if you are still confused then try to understand it like this lets make a function which checks if a value is even or not. 
def is_even(ele): return ele%2==0 then we use the map function map() function returns a map object(which is an iterator) of the results after applying the given function to each item of a given iterable (list, tuple etc.) NOTE: copied from GeeksforGeeks then we take a simple list and map it with this function like so: l=[1, -1, 2, 5, 7] # this is not a np array print(map(is_even, l)) # this prints [False, False, True, False, False] print(sum(map(is_even, l))) # this prints 1
What does sum(x%2==0) mean?? (python)
import numpy as np x = np.array([1, -1, 2, 5, 7]) print(sum(x%2==0)) This is the code, and I can't understand what ' sum(x%2==0) ' means. Does it mean to sum the even numbers? I'm studying for a school test and my professor said the output of the above code is 1, but I still can't understand what ' sum(x%2==0) ' means.
[ "Set1:\nimport numpy as np\nx = np.array([1, -1, 2, 5, 7,4])\nprint(x)\ny = sum(x)\nprint(y)\nprint(x%2==0)\nprint(sum(x%2==0))\n\nOutput:\n[ 1 -1 2 5 7]\n14\n[False False True False False]\n1\n\nSet2:\nimport numpy as np\nx = np.array([1, -1, 2, 5, 7, 4])\nprint(x)\ny = sum(x)\nprint(y)\nprint(x%2==0)\nprint(sum(x%2==0))\n\n[ 1 -1 2 5 7 4]\n18\n[False False True False False True]\n2\n\nSet3:\nimport numpy as np\nx = np.array([1, -1, 2, 5, 7, 4,6])\nprint(x)\ny = sum(x)\nprint(y)\nprint(x%2==0)\nprint(sum(x%2==0))\n\noutput:\n[ 1 -1 2 5 7 4 6]\n24\n[False False True False False True True]\n3\n\nSet4:\nimport numpy as np\nx = np.array([1, -1, 2, 5, 7, 4,6,11])\nprint(x)\ny = sum(x)\nprint(y)\nprint(x%2==0)\nprint(sum(x%2==0))\n\nOutput:\n[ 1 -1 2 5 7 4 6 11]\n35\n[False False True False False True True False]\n3\n\nIts return count of even number.\n", "import numpy as np\n\nx = np.array([1, -1, 2, 5, 7])\n\n# step 1: create an intermediate array which contains the modulo 2 of each element (if the element is even it will be True, otherwise False)\ny = x % 2 == 0 # [False, False, True, False, False]\n\n# step 2: sum the intermediate array up. In this case the False values count as 0 and the True values as 1. There is one True value so the sum is 1\nz = sum(y) # 1\n\n", "x % 2 == 0 will change your array to [False, False, True, False, False]\nBecause every element will be converted to a boolean, which represents, if the number is even or odd\nThen the sum gets evaluated, where False = 0 and True = 1\n0 + 0 + 1 + 0 + 0 = 1\n", "For your purposes, here's an explanation. For Stack Overflow's purposes, I'm recommending to close this question as it's more coding help than a novel coding question.\nThe operations in this expresssion are as follows:\n# operation 1\nintermediate_result_1 = x%2\n\n# operation 2\nintermediate_result_2 = (intermediate_result_1 == 0)\n\n# operation 3\nsum(intermediate_result_2)\n\nOperation 1: the modulo operator essentially returns the remainder when the first term is divided by the second term. Most basic mathematical operations (e.g. +,-,*,/,%,==,!=, etc) are implemented element-wise in numpy, which means that the operation is performed independently on each element in the array. Thus, the output from operation 1:\nintermediate_result_1 = np.Array([1,1,0,1,1])\n\nOperation 2: same for the equality operator ==. Each element of the array is compared to the right-hand value, and the resulting array has True (or 1) where the equality expression holds, and False (or 0) otherwise.\nintermediate_result_2 = np.Array([0,0,1,0,0])\n\nOperation 3: Lastly, the default sum() operator for a numpy array sums all values in the array. Note that numpy provides its own sum function which allows for summing along individual dimensions. 
Quite evidently the sum of this array's elements is 1.\n", "numpy makes it easy for you to operate on the array object\nas many answers already suggest that\nx%2==0 returns [False, False, True, False, False]\nbut if you are still confused then try to understand it like this\nlets make a function which checks if a value is even or not.\ndef is_even(ele):\n return ele%2==0\n\nthen we use the map function\n\nmap() function returns a map object(which is an iterator) of the\nresults after applying the given function to each item of a given\niterable (list, tuple etc.)\nNOTE: copied from GeeksforGeeks\n\nthen we take a simple list and map it with this function like so:\nl=[1, -1, 2, 5, 7] # this is not a np array\nprint(map(is_even, l)) # this prints [False, False, True, False, False]\nprint(sum(map(is_even, l))) # this prints 1\n\n" ]
[ 0, 0, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074631450_python.txt
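A compact restatement of what the answers walk through, plus the count_nonzero spelling that often reads more clearly than summing booleans.

import numpy as np

x = np.array([1, -1, 2, 5, 7])
mask = x % 2 == 0                # array([False, False,  True, False, False])
print(mask.sum())                # 1, because each True counts as 1 and each False as 0
print(np.count_nonzero(mask))    # same count, arguably clearer intent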
Q: How to convert a Scikit-learn dataset to a Pandas dataset How do I convert data from a Scikit-learn Bunch object to a Pandas DataFrame? from sklearn.datasets import load_iris import pandas as pd data = load_iris() print(type(data)) data1 = pd. # Is there a Pandas method to accomplish this? A: Manually, you can use pd.DataFrame constructor, giving a numpy array (data) and a list of the names of the columns (columns). To have everything in one DataFrame, you can concatenate the features and the target into one numpy array with np.c_[...] (note the []): import numpy as np import pandas as pd from sklearn.datasets import load_iris # save load_iris() sklearn dataset to iris # if you'd like to check dataset type use: type(load_iris()) # if you'd like to view list of attributes use: dir(load_iris()) iris = load_iris() # np.c_ is the numpy concatenate function # which is used to concat iris['data'] and iris['target'] arrays # for pandas column argument: concat iris['feature_names'] list # and string list (in this case one string); you can make this anything you'd like.. # the original dataset would probably call this ['Species'] data1 = pd.DataFrame(data= np.c_[iris['data'], iris['target']], columns= iris['feature_names'] + ['target']) A: from sklearn.datasets import load_iris import pandas as pd data = load_iris() df = pd.DataFrame(data=data.data, columns=data.feature_names) df.head() This tutorial maybe of interest: http://www.neural.cz/dataset-exploration-boston-house-pricing.html A: TOMDLt's solution is not generic enough for all the datasets in scikit-learn. For example it does not work for the boston housing dataset. I propose a different solution which is more universal. No need to use numpy as well. from sklearn import datasets import pandas as pd boston_data = datasets.load_boston() df_boston = pd.DataFrame(boston_data.data,columns=boston_data.feature_names) df_boston['target'] = pd.Series(boston_data.target) df_boston.head() As a general function: def sklearn_to_df(sklearn_dataset): df = pd.DataFrame(sklearn_dataset.data, columns=sklearn_dataset.feature_names) df['target'] = pd.Series(sklearn_dataset.target) return df df_boston = sklearn_to_df(datasets.load_boston()) A: Took me 2 hours to figure this out import numpy as np import pandas as pd from sklearn.datasets import load_iris iris = load_iris() ##iris.keys() df= pd.DataFrame(data= np.c_[iris['data'], iris['target']], columns= iris['feature_names'] + ['target']) df['species'] = pd.Categorical.from_codes(iris.target, iris.target_names) Get back the species for my pandas A: Just as an alternative that I could wrap my head around much easier: data = load_iris() df = pd.DataFrame(data['data'], columns=data['feature_names']) df['target'] = data['target'] df.head() Basically instead of concatenating from the get go, just make a data frame with the matrix of features and then just add the target column with data['whatvername'] and grab the target values from the dataset A: New Update You can use the parameter as_frame=True to get pandas dataframes. If as_frame parameter available (eg. load_iris) from sklearn import datasets X,y = datasets.load_iris(return_X_y=True) # numpy arrays dic_data = datasets.load_iris(as_frame=True) print(dic_data.keys()) df = dic_data['frame'] # pandas dataframe data + target df_X = dic_data['data'] # pandas dataframe data only ser_y = dic_data['target'] # pandas series target only dic_data['target_names'] # numpy array If as_frame parameter NOT available (eg. 
load_boston) from sklearn import datasets fnames = [ i for i in dir(datasets) if 'load_' in i] print(fnames) fname = 'load_boston' loader = getattr(datasets,fname)() df = pd.DataFrame(loader['data'],columns= loader['feature_names']) df['target'] = loader['target'] df.head(2) A: Otherwise use seaborn data sets which are actual pandas data frames: import seaborn iris = seaborn.load_dataset("iris") type(iris) # <class 'pandas.core.frame.DataFrame'> Compare with scikit learn data sets: from sklearn import datasets iris = datasets.load_iris() type(iris) # <class 'sklearn.utils.Bunch'> dir(iris) # ['DESCR', 'data', 'feature_names', 'filename', 'target', 'target_names'] A: This is easy method worked for me. boston = load_boston() boston_frame = pd.DataFrame(data=boston.data, columns=boston.feature_names) boston_frame["target"] = boston.target But this can applied to load_iris as well. A: Many of the solutions are either missing column names or the species target names. This solution provides target_name labels. @Ankit-mathanker's solution works, however it iterates the Dataframe Series 'target_names' to substitute the iris species for integer identifiers. Based on the adage 'Don't iterate a Dataframe if you don't have to,' the following solution utilizes pd.replace() to more concisely accomplish the replacement. import pandas as pd from sklearn.datasets import load_iris iris = load_iris() df = pd.DataFrame(iris['data'], columns = iris['feature_names']) df['target'] = pd.Series(iris['target'], name = 'target_values') df['target_name'] = df['target'].replace([0,1,2], ['iris-' + species for species in iris['target_names'].tolist()]) df.head(3) sepal length (cm) sepal width (cm) petal length (cm) petal width (cm) target target_name 0 5.1 3.5 1.4 0.2 0 iris-setosa 1 4.9 3.0 1.4 0.2 0 iris-setosa 2 4.7 3.2 1.3 0.2 0 iris-setosa A: This works for me. dataFrame = pd.dataFrame(data = np.c_[ [iris['data'],iris['target'] ], columns=iris['feature_names'].tolist() + ['target']) A: Other way to combine features and target variables can be using np.column_stack (details) import numpy as np import pandas as pd from sklearn.datasets import load_iris data = load_iris() df = pd.DataFrame(np.column_stack((data.data, data.target)), columns = data.feature_names+['target']) print(df.head()) Result: sepal length (cm) sepal width (cm) petal length (cm) petal width (cm) target 0 5.1 3.5 1.4 0.2 0.0 1 4.9 3.0 1.4 0.2 0.0 2 4.7 3.2 1.3 0.2 0.0 3 4.6 3.1 1.5 0.2 0.0 4 5.0 3.6 1.4 0.2 0.0 If you need the string label for the target, then you can use replace by convertingtarget_names to dictionary and add a new column: df['label'] = df.target.replace(dict(enumerate(data.target_names))) print(df.head()) Result: sepal length (cm) sepal width (cm) petal length (cm) petal width (cm) target label 0 5.1 3.5 1.4 0.2 0.0 setosa 1 4.9 3.0 1.4 0.2 0.0 setosa 2 4.7 3.2 1.3 0.2 0.0 setosa 3 4.6 3.1 1.5 0.2 0.0 setosa 4 5.0 3.6 1.4 0.2 0.0 setosa A: As of version 0.23, you can directly return a DataFrame using the as_frame argument. For example, loading the iris data set: from sklearn.datasets import load_iris iris = load_iris(as_frame=True) df = iris.data In my understanding using the provisionally release notes, this works for the breast_cancer, diabetes, digits, iris, linnerud, wine and california_houses data sets. A: Here's another integrated method example maybe helpful. 
from sklearn.datasets import load_iris iris_X, iris_y = load_iris(return_X_y=True, as_frame=True) type(iris_X), type(iris_y) The data iris_X are imported as pandas DataFrame and the target iris_y are imported as pandas Series. A: Basically what you need is the "data", and you have it in the scikit bunch, now you need just the "target" (prediction) which is also in the bunch. So just need to concat these two to make the data complete data_df = pd.DataFrame(cancer.data,columns=cancer.feature_names) target_df = pd.DataFrame(cancer.target,columns=['target']) final_df = data_df.join(target_df) A: The API is a little cleaner than the responses suggested. Here, using as_frame and being sure to include a response column as well. import pandas as pd from sklearn.datasets import load_wine features, target = load_wine(as_frame=True).data, load_wine(as_frame=True).target df = features df['target'] = target df.head(2) A: Working off the best answer and addressing my comment, here is a function for the conversion def bunch_to_dataframe(bunch): fnames = bunch.feature_names features = fnames.tolist() if isinstance(fnames, np.ndarray) else fnames features += ['target'] return pd.DataFrame(data= np.c_[bunch['data'], bunch['target']], columns=features) A: This snippet is only syntactic sugar built upon what TomDLT and rolyat have already contributed and explained. The only differences would be that load_iris will return a tuple instead of a dictionary and the columns names are enumerated. df = pd.DataFrame(np.c_[load_iris(return_X_y=True)]) A: I took couple of ideas from your answers and I don't know how to make it shorter :) import pandas as pd from sklearn.datasets import load_iris iris = load_iris() df = pd.DataFrame(iris.data, columns=iris['feature_names']) df['target'] = iris['target'] This gives a Pandas DataFrame with feature_names plus target as columns and RangeIndex(start=0, stop=len(df), step=1). I would like to have a shorter code where I can have 'target' added directly. A: You can use pd.DataFrame constructor, giving a numpy array (data) and a list of the names of the columns (columns). To have everything in one DataFrame, you can concatenate the features and the target into one numpy array with np.c_[...] (note the square brackets and not parenthesis). Also, you can have some trouble if you don't convert the feature names (iris['feature_names']) to a list before concatenation: import numpy as np import pandas as pd from sklearn.datasets import load_iris iris = load_iris() df = pd.DataFrame(data= np.c_[iris['data'], iris['target']], columns= list(iris['feature_names']) + ['target']) A: Plenty of good responses to this question; I've added my own below. import pandas as pd from sklearn.datasets import load_iris df = pd.DataFrame( # load all 4 dimensions of the dataframe EXCLUDING species data load_iris()['data'], # set the column names for the 4 dimensions of data columns=load_iris()['feature_names'] ) # we create a new column called 'species' with 150 rows of numerical data 0-2 signifying a species type. # Our column `species` should have data such `[0, 0, 1, 2, 1, 0]` etc. df['species'] = load_iris()['target'] # we map the numerical data to string data for species type df['species'] = df['species'].map({ 0 : 'setosa', 1 : 'versicolor', 2 : 'virginica' }) df.head() Breakdown For some reason the load_iris['feature_names] has only 4 columns (sepal length, sepal width, petal length, petal width); moreover, the load_iris['data'] only contains data for those feature_names mentioned above. 
Instead, the species column names are stored in load_iris()['target_names'] == array(['setosa', 'versicolor', 'virginica']. On top of this, the species row data is stored in load_iris()['target'].nunique() == 3 Our goal was simply to add a new column called species that used the map function to convert numerical data 0-2 into 3 types of string data signifying the iris species. A: This is an easy way and works with majority of datasets in sklearn import pandas as pd from sklearn import datasets # download iris data set iris = datasets.load_iris() # load feature columns to DataFrame df = pd.DataFrame(data=iris.data, columns=iris.feature_names) # add a column to df called 'target_c' then asign the target data of iris data df['target_c'] = iris.target # view final DataFrame df.head() A: There might be a better way but here is what I have done in the past and it works quite well: items = data.items() #Gets all the data from this Bunch - a huge list mydata = pd.DataFrame(items[1][1]) #Gets the Attributes mydata[len(mydata.columns)] = items[2][1] #Adds a column for the Target Variable mydata.columns = items[-1][1] + [items[2][0]] #Gets the column names and updates the dataframe Now mydata will have everything you need - attributes, target variable and columnnames A: Whatever TomDLT answered it may not work for some of you because data1 = pd.DataFrame(data= np.c_[iris['data'], iris['target']], columns= iris['feature_names'] + ['target']) because iris['feature_names'] returns you a numpy array. In numpy array you can't add an array and a list ['target'] by just + operator. Hence you need to convert it into a list first and then add. You can do data1 = pd.DataFrame(data= np.c_[iris['data'], iris['target']], columns= list(iris['feature_names']) + ['target']) This will work fine tho.. A: import pandas as pd from sklearn.datasets import load_iris iris = load_iris() X = iris['data'] y = iris['target'] iris_df = pd.DataFrame(X, columns = iris['feature_names']) iris_df.head() A: One of the best ways: data = pd.DataFrame(digits.data) Digits is the sklearn dataframe and I converted it to a pandas DataFrame A: from sklearn.datasets import load_iris import pandas as pd iris_dataset = load_iris() datasets = pd.DataFrame(iris_dataset['data'], columns = iris_dataset['feature_names']) target_val = pd.Series(iris_dataset['target'], name = 'target_values') species = [] for val in target_val: if val == 0: species.append('iris-setosa') if val == 1: species.append('iris-versicolor') if val == 2: species.append('iris-virginica') species = pd.Series(species) datasets['target'] = target_val datasets['target_name'] = species datasets.head() A: A more simpler and approachable manner I tried import pandas as pd from sklearn import datasets iris = load_iris() X= pd.DataFrame(iris['data'], columns= iris['feature_names']) y = pd.DataFrame(iris['target'],columns=['target']) df = X.join(y)
How to convert a Scikit-learn dataset to a Pandas dataset
How do I convert data from a Scikit-learn Bunch object to a Pandas DataFrame? from sklearn.datasets import load_iris import pandas as pd data = load_iris() print(type(data)) data1 = pd. # Is there a Pandas method to accomplish this?
[ "Manually, you can use pd.DataFrame constructor, giving a numpy array (data) and a list of the names of the columns (columns).\nTo have everything in one DataFrame, you can concatenate the features and the target into one numpy array with np.c_[...] (note the []):\nimport numpy as np\nimport pandas as pd\nfrom sklearn.datasets import load_iris\n\n# save load_iris() sklearn dataset to iris\n# if you'd like to check dataset type use: type(load_iris())\n# if you'd like to view list of attributes use: dir(load_iris())\niris = load_iris()\n\n# np.c_ is the numpy concatenate function\n# which is used to concat iris['data'] and iris['target'] arrays \n# for pandas column argument: concat iris['feature_names'] list\n# and string list (in this case one string); you can make this anything you'd like.. \n# the original dataset would probably call this ['Species']\ndata1 = pd.DataFrame(data= np.c_[iris['data'], iris['target']],\n columns= iris['feature_names'] + ['target'])\n\n", "from sklearn.datasets import load_iris\nimport pandas as pd\n\ndata = load_iris()\ndf = pd.DataFrame(data=data.data, columns=data.feature_names)\ndf.head()\n\nThis tutorial maybe of interest: http://www.neural.cz/dataset-exploration-boston-house-pricing.html\n", "TOMDLt's solution is not generic enough for all the datasets in scikit-learn. For example it does not work for the boston housing dataset. I propose a different solution which is more universal. No need to use numpy as well.\nfrom sklearn import datasets\nimport pandas as pd\n\nboston_data = datasets.load_boston()\ndf_boston = pd.DataFrame(boston_data.data,columns=boston_data.feature_names)\ndf_boston['target'] = pd.Series(boston_data.target)\ndf_boston.head()\n\nAs a general function:\ndef sklearn_to_df(sklearn_dataset):\n df = pd.DataFrame(sklearn_dataset.data, columns=sklearn_dataset.feature_names)\n df['target'] = pd.Series(sklearn_dataset.target)\n return df\n\ndf_boston = sklearn_to_df(datasets.load_boston())\n\n", "Took me 2 hours to figure this out\nimport numpy as np\nimport pandas as pd\nfrom sklearn.datasets import load_iris\n\niris = load_iris()\n##iris.keys()\n\n\ndf= pd.DataFrame(data= np.c_[iris['data'], iris['target']],\n columns= iris['feature_names'] + ['target'])\n\ndf['species'] = pd.Categorical.from_codes(iris.target, iris.target_names)\n\nGet back the species for my pandas\n", "Just as an alternative that I could wrap my head around much easier:\ndata = load_iris()\ndf = pd.DataFrame(data['data'], columns=data['feature_names'])\ndf['target'] = data['target']\ndf.head()\n\nBasically instead of concatenating from the get go, just make a data frame with the matrix of features and then just add the target column with data['whatvername'] and grab the target values from the dataset\n", "New Update\nYou can use the parameter as_frame=True to get pandas dataframes.\nIf as_frame parameter available (eg. load_iris)\nfrom sklearn import datasets\nX,y = datasets.load_iris(return_X_y=True) # numpy arrays\n\ndic_data = datasets.load_iris(as_frame=True)\nprint(dic_data.keys())\n\ndf = dic_data['frame'] # pandas dataframe data + target\ndf_X = dic_data['data'] # pandas dataframe data only\nser_y = dic_data['target'] # pandas series target only\ndic_data['target_names'] # numpy array\n\n\nIf as_frame parameter NOT available (eg. 
load_boston)\nfrom sklearn import datasets\n\nfnames = [ i for i in dir(datasets) if 'load_' in i]\nprint(fnames)\n\nfname = 'load_boston'\nloader = getattr(datasets,fname)()\ndf = pd.DataFrame(loader['data'],columns= loader['feature_names'])\ndf['target'] = loader['target']\ndf.head(2)\n\n", "Otherwise use seaborn data sets which are actual pandas data frames:\nimport seaborn\niris = seaborn.load_dataset(\"iris\")\ntype(iris)\n# <class 'pandas.core.frame.DataFrame'>\n\nCompare with scikit learn data sets:\nfrom sklearn import datasets\niris = datasets.load_iris()\ntype(iris)\n# <class 'sklearn.utils.Bunch'>\ndir(iris)\n# ['DESCR', 'data', 'feature_names', 'filename', 'target', 'target_names']\n\n", "This is easy method worked for me.\nboston = load_boston()\nboston_frame = pd.DataFrame(data=boston.data, columns=boston.feature_names)\nboston_frame[\"target\"] = boston.target\n\nBut this can applied to load_iris as well.\n", "Many of the solutions are either missing column names or the species target names. This solution provides target_name labels.\n@Ankit-mathanker's solution works, however it iterates the Dataframe Series 'target_names' to substitute the iris species for integer identifiers.\nBased on the adage 'Don't iterate a Dataframe if you don't have to,' the following solution utilizes pd.replace() to more concisely accomplish the replacement.\nimport pandas as pd\nfrom sklearn.datasets import load_iris\n\niris = load_iris()\ndf = pd.DataFrame(iris['data'], columns = iris['feature_names'])\ndf['target'] = pd.Series(iris['target'], name = 'target_values')\ndf['target_name'] = df['target'].replace([0,1,2],\n['iris-' + species for species in iris['target_names'].tolist()])\n\ndf.head(3)\n\n\n\n\n\n\nsepal length (cm)\nsepal width (cm)\npetal length (cm)\npetal width (cm)\ntarget\ntarget_name\n\n\n\n\n0\n5.1\n3.5\n1.4\n0.2\n0\niris-setosa\n\n\n1\n4.9\n3.0\n1.4\n0.2\n0\niris-setosa\n\n\n2\n4.7\n3.2\n1.3\n0.2\n0\niris-setosa\n\n\n\n", "This works for me.\ndataFrame = pd.dataFrame(data = np.c_[ [iris['data'],iris['target'] ],\ncolumns=iris['feature_names'].tolist() + ['target'])\n\n", "Other way to combine features and target variables can be using np.column_stack (details)\nimport numpy as np\nimport pandas as pd\nfrom sklearn.datasets import load_iris\n\ndata = load_iris()\ndf = pd.DataFrame(np.column_stack((data.data, data.target)), columns = data.feature_names+['target'])\nprint(df.head())\n\nResult:\n sepal length (cm) sepal width (cm) petal length (cm) petal width (cm) target\n0 5.1 3.5 1.4 0.2 0.0\n1 4.9 3.0 1.4 0.2 0.0 \n2 4.7 3.2 1.3 0.2 0.0 \n3 4.6 3.1 1.5 0.2 0.0\n4 5.0 3.6 1.4 0.2 0.0\n\nIf you need the string label for the target, then you can use replace by convertingtarget_names to dictionary and add a new column:\ndf['label'] = df.target.replace(dict(enumerate(data.target_names)))\nprint(df.head())\n\nResult:\n sepal length (cm) sepal width (cm) petal length (cm) petal width (cm) target label \n0 5.1 3.5 1.4 0.2 0.0 setosa\n1 4.9 3.0 1.4 0.2 0.0 setosa\n2 4.7 3.2 1.3 0.2 0.0 setosa\n3 4.6 3.1 1.5 0.2 0.0 setosa\n4 5.0 3.6 1.4 0.2 0.0 setosa\n\n", "As of version 0.23, you can directly return a DataFrame using the as_frame argument. 
\nFor example, loading the iris data set:\nfrom sklearn.datasets import load_iris\niris = load_iris(as_frame=True)\ndf = iris.data\n\nIn my understanding using the provisionally release notes, this works for the breast_cancer, diabetes, digits, iris, linnerud, wine and california_houses data sets.\n", "Here's another integrated method example maybe helpful.\nfrom sklearn.datasets import load_iris\niris_X, iris_y = load_iris(return_X_y=True, as_frame=True)\ntype(iris_X), type(iris_y)\n\nThe data iris_X are imported as pandas DataFrame and\nthe target iris_y are imported as pandas Series.\n", "Basically what you need is the \"data\", and you have it in the scikit bunch, now you need just the \"target\" (prediction) which is also in the bunch.\nSo just need to concat these two to make the data complete \n data_df = pd.DataFrame(cancer.data,columns=cancer.feature_names)\n target_df = pd.DataFrame(cancer.target,columns=['target'])\n\n final_df = data_df.join(target_df)\n\n", "The API is a little cleaner than the responses suggested. Here, using as_frame and being sure to include a response column as well.\nimport pandas as pd\nfrom sklearn.datasets import load_wine\n\nfeatures, target = load_wine(as_frame=True).data, load_wine(as_frame=True).target\ndf = features\ndf['target'] = target\n\ndf.head(2)\n\n", "Working off the best answer and addressing my comment, here is a function for the conversion\ndef bunch_to_dataframe(bunch):\n fnames = bunch.feature_names\n features = fnames.tolist() if isinstance(fnames, np.ndarray) else fnames\n features += ['target']\n return pd.DataFrame(data= np.c_[bunch['data'], bunch['target']],\n columns=features)\n\n", "This snippet is only syntactic sugar built upon what TomDLT and rolyat have already contributed and explained. The only differences would be that load_iris will return a tuple instead of a dictionary and the columns names are enumerated.\ndf = pd.DataFrame(np.c_[load_iris(return_X_y=True)])\n\n", "I took couple of ideas from your answers and I don't know how to make it shorter :)\nimport pandas as pd\nfrom sklearn.datasets import load_iris\niris = load_iris()\ndf = pd.DataFrame(iris.data, columns=iris['feature_names'])\ndf['target'] = iris['target']\n\nThis gives a Pandas DataFrame with feature_names plus target as columns and RangeIndex(start=0, stop=len(df), step=1).\nI would like to have a shorter code where I can have 'target' added directly. \n", "You can use pd.DataFrame constructor, giving a numpy array (data) and a list of the names of the columns (columns). To have everything in one DataFrame, you can concatenate the features and the target into one numpy array with np.c_[...] (note the square brackets and not parenthesis). Also, you can have some trouble if you don't convert the feature names (iris['feature_names']) to a list before concatenation:\nimport numpy as np\nimport pandas as pd\nfrom sklearn.datasets import load_iris\n\niris = load_iris()\n\ndf = pd.DataFrame(data= np.c_[iris['data'], iris['target']],\n columns= list(iris['feature_names']) + ['target'])\n\n", "Plenty of good responses to this question; I've added my own below.\nimport pandas as pd\nfrom sklearn.datasets import load_iris\n\ndf = pd.DataFrame(\n # load all 4 dimensions of the dataframe EXCLUDING species data\n load_iris()['data'],\n # set the column names for the 4 dimensions of data\n columns=load_iris()['feature_names']\n)\n\n# we create a new column called 'species' with 150 rows of numerical data 0-2 signifying a species type. 
\n# Our column `species` should have data such `[0, 0, 1, 2, 1, 0]` etc.\ndf['species'] = load_iris()['target']\n# we map the numerical data to string data for species type\ndf['species'] = df['species'].map({\n 0 : 'setosa',\n 1 : 'versicolor',\n 2 : 'virginica' \n})\n\ndf.head()\n\n\nBreakdown\n\nFor some reason the load_iris['feature_names] has only 4 columns (sepal length, sepal width, petal length, petal width); moreover, the load_iris['data'] only contains data for those feature_names mentioned above.\nInstead, the species column names are stored in load_iris()['target_names'] == array(['setosa', 'versicolor', 'virginica'].\nOn top of this, the species row data is stored in load_iris()['target'].nunique() == 3\nOur goal was simply to add a new column called species that used the map function to convert numerical data 0-2 into 3 types of string data signifying the iris species.\n\n", "This is an easy way and works with majority of datasets in sklearn\nimport pandas as pd\nfrom sklearn import datasets\n\n# download iris data set\niris = datasets.load_iris()\n\n# load feature columns to DataFrame\ndf = pd.DataFrame(data=iris.data, columns=iris.feature_names)\n\n# add a column to df called 'target_c' then asign the target data of iris data\ndf['target_c'] = iris.target\n\n# view final DataFrame\ndf.head()\n\n", "There might be a better way but here is what I have done in the past and it works quite well:\nitems = data.items() #Gets all the data from this Bunch - a huge list\nmydata = pd.DataFrame(items[1][1]) #Gets the Attributes\nmydata[len(mydata.columns)] = items[2][1] #Adds a column for the Target Variable\nmydata.columns = items[-1][1] + [items[2][0]] #Gets the column names and updates the dataframe\n\nNow mydata will have everything you need - attributes, target variable and columnnames\n", "Whatever TomDLT answered it may not work for some of you because \ndata1 = pd.DataFrame(data= np.c_[iris['data'], iris['target']],\n columns= iris['feature_names'] + ['target'])\n\nbecause iris['feature_names'] returns you a numpy array. In numpy array you can't add an array and a list ['target'] by just + operator. Hence you need to convert it into a list first and then add.\nYou can do \ndata1 = pd.DataFrame(data= np.c_[iris['data'], iris['target']],\n columns= list(iris['feature_names']) + ['target'])\n\nThis will work fine tho..\n", "import pandas as pd\nfrom sklearn.datasets import load_iris\niris = load_iris()\nX = iris['data']\ny = iris['target']\niris_df = pd.DataFrame(X, columns = iris['feature_names'])\niris_df.head()\n\n", "One of the best ways:\ndata = pd.DataFrame(digits.data)\n\nDigits is the sklearn dataframe and I converted it to a pandas DataFrame\n", "from sklearn.datasets import load_iris\nimport pandas as pd\n\niris_dataset = load_iris()\n\ndatasets = pd.DataFrame(iris_dataset['data'], columns = \n iris_dataset['feature_names'])\ntarget_val = pd.Series(iris_dataset['target'], name = \n 'target_values')\n\nspecies = []\nfor val in target_val:\n if val == 0:\n species.append('iris-setosa')\n if val == 1:\n species.append('iris-versicolor')\n if val == 2:\n species.append('iris-virginica')\nspecies = pd.Series(species)\n\ndatasets['target'] = target_val\ndatasets['target_name'] = species\ndatasets.head()\n\n", "A more simpler and approachable manner I tried\nimport pandas as pd\nfrom sklearn import datasets\n\niris = load_iris()\n\nX= pd.DataFrame(iris['data'], columns= iris['feature_names'])\ny = pd.DataFrame(iris['target'],columns=['target'])\ndf = X.join(y)\n\n" ]
[ 197, 115, 85, 18, 15, 14, 9, 8, 8, 6, 6, 4, 3, 2, 2, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "dataset", "pandas", "python", "scikit_learn" ]
stackoverflow_0038105539_dataset_pandas_python_scikit_learn.txt
Q: Passing keyword arguments from URL to Flask api I stripped my snippets as far as I could. I want to use arbitrary number of passed keywords and values in my flask application from URL. For example: http://localhost:5000/duck?order=90 I would like to use order=90 as an item in a dictionary {"order" = 90} or setting its value to a variable. The app.py: from flask import Flask from flask_restful import Api from myresource import Quack def create_app(): app = Flask(__name__) register_resources(app) return app def register_resources(app): api = Api(app) api.add_resource(Quack, '/duck') if __name__ == '__main__': app = create_app() app.run(port=5000, debug=True) myresource.py: from flask_restful import Resource from http import HTTPStatus from webargs import fields from webargs.flaskparser import use_kwargs class Quack(Resource): @use_kwargs({"order" : fields.Int(missing=20)} def get(self, order): return order, HTTPStatus.OK surprisingly, if I test this app, it returns with 20 in any case, consequently, arguments are not passed. Where is my bug? or, does Anyone know a WORKING example? Thanks! I have tried this: https://flask-restful.readthedocs.io/en/latest/intermediate-usage.html?highlight=kwargs , but it simply does not work and exits at resource_class_kwargs. I also tried out some different approach from this community, but none of them worked as expected. The approach above came from here: https://github.com/TrainingByPackt/Python-API-Development-Fundamentals/blob/master/Lesson08/Exercise54/resources/recipe.py A: finally, after a day: it is working to some extent. myresource.py: from flask import request from flask_restful import Resource from http import HTTPStatus class Quack(Resource): def get(self): order = request.args.get("order") return order, HTTPStatus.OK
Passing keyword arguments from URL to Flask api
I stripped my snippets as far as I could. I want to use arbitrary number of passed keywords and values in my flask application from URL. For example: http://localhost:5000/duck?order=90 I would like to use order=90 as an item in a dictionary {"order" = 90} or setting its value to a variable. The app.py: from flask import Flask from flask_restful import Api from myresource import Quack def create_app(): app = Flask(__name__) register_resources(app) return app def register_resources(app): api = Api(app) api.add_resource(Quack, '/duck') if __name__ == '__main__': app = create_app() app.run(port=5000, debug=True) myresource.py: from flask_restful import Resource from http import HTTPStatus from webargs import fields from webargs.flaskparser import use_kwargs class Quack(Resource): @use_kwargs({"order" : fields.Int(missing=20)} def get(self, order): return order, HTTPStatus.OK surprisingly, if I test this app, it returns with 20 in any case, consequently, arguments are not passed. Where is my bug? or, does Anyone know a WORKING example? Thanks! I have tried this: https://flask-restful.readthedocs.io/en/latest/intermediate-usage.html?highlight=kwargs , but it simply does not work and exits at resource_class_kwargs. I also tried out some different approach from this community, but none of them worked as expected. The approach above came from here: https://github.com/TrainingByPackt/Python-API-Development-Fundamentals/blob/master/Lesson08/Exercise54/resources/recipe.py
[ "finally, after a day:\nit is working to some extent.\nmyresource.py:\n\nfrom flask import request\n\nfrom flask_restful import Resource from http import HTTPStatus\n\nclass Quack(Resource):\n\n def get(self): \n order = request.args.get(\"order\")\n return order, HTTPStatus.OK\n\n\n" ]
[ 0 ]
[]
[]
[ "flask_restful", "keyword_argument", "python", "url" ]
stackoverflow_0074632464_flask_restful_keyword_argument_python_url.txt
Q: How to make Faster API calls in Python I am trying to read Indian stock market data using API calls. For this example, I have used 10 stocks. My current program is: First I define the Function: def get_prices(stock): start_unix = 1669794745 end_unix = start_unix + 1800 interval = 1 url = 'https://priceapi.moneycontrol.com/techCharts/indianMarket/stock/history?symbol=' + str(stock) + "&resolution="+ str(interval) + "&from=" + str(start_unix) + "&to=" + str(end_unix) url_data = requests.get(url).json() print(url_data['c']) Next, I use multi-threading. I do not know much about the functioning of multithreading - I just used the code from a tutorial on the web. from threading import Thread stocks = ['ACC','ADANIENT','ADANIGREEN','ADANIPORTS','ADANITRANS','AMBUJACEM','ASIANPAINT','ATGL','BAJAJ-AUTO','BAJAJHLDNG'] threads = [] for i in stocks: threads.append(Thread(target=get_prices, args=(i,))) threads[-1].start() for thread in threads: thread.join() The time it takes is around 250 to 300 ms for the above program to run. In reality, I shall need to run the program for thousands of stocks. Is there any way to make it run faster. I am running the code in Jupyter Notebook on an apple M1 8 core chip. Any help will be greatly appreciated. Thank You! A: When scraping data from the web, most of the type is typically spent on waiting for server responses. In order to issue a large amount of queries and to get responses as fast as possible, issuing multiple queries in parallel is the right approach. To be as efficient as possible, you have to find the right balance between a large amount of parallel requests and being throttled (or blacklisted) by the remote service. In your code, you are creating as many threads as there are requests. In general, you would want to limit the number of threads and reuse the threads once they have performed a request in order to save resources. This is called a thread pool. Since you are using Python, another lighter alternative to multiple threads is to run parallel "I/O" tasks using asyncio in Python. Sample implementations of parallel requests using either a thread pool or asyncio are shown in this Stack Overflow answer. Edit: here is an adapted example from your code using asyncio: import asyncio from aiohttp import ClientSession stocks = ['ACC','ADANIENT','ADANIGREEN','ADANIPORTS','ADANITRANS','AMBUJACEM','ASIANPAINT','ATGL','BAJAJ-AUTO','BAJAJHLDNG'] async def fetch_price(session, stock, start_unix, end_unix, interval): url = f'https://priceapi.moneycontrol.com/techCharts/indianMarket/stock/history?symbol={stock}&resolution={interval}&from={start_unix}&to={end_unix}' async with session.get(url) as resp: data = await resp.json() return stock, data['c'] async def main(): start_unix = 1669794745 end_unix = start_unix + 1800 interval = 1 async with ClientSession() as session: tasks = [] for stock in stocks: tasks.append(loop.create_task( fetch_price(session, stock, start_unix, end_unix, interval) )) prices = await asyncio.gather(*tasks) print(prices) loop = asyncio.get_event_loop() loop.run_until_complete(main())
How to make Faster API calls in Python
I am trying to read Indian stock market data using API calls. For this example, I have used 10 stocks. My current program is: First I define the Function: def get_prices(stock): start_unix = 1669794745 end_unix = start_unix + 1800 interval = 1 url = 'https://priceapi.moneycontrol.com/techCharts/indianMarket/stock/history?symbol=' + str(stock) + "&resolution="+ str(interval) + "&from=" + str(start_unix) + "&to=" + str(end_unix) url_data = requests.get(url).json() print(url_data['c']) Next, I use multi-threading. I do not know much about the functioning of multithreading - I just used the code from a tutorial on the web. from threading import Thread stocks = ['ACC','ADANIENT','ADANIGREEN','ADANIPORTS','ADANITRANS','AMBUJACEM','ASIANPAINT','ATGL','BAJAJ-AUTO','BAJAJHLDNG'] threads = [] for i in stocks: threads.append(Thread(target=get_prices, args=(i,))) threads[-1].start() for thread in threads: thread.join() The time it takes is around 250 to 300 ms for the above program to run. In reality, I shall need to run the program for thousands of stocks. Is there any way to make it run faster. I am running the code in Jupyter Notebook on an apple M1 8 core chip. Any help will be greatly appreciated. Thank You!
[ "When scraping data from the web, most of the type is typically spent on waiting for server responses. In order to issue a large amount of queries and to get responses as fast as possible, issuing multiple queries in parallel is the right approach. To be as efficient as possible, you have to find the right balance between a large amount of parallel requests and being throttled (or blacklisted) by the remote service.\nIn your code, you are creating as many threads as there are requests. In general, you would want to limit the number of threads and reuse the threads once they have performed a request in order to save resources. This is called a thread pool.\nSince you are using Python, another lighter alternative to multiple threads is to run parallel \"I/O\" tasks using asyncio in Python. Sample implementations of parallel requests using either a thread pool or asyncio are shown in this Stack Overflow answer.\nEdit: here is an adapted example from your code using asyncio:\nimport asyncio\nfrom aiohttp import ClientSession\n\nstocks = ['ACC','ADANIENT','ADANIGREEN','ADANIPORTS','ADANITRANS','AMBUJACEM','ASIANPAINT','ATGL','BAJAJ-AUTO','BAJAJHLDNG']\n\nasync def fetch_price(session, stock, start_unix, end_unix, interval):\n url = f'https://priceapi.moneycontrol.com/techCharts/indianMarket/stock/history?symbol={stock}&resolution={interval}&from={start_unix}&to={end_unix}'\n async with session.get(url) as resp:\n data = await resp.json()\n return stock, data['c']\n\nasync def main():\n start_unix = 1669794745\n end_unix = start_unix + 1800\n interval = 1\n async with ClientSession() as session:\n tasks = []\n for stock in stocks:\n tasks.append(loop.create_task(\n fetch_price(session, stock, start_unix, end_unix, interval)\n ))\n prices = await asyncio.gather(*tasks)\n print(prices)\n\nloop = asyncio.get_event_loop()\nloop.run_until_complete(main())\n\n" ]
[ 2 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074632975_python_python_3.x.txt
Q: Subclassing module to deprecate a module level variable/constant? Assuming I have a module and I want to deprecate something in that module. That's very easy for functions, essentially this can be done using a decorator: import warnings def deprecated(func): def old(*args, **kwargs): warnings.warn("That has been deprecated, use the new features!", DeprecationWarning) return func(*args, **kwargs) return old @deprecated def func(): return 10 func() DeprecationWarning: That has been deprecated, use the new features! 10 However if I want to deprecate a constant that's not trivial, there is no way to apply a decorator to a variable. I was playing around and it seems possible to subclass a module and use __getattribute__ to issue the warning: I use NumPy here just to illustrate the principle: import numpy as np class MyMod(type(np)): # I could also subclass "types.ModuleType" instead ... def __getattribute__(self, name): if name in {'float', 'int', 'bool', 'complex'}: warnings.warn("that's deprecated!", DeprecationWarning) return object.__getattribute__(self, name) np.__class__ = MyMod np.float DeprecationWarning: that's deprecated! float However that seems to be impossible to do from within the package (at least on the top-level) because I can't access the own module. I would have to create another package that monkey-patches the primary package. Is there a better way to "deprecate" accessing variables from a package than to subclass the "module" class and/or to use a metapackage that monkey-patches the top-level module of the other package? A: PEP 562 has been accepted and will be added in Python 3.7 (not released at the time of writing) and that will allow (or at least greatly simplify) deprecating module level constants. It works by adding a __getattr__ function in the module. For example in this case: import builtins import warnings def __getattr__(name): if name == 'float': warnings.warn("That has been deprecated, use the new features!", DeprecationWarning) return builtins.float raise AttributeError(f"module {__name__} has no attribute {name}") This is basically the example in the PEP slightly adapted for the question. A: a compy-paste example MSeiferts answer is great. The link given (https://peps.python.org/pep-0562/) by him/her explains nicely, how to implement a deprecation mechanism for module constant. It also explains the missing return value in case of no deprecation (TLDR: getattr is only called if no attribute is found) Here some example implementation: (to keep thinks simple, I decided to go with print statements - sorry) main.py if __name__ == "__main__": from lib import MY_CONST # deprecated print(MY_CONST) from lib import MY_CONST_VALID print(MY_CONST_VALID) lib.py deprecated_names = ["MY_CONST"] def __getattr__(name): # https://peps.python.org/pep-0562/ if name in deprecated_names: print(f"importing {name} from {__name__} is deprecated") return globals()[f"_DEPRECATED_{name}"] raise AttributeError(f"module {__name__} has no attribute {name}") _DEPRECATED_MY_CONST = "dep" MY_CONST_VALID = "valid"
Subclassing module to deprecate a module level variable/constant?
Assuming I have a module and I want to deprecate something in that module. That's very easy for functions, essentially this can be done using a decorator: import warnings def deprecated(func): def old(*args, **kwargs): warnings.warn("That has been deprecated, use the new features!", DeprecationWarning) return func(*args, **kwargs) return old @deprecated def func(): return 10 func() DeprecationWarning: That has been deprecated, use the new features! 10 However if I want to deprecate a constant that's not trivial, there is no way to apply a decorator to a variable. I was playing around and it seems possible to subclass a module and use __getattribute__ to issue the warning: I use NumPy here just to illustrate the principle: import numpy as np class MyMod(type(np)): # I could also subclass "types.ModuleType" instead ... def __getattribute__(self, name): if name in {'float', 'int', 'bool', 'complex'}: warnings.warn("that's deprecated!", DeprecationWarning) return object.__getattribute__(self, name) np.__class__ = MyMod np.float DeprecationWarning: that's deprecated! float However that seems to be impossible to do from within the package (at least on the top-level) because I can't access the own module. I would have to create another package that monkey-patches the primary package. Is there a better way to "deprecate" accessing variables from a package than to subclass the "module" class and/or to use a metapackage that monkey-patches the top-level module of the other package?
[ "PEP 562 has been accepted and will be added in Python 3.7 (not released at the time of writing) and that will allow (or at least greatly simplify) deprecating module level constants.\nIt works by adding a __getattr__ function in the module. For example in this case:\nimport builtins\nimport warnings\n\ndef __getattr__(name):\n if name == 'float':\n warnings.warn(\"That has been deprecated, use the new features!\", DeprecationWarning)\n return builtins.float\n raise AttributeError(f\"module {__name__} has no attribute {name}\")\n\nThis is basically the example in the PEP slightly adapted for the question.\n", "a compy-paste example\nMSeiferts answer is great. The link given (https://peps.python.org/pep-0562/) by him/her explains nicely, how to implement a deprecation mechanism for module constant. It also explains the missing return value in case of no deprecation (TLDR: getattr is only called if no attribute is found)\nHere some example implementation:\n(to keep thinks simple, I decided to go with print statements - sorry)\nmain.py\n\nif __name__ == \"__main__\":\n from lib import MY_CONST # deprecated\n print(MY_CONST)\n\n from lib import MY_CONST_VALID\n print(MY_CONST_VALID)\n\n\nlib.py\ndeprecated_names = [\"MY_CONST\"]\n\ndef __getattr__(name):\n # https://peps.python.org/pep-0562/\n if name in deprecated_names:\n print(f\"importing {name} from {__name__} is deprecated\")\n return globals()[f\"_DEPRECATED_{name}\"]\n raise AttributeError(f\"module {__name__} has no attribute {name}\")\n\n_DEPRECATED_MY_CONST = \"dep\"\nMY_CONST_VALID = \"valid\"\n\n\n" ]
[ 3, 0 ]
[]
[]
[ "module", "python" ]
stackoverflow_0045744919_module_python.txt
Q: How to access a dataframe column with no name Here is the dataframe I am trying to access its columns (team and player) PSxG GA league season game team player ITA-Serie A 2223 2022-08-14 Fiorentina-Cremonese Cremonese Ionuț Radu 2.5 3 3ed8bdff Fiorentina Pierluigi Gollini 1.2 2 3ed8bdff Here is the output of the columns function: Index(['PSxG', 'GA', ''], dtype='object') I also tried to use iloc in order to access the first row and get player and team PSxG 2.5 GA 3 3ed8bdff Name: (ITA-Serie A, 2223, 2022-08-14 Fiorentina-Cremonese, Cremonese, Ionuț Radu), dtype: object A: Instead of .iloc, you can use .iat. In the example you provided, the column number for team is 3 and for player is 4, so you can access the column elements like this as shown below. for example you have 10 rows in your dataFrame name of your dataFrame is data_table for i in range(10): print(data_table.iat[i,3]) # printing "team" column elements print(data_table.iat[i,4]) # printing "player" column elements You can take a look at the documentation for more information. https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iat.html
How to access a dataframe column with no name
Here is the dataframe I am trying to access its columns (team and player) PSxG GA league season game team player ITA-Serie A 2223 2022-08-14 Fiorentina-Cremonese Cremonese Ionuț Radu 2.5 3 3ed8bdff Fiorentina Pierluigi Gollini 1.2 2 3ed8bdff Here is the output of the columns function: Index(['PSxG', 'GA', ''], dtype='object') I also tried to use iloc in order to access the first row and get player and team PSxG 2.5 GA 3 3ed8bdff Name: (ITA-Serie A, 2223, 2022-08-14 Fiorentina-Cremonese, Cremonese, Ionuț Radu), dtype: object
[ "Instead of .iloc, you can use .iat.\nIn the example you provided, the column number for team is 3 and for player is 4, so you can access the column elements like this as shown below.\nfor example\n\nyou have 10 rows in your dataFrame\nname of your dataFrame is data_table\n\n\nfor i in range(10):\n print(data_table.iat[i,3]) # printing \"team\" column elements\n print(data_table.iat[i,4]) # printing \"player\" column elements\n\n\nYou can take a look at the documentation for more information.\nhttps://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iat.html\n" ]
[ 0 ]
[]
[]
[ "data_preprocessing", "dataframe", "pandas", "python" ]
stackoverflow_0074633035_data_preprocessing_dataframe_pandas_python.txt
Q: How to correctly use iterrows in a DataFrame I want to find out the highest price of a specified house type, "mansion". instead of using df[df["h_type"] == "mansion"]["h_price"].max() , i want to try something new. I use iterrows() method, but it does not work out as expected. First, I defind a price function attempting to find out the highest price (this works) def price(): most=0 priceSr=df['h_price'].str.replace(',','').astype('float') if not df.empty: idMax = priceSr.idxmax() if not isnan(idMax): maxSr = df.loc[idMax] if most is None: most = maxSr.copy() else: if float(maxSr['h_price']) > float(most['h_price']): most = maxSr.copy() most = most.to_frame().transpose() print(most, '\n==========') Secondly, i narrow down to mansion under h_type (this work) mansion=df[df["h_type"].isin(["mansion"])] mansion Finally, i look up into "mansion" in second step, with price function. (it does not work, the result yields as if i have not look specifically into the second_step code mentioned above) for index, row in mansion.iterrows(): price() For another story, i try something new to replace the third step, it not yileds any results, instead gives an error message mansion.apply(price,axis=1) Error message 826 for i, v in enumerate(series_gen): 827 # ignore SettingWithCopy here in case the user mutates --> 828 results[i] = self.f(v) 829 if isinstance(results[i], ABCSeries): 830 # If we have a view on v, we need to make a copy because TypeError: price() takes 0 positional arguments but 1 was given Any advice concerning either codes will be appreciated. A: What you want is to groupby each h_type and then get the max h_price like below. df_grouped = df[['h_type','h_price'] ].groupby(['h_type']).max()
How to correctly use iterrows in a DataFrame
I want to find out the highest price of a specified house type, "mansion". instead of using df[df["h_type"] == "mansion"]["h_price"].max() , i want to try something new. I use iterrows() method, but it does not work out as expected. First, I defind a price function attempting to find out the highest price (this works) def price(): most=0 priceSr=df['h_price'].str.replace(',','').astype('float') if not df.empty: idMax = priceSr.idxmax() if not isnan(idMax): maxSr = df.loc[idMax] if most is None: most = maxSr.copy() else: if float(maxSr['h_price']) > float(most['h_price']): most = maxSr.copy() most = most.to_frame().transpose() print(most, '\n==========') Secondly, i narrow down to mansion under h_type (this work) mansion=df[df["h_type"].isin(["mansion"])] mansion Finally, i look up into "mansion" in second step, with price function. (it does not work, the result yields as if i have not look specifically into the second_step code mentioned above) for index, row in mansion.iterrows(): price() For another story, i try something new to replace the third step, it not yileds any results, instead gives an error message mansion.apply(price,axis=1) Error message 826 for i, v in enumerate(series_gen): 827 # ignore SettingWithCopy here in case the user mutates --> 828 results[i] = self.f(v) 829 if isinstance(results[i], ABCSeries): 830 # If we have a view on v, we need to make a copy because TypeError: price() takes 0 positional arguments but 1 was given Any advice concerning either codes will be appreciated.
[ "What you want is to groupby each h_type and then get the max h_price like below.\ndf_grouped = df[['h_type','h_price'] ].groupby(['h_type']).max()\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074632749_python.txt
Q: Set value as key and a list of values as value in Python I have a big dictionary (250k+ keys) like this: dict = { 0: [apple, green], 1: [banana, yellow], 2: [apple, red], 3: [apple, brown], 4: [kiwi, green], 5: [kiwi, brown], ... } Goal to achieve: 1. I want a new dictionary with the first value of the list as key, and a list of values for the same key. Something like this: new_dict = { apple: [green, red, brown] banana: [yellow] kiwi: [green, brown], ... } 2. After that I want to count the number of values for each key (e.g. {apple:3, banana:1, kiwi,2} ), and this could be easily achieved with a Counter, so it shouldn't be a problem. Then, I want to select only the keys that have a certain number of values (for example, if I want to mantain only keys associated to 2 or more values, the final_dict will be this: final_dict = { apple:3, kiwi:2, .... } 3. Then I want to return the original keys from dict of the elements that have at least 2 values, so at the end I will have: original_keys_with_at_least_2_values = [0, 2, 3, 4, 5] My code # Create new_dict like: new_dict = {apple:None, banana:None, kiwi:None,..} new_dict = {k: None for k in dict.values()[0]} for k in new_dict.keys(): for i in dict.values()[0]: if i == k: new_dict[k] = dict[i][1] I'm stuck using nested for cicles, even if I know Python comprehension is faster, but I really don't know how to solve it. Any solution or idea would be appreciated. A: You can use a defaultdict to group the items by the first entry from collections import defaultdict fruits = defaultdict(list) data = { 0: ['apple', 'green'], 1: ['banana', 'yellow'], 2: ['apple', 'red'], 3: ['apple', 'brown'], 4: ['kiwi', 'green'], 5: ['kiwi', 'brown'] } for _, v in data.items(): fruits[v[0]].extend(v[1:]) print(dict(fruits)) # {'apple': ['green', 'red', 'brown'], 'banana': ['yellow'], 'kiwi': ['green', 'brown']} If there is less than two entries in any list, you'll need to account for that... Then, use comprehension to get the counts, not Counter as that won't give you the lengths of those lists. fruits_count = {k: len(v) for k, v in fruits.items()} fruits_count_with_at_least_2 = {k: v for k, v in fruits_count.items() if v >= 2} And then a loop will be needed to collect the original keys original_keys_with_2_count = [] for k, values in data.items(): fruit = values[0] count = fruits_count.get(fruit, -1) if count >= 2: original_keys_with_2_count.append(k) print(original_keys_with_2_count) # [0, 2, 3, 4, 5]
Set value as key and a list of values as value in Python
I have a big dictionary (250k+ keys) like this: dict = { 0: [apple, green], 1: [banana, yellow], 2: [apple, red], 3: [apple, brown], 4: [kiwi, green], 5: [kiwi, brown], ... } Goal to achieve: 1. I want a new dictionary with the first value of the list as key, and a list of values for the same key. Something like this: new_dict = { apple: [green, red, brown] banana: [yellow] kiwi: [green, brown], ... } 2. After that I want to count the number of values for each key (e.g. {apple:3, banana:1, kiwi,2} ), and this could be easily achieved with a Counter, so it shouldn't be a problem. Then, I want to select only the keys that have a certain number of values (for example, if I want to mantain only keys associated to 2 or more values, the final_dict will be this: final_dict = { apple:3, kiwi:2, .... } 3. Then I want to return the original keys from dict of the elements that have at least 2 values, so at the end I will have: original_keys_with_at_least_2_values = [0, 2, 3, 4, 5] My code # Create new_dict like: new_dict = {apple:None, banana:None, kiwi:None,..} new_dict = {k: None for k in dict.values()[0]} for k in new_dict.keys(): for i in dict.values()[0]: if i == k: new_dict[k] = dict[i][1] I'm stuck using nested for cicles, even if I know Python comprehension is faster, but I really don't know how to solve it. Any solution or idea would be appreciated.
[ "You can use a defaultdict to group the items by the first entry\nfrom collections import defaultdict\n\nfruits = defaultdict(list)\n\ndata = {\n 0: ['apple', 'green'],\n 1: ['banana', 'yellow'],\n 2: ['apple', 'red'],\n 3: ['apple', 'brown'],\n 4: ['kiwi', 'green'],\n 5: ['kiwi', 'brown']\n}\n\nfor _, v in data.items():\n fruits[v[0]].extend(v[1:])\n\nprint(dict(fruits))\n# {'apple': ['green', 'red', 'brown'], 'banana': ['yellow'], 'kiwi': ['green', 'brown']}\n\nIf there is less than two entries in any list, you'll need to account for that...\nThen, use comprehension to get the counts, not Counter as that won't give you the lengths of those lists.\nfruits_count = {k: len(v) for k, v in fruits.items()}\nfruits_count_with_at_least_2 = {k: v for k, v in fruits_count.items() if v >= 2}\n\nAnd then a loop will be needed to collect the original keys\noriginal_keys_with_2_count = []\nfor k, values in data.items():\n fruit = values[0]\n count = fruits_count.get(fruit, -1)\n if count >= 2:\n original_keys_with_2_count.append(k)\n\nprint(original_keys_with_2_count)\n# [0, 2, 3, 4, 5]\n\n" ]
[ 1 ]
[]
[]
[ "dictionary", "dictionary_comprehension", "python" ]
stackoverflow_0074632369_dictionary_dictionary_comprehension_python.txt
Q: Simplest way to publish over Zeroconf/Bonjour? I've got some apps I would like to make visible with zeroconf. Is there an easy scriptable way to do this? Is there anything that needs to be done by my network admin to enable this? Python or sh would be preferrable. OS-specific suggestions welcome for Linux and OS X. A: pybonjour doesn't seem to be actively maintained. I'm using python-zeroconf. pip install zeroconf Here is an excerpt from a script I use to announce a Twisted-Autobahn WebSocket to an iOS device: from zeroconf import ServiceInfo, Zeroconf class WebSocketManager(service.Service, object): ws_service_name = 'Verasonics WebSocket' wsPort = None wsInfo = None def __init__(self, factory, portCallback): factory.protocol = BroadcastServerProtocol self.factory = factory self.portCallback = portCallback self.zeroconf = Zeroconf() def privilegedStartService(self): self.wsPort = reactor.listenTCP(0, self.factory) port = self.wsPort.getHost().port fqdn = socket.gethostname() ip_addr = socket.gethostbyname(fqdn) hostname = fqdn.split('.')[0] wsDesc = {'service': 'Verasonics Frame', 'version': '1.0.0'} self.wsInfo = ServiceInfo('_verasonics-ws._tcp.local.', hostname + ' ' + self.ws_service_name + '._verasonics-ws._tcp.local.', socket.inet_aton(ip_addr), port, 0, 0, wsDesc, hostname + '.local.') self.zeroconf.register_service(self.wsInfo) self.portCallback(port) return super(WebSocketManager, self).privilegedStartService() def stopService(self): self.zeroconf.unregister_service(self.wsInfo) self.wsPort.stopListening() return super(WebSocketManager , self).stopService() A: Or you can just use bash: dns-sd -R <Name> <Type> <Domain> <Port> [<TXT>...] This works by default on OS X. For other *nixes, refer to the avahi-publish man page (which you may need to install via your preferred package manager). A: I'd recommend pybonjour. A: Through the Avahi Python bindings, it's very easy. A: Although this answer points you in the right direction, it seems that python-zeroconf (0.39.4) had some changes making the example above not work (for me) anymore. Also I think a more minimal, self-contained, answer would be nice, so here goes: from zeroconf import ServiceInfo, Zeroconf PORT=8080 zeroconf = Zeroconf() wsInfo = ServiceInfo('_http._tcp.local.', "myhost._http._tcp.local.", PORT, 0, 0, {"random_key": "1234", "answer": "42"}) zeroconf.register_service(wsInfo) import time time.sleep(1000); Note that anything beyond PORT is optional for ServiceInfo(). You can run multiple of these programs at the same time; they will all bind to the same UDP port without a problem.
Simplest way to publish over Zeroconf/Bonjour?
I've got some apps I would like to make visible with zeroconf. Is there an easy scriptable way to do this? Is there anything that needs to be done by my network admin to enable this? Python or sh would be preferrable. OS-specific suggestions welcome for Linux and OS X.
[ "pybonjour doesn't seem to be actively maintained. I'm using python-zeroconf.\npip install zeroconf\n\nHere is an excerpt from a script I use to announce a Twisted-Autobahn WebSocket to an iOS device:\nfrom zeroconf import ServiceInfo, Zeroconf\n\nclass WebSocketManager(service.Service, object):\n ws_service_name = 'Verasonics WebSocket'\n wsPort = None\n wsInfo = None\n\n def __init__(self, factory, portCallback):\n factory.protocol = BroadcastServerProtocol\n self.factory = factory\n self.portCallback = portCallback\n self.zeroconf = Zeroconf()\n\n def privilegedStartService(self):\n self.wsPort = reactor.listenTCP(0, self.factory)\n port = self.wsPort.getHost().port\n\n fqdn = socket.gethostname()\n ip_addr = socket.gethostbyname(fqdn)\n hostname = fqdn.split('.')[0]\n\n wsDesc = {'service': 'Verasonics Frame', 'version': '1.0.0'}\n self.wsInfo = ServiceInfo('_verasonics-ws._tcp.local.',\n hostname + ' ' + self.ws_service_name + '._verasonics-ws._tcp.local.',\n socket.inet_aton(ip_addr), port, 0, 0,\n wsDesc, hostname + '.local.')\n self.zeroconf.register_service(self.wsInfo)\n self.portCallback(port)\n\n return super(WebSocketManager, self).privilegedStartService()\n\n def stopService(self):\n self.zeroconf.unregister_service(self.wsInfo)\n\n self.wsPort.stopListening()\n return super(WebSocketManager , self).stopService()\n\n", "Or you can just use bash:\ndns-sd -R <Name> <Type> <Domain> <Port> [<TXT>...]\n\nThis works by default on OS X. For other *nixes, refer to the avahi-publish man page (which you may need to install via your preferred package manager).\n", "I'd recommend pybonjour.\n", "Through the Avahi Python bindings, it's very easy.\n", "Although this answer points you in the right direction, it seems that python-zeroconf (0.39.4) had some changes making the example above not work (for me) anymore.\nAlso I think a more minimal, self-contained, answer would be nice, so here goes:\nfrom zeroconf import ServiceInfo, Zeroconf\n\nPORT=8080\n\nzeroconf = Zeroconf()\nwsInfo = ServiceInfo('_http._tcp.local.',\n \"myhost._http._tcp.local.\",\n PORT, 0, 0, {\"random_key\": \"1234\", \"answer\": \"42\"})\nzeroconf.register_service(wsInfo)\n\nimport time\ntime.sleep(1000);\n\n\nNote that anything beyond PORT is optional for ServiceInfo().\nYou can run multiple of these programs at the same time; they will all bind to the same UDP port without a problem.\n" ]
[ 11, 10, 7, 2, 1 ]
[]
[]
[ "bonjour", "python", "zeroconf" ]
stackoverflow_0001916017_bonjour_python_zeroconf.txt
Q: Trouble with injecting Callable I'm using python-dependency-injector. I tried this code and it worked perfectly: https://python-dependency-injector.ets-labs.org/providers/callable.html that page also mentioned next: Callable provider handles an injection of the dependencies the same way like a Factory provider. So I went and wrote this code: import passlib.hash from dependency_injector import containers, providers from dependency_injector.wiring import Provide, inject class Container(containers.DeclarativeContainer): password_verifier = providers.Callable(passlib.hash.sha256_crypt.verify) @inject def bar(password_verifier=Provide[Container.password_verifier]): pass if __name__ == "__main__": container = Container() container.wire(modules=[__name__]) bar() And it -- as you might expect -- didn't work. I received this error: Traceback (most recent call last): File "/home/common/learning_2022/code/python/blog_engine/test.py", line 20, in <module> bar() File "src/dependency_injector/_cwiring.pyx", line 26, in dependency_injector._cwiring._get_sync_patched._patched File "src/dependency_injector/providers.pyx", line 225, in dependency_injector.providers.Provider.__call__ File "src/dependency_injector/providers.pyx", line 1339, in dependency_injector.providers.Callable._provide File "src/dependency_injector/providers.pxd", line 635, in dependency_injector.providers.__callable_call File "src/dependency_injector/providers.pxd", line 608, in dependency_injector.providers.__call TypeError: GenericHandler.verify() missing 2 required positional arguments: 'secret' and 'hash' A: The method passlib.hash.sha256_crypt.verify requires two positional arguments, secret and hash as shown here: https://passlib.readthedocs.io/en/stable/lib/passlib.hash.sha256_crypt.html Because you're injecting an attribute of the Container, the DI framework must create an instance of this to inject it into the bar() method. The DI framework is unable to instantiate this object however since the required positional arguments are missing. If you wanted to use the method in bar(), you'd have to do so without provoking the DI framework to try to create an instance of the object at initialisation time. You could do this with the following: import passlib.hash from dependency_injector import containers, providers from dependency_injector.wiring import Provide, inject class Container(containers.DeclarativeContainer): password_hasher = providers.Callable( passlib.hash.sha256_crypt.hash, salt_size=16, rounds=10000, ) password_verifier = providers.Callable(passlib.hash.sha256_crypt.verify) @inject def bar(container=Provide[Container]): hashed_password = container.password_hasher("123") assert container.password_verifier("123", hashed_password) if __name__ == "__main__": container = Container() container.wire(modules=[__name__]) bar() A: I ran into this problem too and after much search I found the answer. You can inject the callable by using the provider attribute: import passlib.hash from dependency_injector import containers, providers from dependency_injector.wiring import Provide, inject class Container(containers.DeclarativeContainer): # Use the 'provider' attribute on the provider in the container password_verifier = providers.Callable(passlib.hash.sha256_crypt.verify).provider @inject def bar(password_verifier=Provide[Container.password_verifier]): password_verifier(...) if __name__ == "__main__": container = Container() container.wire(modules=[__name__]) bar()
Trouble with injecting Callable
I'm using python-dependency-injector. I tried this code and it worked perfectly: https://python-dependency-injector.ets-labs.org/providers/callable.html that page also mentioned next: Callable provider handles an injection of the dependencies the same way like a Factory provider. So I went and wrote this code: import passlib.hash from dependency_injector import containers, providers from dependency_injector.wiring import Provide, inject class Container(containers.DeclarativeContainer): password_verifier = providers.Callable(passlib.hash.sha256_crypt.verify) @inject def bar(password_verifier=Provide[Container.password_verifier]): pass if __name__ == "__main__": container = Container() container.wire(modules=[__name__]) bar() And it -- as you might expect -- didn't work. I received this error: Traceback (most recent call last): File "/home/common/learning_2022/code/python/blog_engine/test.py", line 20, in <module> bar() File "src/dependency_injector/_cwiring.pyx", line 26, in dependency_injector._cwiring._get_sync_patched._patched File "src/dependency_injector/providers.pyx", line 225, in dependency_injector.providers.Provider.__call__ File "src/dependency_injector/providers.pyx", line 1339, in dependency_injector.providers.Callable._provide File "src/dependency_injector/providers.pxd", line 635, in dependency_injector.providers.__callable_call File "src/dependency_injector/providers.pxd", line 608, in dependency_injector.providers.__call TypeError: GenericHandler.verify() missing 2 required positional arguments: 'secret' and 'hash'
[ "The method passlib.hash.sha256_crypt.verify requires two positional arguments, secret and hash as shown here: https://passlib.readthedocs.io/en/stable/lib/passlib.hash.sha256_crypt.html\nBecause you're injecting an attribute of the Container, the DI framework must create an instance of this to inject it into the bar() method. The DI framework is unable to instantiate this object however since the required positional arguments are missing.\nIf you wanted to use the method in bar(), you'd have to do so without provoking the DI framework to try to create an instance of the object at initialisation time. You could do this with the following:\nimport passlib.hash\n\nfrom dependency_injector import containers, providers\nfrom dependency_injector.wiring import Provide, inject\n\n\nclass Container(containers.DeclarativeContainer):\n password_hasher = providers.Callable(\n passlib.hash.sha256_crypt.hash,\n salt_size=16,\n rounds=10000,\n )\n password_verifier = providers.Callable(passlib.hash.sha256_crypt.verify)\n\n\n@inject\ndef bar(container=Provide[Container]):\n hashed_password = container.password_hasher(\"123\")\n assert container.password_verifier(\"123\", hashed_password)\n\n\nif __name__ == \"__main__\":\n container = Container()\n container.wire(modules=[__name__])\n\n bar()\n\n", "I ran into this problem too and after much search I found the answer.\nYou can inject the callable by using the provider attribute:\nimport passlib.hash\n\nfrom dependency_injector import containers, providers\nfrom dependency_injector.wiring import Provide, inject\n\n\nclass Container(containers.DeclarativeContainer):\n # Use the 'provider' attribute on the provider in the container\n password_verifier = providers.Callable(passlib.hash.sha256_crypt.verify).provider\n\n\n@inject\ndef bar(password_verifier=Provide[Container.password_verifier]):\n password_verifier(...) \n\n\nif __name__ == \"__main__\":\n container = Container()\n container.wire(modules=[__name__])\n\n bar()\n\n" ]
[ 0, 0 ]
[]
[]
[ "dependency_injection", "python" ]
stackoverflow_0073522651_dependency_injection_python.txt
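For illustration, a minimal end-to-end sketch (not part of the thread above) that combines the two answers: both providers are exposed through .provider so the injected objects are the providers themselves, and the runtime arguments are supplied at call time. It assumes python-dependency-injector and passlib are installed; the register() function name is made up for this example.

import passlib.hash
from dependency_injector import containers, providers
from dependency_injector.wiring import Provide, inject


class Container(containers.DeclarativeContainer):
    # .provider injects the provider object instead of calling it during wiring
    password_hasher = providers.Callable(passlib.hash.sha256_crypt.hash).provider
    password_verifier = providers.Callable(passlib.hash.sha256_crypt.verify).provider


@inject
def register(password,
             hasher=Provide[Container.password_hasher],
             verifier=Provide[Container.password_verifier]):
    hashed = hasher(password)           # calls sha256_crypt.hash(password)
    assert verifier(password, hashed)   # calls sha256_crypt.verify(password, hashed)
    return hashed


if __name__ == "__main__":
    container = Container()
    container.wire(modules=[__name__])
    register("123")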
Q: Mean of selected rows of a matrix with Numpy and performance I need to compute the mean of a 2D across one dimension. Here I keep all rows: import numpy as np, time x = np.random.random((100000, 500)) t0 = time.time() y = x.mean(axis=0) # y.shape is (500,) as expected print(time.time() - t0) # 36 milliseconds When I filter and select some rows, I notice it is 8 times slower. So I tried an easy test where selected_rows are in fact all rows. Still, it is 8 times slower: selected_rows = np.arange(100000) t0 = time.time() y = x[selected_rows, :].mean(axis=0) # selecting all rows! print(time.time() - t0) # 280 milliseconds! (for the same result as above!) Is there a way to speed up the process of selecting certain rows (selected_rows), and computing .mean(axis=0) ? In the specific case where selected_rows = all rows, it would be interesting to not have 8x slower execution. A: (Sorry for first version of this answer) Problem is with creation of new array which takes a lot of time compared to calculating mean. I tried to optimize whole process using numba: import numba @numba.jit('float64[:](float64[:, :], int32[:])') def selective_mean(array, indices): sum = np.zeros(array.shape[1], dtype=np.float64) for idx in indices: sum += array[idx] return sum / array.shape[0] t0 = time.time() y2 = selective_mean(x, selected_rows) print(time.time() - t0) There is little slowdown compared to numpy, but much smaller (20% slower?). After compilation (first call to this function) I got about the same timings. For fewer indices array you should see some gain. A: When you do x[selected_rows, :] where selected_rows is an array, it performs advanced (aka fancy) indexing to create a new array. This is what takes time. If, instead, you did a slice operation, a view of the original array is created, and that takes less time. For example: import timeit import numpy as np selected_rows = np.arange(0, 100000, 2) array = np.random.random((100000, 500)) t1 = timeit.timeit("array[selected_rows, :].mean(axis=0)", globals=globals(), number=10) t2 = timeit.timeit("array[::2, :].mean(axis=0)", globals=globals(), number=10) print(t1, t2, t1 / t2) # 1.3985465039731935 0.18735826201736927 7.464557414839488 Unfortunately, there's no good way to represent all possible selected_rows as slices, so if you have a selected_rows that can't be represented as a slice, you don't have any other option but to take the hit in performance. There's more information in the answers to these questions: Fast numpy fancy indexing get view of numpy array using boolean or sequence object (advanced indexing) dankal444's answer here doesn't help in your case, since the axis of the mean call is the axis you wanted to filter in the first place. It is, however, the best way to do this if the filter axis and the mean axis are different -- save the creation of the new array until after you've condensed one axis. You still take a performance hit compared to basic slicing, but it is not as large as if you indexed before the mean call. 
For example, if you wanted .mean(axis=1), t1 = timeit.timeit("array[selected_rows, :].mean(axis=1)", globals=globals(), number=10) t2 = timeit.timeit("array.mean(axis=1)[selected_rows]", globals=globals(), number=10) t3 = timeit.timeit("array[::2, :].mean(axis=1)", globals=globals(), number=10) t4 = timeit.timeit("array.mean(axis=1)[::2]", globals=globals(), number=10) print(t1, t2, t3, t4) # 1.4732236850004483 0.3643951010008095 0.21357544500006043 0.32832237200000236 Which shows that Indexing before mean is the worst by far (t1) Slicing before mean is best, since you don't have to spend extra time calculating means for the unnecessary rows (t3) Both indexing (t2) and slicing (t4) after mean are better than indexing before mean, but not better than slicing before mean A: AFAIK, this is not possible to do this efficiently only in Numpy. The second code is slow because x[selected_rows, :] creates a new array (there is no way to create a view in the general case as explained by @PranavHosangadi). This means a new buffer needs to be allocated, filled with data (causing it to be read on x86-64 platforms due to the write-allocate cache policy and causing slow page faults regarding the system allocator) from x that must also be read from memory. All of this is pretty expensive, not to mention the mean function then read the newly created buffer back from memory. Numpy also performs this fancy indexing operation sub-optimally internally in the C code (due to the overhead of advanced iterators, the small size of the last axis and the number of possible cases to optimize in the Numpy code). ufuncs reduction could help here (they are designed for such a use-case), but the resulting performance will be quite disappointing. Numba and Cython can help to make this faster. @dankal444 provided a quite fast serial implementation using Numba. Here is a faster parallel chunk-based implementation (which is also a bit more complicated): import numba as nb # Use '(float64[:,:], int64[:])' if indices is a 64-bit input. # Use a list with both signature to be able to use both types at the expense of a slower compilation time. @nb.jit('(float64[:,:], int32[:])', fastmath=True, parallel=True) def indexedParallelMean(arr, indices): splitCount = 4 l, m, n = arr.shape[0], arr.shape[1], indices.size res = np.zeros((splitCount, m)) # Parallel reduction of each chunk for i in nb.prange(splitCount): start = n * i // splitCount end = n * (i + 1) // splitCount for j in range(start, end): res[i, :] += arr[indices[j], :] # Final sequential reduction for i in range(1, splitCount): res[0] += res[i] return res[0] / n Here are performance results on my machine with a i5-9600KF processor (6-cores): Numpy initial version: 100 ms Numba serial (of dankal444): 22 ms Numba parallel (this answer): 11 ms The computation is memory bound: it saturate about 80% of the throughput of the RAM bandwidth. It is nearly optimal for a high-level Numba/Cython code.
Mean of selected rows of a matrix with Numpy and performance
I need to compute the mean of a 2D array across one dimension. Here I keep all rows: import numpy as np, time x = np.random.random((100000, 500)) t0 = time.time() y = x.mean(axis=0) # y.shape is (500,) as expected print(time.time() - t0) # 36 milliseconds When I filter and select some rows, I notice it is 8 times slower. So I tried an easy test where selected_rows are in fact all rows. Still, it is 8 times slower: selected_rows = np.arange(100000) t0 = time.time() y = x[selected_rows, :].mean(axis=0) # selecting all rows! print(time.time() - t0) # 280 milliseconds! (for the same result as above!) Is there a way to speed up the process of selecting certain rows (selected_rows) and computing .mean(axis=0)? In the specific case where selected_rows = all rows, it would be nice not to have 8x slower execution.
[ "(Sorry for first version of this answer)\nProblem is with creation of new array which takes a lot of time compared to calculating mean.\nI tried to optimize whole process using numba:\nimport numba\[email protected]('float64[:](float64[:, :], int32[:])')\ndef selective_mean(array, indices):\n sum = np.zeros(array.shape[1], dtype=np.float64)\n for idx in indices:\n sum += array[idx]\n return sum / array.shape[0]\n\nt0 = time.time()\ny2 = selective_mean(x, selected_rows)\nprint(time.time() - t0) \n\nThere is little slowdown compared to numpy, but much smaller (20% slower?). After compilation (first call to this function) I got about the same timings. For fewer indices array you should see some gain.\n", "When you do x[selected_rows, :] where selected_rows is an array, it performs advanced (aka fancy) indexing to create a new array. This is what takes time.\nIf, instead, you did a slice operation, a view of the original array is created, and that takes less time. For example:\nimport timeit\nimport numpy as np\n\nselected_rows = np.arange(0, 100000, 2)\narray = np.random.random((100000, 500))\n\nt1 = timeit.timeit(\"array[selected_rows, :].mean(axis=0)\", globals=globals(), number=10)\nt2 = timeit.timeit(\"array[::2, :].mean(axis=0)\", globals=globals(), number=10)\n\nprint(t1, t2, t1 / t2) # 1.3985465039731935 0.18735826201736927 7.464557414839488\n\nUnfortunately, there's no good way to represent all possible selected_rows as slices, so if you have a selected_rows that can't be represented as a slice, you don't have any other option but to take the hit in performance. There's more information in the answers to these questions:\n\nFast numpy fancy indexing\nget view of numpy array using boolean or sequence object (advanced indexing)\n\ndankal444's answer here doesn't help in your case, since the axis of the mean call is the axis you wanted to filter in the first place. It is, however, the best way to do this if the filter axis and the mean axis are different -- save the creation of the new array until after you've condensed one axis. You still take a performance hit compared to basic slicing, but it is not as large as if you indexed before the mean call.\nFor example, if you wanted .mean(axis=1),\nt1 = timeit.timeit(\"array[selected_rows, :].mean(axis=1)\", globals=globals(), number=10)\nt2 = timeit.timeit(\"array.mean(axis=1)[selected_rows]\", globals=globals(), number=10)\nt3 = timeit.timeit(\"array[::2, :].mean(axis=1)\", globals=globals(), number=10)\nt4 = timeit.timeit(\"array.mean(axis=1)[::2]\", globals=globals(), number=10)\n\nprint(t1, t2, t3, t4)\n# 1.4732236850004483 0.3643951010008095 0.21357544500006043 0.32832237200000236\n\nWhich shows that\n\nIndexing before mean is the worst by far (t1)\nSlicing before mean is best, since you don't have to spend extra time calculating means for the unnecessary rows (t3)\nBoth indexing (t2) and slicing (t4) after mean are better than indexing before mean, but not better than slicing before mean\n\n", "AFAIK, this is not possible to do this efficiently only in Numpy. The second code is slow because x[selected_rows, :] creates a new array (there is no way to create a view in the general case as explained by @PranavHosangadi). This means a new buffer needs to be allocated, filled with data (causing it to be read on x86-64 platforms due to the write-allocate cache policy and causing slow page faults regarding the system allocator) from x that must also be read from memory. 
All of this is pretty expensive, not to mention the mean function then read the newly created buffer back from memory. Numpy also performs this fancy indexing operation sub-optimally internally in the C code (due to the overhead of advanced iterators, the small size of the last axis and the number of possible cases to optimize in the Numpy code). ufuncs reduction could help here (they are designed for such a use-case), but the resulting performance will be quite disappointing.\nNumba and Cython can help to make this faster. @dankal444 provided a quite fast serial implementation using Numba. Here is a faster parallel chunk-based implementation (which is also a bit more complicated):\nimport numba as nb\n\n# Use '(float64[:,:], int64[:])' if indices is a 64-bit input.\n# Use a list with both signature to be able to use both types at the expense of a slower compilation time.\[email protected]('(float64[:,:], int32[:])', fastmath=True, parallel=True)\ndef indexedParallelMean(arr, indices):\n splitCount = 4\n l, m, n = arr.shape[0], arr.shape[1], indices.size\n res = np.zeros((splitCount, m))\n\n # Parallel reduction of each chunk\n for i in nb.prange(splitCount):\n start = n * i // splitCount\n end = n * (i + 1) // splitCount\n for j in range(start, end):\n res[i, :] += arr[indices[j], :]\n\n # Final sequential reduction\n for i in range(1, splitCount):\n res[0] += res[i]\n\n return res[0] / n\n\nHere are performance results on my machine with a i5-9600KF processor (6-cores):\nNumpy initial version: 100 ms\nNumba serial (of dankal444): 22 ms\nNumba parallel (this answer): 11 ms\n\nThe computation is memory bound: it saturate about 80% of the throughput of the RAM bandwidth. It is nearly optimal for a high-level Numba/Cython code.\n" ]
[ 1, 1, 1 ]
[]
[]
[ "mean", "numpy", "performance", "python" ]
stackoverflow_0074628642_mean_numpy_performance_python.txt
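For illustration, a small self-contained timing harness (not part of the thread above) that reproduces the comparison discussed in this question: a full mean, a mean after fancy indexing (which copies), and a mean after basic slicing (which only creates a view). Exact numbers depend on the machine; the array shape matches the question.

import timeit
import numpy as np

x = np.random.random((100000, 500))
selected_rows = np.arange(0, 100000, 2)   # every second row

full = timeit.timeit(lambda: x.mean(axis=0), number=10)
fancy = timeit.timeit(lambda: x[selected_rows, :].mean(axis=0), number=10)
sliced = timeit.timeit(lambda: x[::2, :].mean(axis=0), number=10)

print(f"full mean:      {full:.3f} s")
print(f"fancy indexing: {fancy:.3f} s")   # pays for allocating and filling a copy
print(f"basic slicing:  {sliced:.3f} s")  # works on a view, no copy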
Q: python: have a default value when formatting strings (str.format() method) Is there a way to have a default value when doing string formatting? For example: s = "Text {0} here, and text {1} there" s.format('foo', 'bar') What I'm looking for is setting a default value for a numbered index, so that it can be skipped in the placeholder, e.g. something like this: s = "Text {0:'default text'} here, and text {1} there" I checked https://docs.python.org/3/tutorial/inputoutput.html#tut-f-strings and didn't find what I need, maybe looking in the wrong place? Thanks. A: You can't do it within the format string itself, but using named placeholders, you can pass a dict-like thing to .format_map that contains a generic default value, or combine a dict of defaults for each value with the provided dict to override individually. Examples: With a defaulting dict-like thing: from collections import Counter fmt_str = "I have {spam} cans of spam and {eggs} eggs." print(fmt_str.format_map(Counter(eggs=2))) outputs I have 0 cans of spam and 2 eggs. With combining dict of defaults: def format_groceries(**kwargs): defaults = {"spam": 0, "eggs": 0, **kwargs} # Defaults are replaced if kwargs includes same key return "I have {spam} cans of spam and {eggs} eggs.".format(defaults) print(format_groceries(eggs=2)) which behaves the same way. With numbered placeholders, the solutions end up uglier and less intuitive, e.g.: def format_up_to_two_things(*args) if len(args) < 2: args = ('default text', *args) return "Text {0} here, and text {1} there".format(*args) The tutorial doesn't really go into this because 99% of the time, modern Python is using f-strings, and actual f-strings generally don't need to deal with this case, since they're working with arbitrary expressions that either work or don't work, there's no concept of passing an incomplete set of placeholders to them. A: If you only needed to insert an exact number of values positionally as a lambda meh = lambda x='default x',y='default y': 'Text {0} here, Text {1} here'.format(x,y) print(meh(3,7)) as a function def meh(x='default x',y='default y'): return "Text {0} here, Text {1}".format(x,y) print(meh(3,7))
python: have a default value when formatting strings (str.format() method)
Is there a way to have a default value when doing string formatting? For example: s = "Text {0} here, and text {1} there" s.format('foo', 'bar') What I'm looking for is setting a default value for a numbered index, so that it can be skipped in the placeholder, e.g. something like this: s = "Text {0:'default text'} here, and text {1} there" I checked https://docs.python.org/3/tutorial/inputoutput.html#tut-f-strings and didn't find what I need, maybe looking in the wrong place? Thanks.
[ "You can't do it within the format string itself, but using named placeholders, you can pass a dict-like thing to .format_map that contains a generic default value, or combine a dict of defaults for each value with the provided dict to override individually.\nExamples:\n\nWith a defaulting dict-like thing:\nfrom collections import Counter\n\nfmt_str = \"I have {spam} cans of spam and {eggs} eggs.\"\n\nprint(fmt_str.format_map(Counter(eggs=2)))\n\noutputs I have 0 cans of spam and 2 eggs.\n\nWith combining dict of defaults:\ndef format_groceries(**kwargs):\n    defaults = {\"spam\": 0, \"eggs\": 0, **kwargs}  # defaults are replaced if kwargs includes the same key\n    return \"I have {spam} cans of spam and {eggs} eggs.\".format(**defaults)\n\nprint(format_groceries(eggs=2))\n\nwhich behaves the same way.\n\n\nWith numbered placeholders, the solutions end up uglier and less intuitive, e.g.:\ndef format_up_to_two_things(*args):\n    if len(args) < 2:\n        args = ('default text', *args)\n    return \"Text {0} here, and text {1} there\".format(*args)\n\nThe tutorial doesn't really go into this because 99% of the time, modern Python is using f-strings, and actual f-strings generally don't need to deal with this case, since they're working with arbitrary expressions that either work or don't work; there's no concept of passing an incomplete set of placeholders to them.\n", "If you only needed to insert an exact number of values positionally:\nas a lambda\nmeh = lambda x='default x', y='default y': 'Text {0} here, Text {1} here'.format(x, y)\nprint(meh(3, 7))\n\nas a function\ndef meh(x='default x', y='default y'):\n    return \"Text {0} here, Text {1} here\".format(x, y)\nprint(meh(3, 7))\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "python_3.x", "string" ]
stackoverflow_0074631840_python_python_3.x_string.txt
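For illustration, another option not shown in the answers above: string.Formatter lets you override get_value so that missing numbered arguments fall back to a default, which is close to what the question asks for. This is a minimal sketch; the DefaultFormatter name and the default value are made up.

from string import Formatter


class DefaultFormatter(Formatter):
    def __init__(self, default="default text"):
        self.default = default

    def get_value(self, key, args, kwargs):
        if isinstance(key, int) and key >= len(args):
            return self.default            # fallback for a missing positional argument
        return super().get_value(key, args, kwargs)


fmt = DefaultFormatter()
s = "Text {0} here, and text {1} there"
print(fmt.format(s, "foo", "bar"))   # Text foo here, and text bar there
print(fmt.format(s, "foo"))          # Text foo here, and text default text there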
Q: Visual Studio Code does not show variable color or problems anymore About 4 days ago, while I was doing my schoolwork I noticed that the variable colors don't turn blue like they used to, and it does not show me problems in the code anymore. I am a beginner in coding, so the "not showing problem" thing is a big issue for me. Would anyone know how can I get them back? Also, this problem is in all of my Visual Studio Code tabs, so its not just a specific code doing it. [EDIT: nothing that i tried fixed the issue, since uninstalling or changing files is not an option for me since i need administrator rights to do so [which i do not have], but it is alright, i have installed another linter and now everything should be fine] A: Maybe you need to install a vs code plug-in called pylance and make sure that it is not disabled in your workspace. A: uninstall Visual Studio Code and delete user\AppData\Roaming\Code (windows) and the install it again, may be it's due to a buggy extension or some setting A: You can follow the picture and open your settings. Check this option so that intellisence and error wavy lines will be back. A: I had this issue and found that running updates on some extensions reloaded VSCode and I got my syntax highlighting back.
Visual Studio Code does not show variable color or problems anymore
About 4 days ago, while I was doing my schoolwork, I noticed that the variable colors don't turn blue like they used to, and VS Code does not show me problems in the code anymore. I am a beginner in coding, so the "not showing problems" thing is a big issue for me. Would anyone know how I can get them back? Also, this problem is in all of my Visual Studio Code tabs, so it's not just one specific piece of code doing it. [EDIT: nothing that I tried fixed the issue, since uninstalling or changing files is not an option for me because I need administrator rights to do so [which I do not have], but it is alright, I have installed another linter and now everything should be fine]
[ "Maybe you need to install a VS Code plug-in called Pylance and make sure that it is not disabled in your workspace.\n", "Uninstall Visual Studio Code\nand delete user\\AppData\\Roaming\\Code (Windows),\nand then install it again;\nmaybe it's due to a buggy extension or some setting.\n", "You can follow the picture and open your settings.\n\nCheck this option so that IntelliSense and error wavy lines will be back.\n\n", "I had this issue and found that running updates on some extensions reloaded VSCode and I got my syntax highlighting back.\n" ]
[ 0, 0, 0, 0 ]
[]
[]
[ "python", "visual_studio_code" ]
stackoverflow_0071935260_python_visual_studio_code.txt
Q: How to deactivate a QVideoProbe? According to the docs "If source is zero, this probe will be deactivated" But calling setSource(0) gives the following exception: Exception has occurred: TypeError 'PySide2.QtMultimedia.QVideoProbe.setSource' called with wrong argument types: PySide2.QtMultimedia.QVideoProbe.setSource(int) Supported signatures: PySide2.QtMultimedia.QVideoProbe.setSource(PySide2.QtMultimedia.QMediaObject) PySide2.QtMultimedia.QVideoProbe.setSource(PySide2.QtMultimedia.QMediaRecorder) Im running my code on raspberry pi 4 with Rpi Os Bullseye 64bit and PySide2 version 5.15.2. Example code: import sys from PySide2 import QtCore, QtMultimedia from PySide2.QtMultimedia import * from PySide2.QtMultimediaWidgets import * from PySide2.QtWidgets import * class MainWindow(QMainWindow): def __init__(self, parent=None): super().__init__(parent) self.available_cameras = QCameraInfo.availableCameras() self.camera = QCamera(self.available_cameras[0]) self.probe = QtMultimedia.QVideoProbe(self) self.probe.videoFrameProbed.connect(self.processFrame) self.probe.setSource(self.camera) self.probe.setSource(0) def processFrame(self, frame): pass if __name__ == "__main__": app = QApplication(sys.argv) mainWindow = MainWindow() mainWindow.show() sys.exit(app.exec_()) A: The source object can be cleared like this: self.probe.setSource(None) In C++, passing zero to a pointer argument means the function will recieve a null pointer. Since this can't be done explicitly in Python, PySide/PyQt allow None to be passed instead. Generally speaking, it's always advisable to consult the Qt Docs in cases like this. The PySide/PyQt docs are a work in progress and are mostly auto-generated from the Qt Docs. This can often result in somewhat garbled or misleading descriptions that don't accurately reflect how the given API works in practice.
How to deactivate a QVideoProbe?
According to the docs, "If source is zero, this probe will be deactivated". But calling setSource(0) gives the following exception: Exception has occurred: TypeError 'PySide2.QtMultimedia.QVideoProbe.setSource' called with wrong argument types: PySide2.QtMultimedia.QVideoProbe.setSource(int) Supported signatures: PySide2.QtMultimedia.QVideoProbe.setSource(PySide2.QtMultimedia.QMediaObject) PySide2.QtMultimedia.QVideoProbe.setSource(PySide2.QtMultimedia.QMediaRecorder) I'm running my code on a Raspberry Pi 4 with Raspberry Pi OS Bullseye 64-bit and PySide2 version 5.15.2. Example code: import sys from PySide2 import QtCore, QtMultimedia from PySide2.QtMultimedia import * from PySide2.QtMultimediaWidgets import * from PySide2.QtWidgets import * class MainWindow(QMainWindow): def __init__(self, parent=None): super().__init__(parent) self.available_cameras = QCameraInfo.availableCameras() self.camera = QCamera(self.available_cameras[0]) self.probe = QtMultimedia.QVideoProbe(self) self.probe.videoFrameProbed.connect(self.processFrame) self.probe.setSource(self.camera) self.probe.setSource(0) def processFrame(self, frame): pass if __name__ == "__main__": app = QApplication(sys.argv) mainWindow = MainWindow() mainWindow.show() sys.exit(app.exec_())
[ "The source object can be cleared like this:\nself.probe.setSource(None)\n\nIn C++, passing zero to a pointer argument means the function will recieve a null pointer. Since this can't be done explicitly in Python, PySide/PyQt allow None to be passed instead.\nGenerally speaking, it's always advisable to consult the Qt Docs in cases like this. The PySide/PyQt docs are a work in progress and are mostly auto-generated from the Qt Docs. This can often result in somewhat garbled or misleading descriptions that don't accurately reflect how the given API works in practice.\n" ]
[ 2 ]
[]
[]
[ "pyqt5", "pyside2", "python", "qtmultimedia", "video" ]
stackoverflow_0074631000_pyqt5_pyside2_python_qtmultimedia_video.txt
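For illustration, two helper methods (not part of the thread above) that could be added to the question's MainWindow to toggle probing, building on the answer's setSource(None). QVideoProbe.setSource returns a bool telling you whether monitoring could be enabled; the method names start_probing/stop_probing are made up for this sketch.

    def start_probing(self):
        self.probe.videoFrameProbed.connect(self.processFrame)
        if not self.probe.setSource(self.camera):
            print("This camera does not support video probing")

    def stop_probing(self):
        self.probe.setSource(None)   # deactivates the probe, as in the answer above
        self.probe.videoFrameProbed.disconnect(self.processFrame)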
Q: How do I run a while loop in tkinter window while it is open? I have a while loop that I want to run while the Tkinter window is open but the Tkinter window doesn't even open when the while loop is running. This is a problem since my while loop is an infinite loop. I basically want to create a programme that provides the users with new choices after a previous choice is selected by updating the buttons through a while loop, but whenever I use a while loop Tkinter doesn't open the window until I end the loop. root = Tk() i=0 while i==0: def choice1(): list1[a1].implement() list1.remove(list1[a1]) def choice2(): list2[a2].implement() list2.remove(list2[a2]) button1 = tk.Button(root, text=list1.headline, command=choice1) button2 = tk.Button(root, text=list2.headline, command =choice2) root.mainloop() Also, an error shows up because tkinter keeps executing this loop until there are no items in list1 or list2 left. What I want to know is if there is a way to run Tkinter window while the while loop is going on (a1 and a2 are randomly generated numbers.) A: The mainloop() is the main reason for the window to be displayed continuously. When the while loop is running, the mainloop() does not get executed until the while loop ends. And because in your case the while loop never ends, the code including the mainloop() keeps waiting for its turn to be executed. To overcome this issue you will have to put all the widgets you want to be displayed in the window along with the mainloop() inside the while loop Like this: import tkinter as tk root = tk.Tk() i = 0 while i == 0: def choice1(): list1[a1].implement() list1.remove(list1[a1]) def choice2(): list2[a2].implement() list2.remove(list2[a2]) button1 = tk.Button(root, text=list1.headline, command=choice1) button2 = tk.Button(root, text=list2.headline, command=choice2) root.mainloop() A: You should probably put root.mainloop in the loop, otherwise it will never execute. If mainloop() doesn't execute, the window won't stay open. And also: you need to call the functions, defining them isn't enough. So instead of in the loop only having the def choice1() and def choice2() you need to also have choice1() and choice2() in the loop, otherwise it won't execute those commands. And one more thing: you need to pack() the buttons, so add the lines button1.pack() and button2.pack(). The buttons also need to be before the loop, which means your def choice1() and def choice2() need to be before the loop also. (The buttons will never show up otherwise) A: tkinter runs in its own loop, each button/widget/element can be tied to more advanced functions. Judging by your script, you have your functions built into your loop, if you place them outside of the loop, you can then enhance their usability. Your code: root = Tk() i=0 while i==0: def choice1(): list1[a1].implement() list1.remove(list1[a1]) def choice2(): list2[a2].implement() list2.remove(list2[a2]) button1 = tk.Button(root, text=list1.headline, command=choice1) button2 = tk.Button(root, text=list2.headline, command =choice2) root.mainloop() Suggested code: root = Tk() i=0 def choice1(): list1[a1].implement() list1.remove(list1[a1]) def choice2(): list2[a2].implement() list2.remove(list2[a2]) while i==0: # put here, what this extra loop will do.# button1 = tk.Button(root, text=list1.headline, command=choice1) button2 = tk.Button(root, text=list2.headline, command =choice2) root.mainloop()
How do I run a while loop in tkinter window while it is open?
I have a while loop that I want to run while the Tkinter window is open but the Tkinter window doesn't even open when the while loop is running. This is a problem since my while loop is an infinite loop. I basically want to create a programme that provides the users with new choices after a previous choice is selected by updating the buttons through a while loop, but whenever I use a while loop Tkinter doesn't open the window until I end the loop. root = Tk() i=0 while i==0: def choice1(): list1[a1].implement() list1.remove(list1[a1]) def choice2(): list2[a2].implement() list2.remove(list2[a2]) button1 = tk.Button(root, text=list1.headline, command=choice1) button2 = tk.Button(root, text=list2.headline, command =choice2) root.mainloop() Also, an error shows up because tkinter keeps executing this loop until there are no items in list1 or list2 left. What I want to know is if there is a way to run Tkinter window while the while loop is going on (a1 and a2 are randomly generated numbers.)
[ "The mainloop() is the main reason for the window to be displayed continuously. When the while loop is running, the mainloop() does not get executed until the while loop ends. And because in your case the while loop never ends, the code including the mainloop() keeps waiting for its turn to be executed.\nTo overcome this issue you will have to put all the widgets you want to be displayed in the window along with the mainloop() inside the while loop\nLike this:\nimport tkinter as tk\n\nroot = tk.Tk()\n\ni = 0\n\nwhile i == 0:\n\n def choice1():\n list1[a1].implement()\n list1.remove(list1[a1])\n\n def choice2():\n list2[a2].implement()\n list2.remove(list2[a2])\n\n\n button1 = tk.Button(root, text=list1.headline, command=choice1)\n button2 = tk.Button(root, text=list2.headline, command=choice2)\n\n root.mainloop()\n\n", "You should probably put root.mainloop in the loop, otherwise it will never execute. If mainloop() doesn't execute, the window won't stay open.\nAnd also: you need to call the functions, defining them isn't enough.\nSo instead of in the loop only having the def choice1() and def choice2() you need to also have choice1() and choice2() in the loop, otherwise it won't execute those commands.\nAnd one more thing: you need to pack() the buttons, so add the lines button1.pack() and button2.pack(). The buttons also need to be before the loop, which means your def choice1() and def choice2() need to be before the loop also. (The buttons will never show up otherwise)\n", "tkinter runs in its own loop, each button/widget/element can be tied to more advanced functions. Judging by your script, you have your functions built into your loop, if you place them outside of the loop, you can then enhance their usability.\nYour code:\n\nroot = Tk()\ni=0\nwhile i==0:\n def choice1():\n list1[a1].implement()\n list1.remove(list1[a1])\n def choice2():\n list2[a2].implement()\n list2.remove(list2[a2])\n\nbutton1 = tk.Button(root, text=list1.headline, command=choice1)\nbutton2 = tk.Button(root, text=list2.headline, command =choice2)\nroot.mainloop()\n\n\nSuggested code:\n\nroot = Tk()\n\ni=0\n\ndef choice1():\n list1[a1].implement()\n list1.remove(list1[a1])\n\ndef choice2():\n list2[a2].implement()\n list2.remove(list2[a2])\n\nwhile i==0:\n\n # put here, what this extra loop will do.#\n\nbutton1 = tk.Button(root, text=list1.headline, command=choice1)\nbutton2 = tk.Button(root, text=list2.headline, command =choice2)\nroot.mainloop()\n\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "python", "python_3.x", "tkinter", "user_interface", "while_loop" ]
stackoverflow_0061016789_python_python_3.x_tkinter_user_interface_while_loop.txt
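For illustration, a minimal event-driven rewrite (not part of the thread above): tkinter programs normally avoid while loops entirely and let mainloop() dispatch button callbacks, with each callback pulling the next choice and updating the widget. The choices list here is a made-up stand-in for the question's list1/list2 objects.

import tkinter as tk

choices = ["first choice", "second choice", "third choice"]

root = tk.Tk()

def pick_next():
    picked = choices.pop(0)
    print("implementing:", picked)        # stands in for listX[aX].implement()
    if choices:
        button.config(text=choices[0])    # offer the next choice
    else:
        button.config(text="no choices left", state=tk.DISABLED)

button = tk.Button(root, text=choices[0], command=pick_next)
button.pack()

root.mainloop()   # called exactly once; the callbacks do the rest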
Q: How to create a function that change is_active=False to True? How to create a function that change is_active=False to True? The case is that I want to create a function that change the value in an user case from is_active=False to is_active=True. At my final point I want to create "an email verification" when someone registered. If someone registered on my website, he receive an email with the verification. I guess I have to create a function that change "is_active=false" on "is_active=true" when someone clicked the link that call the function? Mean I well? Thanks! def activateEmail(request, user, email, first_name): send_mail( #email subject f"Activate your user account, {user.first_name} !", #email content f"Hi {user.first_name}!\nPlease click on the link below to confirm your registration\n{SITE_URL}\nhttps://patronite.pl/wizard/autor/profil?step=3", #email host user EMAIL_HOST_USER, #email to [user.email], #if error True is better. fail_silently=False, ) A: I think what you want is a function something like this def register_confirm(request, activation_key): if request.user.is_authenticated(): HttpResponseRedirect('/home') user_profile = get_object_or_404(UserProfile, activation_key=activation_key) if user_profile.key_expires < timezone.now(): return render_to_response('user_profile/confirm_expired.html') user = user_profile.user user.is_active = True user.save() return render_to_response('user_profile/confirm.html')
How to create a function that changes is_active=False to True?
How to create a function that changes is_active=False to True? The case is that I want to create a function that changes the value in a user instance from is_active=False to is_active=True. Ultimately, I want to create "an email verification" when someone registers. If someone registers on my website, they receive an email with the verification. I guess I have to create a function that changes "is_active=false" to "is_active=true" when someone clicks the link that calls the function? Am I on the right track? Thanks! def activateEmail(request, user, email, first_name): send_mail( #email subject f"Activate your user account, {user.first_name} !", #email content f"Hi {user.first_name}!\nPlease click on the link below to confirm your registration\n{SITE_URL}\nhttps://patronite.pl/wizard/autor/profil?step=3", #email host user EMAIL_HOST_USER, #email to [user.email], #if error True is better. fail_silently=False, )
[ "I think what you want is a function something like this:\ndef register_confirm(request, activation_key):\n    if request.user.is_authenticated():\n        return HttpResponseRedirect('/home')\n\n    user_profile = get_object_or_404(UserProfile,\n                                     activation_key=activation_key)\n\n    if user_profile.key_expires < timezone.now():\n        return render_to_response('user_profile/confirm_expired.html')\n\n    user = user_profile.user\n    user.is_active = True\n    user.save()\n    return render_to_response('user_profile/confirm.html')\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_rest_framework", "django_views", "python", "reactjs" ]
stackoverflow_0074632893_django_django_rest_framework_django_views_python_reactjs.txt
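For illustration, one common Django pattern (not part of the thread above) for the view that flips is_active when the emailed link is clicked. It assumes the email link embeds urlsafe_base64_encode(force_bytes(user.pk)) and default_token_generator.make_token(user), and that a URL route passes uidb64 and token to this view; the activate name is made up.

from django.contrib.auth import get_user_model
from django.contrib.auth.tokens import default_token_generator
from django.http import HttpResponse
from django.utils.encoding import force_str
from django.utils.http import urlsafe_base64_decode


def activate(request, uidb64, token):
    User = get_user_model()
    try:
        uid = force_str(urlsafe_base64_decode(uidb64))
        user = User.objects.get(pk=uid)
    except (TypeError, ValueError, OverflowError, User.DoesNotExist):
        user = None

    if user is not None and default_token_generator.check_token(user, token):
        user.is_active = True          # the False -> True switch the question asks about
        user.save()
        return HttpResponse("Your account is now active.")
    return HttpResponse("Activation link is invalid or expired.", status=400)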
Q: openpyxl error raise ValueError('Min value is {0}'.format(self.min)) in opening heavy file with formatting I'm trying to use openpyxl for the first time on a very heavy file, that happens to be over 20 500 Ko, has a lot of formatting and a VBA macro. My code keeps returning the following error: File " \Anaconda3\lib\site-packages\openpyxl\styles\alignment.py", line 52, in __init__ self.relativeIndent = relativeIndent File " \Anaconda3\lib\site-packages\openpyxl\descriptors\base.py", line 107, in __set__ raise ValueError('Min value is {0}'.format(self.min)) ValueError: Min value is 0 Would anyone know what the problem is / how to access the file despite it? I'm trying to post data into an existent Excel file to simplify processes and replace a heavy VBA code. So I can't just post it into a different xlsx file and call it using a VBA code (that would defeat the purpose). Thanks a lot! Here is my code : wb = load_workbook(filename='C:/dev/CodeRep/ProjectName/MainFile 2021_01.xlsm', read_only = False, keep_vba = True) A: The traceback says that there is a problem with the Alignment definition in the workbook's stylesheet. openpyxl follows the OOXML specification very closely to minimise unpleasant surprises later, this is why it tends to raise exceptions or give warnings rather than let things pass. For more details we'll need to see the XML source for the stylesheet, or the Alignments part at least. You can find this by unzipping the XLSM file and looking for the styles.xml file. That will give you more information and also allow you to submit a bug report to openpyxl. A: Preprocess the file I solved this issue by preprocessing the excel file. Found that mi problem was at "*/myfile.xlsx/xl/styles.xml" where several xf tags had an attribute indent="-1", and openpyxl only supports non-negative values, raising that exception when a negative value is found. After some time spent trying to override entire openpyxl hierarchy in order to catch the exception, I decided to process the XLSX. Here is my code: def fix_xlsx(file_name): with zipfile.ZipFile(file_name) as input_file, zipfile.ZipFile(file_name + ".out", "w") as output_file: # Iterate over files for inzipinfo in input_file.infolist(): with input_file.open(inzipinfo) as infile: if "xl/styles.xml" in inzipinfo.filename: # Read, Process & Write lines = infile.readlines() new_lines = b"\n".join([line.replace(b'indent="-1"', b'indent="0"') for line in lines]) output_file.writestr(inzipinfo.filename, new_lines) else: # Read & Write output_file.writestr(inzipinfo.filename, b"\n".join([line for line in infile.readlines()])) # Replace file os.replace(file_name + ".out", file_name) Disclaimer: I must say this is not a very elegant solution as the entire file is processed, and an auxiliary file is used. Also I am not so expert at excel to tell wheter changing that indent="-1" to indent="0" for those tags might cause format problems in the file. This is my working solution and can't really tell the effect of those tags. A: I got the same error and wasn't able to figure out the exact cause, but noticed when I ran my python script in a different environment it worked without issue. I realized it may have had something to do with the versions of the openpyxl and xlrd packages I was using so I downgraded them to openpyxl==3.0.4 and xlrd==1.2.0 (previously using openpyxl==3.0.7 and xlrd==2.0.1) and that solved my issue. A: I had the same issue — the file wasn't accepted by Openpyxl. I just opened the file in MS Excel and saved it to a new file. 
And it worked after that. A: I ran into this issue, my solution was to pinpoint what was causing the error in the spreadsheet (had something to do with a table that was recently modified) and reconstruct that table in the worksheet. much easier for me than debugging openpyxl or xml.
openpyxl error raise ValueError('Min value is {0}'.format(self.min)) in opening heavy file with formatting
I'm trying to use openpyxl for the first time on a very heavy file that happens to be over 20,500 KB, has a lot of formatting, and contains a VBA macro. My code keeps returning the following error: File " \Anaconda3\lib\site-packages\openpyxl\styles\alignment.py", line 52, in __init__ self.relativeIndent = relativeIndent File " \Anaconda3\lib\site-packages\openpyxl\descriptors\base.py", line 107, in __set__ raise ValueError('Min value is {0}'.format(self.min)) ValueError: Min value is 0 Would anyone know what the problem is / how to access the file despite it? I'm trying to post data into an existing Excel file to simplify processes and replace heavy VBA code. So I can't just post it into a different xlsx file and call it using VBA code (that would defeat the purpose). Thanks a lot! Here is my code: wb = load_workbook(filename='C:/dev/CodeRep/ProjectName/MainFile 2021_01.xlsm', read_only = False, keep_vba = True)
[ "The traceback says that there is a problem with the Alignment definition in the workbook's stylesheet. openpyxl follows the OOXML specification very closely to minimise unpleasant surprises later, this is why it tends to raise exceptions or give warnings rather than let things pass.\nFor more details we'll need to see the XML source for the stylesheet, or the Alignments part at least. You can find this by unzipping the XLSM file and looking for the styles.xml file. That will give you more information and also allow you to submit a bug report to openpyxl.\n", "Preprocess the file\nI solved this issue by preprocessing the excel file.\nFound that mi problem was at \"*/myfile.xlsx/xl/styles.xml\" where several xf tags had an attribute indent=\"-1\", and openpyxl only supports non-negative values, raising that exception when a negative value is found.\nAfter some time spent trying to override entire openpyxl hierarchy in order to catch the exception, I decided to process the XLSX.\nHere is my code:\ndef fix_xlsx(file_name):\n with zipfile.ZipFile(file_name) as input_file, zipfile.ZipFile(file_name + \".out\", \"w\") as output_file:\n # Iterate over files\n for inzipinfo in input_file.infolist():\n with input_file.open(inzipinfo) as infile:\n if \"xl/styles.xml\" in inzipinfo.filename:\n # Read, Process & Write\n lines = infile.readlines()\n new_lines = b\"\\n\".join([line.replace(b'indent=\"-1\"', b'indent=\"0\"') for line in lines])\n output_file.writestr(inzipinfo.filename, new_lines)\n else:\n # Read & Write\n output_file.writestr(inzipinfo.filename, b\"\\n\".join([line for line in infile.readlines()]))\n # Replace file\n os.replace(file_name + \".out\", file_name)\n\nDisclaimer:\nI must say this is not a very elegant solution as the entire file is processed, and an auxiliary file is used.\nAlso I am not so expert at excel to tell wheter changing that indent=\"-1\" to indent=\"0\" for those tags might cause format problems in the file. This is my working solution and can't really tell the effect of those tags.\n", "I got the same error and wasn't able to figure out the exact cause, but noticed when I ran my python script in a different environment it worked without issue.\nI realized it may have had something to do with the versions of the openpyxl and xlrd packages I was using so I downgraded them to openpyxl==3.0.4 and xlrd==1.2.0 (previously using openpyxl==3.0.7 and xlrd==2.0.1) and that solved my issue.\n", "I had the same issue — the file wasn't accepted by Openpyxl.\nI just opened the file in MS Excel and saved it to a new file. And it worked after that.\n", "I ran into this issue, my solution was to pinpoint what was causing the error in the spreadsheet (had something to do with a table that was recently modified) and reconstruct that table in the worksheet. much easier for me than debugging openpyxl or xml.\n" ]
[ 3, 1, 0, 0, 0 ]
[]
[]
[ "openpyxl", "python", "valueerror", "vba", "xlsm" ]
stackoverflow_0066499849_openpyxl_python_valueerror_vba_xlsm.txt
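For illustration, a small script (not part of the thread above) that follows the first answer's advice: open the workbook as a zip archive and look inside xl/styles.xml for style entries carrying negative values such as indent="-1", which the second answer identifies as the kind of value openpyxl rejects with the Min value error. The path is the one from the question; adjust it as needed.

import re
import zipfile

path = "C:/dev/CodeRep/ProjectName/MainFile 2021_01.xlsm"

with zipfile.ZipFile(path) as zf:
    xml = zf.read("xl/styles.xml").decode("utf-8")

# xf elements and their alignment children are where indent/relativeIndent live
for tag in re.findall(r"<(?:xf|alignment)[^>]*>", xml):
    if re.search(r'="-\d+"', tag):
        print("suspicious style entry:", tag)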